Modelling the Survival of Financial and Industrial Enterprises Advantages, Challenges and Problems with the Internal Ratings-Based (IRB) Method
Dimitris N. Chorafas
Also by Dimitris Chorafas

The Management of Philanthropy in the 21st Century
Liabilities, Liquidity and Cash Management: Balancing Financial Risk
Alternative Investments and the Management of Risk
Managing Risk in the New Economy
Managing Operational Risk: Risk Reduction Strategies for Investment Banks and Commercial Banks
Enterprise Architecture and New Generation Information Systems
Implementing and Auditing the Internal Control System
Integrating ERP, Supply Chain Management, and Smart Materials
Internet Supply Chain: Its Impact on Accounting and Logistics
Reliable Financial Reporting and Internal Control: A Global Implementation Guide
New Regulation of the Financial Industry
Managing Credit Risk: Volume 1, Analysing, Rating and Pricing the Profitability of Default
Managing Credit Risk: Volume 2, The Lessons of VAR Failures and Imprudent Exposure
Credit Derivatives and Management of Risk
Setting Limits for Market Risk
Commercial Banking Handbook
Understanding Volatility and Liquidity in Financial Markets
The Market Risk Amendment: Understanding Marking to Model and Value-at-Risk
Cost-Effective IT Solutions for Financial Services
Agent Technology Handbook
Transaction Management
Internet Financial Services: Secure Electronic Banking and Electronic Commerce?
Network Computers versus High Performance Computers
Visual Programming Technology
High Performance Networks, Mobile Computing and Personal Communications
Protocols, Servers and Projects for Multimedia Real-time Systems
The Money Magnet: Regulating International Finance, Analyzing Money Flows and Selecting a Strategy for Personal Hedging
Managing Derivatives Risk
Rocket Scientists in Banking
© Dimitris N. Chorafas 2002

All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission. No paragraph of this publication may be reproduced, copied or transmitted save with written permission or in accordance with the provisions of the Copyright, Designs and Patents Act 1988, or under the terms of any licence permitting limited copying issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London W1T 4LP. Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages.

The author has asserted his right to be identified as the author of this work in accordance with the Copyright, Designs and Patents Act 1988.

First published 2002 by PALGRAVE MACMILLAN, Houndmills, Basingstoke, Hampshire RG21 6XS and 175 Fifth Avenue, New York, N.Y. 10010. Companies and representatives throughout the world.

PALGRAVE MACMILLAN is the global academic imprint of the Palgrave Macmillan division of St. Martin’s Press, LLC and of Palgrave Macmillan Ltd. Macmillan® is a registered trademark in the United States, United Kingdom and other countries. Palgrave is a registered trademark in the European Union and other countries.

ISBN 0–333–98466–8

This book is printed on paper suitable for recycling and made from fully managed and sustained forest sources.

A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Chorafas, Dimitris N.
Modelling the survival of financial and industrial enterprises: advantages, challenges and problems with the internal ratings-based method / by Dimitris N. Chorafas.
p. cm.
Includes bibliographical references and index.
ISBN 0–333–98466–8
1. Financial institutions—Mathematical models. 2. Business enterprises—Mathematical models. I. Title.
HG173.C565 2002
338.7′01′5118—dc21
2001059114

10 9 8 7 6 5 4 3 2 1
11 10 09 08 07 06 05 04 03 02
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham and Eastbourne
Contents

List of Tables
List of Figures
Preface

Part One Understanding the Contribution of Science and of Models

1 Science and the Solution of Real-life Business Problems
1.1 Introduction
1.2 Thinking is the common ground between science and philosophy
1.3 Principles underlying scientific thought
1.4 What is meant by the scientific method?
1.5 Models and the internal rating-based solution
1.6 Natural death and oblivion of models, products, factories, companies and people

2 Is the Work of Financial Analysts Worth the Cost and the Effort?
2.1 Introduction
2.2 The role of financial analysts
2.3 Metaknowledge is a basic concept of science and technology
2.4 Metaphors, real world problems and their solution
2.5 Characteristics of an internally consistent analysis
2.6 Financial studies and the methodology of physicists and inventors
2.7 Management based on research and analysis

3 The Contribution of Modelling and Experimentation in Modern Business
3.1 Introduction
3.2 The multiple role of analysis in the financial industry
3.3 Can models help in improving business leadership?
3.4 Non-traditional financial analysis and qualitative criteria
3.5 Models become more important in conjunction to internal control
3.6 Human factors in organisation and modelling

Part Two Elements of the Internal Rating-based Method

4 Practical Applications: the Assessment of Creditworthiness
4.1 Introduction
4.2 Notions underpinning the control of credit risk
4.3 RAROC as a strategic tool
4.4 Standardised approach and IRB method of Basle II
4.5 Amount of leverage, loss threshold and counterparty risk
4.6 Risk factors help in better appreciation of exposure
4.7 Has the Westdeutsche Landesbank Girozentrale (West LB) an AA+ or a D rating?

5 Debts and the Use of Models in Evaluating Credit Risk
5.1 Introduction
5.2 Contribution of information technology (IT) to the control of credit exposure
5.3 Credit risk, rating and exposure: examples with credit derivatives
5.4 Rules by Banque de France on securitisation of corporate debt
5.5 Credit derivatives with non-performing loans: Banca di Roma and Thai Farmers’ Bank
5.6 Don’t use market risk models for credit risk

6 Models for Actuarial Science and the Cost of Money
6.1 Introduction
6.2 Basic principles underpinning actuarial science
6.3 The stochastic nature of actuarial models
6.4 Interest rates, present value and discounting
6.5 Modelling a cash flow system
6.6 Actuarial reserves and collective models

Part Three Forecasting, Reporting, Evaluating and Exercising Market Discipline

7 Scenario Analysis and the Delphi Method
7.1 Introduction
7.2 Why expert opinion is not available matter-of-course
7.3 The Delphi method helps management avoid tunnel vision
7.4 Scenarios and the pattern of expert advice
7.5 Extending the scope of analytics and the planning horizon
7.6 Making effective use of informed intuitive judgement

8 Financial Forecasting and Economic Predictions
8.1 Introduction
8.2 The art of prognostication and its pitfalls
8.3 Predictive trends, evolutionary concepts and rocket scientists
8.4 A prediction theory based on the underlying simplicity of systems
8.5 Undocumented hypotheses are in the background of many model failures
8.6 Investment horizon and the arrow of time

9 Reliable Financial Reporting and Market Discipline
9.1 Introduction
9.2 Committee of Sponsoring Organisations (COSO) of the Treadway Commission and implementation of COSO
9.3 Qualitative and quantitative disclosures by financial institutions
9.4 Proactive regulation and the use of an accounting metalanguage
9.5 Defining the territory where new regulations must apply
9.6 Measurement practices, reporting guidelines and management intent
9.7 Why fair value in financial reporting is a superior method

Part Four What to do and not to do with Models

10 The Model’s Contribution: Examples with Value at Risk and the Monte Carlo Method
10.1 Introduction
10.2 Concepts underpinning value at risk and its usage
10.3 What VAR is and what it is not
10.4 Historical correlation and simulation with VAR models
10.5 The bootstrapping method and backtesting
10.6 Levels of confidence with models and operating characteristics curves

11 Is Value at Risk an Alternative to Setting Limits?
11.1 Introduction
11.2 Establishing a policy of prudential limits
11.3 Limits, VAR and market risk
11.4 The impact of level of confidence on the usability of VAR
11.5 Can we use eigenmodels for precommitment?
11.6 Using the warning signals given by value at risk

Part Five Facing the Challenge of Model Risk

12 Errors in Prognostication
12.1 Introduction
12.2 ‘For’ and ‘against’ the use of models for forecasting
12.3 Faulty assumptions by famous people and their models
12.4 The detection of extreme events
12.5 Costly errors in option pricing and volatility smiles
12.6 Imperfections with modelling and simulation

13 Model Risk is Part of Operational Risk
13.1 Introduction
13.2 The risk you took is the risk you got
13.3 Model risk whose origin is in low technology
13.4 The downside may also be in overall operational risk
13.5 Operational risk in the evaluation of investment factors
13.6 How far can internal control reduce operational risk?
13.7 The contribution that is expected from auditing

Notes
Index
List of Tables

3.1 Banking problems studied through models and simulation since the 1960s
4.1 Increasing probabilities of average cumulative default rates over a 15-year timespan (%)
4.2 Risk factors for swaps trades worked out by the Federal Reserve and the Bank of England
6.1 The top eight telecoms debt defaults in the first half of 2001
6.2 Consolidated statements of cash flows
9.1 Three types of risk to which the board and senior management must pay attention
10.1 The time schedule of major regulatory measures by the Basle Committee on Banking Supervision

List of Figures

1.1 The top five business topics chosen by 1296 business executives
1.2 Frame of reference of the analytical study of time series
1.3 Both analytics and rapid project development use prototyping as a stepping stone
1.4 Solutions to real-world problems can be helped through simulation
2.1 More effective communication is key target of scientific disciplines
2.2 Metaknowledge exists in all processes and it contrasts to the more visible object knowledge
2.3 In a quality control chart by variables the control limits should be within the tolerances
2.4 Radar chart for off-balance sheet risk management to keep top management alert
3.1 Pareto’s Law is widely applicable to business and industry
3.2 A practical application of Pareto’s Law versus equal payoff
3.3 Problem definition is only the starting point of modelling
4.1 A simple model for evaluation of credit risk
4.2 A more complex model for evaluation of credit risk
4.3 A sequential sampling plan allows computation of interest rates commensurate to risks being assumed
4.4 Cumulative default probabilities for AAA, AA, A, BBB, BB, B and CC rated companies
5.1 The best credit risk models are those that focus at events at the tail of the distribution
5.2 A finer definition of capital at risk must be done in a 3-dimensional space
5.3 The rapid growth in derivatives versus the slow growth in assets, loans, equity and reserves
5.4 Other assets and other liabilities reported to the authorities over a 6-year timeframe by one of the credit institutions
6.1 Probability distribution of default rates in a relatively normal business environment
6.2 Yield curves for interest rate swaps in US$ and Euro
6.3 Bell-shaped normal distribution and leptokyrtotic distribution
7.1 A linear plot of answers given by experts to the first round of a Delphi procedure
7.2 The opinions of participating experts can be presented as a pattern with corresponding frequencies
7.3 A 3-dimensional frame of reference for calculating the premium rate in connection to country risk
7.4 The value derived by the degree of satisfaction of an action Ci might be represented by an ogive curve
8.1 Interest rates impact the way investors value equities
8.2 A lognormal distribution for option pricing reflecting volatility and maturity
8.3 Between fine grain and coarse grain financial data the difference is orders of magnitude
8.4 From low yield stability through chaos to higher yield stability
8.5 A procedure for online generation of hypotheses regarding intraday currency exchange rates
8.6 Intrinsic time can be shorter or much longer than clock time
9.1 The reliable reporting structure created by COSO
9.2 The viewpoint of SEC and of the Austrian National Bank
9.3 The areas covered by accounting, auditing, risk management and internal control overlap: each also has its own sphere of interest
9.4 Measuring derivatives at current value and reporting gains and losses
10.1 The money at risk increases as the level of confidence increases
10.2 According to the majority of banks, even for market risk, the concept of VAR alone is not a sufficient measure
10.3 The operating characteristics curve of a statistical distribution
10.4 The use of Monte Carlo simulation in connection with income from interest rate instruments
11.1 Establishing limits: top-down or bottom-up?
11.2 Establishing limits and testing procedures for all main types of risk
11.3 Intraday changes in equity index, therefore, in market risk, may totally upset equity limits
11.4 The correlation between marking-to-market and marking-to-model is not perfect, but it can be revealing
11.5 Characteristic behaviour of a nonlinear system
12.1 A comparison between VAR estimates and trading losses at Crédit Suisse First Boston
12.2 Schedule of research and development: real-world computing
12.3 Volatility in daily gold prices: a short-lived spike
12.4 Volatility smile and volatility valley with interest rate products
13.1 A feedback mechanism is fundamental to any process in engineering, accounting, management and internal control
13.2 The common frontier between internal control and operational risk management
Preface
Mathematical models offer opportunities for better management of our company, particularly in the domain of risk control. But they also pose questions that are of interest to us all: How should they be developed, tested and implemented? How should their deliverables be validated in terms of accuracy and usefulness? Are the results they provide useful over a longer period of time, or does their dependability change?

These queries are not of an academic nature. They are practical, and an answer to them is both important and urgent. Models are an indispensable part of modern finance, which has become so technical that, at least theoretically, only a small number of specialists are able to master its mathematics. Practically, however, basic ideas about the development and use of models can be understood by everyone. This is the purpose the present book sets itself.

This book is written for non-mathematicians: commercial bankers, treasurers, investment officers, stock brokers, fund managers, portfolio managers, financial analysts, auditors, actuaries and risk controllers. It requires no deep mathematical background because its goal is not to teach how to write algorithms but to help the reader understand and appreciate what is behind the mathematical model and its usage: What are the opportunities, challenges and pitfalls?

This understanding is at a level above the model’s mathematics, and it has its own prerequisites. Foremost is to explain models without complex equations, in a form that people without a degree in science can comprehend. This is what I have tried to do in this book. The reader must judge whether this effort of explaining without cluttering the text with detailed complex equations, algorithms and probability theory has indeed been successful.

There are two reasons for abstaining from complex formulae. One is the readership to which this text addresses itself. This involves business people in finance, banking, treasury operations and other industrial sectors. Until recently business education has not included much mathematical analysis apart from some elementary statistics. This is currently changing.
A second basic reason for keeping complex algorithms out of the confines of this text is that the intention has been to present my experience in the methodology of financial analysis rather than the details of its tools. The tools are, of course, most important, but sometimes they are like the trees hiding the forest. Vital in this text is the thesis and the antithesis concerning financial models and their usage.

***

The book is divided into five parts and thirteen chapters. Chapter 1 offers a perspective on science and philosophy, by way of discussing why philosophy and science have a common origin, what underpins science and how the scientific method can contribute to the solution of real-life business problems. Based on this background, Chapter 2 focuses on the work the analyst is expected to perform, and on whether this work is worth the cost and the effort.

The theme of Chapter 3 is what models can and cannot really contribute to modern business; a similar query is posed and answered in connection with experimentation done through models and the implementation of the scientific method. This sets the stage for the practical applications presented in Chapter 4, whose central theme is the assessment of creditworthiness: past, present and future.

The assessment of the counterparty’s creditworthiness is a challenge as old as banking. The tools, however, are evolving. The latest is the Internal Rating-Based (IRB) method of the New Capital Adequacy Framework of the Basle Committee on Banking Supervision. The position taken by the American Bankers’ Association (ABA) in regard to IRB is that:

• every bank should be encouraged to develop its own model; and
• it should definitely test to see if its model works well in real-life applications.

But ABA also thinks that smaller and medium-sized banks may find difficulties in adopting IRB, and many might choose the so-called standard method, which is less sophisticated and eventually more onerous in terms of capital requirements. For this reason the introductory text on IRB in Chapter 4 is expanded in Chapter 5. This is done in two ways: through practical examples and by means of showing the vast domain where IRB may be applicable. Credit derivatives have been chosen as a reference.
Chapter 6 introduces the reader to the basics of actuarial science. The background reason is to impress upon the reader the importance of appreciating the cost of money. I see discounted cash flow as one of the basic tools in connection with IRB. The modelling of cash flow systems is a good exercise, and is also relevant to all financial issues confronting business and industry.

Since it has been a deliberate choice to limit to a bare minimum the mathematical formulae included in this book, emphasis has been placed on scenarios. Chapter 7 looks into scenario analysis and its contribution, taking as an example practical applications with the Delphi Method. Chapter 8 elaborates on the use of scenarios in forecasting, and it explains why the use of undocumented hypotheses is behind many model failures. Chapter 9 underlines the fact that even the best model will be powerless without reliable financial reporting. If the data that we use is not reliable, it would hardly be worth our time to look at the output of the model or the scenario.

Since Chapters 4 and 5 have centred on credit risk, Chapters 10 and 11 look into models for market risk. Chapter 10 covers value-at-risk (VAR) and the Monte Carlo Method. Chapter 11 concentrates on limits and brings to the reader’s attention the fact that the substitution of limits by VAR (as some banks have recently been doing) is a very bad practice.

The subject of the last two chapters is model risk and its management. Chapter 12 explains why errors are made in prognostication, and it suggests ways and means for correcting these errors. Chapter 13 looks at model risk as being part of operational risk. It then presents to the reader a policy centred on internal control and on auditing, which can help to reduce operational risk.

A fundamental understanding of what models are and are not, as well as what they might contribute to the modern enterprise, is essential to all executives and professionals. Today computers, communications and increasingly sophisticated software are the pillars on which rests a great deal of our company’s profitability. In the years to come model-illiteracy will be synonymous with lack of professional skills, and therefore will be interpreted as a personal weakness. Yet, as this book demonstrates, it does not take much to become model-literate.

I am indebted to many knowledgeable people, and organisations, for their contribution to the research that made this book feasible – also to several senior executives and experts for constructive criticism during the preparation of the manuscript.

Let me take this opportunity to thank Stephen Rutt, Zelah Pengilley and Caitlin Cornish, for suggesting this project and seeing it all the way
to publication, and Keith Povey and Ann Marangos for the editing work. To Eva-Maria Binder goes the credit for compiling the research results, typing the text and making the camera-ready artwork and index.

Valmer and Vitznau
3 April 2002
DIMITRIS N. CHORAFAS
Part One Understanding the Contribution of Science and of Models
1 Science and the Solution of Real-life Business Problems
1.1 Introduction
Down to basics, the fundamental issue of science – and of society at large – is that of the nature of the human individual, as well as his or her perception, simplification and conception of the natural world. Science is, at the bottom line, a cross between a methodology and cognition. The principles of cognition elaborate how facts are observed, ideas are validated and theories are developed. Theories can live and reign in the universe until they are demolished by some new facts: • As long as they reign, the concepts, laws, and postulates underlying our theories govern the kind of man-made world we live in. • Scientific thought is a development of pre-scientific thought, which lacked a methodological approach for conceiving, testing and validating theories – or for their demolition. The subject of scientific thought is most pertinent because, as Figure 1.1 demonstrates, the use of technology to enhance scientific processes as well as management decisions and competitiveness is today the second most important issue, as chosen by 1296 business executives in the United States (statistics published in USA Today, 26 July 2001). This is a subject that runs second only to Internet commerce and privacy issues, but ahead of leadership, cultural changes and strategic planning. The ultimate test of soundness of any science, and of every scientific principle, is the ability of theories, rules and postulates to predict outcomes, and of procedures to lead to the investigation of observable phenomena. Without hypotheses (tentative statements), which help to explain them, such phenomena would be thought of as miracles or 3
Figure 1.1 The top five hot business topics chosen by 1296 business executives: e-business and privacy issues; technology to enhance management decisions and competitiveness; leadership; cultural changes to respond quickly to competitive challenges; strategic planning (source: statistics published in USA Today, 26 July 2001)
magic. Miracles and magic are those things we do not understand or don’t care to know their origin. Physics and chemistry are sciences whose methodology has undergone a long process of evolution. Banking is not a science, and the same is true of law. Both are based on practice and experience that crystallises into knowledge. Both, however, can use analytical tools for investigation and documentation, as well as a methodology, which can be called scientific because it has been borrowed from physics, engineering and mathematics. Analytical thinking, modelling tools and a sound methodology is what the rocket scientists brought to finance and banking.1 These are engineers, physicists and mathematicians with experience in the aerospace industry, nuclear science and weapons systems. Since the late 1980s they have become the technologists of finance, enriching with their skills processes that are as old as mankind. Science and technology should not be confused with one another: • The scientists see the laws underpinning processes and the change taking place in nature. • The technologists see structures, systems and the change in organisations.
Whether in nature or in business and industry, change is the expression of hidden, bottled-up forces. The process of analysis tries to understand these forces, predict their trend and judge their aftermath. This must be done in a way that is transparent, documented and factual: • Science is allowed to be esoteric. • Technology must always be useful. Technology can only then have a purpose, and justify its costs, when it is truly of service to the world of human experience, whose needs and driving concepts vary from time to time, from place to place, from culture to culture, and from one philosophy of life to the next. Within this context, this book examines the contribution of financial technology to the survival of the enterprise. To do so in a meaningful way, however, we must first consider the common ground of science and philosophy, which leads us to the concept of models.
1.2 Thinking is the common ground between science and philosophy Science is based on thinking and on experiments. Freedom of discussion is the lubricant necessary to the process of thinking. Where there is no freedom to elaborate, express, defend and change one’s opinion(s), there is no freedom of thought and therefore no creative thinking: • Where there is no freedom of thought, there can be no freedom of inquiry. Hence, no scientific progress. • Whether in science or in business, the very absence of argument, criticism and dissent tends to corrode decisions. The first principle of a scientific methodology is the vigilant, focused criticism and constant questioning of all tentative statements and of all basic assumptions. There is nothing more sacred in science than this principal method of construction and eventual destruction. We regard as scientific a method based on deep analysis of facts. Theories and scientific views are presupposing unprejudiced, unfearing open discussion. In science conclusions are made after debate, by overcoming dissent. This is in contrast to authoritarian centralism, where as few as possible have the authority (but not necessarily the knowledge) to decide as secretly as possible about facts, theories, policies and beliefs.
A speculative natural philosophy based on observation and reason is at the root of every science. In antiquity, the philosopher and the scientist were the same person, and this is nearly true of the philosopher and the priest. In this common origin can be found the roots of the scientific method. Both the philosopher and the scientist proceed by hypotheses that they try to verify, enriching their body of knowledge. The scientist does so by means of experiments; the philosopher through his thoughts. In ancient Greece two schools confronted one another in terms of what philosophy is or should be: • The sophists regarded philosophy as education and training on how to perceive and do things. • Socrates, and his disciples, looked at philosophy as a process of acquiring knowledge of the nature of things. To Socrates, the successful pursuit of any occupation demanded the mastery of a particular field, skill or technique. Politicians, generals, other philosophers, scientists, poets and craftsmen came under the scrutiny of his method. To his dismay, Socrates said, he discovered that, except craftsmen, none of them knew the meaning of words he used. The craftsmen have how-to-do knowledge. Whether in the arts, in science or in any other walk of life hands-on, direct experience is a great guide. However, it also has its limitations. Albert Einstein has written that ‘Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deducted from it. (Yet) experience remains the sole criterion of the utility of a mathematical construction.’2 Along with experience, thinking underpins the history of progress in philosophy, science and the arts. Thinking sees to it that a constructive, if tentative, theory can be found that covers the product or process under investigation. Yet, to some people it seems to be safer not to think at all. Others are afraid of what they might think. Things are different with creative people. A thought is like a child inside a woman’s womb. It has to be born. If it dies inside us, we die too. Precisely because they promoted the concept of the intellect and the deliverables human thought can produce, philosophers/scientists played a very important role in antiquity. Philosophers and scientists were the same person. Revolutions based on thinking and on ideas have a tremendous force. Great revolutions have arisen in two different ways:
• Either someone takes a philosophical approach to attack things that are really fundamental, and therefore makes a significant breakthrough based on profound insight; or
• Major leaps forward take place in an empirical, and sometimes experimental, manner. An example is theories developed in response to what the scientist finds by mining available data.

There can be, and often are, philosophical implications of those deliverables and of theories based on them. Scientists, and society at large, have to abandon old principles. For instance, the principle of predictability, which has underpinned classical Newtonian physics. Change in science has a lot to do with the fact that old philosophical certainties no longer seem to be so certain.

Practical thinking enriched by appropriate tools leads to the analytical method, whose elements are empirically rather than hypothetically discovered. This short sentence describes one of the basic tenets of the natural sciences that has found its way into the analytical processes underpinning the modelling of financial and industrial enterprises, their:

• products
• markets, and
• organisational behaviour.

A model is a concretisation of our thoughts through mathematical means. Both scientists and philosophers excel in making models: the scientists of the physical entities or processes they examine; the philosophers of the idealised world of their thoughts. Thinking is the common ground of any science and any philosophy, but also cornerstones to any art and science are the processes of seeing and observing. We all see the same thing, but only some of us appreciate that two apparently similar things we look at may not be the same – because we know that differences exist and they will continue to exist even if they are ignored.

Observing is fundamental to the processes of thinking and of invention. Understanding is crucial to inventing, which is a creative job that must be done with great care in all its aspects. At the heart of invention is intuition and the will to stick out one’s neck to follow up on his or her intuition, as well as the knowledge of how to use practical steps to capitalise on what this intuition represents. To depend on facts without thinking is dangerous because facts alone say nothing about new departures. Seeing, observing and thinking
prepare a person for what ought to be done. Thinking may, however, be unsettling to the ‘established order’ as it challenges old conventions by suggesting innovative vistas, new directions, inefficiencies in the current state of affairs, and a different way of doing things. That is why the freedom of thinking and of expression is so important, as underlined in the opening paragraphs of this section. It takes not only new ideas but also courage to challenge the established order.

There is evidence that many centuries before Galileo Galilei, Aristarchos of Samos (320–250 BC) was the astronomer of antiquity responsible for a highly developed heliocentric system. In his idealised concept of the cosmos, Aristarchos conceived a different order in the physical world surrounding him than that prevailing in his days – and he said so. As a result, he came very close to being charged by the ancient Greeks with irreverence to the gods because he moved the centre of the universe from the Earth to somewhere else.3
1.3 Principles underlying scientific thought
It is more or less generally believed that scientists are married to their research and that they are therefore single-minded. This is a false conception. Neither is it true that scientists work in an idealised environment, uncoupled from the world in which they live. With some exceptions, scientists live in the real world. In fact, one of the challenges in science is that it is a pretty cut-throat business where the smartest compete with one another for recognition. This is good. Competition is healthy because it increases the likelihood of getting results and of compressing the timescales to discoveries.

A form of competition that has not been frequently discussed is that of demolishing existing theories through the perception, evaluation and imposition of new principles:

• In science, and in many other fields of activity, the present is nearly past and it steadily unravels itself.
• People who are inclined to stonewall, preserve the status quo, or hide information, are unscientific.

This is even more true of technologists, including the rocket scientists who, by working in finance and banking, suddenly got in contact with a universe they had never known before. Scientists are not usually educated or conditioned to exist in high financial risk and in situations that cannot be contained in a laboratory. Yet, to work in finance:
• They have to make their rationalisation of situations that in the past defied analysis; and • They must be able to restructure their thinking as they go along. Some of the concepts the scientist brings to banking are larger than those from the physical world; others, however, are more contained. The concept of time, for instance, has no meaning before the beginning of the universe yet time continues towards the past forever, whether or not the universe had existed forever. • One thesis is that if the universe did not have a beginning there would be an infinite period of time before any event, which is considered absurd. • The antithesis is that if the universe had a beginning, there would be an infinite period of time before it, so why should the universe begin at any one particular time?4 Financial events don’t have this sort of uncertainty. Events that took place in time are finite, but they may have to be examined over relatively long periods. This short sentence encapsulates the meaning of time series. The challenge is to collect the pertinent data over many years, database them, mine them interactively, and be able to reach analytically validated conclusions. • Chances are these time series will pertain to events that fall within the frame of reference in Figure 1.2. • Analysis addresses this frame of reference and its object is foresight and insight not idle speculation. The conclusions that we reach should be made public and be discussed openly, because only through the challenge of a broader discussion can we distil events and facts into useful ideas. Attempts to conceal research results are counter-productive, and the perpetrators are guilty of scientific illiteracy. No roadblock is able to stop the progress of science, or delay it for a considerable period of time: • Who hides the results of his research hurts, first and foremost, his own name. • Secrecy is not only harmful in physics, chemistry and medicine but also in financial modelling as well.
Figure 1.2 Frame of reference of the analytical study of time series (axes: instrument or instruments; market(s) volatility and liquidity; chronological order of events)
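By way of illustration, the short sketch below assembles such a frame of reference for a single instrument: a chronologically ordered price and volume series from which rolling volatility and a crude liquidity proxy are mined. The column names, the 30-day window and the synthetic data are assumptions made for this example only; the text prescribes no particular toolset.

```python
# A minimal sketch of the frame of reference in Figure 1.2: a time series kept
# in chronological order for one instrument, from which volatility and a
# liquidity proxy are computed over a rolling window.
import numpy as np
import pandas as pd

def summarise_series(prices: pd.Series, volumes: pd.Series, window: int = 30) -> pd.DataFrame:
    """Return rolling volatility and average traded volume for one instrument."""
    returns = np.log(prices).diff()                                           # daily log returns
    frame = pd.DataFrame({
        "rolling_volatility": returns.rolling(window).std() * np.sqrt(252),   # annualised
        "avg_volume": volumes.rolling(window).mean(),                         # crude liquidity proxy
    })
    return frame.dropna()

if __name__ == "__main__":
    dates = pd.date_range("1995-01-01", periods=2500, freq="B")               # many years of data
    rng = np.random.default_rng(1)
    prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(dates)))), index=dates)
    volumes = pd.Series(rng.integers(1_000, 10_000, len(dates)), index=dates, dtype=float)
    print(summarise_series(prices, volumes).tail())
```

The point of the exercise is not the particular statistics but the discipline of databasing events in chronological order so that conclusions can be validated analytically rather than asserted.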
One of the basic tenets of scientific thought is that of sharing the outcome of research and opening it up for peer review. Real scientists do not rest on their laurels or hang on to old connections; they move forward to new accomplishments. Another basic tenet of science, evidently including rocket science, is that personal hands-on experience is supreme. Scientists who are doing something with their hands are ahead of the curve.

A hypothesis by Carl Sagan has been that the great scientific adventures, indeed the great revolution in human thought, began between 600 and 400 BC. Key to this was the role played by the hand.5 Sagan explains the reason. Some of the Ionian thinkers were the sons of sailors and weavers. As such, they were accustomed to manual work, unlike the priests and scribes, the elite of other nations, who were reluctant to ‘dirty their hands’. Not only did these Ionians with hands-on experience of life work wonders in the sense of the intellect and of discoveries but they also rejected magic and superstition. They did so because they understood
the nature of things – at least of some things. This led to pioneering feats that gave a great boost to the process we usually call civilisation, by promoting:

• exploration, and
• invention.

Morality and immorality play no role in the march of science. They are philosophical concepts that mapped themselves into ethics, but also changed their content as a function of space, time, culture, and a society’s values. Asked by a young admirer to be introduced into science, because it had just saved the city-state of Syracuse besieged by the Romans, Archimedes (285–212 BC) pointed out that science is divine, but also:

• She was divine before she helped Syracuse; and
• She is divine independently of whether she helps the state or not.

Science, John von Neumann once suggested, is probably not one iota more divine because she helped the state or society, adding that, if one subscribes to this position one should at the same time contemplate the dual proposition that:

• If science is not more divine for helping society
• Maybe she is not less divine for harming society.

Another important point to consider in connection with scientific thought and its aftermath is whether it is moral or amoral. In fact, science is basically amoral because, after discovery and the pursuit of truth, she recognises neither good nor evil; and, down to basics, she supports most equally friend or foe.

The fact that science is amoral leads us to another of the basic principles underlying scientific revolution: creativity. The concept of creativity is complex because it requires free thinking and an integrative personality. In its fundamentals, a creative process is not yet well understood though hundreds of books are written with ‘creativity’ in their titles. From what we do know, creativity requires:

• experience and imagination;
• concentration and analytical ability;
• a challenge that has to be met.
Slowly but surely, logically precise reasoning conquers a mathematical world of immense riches. Gradually, the ecstasy of analytical processes has given way to a spirit of critical self-control, which is a higher level of creativity. In its modern incarnation, this started in the late 19th century, which not only became a period of new advances but has also been characterised by a successful return to some of the classical ideal of precision and rigorous proof. The pendulum swung once more away from romanticism and toward the side of logical purity and abstraction. At present, the pendulum is moving again as the concept of applied mathematics, versus that of pure mathematics, has regained strength. Breakthroughs are attained on the basis of clearer comprehension of applications domains. In many universities today the Department of Mathematics focuses on financial analysis, providing proof that it is possible to master mathematical theory without losing sight of practical implementations of scientific thought.
1.4 What is meant by the scientific method?
Science is looking for specific facts and the laws that underpin them. We think that we have discovered a truth when we have found a thought satisfying our intellectual need, our investigative spirit, or our method of testing a new theory. If we do not feel the necessity for discovering new truths or revamping old ones, the pursuit of scientific truth will not appeal to us.

In 1926, Edwin Hubble, the astronomer, made the observation that wherever we look distant galaxies are moving rapidly away from us. In other words, the universe is expanding. He also suggested that there was a time, the big bang, when the universe was infinitely dense and infinitesimally small. Under the then prevailing conditions, all the laws of science as we know them today would break down.

Other scientists have challenged the Hubble hypotheses, which rest on tentative statements that cannot be proven. For instance, Hubble’s statement that all galaxies have roughly the same number of stars and the same brightness. Fritz Zwicky, Hubble’s Caltech colleague, disagreed with the master and gave two arguments about why dwarf galaxies should exist:

• The principle of inexhaustibility of nature is unlikely to allow for only two types of galaxy; and
• If Hubble said dwarf galaxies did not exist, then almost certainly they do exist.6
This is the method of negation and subsequent construction. Its first step is to look for hypotheses, statements, theories, or systems of thought that pretend to absolute truth – and to deny them. Negation and reconstruction is a powerful method that I have found to be very useful in my study, and most particularly in denying assumptions and statements made by financial analysts.

The concept behind these examples is that the intellect creates, but only scientific verification can confirm. This is one of the pillars of the scientific method. Even confirmation may be tentative, because our tools are most often primitive compared to the magnitude of the task in hand. Few people really appreciate that fundamentally:

• Scientific truth is what quiets an anxiety in our intellect, not an absolute truth.
• While technological breakthrough is what promotes new avenues of progress.

Both scientific and technological endeavours start with conceptual expression, but then the goals diverge. As Figure 1.3 is suggesting, in technology the next major step to a tentative design is prototyping, which is effected by means of hardware simulation (wind tunnels, miniature water dams), or models that may be analogue or digital – therefore, mathematical. Prototyping is an important stepping stone in reaching the goal we are after. When we move from science to technology, then the solution we painstakingly develop tends to incorporate our discovery into a product or process.

Figure 1.3 Both analytics and rapid project development use prototyping as a stepping stone (conceptual expression → prototyping → final goal)

Science evolves over time. Men and women seeking an unknown truth are constantly renewing, correcting or restructuring the body of knowledge we call science. Researchers are curious people par excellence. They are persons who care – and are, therefore, careful persons. In any profession, however, there exist scientists and pseudo-scientists. What true scientists do, they do with attention, precision and persistence. They neither put their work in the time closet waiting for some revelation or stroke of genius nor do they neglect the issues that preoccupy them. They drive themselves hard, with dedication, till they reach the results they are after. This requires a scientific method that helps us to:

• observe and describe;
• classify and analyse;
• relate and generalise;
• formulate comprehensive theories on general laws;
• explain facts in terms of these theories;
• predict and verify the accuracy of our prediction.
What we have succeeded in doing with analytical tools, for management purposes, in banking and finance, is to observe, describe, classify, analyse, in a very accurate manner. These are the first two points in the above list, but there is not enough advance in the other four conditions necessary for a scientific approach, in a manner that might qualify the management of any field – general administration, sales, production, treasury, or any other – as being scientific. Down to basics, the need for a scientific methodology is closely associated to that of a rational approach to real-life problems able to offer a better opportunity to achieve the best possible solutions at present state of the art. Typically, scientific approaches in management consider a company to be an integrated system working toward a common goal. Each department is part of a whole and changing it must be done in ways beneficial to the whole. This concept is largely theoretical.
Even in an integrated system practically every entity follows its own goals with stronger or looser relationships between its component parts. These relationships are largely determined by responsibility lines and by means of a flow of intelligence and of information. The different parts will be better able to work in unison if the organisation incorporates certain basic principles of: • efficiency and economy; • working procedures; and • steady sustenance. Within the context of this reference even the most carefully designed scientific processes and procedures are never foolproof, nor are they immune to decay and oblivion (see section 1.6). Also, over a more or less short period of time, the process of science may have setbacks, even if in the longer run it is irreversible. The longer-term irreversibility of the march of science is important because the cultural and technological evolution in which we currently live, and whose effects we experience, transcends the biological evolution. For good or bad, through science and technology man has taken his future into his own hands, and the scientific methodology is a cornerstone to this transition. The careful reader will also note that the modern scientific ideal distinguishes itself from the more classical one of the 18th and 19th centuries, not only through its method and contents but also by its dynamics, therefore, its pace. Contemporary science pursues an explanation of the world that modifies quite substantially itself as generations change. This happened in antiquity but at a slower pace and it repeated itself in the Age of Enlightenment. The generation of philosopher–artists, who followed the philosophers–scientists of the early 18th century, brought with it a different way of thinking than their predecessors. Philosopher– artists were not interested in the powerful principles of analysis and organisation. To the contrary, they wanted to defend nature and everything natural against machines and what was man made: • The romantic philosophers of the late 18th and 19th centuries did not care in taking the universe apart, analysing it into its smallest atoms. • What they really wanted was to contemplate, interpret, feel, and see through the world to its meaning, as if the world were a poem.
Contrary to the practical philosophers of the 17th and early 18th centuries (Descartes, Newton, Leibniz and so many others), their late 18th and 19th century colleagues developed a vertical way of looking at things, and this lasted for more than one hundred years. Finally, it was challenged by a new breed of philosopher–scientist.

It was left to the 20th century scientists to bring back to favour the analytical Socratic way of thinking promoted by Leibniz, Newton and Descartes. Investigation based on documented causal sequences again took the upper hand. During the last hundred years, philosopher–scientists tended again to expand their horizons in terms of their conception of the nature of things, sharpening their analytical tools, developing concepts that permit mapping one process into another, and searching for ways and means to link up theories into a universal system. Their models became pragmatic, and therefore practical.
1.5 Models and the internal rating-based solution
Models, the way we conceive them and use them today in business and industry, were briefly discussed in section 1.2 as the concretisation of thoughts of scientists and philosophers. Section 1.3 brought to the reader’s attention some of the principles underpinning scientific investigation and section 1.4 presented the highlights of what we call the scientific method. With this background, the notion of models can be revisited:

• Models are now used for experimentation in a more rigorous manner than ever before.
• The Internal Rating-Based (IRB) solution promoted by the New Capital Adequacy Framework of the Basle Committee on Banking Supervision is an example of the pragmatic use of models.

To offer the reader a strategic perspective on IRB (see also Chapter 4), let me start by summing up in one short paragraph what a mathematical model is and what it can do. In the broadest possible definition, a model of a system or process under investigation is essentially a simplified abstraction of reality:

• It eliminates the irrelevant and unimportant details of the real world; and
• It concentrates on the fundamental elements, variables and their relationships.
This is what modelling has contributed to science and philosophy. More recently, modelling made a similar contribution to finance and banking. But the abstraction of reality, and with it the simplification we are doing, comes at a cost. The cost is that of leaving too many factors and details out of our perception or representation of problems. The human mind finds it difficult to conceive complexity over the range in which it exists. (This is discussed further in Chapter 2.)

Many people do not appreciate that both simplification and structure are part of the scientific method. Like the structure of the cosmos and our hypotheses about the way in which it works, or the research that led to the laws of thermodynamics, a rigorous financial analysis aims to deduce the necessary conditions that separate events must satisfy to fulfil the requirements of a certain system. The advantage of this approach, and of the models through which it is served, is adaptability and clearness, rather than fine-grain logical perception.

Logical perception is, of course, very important. It is one of the pillars of the methodology of science. Many major discoveries, as well as valuable contributions to research, are made by talented scientists with an extraordinary logical perception, clear thinking, and the ability to structure their thoughts. On the other hand:

• The notion that the intellect can create meaningful postulational systems at its whim is a deceptive half-truth.
• Only under the discipline of responsibility to the organic whole, guided by intrinsic necessity, can the mind achieve results of scientific value.

In a fairly similar manner to what has been said in section 1.4 about prototyping, the model is a stepping stone towards the solution we are after. In fact, the model is the prototype. Figure 1.4 shows in a nutshell the role of modelling and experimentation in connection with the evaluation of exposure. The goal may be the measurement of credit risk or market risk.

Another issue that matters a great deal in science is verifiable facts. Hypotheses are not tested through ideas. There must be observations. Underpinning the process of verifiable observations is structure and relationship. Relationship is, for example, represented by the fact that two points determine a line, or numbers combine according to certain rules to form other numbers.
Figure 1.4 Solutions to real-world problems can be helped through simulation (credit risk or market risk → idealise and simplify → model → experiment and simulate by computer → decide on exposure)
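As a concrete, if simplified, illustration of the loop in Figure 1.4, the fragment below idealises a small loan book (constant default probabilities and a fixed loss rate), builds a model of it, experiments by computer simulation and reads off a high quantile of the loss distribution on which an exposure decision could be based. The portfolio, the probabilities and the 99 per cent level are illustrative assumptions, not figures taken from the text.

```python
# A minimal sketch of "idealise and simplify, model, simulate, decide on exposure".
import numpy as np

def simulated_loss_quantile(exposures, default_probs, loss_given_default=0.45,
                            n_trials=100_000, quantile=0.99, seed=7):
    """Monte Carlo estimate of a high quantile of the credit loss distribution."""
    rng = np.random.default_rng(seed)
    exposures = np.asarray(exposures, dtype=float)
    default_probs = np.asarray(default_probs, dtype=float)
    # One row per trial: a Bernoulli draw per obligor, assumed independent here.
    defaults = rng.random((n_trials, exposures.size)) < default_probs
    losses = (defaults * exposures * loss_given_default).sum(axis=1)
    return np.quantile(losses, quantile)

if __name__ == "__main__":
    exposures = [5_000_000, 2_000_000, 1_000_000, 500_000]   # per counterparty
    default_probs = [0.002, 0.010, 0.030, 0.080]             # weaker credits, higher probabilities
    print(f"99th percentile credit loss: {simulated_loss_quantile(exposures, default_probs):,.0f}")
```

The figure that comes out is only as good as the simplifications that went in, which is precisely the point the chapter makes about the cost of abstraction.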
For all practical purposes, structure and relationship is the principle underpinning modern accounting, which is in itself a mathematical model of accounts. Like Euclid did with geometry, Luca Paciolo – a mathematician and Franciscan monk – collected all practical evidence on how people and companies kept accounts and in 1494 published his seminal work ‘Summa Mattematica’.7 Paciolo’s most important contribution is a clear insight into the necessity of a substantiation of financial accounts through elementary mathematical concepts: • This process has been one of the most important and fruitful results of postulational development in science. • Today, the same process proves to be a great help in the design and use of financial models. In these few paragraphs can be found another common ground between philosophy and science. This is the world view developed in the 17th century by philosopher–scientists like Descartes, Newton and Leibniz – a view conditioned by the idea of a clockwork mechanism that challenged
the then established order and led to the Enlightenment. The philosopher– scientists conception of nature, and of the universe, contained the elements that we have come to associate with scientific thought and mathematical models, namely: • thinking by means of representation and manipulation of symbols; • formalising thoughts and expressions in a way permitting induction and deduction; • using mathematics as an exact language for such representation; and • employing the concept of a methodology enriched by appropriate tools. The concept the four points above have in common is that of methods and models that make feasible the repetition of experiences under normalised conditions, that is, the performance of meaningful comparisons and of experiments. The concepts, the processes and the results obtained are modelled in a way a group of people with similar interests and comparable background can understand. That is why the help provided by mathematics is invaluable. For instance, in credit-risk analysis we need common measurements to express default probability. The raw material is financial information that, however, has to be subject to rigorous treatment which poses the challenge of modelling default likelihood in a dependable manner. Not all credit institutions have the skill to do so. But raw financial information on its own cannot be the complete answer. For this reason, in 2001 the Basle Committee on Banking Supervision has offered two alternative methodologies: • the standard approach; and • internal rating-based solution. As we will see in more detail in Part Three, the standard approach is primarily intended for small-to-medium banks that are not so sophisticated. For bigger and well-managed institutions the standard approach does not make much sense because it essentially means they would have to reserve the same 8 per cent in capital requirements for a loan to General Electric and for a loan to a local restaurant that might fail tomorrow. A different way of making this statement is that while banks are not unfamiliar with credit risk, the methodology currently being used for its management leaves much to be desired. The standard method is designed
to serve institutions that don’t have credit risk steadily tracked on one of the radar screens they are using. Therefore, they are not so well placed in connection with the control of their exposure to counterparties. IRB is a solution which allows a bank to differentiate in regard to the creditworthiness of different customers. Among the bigger banks, many have in place an internal rating-based system that tracks their major customers entity-by-entity and also allows for volatility in credit risk. Others use the rating scale employed by independent rating agencies, which can be integrated into their eigenmodel; therefore, they are in a good position to employ IRB. Some credit institutions are now moving to align their ratings with the 20 positions between AAA and D. We will return to this issue in Chapter 4.
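To make the difference concrete, the following is a minimal sketch in Python. The risk weights, ratings and loan amounts are purely hypothetical assumptions chosen for illustration – they are not the Basle Committee’s figures – but the sketch shows why a flat 8 per cent charge treats a blue-chip borrower and a shaky local business identically, while a rating-differentiated charge does not.

```python
# Hypothetical illustration: flat 8 per cent capital charge versus a
# charge differentiated by the obligor's internal rating.
CAPITAL_RATIO = 0.08  # 8 per cent of risk-weighted assets

# Hypothetical risk weights per rating bucket (NOT regulatory figures)
RISK_WEIGHT = {"AAA": 0.20, "AA": 0.20, "A": 0.50, "BBB": 1.00,
               "BB": 1.50, "B": 1.50, "CCC": 1.50, "D": 1.50}

def standard_capital(exposure: float) -> float:
    """Flat charge: every corporate loan carries a 100 per cent weight."""
    return exposure * 1.00 * CAPITAL_RATIO

def rating_based_capital(exposure: float, rating: str) -> float:
    """Charge differentiated by the creditworthiness of the customer."""
    return exposure * RISK_WEIGHT[rating] * CAPITAL_RATIO

if __name__ == "__main__":
    loan = 10_000_000
    for obligor, rating in [("blue-chip borrower", "AA"),
                            ("local restaurant", "B")]:
        print(f"{obligor:20s} standard: {standard_capital(loan):12,.0f}   "
              f"rating-based: {rating_based_capital(loan, rating):12,.0f}")
```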
1.6 Natural death and oblivion of models, products, factories, companies and people

There were in past centuries thinkers who approached philosophy and science precisely in the way the most dynamic members of the scientific community look at them today – but they were a rare species. Apart from the examples I have mentioned in the preceding sections, in the 17th century the philosopher Baruch Spinoza (1632–77) saw in nature the reason of its own existence (causa sui), a process interactive with itself whose presence did not require external causes.
Spinoza’s seminal work, and the school of thought that followed it, looks at a system and its components as having an independent, justified reason of being. This concept is very important to modern science – and by extension to financial analysis, which employs scientific methods. Classical physics did not exploit the underlying idea of causa sui, but it provided some limited metaphysical concepts that could help explain existing relationships between:
• matter, space and time;
• rules and guidelines of behaviour; and
• the broader concept of life and death.
‘Death has no meaning for us’, Epicurus (341–270 BC) suggested, ‘for as long as we exist death does not exist, and when death exists we do not.’ Epicurus taught that pleasure in this world is the highest good – but honesty, prudence and justice are the necessary means for achieving accomplishment in one’s life.
The philosophy of Epicurus and the sophists is important to modern science because it is better suited to our epoch than 19th-century classical philosophy. More than 2,500 years ago, the sophists dealt with the solitude of man lost in the immenseness of space and time. Solitude is what provokes glacial terror in the large majority of people; by contrast, it is the refuge that ancient philosophers and modern scientists have found. The average person fears finding himself or herself beyond the limited reach of his or her sensations; leaders do not have such fears. Nature isolates man from the infinite world, and this isolation creates a feeling of fragility, because of the exposure to the whims of both nature and society.
It is in the senses that the good and the bad are found, the founder of the Epicurean philosophy said, and death is the absence of the senses. Epicurus advised his contemporaries to accustom themselves to the idea that death means nothing to them because:
• everything that is good and bad lies in sensation; and
• death is the cessation of perception.
In the 20th century, Ludwig Wittgenstein, the logician, added to this appreciation of life and death his own thinking: ‘Death is not an event in life as we do not live to experience death. If we take eternity to mean not infinite temporal duration but timelessness, then eternal life belongs to those who live in the present.’
Man-made inanimate materials, too, obey Wittgenstein’s dictum: products, processes, factories, enterprises – and, of course, models. Diamonds may be ‘forever’, but models are not. They become obsolete, bypassed by events, or incapable of serving the purpose for which they were made. Ironically, sometimes this happens in their time of glory. In the late 1980s, Olivetti wrote an expert system to optimise production planning. Its usage helped to identify a number of shortcomings and faults on the production floor. After these were corrected, the model could no longer serve, because the control conditions had changed.
The fact that knowledge artefacts have taken on, in effect, a life of their own poses a number of philosophical questions:
• Is there intelligence without life?
• Is there knowledge without interaction?
• Is there thought without experience?
• Is there language without living?
• Is there mind without communication?
If intelligence in a shell of bone dies, there is no reason why intelligence in a shell of plastic or steel casting would have a different fate. The doors of life and death are adjacent and almost identical. This is valid not only for what we consider to be living organisms but also for physical entities and for man-made ones: companies, branches, processes and financial instruments, and their models.
‘Clothes and automobiles change every year’, said Paul M. Mazur of Lehman Brothers. ‘But because the currency remains the same in appearance, though its value steadily declines, most people believe that finance does not change. Actually, debt financing changes like everything else. We have to find new models in financing, just as in clothes and automobiles, if we want to stay on top. We must remain inventive architects of the money business.’8
Financial products, branch offices, factories and companies die. They don’t die a ‘natural death’ like plants and animals, but go out of fashion, wear out, become dislocated or obsolete, become detached from reality, or are replaced by more efficient entities. Companies are eaten up by competitors, or are simply decoupled from the market. Therefore, modelling the survival of financial and industrial enterprises does not mean making them live forever. What it means is:
• analysing their behaviour to manage them better while they are active;
• making them more cost/effective and, therefore, more competitive;
• following up on the risk and return they assume, hence extending their lease of life.
This all amounts to better management but, as this chapter has explained, management must be assisted through modern tools, and this is done by means of models. This section has added to the foregoing concept the need to keep in mind that models and other artefacts are not eternal. Eventually they outlive their usefulness, or they are substituted by other models that are more competitive.
In a similar manner and for the same reasons, the model of the world can change fairly rapidly as new ideas emerge and with them better solutions to meaningful and significant problems. Sometimes these problems develop challenging aspects, particularly so as the scientific work gains momentum and the deliverables of technology accelerate. In conclusion:
• knowledge is no finite substance to be diminished by division; but
• neither does knowledge multiply through arithmetic manipulation.
Science and the knowledge that comes with it uses mathematical models for representation and for experimental reasons, but we should not forget the old proverb: ‘Worse than ignorance is the illusion of knowledge’. There is plenty of pseudoscience and pseudoknowledge so we must be careful about the soundness of what we accept, the uses to which we put the artefacts and the quality of the deliverables we receive.
2 Is the Work of Financial Analysts Worth the Cost and the Effort?
2.1
Introduction
As Francis Bacon put it: ‘If a man will begin with certainties, he shall end in doubts. But if he will be content to begin in doubts, he will end in certainties.’ No better dictum describes the work of financial analysis and the usefulness of this work. The contribution of the efforts by financial analysts is documented by their ability to go down to basics in examining which factors influence, and by how much, observable economic and financial phenomena, as well as:
• where alternative paths in investments might lead us if we pursue them; and
• what the prevailing risks are, and how well we are prepared to confront them.
The above two points encapsulate the concepts underpinning an investigative spirit and the reason why we invest skill, time and money in financial analytics. The effort that we put in, as Bacon suggested, produces a certain level of certainty. The result we expect to get is better vision and, at times, a roadmap to formerly uncharted waters.
The emphasis is on deliverables. As Chapter 1 demonstrated, the technology that we use for financial analysis changes over time, and so do our methods and our tools. But the goals of analytical processes evolve much more slowly, because there is plenty of unexplored territory and we are often unaware that there are pitfalls on the way that have to be taken care of before the next big step forward.
It is the job of the analyst to provide the necessary evidence that will permit focused management decisions about the wisdom of ‘this’ or
‘that’ investment under scrutiny – from loans to equities, bonds and derivative instruments. The principle is that an analyst tries to understand the risk and return involved in a given investment, as well as the related effects of what he or she is going to suggest before doing so. This job requires informing others – the managers and professionals who are the decision-makers – on a factual and documented basis.
Analysts who are worth their salt are always dubious about statements that the market has reached a bottom, that things will take care of themselves, that a downturn will turn around of its own accord, that an upturn will last forever, or that somewhat better management will solve the problem. Serious analysts don’t look only at numbers; they also look at faces. They sit up, look directly in the eyes of the person they are interviewing, use soft language, but are absolutely clear about what they are after.
What an analyst is after varies from one case to the next. The goal of the study may be evaluation of investment(s) in equities or fixed interest instruments; analysis of credit risk for loans purposes; the tuning of a credit scoring system in conjunction with eigenmodels; analysis of discounted cash flow; medium- and short-range forecasting of economic indicators; or other analytics such as option adjusted spread.
In the broad sense of the word, a financial analysis should encompass planning for the future. With the present fast pace of technological development, future systems research is an important element in the direction of industrial and business activities – therefore, of the investments associated with them. Our study may range from a critical view of future programmes to the design of control tools able to ensure that the plans are carried out as intended, and that the proper corrective action is being taken.
Whether connected to the observance of limits or to risk management, control action usually involves an advanced analysis of crucial financial factors, and of products, with respect to their market impact and the exposure they may create within the business environment of the firm. The control of exposure is made more efficient through a focused systems analysis that does much to identify risks resulting from trading and lending functions as well as from other operations, as Section 2.2 suggests.
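Among the goals just listed, the analysis of discounted cash flow lends itself to a compact illustration. Below is a minimal sketch of a present-value computation; the cash flows and the 7 per cent discount rate are assumptions made up for the example.

```python
def present_value(cash_flows, discount_rate):
    """Discount a series of year-end cash flows back to today.

    cash_flows[t] is the cash flow at the end of year t+1;
    discount_rate is the annual rate used for discounting.
    """
    return sum(cf / (1.0 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

# An instrument paying 1,000 at the end of each of five years, discounted at 7%
print(round(present_value([1000] * 5, 0.07), 2))   # about 4100.20
```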
2.2
The role of financial analysts
One of the most important goals of financial analysis is to create an understanding of the sources of value in investments, and integrate this understanding into the process and principles of assets management.
Armed with this knowledge, fund managers should aim at translating this evaluation into specific action. Down to their fundamentals:
• A value-based or growth-based analysis and the associated performance measures target the ability to communicate the extent of projected business opportunity.
• The preferred timeframe is the current and ‘next’ reporting periods, as well as projected future growth prospects resulting from strategic and tactical positioning.
Within the context defined by the points above, the role of financial analysts is to contribute insight and foresight regarding events taking place in the market. To produce this kind of deliverable, analysts must use their background and experience in an ingenious way, including the methodologies and tools we have been discussing in Chapter 1. Mathematical models and computers are the supports, not the goals.
Some experts think that, in terms of basic definitions, the mission I have been briefly describing is not that different from the one characterising system engineering and its functions, even if the latter has to do with machine aggregates, such as computers and communications, rather than with finance. Besides, rocket scientists have been system engineers, and the analytics done in banking are sometimes called financial engineering. Let’s therefore take a quick look at what is done in a system engineering setting.
Webster defines engineering as the art and science by which the properties of matter and the sources of power in nature are made useful to man in structures, machines and manufactured products. A system is an assemblage of objects united by some form of regular interaction and interdependence, which is critical in order to reach prescribed objectives. In these terms, systems engineering is the art and science by which nature, men and machines are arranged to form an aggregate whose members are characterised by interaction and interdependence; or to create products from nature’s materials and energy sources including, as well, man-made components. In other words, systems engineering is concerned with:
• the design or analysis of productive man–machine systems and their environment(s); and
• the analysis of the possible contribution of other products and systems that may serve in the conquest of the frontiers of knowledge.
Studies in banking and finance fall under the second point above. A common ground between financial analysis and system engineering is that they require judgement, focus on strategic objectives, incorporate tactical moves, involve both analysis and synthesis, and pose as a prerequisite the comprehension of complex systems or situations. Financial analysis is also concerned with issues providing quantitative and qualitative findings for management decisions.
Given the characteristics of their work, financial analysts should have a sound background, an open mind, analytical ability, multi-versed knowledge and an inquiring spirit able to challenge the obvious. Such people usually have a varied and diversified knowledge, where 20 years of experience means 20 or more situations different from one another rather than one year of experience repeated 20 times.
Besides being able to acquire a growing store of multivaried experience, the financial analyst must be a realist and an idealist at the same time. As Talleyrand once said: ‘An idealist cannot last long unless he is a realist, and a realist cannot support the stress unless he is an idealist.’ Furthermore, people working as financial analysts, system engineers or science researchers must have patience, exercise prudence and show good judgement. Like the ancient Greek philosopher who asked the gods for three gifts, they must have:
• the strength to change the things they can;
• the patience to accept those things they cannot change; and
• the wisdom to know the difference.
To change things, rocket scientists must be thinkers and have a method. They should develop and use powerful metaphors (see Section 2.4) describing their thoughts and findings, employ pictorial descriptions and be able to document their findings. The idea of a road to be travelled through one’s study culminates in the method and the models that it uses. The best of analysts appreciate they cannot make even the simplest model unless:
• they previously have some idea of a theory about it; and
• they set a course to discover what they are aiming for or, at least, map its elemental ideas.
The answer to the requirements posed by these two points is not that simple. That is why Chapter 1 paid so much attention to philosophy and science, and also brought technology into the discussion. While the tools and methods of technology are some of the basic issues financial
analysts and system engineers share, one of the differences between financial analysis and system engineering is that the former is often vertically focused, while the latter follows a horizontal pattern dealing with all parts of the job. The career of Alfred Sloan, who was an engineer by training, illustrates how vital it is for an executive to be able to deal with all parts of his responsibilities, even those that didn’t play to his original strengths. As chairman of General Motors, Sloan enabled his company to become the world’s leader in motor vehicles because:
• he mastered R&D and manufacturing; and
• he made both marketing and management more of a conceptual job than they used to be.
Henry Ford was inventive, but his inability to stretch himself in a horizontal way made him vulnerable to Sloan’s comprehensive approach to the motor vehicle business. A similar paradigm is valid in finance. The strength of great bankers is conceptual and directional. The rocket scientists they employ should fill the gaps in analytics.
Analytical and conceptual strengths work in unison. They are promoted both through training in creative thinking and by means of technological supports, which enhance investigation and advance the art of communication. As Figure 2.1 suggests, the intersection of analysis, engineering and computer technology provides a novel formal medium for expressing ideas, and for establishing an investigative methodology. The pattern in Figure 2.1 empowers communication, enabling analytical people to rise to increasingly higher levels of intellectual ability. It also produces linguistic changes that stimulate thinking.
We have known since antiquity that the language men speak stimulates their mind. Some experts believe that the emergence of computers and their programming languages led to changes in communication that are at least as pivotal as those that over the ages promoted philosophic thinking. At the same time, high-speed computation has other aftermaths:
• it requires us to be precise about the representation of notions as computational objects; and
• it permits us to represent explicitly algorithms and heuristics for manipulating the objects with which we work.
The ideas behind both points are served through models. In approaching financial problems, in one form or another, the analyst
Figure 2.1  More effective communication is a key target of scientific disciplines (the intersection of analysis, engineering and computer technology yields a methodology for the communication of ideas and results)
should appreciate that the issues he or she is confronted with are very seldom presented in neat form, with the relevant alternatives identified and the procedures listed. Under these conditions, key creative parts of the analytic study are: • the perception of the problem itself; • the development of the relevant alternatives; and • the choice among different scenarios. Perception is enhanced by talking to cognisant people (see Chapter 7 on the Delphi method). For this reason, many parties should be invited to brainstorm on alternatives. Therefore, among the essential qualities for an analyst are the ability to get along with people and gain their confidence; investigate the facts; use imagination and creativity; have enough vision to devise improved solutions; and possess the ability to communicate ideas and ‘sell’ them to others. Another ‘must’ is an understanding of where and when to stop refining ideas and start putting them into effect.
2.3 Metaknowledge is a basic concept of science and technology

‘Two types of truth are counterpoised’, says José Ortega y Gasset, ‘the scientific and the philosophic.’1
Figure 2.2  Metaknowledge exists in all processes and it contrasts to the more visible object knowledge (metaknowledge: concept, definition, constraints; object knowledge: rules, tools, data)
The former is exact but incomplete; the latter is sufficient but inexact. The philosophic truth, according to Ortega y Gasset, is more basic than the scientific – and therefore it is a truth of higher rank. Two reasons can be found behind this statement:
• the philosophic concept is much broader; and
• its type of knowledge is more fundamental.
By contrast, the scientific truth is better structured. The notions commanding the laws and postulates of physics, which go over and beyond physical facts, are metaphysics. Correspondingly, the concepts of finance over and beyond financial instruments and markets are metafinance, or finance outside itself. In both cases, the metalevel is beyond the level we usually perceive through the naked eye. Something of the notion of a higher-up level underpinning financial issues is presented in Figure 2.2.
Metaknowledge differs from object knowledge because it describes the prerequisites of planning and control over object knowledge. The value of an object is determined according to rules embedded at the metalevel, and by the results of evaluating completed actions. Rules in metaknowledge are based on aims and intentions; constraints are also an important part. To appreciate what there is in a metalevel:
• we must go down to the depths of its meaning; and
• we must test its boundaries, which are probably established by another, higher-up level.
In a way, at the metalevel the solution is anterior to a problem, and so is the knowledge. In physics, for example, a given problem is confined
within the boundaries set by its metalevel. By setting boundaries and constraints we dimension our problem and create material that accepts methodological treatment and can be submitted to measurement.
The concept of a metalevel did not necessarily exist in the early history of scientific thinking. It developed little by little as theoretical work followed practical work, and vice versa. Other things, too, were absent from scientific endeavours in antiquity. For about 1,500 years the weight of the Greek geometrical tradition delayed the inevitable evolution of:
• the number concept; and
• algebraic manipulation.
Yet, today both form the basis of modern scientific and technological investigation. After a period of slow preparation, the revolution in mathematics began its vigorous phase in the 17th century, leading to differential and integral calculus. The most important aftermath of this change is that while Greek geometry retained an important place, the ancient Greek ideal of axiomatic crystallisation and systematic deduction nearly disappeared. Investigation, experimentation and metaknowledge played a critical role in shaping modern scientific thought.
Each of the inanimate phenomena examined by physics is, to some extent, unique. This element of uniqueness is still more significant in living beings, and it can lead to great complexity. Therefore, it is no mere accident that scientific thought has been more successful when applied to inanimate phenomena encountered in physics and in inorganic chemistry than to social science or finance. Some experts think that, at least as far as finance is concerned, this may be changing as new research themes evolve in the financial industry and address products, processes and markets.
Non-traditional financial research, sought after by the clear-eyed management of banks and brokers, includes non-linearities encountered in studies in economics and uses tools that until two or three decades ago were exclusively applied to physics. Examples include:
• fractals theory by Dr Benoit Mandelbrot;
• chaos theory by Dr Edward Lorenz and Dr Mitchell J. Feigenbaum;
• the Butterfly effect, focusing on aperiodicity and unpredictability (see the sketch after this list); and
• chance, uncertainty and blind fortune by Dr David Ruelle.
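The Butterfly effect just mentioned can be illustrated in a few lines of code. The sketch below uses the logistic map, a standard toy model of chaotic behaviour; the starting values and the parameter are arbitrary assumptions. Two trajectories that begin one part in a million apart soon bear no resemblance to one another – the practical lesson for financial modelling being that sensitive dependence on initial conditions puts hard limits on prediction.

```python
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 30) -> list:
    """Iterate the logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points differing by one part in a million...
a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)

# ...drift apart until the trajectories no longer resemble each other.
for t in (0, 10, 20, 30):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}  (gap {abs(a[t] - b[t]):.6f})")
```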
Rocket scientists rely on their skills as number crunchers and logicians. Those working for financial institutions develop, for example, instrumental forex schemes to benefit from currency differences in financial markets; map out computerised trading strategies that fuel wild gyrations in stock prices; devise complex hedging formulas for pension funds and other institutional investors; and invent new generations of securities, backed by almost any kind of debt. This is no longer philosophy or pure science, primarily concerned with theoretical knowledge, but applied science and technology.
Physicists turned financial engineers capitalise on an existing body of knowledge and try to enrich it through analytics. They make observations, develop hypotheses and test them, but this in no way means that they have reached in finance the same level of maturity as in physics. Rather, they are still at a stage of alchemy.
In conclusion, abstracting, collecting and comparing are activities characterising both philosophy and science, and they are nourished on the doubt that is most relevant in the scientific world. The intellectual rigour of financial technology is also measured by the amount of scepticism and doubt its workers are capable of raising without losing their investigative spirit or their faith in the process of analysis. That is why they establish boundaries and construct metalevels, which help guide their hand in investigations.
2.4
Metaphors, real world problems and their solution
We have spoken of metalevels and metaknowledge, which help in focusing our thoughts and directing our analytical effort, but which also impose boundaries and constraints on the problems that we are handling, and on their solutions. But we have not yet examined what exactly a problem is and why its solution (if it has one) is so important to us:
• A problem is, by definition, a matter involving uncertainty and requiring solution.
• For this solution, we need a line of conduct we propose to adopt and follow.
The model that we construct of the real world maps the problem, but it is not the solution itself; it may be, however, a way to reach it. This model is a metaphor allowing us to map a real-life situation into a computer program. Metaphors are the means we use for describing at
higher levels of reference what the computer should be doing, as contrasted to what people do. As such, metaphors help in alluding to action, like:
• making a given message explicit;
• experimenting on selected variables; and
• providing an interface between a problem in real life and a computer program.
The term programme, at large, means a line of conduct one proposes to follow. A computer program is the more or less exact formulation of the structure of a computational procedure that, for processing purposes, describes the model to the computer. Therefore, it is an algorithm that explains step-by-step to the machine the line of action permitting it to reach predetermined results from variable input data.
Models work through analogical thinking. As we have seen in Figure 1.4, we analyse a real-life situation (such as counterparty risk) through simplification, describe it in mathematical terms, process the model and use the result to understand better the behaviour of the real system, which may be a product, a market, a counterparty or something else under investigation.
Analogical thinking has its roots in the physical sciences, and it is a process increasingly applicable in finance. When complexity makes it difficult to understand the whole system, we proceed component part by component part, eventually trying to put the whole together again. We may also need to simplify some components, or think about them and their behaviour by means of working analogies. In other terms, when we cannot proceed directly from the real-world problem to its solution, or this road proves to be too complex:
• We take an alternative approach through modelling, starting with idealisation and abstraction.
• The next step is algorithmic and logical expression, with the model mapped into computer memory.
• Then we use the power of the computer for simulation, experimentation, inference, or to obtain other results.
The insight and foresight that we gain could lead us to the solution we are after, but we know in advance there will be margins of error or, more precisely, tolerances. What Dr Werner Heisenberg said about physics and physicists applies hand-in-glove to banking and finance:
• We can predict nothing with zero tolerance.
• We always have a confidence limit, and with it a broader or narrower band of tolerance.

Figure 2.3  In a quality control chart by variables the control limits should be within the tolerances (chart elements: upper and lower tolerance limits, upper and lower control limits, the mean of the means, and the recorded means of control samples)

The schema shown in Figure 2.3 brings into perspective tolerances and control limits. This dynamic expression of the behaviour of a process, mapped in a statistical quality control chart, is just as valid for engineering specifications as it is for financial investments, loans, derivative financial instruments or other business. Financial models working by analogy, with tolerance and control limits associated with them, are integral parts of what has become known as interactive computational finance – today an essential element on Wall Street, in the City, and in the other major financial centres.
One of the pillars of computational finance is the experimental method, borrowed from physics, chemistry and psychology, which opens a vast perspective towards a more rational analysis of financial situations and problems. The conceptualisation of a situation analogous to a real-life problem, expressed mathematically, and the subsequent experimentation by means of computer-processed models, result in deeper knowledge readily available for decision-making. So to speak, this is an industrialisation of know-how which transforms and at the same time enriches the work of professionals:
• It produces a factual and documented analysis or diagnosis.
• It does so even when the opinions on which the experimental results rest are incongruent.
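Referring back to Figure 2.3, the distinction between control limits (derived from the behaviour of the process itself) and tolerances (imposed from outside by a specification or a risk limit) can be sketched in a few lines. This is a simplified illustration under assumed figures – the ±3-sigma rule applied directly to the recorded sample means, and made-up tolerance bounds – not a full statistical quality control procedure.

```python
from statistics import mean, stdev

def control_limits(sample_means, sigma_factor=3.0):
    """Control limits for a chart of sample means: grand mean +/- k sigma."""
    grand_mean = mean(sample_means)
    spread = stdev(sample_means)
    return grand_mean - sigma_factor * spread, grand_mean + sigma_factor * spread

# Recorded means of control samples, e.g. daily averages of a pricing error
sample_means = [1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00, 1.02]
lcl, ucl = control_limits(sample_means)

# Tolerances come from the specification or the risk limit, not from the data
lower_tol, upper_tol = 0.85, 1.15
inside = lower_tol < lcl and ucl < upper_tol
print(f"control limits ({lcl:.3f}, {ucl:.3f}) within tolerances "
      f"({lower_tol}, {upper_tol}): {inside}")
```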
The scientific method behind this process permits the development of alternative hypotheses and assumptions, as well as their weighting and testing. The methodology to which I make reference incorporates both the objective information and the subjective judgement(s) that may exist. The latter are treated through fuzzy engineering.2
In the background of the approach I am explaining is the working analogue of a system – for instance, credit scoring or the forex market. When analogous systems are found to exist, then the experimentation that we carry out on the microcosm of a simulated environment provides significant clues in regard to the behaviour of the macrocosm – the real world that we try to comprehend.
But there are constraints. Although analysts deal with scientific, technical and financial facts, many times they find it difficult to obtain quantitative data. This constrains them, because the principal work of the analyst is not intuition:
• the analyst’s opinion must be rational and documented; and
• its validity is proportional to the competence of the person, the accuracy of the model, and data quality.
A reliable metaphor reflects the methodology we use. Allowances should be made for uncertainties; otherwise the outcome might be a misleading rationalisation of a prejudiced position. The findings of the study should never be hidden by a mass of charts, incomprehensible calculations and abstract technical terms. They should be transparent and clearly presented, so that the ultimate user(s) will be able to reach their own conclusions.
The use of sound judgement throughout an analytical study cannot be overemphasised. When making a choice we try to balance the objective(s) against the cost of its (their) attainment. In doing so, crucial questions always arise. For example: What are the relevant alternatives? What are the criteria for choice? How do we go about the process of weighting objectives versus costs in selecting among alternatives?
Much of the work in any analysis, as well as in modelling, involves the choice of pertinent factors, their integration into the model under development and, eventually, the determination of a quantitative result. What makes this work challenging is that there are no ‘cookbook’ formulas to follow when conducting an analytical study. A sound methodology should be flexible and present the research worker with options. Analysts who are worth their salt will employ whatever scientific and mathematical tools are appropriate to the case. Even though human judgement plays a most vital role in each step of the analytical process, the analyst should introduce objectivity into an otherwise subjective approach.
It is also necessary to account for the environment within which our organisation lives and operates. Its crucial factors may vary for different organisations and at different times, but at every time and for every entity the actual environment is important. Finally, it is worth remembering that one of the major contributions of metaphors, and of mathematical model-making at large, is that of clarifying the variables and of establishing their inter-relationships and interactions. Even if a quantitative study does not result in a specific construct that we can use for experimental purposes, it may have contributed greatly in establishing the nature of the problem and in bringing to light factors which otherwise might have been misunderstood, hidden or ignored.
2.5
Characteristics of an internally consistent analysis
Having a carefully worked-out, internally consistent analysis is no guarantee of being always right in terms of investment decisions. It is entirely possible that the market will turn out otherwise than the way we have prognosticated. But not having a methodology that ensures the financial analysis is consistent and of high quality significantly increases the probability of being wrong.
As Section 2.2 has documented, much can be learned in terms of an internally consistent methodology from the discipline of system analysis as practised in engineering. An internally consistent analysis decomposes a problem (or a system) into its key component parts. Quite often, this is the best way to approach a problem because:
• decomposition helps in solving it in discrete pieces; and
• reintegration puts the pieces back together again into a whole.
In every case, it is important to keep track of what stage has been reached in studying each of the components. Take aggregate derivatives exposure as an example. In Figure 2.4, the component parts consist of counterparty-to-counterparty evaluation of exposure, on-balance sheet and off-balance sheet (OBS); instrument-by-instrument analysis, including the effects of volatility and liquidity; and market-by-market risk exposure, accounting for both general and local conditions. Multimarket exposure necessarily involves currency exchange and interest rate risks, and it is wise to study the sensitivities, do simulations, and employ high frequency financial data (HFFD). These are activities in which financial analysts should be most versatile.
Figure 2.4  Radar chart for off-balance sheet risk management to keep top management alert (axes: aggregate derivatives exposure converted to loans equivalent; counterparty-by-counterparty risk, on-balance sheet and off-balance sheet; instrument-by-instrument exposure, including volatility and liquidity; market risk evaluation in any market or counterparty, including interest rate risk and currency risk; delta, gamma and sensitivities using high frequency data and simulators)
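A minimal sketch of the decomposition behind Figure 2.4 is given below. The positions, the loans-equivalent figures and the field names are hypothetical assumptions; the point is only to show how exposure can be rolled up counterparty-by-counterparty, instrument-by-instrument and market-by-market, and then aggregated.

```python
from collections import defaultdict

# Hypothetical positions with loans-equivalent exposure already computed per trade
positions = [
    {"counterparty": "Bank A", "instrument": "IR swap",           "market": "USD", "loans_equiv": 12.0},
    {"counterparty": "Bank A", "instrument": "FX option",         "market": "EUR", "loans_equiv":  4.5},
    {"counterparty": "Fund B", "instrument": "IR swap",           "market": "USD", "loans_equiv":  7.3},
    {"counterparty": "Fund B", "instrument": "Credit derivative", "market": "GBP", "loans_equiv":  9.1},
]

def rollup(positions, dimension):
    """Sum loans-equivalent exposure along one dimension of the radar chart."""
    totals = defaultdict(float)
    for p in positions:
        totals[p[dimension]] += p["loans_equiv"]
    return dict(totals)

for dimension in ("counterparty", "instrument", "market"):
    print(dimension, rollup(positions, dimension))
print("aggregate derivatives exposure:", sum(p["loans_equiv"] for p in positions))
```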
Many banks, however, find the studies I am suggesting a difficult challenge, because they involve both descriptive and quantitative skills as well as the ability to exhibit ‘how to do’ knowledge in fast-moving situations. People with experience in the study of complex systems appreciate that the act of analysis presupposes a regular, orderly way of doing something: separating a whole into its constituent parts; determining the nature and proportions of each of the components; and examining the interrelation, interaction and interdependence of each element while:
• observing,
• interpreting,
• measuring,
• associating, and
• predicting.
In a scientific environment, these functions involve a great deal of analytical skill, including the ability to doubt. Doubt is the process through which mankind keeps control of its creation; that is, of its own intellect. Doubt is necessary in appreciating the fact that science is not omniscience. Scientists usually know what they learned in school or found through their own research, but this may be old and useless stuff
indeed. Today researchers are as good as their last experiment, in which they constructed conceptual models in order to:
• simplify and idealise;
• conceive and understand;
• gain insight and exercise foresight;
• establish a principle or take action.
As new experiences are integrated with old know-how, an internally consistent analysis may give rise to doubt regarding some of the elements under study, or even a theory’s whole structure. When it comes to making important decisions, scientists who have no doubts about their field of activity and their method are often as incompetent as the professionals who have allowed themselves to become obsolete.
Another requirement of an internally consistent analysis is polyvalence of background. This is most often present in all forms of human enterprise, in spite of the fact that we often talk about specialisation. Financial analysis is best run by persons who are both generalists and specialists, experts and amateurs, merchants of dry facts and purveyors of considered conclusions:
• The views of specialists who see through the prism of their narrow discipline are necessarily restricted.
• To broaden the focus we must turn to the interdisciplinary person who is able to appreciate the whole picture.
Down to the fundamentals, it is not the existence or use of techniques that gets things done, but the consistent application of skill, experience and unbiased thinking. This includes the ability to challenge what has been done, the way in which it has been performed and the conclusions that have been reached.
Take computers and communications as an example. The computer system must be learned as a general aspect of industrial and cultural life – not as an introverted subject reserved for the systems specialist. Systems thinking implies that we stop using humans as number-grinding machines. The massaging of data should be done by computers. What the analyst should do is to apply rigorous mathematical tests. He or she should follow the rules underpinning the system concept:
• challenging the obvious, and
• experimenting on the crucial factors and their variation.
In this work, the analyst should frame the present as a function of the future, not of the past. Cornerstone to this reference is the definition of the need to know. Essential, too, is the distinction to be made between design characteristics: the two main classes in any analysis are functional and technical issues. The functional imply controlling parameters; the technical establish the limits. Both in functional and in technical terms, the flow of information should be timely, accurate and uninterrupted. This, too, is a characteristic of an internally consistent analysis.
2.6 Financial studies and the methodology of physicists and inventors

The career of Galileo Galilei (1564–1643) marks the real beginning of natural science, combining mathematical theories and physical experiments. Before Galileo, most scientists thought of physical experimentation as being something irrelevant, believing that it detracted from the beauty of pure deduction. But Galileo described his experiments and his points of view so clearly and convincingly that he helped in making experimentation fashionable, at least in the learned community. The centuries that followed built upon this heritage and over the years his way of work became known as the physicists’ method.
We should learn a great deal from this method rather than reinventing the wheel in financial analysis. The wealth of knowledge that becomes available from experimentation in the physical sciences suggests that in financial studies we must convert and refine methods that have been successfully used in physics. This is the policy followed by men of science in many fields. In the mid-19th century, Claude Bernard, a French medical doctor, established the experimental method in medicine. While following Galileo’s track, Bernard greatly improved upon it by:
• making scientific observation a cornerstone issue; and
• instituting experimentation as a medical discipline, which has since been adopted in other scientific domains.
This is also what the best-known inventors have done. Thomas Alva Edison and his close associates were granted 1,000 patents for such familiar constituents of our material culture as the light bulb, the phonograph and the motion picture camera. In fact, the great majority of Edison’s formative ideas never saw the light of day. Rather, the great inventor:
• wrote literally to find out what he was thinking; and
• revelled in his notebook drawings as sheer process – the life of his mind.
In the 19th century invention was referred to as an art, and Edison’s own road from sketch to concrete object was marked by a higher-order (metalevel) imagination. Compensating for progressive hearing loss since childhood, he was a consummately visual thinker and fine draftsman. Concepts came swiftly, surely and unrelentingly, as he put them on paper to help his own understanding. Edison’s notebooks indicate a multifaceted sensitivity to ideas, people and things. Viewed in their encyclopaedic context, his laboratory notebooks vividly present the formative roots of modern R&D, all in the brain of one person. In fact, the breakthroughs Edison made in terms of an internally consistent methodology are so significant that today’s richly endowed research and development laboratories might be hard put to adopt his favourite method of problem-solving: simply try everything.
Thomas Edison was equally adept at creating products, systems and companies, such as Edison Electric Light, a predecessor of General Electric. From his adolescent years as an itinerant telegrapher working the night shift in small-town train depots throughout his native Midwest, Edison was obsessed with enhancing communications technology. As with so many of his inventions, components of one machine generated elements of the next, with the telegraph the important starting point for many of his ideas. A typewriter, which he developed, came naturally from the inventor’s previous work in printing telegraphy. This is a good example of Edison’s precise methodology: stating the object of the invention in the first line, thereby staking his territory before moving on to identify the succession of inner mechanics in exhaustive, alphabetical detail. This painstaking procedure aimed to make the patent application a seamless document, fairly secure from imitations.
Taking a leaf out of the great Edison book, we can apply a similar approach to financial analysis. Or, if you prefer, we can take fluid dynamics as another model. Ordinarily, when we study fluid dynamics we start out by studying flow in open channels, such as a system of glass tubes and containment vessels. We learn about turbulence, currents, secondary flows, as well as how different sorts of fluids circulate around in a channel. We don’t just copy the concept but consider similitudes and differences:
• A major difference from finance is that in the case of fluids the geometry is fixed and the shape of the channel is static: only the fluid is moving.
• By contrast, in a financial environment the structure itself and its environment are fluid, so that the effects of changes in variables start compounding very quickly.
The analogy, however, could still hold if we account for the fact that the changing flow ends up affecting the pressure gradients on the inside of the model, so that the structure we are studying starts to collapse from the inside. When this happens, more usually than not, it requires us to:
• re-examine the mathematics of the problem;
• incorporate a module which permits us to predict the effect of stress of all sizes; and
• come up with some way to compensate for stress in the market, as the reserve bank does by lowering interest rates.
A great deal of the insight required to confront in an able manner the challenge posed by these three bullets is germane to the culture of the scientific observer and his or her way of doing things. An example from antiquity is provided by Eratosthenes, who lived in Alexandria in the 3rd century BC. He was a mathematician, astronomer, historian, geographer, philosopher, poet and theatre critic. Eratosthenes was the director of the great Library of Alexandria when he found that in the southern frontier outpost of Syene, near the first cataract of the Nile, vertical sticks cast no shadows at noon. This happened on the longest day of the year, as the sun was directly overhead. Someone else might have ignored this observation, but it aroused the interest of Eratosthenes, who had the presence of mind to do an experiment: he observed whether in Alexandria vertical sticks cast shadows near noon at summer time, and discovered they do.
If the same sticks in Alexandria and Syene cast no shadow at all, or they cast shadows of equal length, then it would make sense that the Earth is flat, as was thought at the time. The sun’s rays would be inclined at the same angle to the two sticks. But since at the same instant there was no shadow at Syene but a shadow was observed at Alexandria, the only possible answer was that the surface of the Earth is curved. Eratosthenes provided one of the best examples of what the experimental method can achieve.
In a bureaucracy, including the bureaucracy of financial institutions, differences of the type that excited the imagination of a keen observer like Bernard, Edison or Eratosthenes would attract no attention, or they would be swept away in the murky waters of administrative seaweed. In a highly dynamic market where deregulation, globalisation, technology and innovation are king, no financial institution and no other type of company can afford to be less than fully observant. If it fails in this duty, it will kill its business and will soon run out of cash.

2.7
Management based on research and analysis
An analytical methodology will capitalise on the design criteria and on the information flow we have chosen to use. To be effective, such a methodology must be reasonably simple and cost-conscious. It is generally unwise to abandon a methodology whose logic we understand and that continues providing commendable results, even if this means foregoing the latest tools or more sophisticated analytical approaches. One must learn well and appreciate a new methodology before adopting it.
The policy I am suggesting is equally valid in trading. Projections of exceptional profits often mean embracing trading habits and taking risks that one does not fully appreciate. When this happens, the way to bet is that down the line it will lead to considerable losses. ‘My approach to bonds is like my approach to stocks’, says Warren Buffett. ‘If I can’t understand something I tend to forget it.’
What I am saying by no means implies that the financial analyst should be against change. To the contrary, dynamic organisations and their analysts should espouse the management of change in every aspect of their business. For a conscious being, and for a well-managed company, to exist is to change – because to change is both:
• to mature; and
• to become more efficient.
Life, including professional and business life, is a matter of time rather than space. In fact, as Dr Charles P. Steinmetz – Edison Electric’s, then General Electric’s, engineer and scientist – once said: ‘Time and space exist only as far as things or events fill them.’ Therefore, they are
forms of perception. The evolution taking place in time and space leads dynamic entities; therefore we should choose:
• change rather than position; and
• quality rather than quantity.
Under the impact of change, business life is much more than a redistribution of matter and motion. It is a flexible and fluid continuous creation. Like any other process, however, change has to be managed, and this is just as true of change in method, in tools and in analytical concepts as it is of change in products, processes, markets and management philosophy.
Some of the participants at my seminars in banking strategy and in risk management ask me: Why should we use rigorous analytical concepts in financial activities? Of the many advantages that analytics offer to the problem-solver, perhaps the most important is that of furnishing a perspective of the situation in which all the fundamental assumptions are explicitly formulated. Therefore, they can be objectively examined and, if necessary, altered.
As Chapter 1 underlined, models help in the management of change because they are used to forecast the way decisions may influence our organisation, especially in terms of risk and return. All possible alternatives must be tested, and this is done through analytics at relatively low cost compared to trial and error in actual practice, where error might highly disturb the posture or operations of our company, or run our income statement into red ink.
In no way, however, does it follow automatically that we can arrive at a satisfactory solution merely because we use a model or other analytical tools. Whether or not a quantitative approach will be useful depends on our method of experimentation and on the way our model reflects the real-world phenomena associated with our business activities. Major failures encountered with models have in their background poor management habits such as:
• Habitual thinking and resistance to change, which are always major cost factors lowering our efficiency.
• ‘My problem is unique’, and generally negative attitudes to the challenges of research and analysis.
• Reluctance to seek advice, and its opposite: getting the wrong advice and other common pitfalls.
• Lack of information on latest developments and on alternative means and possibilities for doing business.
The four points exemplify the negative attitude taken by people and companies towards what is increasingly called management by research and analysis. Each bullet is a tenet of the bad policy of managing by the seat of the pants. By contrast, rigorous analysis brings together the cumulative background and experience of people familiar with the application of mathematical modelling and statistical testing.
Many companies have asked me the question: Who is qualified to do the analyst’s job? This sounds simple, but it is an involved question indeed, which has no simple answer. In the first place, in the literature and in the minds of many people, there is a certain confusion between the functions of financial analysis and those of financial forecasting. It is not the purpose of this book to discuss financial forecasting, but a quick reference helps to establish the difference between the two terms. Financial forecasting has two interpretations:
• forecasting future events or trends and their likelihood;
• projecting the further-out aftermath of current decisions, whether these concern investments, loans or other issues.
In this book, financial analysis has little to do with the message conveyed by the first point, but it can be seen as the pillar on which rests the message conveyed by the second point. This being said, the scope and complexity of the problem will dictate the necessary level of qualified personnel. Both Chapter 1 and this chapter have discussed the basic characteristics making a person or process ‘qualified’.
Often, it is advisable to use a financial analysis team rather than a single individual. In respect to effectiveness and flexibility of work, a small team would probably consist of two or three persons. Utilisation of a small team may result in a decrease in the time necessary for deliverables and in an improvement in performance due to the varied background of team members. This, however, is in no way a foregone conclusion, because personal friction may well develop that reduces the team’s effectiveness.
What about return on investment? A newly established financial analysis office may not pay off immediately. It takes time to get well acquainted with the problem(s), build the model(s), test the variables, compute multipliers, test the construct(s), develop the general structure of experimentation, and make these things pay. Management must keep in mind that analysis is a costly and time-consuming activity, but it is justified by the high returns possible when it is run like a clock.
3 The Contribution of Modelling and Experimentation in Modern Business
3.1
Introduction
‘The sciences do not try to explain, they hardly even try to interpret, they make models’, Dr John von Neumann once suggested. ‘By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observable phenomena. The justification of such mathematical construct is solely and precisely that it is expected to work.’ Nowhere does the von Neumann principle apply better than in advanced studies and in helping management to avoid tunnel vision.
Matter is given to us a posteriori, as the result of a cause that may be unknown to the investigator. By contrast, form is imposed by the human mind a priori. As such it might be considered as independent of sensations, intended to reflect some sort of reality into a man-made construct. Models are an issue of form.
Since the late 1940s, with operations research (OR),1 models have been widely used in business and industry. The fields that profited most from OR are in industrial operations, and references include: the evaluation of market potential, production planning and control, scheduling, transportation, product definition (including value analysis), equipment replacement and make-or-buy decisions. In banking, among the main subjects that have been approached through models are liquidity planning, risk analysis, portfolio management, money market studies, optimisation of fees, forecasting of interest rates and exchange rates, and evaluation of the profitability of banking services – as well as the design of new financial instruments and decisions concerning risk and return.
Since the early 1970s – therefore for nearly 30 years – the leaders of the financial industry have featured small R&D laboratories whose goal has been modelling. I recall from a meeting at Morgan Guaranty in the mid-1970s
that its R&D lab included eight PhDs in engineering and mathematics, four university graduates in OR, eight university graduates in system analysis and programming, and five assistants – a total of 25 people. By the late 1980s, among tier-1 financial institutions, these R&D laboratories had become much more elaborate and very profitable too. A good example is the Advanced System Group (ASG) of Morgan Stanley, with an annual budget of about $50 million and profits nearly twice that amount. In a personal meeting, the director of ASG described his unit as ‘something similar to Bell Telephone Laboratories, albeit at a smaller scale, for the financial industry’.
3.2
The multiple role of analysis in the financial industry
A question I am often asked in my seminars regards the role a financial analyst should play, and also what is the top-of-the-line contribution of analysis to the financial industry. There is no unique answer to these queries, because the role of the analyst in the bank’s decision process will depend on the nature of the problem and therefore on the study he or she is assigned to do. As for the three topmost contributions of analytics to the modern bank, these can vary quite significantly from case to case, though invariably they would involve:
• better documented and more competitive senior management decisions;
• the development and marketing of new financial products and services; and
• a more thorough evaluation of exposure and the design of means for risk control.
Up to a point, current work in financial analytics resembles developments in the late 17th century, when Isaac Newton and Gottfried Leibniz made possible the mathematisation of physics. Nature is written in the language of mathematics. Nearly 200 years before them, Luca Paciolo accomplished single-handedly the mathematisation of accounting.2
Other examples of the contribution of analysis and of analysts to finance and banking are more mundane. In many cases the financial analyst would assist the trader, sales personnel or manager in the identification of alternatives; a fresh viewpoint often helps. In other cases the mission would be that of developing a model of relationships existing in the problem, based on factors, events and their likelihood.
In the latter case, the objective may be product innovation, risk control, optimisation, or any other goal. Invariably, with optimisation comes the concept of cost/effectiveness and all this means regarding subsequent implementation of the results reached through analytical study, experiment, or both:

• Cost/effectiveness is an attempt to identify the alternatives that yield the greatest effectiveness in relation to the cost incurred.
• Each product or service we offer, each programme or project, uses up resources that could otherwise be put to some ‘more useful’ purpose.

It is my experience that prior to the deregulation of the late 1970s/early 1980s, banks were rarely cost-conscious. The margin between the interest paid on savings and time deposits and the interest they charged for their loans was wide, and this guaranteed good profits. Large profit margins don’t exist anymore, and banks that exercise only slight control over their costs and their risks go downhill.

The cost of any product or programme should be more than compensated through benefits. Effectiveness requires that, if it is not, the resources should be allocated elsewhere. Cost/effectiveness is therefore an attack on the relevance of cost, but such an effort would be half-baked if we do not consider the risk being assumed as a major cost. Quite often risk is the largest cost in an analysis of a product’s or process’s cost components. ‘We must stop talking of profit as a reward’, says Peter Drucker. ‘It is a cost. There are no rewards, only costs of yesterday and of tomorrow.’ The costs of tomorrow are the risks we are taking.

In his approach to cost/effectiveness the financial analyst should help the decision-maker to separate the problem into its quantitative and qualitative, or judgmental, elements. The main advantage from this study is clarification of ideas and relationships. The next main advantage is quantification of risk and return. This is achieved through:

• understanding of the problem being analysed;
• a clear statement of all relevant hypotheses; and
• factual and documented conclusions based on these hypotheses.

Prerequisite to any analysis is defining the problem and its variables; supplying measurements and other information elements associated with the problem; developing a mathematical description that relates the critical variables among themselves and to the objective; and evaluating alternatives by testing them and observing the outcome.
In this process, the model behaves like a finite state machine (well known from engineering), which is essentially a protocol. A protocol machine can best be described as an abstract device having a finite number of states. Typically, these are memory states. Another requirement of the analytic approach is a set of rules whereby the machine’s responses to all input sequences are properly mapped into outputs.

In terms of mathematical representation, a view of a protocol machine is that of a block diagram representing the decomposition of a problem or process into component submachines (other protocol machines) and the signalling paths between them. A block diagram can be presented in several versions. Two of them, the more common, are:

• oversimplification; and
• great functional detail.

Practically every block in a diagram can be further decomposed into its constituent parts (or protocol submachines). These will typically be shown as an interconnected set of routing and checking logic, algorithms, feedbacks, switches and other primitives. Furthermore, a protocol machine may define its boundaries (see Chapter 2). That is, specifications of the format, content and other requirements imposed on the signals exchanged between its constituent parts and/or other protocol machines.

This brief description brings our discussion about models somewhat further than the model definition in Chapter 1 and the discussion on some of its mechanics in Chapter 2. The concept I have just explained is applicable to a wide array of real-life problems. Table 3.1 outlines some of the problems that have been among the first to be approached through modelling in finance and banking. Fields other than the six major classes in this table include: credit card services, branch site location, study of staff requirements, departmental or functional cost studies, and personnel administration (selection, compensation, position evaluation).

As this brief presentation demonstrates, modelling is based on organisation and it requires format. This is true of all arithmetic, algebraic and geometric approaches. Herodotus complained about the ‘rape of reality by geometric schematisms’, just as Charlie Chaplin characterised the photograph as the rape of the moment.
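To make the protocol-machine view described above more concrete, here is a minimal sketch (hypothetical, not drawn from any specific banking system) of a finite state machine whose responses to input sequences are mapped into outputs through a small set of transition rules:

```python
# A minimal finite state machine ("protocol machine") sketch.
# States, inputs and outputs are purely illustrative.

class ProtocolMachine:
    def __init__(self, transitions, start_state):
        # transitions: (state, input) -> (next_state, output)
        self.transitions = transitions
        self.state = start_state

    def feed(self, symbol):
        self.state, output = self.transitions[(self.state, symbol)]
        return output

# Toy rule set: a loan request must be analysed before a decision is output.
rules = {
    ("idle",      "request"):  ("analysing", "collect borrower data"),
    ("analysing", "data_ok"):  ("deciding",  "run decision rules"),
    ("analysing", "data_bad"): ("idle",      "reject: incomplete file"),
    ("deciding",  "approve"):  ("idle",      "loan granted"),
    ("deciding",  "decline"):  ("idle",      "loan refused"),
}

machine = ProtocolMachine(rules, "idle")
for event in ["request", "data_ok", "approve"]:
    print(machine.feed(event))
```

Each block of a larger diagram can itself be such a machine, with the signalling paths between blocks carrying the inputs and outputs.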
3.3 Can models help in improving business leadership?
Leadership is, first and foremost, the ability to lead, guide, stimulate and animate a group of people.
Table 3.1 Banking problems studied through models and simulation since the 1960s

Bond department: bond portfolio selection, bond trade analysis, bond pricing and bidding, coupon schedules, treasury auction bidding strategy, term structure of interest rates.
Commercial loans: study of customer balance sheet and operating statement, cash flows, credit rating, loan portfolio selection, loan securitisation.
Instalment loans: numerical credit scoring, analysis of defaulted loans, mix of loan types, credit information and verification, loan officer evaluation.
Real estate mortgages: loan analysis, credit evaluation, neighbourhood studies, loan securitisation, option adjusted spread, collateralised mortgage obligations.
Trust department: portfolio selection, portfolio simulation, risk analysis, study of timing for buy/sell and switches, valuation of assets, investment performance measurement, estimating of earnings, customer tax advisory.
Bank operations: teller operations, forecasting workloads, proof and transit operations, trust department operations, lock box operations and nation-wide location of lock boxes, messenger routing and scheduling, scheduling of visits to customers, deposit account profitability, statistical sampling for audits.
Since organisations are made of people, their leadership directly affects the organisation and its behaviour. A leader is a person who shows the way by going first or, alternatively, by opening paths for others to follow. He or she does so by prognosticating, planning, directing, commanding and controlling the execution of the commands. By so doing, people are exercising their ability to lead towards the fulfilment of an objective that might already exist, or that they have imposed.

The use of models does not directly alter a person’s or a company’s leadership style, but indirectly it can contribute to it in a significant way. It does so by clarifying ideas, identifying critical factors, testing hypotheses, providing quantitative estimates or, alternatively, by assisting in deepening the investigation of opportunities and risks. Let’s not forget that quite often decisions are taken without a factual basis. ‘Many businessmen are always establishing new beachheads’, says Peter Drucker. They never ask: ‘Is there a beach to that beachhead?’

By enabling an investigation to be carried out in an organised way, the process of modelling alters and improves these conditions. It makes feasible at the same time two functions, which superficially might look
as if they are contradictory, but in reality they complement one another very well:

• the classification of the roots of uncertainty; and
• the process of fact-based determination (see in Chapter 4 the quantification of credit risk).

At the beginning of the 19th century, Pierre-Simon de Laplace, the mathematician and physicist, suggested that there should be a set of scientific laws that would allow us to predict everything that would happen in the universe, because the universe is completely deterministic. This was challenged many times, most particularly in 1927 by Werner Heisenberg, who formulated his uncertainty principle. But in reality the concepts of Laplace and Heisenberg coexist. Heisenberg’s uncertainty principle is a fundamental, inescapable property of science; however, it does not necessarily contradict the deterministic hypothesis of Laplace, as has often been thought to be the case. Determinism and uncertainty are combined through complexity theory, which applies both in the physical world and in a business environment:

• Events in the short run are stochastic, following the laws of probabilities.
• By contrast, in the long run they tend to be deterministic, as chaos is followed by a new state of equilibrium; therefore, by stability.

Leadership in banking and finance needs both conditions, and whether it is the notions of Heisenberg or of Laplace that hold the upper ground depends to a large extent on the investment horizon that we choose. This is another way of saying that models should not be monolithic, and different models are necessary for different circumstances. A model must be focused. A great deal in its design depends on answers to queries such as: Who will use it? Under which conditions? How accurate should it be? Which type of test is appropriate? Does the chosen level of confidence make sense? Does the prediction inspire confidence? Clear answers are important because the model’s output alone may not be convincing to the user or the organisation:

• Financial models confront the added challenge that the markets are discounting mechanisms.
• This makes the knowledgeable person sceptical that the future will replicate the past.
Qualitative and quantitative results provided by models are a means for developing dissension, by permitting better focused decisions at corporate level. Alfred P. Sloan recounts how, as chairman of the board of General Motors, he always advised other board members and his immediate assistants never to accept an important proposal without having dissension, therefore inviting critical discussion about the merits and demerits of the issue on hand:3

• The results of experimentation can be instrumental in promoting and supporting a range of viewpoints.
• The results of analytical studies are often interpreted differently by different people – hence, they assist in developing dissension.

Dissension might as well develop because not everybody accepts the model’s results. In evaluating cumulative payoff, for instance, some committee members may feel that the hypotheses being made were not well documented, the values given to the key variables were too small (or too large), the limits of variation were set too narrow (or too broad), and so on.

Also, changes in the structure of the problem may have occurred that make it desirable or even necessary to revise the model and its consequent implications. For instance, model results may have become inconsistent with those observed in the business environment, and there are reasons to believe the model being employed is out of tune with current market conditions. Alternatively, deficiencies in the original model may have become apparent as new ideas (or new products) led to an improved solution compared to the one for which the model was built. The analyst should then formulate alternatives leading to an improved solution, evaluate these alternatives and come up with a better model version. In other cases it may be that the programming language used by the model builder is itself the origin of inflexibility or lack of adaptability. Neither should it be forgotten that occasionally a mathematical model is formulated that is so complex that practical results cannot be derived from it; an enterprise has no appetite for theories.

Dr Robert McNamara, formerly US Defense Secretary, President of Ford, and President of the World Bank, advises: ‘Never go ahead with a major project unless you have examined all the alternatives. In a multimillion dollar project you should never be satisfied with vanilla ice-cream only. You should have many flavours.’ McNamara’s dictum applies nicely to the plans that we make about the future of our firm.
Figure 3.1 Pareto’s Law is widely applicable to business and industry (90 per cent of y corresponds to 20 per cent of x)
Because they make feasible a documented evaluation of alternatives, models help in the elaboration of alternative options, which is one of the characteristics of business leadership. They also assist in improving the sensitivity and connectivity of the leadership team. Sensitivity refers to how likely it is that a given presentation of financial risk will be recognised and judged against its reward. Connectivity means how quickly and accurately information about a case gets passed to the different levels of an organisation which:

• have to act upon it to take advantage of a situation; or
• alternatively, have to redress the situation and avoid further risk.

A simple model is Pareto’s (Vilfredo Pareto was a late 19th-century economist and mathematician, professor at the University of Lausanne). Figure 3.1 brings home the message that the cumulative payoff can be related to a relatively small subset of elements, components or other factors: 90 per cent of ‘y’ corresponds to 20 per cent of ‘x’. Pareto’s Law is essentially a model that describes the general behaviour of a variety of phenomena, providing a meaningful approach for developing decision rules and making selections.

A practical application of Pareto’s Law is shown in Figure 3.2. Management should know by experience that in a given problem some solution has greater payoff than the average solution, which connects ‘x’ to ‘y’ through a straight line. This case appears frequently in practical problems and it helps to describe the items constituting a solution with high, medium and low values. The same is valid regarding the importance of decisions managers make and their aftermath.

Figure 3.2 A practical application of Pareto’s Law versus equal payoff (Pareto’s curve against the straight line of equal payoff)

There is always a silver lining in the evaluation of alternatives, even if we don’t get immediate results, which means that the time and energy spent in constructing the model is not necessarily wasted. As long as we are willing and able to learn from our mistakes, much is being gained even if the artefact that we built doesn’t work to perfection. The critique of models and modelling is another contribution to the art of management.
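Returning to Pareto’s Law for a moment, a minimal sketch of the idea follows; the payoff figures below are invented for the illustration, not taken from any bank’s data. It sorts the contributions of individual items and asks how small a subset accounts for 90 per cent of the cumulative payoff:

```python
# Hypothetical payoff contributions of 20 products (illustrative figures only).
payoffs = [400, 250, 180, 90, 15, 12, 10, 9, 8, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1]

total = sum(payoffs)
cumulative = 0.0
for count, value in enumerate(sorted(payoffs, reverse=True), start=1):
    cumulative += value
    if cumulative / total >= 0.90:
        share_of_items = count / len(payoffs)
        print(f"{count} items ({share_of_items:.0%} of the list) "
              f"deliver {cumulative / total:.0%} of the payoff")
        break
```

With the figures above, the first four items (20 per cent of the list) already deliver about 90 per cent of the payoff, which is the message of Figure 3.1.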
3.4 Non-traditional financial analysis and qualitative criteria
It is always possible to try to solve a problem by proving that no solution exists. This is a negative approach and it works by default. Inability to formulate a solution can happen from time to time, but the proof of no solution is not the real goal of the financial analyst’s work. There is no such thing as a negative proof in science. Therefore:

• The goal of financial research and analysis is to find one or more solutions, whether through classical or non-traditional research.
• The essence of non-traditional approaches is that of new departures: doing advanced types of study through simulation and experimentation – but also challenging the obvious.

That is why in Chapters 1 and 2 so much attention has been paid to abstraction, critical evaluation of facts, and analogical reasoning. Analogical reasoning, including both qualitative and quantitative criteria, brings the analyst closer to the aim of financial modelling and to the understanding of the forces that dictate the behaviour of markets. It also guides his or her hand in analysing the factors behind products that are successful and others that are not.

Non-traditional financial research uses analogical reasoning for the identification and exploitation of strange correspondences that exist in a financial system. This is key to breakthroughs in finding more dependable ways for calculating risk and return, therefore unfolding new business opportunities, but it also requires following a trail of complexity studies which can lead:

• from an economic analysis of market volatility;
• to the exposure associated with novel instruments.

In this process, the non-traditional solutions provided by financial engineering are both the microscope and the hourglass – which, when it empties, signals that a product is out of luck. Its returns no longer cover the risks and costs that it involves. Even if the market still wants it, our company should have no use for it.

The financial and industrial literature does not pay enough attention to the contribution of analogical reasoning to non-traditional research. Yet, this has been the process used by the physicist and mathematician Evangelista Torricelli (1608–47) in examining the effect of atmospheric
pressure on the level of liquids; and by Johannes Kepler (1571–1630) in his inference that the trajectory of planets is elliptic – which he reached after having investigated several geometric forms that might have been applicable.

From the will behind these and so many other efforts to confront the known and therefore ‘obvious’ notions or schemata has come 99 per cent of scientific progress, and the same is true of advances experienced in industry and in finance. Some of the most important revelations in analytics have been spontaneous, reached by stepping out of the beaten path, or by thinking the ‘unthinkable’. The moving force behind discoveries is the fact that thinkers have a limit to their sensitivity to certainty, and this limit is quickly crossed by those keen to challenge what is ordinary. Beyond it is uncertainty and the challenges it presents. Non-traditional thinking looks at facts from a metalayer (see Chapter 2) and formulates hypotheses that:

• test or outright alter the usual conception of events; and
• take out of the equation factors that are minor or inactive, while integrating new ones.

The point of departure of non-traditional research conditions the whole journey into a given scientific domain (see Chapter 1). The choice of our focus, beyond any known façade of things, is a major element in the success of this enterprise. This is how we should be looking at chaos theory, the method of study of pattern changes through non-linear dynamics. (Chaos theory is connected with the principles of motion of bodies under the influence of forces, in a way similar to the breakthroughs in physics in the Torricelli and Kepler examples.)

Another characteristic of non-traditional financial research is the use of qualitative criteria in conjunction with quantitative approaches, whether the latter are based on numerical analysis or other disciplines. Qualitative criteria help in giving perspective. This does not mean that we disregard the figures, but we must look beyond the numbers as well as behind them:

• How have they come about?
• What is the likelihood of their remaining where they are?

These are crucial questions, and answering them can be a hell of a professional task. But serious analysts have to go through it. Qualitative criteria complement rather than contradict the principle that the decision process
of bankers, treasurers and investors will become easier as more figures are available and more is known about how the market is working:

• Quantitative approaches give an answer to the questions ‘what’ and ‘how much’.
• Qualitative criteria respond to the questions ‘why’ and ‘in which way’, seeing to it that our projections are not half-baked.

Take as an example devising a model for the mortgage market. The intellectual challenge for rocket scientists is not the mathematics but the need to understand how changes impact on market psychology and therefore the saleability of securitised mortgages. For instance, changes at certain points of the interest rate yield curve influence individuals to alter the pace at which they pre-pay their mortgages.

There are as well other considerations of a qualitative nature, and associated with them are pitfalls. In the period from 1991 to 1993, dealers used pre-payment patterns from the 1980s to predict householders’ behaviour. In doing so, they did not realise that mortgage originators in the 1990s were beginning to act differently, often calling borrowers after a 50 basis point cut in interest rates and suggesting that they re-finance. As a result of this change in behaviour many holders of sophisticated mortgage-backed derivatives products suffered greater than expected losses as pre-payment rates accelerated. To make risk management more effective rocket scientists must ingeniously exploit the different features of individual mortgage-backed securities, like embedded call options, and how these might be managed.

Because qualitative and quantitative criteria used in modelling interact with one another, new methods are being developed for numerical analysis enriched by qualitative factors. Many of the older methods of numerical analysis were devised for largely manual work and they are not the best for interactive computational finance; in fact, some are not even fit for computer processing. For instance, instead of employing tables of elementary functions we can compute the desired values directly from algorithms. The other side of the same argument is that through computers we can handle approaches to numerical analysis that could not even be considered for hand work. This includes stochastic processes such as Monte Carlo, as well as methods for finding eigenvalues, inversion of fairly large matrices, minima and maxima of functions of several variables, and so on.

A wealth of structure can be found in the new wave of modelling, leading to sophisticated forms of mathematical synthesis. This is very
important because we are increasingly dealing with situations that require the analysis of complex systems with dynamic behaviour.
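As a small, purely illustrative sketch of the kind of computation mentioned above (the matrix and the function are invented for the example), modern numerical libraries make such operations routine:

```python
# Illustrative numerical-analysis operations of the kind listed above.
import numpy as np
from scipy.optimize import minimize

a = np.array([[4.0, 1.0],
              [1.0, 3.0]])

eigenvalues = np.linalg.eigvals(a)      # eigenvalues of a small matrix
inverse = np.linalg.inv(a)              # matrix inversion

# Minimum of a function of several variables (a toy quadratic bowl).
result = minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                  x0=[0.0, 0.0])

print("eigenvalues:", eigenvalues)
print("inverse:\n", inverse)
print("minimum found at:", result.x)
```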
3.5 Models become more important in conjunction with internal control

With two pace-setting decisions by the Basle Committee on Banking Supervision, the 1996 Market Risk Amendment4 and the 2001 New Capital Adequacy Framework (Basle II), the Group of Ten central banks gave the institutions under their control a very clear message that it is mainly the task of the banks themselves to develop the models necessary for the steady valuation of their exposure and the calculation of appropriate reserves. To do so commercial banks must have rocket scientists versed in model building. After the models are made:

• the role of supervisors is to test and recognise the output of the model; and
• to assure that this model output is properly used for management control, and that the model is constantly upkept.

During the central bank meetings that I had, Group of Ten supervisors also underlined the fact that, while it has become important, the development of risk control models is not the only activity necessary to the management of risk. Other crucial issues are:

• sound organisation and structure (see section 3.6);
• a properly functioning internal control system; and
• the need for having all data important to the evaluation of exposure at one’s fingertips.

Internal control, said one of the executives who contributed to this research project, is both a practical problem and a cultural problem embedded deep in the way a credit institution is doing its business. Yet, the culture of some companies is alien to the concept of timely and accurate internal control. Another senior banker commented that to find a valid solution to internal control problems one has to distinguish three distinct phases:

• identification that there is an internal control problem;
• measurement to assure that we know the size of the problem; and
• alternative solutions to end that problem, at least in its current form, and choice of the best alternative.
Identification, measurement and solution are three giant steps, in both personal life and business life, that pervade our thinking, decisions, and actions. There is an important link here with the world of military intelligence, including the fact that quite often the tangible is given greater value than the intangible. Intangibles are harder to identify and measure; yet, sometimes, it is they that lead to a better understanding of the situation.

Fear and greed associated with stock market behaviour and/or derivatives trades is an intangible. Can we say at which point we are on the curve in Figure 3.2? If we are able to exactly define the time and value at which we stand, we have nothing to fear from market crashes. As Sun Tzu put it: ‘If you know yourself and know your opponents, you have nothing to fear from one hundred battles.’5

Timely, accurate internal reporting on both tangibles and intangibles is an indispensable partner of the command and control system of organisations. The complexity of most of the events in finance and in technology militates against classical methods and suggests the use of real-time simulation. In the longer run, even simple elements resist the attempt to deal with them by cookbook-type formulas. The advent of derivative financial instruments and the emphasis placed on global finance have given rise to projects of a scope greatly exceeding that of past banking operations. The pace of innovation in the market sees to it that projects addressing products and processes must evolve rapidly, and their milestones of progress cannot be set, let alone controlled, without model assistance:

• New financial instruments raise vast problems of planning, management and risk appraisal.
• An extensive study is necessary of organisational aspects, because of the impact of new processes whose aftermath is not fully appreciated.

Complexity all by itself does not assure innovation or change. Professionals dealing with complexity have to become aware of the new conditions, and this awareness must be translated into a plan for meeting objectives. Such a plan involves functions and persons, not only financial data, mechanical components and technological developments. With complexity comes a growing volume of challenging queries, and record keeping leads to massive databases. This leads to the issue of complexity in modelling.

There exist many parallels between new developments in the scientific and the financial domains. As achievements become more numerous, more
inspiring, more divorced from the little corner of common sense familiar to all of us, it becomes increasingly necessary, as well as more difficult, for an individual to maintain a firm intellectual grip on the various ramifications of a chosen field of knowledge. Therefore, there is every advantage in modelling products and processes, in the way described in Figure 3.3, after carefully defining the problem.

Figure 3.3 Problem definition is only the starting point of modelling (its blocks include problem definition, goals definition, identification of variables, hypotheses, needed information elements, experimentation through the model, evaluation of forward output, greater database bandwidth, playback using datamining, and problem solution)

Usually, in a complex environment, business is so diversified and experience in handling innovative projects so short that an overwhelming number of problems are clamouring for solution. Each of these problems requires individual attention and often separate treatment. It cannot be solved in the same way as its predecessor
problems, or as different issues confronted at this time by somebody else. Because complexity tends to dominate new products and processes, and this is in all likelihood going to increase in the future rather than attenuate, supervisory authorities of the G-10 look favourably on the development and use of models. This is at the core of Basle II, and most particularly of the IRB solution. The Basle Committee believes that:

• models should be designed for better management of the financial institution; and
• they should enable a quantitative appreciation of risk by senior executives, and at board level.

It has not escaped the attention of senior bankers that, like statistical quality control charts6 and market risk models (see Chapter 10), credit risk models can be instrumental in promoting internal control. Said the European Central Bank: ‘Measurement units and control procedures are two of the challenges lying ahead.’
3.6 Human factors in organisation and modelling
As I never tire of repeating, organisations are made of people. Therefore, the strategy we choose towards modelling and models must be appealing to the people expected to use them and to those who would judge their output. The richness of an organisation, when we talk of a company, and of the population, when we talk of a nation, resides in the analytical culture of its people. The persons who today appreciate what can be offered by models are not that many. But people can be trained, since man is a reprogrammable engine. Lifelong learning is a good investment. ‘If you plan for a year, plant rice. If you plan for 10 years, plant trees. If you plan for 100 years, educate people’, says a Chinese proverb. And a proverb is long experience expressed in a short sentence.

No sound plan on organisation and structure can afford to forget the importance of human factors in internal control and in modelling. Most likely, nobody will ever learn enough about all the details of how humans react to new conditions to build a valid prognosticator based on such reactions; but this makes human factors even more important rather than less so.
A basic notion to be explained is that modelling serves best in using the computer as a laboratory, by running simulations of the behaviour of people, products, markets and other complex systems. This kind of experimentation aims to find the best solution to the likely course of events, and to work around them before the critical mission of a system, or of an institution, is adversely affected. One of the challenges of adopting a dynamic model of an entity is that a relevant part of the knowledge about system behaviour is embedded in the algorithms to be chosen. Whether or not initial data might suffice to estimate model parameters depends on:

• the goal we are after; and
• the timeliness and completeness of available information.

As I had the opportunity to explain in Chapter 1, the quality of financial information can be good, bad, or unknown. Rarely is there assurance on its standard. Therefore, much depends on the integrity of the analyst in accepting only what is relevant and dependable. Some banks have found out that if the financial information is of an average quality they should be using two analysts, maybe even three. Subsequently, they should scrutinise the results to find out differences in their perception of facts and trends. This permits a system of checks and balances, which should be built into the chosen procedure. Agents (knowledge artefacts) can provide invaluable services in flushing out inconsistencies:7

• in data streams; and
• in database contents.

Chapter 1 has also brought to the reader’s attention the importance of a dependable methodology that provides a common framework, so that two different analysts will be able to come up with the same material in terms of findings and conclusions, if their evaluation of facts is congruent. The ability to repeat analyses and experiments under similar conditions is a point of strength, since available information elements are often scarce and do not always represent a statistically meaningful sample. Managers, treasurers, traders, loans officers and other professionals can benefit from the methodology and the models, the more so if these allow the identification of turning points and the rates of resources to be
allocated to a given process or project. Benefits tend to be greater if and when the model is:

• representative of the end user’s responsibilities;
• able to retain the attention of other committed users; and
• fairly well documented in its presentation of results.

These prerequisites will be fulfilled if the model has been worked out in a consistent manner, the hypotheses on which it is based are validated, the data it employs are timely and dependable, and its output presents a comprehensive, clear message to the people using it. Also, as I have mentioned on so many occasions, the model must be focused. Focusing underlines the need for tuning the artefact to the organisational setting in which it will be used. Well-done organisational studies typically try to:

• discern critical chains of activity;
• anticipate adverse events, particularly exposure; and
• guide action to obviate them.

A major project in which I participated recently investigated both analytic and stochastic means of financial system simulation, coming to the conclusion that in most cases a very detailed analytical approach is both too tedious and too complex to be of practical use. Beyond this were constraints in terms of qualified personnel. Stochastic approaches to simulation utilising Monte Carlo were found to be the most promising.8

Subsequently, attention was paid to developing techniques for determining the number of Monte Carlo trials necessary to predict system performance. This was done at different levels of confidence, both for initial conditions and as a function of time. A process was elaborated permitting random selection from a distribution, which could be directly applied to the Monte Carlo procedure. Such work also included probabilistically defined measures of both initial and time-dependent system performance. Subsequently, two different projects were done along the outlined principles:

• one for derivative financial instruments;
• the other for globalised banking operations.

Globalisation and derivatives were found to be irregularly connected activities in terms of amount of exposure, time, and frequency of different
events. Risk and return originated from a whole family of events whose time of occurrence was stochastic. They terminated at one or more final events, which were not normally distributed. Seen from a mathematics perspective, this network’s topology was such that no chain of succeeding events led back into itself. Furthermore, the duration of intermediate happenings was not necessarily a known constant but conformed, in general, to one of the known probability distributions that are positive as far as the analytical process is concerned.

A key question asked during this study has been: given the distributions pertaining to the isolated activities, what are the probability distributions for the times of occurrence of the intermediate and final events? Far from posing an abstract question, this sentence has intrinsic interest because closely similar situations arise in the context of several other activities in modern finance, making feasible the study of their patterns.

In conclusion, after the organisational perspective has been set, both the human factors and the material factors have been considered, and the analytical job of quantification and qualification of the problem has been done. This was followed by modelling and experimentation, necessary to handle a fairly complex project like the one I have outlined. The result has been the framework of a dependable solution, which can be profitably employed for reasons of command and control.
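By way of illustration of the key question posed above, here is a minimal Monte Carlo sketch; the activity network and its distributions are invented for the example and are not those of the project described. It estimates the distribution of the time of a final event from the distributions of the individual activities:

```python
# Monte Carlo estimate of the final-event time in a small activity network.
# Activities A and B run in parallel after the start; C follows whichever
# of them finishes last.  Durations are drawn from illustrative distributions.
import random

def one_trial():
    a = random.lognormvariate(1.0, 0.4)   # activity A duration
    b = random.gammavariate(2.0, 1.5)     # activity B duration
    c = random.expovariate(1.0 / 2.0)     # activity C duration
    intermediate = max(a, b)              # C can only start when A and B are done
    return intermediate + c               # time of the final event

trials = [one_trial() for _ in range(100_000)]
trials.sort()

mean = sum(trials) / len(trials)
p99 = trials[int(0.99 * len(trials)) - 1]  # empirical 99th percentile
print(f"mean final-event time: {mean:.2f}")
print(f"99 per cent of trials finish within: {p99:.2f}")
```

The number of trials, and hence the confidence that can be placed in the estimated percentiles, is exactly the kind of question the techniques mentioned above were developed to answer.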
Part Two Elements of the Internal Rating-Based Method
4 Practical Applications: the Assessment of Creditworthiness
4.1 Introduction
Credit risk, which concerns the dependability and solvency of a counterparty, is the earliest type of risk known in finance and banking. Credit risk is part of the broader concept of risk, which is not alien to people even in the sciences. Max Planck, the physicist, once said: ‘Without occasional venture or risk, no genuine inventions can be accomplished even in the most exact science.’ Everything we do has a risk, but we also hope that it has a return that exceeds the exposure by a margin. The question is which transactions present the best return for risk being taken, as well as how many risks are involved. There are credit risks, market risks, operational risks, legal risks, technology risks, and many others. Therefore, we need a comprehensive, portfolio-oriented approach to risk control:

• understanding component risks;
• measuring them in a uniform manner; and
• assigning appropriate capital to support them.

In principle, we can make more profit by taking credit risk than market risk, but credit risk is seldom well-managed. Banks are often too eager to bend their own rules in order to give loans, without really accounting for the counterparty’s creditworthiness. One way to do so is through credit rating, either by independent rating agencies or through resources internal to the institution. In the past, many banks kept an internal credit rating system, fairly similar to that of independent agencies but less detailed. This is changing. The best managed (and larger) credit institutions are significantly
increasing the detail of their rating approach, employ OC curves in their evaluation of creditworthiness, and are active in the use of models:

• Taken together, these steps form the foundation of the IRB approach.
• IRB is one of the pillars of the New Capital Adequacy Framework (Basle II), which replaces the 1988 Capital Accord by the Basle Committee.

The reader is warned, however, that all on its own the knowledge of how to implement credit rating and credit models in banking is not synonymous with well-managed risk, though it is its basic ingredient. Without such knowledge we will not find a solution for controlling exposure in a reliable way, unless we stumble on it. There are also, however, other important ingredients in credit risk management, one of them being the absence of conflicts of interest.

Take the risk council at one of the major and better known credit institutions, and its conflicting duties, as an example. In late 1996, this bank instituted a risk council with four members: the director of treasury and trading (later chairman of the bank), the chief credit officer, the assistant director of trading, and the chief risk manager, who was reporting to the director of trading. This violated two cardinal rules at the same time:

• that traders and loans officers in exercise of such duties should never be entrusted with risk control; and
• that the functions of the front desk and the back office should be separated by a thick, impenetrable wall.

Eventually the inevitable happened: huge financial losses. Post-mortem analysts who looked into this case of conflicting duties also said that the creation of another risk control function, under trading, diluted rather than strengthened the bank’s central risk management system. The result has been a torrent of red ink. This and many other failures in the assessment of exposure should serve as lessons; otherwise, we are condemned to repeat the same errors time and again.
4.2 Notions underpinning the control of credit risk
Counterparties to a given transaction, even AAA rated, can fail. Based on statistics on rated parties by Standard & Poor’s, Table 4.1 documents this statement. No trade and no contract is ever free of credit risk. Some transactions, however, involve much more credit risk than others.
Table 4.1 Increasing probabilities of average cumulative default rates over a 15-year timespan (%)

Rating  Year 1  Year 2  Year 3  Year 4  Year 5  Year 6  Year 7  Year 8  Year 9  Year 10  Year 11  Year 12  Year 13  Year 14  Year 15
AAA      0.00    0.00    0.07    0.15    0.24    0.43    0.66    1.05    1.21     1.40     1.40     1.40     1.40     1.40     1.40
AA       0.00    0.02    0.12    0.25    0.43    0.66    0.89    1.06    1.17     1.29     1.37     1.48     1.48     1.48     1.48
A        0.06    0.16    0.27    0.44    0.67    0.88    1.12    1.42    1.77     2.17     2.51     2.67     2.81     2.91     3.11
BBB      0.18    0.44    0.72    1.27    1.78    2.38    2.99    3.52    3.94     4.34     4.61     4.70     4.70     4.70     4.70
BB       1.06    3.48    6.12    8.68   10.97   13.24   14.46   15.65   16.81    17.73    18.99    19.39    19.91    19.91    19.91
B        5.20   11.00   15.95   19.40   21.88   23.63   25.14   26.57   27.74    29.02    29.89    30.40    30.65    30.65    30.65
CCC     19.79   26.92   31.63   34.97   40.15   41.61   42.64   43.07   44.20    45.10    45.10    45.10    45.10    45.10    45.10

Source: Courtesy of Standard & Poor’s.
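Cumulative figures of this kind are easily turned into the conditional, year-by-year default probabilities that pricing and provisioning work with. A minimal sketch, using the BB row of Table 4.1 as input:

```python
# Conditional (marginal) default probabilities from cumulative default rates.
# Input: the BB row of Table 4.1, in per cent.
cumulative_bb = [1.06, 3.48, 6.12, 8.68, 10.97, 13.24, 14.46, 15.65,
                 16.81, 17.73, 18.99, 19.39, 19.91, 19.91, 19.91]

previous = 0.0
for year, cum in enumerate(cumulative_bb, start=1):
    # Probability of defaulting in this year, given survival so far.
    conditional = (cum - previous) / (100.0 - previous) * 100.0
    print(f"year {year:2d}: cumulative {cum:5.2f}%, conditional {conditional:4.2f}%")
    previous = cum
```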
Therefore, there should be on hand a system that allows compensation for the extra exposure. This can be done through a higher interest rate, which allows a margin for:

• reinsurance; or
• self-insurance.

A risk management solution must be elaborated that assures that our capital earns an adequate return for the risk(s) taken, but also permits maintaining strong oversight of exposure by top management. Prerequisites to this are having a clear definition of acceptable and unacceptable risk, ensuring an adequate level of risk diversification, and demonstrating a disciplined practice of risk management organisation-wide. Models can be of significant help, as we will see with Risk-Adjusted Return on Capital (RAROC).

A credit model can be fairly simple, addressing inventoried positions, default rates and recovery rates, like the example in Figure 4.1. But it may also be sophisticated, involving credit risk optimisation and requiring an accurate analysis of transactions involving loans by a substantial number of factors affecting them. Classically this has been done in terms of four different classes, by:

• industry group;
• geographic area;
• size of company; and
• relationship prevailing with this company.
As far as analytics are concerned, this is a step in the right direction. Much more, however, can be done through algorithms that evaluate loan applicants through more complex factors, such as assessments of whether prevailing margins (gross, net) become better or worse, what is the market strength (leadership) of our bank, and the projected level of volatility. Also to be considered, as illustrated in the sketch following this list, are:

• market liquidity and our bank’s liquidity;
• diversification of our loans portfolio;
• current ratio and leverage of the client;
• capital structure and cash flow of the client;
• sustainable growth of the client;
• debt service of the client.
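A minimal sketch of such a rule-based evaluation, in the spirit of the simple model of Figure 4.1; the thresholds and the borrower figures below are invented for the illustration, not recommended values:

```python
# Toy rule-based screening of a loan applicant (illustrative thresholds only).
def evaluate_borrower(current_ratio, leverage, cash_flow_to_debt_service):
    score = 0
    score += 1 if current_ratio >= 1.2 else 0               # liquidity test
    score += 1 if leverage <= 3.0 else 0                    # indebtedness test
    score += 1 if cash_flow_to_debt_service >= 1.5 else 0   # debt service test

    if score == 3:
        return "yes"
    if score == 2:
        return "maybe: price with an extra premium"
    return "no"

print(evaluate_borrower(current_ratio=1.4, leverage=2.1,
                        cash_flow_to_debt_service=1.8))   # -> yes
print(evaluate_borrower(current_ratio=1.1, leverage=2.8,
                        cash_flow_to_debt_service=1.6))   # -> maybe
```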
Peer data on business loans, and a factual and documented analysis, help in telling the client: ‘This is what you represent to the bank as
credit risk…’. In this case the required model is much more sophisticated, as the reader can ascertain by comparing Figure 4.1 with Figure 4.2.

Figure 4.1 A simple model for evaluation of credit risk (a new transaction is passed through simple decision rules, drawing on analysis of the borrower’s financial statements, default rates and recovery rates, to reach a yes/no decision on the loan)

Figure 4.2 A more complex model for evaluation of credit risk (a new transaction triggers re-evaluation of inventoried positions and decision rules involving batteries of tests; the inputs include endogenous and exogenous exposure factors, analysis of the borrower’s statements, default rates, recovery rates, the borrower’s counterparty risk, volatilities, expected and unexpected losses, our bank’s concentration and diversification of loans, liquidity and the likelihood of outliers; the output is a yes/may be/no decision on the loan and an evaluation of the premium for ‘yes’ and ‘may be’)

There is a lot of business our bank may not wish to underwrite, because the risks that it is taking are not covered by the prevailing premium. The theory is that highly rated clients would be given a prime rate; others will pay an interest rate above the prime rate. Leaving aside the fact that theory and practice are not the same and that banks often bend their own rules, a key question is by how much over the prime rate. A good example of a model specifically designed for the purpose of computing
interest rates according to the level of assumed credit risk is the RAROC developed by Bankers Trust in the late 1980s. Few of the banks which over the years have purchased and used RAROC appreciate what its developers have known since its inception: that its success is above all a state of mind, a discipline about things one chooses to do and risks one chooses to take. Once this is appreciated, RAROC can be applied to many aspects of the lending business done by a credit institution:

• the choice of markets it pursues;
• its selection of counterparties; and
• the financial transactions it chooses to execute.

RAROC is based on the statistical theory of sequential sampling. The model underpinning it forces the loans officer to consider the risk and return ratio whenever he or she is making go/no go (yes/no) decisions. This means carefully exploring potential risks, considering their financial impact, and relating this view to the revenue that is anticipated. Typically, when a borrower (company or individual) goes to a bank for a loan, a binary decision is made: yes/no. The client qualifies or does not qualify for a loan. Risk-adjusted approaches are not binary; they stratify and put a premium by risk ceiling. Statistically, this is known as sequential sampling.1 When confronted with uncertainty it delays a final decision till more evidence is provided to say ‘yes’ or ‘no’.

The benefit provided by a stratified risk assessment is the ability to quantify the exposure embedded in a transaction, so that responsible officers can decide if a particular risk is worth running. This decision is conditioned by two factors: the degree of exposure and the extra premium. Calculated risks covered by premiums permit:

• expanding money and securities trading activities;
• further building up and marketing financial services;
• but also keeping assumed risks within limits.

To gain the benefits from this solution, the bank must be able to maintain and selectively enhance the quality of control services, steadily developing new information processing tools able to sustain risk and return goals. The trick is helping management to decide where to draw the line. As a systems solution, RAROC is a formal process that computes the capital required to support the risk in a transaction. For each risk element, it computes the largest potential loss, accounting for after-tax market
value over a one-year holding period. Such computation is based upon historical volatility, taking into account historical correlation and using a 99 per cent level of confidence. These computations are assisted by a credit risk assessment factor, and they result in the assignment of a capital requirement. As a common unit of measurement, capital allows the various sorts of exposure to be aggregated into a total capital requirement for the portfolio. This is then compared with anticipated revenue, computed as net present value (NPV), to create the RAROC. A hurdle of 20 per cent is commonly applied, but not viewed as absolute.

This is the strategic model. To appreciate its mechanics it is important to understand its tactical component, whose matrix of operations is shown in Figure 4.3.

Figure 4.3 A sequential sampling plan allows computation of interest rates commensurate to the risks being assumed (each ‘may be’ advice on giving a loan moves the plan one column to the right of the prime rate)

Every time we move to the right of the interest rate column under consideration because of a ‘may be’ answer by the model – a fuzzy area falling between ‘yes’ and ‘no’ – the interest rate of the loan under negotiation is increased to cover the extra counterparty risk which needs to be assumed. This is a significant improvement over the traditional policy whereby, when a borrower goes to a bank for a loan, a binary decision is made: ‘qualify/not qualify’. RAROC is not black or white but has areas of grey in its response. It stratifies and puts a premium by risk ceiling. With every ‘may be’ the plan moves to the next column to the right, adding a reinsurance.

In conclusion, sequential sampling permits resetting risk premiums. RAROC ties a borrower’s overall need for capital to counterparty risk, primarily to the borrower’s solvency. It starts with an estimated credit rating translated into an annual default rate. Then it proceeds by increasing the interest rate, as if reinsurance were bought.
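The mechanics just described can be reduced to a small sketch. The figures below (capital at risk, revenue, hurdle rate, premium step) are illustrative assumptions for the example, not Bankers Trust’s actual parameters:

```python
# Illustrative RAROC-style calculation with a sequential 'may be' premium.
def raroc(expected_revenue_npv, capital_at_risk):
    # Risk-adjusted return on capital: anticipated revenue against capital at risk.
    return expected_revenue_npv / capital_at_risk

def price_loan(prime_rate, may_be_answers, premium_step=0.005):
    # Each 'may be' moves one column to the right, adding an extra premium.
    return prime_rate + may_be_answers * premium_step

capital_at_risk = 1_200_000        # e.g. worst-case one-year loss at 99% confidence
revenue_npv = 300_000              # anticipated revenue, in NPV terms
hurdle = 0.20                      # 20 per cent hurdle, not viewed as absolute

print(f"RAROC: {raroc(revenue_npv, capital_at_risk):.1%} "
      f"(hurdle {hurdle:.0%})")
print(f"loan rate after two 'may be' steps: "
      f"{price_loan(prime_rate=0.06, may_be_answers=2):.2%}")
```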
4.3 RAROC as a strategic tool
RAROC’s basic formula was developed by Bankers Trust in the 1980s, at a time when the institution implemented a strategy of sophisticated portfolio risk and performance measurement. This was a pioneering effort. Nearly 20 years down the line, more than 90 per cent of the banking industry has not yet reached the level of support in risk management provided by RAROC. It takes culture and practice, not just good intentions, to be successful with credit risk models.

Bankers Trust has used RAROC to enhance central risk control. Each of its decentralised business operations has been required to monitor and manage its own risk and to report on RAROC positions. Within a
business line, risk management took place at the trading desk itself, as well as at other, higher-up levels of the hierarchy. A central group, Global Risk Management, was charged with:

• setting risk management policy; and
• following up on business line limits.

This organisational solution created a comprehensive basis for discussing risk levels, refining underlying RAROC calculations, monitoring risk continuously, and reporting to senior management daily on the bank’s risk profile. That’s why I use RAROC as a model for banks contemplating the internal rating-based solution of Basle II (see also Section 4.4).

In one out of many implementations that gave RAROC a first-class name in the financial industry, the annual default rate gets converted into a capital requirement using as metrics the standard deviation of the value of the firm’s total net assets. After analysing the shape of the overall loss distribution, one can conclude that, to limit annual default risk, the bank needs enough equity to cover three standard deviations of an annual loss event:

• The total of debt plus equity capital might correspondingly amount to, say, six standard deviations.
• RAROC distributes this amount of capital based on each activity’s marginal contribution to an annual loss.

Financial institutions that have used RAROC on a global scale, across a variety of product lines, comment that its high degree of accuracy and timeliness is greatly assisted by their underlying technological infrastructure, and also by a consistent development of sophisticated business applications that use the RAROC model. Information technology support is most essential to the rapid collection and dissemination of data, a process at the core of controlling risk in a global marketplace. In information systems terms this requires:

• real-time access to committed positions of the bank by credit line, in total and in detail;
• on-line availability of creditworthiness profiles of counterparties;
• ad hoc database mining on liquidity, maturity, and capital exposure on all positions;
• interactive profitability evaluations of counterparty relationships and inventoried positions;
• the absence of geographic limitations and of time-zone constraints, for every one of the above critical factors.

In the course of a decade, prior to the Deutsche Bank buyout, Bankers Trust perfected its method of measurement with the introduction into daily practice of a family of models derived from the basic concepts characterising RAROC, discussed in Section 4.2. With these supportive solutions every loan or other financial transaction is assigned a portion of the bank’s capital, depending on the level of the activity’s risk and return profile:

• Each officer is judged by the earnings he or she produces in relation to the capital attributed to activities under his or her authority.
• An adjustment is made for officers who are in charge of new and developing businesses, and are not expected to produce high returns immediately.

This strategy introduces the notion of capital at risk. While some of the components of RAROC are no different from those used by other banks in their operations, its greater overall sophistication lies in the fact that risk has been taken as an integral part of financial calculations, and it is tracked in a constant manner for any product, at any desk, anywhere in the world. As a computer-based mathematical model, RAROC sees to it that senior management is always aware of the return it gets for assumed risk.

RAROC is also an example of the extraordinary degree of integration necessary in the financial business. Risk assessment quantifies dangers so that responsible officers can decide if a particular risk is worth running. For instance, the careful reader will recall from the discussion in Section 4.2 that every time the customer’s risk level changes (more precisely, worsens) the system calculates the insurance necessary to cover such risk, and ups the premium. Covered by premiums, calculated risks:

• permit the expansion of loans and derivatives trading activities, while keeping a keen eye on exposure; and
• make it feasible to further build up market share in terms of financial services, in full knowledge of the exposure being assumed.

The prerequisites are a firm policy on risk management, a first-class methodology for the assessment of creditworthiness and other risks, and
a high technology system. A focused solution requires maintaining and selectively enhancing the risk factors associated with financial services, accessing and mining online distributed databases. This is at the core of the IRB solution of Basle II, contrasted to the more classical standard method.
4.4 Standardised approach and IRB Method of Basle II
As we have seen on several occasions, the Basle Committee promotes an IRB method which is practically a bank’s own, provided it satisfies criteria of dependability. IRB is substantiated by a bank’s internal ratings of counterparty risk, and it is typically a more sophisticated solution than the standardised approach. With the latter, risk weights are specified for certain types of claims:

• In addition to the familiar weights of 0 per cent, 20 per cent, 50 per cent and 100 per cent, a new weighting factor of 150 per cent has been introduced.
• As is to be expected, the 150 per cent factor is applicable to borrowers with a very poor rating.

In the standardised approach, the risk weighting in the individual risk groups (mainly banks, non-banks and sovereigns) substantially depends on assessments by external credit rating agencies for companies, and by the export credit authority of the Organisation for Economic Co-operation and Development (OECD) for sovereigns. Claims on sovereigns are weighted, depending on their rating, between 0 per cent and 150 per cent. The New Capital Adequacy Framework has elaborated a table for credit assessment and risk weights based on ratings with the standard method.

The computation of the capital charge for a loan is simple and straightforward. Say that a bank has a loan of $10 million with an enterprise rated AA by a recognised independent agency. The capital charge is:

$10,000,000 × risk weight 20% × capital ratio 8% = $160,000

This capital charge would have been zero if this same loan was with an AA-rated sovereign. By contrast, if the loan was with a B-rated sovereign, the capital charge would have been $800,000 (the full 8 per cent level). And if the loan was given to a B-rated non-bank, the capital charge would have been $1,200,000 (applying a 150 per cent weighting factor). Investors will be well advised to use similar standards in calculating their credit risk, as well as the cost of the funds they are investing in debt securities.
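The arithmetic of the standardised approach can be captured in a few lines. The risk weights below are those quoted in the examples above; the helper function itself is merely an illustration:

```python
# Capital charge under the standardised approach: exposure x risk weight x 8%.
CAPITAL_RATIO = 0.08

def capital_charge(exposure, risk_weight):
    return exposure * risk_weight * CAPITAL_RATIO

# The three examples discussed in the text.
for exposure, weight, label in [
    (10_000_000, 0.20, "AA-rated corporate"),
    (10_000_000, 1.00, "B-rated sovereign"),
    (10_000_000, 1.50, "B-rated non-bank"),
]:
    print(f"{label}: ${capital_charge(exposure, weight):,.0f}")
```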
The standard method is a refinement over the flat 8 per cent capital adequacy requirement of the older 1988 Capital Accord, but there is no doubt it can be made better focused through modelling: for instance, by incorporating the probability of default along the lines of the block diagram in Figure 4.2. The concept is that a universal, factual and documented probability of default algorithm, which follows the criteria outlined in Section 4.2 or similar concepts, can be taken as a basis of allocation to one of several weighting categories outlined in a fine grid. Such probabilities of default must evidently be consistent with the bank’s internal ratings and requirements associated with counterparty risk. Therefore, it is reasonable that:

• Banks that start now with IRB will take some time to develop and tune factual probability of default matrices.
• As it evolves, such a system has to be based on rigorous analysis, and it can only be maintained through high technology.

The New Capital Adequacy Framework provides two options for claims on commercial banks, leaving it to the national supervisors to decide which one will be applied to all banks in their jurisdiction. Under the first option, banks are given a risk weight one category less favourable than that assigned to claims on the sovereign. According to the second option, a bank’s risk weighting is based on its external rating. For both options:

• lower risk weights apply to lending and refinancing in domestic currency if the original maturity is three months or less; and
• short-term claims, with a maturity of three months or less, can be assigned a preferential risk weight within certain limits.

Claims on investment banks and securities firms are treated in line with the same rules as those envisaged for credit institutions, provided that the brokerages are subject to comparable supervisory and regulatory arrangements, with the same capital requirements. Three new risk weight categories are being introduced for corporates: 20 per cent, 50 per cent, and 100 per cent, while claims on unrated companies are given a risk weight of 100 per cent. Claims on non-central government public sector entities are weighted in the same way as claims on banks. However, subject to national discretion, claims on domestic public sector entities may also be treated as claims on the sovereigns in whose jurisdictions these entities are
established. Claims secured by mortgages on residential property that is (or will be) occupied by the borrower, or that is rented, are risk-weighted at 50 per cent, with other claims also assigned corresponding risk weights. Within this general framework, with the accord of their supervisors, banks can establish, follow and upkeep a finer grid addressing counterparty risk. Chapter 2 has given an example with a 20-position scale that practically corresponds to the full range of ratings assigned by independent agencies. With an eye to making the banking culture increasingly more open towards rigorous analysis, the Basle Committee promotes the use of mathematical tools and rewards that effort by permitting lower capital requirements. This compensation is well chosen because, other things being equal:
• The more factual and analytical our credit risk computation, the greater the likelihood that we will not be caught unprepared by a counterparty's default.
Combining what I have just said about the IRB solution with an operating characteristics curve, the 20-position rating scale, a stated level of confidence, and rules that permit testing of data streams and database contents, we are able to gain better cognitive ability over the credit risk embedded in a given transaction and in the portfolio. Between the less sophisticated, linear standardised approach and advanced IRB there is a so-called 'IRB Foundations Approach'. Its added value over the standardised approach is that it permits differentiation in credit risk levels, as well as risk mitigation, by incorporating a 2-dimensional rating which includes:
• the obligor; and
• the facility.
This method is not totally new. Most of the institutions currently using it apply a coarser 10-position rating scale (rather than the 20 positions of the advanced IRB solution), typically consisting of six passing grades and four watch-out grades. Moody's Investors Service markets an expert system that handles both transaction rating and inventoried exposure. It grades both financials and non-financials on a scale of 10. The grade is presented to the user numerically, by major factor. A borrower, for example, may get:
• 8, in financial rating;
• 3, in management rating.
Moody's says its expert system is 75 per cent compliant with the New Capital Adequacy Framework by the Basle Committee, and its analysts are now working to make it 100 per cent compliant. Moody's appreciates that, whether a more or a less sophisticated approach is used, good visualisation plays a key part in its success.
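A two-dimensional grading of this kind can be kept as a simple record that preserves both dimensions instead of collapsing them prematurely. The sketch below is an assumption for illustration only: it takes a 10-position scale with 1 as the best grade and 10 as the worst (grades 1 to 6 passing, 7 to 10 watch-out), which is not necessarily how any particular vendor's system is laid out.

```python
from dataclasses import dataclass

# Assumed 10-position scale: 1 is best, 10 is worst; 1-6 passing, 7-10 watch-out.
WORST_PASSING_GRADE = 6

@dataclass
class TwoDimensionalRating:
    obligor: str
    financial_grade: int      # e.g. 8 in the borrower example above
    management_grade: int     # e.g. 3 in the borrower example above

    def weakest_dimension(self) -> int:
        """A conservative single figure: the weaker of the two grades."""
        return max(self.financial_grade, self.management_grade)

    def is_watch_out(self) -> bool:
        return self.weakest_dimension() > WORST_PASSING_GRADE

borrower = TwoDimensionalRating("Sample Corp", financial_grade=8, management_grade=3)
print(borrower.weakest_dimension(), borrower.is_watch_out())   # 8 True
```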
4.5 Amount of leverage, loss threshold and counterparty risk

The benefit of RAROC and similar solutions is the help they provide to the bank's management in deciding where to draw the line in terms of committed positions, credit risk, market volatility, liquidity and maturity exposure. Well-done models permit profitability evaluations that rest on certain hypotheses about the standing of counterparties and their ability to perform. Their willingness to perform is a much more complex proposition. Many lessons can be learned from the meltdown of Long-Term Capital Management (LTCM).2 Risk can be better controlled if we evaluate the soundness of the counterparty's senior management, analyse the quality of the counterparty's track record, and pay attention to ethics and performance in dealing. Besides this, we should definitely:
• require much greater transparency;
• appreciate the importance of stress testing; and
• practise collateralisation with dynamic haircuts.
If we wish to improve our performance in the timely and accurate evaluation of credit risk, we should fully understand that credit volatility increases as credit rating falls. Indeed, this is one of the weaknesses of current rating systems, which do not necessarily account for the volatility of different credit grades. Therefore:
• The results being obtained in credit rating are relatively static.
• The usual transition matrices between different ratings provide information over the longer run.
In the short run, the fact that there exist credit volatility ranges leads to the outcome that three successive ratings can have the same probability
of default. Regulators are currently looking into this issue. Another subject that by all evidence needs fixing is the effect of company size on the probability of default. Big companies do not default at the same rate that small companies do: other things being equal, small companies have a higher likelihood of default. Another upgrade necessary to rating systems, in terms of their day-to-day usage, is a distinction between general risk and special, or systematic, risk. My experience tells me we should carefully study the likelihood of systematic risk hitting the counterparty. The case of LTCM, and its very high leverage, explains the reason for making this suggestion. Again, other things being equal:
• lack of leverage means less credit risk;
• by contrast, policies like loss thresholds end up in high risk.
The loss threshold policy is practised by hedge funds and some commercial and investment banks. The practice consists of offering customers loan facilities to absorb their losses and to face urgent requirements for trading-related margins as well as payments and settlements. By doing so, hedge funds and credit institutions significantly increase their own exposure. It needs no explaining that policies of high exposure increase the likelihood of the counterparty turning belly up. Leverage and loss threshold practices by counterparties must be watched very carefully, even if these entities cannot be left out of our company's business altogether. Apart from business intelligence, which has now entered the bloodstream of banking, watchful policies require:
• development of new algorithms and heuristics to enhance interactive computational finance; and
• implementation of a global system solution, which has no geographic limitations and no time-zone constraints.
The model for counterparty tracking that we adopt should permit checking on-line which antecedent may be currently valid, and which formerly sound counterparty finds itself in a downturn because of overexposure. The solution to be chosen must also make feasible the on-line generation of hypotheses based on events characteristic of the last two or three months, and even more so of the last couple of weeks. The objective is to obtain evidence on proportional hazard and at the same time watch for changes in counterparty policies. For instance, as a way of reducing their exposure, some institutions extend a credit line
but don’t trade with that customer in its whole extent. Bank A gives to Customer I, say, a $100 million credit line based on collateral, but • it allocates part of that money to other banks; and • makes a profit by charging a fee to these other banks. Notice that such fee is not necessarily a profit; it is an insurance reserve along the RAROC concept (see Section 4.2). It may even not cover the risk Bank A takes, as most institutions don’t have a risk coverage policy through reinsurance and/or the technology to track their exposure towards: • correspondent banks; • hedge funds; and • industrial or financial customers. To be successful, the tracking I suggest should be done in detail and cover the allocation of funds down the line. For instance, in the foregoing example of $100 million credit line Bank A may keep $50 million and give Bank B $20 million and Bank C $30 million. While Bank A provides the credit line, deals and pricing is done by Customer I with Banks B and C. Many hedge funds work this way, but banks providing the credit line to other institutions in connection to Customer I don’t particularly care what happens subsequently with their money. The mere act of subcontracting the risk, so to speak, is supposed to wave a big chunk of the exposure. What the management of Bank A may not appreciate is there exists a multiple risk in such approach. Adversity can hit at several levels: • With dematerialization, the same collateral might have been pledged many times (Maxwell risk). • The haircut* taken on the collateral might prove insufficient, in a severe market correction while Bank A looks only at the $50 million it lent directly. • The correspondent bank B or C, or both, may become bankrupt, leaving a dry hole while the liquidators siphon the credit line.
*Percentage reduction made to the value of a security given as collateral, to account for market risk.
It is also not unlikely that the highly leveraged hedge fund becomes bankrupt, pulling down with it Banks A, B and C (LTCM risk). There may also be other complications in this multi-party deal. One of them relates to maturity; another to breaking prudential time limits established by the board of Bank A because of the gimmicks of reallocating the original credit line. Several tricks are used to cheat on limits. Say that for Bank A the time limit is 5 years, but the client wants a derivatives trade of up to 12 years. Bank A gives Bank C the 7-to-12-year band and takes a commission. But its credit line remains open towards Bank C and the hedge fund for 12 years. The fund is highly leveraged and hits the rocks. Whether by design or through being unaware of it, Bank A has broken the 5-year limit imposed by the board on risky derivatives trades and pays dearly for it.
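The multi-party allocation and the maturity-limit breach just described lend themselves to a simple tracking structure. The sketch below is a minimal illustration using the names, amounts and limit of the example in the text; a real system would naturally sit on the bank's own position-keeping data.

```python
# Illustrative tracker for a credit line Bank A shares with other banks.
# Names, amounts and the 5-year board limit come from the example above.

BOARD_MATURITY_LIMIT_YEARS = 5

credit_line = {
    "customer": "Customer I",
    "allocations": [                     # who carries which slice of the line
        {"holder": "Bank A", "amount": 50_000_000, "maturity_years": 5},
        {"holder": "Bank B", "amount": 20_000_000, "maturity_years": 5},
        {"holder": "Bank C", "amount": 30_000_000, "maturity_years": 12},
    ],
}

def full_exposure(line: dict) -> int:
    """Bank A remains exposed to the whole line, not just its own slice."""
    return sum(a["amount"] for a in line["allocations"])

def limit_breaches(line: dict, limit_years: int) -> list:
    """Flag allocations whose maturity exceeds the board's time limit."""
    return [a for a in line["allocations"] if a["maturity_years"] > limit_years]

print(full_exposure(credit_line))                             # 100000000
print(limit_breaches(credit_line, BOARD_MATURITY_LIMIT_YEARS))
# -> the Bank C slice carrying the 7-to-12-year band breaks the 5-year limit
```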
4.6 Risk factors help in better appreciation of exposure
One of the reasons why financial institutions are developing sophisticated models is to gain a better appreciation of exposure on an ongoing basis. Another is to increase senior management's understanding of individual risk components. Systems used to model risk tend to consider several risk factors (RF) in an effort to map the exposure associated with loans, trades and other activities in a more accurate fashion. These risk factors:
• must be established in a factual and documented manner;
• generate large matrices of risk data from operations; and
• require a significant volume of calculation to extract knowledge and action-oriented information.
There is no recipe book on how to identify and elaborate the risk factors. Therefore, in my research I asked cognisant executives how they go about this job. By a majority, the answers that I got resemble the English recipe for cooking a rabbit: 'First catch the rabbit.' To 'catch the rabbit', i.e. understand what the problem is, it is necessary to talk with the CEO, COO, CFO, Chief Credit Officer, Chief Risk Management Officer (CRMO) and Chief Auditor (the '6') to identify the origin of the credit problems faced by our bank, and their sequel. Then pay attention to how each person looks at the origins of the problem, what kind of solutions each has considered, which solutions were tried and what the results have been. When you have done that:
• Read your notes, integrate your results, find where the gaps are and meet again with top management.
• Talk with the immediate assistants of the '6'; ask for statistics and profiles of clients with bad loans.
• Establish the pattern and size of misfortunes; look into other similar cases in the same market (and solutions by competitors at home and abroad); document the results obtained so far.
The three points above provide a significant perspective. After their requirements have been satisfied, look carefully into the bank's Internal Control (IC).3 Find evidence on how IC works, how it monitors, what it requires and how frequently it provides management with meaningful information. Also, what kind of action has been taken on IC's findings, and how this has influenced the bad loans book. The steps that I have been outlining are fundamental to any credit institution that cares to study, analyse and report to top management on analytical findings and, therefore, on financial health. A similar reference is valid in terms of developing a plan for future action to improve upon the bad loans and potential bad loans situation, both in absolute terms and in connection with other critical factors such as leverage. In fact, the amount of leverage and the loss threshold, discussed in Section 4.5, can be turned into risk factors. Let me be explicit on this issue. Every instrument tends to have its own RF, and the same is true of markets, institutions and counterparties, though some risk factors tend to have a greater appeal and can therefore be considered general. The Federal Reserve Board and the Bank of England examined the issue of risk factors in connection with swaps.4 Subsequently, they published a table with RF values as a function of the swap's maturity in years. This is shown in Table 4.2 and is based on:
• the average replacement cost of a pair of enacted swaps; and
• the maximum replacement cost of a single swap, computed through simulation.
In the background is a lognormal distribution, based on the following parameters: 18.2 per cent volatility, 9 per cent initial interest rate and 90 per cent level of confidence. This means that 10 per cent of all cases will exceed the benchmark values in this table. The reader should never forget the role the level of confidence plays in risk estimates.
Other tables for slightly different conditions have been advanced by researchers who focused their attention on specific instruments. Practically all are computed on the basis of historical data, follow the lognormal distribution, use time as the independent variable, and the majority are set at a 90 per cent level of confidence which, as I have just mentioned, is too low. In swaps transactions, especially longer-dated currency and interest rate deals, which can cover a period of as much as 10 to 15 years, companies can assume significant exposures in regard to their counterparties. This fact has not escaped the attention of the developers of the risk factor weights in Table 4.2, and of comparable tables that have generally followed the credit risk rating curves shown in Figure 4.4. Apart from the choice of time as the independent variable, there are other factors as well that merit examination and can be incorporated into an RF model: for instance, the volatility of the counterparty's credit rating by one or more independent agencies. Holding an AAA position over a long stretch of time is all-important in the market for over-the-counter (OTC) derivatives, in which banks provide customised swaps and other deals for corporate customers. Since the early 1990s, in search of an AAA status which they lost for one reason or another, a number of banks have set up special, fully owned subsidiaries to handle derivatives transactions. These are thought to be bankruptcy remote. More precisely, they are structured to give creditors a prior claim on assets in the event of bankruptcy of the parent. Several of these controlled vehicles have been awarded triple-A ratings by independent agencies because they are relatively overcapitalised.

Table 4.2 Risk factors for swaps trades worked out by the Federal Reserve and the Bank of England (lognormal distribution, 90% level of confidence)

Year    Risk factor (%)
1        0.5
2        1.5
3        2.8
4        4.3
5        6.0
6        7.8
7        9.6
8       11.5
9       13.4
10      15.3
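Risk factors of this kind are typically applied as an add-on to the notional amount in order to approximate potential replacement cost. The sketch below does just that with the Table 4.2 values; the linear interpolation for non-integer maturities is my own simplifying assumption.

```python
# Potential exposure add-on for a swap, using the Table 4.2 risk factors.
# The linear interpolation between yearly points is an assumed convenience.

RISK_FACTOR_PCT = {1: 0.5, 2: 1.5, 3: 2.8, 4: 4.3, 5: 6.0,
                   6: 7.8, 7: 9.6, 8: 11.5, 9: 13.4, 10: 15.3}

def risk_factor(maturity_years: float) -> float:
    years = sorted(RISK_FACTOR_PCT)
    if maturity_years <= years[0]:
        return RISK_FACTOR_PCT[years[0]]
    if maturity_years >= years[-1]:
        return RISK_FACTOR_PCT[years[-1]]
    lo = int(maturity_years)
    frac = maturity_years - lo
    return RISK_FACTOR_PCT[lo] + frac * (RISK_FACTOR_PCT[lo + 1] - RISK_FACTOR_PCT[lo])

def potential_exposure(notional: float, maturity_years: float) -> float:
    """Notional x risk factor, at the table's 90 per cent level of confidence."""
    return notional * risk_factor(maturity_years) / 100.0

print(potential_exposure(50_000_000, 5))    # 3,000,000
print(potential_exposure(50_000_000, 7.5))  # about 5,275,000
```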
Figure 4.4 Cumulative default probabilities for AAA, AA, A, BBB, BB, B and CC rated companies (per cent default probability plotted against years 1 to 10)
In this connection, it is appropriate to take notice that most of the commercial and investment banks that formed such special vehicles for derivatives trading have lower ratings than their subsidiaries. At the same time, while buyers and traders might have insisted on a triple-A rating for all deals five or ten years ago, today they are more likely to be satisfied with a lower credit benchmark, like AA or AA- from S&P or Aa2 or Aa3 from Moody's. The lack of reliable tables with risk factors based on two independent variables (the counterparty's credit rating and the maturity), together with a growing appreciation by investors of the magnitude of embedded risks, has seen to it that dealers in the OTC market have increased their demands for collateral from counterparties. Originally this was mainly restricted to deals between banks, but multilateral agencies have followed their example. In conclusion, there are many ways in which people devise sophisticated but non-dependable solutions aimed at modifying credit risk. One result is that exposure levels have increased; another is that the use of collateral is back in favour. Collateral arrangements, and more specifically their haircut, are sometimes linked to the credit ratings of the parties involved. In this way, when its credit rating falls a counterparty is expected to place more collateral as protection against exposure, which increases the frequency with which the value of collateral must be recalculated.
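A rating-linked collateral arrangement of the kind just described can be sketched as follows. The haircut schedule is invented for illustration; actual schedules are negotiated in the collateral documentation.

```python
# Illustrative rating-dependent haircut schedule (assumed values): the lower
# the counterparty's rating, the larger the haircut applied to its collateral.
HAIRCUT_BY_RATING = {"AAA": 0.02, "AA": 0.03, "A": 0.05,
                     "BBB": 0.10, "BB": 0.20, "B": 0.35}

def required_collateral(exposure: float, rating: str) -> float:
    """Market value of collateral needed so that, after the haircut,
    it still covers the exposure: collateral * (1 - haircut) >= exposure."""
    haircut = HAIRCUT_BY_RATING[rating]
    return exposure / (1.0 - haircut)

for rating in ("AA", "BBB", "B"):
    print(rating, round(required_collateral(25_000_000, rating)))
# A downgrade from AA to BBB raises the requirement from about 25.8m to 27.8m.
```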
4.7 Has the WestDeutsche Landesbank Girozentrale (West LB) an AA+ or a D rating?

The best way to look at a business is to consider whether a company is solving a problem and making someone else's life easier or whether, alternatively, it is itself part of the problem. To contribute to the economy, an entity needs to have a product or service that somebody needs. Credit may be such a product, provided its level of dependability and its volatility are properly computed and are meaningful. Investors and traders must be particularly careful because a counterparty may also offer a poisonous gift. Masquerading as being of high creditworthiness while exactly the opposite is true is an example of Trojan Horse rating, with potentially severe consequences. The case of WestLB is an example which is worth bringing to the reader's attention. The German Landesbanken have been an anomaly since their constitution after World War II. They serve as treasuries of savings banks and have for a long stretch of time benefited from state (Länder) guarantees, but they have also engaged in risky business in direct competition with commercial
and investment banks. Nowhere has this contradiction been more flagrant than in the case of WestLB which, while being a savings banks' treasury:
• has acted as if it were an arm of the Länder government; and
• has engaged in very risky operations around the globe, which had no relation to savings bank duties.
WestLB spent other people's money arranging the financing of public works and pet industrial projects of different politicians of the state of North Rhine-Westphalia, which owns 43.2 per cent of the institution. It also got heavily engaged in speculation involving derivative financial instruments, and in dubious financing of companies and governments in less developed countries. This went on for more than half a century, but finally life is catching up with the Landesbanken. To avoid being brought to the European Court, WestLB plans to split off its commercial operations, which represent 80 per cent of the bank's $360 billion in assets, into a separate unit without state backing. A smaller subsidiary that handles the bank's government transactions will still benefit from official guarantees. This mammoth 80 per cent of uncharted business is what other German commercial banks have objected to all along. Commercial banks say that the guarantees thrust upon the Landesbanken by state governments are a misuse of taxpayer money which gave WestLB, and its kin, an unfair competitive advantage. What is more, these guarantees have been provided free of charge and have so far been unlimited in duration. This will soon change. Besides other issues, the loss of state guarantees creates creditworthiness problems. WestLB's commercial unit will not inherit the parent bank's top credit rating. Standard & Poor's typically assigns the Landesbank the grade of AA+ because it is backed by the state and its ability to tax its citizens. Once it loses that support, WestLB's standing will plummet, and for good reasons:
• its high cost base;
• huge appetite for risk;
• aggressive expansion policy; and
• limited capital base.
Given the huge amount of exposure in the 80 per cent of its business, Moody's Investors Service gives WestLB the lowest possible financial-strength rating: that of D.
AA is nearly at the top of the rating scale; D is short for default and sits at the bottom. Such a very low credit rating turns the tables on WestLB. Because of it, the bank will pay more than private-sector banks for funds and will most likely take a hit in its derivatives and other highly risky deals. The fall from grace of WestLB helps in dramatising some of the problems with risk rating. While the independent agencies are doing a very good job in analysing counterparty risk, there are some shortcomings in the method that have to be corrected, even if it is difficult to do so in a homogeneous way within a globalised market. These shortcomings include:
• oversimplification, by reducing hundreds of variables to a couple of figures;
• a certain degree of subjectivity in making judgements on credit risk;
• lack of a useful basis for comparison, particularly where nepotism plays a major role; and
• the use of classification criteria that may be accurate in some countries but not in others.
Another one of the problems with all-inclusive risk ratings is that attempts to look at the comparability of risk among banks of different character, culture, location and size end in results that are not characterised by high consistency. Furthermore, the structure of the deals that will be struck 'sometime in the future' presents an intangible dimension of the risk model, one that is difficult to analyse a priori. One way to correct this shortcoming is by means of systematic and steady backtesting of credit ratings and credit models. As both regulators and many institutions found in connection with market risk, static models are not viable because their results are not dependable over the longer run. Judging from the market risk experience, it is not impossible that many banks that go for an advanced internal rating-based approach without appropriate feedback and backtesting may have to fall back on a less sophisticated approach or on classical credit ratings. This being said, the assessment of creditworthiness by independent rating agencies serves a very useful purpose. It is a necessary supplement to credit evaluation in a globalised economy where not everybody knows everybody else, and therefore credit institutions need an independent input in deciding whether or not to extend credit. But the reader should keep in mind that not all counterparties that appear to be equal actually are.
5 Debts and the Use of Models in Evaluating Credit Risk
5.1 Introduction
The majority of bankers with whom I met during my research expressed the opinion that, in general, credit risk models should incorporate an element of compliance with the policies established by the board, the rules set by regulators, and the law of the land. Many pressed the point that institutions and their credit risk systems should account for what happens at the tail of the credit distribution, the outliers shown as an example in Figure 5.1. The effective development and use of models able to respond to these criteria practically means personalising credit risk evaluators, embedding into them the ability to address relationships with correspondent banks, institutional investors, hedge funds and major corporations. By contrast, non-individualised risk evaluation should rest on:
• statistical sampling;
• pattern analysis; and
• behavioural studies.
There are many reasons for a bifurcation between personalised models for big counterparties with whom we take a significant amount of exposure, and a statistical treatment of small clients, where exposure is widely distributed. The former models should include both qualitative and quantitative factors, acting as a magnifying glass of credit policies; the latter will be largely quantitative. The globalisation of banking and finance promotes the need for interactive knowledge artefacts (agents)1 able to calculate credit at risk, flush out credit deterioration, and bring it immediately to senior management's attention.
Figure 5.1 The best credit risk models are those that focus on events at the tail of the distribution (frequency plotted against magnitude of exposure, from low to very high)
Figure 5.2 A finer definition of capital at risk must be done in a 3-dimensional space: counterparty risk, type of instrument and estimated present value (counterparty exposure by instrument, and present value by instrument)
Agents are also needed to track credit improvement, putting together exposure data by counterparty and by instrument, and helping to manage credit risk and market risk along the frame of reference in Figure 5.2. As Figure 5.2 suggests, credit risk models should be both counterparty- and instrument-oriented (a minimal aggregation sketch follows below). But there exist constraints. Most credit instruments are not marked to market. Therefore the predictive nature of an interactive capital-at-risk model does not come from a statistical projection of future prices based on comprehensive historical experience. This lack of information is exacerbated by:
• the relatively infrequent nature of default events; and
• the short-term horizons often used in measuring credit risk.
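The aggregation sketch referred to above is minimal: exposures are kept in both dimensions at once, so that either the counterparty cut or the instrument cut can be produced on demand. All names and figures are illustrative.

```python
from collections import defaultdict

# Each record: (counterparty, instrument type, estimated present value).
# The entries are invented for illustration.
positions = [
    ("Correspondent Bank X", "interest rate swap",  4_200_000),
    ("Correspondent Bank X", "loan",                9_500_000),
    ("Hedge Fund Y",         "credit derivative",   6_100_000),
    ("Industrial Corp Z",    "loan",               12_300_000),
]

by_counterparty = defaultdict(float)
by_instrument = defaultdict(float)
for counterparty, instrument, present_value in positions:
    by_counterparty[counterparty] += present_value
    by_instrument[instrument] += present_value

print(dict(by_counterparty))   # exposure aggregated per counterparty
print(dict(by_instrument))     # exposure aggregated per instrument type
```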
There is, the world over, an inclination to look at credit risk in the short term and in a spontaneous way. This is contrary to the manner in which the financial markets work, but it is so embedded in the credit risk culture of bankers that it has almost become second nature. The result is a limited, localised view, unable to capture the dynamics of the larger credit risk landscape. One of the problems with sparse data and the short term is that there are several potential repercussions on bank solvency if modelled credit risk estimates are inaccurate. Hence the need for a better understanding of a model's sensitivity to default statistics, structural assumptions and parameter estimates, within the time horizon over which it is going to be used. Another important factor limiting credit risk modelling is that the all-important validation of such models is more difficult than the backtesting of market risk models, where short time horizons do not deform the risk dynamics. While market risk models usually employ a time horizon of a few days, credit models have to rely on a timeframe of one year or more, and a longer holding period presents problems to model-builders and model-testers, as well as to other people responsible for assessing their accuracy.
5.2 Contribution of information technology (IT) to the control of credit exposure

One of the challenges with risk management models is that of underestimation of exposure in economic booms, because of the tendency to implicitly extrapolate present conditions into the future. For instance, quite often the method for measuring counterparty risk relies on equity prices, and it tends to show a lower risk of corporate defaults in booms as equity prices are rising. The reader should appreciate that the art of credit modelling is still in its early stages, yet we have no option but to rely on algorithms, heuristics and technology at large for developing, trading and controlling the exposure taken with financial instruments, from loans to derivative products. The examples we will see in this and the following sections help in documenting that derivatives of both equities and debt are a complex business, involving rapidly changing products and a great variety of trading term structures. There are a great many twists in product design and in marketing approaches. Therefore, rigorous risk management strategies can make or break a firm. IT that supports the design and trading of financial instruments must be avant-garde in its conception, real-time in its operation, extensible,
adaptable, and responsive to rapid changes. One of the major problems associated with the able use of IT in connection with the control of exposure is that technology specialists lack the domain expertise which would have enabled them to:
• make the constructs better focused; and
• react rapidly to changes by revamping current models.
The point many banks are missing is that the existing gap between technology specialists and business users diminishes the benefits high technology can offer in buying or selling contracts of financial assets and liabilities, particularly where the value of a product is derived from the price of the underlying asset. Risk and return have a great deal to do with the contract's ability to mitigate risk. In the background to this statement lies the fact that unless traders and IT specialists work closely together in the development of focused and accurate information technology solutions for business opportunity analysis and risk control, models and other technological solutions will be inadequate at capturing the dynamic, fast-changing nature of financial information.
• The current, legacy, techno-centric assumptions have major shortcomings because of inadequacies in using old technology for modern complex projects.
• Modern off-the-shelf packages can provide some help, at least in getting started, but every institution has its own profile; hence sophisticated eigenmodels are the answer.
One example of off-the-shelf software is RiskCalc by Moody's Investors Service. It is oriented to credit risk control. Its contribution to the perception of counterparty risk lies in the fact that it generates a 1-year and a 5-year Estimated Default Frequency (EDF) for an obligor. To do so, this model, which is specifically designed for private firms:
• ties credit scores directly to default probabilities; and
• helps in determining pricing for underwriting and securitisation reasons.
The need for tools like RiskCalc is evidenced by the fact that while consumer lending has experienced significant transformation, middle market lending is still a largely subjective process, which has been for
some time in need of rationalisation. Subjective judgement in loans is no longer admissible at a time of deregulation, globalisation and securitisation of debt. (See also in Section 5.4 the rules by Banque de France for securitisation of corporates.) The fact that better tools are necessary to gauge credit risk exposure does not escape the attention of clear-eyed bankers. In an effort to know more about the credit risk in their portfolio, several institutions have installed new systems for measuring the exposure associated with loans. The majority chose off-the-shelf software with titles such as CreditMetrics, CreditRisk+, CreditPortfolioView, and Loan Advisor System (LAS).2 Participants in my seminars often ask the question: Is the use of packages an advisable practice? My response invariably is: This is not a query that can receive a unique answer. In principle, for those institutions who know how to use off-the-shelf software and who have done the proper preparation, modern packages are a good help, allowing them to:
• build and maintain databases on their customers (particularly in retail);
• mine them for information about loans, their repayment track and counterparties' credit history; and
• extrapolate credit risk, and estimate other exposures they have assumed.
But as the careful reader will observe, this answer has many ifs. A major part of the problem is that few banks know how to use packages in the best possible way, altering their own procedures if this is necessary instead of massaging the package. Once changes start being made to off-the-shelf software, they never really end. The result is delays, high costs, another average kind of software, and end-user dissatisfaction. There is also, as stated in the Introduction, the fact that contrary to market risk models, credit risk models don't offer many opportunities for backtesting. Yet every system should be subject to post-mortem control. This is necessary to get a better insight, improve its accuracy and gain confidence in the way in which it works. Senior bankers commented in the course of my research that they have found backtesting estimates of unexpected credit losses to be very difficult, and they expect this difficulty to increase in the future. Till now, to my knowledge, no formal backtesting programme for validating estimates of credit risk is operational. Instead, banks use alternative methods, including some adaptation of market-based reality checks such as:
• peer group evaluation;
• rate of return analysis; and
• comparisons of market credit spreads with those implied by the bank's own models.
Contrarians, however, have suggested that reliance on these techniques raises questions regarding the comparability and consistency of credit risk models. To improve upon the current situation, institutions need to ensure proper oversight over the artefacts and their deliverables. They must also appreciate that model control and auditing are fundamental to the development of a reliable internal rating-based system. Taking model risk into account (see Chapters 12–14), we can state that before a portfolio modelling approach could be used in the formal process of setting regulatory capital requirements for credit risk, both bankers and regulators would have to be confident in the available methodology and its algorithms. This confidence should see to it not only that models are being employed to actively manage credit risk but also that they:
• are conceptually sound;
• are empirically validated; and
• produce capital requirements comparable across credit institutions.
This is, of course, asking a great deal, but it is possible. Internal rating systems will not be developed overnight. They will take time, and we should allow for the accumulation of experience. The problem that I see is that the issue of modelling credit risk is pressing, as the curve of credit risk rises because of derivative financial instruments. This is documented in Section 5.3. The challenge then is to move ahead of the credit risk curve rather than stand still or fall behind.
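Returning to the backtesting point raised above: even without a formal programme, a rudimentary reality check can be sketched by comparing, per rating bucket, the default rate the model predicted with the rate actually realised, and using a normal approximation to the binomial to judge whether the gap is noise. All figures below are invented.

```python
import math

def backtest_bucket(n_obligors: int, predicted_pd: float, observed_defaults: int) -> float:
    """z-score of the realised default count against the model's prediction,
    using a normal approximation to the binomial distribution."""
    expected = n_obligors * predicted_pd
    std_dev = math.sqrt(n_obligors * predicted_pd * (1.0 - predicted_pd))
    return (observed_defaults - expected) / std_dev

# Invented one-year results for three internal rating buckets.
buckets = [
    ("grade 3", 1_200, 0.003, 5),    # predicted PD 0.3%, 5 defaults observed
    ("grade 6", 800,   0.020, 25),   # predicted PD 2.0%, 25 defaults observed
    ("grade 8", 300,   0.080, 22),   # predicted PD 8.0%, 22 defaults observed
]

for name, n, pd_estimate, defaults in buckets:
    z = backtest_bucket(n, pd_estimate, defaults)
    flag = "investigate" if abs(z) > 2.0 else "within noise"
    print(f"{name}: z = {z:+.2f} ({flag})")
```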
5.3 Credit risk, rating and exposure: examples with credit derivatives

'The success or failure of credit derivatives is steered by rating', said Arjan P. Verkerk of MeesPierson. And he added: 'We look at counterparties in a most conservative manner.' To my question whether banks are modelling the risk embedded in credit derivatives, Verkerk suggested: 'Not all of them, but the decision depends highly on other issues such as the servicer and the documentation.' I consider credit derivatives as a test bed for technology transfer to credit risk models because they combine many elements of credit risk and
market risk. Several institutions try to carry their experience from market risk modelling into algorithms that reflect counterparty exposure. It is more rational to do so in a domain that has certain similarities to market risk. Technology transfer, however, should be done carefully and under controlled conditions, being fully aware of the pitfalls. There are plenty of them. For instance, IRB systems used by banks tend to indicate a decline in credit risk when current default rates are low, reflecting the short horizons over which risk is often measured. At the same time, external (acquired) credit ratings are often only adjusted after the materialisation of some adverse events, while a better approach is to adjust them as credit risk is building up. With both eigenmodels and bought models, hypotheses made about counterparty risk should reflect the principle that the rise in defaults in a downturn is better thought of as the materialisation of credit risk built up during a boom. While from a practical perspective it is difficult to identify when credit risk actually begins to increase as the market goes north, evidence of:
• rapid credit growth;
• strong gains in asset prices;
• narrow lending spreads; and
• high levels of investment
tend to have a negative aftermath. They are followed by stresses in the financial system, characterised by higher than average levels of counterparty risk, even if current economic conditions are still strong. Failure to appreciate this type of exposure can play an important role in amplifying the upswing of a financial cycle, subsequently leading to a deeper downturn. The examples I have just given are underlying factors in the evaluation of counterparty risk. Therefore, they play a crucial role in the exposure embedded in credit derivatives, whose scrutiny should be steady and consistent. 'There is a finite amount of AAA and AA paper', said a commercial banker in the City. 'When good credit is in short supply, everybody is forced down the quality curve.' The problem is that not everybody models and monitors the risk that they are taking. Part and parcel of this rather relaxed attitude is the fact that banks today make a lot of money out of credit derivatives because the instrument is new and it has found a market. Also, because derivatives
Figure 5.3 The rapid growth in derivatives versus the slow growth in assets, loans, and equity and reserves, 1990 to 2001
have become retail. But does it take a great deal of skill to develop a sophisticated risk control model? The conclusion arrived at after three meetings in London and two in New York which focused on this query has been that high-grade skill is in short supply in the banking industry, because so many institutions want to do so much. 'If you look at the whole food chain of derivatives trades, you are missing some links', said the senior executive of a major British bank. And the missing links are mainly on the risk control side. This has given a lot of worries to the New Economy.3 If the growth in the derivatives portfolio of financial institutions were matched by the growth of assets and the growth of equity and reserves, then there would not be so much to worry about. But, as shown in Figure 5.3, this is not the case. Supervisory authorities have tried, but so far they have not been able to bend the derivatives curve. Yet everybody knows that:
• the assets are far from growing in a way commensurate with that of rising liabilities; and
• the difference between what we have in assets and what we are assuming in liabilities is plain gearing.
The risks being assumed will not disappear because of modelling, even if credit derivatives and other sophisticated instruments, which are based on somebody else's liabilities, demand renewed emphasis on risk management and on internal control. Leveraging leads to a bubble, and this is a major challenge confronting central bankers and regulators. Careful analysis of the evolution taking place in financial instruments leads one to appreciate that in a matter of about five years credit derivatives have established serious requirements of their own regarding the evaluation of counterparty risk. Because credit rating is key to their marketing, for reasons of risk reduction (whether theoretical or real), securitised corporates re-emphasise the importance of:
• collateral;
• guarantees; and
• balance sheet structure.
The contribution made to the control of counterparty risk by all three can be modelled. This ensures that the focus is now both on the classical method of balance sheet analysis and on IRB evaluation of the banking book and trading book. (As we have already seen, IRB is
promoted by the New Capital Adequacy Framework of the Basle Committee.) Cognisant bankers believe that in the coming years sophisticated credit-oriented eigenmodels will constitute the backbone of IRB methods, rather than a replay of some form of VAR (see Chapter 10). Sophisticated models for counterparty risk, however, have still to be developed in the full sense of the term, even if top-tier financial institutions and their rocket scientists are busy working on them.4 By all evidence, there is today a growing gap between those credit institutions and investment banks who are moving ahead with models for an internal rating system, and the large majority who are way behind in studying, let alone implementing, an IRB solution. Many central bankers, as well as Bankers' Associations, are worried that the majority of banks under their authority do not have the expertise to develop and apply sophisticated modelling techniques. They find difficulty in doing so even if they would like to move ahead in risk control. To assist in this effort, the Bank for International Settlements is currently promoting two research projects on IRB. Their aim is to elaborate on domains, types, model sophistication and level of acceptance for IRB solutions. Each of these groups has one of the following two objectives:
1. State of the art in credit risk modelling, in three classes: loans, investments and participation.
2. Charges that should be made per class of credit risk, taking into account general market factors and bank-specific issues.
Progress along these two lines of reference involves an accurate definition of what is involved in each class of credit risk, the key variables to be accounted for, and the levels of accuracy in modelling techniques. In all likelihood, this work will eventually be followed by a standardisation effort. The leading thinking is that once the credit risk in each class is normalised, a framework will be developed and published at the level of the Basle Committee on Banking Supervision. This will most likely help the laggards to move up the IRB ladder; it will also make it easier for supervisors to develop meaningful global credit risk standards, as well as to exercise a better focused control of credit risk in their jurisdiction.
5.4 Rules by Banque de France on securitisation of corporate debt

In the opinion of senior executives of the Commission Bancaire, Banque de France, credit institutions should be interested in developing a rating system
for corporates which is more precise than the current rating scales by independent agencies. To a significant extent, this is what Basle II, the New Capital Adequacy Framework, is doing by promoting an IRB system (at least among top-tier banks) that uses state of the art technology. The leading thinking among central bankers is that this greater detail and precision must correspond to the real credit risk assumed by the institution. Much of the data needed to compute such a safety factor is internal, but the supervisory authorities could help in regard to methodology and technology transfer. For instance, the Banque de France has developed a detailed rating system that is now available to French credit institutions. The new approach is based on three scales working in parallel, with the classification 'A37' as the highest grade. In this 3-position code, the first position denotes yearly turnover: 'A' is the top level on the established scale and stands for a turnover of FFr.5 billion (US$850 million) or more. The second position indicates default history, and it varies between 3, 4, 5 and 6; a grade of '3' says there has been no default. The third position indicates whether there was a payments incident in the past: '7' stands for no delay or rescheduling of interest and principal; '8' is an average quality position; while '9' is the worst grade in terms of payment incidents.
• This three-way scale is designed to provide a factual classification of credit risk.
• A high rating of the A37 type is necessary for repurchase agreements because it conveys a positive message in regard to the counterparty's quality.
The concept is good, and it should be examined within the realm of IRB solutions promoted by Basle II. But I am missing a quality factor that tells whether or not an institution makes profits with derivatives. In the UK with the Statement of Total Recognised Gains and Losses (STRGL), in the US with Financial Accounting Statement 131, as well as in Switzerland and Germany with the prevailing new regulations, it is easy to identify this quality factor if analysts do their homework. The statistics in Figure 5.4 come from the annual statements of a money centre bank. Swiss law requires that recognised but not realised gains from derivatives are reported in the balance sheet as other assets; similarly, recognised but not realised losses from derivatives are reported as other liabilities. The careful reader will observe that in connection with this reporting standard the trend is negative for the bank.
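Returning to the 3-position code itself: it can be decoded mechanically, which is convenient when screening counterparties for repurchase agreements. The sketch below follows the description given above; the label wording is my paraphrase, only the 'A' turnover band is spelled out because the others are not given in the text, and treating 'A37' alone as the top grade is a simplification.

```python
# Decoder for a Banque de France-style 3-position rating code such as 'A37',
# following the description in the text. Only what the text defines is spelled
# out; everything else is reported generically.

PAYMENT_INCIDENTS = {
    "7": "no delay or rescheduling of interest and principal",
    "8": "average quality position",
    "9": "worst grade in terms of payment incidents",
}

def decode(code: str) -> dict:
    turnover, defaults, incidents = code[0], code[1], code[2]
    return {
        "turnover": ("FFr.5 billion or more" if turnover == "A"
                     else f"turnover band '{turnover}'"),
        "default_history": ("no default" if defaults == "3"
                            else f"default history recorded (grade {defaults})"),
        "payment_incidents": PAYMENT_INCIDENTS.get(incidents, "unknown"),
        "top_grade": code == "A37",    # simplification: only the highest grade
    }

print(decode("A37"))
```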
Figure 5.4 Other assets and other liabilities reported to the authorities over a 6-year timeframe by one of the credit institutions
Control over derivatives risk is not a matter of doing no more derivative trades. Derivative financial instruments have a role to play in the New Economy. The challenge is to carry out legitimate trades, not gambling; also to assure that year after year recognised other assets are always in excess of other liabilities. Short of that, the bank is heavily damaging its future. This excess of other assets is not the message conveyed through Figure 5.4. Both other assets and other liabilities increase over time, but other liabilities (therefore losses from derivatives) increase much faster than other assets. This is an early indicator that should definitely be reflected in the expanded rating scheme as a fourth position in the scale, which then becomes a 4-position code with the highest grade A111, where:
A = yearly turnover, as indicated in the solution by Banque de France;
1 = default history, with four options 1, 2, 3, 4 (rather than 3, 4, 5, 6), or a finer grain;
1 = past payment incidents, again with four options 1, 2, 3, 4 (rather than the three denoted by 7, 8, 9);
1 = gains or losses with derivatives, again classified 1, 2, 3, 4, with 1 indicating an excess of other assets; 2, equality of other assets and other liabilities; 3, an excess of other liabilities; and 4, a situation requiring the regulators' intervention.
The concept of risk and return, as well as of return on investment, in connection with derivatives trades should not only be reflected in the grading but also be a clearly stated guideline on exposure for senior managers, traders and rocket scientists working for credit institutions. This is not the case today because people are concerned with only one thing: return. This translates into developing and selling products to counterparties with the main aim of ill-defined commissions, calculated on assumed profits that may well never materialise. There is a basic principle at the junction of trading and risk control. Managers and professionals should not only care how many new contracts are signed and how much is sold in derivatives or any other financial product. Nor should they be interested in an isolated number reflecting potential return. What counts is:
• The profit as a percentage of capital at risk, all the way to maturity.
Only at the maturity of a trade can it be established in a factual and documentary way whether there have been profits or losses. Therefore, commissions should be calculated (and paid) when profits can be exactly measured, not a priori through volatility smiles and other gimmicks as happens today. Furthermore:
• Standards must be put in place to assure that all trades abide by prudential rules and risk-control guidelines.5
It is also wise to avoid establishing standards that are contradictory, for instance zero counterparty risk and high return on investment (ROI). The excuse the (ir)responsible executives of American Express found in July 2001, when they ended up with a cool US$1 billion loss from CMOs and junk bonds, is that they were under pressure from top management to come up with a 20 per cent ROI for the year. Risk-taking has a cost and, as the preceding chapters have advised, there should always be a meaningful evaluation of return on investment, including risk and considering the capital being invested. Banks and investors sometimes fail to appreciate that inflation, too, is a cost. In a recent study in Switzerland in which I was involved, we took as the basis an annual inflation of 3.5 per cent (1951 to 2000). Then we evaluated what was left of the gains of a mutual fund investing in bonds, promoted and managed by a major bank. Because it invested only in G-10 government securities the vehicle was rated AAA by an independent agency, but the real return on investment was less than 0.5 per cent per annum. In cases such as the one I have just mentioned the capital is secure but the return is trivial. In fact, during the last five years the fund's market price has tended to fall steadily. The sales brochure did not say so; in fact, it showed exactly the opposite, with the aid of reinvested coupons and compounding. We have to have well-thought-out standards both for the short term and the longer term, which account for the multiple viewpoints I have outlined. In developing instruments, selling them to counterparties, measuring our performance (in trading, loans, investments) as well as in computing exposure, we must be absolutely unwilling to relax our standards. More errors in finance are the result of people forgetting what they really wanted to achieve than are attributable to any other reason.
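The inflation point can be reduced to one line of arithmetic. The sketch below uses the Fisher relation with illustrative figures: a nominal yield of about 4 per cent against 3.5 per cent average inflation leaves well under 0.5 per cent of real return, in line with the Swiss study just quoted.

```python
def real_return(nominal: float, inflation: float) -> float:
    """Fisher relation: (1 + nominal) / (1 + inflation) - 1."""
    return (1.0 + nominal) / (1.0 + inflation) - 1.0

# Roughly the situation described for the AAA-rated bond fund (illustrative).
nominal_yield = 0.040
average_inflation = 0.035
print(f"{real_return(nominal_yield, average_inflation):.4%}")  # about 0.48 per cent
```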
5.5 Credit derivatives with non-performing loans: Banca di Roma and Thai Farmers' Bank

In June 1999 Banca di Roma became the first European bank on record to securitise and sell off a portion of its non-performing loans to the bond market. Under this Euro 1.18 billion (US$1.2 billion at that time) transaction, the credit institution (which had resulted from a merger of Banco di Roma and the Savings Bank of the Roman Province) reduced the ratio of its non-performing loans from 12 to 8 per cent of its total loans portfolio. Many banking analysts expressed the opinion that the Banca di Roma bond issue was a shrewd plan to reduce the visibility of the bad loans; and some of them suggested that even if it succeeded in doing so it would not actually remove the risks attached to the securitised non-performing loans. The reason behind this point of view has been that securitisation does not actually remove the risk from the bank's balance sheet. In the case of Banca di Roma it left:
• a tranche guaranteed by treasury bills; and
• another tranche backed by the bank itself.
This means that, ultimately, Banca di Roma will still be carrying some of the risks attached to its bad loans. Such a twist essentially makes the securitisation an embellishment exercise that will not necessarily resolve all of the lending problems, but it could make the credit institution appear more favourable to investors and potential partners. Those in favour answered that there were positive aspects to the securitisation of non-performing loans. First of all, this transaction broke new ground in the European capital markets. Secondly, Banca di Roma found buyers for the securitised instrument in spite of the fact that it had one of the highest ratios of non-performing loans in Europe. In this securitisation, the management of the institution exploited some of its advantages:
• the legal system in Italy is in good shape;
• there are institutional investors around with plenty of money;
• credit derivatives are starting to be accepted as a way of diversification; and
• investors are beginning to appreciate that they can make more money by taking credit risk rather than market risk.
The Banca di Roma bond issue required considerable homework. Analytical studies were made for back-up purposes, and the fact that the loans were discounted at 50 per cent certainly helped. Also, Banca di Roma kept the worst loans, which represented the bottom 25 per cent bracket of the securitised instruments and might have discouraged investors. The lead manager of this deal was Paribas, and the bond issue was backed up by collateral from a mixture of defaulted mortgages and defaulted loans to individuals and companies across Italy. The pool included more than 20,000 ordinary claims of unsecured loans, and Banca di Roma set aside roughly double the amount of collateral necessary to service the bond, on the assumption that about 50 per cent of the value of the loans would be recovered. This enabled the issuer to:
• achieve a single-A investment grade, hence credit rating, from S&P and Moody's; and
• assign 86 employees to chase up the repayments on the debt, to keep the money coming in order to service the bonds.
Superficially, it does not seem that the points above are related, but they are. Many of the non-performing loans were on the sick list because of political patronage of companies which did not want to perform on their obligations. Once the loans were securitised and sold to investors outside the reach of Roman politicians, political nepotism lost its grip: international investors are not sensitive to the whims of Roman politicians. Considering all the factors I have outlined, including patronage as a step function, a model can easily be written to map the securitisation of non-performing loans by Banca di Roma. Most of the component parts of this model are deterministic. The one that is stochastic, and might cause the greatest problems, is the assumption that 50 per cent or more of the non-performing loans will be repaid. Historical reference does not support this assumption. There is an alternative to securitisation, and it comes in the form of a special vehicle to take over the non-performing loans. The Thai Farmers' Bank is an example. In early July 1999, the Thai Farmers' Bank, Thailand's third largest commercial institution, sought to raise Bt24 billion (US$650 million) in new capital to help finance the creation of a wholly owned subsidiary. The aim of the subsidiary has been to take control of a substantial portion of the bank's bad debt. The managers of this subsidiary have been GE Capital and Goldman Sachs,6 and the amount of bad loans to be transferred to it may reach Bt80 billion (US$2.1 billion). Thai Farmers' Bank is relying on the fact that
these institutions will be more efficient in managing liabilities and more ruthless in recovering the money than it could ever afford to be. For GE Capital and Goldman Sachs, the carrot has been a share of recoveries above the Bt40 billion transfer value ascribed to the loans. Note that both the Thai bank and Banca di Roma have discharged their non-performing loans at 50 per cent of their face value. The rights issue was needed to provide for the write-offs involved in the transfer. It is not difficult to understand that this novel move for Asia, through a special vehicle buying up non-performing loans, has been keenly watched by other banks in Thailand and in many other less developed countries. Globalisation makes financial news travel faster and wider, and open-eyed banks everywhere are interested in finding out whether the solution flies, because they share the same problems:
• how to gain expertise in trading their non-performing loans;
• how to deal with a capital base that does not support a high degree of write-offs; and
• how to kick-start some sort of securitisation in a market that does not go for bad loans.
In the Thai Farmers' Bank case, the new capital was to be raised via a one-for-one rights issue, with the shares being priced at Bt20 apiece. The new asset management subsidiary purchased Bt80 billion worth of non-performing loans, or approximately one-third of all Thai Farmers' non-performing loans, at a 50 per cent discount. Discounting half the value is no solution for non-performing loans, but in the Thai Farmers' Bank case analysts said the 50 per cent write-off at which these non-performing loans were to be transferred to the new subsidiary was a reasonable indicator of the actual value of non-performing loan assets held both by Thai and by other top commercial banks in Asia. They added, however, that a real benchmark was difficult to determine because, basically, the quality of the loans to be transferred was unknown.
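The overcollateralisation logic common to the two cases can also be put into a couple of lines of arithmetic: if only about half of face value is expected to be recovered, roughly twice the bond notional is needed in loan face value to service it. The figures below are illustrative, not the actual deal terms.

```python
def collateral_needed(bond_notional: float, expected_recovery_rate: float) -> float:
    """Face value of non-performing loans to set aside so that expected
    recoveries cover the bond: collateral * recovery_rate >= bond_notional."""
    return bond_notional / expected_recovery_rate

# Illustrative only: with a 50 per cent recovery assumption, servicing a
# 1.18 billion bond calls for roughly twice that amount in loan face value.
print(collateral_needed(1.18e9, 0.50))   # 2.36e9

# Sensitivity: if recoveries come in at 35 per cent instead, the same pool
# would have been far from sufficient.
print(collateral_needed(1.18e9, 0.35))   # about 3.37e9
```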
5.6
Don’t use market risk models for credit risk
The solutions followed by Banca di Roma and Thai Farmers’ Bank, examined in section 5.5, are different from one another and they should not be handled by the same model. After all, the model, the hypotheses behind it, its key variables, their range of variation, limits, and the algorithms mapping the problem into the computer represent
a real-life problem and, therefore, they should be accurate enough to be useful. Part One has explained the reasons why model-making is a science and it should be accomplished in a rigorous manner. Intelligence gathering, mapping the market into the computer, and the control of risk correlate. For the lender, the trader and the investor, it is vital to gather intelligence on companies and industries. Then, to: • consolidate; • analyse; and • apply that intelligence. As we have seen through practical examples, high technology can be used in instrumental ways for intelligence purposes, for instance, in business opportunity analysis, risk evaluation of global operations, dealer support systems, portfolio management and financial planning. The steady process of assessing loans of various kinds cannot be done through a generalised approach. It has to be factual and specific. The same model cannot assist Banca di Roma and Thai Farmers’ Bank even if both try to get the non-performing loans out of their balance sheet at a 50 per cent discount. The methods they choose are diverse. Successful models have locality and are focused on a given case. In spite of that, many bond dealers and institutional investors are using equity-linked computer models to estimate credit risk. This is not good practice because it violates the cardinal principle that models should be specific to the case under study. Only charlatans pretend that mathematical constructs are ‘good for everything’ – they are actually good for nothing. What I have just mentioned is evidently true of all market risk models used for evaluating credit risk and vice versa. This confusion between objectives and means leads to a perverse effect. Let me take just one example. Many market risk models derive asset value, leverage, and likelihood of default from the market value and volatility of a company’s share price (the sketch after these bullet points illustrates the mechanics). Therefore: • If the price drops precipitously, dealers and investors also quickly mark down the value of the company’s bonds. • The aftermath is damage to the company’s liquidity, because few people want to hold bonds whose prices are falling, while the drop in stock price may have reflected general conditions independent of the company’s health.
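As an illustration of the mechanics just described (and only as an illustration, since the argument here is against relying on such constructs for credit decisions), the following sketch computes a naive Merton-type probability of default from the share price and equity volatility. The simplifying approximations (asset value taken as equity plus debt, asset volatility scaled by the equity share) and all the numbers are assumptions of mine, not a description of any particular vendor model.

from math import log, sqrt
from statistics import NormalDist

def equity_implied_pd(equity_value, equity_vol, debt_face,
                      horizon_years=1.0, drift=0.0):
    """Naive Merton-style probability of default implied by the equity market.
    Rough approximations, for illustration only: asset value = equity + debt,
    asset volatility = equity volatility scaled by the equity share of assets."""
    assets = equity_value + debt_face
    asset_vol = equity_vol * equity_value / assets
    distance = (log(assets / debt_face)
                + (drift - 0.5 * asset_vol ** 2) * horizon_years) / (asset_vol * sqrt(horizon_years))
    return NormalDist().cdf(-distance)     # distance-to-default mapped to a PD

# A 40% fall in the share price, with nothing else changed, more than doubles
# the implied PD - which is how falling equity prices get marked straight
# into bond valuations.
print(f"PD before the fall: {equity_implied_pd(60.0, 0.35, 100.0):.3%}")
print(f"PD after the fall:  {equity_implied_pd(36.0, 0.35, 100.0):.3%}")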
Neither is the generalised use of ‘famous’ models a good way to establish creditworthiness – or lack of it. Value at risk (VAR, see Chapter 10) can only address between one-third and two-thirds of market risk problems, and it gives only approximate results in connection with some of the problems it addresses – the fact remains that this is in no way a credit risk model. Correctly, for a number of reasons, supervisors do not recognise VAR as a measure of credit risk. Careful practitioners in the financial industry appreciate that if they have apples and melons in a basket of fruits they must count them separately. Those who do not know that don’t know how to count. Because financial instruments are more complex than apples and melons they require both a sound method of observation and metrics. This poses challenges. First and foremost, credit models require data on: • defaults; and • recovery rates. This information is much less complete than marking-to-market information. Typically, credit risk data internal to commercial banks is not collected in a useful format, while external data, from ratings to bond and securitised instrument prices, tend to be dominated by US experience, which may not be valid elsewhere. As a result of the action by the SEC and the Federal Reserve, it is easier to collect credit information in the US than in any other country. Another major reason for the complexity of credit models is that information on the influence of factors such as the economic cycle, geographic location, industry sector, loan maturity, default and recovery rates is rather poor. The incompleteness of data also affects the estimate of credit correlation, which is often based on proxies – a practice that introduces more approximations. Neither are credit models at a ‘final’ stage of development. In my book on credit derivatives I have presented a number of improvements on Actuarial Credit Risk Analysis (ACRA), an existing model, which help to generalise its usage even with very small samples and therefore limited data.7 These improvements rest on two pillars: • extending the timeframe; and • using the chi-square distribution. A third reason for the greater challenges encountered with credit risk models as contrasted to market risk models is that in the general case
credit returns tend to be skewed and fat-tailed.8 Hence, Monte Carlo simulation may be a more appropriate tool for credit risk (a small simulation sketch is given at the end of this section), but its computational burden poses a problem with large portfolios – particularly so with low technology banks, which are the most common in the constellation of credit institutions. The fourth reason is that appropriate holding periods connected to credit risk inventories differ widely, ranging from a comparatively short timeframe for marketable securities to much longer ones for non-marketable loans held to maturity. This complicates the task of parameter setting, and is only slightly eased with credit derivatives. Still another reason accounting for the fact that supervisors are most reserved when considering the use of VAR models for credit risk purposes is that the profile of an institution’s counterparties is very important in determining the appropriateness of marking-to-model credit risk. ‘Averages’ resemble the story of a sausage manufacturer who mixed the meat in the proportion of one horse to one rabbit and marketed his produce as ‘50–50’ horse and rabbit. Estimations and correlations cannot be meaningfully carried out by basing them on averages: • CreditVAR and other credit models assume that companies can be satisfactorily classified by industry type – this is a weak hypothesis. • Because this assumption is not sound, estimates are most often based on averages that do not permit any accuracy in computing counterparty risk. A sixth important reason is that credit risk is particularly concentrated with big counterparties, and most often institutions have a relatively limited number of big counterparties. This leads to the problem of credit risk diversification being a wish rather than a fact. Practically all banks say that they diversify their risks, which is not true. In my experience, there are trends in credit exposure: • Sometimes concentration of credit risk is unavoidable. • In other cases, the majority, banks pay only lip service to diversification. Even focused eigenmodels can be fooled by management in this connection. While they would favour credit institutions with a diversity of credit exposures, therefore less correlated default probabilities, additional safeguards are necessary such as dynamically adjustable limits – and
rigorous exposure rules, in terms of capital requirements, which are themselves anathema to banks. These six major reasons make the contribution of CreditVAR and similar models most questionable. But they should not discourage banks from working on improving credit risk modelling. With more focused algorithmic and heuristic solutions, the quantification of credit risk has a role, particularly in testing regulatory capital requirements. How much of a role depends on the ingenuity of rocket scientists in solving the aforementioned constraints and of senior management in establishing, observing and controlling the rules.
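The following small simulation, referred to above, illustrates why credit losses come out skewed and fat-tailed and why a Monte Carlo approach is often preferred to a normal approximation. It uses a toy one-factor default model; the portfolio size, probability of default, loss given default and factor weight are all hypothetical choices of mine.

import random
import statistics

def simulate_portfolio_losses(n_obligors=200, pd=0.02, lgd=0.45, exposure=1.0,
                              factor_weight=0.3, trials=5_000, seed=7):
    """Toy one-factor credit portfolio: each obligor defaults when a mix of a
    common factor and an idiosyncratic shock falls below a threshold set by
    the probability of default.  All parameters are illustrative."""
    random.seed(seed)
    threshold = statistics.NormalDist().inv_cdf(pd)
    w = factor_weight
    losses = []
    for _ in range(trials):
        z = random.gauss(0.0, 1.0)                       # systemic factor
        defaults = sum(
            1 for _ in range(n_obligors)
            if w * z + (1 - w * w) ** 0.5 * random.gauss(0.0, 1.0) < threshold
        )
        losses.append(defaults * exposure * lgd)
    losses.sort()
    mean = statistics.fmean(losses)
    sd = statistics.pstdev(losses)
    skew = sum((x - mean) ** 3 for x in losses) / (trials * sd ** 3)
    q99 = losses[int(0.99 * trials)]
    return mean, sd, skew, q99

mean, sd, skew, q99 = simulate_portfolio_losses()
print(f"mean loss {mean:.2f}, std dev {sd:.2f}, skewness {skew:.2f}, 99th percentile {q99:.2f}")

The positive skewness and the distance between the mean and the 99th percentile are the features a normal approximation would understate.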
6 Models for Actuarial Science and the Cost of Money
6.1
Introduction
Both in banking and in insurance the time value of money is an important concept as well as a basic element for decisions concerning discounted cash flows, investments, debt, payments, and other issues. The calculation of actuarial present value is a practical example of the importance of the time value of money and its calculation through the use of mathematical models. Three notions underpin the time value of money: • Money today is worth more than the same amount some time hence. • The difference is made up by the rent of money, known as interest. • The theory of interest is a product of time and place, and it is steadily evolving. Interest is as old as money and banking, but this is not the case for the theory underpinning its computation. We can find roots of the current theory of interest in the 19th century. The two pillars on which it rests are the productivity of capital and the notion of time preference. Generally speaking, the calculation of the present value of money, whether in terms of discounted cash flow or otherwise, is a typical model for decision-making under uncertainty, which involves: • the enumeration of alternatives, such as hypotheses concerning future interest rate(s); and • the evaluation of likelihood, or expectation, of each rate and its volatility.
Once we are clear about future interest rate(s), the evaluation of an outcome in discounted cash flow is carried out by means of a fairly simple algorithm (which we will see later in this chapter). Crucial are the interest rate hypotheses we make, as well as the estimation of counterparty risk (if credit risk is present). A key feature of estimating the present value of money revolves around this ability to quantify the relative importance of key factors, their interdependence and their value. Chances are that some or all of the key factors entering into our calculation concern the productivity of money. The fact that capital used in business and industry should be productive underpins the whole theory of capitalism. Capital can be (or, at least, should be) employed to earn more capital at a rate higher than the cost of borrowing. That is why lending and borrowing make sense if and when: • Lenders find their capital productive by the fact that it has grown in interest. • This return is used to appreciate the time value of money, which takes on a meaning broader than interest alone. Time preference associated with the value of money finds its roots in the fact that people prefer present money to future money; also, present goods to future goods. Even if they have no immediate need for money for reasons of investment or consumption, people and companies appreciate that money is durable, but because of inflation and other reasons it loses part of its value over time. This is compensated by future earnings and, therefore, by the interest. Precisely because of inflation and risks, like that of the counterparty’s willingness or ability to perform, we must distinguish between gross interest and net interest. Books, and many practitioners, usually suggest that gross interest equals net interest plus inflation. This is wrong. Gross interest equals net interest plus inflation plus risk. Risk has a cost that should always be taken into account.
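A minimal numeric illustration of that last point, with purely hypothetical figures:

# Hypothetical figures: the lender wants 3% net, expects 2% inflation and
# prices 1.5% for counterparty risk; the risk component cannot be left out.
net_interest, inflation, risk_premium = 0.03, 0.02, 0.015
gross_interest = net_interest + inflation + risk_premium
print(f"Gross interest to be charged: {gross_interest:.1%}")   # 6.5%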
6.2
Basic principles underpinning actuarial science
The Institute of Actuaries, the first of its kind, was established in London in 1848. In 1856 the Faculty of Actuaries was born in Edinburgh. In the United States, the first professional society of actuaries, the Actuarial Society of America, was formed in 1889. It was followed in 1909 by the American Institute of Actuaries, and in 1914 by the Casualty Actuarial Society (CAS).
The fact that actuarial science is most fundamental to insurance saw to it that in the early days most actuaries were employees of insurance companies. The actuarial profession has been considered for a long time to be an integral part of the insurance industry. The few consulting actuaries providing their services to other companies fulfilled tasks which, more or less, had also to do with insurance. Actuarial science and mathematical analysis correlate: • Since early in this profession, actuaries used mathematics and statistics, particularly probability theory. • Originally, however, application concepts were rather crudely expressed; only with time did they develop into disciplined mathematical tools. Actuarial tables preceded the establishment of actuarial societies by about 150 years. Life and pension actuaries, for example, have been using the tables of Edmond Halley, a mathematician, who in 1693 published what became known as the Breslau Table. Actuaries have also employed quite extensively probabilistic tables, based on the work of Bernoulli, Descartes, Gauss, Laplace and Leibniz. In the 20th century they have been using distributions like Poisson and Weibull.1 The work of actuaries is both quantitative and qualitative. Among the foundations of their work have been the meaning of the value of money, cash flow and risk. Risk in all its aspects is crucial to actuarial studies, with particular importance given to the notion of uncertainty, and the concept of injury or loss. Through the use of stochastic processes, and their models, actuaries express the risk of loss in monetary terms. • The sense of loss in actuarial studies is closely connected to value and wealth. • A property is judged by its ability to produce desired goods and a cash flow. This is precisely the notion on which the masters of investing have capitalised to develop their profitability models. An example is Warren Buffett. Any party taking risks is exposed to economic loss. This is as true of insurance companies as it is of investors – and generally of all players in the financial markets who must understand that volatility means not only upside but also downside. The downside may be due to credit risk, market risk, legal risk or other exposure. A quantitative expression of credit risk is the probability of default, often expressed as expected default frequency (see also in Chapter 4 the 10-year transition matrix).
Figure 6.1  Probability distribution of default rates in a relatively normal business environment (x-axis: default rate, from 0% towards 100%; y-axis: probability of default; the mode lies below the mean)
The probability distribution of default rates in a relatively normal business environment is shown in Figure 6.1: • Credit institutions are required to estimate probability of default for their counterparty exposures. • With the internal rating-based solution, the minimum requirements for derivation of such estimates must be associated with each internal rating grade.2 There are different reasons for uncertainty in the insurance industry. Risks originate from death, theft, embezzlement and accidents, as well as adverse court action. These risks impair established wealth and diminish or altogether interrupt the cash flow. The impairment results in economic loss. A similar concept prevails in all other industrial sectors. A major difference between insurance business and investing in different instruments available in the markets is that the former is a financial security system, though not all financial security solutions are insurance and not all insurance companies restrict themselves to the aforementioned norm. To the contrary, investing in the financial markets is intended to: • maximise gains in times of prosperity, or • minimise losses in periods of distress. Because early actuarial work was closely linked to insurance, classically the realm of the actuary has been the concept of the financial security system. This is in the process of changing as actuarial skills spread into other branches of finance, with cross-disciplinary solutions providing for fertilisation of skills. An example is the use of actuarial approaches in estimating expected, unexpected and super-catastrophe loan losses.3 There is a similitude in background because the insurance company which issues a policy against a premium: • makes reserves for risks; • covers its expenses; and • earns a profit beyond this premium. It earns a profit because it invests the money that it receives. But, like the price of any commodity, the price of money varies with supply and demand.
Figure 6.2  Yield curves for interest rate swaps in US$ and Euro (per cent interest rate against maturities from 1 month to 30 years; benchmark dates March 2000, September 2000 and March 2001)
The computation of the price of money is part of the work of many professionals: traders, investors and actuaries included. As every banker, treasurer and investor knows, there is no single rate of interest. At any time, in any place, a yield curve speaks volumes about interest rate volatility. Figure 6.2 shows the change in yield curves for interest rate swaps in US dollars and euros. The reference is three benchmark months: March 2000, September 2000 and March 2001. (For the computation of interest rates see section 6.4.) Interest rates reflect, among other things:
• supply and demand for money;
• the credit rating of the borrower;
• legal restrictions and customs;
• the length of time for which money is lent; and
• dominant market factors.
Interest rate predictions are not easy, nor are they secure. Such predictions are not the actuaries’ domain in the strictest sense of the term. But they are one of the add-on factors in all studies that address discounted cash flows and the value of money. Therefore, they interest investors, actuaries, central bankers and other people. As the examples we have seen in this section document, while the realm of actuarial activity has expanded, the basic nature of the actuarial work has not changed. A fact insurance, banking and so many other professions have in common is the impossibility of certainty. Hence the interest in studying the behaviour of key factors in a given professional environment through stochastic processes. Whether discrete or continuous, oriented towards risk factors, interest rates or other issues, actuarial studies include random variables that tend to dominate the results. Examples are the occurrence of chain events, periods of instability in the markets, the time between the occurrence of a transaction and its settlement, the remaining length of human life, or the time span of a disability. Typically, random variables in the insurance business are: the number of claims, claim amount, total claims arising from a pool of policies within a given time period, and so on. An example where actuarial tables can be used to advantage in finance is discounted cash flow (intrinsic value), where the unknown may be future interest rates, future currency exchange rates, likelihood of default, or some other critical factor.
6.3
The stochastic nature of actuarial models
In 1953, one of my professors at UCLA taught his students that in past times actuaries had used deterministic models in their treatment of the time value of money. But by the 1930s this started to change, and in the 1950s the change towards the use of probabilistic elements was in full swing. By the 1970s, exchange rate and interest rate volatility, among other reasons, had pushed towards stochastic approaches, which are more representative of the nature of risk: • Deterministic models usually consider expected values, with the focal point being the mean of a normal distribution. • With stochastic models we estimate the most likely shape of the distribution, including higher order moments, and compute confidence intervals. For instance, we may choose the 99 per cent level of significance. This has by now become a common practice with value at risk (see Chapter 10). A probability distribution may be bell-shaped, like the example in Figure 6.3. In this case, the distribution is defined through its mean, x, and its standard deviation, s, which is the square root of its variance. The mean is the first moment of a normal distribution, and the variance is the second moment. In a normal distribution, the mean, mode, median, and mid-range coincide. This is not true of a skew distribution. As we have seen in Figure 6.1, for example, the mode and mean are different. In the latter case, skewness and kurtosis, respectively known as the third and fourth moments, must be computed. The fat tails of a leptokyrtotic distribution (Hurst coefficient) shown in Figure 6.3 can act as a predictor of the likelihood of some future events. Other things equal, it is easier to handle deterministic problems because they are generally well structured and can be solved through algorithms. Stochastic problems involve considerable uncertainty and require dealing with probabilistic outcomes, including conditional probabilities (more on this later). Often, these are more complex problems which may involve many interacting variables that: • are interdependent and change over time; • or are combinatorial in nature. But stochastic type problems have many instances where the outcome depends on the acceptability of the solution, not its precision or optimality.
Figure 6.3  Bell-shaped normal distribution and leptokyrtotic distribution (the normal curve is marked at s, 2s and 3s from the mean x, where mean, mode, median and mid-range coincide; the leptokyrtotic curve shows fat tails, Hurst coefficient)
In these cases, reasoning may be the critical determinant. There are also organisational-type variables that require an understanding of behaviour, interpersonal interaction, conflict resolution, negotiation and leadership. An important element in a stochastic study is the time horizon we are considering. No two investors have the same time horizon. The time horizon of speculators is very short; by contrast, central bankers have a long time horizon. The example of the yield curves we have seen in Figure 6.2 has a time horizon of 30 years. This example was purposely chosen because: • actuaries tend to have a long time horizon; and • the longer the time horizon, the greater the impact of even minor factors on the time value of money. Behind this statement are some basic issues characterising insurance and other financial business. The typical shorter-term insurance contract is often renewed automatically, becoming, de facto, a mid- to long-term proposition. Contrasting the span of time inherent in a life insurance policy or an employee retirement plan with the shorter time period of commercial banking (including consumer lending), we can appreciate
the actuary’s emphasis on the time value of money. Also, the need to formulate hypotheses on: • what affects this value; and • by how much the time value of money is affected. The type of instruments that we use also influences the nature of our models, this statement being valid both for actuarial models and for investors’ time horizons. For instance, with derivative financial instruments the timeframes in commercial banking have been lengthened quite significantly. Thirty-year interest rate swaps, for example, have become fairly popular. Hence, in banking too, actuarial studies with a longer time horizon are quite pertinent. Attention should be paid to the fact that: • Actuaries make no claim as to any special ability to predict interest rates. • What they do is to concentrate on compound interest, applying their mathematics to practical problems. We will talk more about present value in section 6.4; prior to this, however, let me clarify the purpose of a hypothesis in actuarial work or any other branch of science. A hypothesis is a tentative statement about background reasons, or not-so-well-known events, that affects the credibility of whoever makes it. To this, every actuary should be sensitive. Two of the key aspects of credibility theory associated with insurance claims, for example, are: 1. how to make the best interpretation of claim experience, when a section of a population exhibits a different pattern than the whole; and 2. to what extent a current event has been influenced by the occurrence of another event which preceded it, and may have a cause-and-effect relationship. The first point above has to do with sampling procedures, and therefore with both the concept and the contents of a sample. The answer to this point is the statistical test known as the Null Hypothesis, H0, which states that there is no difference between two samples. The alternative hypothesis, H1, says that there is a difference – hence the two samples don’t come from the same population. The appropriate statistical test accepts either H0 or H1.
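A minimal sketch of such a test is given below, using a two-proportion z-test as one common way of accepting or rejecting H0; the claim counts and significance level are hypothetical.

from math import sqrt
from statistics import NormalDist

def claim_frequency_test(claims_a, n_a, claims_b, n_b, alpha=0.05):
    """Two-proportion z-test.  H0: the two groups show the same claim
    frequency (one population); H1: they do not."""
    p_a, p_b = claims_a / n_a, claims_b / n_b
    pooled = (claims_a + claims_b) / (n_a + n_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / std_err
    p_value = 2 * NormalDist().cdf(-abs(z))
    return "reject H0 - different populations" if p_value < alpha else "accept H0"

# Hypothetical experience: 180 claims out of 2,000 policies in a sub-group,
# against 240 claims out of 2,000 in the rest of the portfolio.
print(claim_frequency_test(180, 2000, 240, 2000))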
The answer to the challenge posed by the second point is provided through the use of conditional probabilities, known as Bayesian theory. What’s the likelihood of event B given that event A has taken place? Another powerful tool which can be used in this connection is possibility theory: • Probability theory is based on two outcomes: 0 or 1, black or white. Something that either happens or it does not. • Possibility theory accepts not only black or white but also tonalities of grey: ‘may be’, ‘more likely’, ‘not as probable’. Fuzzy engineering is a practical implementation of possibility theory, and it has recently been used by actuaries as a more sophisticated approach to the analysis of claims. Fuzzy engineering is in fact a misnomer. As a mathematical tool, possibility theory aims to defuzzify, turning a subjective qualification process into a relatively objective quantification.4 We will not be concerned with possibility theory in this text.
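The Bayesian side of the argument can be illustrated in a few lines of code. The prior, the conditional probabilities and the fraud-flag setting are all invented for the purpose of the example.

def bayes_posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """P(A | B) by Bayes' theorem: the likelihood of A given that B occurred."""
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1.0 - prior_a)
    return p_b_given_a * prior_a / p_b

# Hypothetical figures: 4% of claims are fraudulent (event A); a warning flag
# (event B) appears on 60% of fraudulent claims and on 5% of honest ones.
print(f"P(fraud | flag) = {bayes_posterior(0.04, 0.60, 0.05):.1%}")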
6.4
Interest rates, present value and discounting
In practical terms, an interest rate is paid over a given interest interval, for instance a year, in proportion to the amount of capital that has been invested at the start of the interval. The computation may be expressed in time units that are a fraction or a multiple of the nominal interval. The general algorithm expressing the interest rate i is:

i = (interest paid during a time interval) / (principal invested from the beginning of that interval)

In many investments we wish to know how much we should invest at a given time to provide a given amount at a specified later date. This involves the notion of present value, or discounted value. It is discounted by the interest that will have to be paid, at a given rate, over the specified interval. Actuaries make wide use of present value in which future money flows are discounted. This means they are valued in a current time frame by taking into explicit account the time value of money available in future years. Present value calculations can also involve discounts for other factors, but invariably the time value of money is present. Say that the interest rate per time period, for instance a year, is i. The present value of a sum of money B due at the end of a term of n interest
periods is the principal A. When invested at the beginning of the term at the effective rate i this principal will accumulate to B at the end of n periods. The algorithm for discounting is:

A = B(1 + i)^–n    (1)
The term present value, though commonly used, is not quite exact. Rather, it should be understood to signify the discounted value of an obligation or investment at a time prior to its due date – whether this actually is the present moment or some other chosen time level. Usually when we speak of computing the interest rate we imply compound interest, which at the end of prescribed time intervals – or interest periods – is added to the principal. This process of compounding amounts to a conversion of interest into principal, meaning that the interest, too, earns interest. If a capital A is invested over n interest periods at i per period, where n is a positive integer, and earned interest is converted into principal, then the accumulated value B (or final amount) will be:

B = A(1 + i)^n    (2)
Notice the similarity that exists between equation (1), which expresses the discounted value of B, and equation (2), which gives the compound value of A. These algorithms are simple and straightforward. They are widely used, as many investors have learned to differentiate between discounted and not-discounted cash flows; gross interest and net interest; interest before tax and after tax; nominal, effective, and real rates of interest; as well as rates of return. A serious study on present value will account for yield curves, and express the relationships between interest rates for different maturity periods. Available algorithms recognise that any specific interest rate has a basic component for time preference. There are also additional components to which reference was made in the introduction: for instance, the expectation of inflation and the possibility of default. A vital concept in a discussion on discounting and compounding is the term structure. This means the total interval extending from the beginning of the first payment period to the end of the last payment period, as in the case of annuities. In connection with bonds, the term structure is the length of time from date of issue to maturity – usually, an integral number of years.
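For readers who want to experiment with equations (1) and (2), a small sketch follows; the 5 per cent rate and ten-year term are arbitrary choices.

def present_value(amount_due, rate, periods):
    """Equation (1): discounted value A of an amount B due n periods hence."""
    return amount_due * (1.0 + rate) ** (-periods)

def accumulated_value(principal, rate, periods):
    """Equation (2): compound amount B of a principal A after n periods."""
    return principal * (1.0 + rate) ** periods

pv = present_value(1000.0, 0.05, 10)          # illustrative: 5% over 10 years
print(f"Present value of 1,000 due in 10 years: {pv:.2f}")
print(f"Compounding that amount back up:        {accumulated_value(pv, 0.05, 10):.2f}")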
A different sense of the word ‘term structure’ applies in connection with insurance. For instance, under a term life insurance policy the death benefit is payable only if the insured dies during a specified number of years. This acts like an option. The policy expires without value if the insured survives beyond that specified period. The globalisation of the insurance markets and steady innovation will bring many more examples along this frame of reference. In all likelihood, it will also lead to a better distinction regarding the term structure by differentiating between short term and long term. One of the problems encountered in the comprehension of the world in which we live, and in structuring the transactions in which we are involved, is that classically, for the vast majority of people, thinking is short term. Very few people have the ability to get out of these boundaries and take a longer term perspective. Usually, these are the philosophers and scientists (see Chapter 1) who have the imagination and courage to get off the beaten path and to defy superstition and tradition. Finance and economics have followed the tradition of the physical sciences, which have taken the properties of inanimate materials, and in cases also of animate entities, as being independent of time and space – therefore of the time and location at which an observation is made. Only recently is this changing, with scientific experiments putting this postulate of time and space under question: • if the results of experimental science are linked to the time of observation; • then the studies to which we must pay attention are time, space and their aftermath. The fact that discounted cash flow pays attention to the time structure goes contrary to classical economic thinking, which from the mid-19th to the mid-20th century ignored evolution over time by refusing to place today and tomorrow under their proper perspective. This way, it became removed from technological and financial developments, and all they mean in terms of productivity, business opportunity and standard of living. The careful reader might recall that technological progress, of which nearly everybody today is talking, only entered economics and financial analysis during the 1960s. Even today not all economists think of the progress of technology and its aftermath as a very significant
factor, which over the longer run might turn current financial data on its head. This poses two vital challenges: • A problem of synthesis that links the past to the present and the future. The present is much more meaningful when we know where we want to go, and how we can reach ‘there’ from ‘here’. Hence the interest in computing present value as distinct from future value. • A challenge of new knowledge, which we have not yet absorbed but which is vital in the ‘here’ to ‘there’ bridge. This bridge is necessary because spontaneous thinking has nothing to do with the discovery of scientific truth or of financial reality. The bottom line is that the ways toward physical truth and financial reality tend to be parallel and indistinguishable, at least in method. It is the method and its tools that make them transparent or, alternatively, impossible to perceive in meaningful financial terms.
6.5
Modelling a cash flow system
Many, though not all, of the financial systems can be modelled as if they essentially consist of two cash flows. The one is the inflow of money, or income. The other is the outflow, or disbursement. Time-related considerations weigh strongly in cash flow studies. A cash inflow to a financial security is a time-related complex of payments. Similarly, every disbursement payment has time-related elements and constraints. The basic components of both cash flows are: an amount At at time t; the time t at which payment is made; and the probability of payment Pt. The amount At can be any fixed amount, or the expected value of a given variable. The probability Pt can have the value 0 or 1, practically implying certainty as to whether the payment will be made. Alternatively, Pt may lie somewhere between 0 and 1, implying uncertainty. This uncertainty underpins the possibilistic, or fuzzy engineering, approach to which reference has been made in section 6.3. The now-classical cash flow models have been popular for many years. They date back to the 1950s with linear programming, developed to assist in controlling and monitoring the daily money transactions involving the bank’s own cash account, its account at the federal reserve, disburse-
ments due to commitments, and incoming funds from correspondent banks and clients. Early cash flow models have used short-term forecasts of sources of funds, fund requirements, and of interest rates, to develop a cash flow projection. The common flaw of such models is that they take the longer term as equal to the sum total of shorter terms. This is not true. Chapter 1 has explained that the longer term has a causa sua, as the philosopher Spinoza used to say. It is not the simple summation of short terms, as many things happen in the long run that are alien to short term elements. What the previous paragraph explained is true of most of the various cash balances we target in order to establish and optimise schedules of cash inflow and disbursements: maintain required reserve balances and plan for purchase and sale of funds. For each of these activities, modelling can play a vital role. Prior to the use of models, and of interactive computational finance, planning was largely a matter of guesswork by treasurers and bankers. Present-day cash flow models can be simple, but not that accurate, or more accurate but complex. A complex model is simplified by capitalising on the fact that every income payment has the same three elements: • amount; • time; and • probability. The time element is very important because the concept of the time value of money underpins not only actuarial science but also many other areas of the financial world. Discounted cash flows lead to the calculation of actuarial present values, along the lines discussed in section 6.4. Present value allows us to make judgements as to actuarial equivalence, and other matters. The actuarial present value of future income payments is:

PVI = Σ (1 + i)^–t Pt At    (3)
where PVI stands for the present value of future income (cash flow). This summation is over all positive values of t for which the financial product exists. Pt is the probability that a payment will be made at time t, and At the expected amount of such payment.
Quite similarly, the actuarial present value of the entirety of potential future disbursements, summed over all positive values of t, can be written as:

PV0 = Σ (1 + i)^–t Pt At    (4)
where PV0 is the present value of the future outflows or disbursements. Notice that both algorithms, PVI and PV0, have the same core subsystem reflecting inflows and outflows of payments potentially available t years hence:

(1 + i)^–t Pt At

where (1 + i)^–t is the discount for the time value of money at an assumed rate i. The probabilities that the payments will be made, Pt, as well as the expected amounts, At, are taken into account. Equations (3) and (4) represent the so-called generalised individual model, which makes feasible the comparison of the actuarial present value of all future inflows with the actuarial present value of all future outflows. A focal point of the generalised individual model is the difference between PVI and PV0:

∆PV = PV0 − PVI

∆PV can be either positive or negative, and it changes with time. Therefore, it must be viewed as a function of the time tp at which it is computed, measured from the starting time t0 of the arrangement and denoted as ∆(tp − t0). The difference at present time, ∆(tp − t0), denotes what actuaries call the reserve at time tp. This reserve is the excess of the actuarial present value of future disbursements over the actuarial present value of future income:

∆(tp − t0) = PV0(tp) − PVI(tp)

This generalised individual model is employed in two steps. In the first, ∆(tp − t0) is measured from the time when an individual arrangement begins – set at time 0. This indicates an initial balance between the actuarial present values of disbursement and income flows. From this
algorithm, the values of At are computed, where At stands for the premiums being charged – therefore, the inflow. In a second step, ∆(tp − t0) defines the value of the reserve at any duration (tp − t0) of a given policy. This algorithm can be specialised to represent cash flows in most financial systems. Let’s recapitulate the sense of the algorithms in equations (1) to (4), and with this emphasise their usability. In practice, expected future cash flows are often discounted to their present value. Critical to this process is the interest rate used for discounting, because it determines the economic meaning of the discounted cash flow: • If expected future cash flows are discounted at a market rate appropriate for the risk involved, then the result is essentially market value. • If future cash flows are discounted at the rate stated in the contract, or not discounted, the result is stated transaction value. If expected future cash flows are discounted at an interest rate that equates the future cash flows to the amount received or paid in exchange, then the result reflects that particular measure. If a rate is used that ignores risks inherent in the instrument, then the outcome does not account for exposure. The advantage of working with expected future cash flows is that these are the source of the value of receivables and payables, whether conditional or unconditional. However, estimation may be difficult for conditional instruments and even for unconditional instruments for which credit risk is large. Discounting procedures that monetise credit risk and other exposures are still under development. Hopefully, progress along internal rating-based solutions will significantly improve upon the current state of the art.
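A compact sketch of the generalised individual model of equations (3) and (4) is given below. The policy cash flows, persistence probabilities and interest rate are hypothetical; the point is only to show how PVI, PV0 and the reserve are assembled.

def actuarial_pv(cash_flows, rate):
    """Equations (3) and (4): sum of (1 + i)^-t * Pt * At over all t.
    cash_flows is a list of (t, probability_of_payment, expected_amount)."""
    return sum((1.0 + rate) ** (-t) * p_t * a_t for t, p_t, a_t in cash_flows)

# Hypothetical policy: premiums of 100 expected at the end of years 1-5 with a
# persistence probability falling by 2% a year; one expected disbursement of
# 600 with probability 0.9 at year 5; assumed rate i = 4%.
income = [(t, 0.98 ** t, 100.0) for t in range(1, 6)]
disbursements = [(5, 0.90, 600.0)]

pv_income = actuarial_pv(income, 0.04)
pv_outgo = actuarial_pv(disbursements, 0.04)
reserve = pv_outgo - pv_income       # the difference delta-PV (PV0 - PVI) of the text
print(f"PVI = {pv_income:.2f}, PV0 = {pv_outgo:.2f}, reserve = {reserve:.2f}")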
6.6
Actuarial reserves and collective models
A special case concerning the implementation of the generalised model we examined in section 6.5 is the calculation of reserves. This is also the goal of IRB solutions. Reserves are a fundamental issue in actuarial science, and they have both an asset and a liability interpretation: • the system’s liability is the individual’s asset; • but the individual may have no right to convert this asset to cash.
Table 6.1  The top eight telecoms debt defaults in the first half of 2001

Company name                   Country   First rating   Debt (US$m)
PSINet Inc.                    USA       B–             2733.9
Winstar Communications Inc.    USA       B–             2091.4
Viatel Inc.                    USA       B–             1833.2
Call-Net Enterprises Inc.      Canada    B+             1781.6
360networks Inc.               Canada    B+             1675.0
Globalstar, L.P.               USA       B+             1450.0
RSL Communications Ltd.        USA       B–             1415.6
360USA                         Canada    B+             1200.0
We live in a liabilities-based economy.5 A basic but not always appreciated issue is that a liabilities economy can work only as long as the system holds together. A liabilities economy grows faster, much faster than an assets economy – but if at some corner the financial fabric breaks, then the liability-based assets of other institutions are affected and this can create an economic tsunami, to which the world came close in 2001 because of the meltdown of the telecommunications industry. During the first six months of 2001, telecoms debt defaults reached the stars. A total of 100 rated or formerly rated companies defaulted on US$57.9 billion worth of debt in North America and Europe, and investors paid the price for gambling on their junk bonds. In this torrent of red ink, some US$18.4 billion, or about 32 per cent, came from telecoms. Table 6.1 shows the losses of over US$1 billion, all of them in the US and Canada. Another loss of US$961 million, near the US$1 billion mark, came from Global Telesystems Europe, a company of the Netherlands. The plight of these telecommunications firms has been exacerbated by the deterioration of general economic conditions, which cut off traditional funding sources, but the most fundamental reason for the failures was mismanagement, which reached unprecedented proportions. The example of the failure of telecoms is very instructive, because these have been leveraged companies practically without reserves. The model that we make for the survival of our company must see to it that a large proportion of the reserves is in real assets. Their computation should follow established economic theory, which states that reserves are the measure of assets expected to have arisen from past individual arrangements and their management. The solvency of an entity or of a system depends on:
• the adequacy of the assets to face outstanding and clearly stated liabilities; and • the correctness of cash flows, profit margins, premiums, or other prices that it charges. The simple model that we looked at in section 6.5 is not enough to respond to these requirements. Therefore, it must be strengthened through add-on features because the actuary deals with aggregates. These aggregates include: • reserves; • premiums; • claims; and so on, spread over the timeframe under study. In the insurance industry, for instance, one should take notice of the fact that, fundamentally, important elements of the reserves arising in this compound picture are those for claims incurred and not yet reported (or reported and not yet paid) – as well as for premiums paid but not yet earned. In other terms, an accrued liability can play much the same role as a reserve. Another vital element is the balance between the projection of disbursements, over a relatively long timeframe, and the projection of cash inflows over the same period (see section 6.5). This reference includes both present participants and their successors to the transactions being inventoried. Realistic reserves, cash flows, profit margins, premiums and claims cannot be computed in the abstract. They must be based on a model that is factual, as far as the operations of our company are concerned. This model must be fully documented in terms of its entries. Table 6.2 presents, as an example, a consolidated statement of cash flows from the manufacturing industry. Table 6.2
Consolidated statements of cash flows
Operations
• Net income
• Cash provided by operations
• Adjustments for non-cash and non-operating items:
  – Non-cash restructuring charges
  – Loss on derivative instruments
  – Loss (gain) on sale of other investments
  – Depreciation and amortisation
  – Charge for acquired research and development
  – Amortisation of compensatory stock options
  – Equity in losses of investees
• Changes in operating assets and liabilities, net of acquisitions and dispositions:
  – Trade accounts receivable
  – Other receivables
  – Prepaid expenses and other current assets
  – Other assets
  – Investments including available-for-sale securities
  – Accrued expenses and other current liabilities
  – Deferred revenue and other liabilities
Investing activities
• Cash used in investing activities
• Product development costs
• Purchase of property and equipment
• Purchase of investments
• Proceeds from sale of investments
• Proceeds from short-term investments, net
• Purchase of minority interest
• Net proceeds (payments) for dispositions (acquisitions)
• Other investing (disinvesting) activities
Financing activities
• Cash provided by financing activities
• Proceeds from issuance of common stock, net
• Principal payments on debt
• Proceeds from issuance of debt
• Payment of deferred finance costs
Cash and equivalents
• Cash at beginning of year
• Cash received during the year
• Cash paid during the year
• Cash at end of year
• Increase (decrease) in cash and equivalents
Practically every company has its own classification of the elements entering the computation of cash flows. In the general case, these converge towards a regulatory standard. A basic principle regarding the format to be chosen is that the groups for which such elements are calculated must be fairly homogeneous. This leads to the concept of taxonomy,6 and with it, of: • selection; and • rejection, or antiselection.
The classes into which companies, individuals, accounts or events are to be sorted are defined by the taxonomical system that we choose. The classes of a good taxonomical system will be characterised by homogeneity of their constituent parts and a high degree of certainty in classification. Practically all scientific disciplines have to involve this concept of homogeneity. Classification in physics, chemistry, manufacturing operations and within a financial system have much in common. Its importance is demonstrated by means of the following example. Say that an insurance benefit of X is to be paid upon the occurrence of a designated random event. The premium is based on the assumption that the probability of this event occurring is P: • The value P can be estimated by observing the number of events and non-events in large samples of the potential population. • Once this probability is found, the premium is estimated and it must be more than the value of expected claims, P·X. If we make the hypothesis that the sample on which P has been based was not really homogeneous and the probabilities for two or more groups within the population may be unequal, then P is not based on homogeneous data, but is instead an average of two or more group probabilities: • for some groups, the probability is greater than P; • for others, it is less than P. Therefore, it is not appropriate to base the pricing for all subgroups on P. If the correct probability for class A is P + ∆, then that for class B is P – ∆. In this connection, an important question to ask is how many of the potential buyers from classes A and B will actually buy insurance if the rate charged is based on P? The answer to this query is that if the average is P, then either group A or group B (the two subpopulations) will be subsidised by the other group. Therefore, the reaction of the group paying the subsidy may be characterised by rejection (antiselection). There are always dangers in miscalculating the premium and in subsidising one product by another. In theory, antiselection may be avoided if the buying public is unaware that such a subsidising difference exists. Or, it may be overcome if a risk averse population has no viable alternative. But neither of these conditions can be expected to last for long in a competitive market.
If it exists, it will be exploited by competitors through more rational subdivision and product pricing. As information becomes widespread and competing companies strive to attract the better risks, the coarser classification system must ultimately give way to the more refined, or the company using it will drive itself out of business. A different way of making this statement is that the models which we use must be refined over time. This is just as true of actuarial science as of any other process or product.
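The arithmetic of that subsidy, and hence of antiselection, can be made explicit with a short sketch; the benefit, probabilities and loading below are all illustrative figures, not drawn from any actual tariff.

def expected_margin(premium, prob_claim, benefit):
    """Expected profit per policy for a claim of size `benefit` occurring
    with probability `prob_claim`."""
    return premium - prob_claim * benefit

# Hypothetical figures: benefit X = 10,000; average claim probability P = 3%;
# class A really runs at P + delta = 4%, class B at P - delta = 2%.
X, P, delta = 10_000.0, 0.03, 0.01
premium = P * X * 1.10               # priced on the average, with a 10% loading

for label, p in (("class A (P + delta)", P + delta), ("class B (P - delta)", P - delta)):
    print(f"{label}: expected margin per policy = {expected_margin(premium, p, X):+.0f}")

Class B policyholders are visibly subsidising class A; in a competitive market they are the ones who will walk away, which is the antiselection described above.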
Part Three Forecasting, Reporting, Evaluating and Exercising Market Discipline
7 Scenario Analysis and the Delphi Method
7.1
Introduction
Scenarios are developed and used for a multiplicity of reasons that range from the study of a financial environment to the behaviour of a system, or the aftermath of alternative courses of action. Another use of scenarios is in connection with the elicitation of system specifications or model parameters. The nature of scenarios makes them suitable to represent behaviour in a way allowing users of the same aggregate, but with different views, to provide different opinions which may be: • converging; • diverging; or • overlapping. Under these conditions, it is normal that scenarios may include contradictions. Also the resulting, or projected, system behaviour may not be completely defined by a scenario or set of scenarios, which are usually of a qualitative nature. What is more, to obtain a global behaviour model, it must be possible not only to define but also to deal with contradictions and incompleteness characterising a given product, system, or market environment. Because scenarios describe possible interactions between entities, they often focus on behavioural issues. These entities may be individuals, organisational units, financial instruments, machines, protocols, specifications, tolerances, or risk factors. Scenarios constitute a means to communicate functional and/or operational requirements within a company and outside of it, to its supply chain and its business partners:
• Scenarios are not goals; they are tools and the tools we use must be flexible and powerful. • Scenarios are often textual and graphical, helping in the description of an environment for elicitation, development, integration, verification and validation of specifications or assumptions.
The qualitative nature of scenarios makes them particularly well suited to financial studies, but also to the sciences and to engineering. Whether they deal with financial instruments, machine components or systems, designers have to be particularly alert to nonquantifiable characteristics. All sorts of analysts have the tendency to be so taken by the beauty and precision of numbers that they frequently:
• overlook the simplifications made to achieve this precision; • neglect the qualitative factors that are often determinant; and • overemphasise the calculations that are, in the last analysis, cold numbers.
Scenarios and numerical analysis complement one another. It is the wrong approach to define engineers as persons who spend their life finding out the best way to do something that should not be done at all. But it is right to say that every quantification must have adjoined to it a qualification addressing issues like utility, market appeal and cost/effectiveness, as well as challenges or difficulties with the financial, social and political aspects of a new product or process. Scenarios can be instrumental in expressing some of the difficult-to-quantify considerations we must take into account in our design and in our plans. Methods like Delphi (discussed later in this chapter) can be instrumental in attaining a more systematic and direct use of expert judgement from a statistically significant sample. Whether we talk of finance or of engineering and physics, reliance on expert judgement is indispensable to all analysis. One of the virtues of studies in cost/effectiveness and in market response is that they provide a framework that can make it easier for the judgement and intuition of experts in diverse fields to communicate with one another; or for end-users to express some a priori opinions. Scenarios make it possible to combine these opinions systematically and efficiently, to improve processes and products.
7.2
Why expert opinion is not available matter-of-course
Many people think committees are the best way to share expert opinions, but this argument forgets that committees often fail to make their assumptions and reasoning explicit, let alone the personal conflicts which exist in a committee setting. Parkinson’s Law suggests that the time committees, including the board of directors, spend on a subject is inversely proportional to the importance of that subject. Henry Ford used to say a committee cannot drive a company, just like a committee cannot drive a car. Committees have no body to kick and no soul to blame. Also, in many cases their findings are obtained through bargaining, and these different deals do not bring out the rationale for decisions, which eventually leads to drawbacks. A round-table discussion is a better setting in which the advantages and disadvantages of an issue can be examined: • systematically; and • dispassionately. Indeed, many committee meetings find it preferable to proceed in this way, to minimise the identification of political factors in decisions, particularly when such factors might have a major weight. But with roundtable discussions recordkeeping is less than perfect, and let’s not forget that even the minutes of many board meetings show summaries and a consensus (if there is one) rather than details. Lack of detail is on the way to being corrected through the institution of a corporate memory facility (CMF), which databases all figures and opinions discussed in committee meetings, all the way to the decision which is reached and the reasons underpinning such decision. A CMF is, in essence, a way of scenario writing. An alternative approach is using experts in evaluating the likelihood of events through well known and accepted operational procedures like the Delphi method. The Delphi method has been named after the oracle, because its developers first thought of it as a scheme to obtain better political forecasts. It makes use of a process that might be seen as expert arbitration with deliberation steered by a control group through feedback. The first experiment with Delphi targeted horse racing and it took place in 1948. Subsequent applications of Delphi used several different approaches. It took another ten years till the method settled down and became accepted: • Delphi can be seen as an interactive roundtable discussion with feedback.
• The elicitation of expert opinion is systematic and more forthcoming than in a committee. • Because all intermediate and final details are databased, this process leads to a corporate memory facility. The domain of application is wide and polyvalent. A good implementation of the Delphi method is with credit estimates. We can use a combination of quantitative and judgmental processes. Bayesian statistics introduce conditional probability. Delphi forecasting uses interviews and/or questionnaires to extract estimates or prognostication on a specific event (or issue) from a valid sample of experts: • Typically, this event is subject to a conditional probability: something happening if something else takes place. • The experts’ prognostication is subjected to iterations, with the responses obtained being presented to the same experts in order to confront them with dissension. Dissension helps them to focus their judgement. Delphi attempts to improve the panel approach in arriving at an estimate by subjecting the views of individual experts to each other’s opinions, which might be divergent. Dissension is a sort of criticism, but Delphi avoids face-to-face confrontation and it also provides anonymity of opinions, and of arguments advanced in defence of these opinions. There are different ways of implementing Delphi. In one version: • The participating experts are asked not only to give their opinions but also the reasons for these opinions. • Direct debate is replaced by the interchange of information and opinions through a carefully designed sequence of questionnaires. At each successive interrogation, the participants are given new and refined information, in the form of opinion feedback. This is derived by a computed consensus from the earlier parts of the programme. The process continues until further progress toward a consensus appears to be negligible (we will see a practical example in section 7.3). The remaining conflicting views are then documented and presented in a form that shows the relative weight of each opinion within the group. Progressive refining of judgmental opinions through successive iterations is a good way of taking some of the subjectivity out of the system, though it should not be done in a way which creates
effects, like one group of experts exercising dominance over the other(s). This happens when the panellists are subjected to a herd syndrome. Avoiding a herd syndrome is fundamental to any such undertaking, whether we talk of Delphi or of any other method. A herd syndrome can be self-defeating in the prediction of creditworthiness, for instance, because performance criteria are usually compound. Not only must we have dissension in forecasting default probability, but we should also not fall into the dual trap of euphoria and of incorrectly forecasting default for companies that are likely to prosper. Delphi is well suited to credit evaluations because the method can target default in an effective manner through successive probability estimates. As the largely subjective quantity of counterparty risk changes, our estimate of default likelihood changes with it – whether it grows or shrinks:
• But the results from the implementation of Delphi are not supposed to last forever.
• Rather, they are a snapshot of current conditions and estimates, which have to be periodically reviewed.
In conclusion, Delphi experts don't act like members of a classical committee; they have greater independence of opinion, and details of the experts' opinions are databased. In a practical application of the Delphi method we attempt to examine the likelihood of an issue, action or event which is characterised by tonalities of grey. Issues with yes/no answers are the exception. The estimate of the conditional probability of events or actions is critical to the proper estimation of an outcome, for instance, the risk of credit exposure.
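To make the notion of conditional probability concrete, here is a minimal sketch of a Bayes' rule calculation of P(default | downgrade) from assumed base rates; the probabilities are illustrative assumptions, not estimates taken from the text.

```python
# Illustrative only: the probabilities below are assumed numbers,
# not estimates taken from the text.
p_default = 0.02                    # prior probability that the counterparty defaults
p_downgrade_given_default = 0.70    # chance a downgrade precedes a default
p_downgrade_given_survival = 0.10   # chance of a downgrade when no default follows

# Total probability of observing a downgrade
p_downgrade = (p_downgrade_given_default * p_default
               + p_downgrade_given_survival * (1.0 - p_default))

# Bayes' rule: probability of default conditional on having seen a downgrade
p_default_given_downgrade = p_downgrade_given_default * p_default / p_downgrade

print(f"P(default | downgrade) = {p_default_given_downgrade:.3f}")
```

With these assumed figures the conditional default probability comes out at about 0.125, roughly six times the prior; the point of the sketch is the mechanics, not the numbers.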
7.3 The Delphi method helps management avoid tunnel vision

'Warren Buffett said: "What do you think the odds of this thing making it are?" I said, "Pretty good. One out of two." He said, "Do you think that's good? Why don't you go in an aeroplane with a parachute that opens one out of every two times and jump?"'1 Estimating the likelihood of an event is one thing; evaluating whether or not that event is rational or sustainable seems to be another. But in reality rationality and likelihood, up to a point, are interlinked.
Experts form their opinion largely on personal experience that has been built up over years of practice and into which a great deal of their training has been integrated. Scenarios and the estimated likelihood of events are based on such personal opinions. Crucial to the parallel development of scenarios from different viewpoints is to integrate them into a model able to express and document an opinion, rather than to give outright a clearly stated specification. There are two main composition methods:
• Declarative, where composition is made in the manner explicitly indicated by the user, his view(s) and requirements.
• Inductive. This approach uses inductive rules to insert a scenario into a more complete framework, or to update an existing scenario.
Either or both methods can be used to describe whole or partial behaviours. All of the elements coming into them may not be known at the beginning; therefore it must be possible to continue evolving a scenario and/or subdivide it into parts which are acquired one by one. Subdivision and integration are feasible because:
• each part is expressed according to the constituents of the application domain; and
• further elements might be added to the scenario during analysis or testing.
Syntactical analysis uses characteristic elements of the application domain. A method like Delphi, which rests on an integrative approach, studies parts of scenario(s) and merges the partial results into an aggregate. This helps to produce quite early a script that includes subsequent iterations at subscenario level, whether these are qualitative only or have both qualitative and quantitative components. A vital element in the process is the way we arrive at an answer, or put on a homogeneous (hence comparable) footing opinions from different experts. Consider, as an example, the common situation of having to answer the question of how large the market for a new computer product could be, or the market acceptance of a product at a given price. Neither of these queries is easy to answer, and a lot of mistakes are made when one attempts to do so. Even the experts can be inaccurate by a large margin. When, in the late 1940s, the designers of Univac presented Thomas Watson, Sr, with the opportunity to buy their computer, IBM's boss rejected it saying that computers had only a limited market among
academics. Finally, IBM did get into computers. In the late 1950s IBM experts estimated that the 650, which was in reality the first ever minicomputer, had a market as big as 50 units, world-wide. How far was this estimate from the mark? By the mid-1960s, when the 650's lifecycle ended, the machine had sold about 3,500 units. Thomas Alva Edison, too, made his miscalculations. Edison was an inventor who had made major contributions to science and technology: the light bulb, the phonograph, and the motion picture. But he also stubbornly stuck by numerous errors in judgement – for example, calling radio a 'fad'. The assumption with the Delphi method is that if one expert makes, for whatever reason, a mistake in judgement like that of Watson or Edison, others will disagree. This is the importance of dissent discussed in section 7.2. It is likely that it will happen that way, but it is by no means sure the cumulative judgement will be correct. (See also Chapter 12 about errors in prognostication.) An example when all of the experts were wrong happened with Delphi in connection with the Man on the Moon programme. In the early 1960s, the Rand Corporation, which developed the most popular current methodology of Delphi, carried out global research among experts on the most likely year man would land on the moon. The answer with the highest frequency was 'the early 1990s' – roughly three decades hence. But the Soviet space challenge led to a race, and man landed on the moon in 1969. Nothing is failproof, and nothing is so failure-prone as human opinion. This strengthens rather than diminishes the importance of Delphi. Let's therefore look more closely at how the method works. Take 'A' as the portion of an annual budget to be devoted to R&D in order to leapfrog our competitors, and say that we proceed in the following steps:
• We select a panel of experts, and ask each expert to estimate independently what A should be.
With this, we arrange the responses in order of magnitude, Ai, i = 1…15, and determine the quartiles, so that four intervals are formed on the A-line by these quartiles, as shown in Figure 7.1. Each quartile contains one-quarter of the estimates. After the first round of collecting expert opinions on the most advisable value of A given the stated objective:
• We communicate the values of these quartiles to each of the respondents, asking them to reconsider their previous estimates.
Figure 7.1 A linear plot of answers given by experts to the first round of a Delphi procedure (estimates A1 to A15 arranged in order of magnitude along the possible range of answers, with the first, second, third and fourth quartiles and the median marked)
Figure 7.2 The opinions of participating experts can be presented as a pattern with corresponding frequencies (frequencies of responses plotted against three values A′, A″, A‴ of the portion of the annual R&D budget)
If a new estimate lies outside the interquartile range, we ask the experts to state briefly why, in their opinion, the answer should be lower (or higher) than the one which corresponds to the 75 per cent majority opinion expressed in the first round of the questionnaire. The results of this second round, which will generally be less dispersed than the first, are again fed back to the respondents in summary form:
• We include in this form the new quartiles as well as the reasons given for raising and lowering the values of 'A' by the experts.
The results elicited in the second round are collated, edited and given again to the respondents, always preserving their anonymity. The experts are asked to consider these reasons, giving them the weight they think they deserve. Then, using the new information, they should revise their previous estimates or, alternatively, stick to them if they choose to do so. The outlined methodology requires that, if the revised estimates fall outside the aforementioned interquartile range, the participating experts are asked to state briefly why the arguments that might have drawn their
estimates toward the median were not convincing. Then, in a fourth round, both the quartiles of the distribution of responses and the counterarguments elicited in the third round are submitted to the respondents. The participating experts are encouraged to make one last revision of their estimates. The median of these fourth-round responses may then be taken as representing the group position as to what 'A' should be – or, alternatively, the pattern of fourth-round responses is presented to the board for its final decision. Figure 7.2 shows the pattern of responses with three values of A: A′, A″, A‴, each with its corresponding frequency. The case might have been, for example, that A′, the lower share of the R&D budget, was voted for by the financial experts, who were convinced about return on investment; the engineers voted for the mode of the distribution; while a couple of marketing people saw in the new project the way for their company to move ahead of the curve – and voted for more money than the engineers had asked for. It is not advisable to streamline the opinions of experts when such opinions are characterised by bifurcation.
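To make the quartile mechanics of this section concrete, here is a minimal sketch in Python, not part of the original text and not any particular Delphi software, that computes the quartiles of a round of hypothetical expert estimates and flags those respondents who fall outside the interquartile range and would therefore be asked to justify their position.

```python
import statistics

def delphi_round(estimates):
    """Summarise one Delphi round: quartile cut points, median and outliers.

    `estimates` is the list A1..An of independent expert estimates
    (here, the proposed share of the annual budget to devote to R&D).
    """
    ordered = sorted(estimates)
    q1, median, q3 = statistics.quantiles(ordered, n=4)  # three quartile cut points
    outside_iqr = [a for a in ordered if a < q1 or a > q3]
    return {"q1": q1, "median": median, "q3": q3, "ask_to_justify": outside_iqr}

# Hypothetical first-round answers from fifteen experts (per cent of budget)
round_1 = [8, 9, 10, 10, 11, 12, 12, 13, 13, 14, 15, 15, 17, 20, 25]
summary = delphi_round(round_1)
print(summary)   # feed q1, median and q3 back to the panel for the second round
```

The same function can be re-run on the second-, third- and fourth-round estimates, so that the successive summaries form the feedback described above.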
7.4 Scenarios and the pattern of expert advice
The example in section 7.3 helps to document that scenarios, and the process of developing a pattern of expert opinion, provide a logical string of concepts and events that can be exploited to advantage. In actual practice, such a procedure is undertaken for one of several reasons or a combination of them: exploring a new line of thinking or of action; reaching a certain convergence of opinions; making relatively abstract opinions about events or budgets better understandable; restructuring existing notions regarding a certain issue; or improving upon the current system. Scenarios and the patterning of expert advice also help in evaluating alternative courses of action, and in examining their aftermath; in specifying wanted change at a conceptual level; or in implementing a new system configuration while taking the legacy solution into account. In this particular case the pattern of opinions will tend to vary along the bifurcation we have seen in Figure 7.2. In an application of the Delphi method more complex than the one we have just seen, experts may be asked to rate countries for creditworthiness reasons along the frame of reference shown in Figure 7.3. This will take into account not only the country but also the instrument; for instance, foreign debt typically gets a lower rating than internal debt, because the latter benefits from the government's ability to tax its citizens. Another dimension in Figure 7.3 is the term bucket.
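As a small illustration of how the three-dimensional frame of reference of Figure 7.3 might be held in a computer, the sketch below keys a rating (0 to 100) by country, instrument and term bucket and derives an indicative premium from it. The country names, ratings and spread schedule are hypothetical assumptions, not figures from the text.

```python
# Hypothetical ratings on a 0-100 scale, keyed by the three dimensions of
# Figure 7.3: country, instrument of the transaction, and term bucket.
country_risk = {
    ("Country A", "foreign debt",  "0-1y"): 72,
    ("Country A", "foreign debt",  "1-5y"): 65,
    ("Country A", "internal debt", "0-1y"): 80,
    ("Country B", "foreign debt",  "1-5y"): 48,
}

def premium_rate(country, instrument, term, base_rate=0.02):
    """Illustrative premium: the lower the rating, the higher the add-on."""
    rating = country_risk[(country, instrument, term)]
    return base_rate + (100 - rating) / 100 * 0.05   # assumed spread schedule

print(f"{premium_rate('Country B', 'foreign debt', '1-5y'):.4f}")
```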
Figure 7.3 A 3-dimensional frame of reference for calculating the premium rate in connection with country risk (axes: country rating on a scale of 0 to 100, instrument of the transaction, and term bucket, e.g. five maturity bands)
Usually short-term debt is rated higher than long-term debt, other things being equal. One of the advantages of scenario analysis in the case of bifurcations in expert opinion is that it promotes shared understanding, explains the reasoning behind divergent views, and/or describes the change process step by step while providing a linkage to reality. Generally, the scenario and the expert opinions backing it up describe plausible events that might take place, but they may also include outliers. In both cases, they stimulate thinking about:
• current problems, and likely or possible solutions;
• hypotheses and assumptions relating to different occurrences;
• opportunities for action, and associated risk and return.
Scenario analysis and expert opinion voting correlate because, as we saw in section 7.3, the successive phases of expert feedback based on quartiles are in themselves a scenario, while any scenario requires documentation, and expert opinions provide it. Let's see how an approach of this type
can be applied to a typical case of cost-effectiveness in industrial investments. The target is a manufacturing budget and its competing investment opportunities, each demanding an allocation of cash: more robotics, more computers, better-trained personnel, more emphasis on quality control, and so forth. Not all promising measures can be financed. Inevitably, there are cases where a substantial fraction of a given budget has already been irrevocably committed to previously contracted obligations: depreciation of factories, amortisation of machinery, management overhead, labour costs, pension payments, materials, inventories, and the like. The challenge is to devise a scheme that permits us:
• to suggest better-focused measures;
• to develop and compare alternatives; and
• to select a preferred allocation of the freely disposable residue of the manufacturing budget.
If we implement the Delphi method, we will begin by asking a panel of experts familiar with the problem to list measures that they feel should be included in any programme of restructuring. Of course, rarely is a single investment of an all-or-nothing kind. Associated with it is a degree to which an allocation of funds can be executed on the basis of policy or other criteria. For example, new assembly robots, enterprise resource planning (ERP) software,2 support of a just-in-time (JIT) project, retraining programmes, and quality control circles are of this kind. Even in the case of more or less one-time actions, such as building a business partner passthrough facility around ERP software, there are constraints under budgetary control, such as:
• the expected time to completion; and
• the required quality of effort, and its deliverables.
Therefore, in addition to the budgetary allocation connected to the action, and the arguments supporting it, these constraints should be supplied by the participating experts and be observed by them. In a study of this nature it is also likely that at least some of the measures will be complementary, in the sense that neither can be meaningfully adopted in the absence of the other. These measures should first be identified, then combined. The same is true of actions that the experts judge:
• technically infeasible;
• expensive beyond all reason; or
• fraught with very undesirable consequences.
These actions should be properly identified, then eliminated, while other actions or measures might be added. As this example suggests, the Delphi method's successive iterations may be used as stepping stones towards the definition of an action programme, or the pruning of one which has been tested and found not to be satisfactory. Once the critical factors have been identified, the sequential procedure we followed in section 7.3 with the budgetary allocation for an R&D project can be repeated. The list of factors (or stepping stones) is submitted to and rated by the experts. Before this happens, however, it is necessary to designate for each nominated action Ci the unit in which its degree of adoption Di is to be measured. Attention must be paid to constraints. For instance, with five actions (i = 1…5) it may be decided that ΣDi = 100. On average each factor will then have 20 units, and if one is given more than 20, one or more of the others will have to have less. The units may be merely the number of dollars or pounds, or some other metric specific to the situation. The particular choice is material to the specific problem but immaterial for the Delphi method, as long as the adoption of Ci to the degree Di has a precise meaning. No matter how the benefits derived from Ci are assessed, their value function Fi will depend on the degree of adoption Di and will typically appear as in Figure 7.4. The value factor derived from the degree of satisfaction of a certain course of action Ci will very rarely, if ever, relate in a linear fashion to the corresponding degree of adoption Di. Usually the relation between them will be non-linear. In Figure 7.4, up to Di′ there corresponds only a small Fi, but then there is an acceleration of returns till Di″, with the ogive curve tapering off shortly thereafter:
• One of the requests to be posed to the experts would be precisely to estimate these Di′, Di″, Di‴ points and their corresponding Fi′, Fi″, Fi‴.
• Once this is achieved in a manner which can be accepted as convergent, the optimisation of the financial allocation becomes a relatively easy and well-documented exercise.
Notice that a similar approach could be followed with investments in connection with their risk and return. Up to some point Di′ in the vicinity of the point of inflection, the measure of satisfaction has negligible value.
Figure 7.4 The value derived from the degree of satisfaction of an action Ci might be represented by an ogive curve (value factor Fi rising from Fi′ through Fi″ to Fi‴ as the degree of satisfaction moves through Di′, Di″ and Di‴)
Beyond Di′ the marginal value added per adopted unit begins to increase, and this continues up to Di″. From that point on, the law of diminishing returns comes into play. In the example presented in Figure 7.4, marginal returns decrease so fast as to make further investment seem definitely pointless: scarce resources are allocated without attaining any significant improvement in satisfaction, or undesirable side effects might even appear. It should be remembered that there is always the constraint ΣDi = 100, which serves as a reminder that limits exist. Opinions and estimates which serve optimisation purposes can be nicely obtained by combining the responses of the participating experts. If the plot of a quantitative expression of these opinions approximates a normal distribution, and provided the standard deviation is small, we can use the mean as a proxy of a convergent value. Since costs and benefits lie in the future, they cannot in principle be fixed with any great accuracy. But they can be tuned. For instance, a valuable step would be to ask a panel of trained cost analysts to estimate the amount required to implement each action Ci at the Di′ and Di″ levels, and to sketch as best they can the cost curve. Because the expected cost of a measure depends to some extent on the other measures being enacted, and so do the benefits, we should follow a similar procedure for a group of actions. This is in line with what has
been discussed in section 7.2 about analysing a system into its subsystems and recombining the results.
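As a minimal sketch of the ideas in this section, namely the ogive value function Fi(Di) and the constraint ΣDi = 100, the following Python fragment uses a logistic curve for each action and a simple greedy rule to spread the 100 units of adoption. The curve parameters and the greedy heuristic are illustrative assumptions, not a prescription from the text, and a greedy rule is not guaranteed to find the optimum for S-shaped curves.

```python
import math

def ogive_value(d, midpoint, steepness, max_value):
    """Logistic (S-shaped) value Fi as a function of degree of adoption Di."""
    return max_value / (1.0 + math.exp(-steepness * (d - midpoint)))

# Hypothetical curve parameters (midpoint, steepness, maximum value) for C1..C5
actions = {
    "C1": (40, 0.15, 10.0),
    "C2": (50, 0.10, 12.0),
    "C3": (30, 0.20,  8.0),
    "C4": (60, 0.12, 15.0),
    "C5": (45, 0.18,  9.0),
}

TOTAL_UNITS = 100                     # the constraint sum(Di) = 100
allocation = {name: 0 for name in actions}

# Greedy heuristic: give each successive unit to the action whose value
# increases most from one more unit of adoption.
for _ in range(TOTAL_UNITS):
    def marginal_gain(name):
        mid, k, vmax = actions[name]
        d = allocation[name]
        return ogive_value(d + 1, mid, k, vmax) - ogive_value(d, mid, k, vmax)
    best = max(actions, key=marginal_gain)
    allocation[best] += 1

print(allocation)   # one candidate split of the budget units across C1..C5
```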
7.5 Extending the scope of analytics and the planning horizon

The Delphi method is by no means a worst-case scenario, though it might be used to develop one based on the opinion of experts. A worst case is essentially a hypothesis about how wrong things may go. We may be correct in our pessimistic estimates, but we may also be wrong in thinking that the future reserves the worst possible outcome. 'The singular feature of the great crash', John Kenneth Galbraith once suggested, 'was that the worst continued to worsen.' An implementation of Delphi similar to the one we followed in section 7.4 can be made in connection with the analysis of credit risk, focusing on the rating of counterparties and on changes that may characterise the grade they have been given over a period of time. In fact, this is more or less what independent rating agencies do when rating companies and their financial instruments, based on a battery of questions and the analysis of their annual reports (see Chapter 9). The answers to critical questions concerning counterparty risk, and the associated analytical findings, are evaluated by a panel of experts to reach a decision on rating a given entity. Through this procedure, independent agencies decide whether an entity and its financial paper is AAA, D, or between these two extremes on the rating scale, which typically has 20 graduations. In the coming years, this procedure of evaluating and rating will become most important as financial institutions implement the new Capital Adequacy Framework of the Basle Committee on Banking Supervision – known as Basle II – and opt for the IRB option discussed in Chapter 4. I would like particularly to stress the point that not only must the method to be used be sound, to reach factual and documented results (and in this Delphi can play a critical role), but also the internal control system3 of what Basle calls IRB-banks should work without clogging. A similar argument is valid about compliance with legislation and regulation. The laws may be written black on white, but to interpret them correctly learned lawyers try to read between the lines. In doing so, not only must they take into account a huge and growing
body of jurisprudence, but they must also account for elements which:
• are largely situational; and
• depend on expert opinion(s).
Some of the better known jurists suggest that the most basic reason for having an established body of laws is not that it represents an ideal instrument of justice, but that, by its presence, it precludes the adoption and implementation of something worse. Also, the laws help to provide a fair playing field. How this level playing field will be protected or demolished depends on the virtuosity of the lawyers-experts and the ingenuity of their judgement on how to enforce the law or get around it, depending on the case they are confronted with:
• if they defend a case; or
• if they bring a case to court.
The argument just made about expert opinion, and the way to record it and distil it in a reliable manner, is also applicable to matters regarding security. There is always a question about the relative character of security as a concept, the security measures to be taken, and the quality of their implementation. In principle, security depends greatly on:
• our resolve;
• our mission;
• our tools; and
• our environment.
Security is a relative notion whose materialisation depends on the place and age we live in, on personal judgement regarding the wanted or needed degree of protection, and on the means we are willing to put into action to meet a given security goal. These are largely judgmental issues decided by a committee or panel of experts on behalf of the government or some other authority. What the Delphi method provides is a tool for convergence of opinions. There is another fact to be added to what I have mentioned in the preceding paragraphs. The second half of the 20th century brought an important change in the intellectual climate of the western world, particularly regarding the attitude of people toward the future. This has become apparent in both public and private planning agencies,
not only in the research community. In actual practice, however, planning horizons can be effectively extended only when we are able to provide:
• a craftsmanlike analysis of the risks and opportunities the future might offer; and
• a way of measuring in a reasonably dependable manner the future effect (positive or negative) of our current actions.
This is, after all, one of the goals we aim to reach with models. Scenario analysis and the Delphi method are integrative tools that permit a better understanding of what it means to talk about the future. In spite of failures, like the estimate of when man would land on the moon, they are all part of the growing recognition that it is important to learn something about tomorrow. Dr Harold D. Koontz, my professor of business policy at UCLA, often talked about the importance of reading tomorrow's newspaper today – albeit at the cost of less accuracy than might be the case the day after:
• The basic premise is that the future is no longer unique, unforeseeable, or inevitable.
• Rather, there is a multitude of possible futures, with associated likelihoods that can be estimated.
This is true even if the technological environment is undergoing rapid change and the pace of change is accelerating. Being able to foresee the next step, or state of affairs, is vital inasmuch as we are going through several major adjustments, and to survive we have to adapt continuously. To many people this has become a way of life. Provided we use the right tools, the distillation of cumulative expert opinion provides results that tend to be more dependable than the projections of one person that go unchallenged. This dependability is most vital when we strive to anticipate changes in our environment rather than having to deal with them belatedly and inadequately:
• Anticipation is the keyword in this connection, and this is what an analytical scenario can offer.
• The arguments and counterarguments that it develops give perspective, and often challenge established notions.
The traditional method of analysis based on a single track is proving inadequate to the task of dealing effectively with complex events, particularly as the rate of change tends to accelerate. The multiplicity of events confronting us requires polyvalence in opinion, as well as a steady policy of challenging the obvious. In the bottom line, this is the reason why we are interested in mathematical models, simulation studies, scenario analysis, and a systematic approach to the use of diverging expert opinions like the one provided by the Delphi method.
7.6 Making effective use of informed intuitive judgement
Some of the most worthwhile scenarios are linked to changed goals, providing a middle ground between abstraction and reality as well as facilitating understanding and reuse of existing knowledge. This is achieved by projecting opportunities and consequences prior to detailing system functions and features, launching a new instrument in the market, or trying the hard way to reach a desired solution. Scenarios involving the compilation and digestion of expert opinions help to explain why a certain course of action is needed, by showing what the most likely outcome will be, or by bringing into perspective the alternatives as well as their aftermath. In engineering, this can lead to design decomposition, trade-offs, life-cycle views and iterative development patterns of the artefact under study. In finance, the consequence may be a reallocation of resources in a way which increases returns. Within this overall perspective, scenarios based on expert opinions can help as mediators between general guidelines, a framework of options, and detailed specifications. Analysts and designers use formal scenario representations as a guidance tool within the context of organisational work and development capabilities. I have already explained why:
• Dissent is often necessary to reach desired product characteristics, attain system goals, or assure that loans are given and investments are made with full understanding of background factors.
If dissent had been one of its core policies in giving loans, Bank X (a Nigerian financial institution) would not have lost millions in its loans to a local cocoa trader. Its client was a company specialising in the wholesale of cocoa, with a business of US$1 million per year. In 1997 the CEO asked the bank for a US$250,000 credit line with collateral. He got it in spite of
the fact that his record showed that in the previous couple of years he had lost US$7 million playing on futures in the London Mercantile Exchange. That year, 1997, business had been good and growing. Right after he received the quarter-of-a-million credit line, the CEO asked Bank X for another US$1.5 million: an unsecured loan. He got it again, and the total exposure now stood at US$1.75 million. With this money he invested in a cocoa fund, and also started building a warehouse. The CEO bet wrongly. Both businesses turned sour. Trying to recoup its money, Bank X seized the company's assets. However, it found out that the company was also indebted to other banks for US$2.50 million – a total of US$4.25 million with what it owed to Bank X. The CEO of the client company had even taken a foreign loan by pledging the same assets he had given the domestic banks as collateral. The message to retain from this example is that credit institutions do not always exercise due care in examining the soundness of their loans and their investments. Major decisions are usually made by one individual, and he or she may have repeatedly failed in his or her judgement. No dissent is expressed, and no challenging opinions are asked for. Yet:
• A scenario clarifying expert opinions on different options, and their risk and return, helps decision-makers in rethinking the choices they are about to make.
Within this context, the Delphi method provides effective means for informed intuitive judgement, including the projections of future risk and return on which decisions must rely. Let me repeat once again, however, that these opinions – and scenarios based on them – are largely derived from the personal expectations of individuals rather than from a fundamental theory that is generally valid. As such, they are pragmatic but not error-free (of course, a theory, too, is man-made and it is not error-free either). Errors in evaluation and prognostication can happen even when we have a well-tested model available, the underlying assumptions or hypotheses clearly stated, its range of applicability well defined, and the quality of the input(s) verified. The interpretation by one or more individuals, who can bring the appropriate expertise to bear on the application of that model, might bias the results. Another fact to be brought to the reader's attention is that, in view of the absence of a deep theoretical foundation regarding evaluation and prediction, we are faced with the inevitability of having, to a lesser or greater extent, to rely on intuitive expertise. There is no alternative to
making the most progress towards a problem solution by obtaining the relevant intuitive insights of experts. The challenge is to use their judgements as systematically as possible. Scenario writing, the Delphi method, fuzzy engineering tools and procedures,4 or their combination, largely have to rely on expert judgement, making the most constructive and systematic use of such opinions. The key to dependable results is a sound methodology:
• creating the proper conditions under which the participating experts can perform their duties;
• being objective in deriving from divergent opinions a single (or bifurcated, but exploitable) position; and
• presenting the results in a meaningful, comprehensive, action-oriented manner.
Because at least some scenarios deal with generalisations, the methodology to be chosen must remain valid in spite of a degree of abstraction or lack of detail. Early specifications, where available, can be used in writing a draft or a scenario, developing a prototype, or doing a simulation to show a system's behaviour – then testing the obtained results for validity and acceptability. Prototyping is most helpful when the scenario includes a quantitative definition of requirements, the establishment of tolerances, or a description of partial behaviour; or when it involves the ability to elaborate tolerances, therefore going beyond hypotheses concerning partly unknown factors. It is possible that early assumptions (or specifications) reflect inconsistencies which have to be flushed out. When this is true, a coherence verification becomes necessary, and it might lead to a modification of the scenario.
8 Financial Forecasting and Economic Predictions
8.1 Introduction
In Chapter 7 it was explained that there are two ways of prognosticating events that might take place sometime in the future. One is through scenarios based on expert opinion; this is what the Delphi method does. The other uses theories relevant to the issues under study, particularly those that either are generally accepted or can reasonably be believed to lead to fairly dependable results. This is the theme of the present chapter. Pitfalls exist in both approaches because most persons, including the experts, have a common tendency to make overconfident predictions; and, as everybody knows, in the case of major market twists overconfidence can be fatal. Another major shortcoming is the time interval between reception of market information, cognition of the new trend under development, and the subsequent reaction to it.
• People do not recognise trends until they are well established, which often means until it is too late.
• Many analysts do not begin to extrapolate a phenomenon, such as rising inflation, unless it has risen for some time.
Errors in prognostication are the theme of Chapter 12, but to make this discussion more meaningful it is necessary to bring to the reader's attention another pitfall: wishful thinking. In October 2000, after the big fall of the NASDAQ index, the majority of experts on Wall Street said the market had reached a bottom. Yet the worst was still to come. Then, through the first eight months of 2001, they predicted a recovery just two months down the line. It did not happen that way.
Wishful thinking and delays in detecting trends do not enable our firm to reposition itself to face market forces. Developing a prognostication or a plan only after irrefutable statistical evidence is at hand makes us unresponsive in terms of the adjustments that are necessary, reducing our flexibility as well as the value of our reaction. Out-of-date technology and ineffective mathematical tools encourage such an attitude. Dr Benoit Mandelbrot was correct when he stated that almost all our statistical tools are obsolete and we might eventually need to 'consign centuries of work to the scrap heap'. If forecasting general economic trends and functions is neither easy nor very timely, the good news is that we are able to do a better job in projecting cost functions (largely based on standard costs), production functions (leading to production scheduling), and market functions (based on the results of market research). All three provide data structures that are useful in model building and experimentation. Among the more classical tools used in connection with the above functions are regression analysis on available data sets, and the selection of equations that assist in obtaining a predictive viewpoint after making some assumptions about the data distribution. New tools are associated with the experimental method, and they include the test of hypothesis, the test of the mean (t-test), the test of the variance (χ²-test), and the computation of confidence intervals, as well as a lot of experimental measurements and other observations. Newer, more advanced modelling methods developed for prognostication account for the fact that an accurate prediction of the future does not always come through extrapolation of known phenomena. Rather, it is made by means of analogical reasoning – which essentially means judging whether there exist some sort of analogies, and of which kind. Let me add, however, that in a fast-changing environment this too has its flaws. Managing change is one of today's greatest challenges, and change is rarely predictable.
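To illustrate the classical tools just listed, here is a minimal sketch, with made-up return figures, of a test of the mean (t-test) and a confidence interval built from the t distribution, using the SciPy library.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of monthly percentage returns (invented numbers)
returns = np.array([1.2, -0.8, 0.5, 2.1, -1.4, 0.9, 1.7, -0.3, 0.6, 1.1])

# Test of the mean (t-test): is the average return different from zero?
t_stat, p_value = stats.ttest_1samp(returns, popmean=0.0)

# 95 per cent confidence interval for the mean
mean = returns.mean()
sem = returns.std(ddof=1) / np.sqrt(len(returns))
t_crit = stats.t.ppf(0.975, df=len(returns) - 1)
ci = (mean - t_crit * sem, mean + t_crit * sem)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, 95% CI for the mean = {ci}")
```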
8.2 The art of prognostication and its pitfalls
Prognostication is knowing something in advance, or indicating beforehand based on a factual forecast. Being able to prognosticate means making judgement(s) ahead of the facts concerning events for which there exist scant documentation, contradictory evidence, thin experience, conflicting opinions, or a combination of these factors: • Every investor and every trader would like to prognosticate a correction in the stock market, or the chances and likely timing of a recovery.
• Rare, however, are the people who can do that, and even more so those able to repeat such a performance.
Predictions about economic activity are usually based on inference; they reflect theories about economic cycles, and often have causality in the background. Scenario analysis based on economic theories tries to exploit a sign or indication of things to come, or a cause-and-effect relationship, by utilising early indicators that are more or less generally accepted. Theories are developed to explain market activity, and they are more or less generally applied as long as their appeal lasts. Let me take an example. The 19 October 1987 stock-market collapse, a 14-standard-deviations event, erased more than US$500 billion in investor wealth in the US alone (much more than that amount world-wide). It was also a major blow to one of the major theoretical ideas in finance – the efficient market theory (EMT) of Dr Harry Markowitz. Developed in the 1950s, in the euphoria of the post-World War II years, EMT rejected the then popular view that the value of stocks changes with buying trends, in reaction to speculative fever, and as a function of market inefficiencies – which was itself the de facto theory of that time. According to EMT:
• investors act rationally;
• the market is efficient; and
• stock prices reflect whatever information people have about the fundamentals.
For instance, according to EMT stock prices reflect both present and future earnings. Other economists produced their own theories that boosted EMT. One of them is that the market is a zero-sum game: Dr Maurice Allais demonstrated statistically that what one player gains the other loses. The way this theory has it:
• stock prices change only with fresh financial news, and
• price movements do not necessarily reflect crowd psychology.
While such notions were not unknown in years past, generalised acceptance of the EMT built up to nearly a market revolution in the 1980s. Finance professors as well as mathematicians and physicists built careers on Wall Street by exploiting EMT's real or hypothetical insights. Few enlightened spirits had the courage to state publicly that both EMT and zero sum were based on false hypotheses.
EMT states that one cannot consistently outperform market averages, since only unexpected news moves prices. But its theorems and postulates were found to be useless in explaining the Bloody Monday of October 1987. What new information jarred investors into reducing their estimate of the value of corporate assets by some 23 per cent in the six-and-a-half hours that the New York Stock Exchange was open? None, apparently. Many people believed the price drops themselves signalled a crash, and many investors tried to cash in their assets. The events that took place in the biggest New York stock-market calamity in 58 years seem to have had several causes, among them shortcomings in trading practices. One significant problem was the inability of dealers to answer phone calls. This was simply a case of having too many phone lines for all of them to be answered manually. Another problem was dealers' inability to process the flood of orders resulting from the crash. This sort of undocumented theoretical underpinning, while it is à la mode, is taken as the bible of finance. It is dogmatic, and it is a totally different approach from the one we examined in Chapter 7, which thrives on expert opinion, is based on dissent, and involves a valid sample of knowledgeable people. Long-hair economic theories may give their maker status in academia and maybe a Nobel prize, but they are rather irrelevant in practical terms. The way the late president Lyndon Johnson had it, an economist's theory is like somebody making pipi in his trousers: it is hot stuff, but only to him. In plain English, as with all personal opinions, theories are in no way foolproof:
• Some date back to antiquity (ancient Egypt – the seven fat and seven meagre cows denoting a seven-year business cycle).
• Others are more recent and tend to come and go, made irrelevant by new facts, even if they have exhibited a certain degree of permanence.
An example of the latter is the theory that says that rising interest rates are as nourishing to stock values as toxic waste in the garden. In more elegant terms, this theory suggests that rising rates overwhelm rising earnings. This is expressed in Figure 8.1, which relates earnings per share (EPS) multiples to applicable interest rates for 10-year treasury bonds.

Figure 8.1 Interest rates impact the way investors value equities (12-month forward P/E ratio plotted against the yield of 10-year treasuries). Source: With permission of Prudential Securities.

Statistically the regression line looks nearly linear, but as with every model this two-dimensional presentation omits certain crucial factors, which means the theory does not necessarily apply to all cases. If it did, then the eleven consecutive interest rate cuts by the Federal Reserve
in 2001 would definitely have turned the market around into positive territory. This has not yet been the case.1 Because economic theories are not foolproof, and factors that are omitted can refute them, critics suggest that their developers, as well as financial analysts and other forecasters, are often removed from reality, and by a margin, in the projections that they make and the hypotheses behind them. In a published commentary, John Dorfman said: 'I doubt there will be another US stock-market crash in this decade, because investors have been trained to buy the dips.'2 In October 2000 the NASDAQ moved massively south, and this continued well into 2001, for 17 months after that opinion was voiced. Dorfman, however, had the prudence to carry out some preventive damage control, as in that same article he stated: 'But we could see a year or two in which stocks rise two days a week and decline three days a week. In other words, we could have an old-fashioned, saw-toothed, gradual, excruciating bear market.' Another late US president, Harry Truman, is rumoured to have said to his assistants: 'I am looking for a one-handed economist.' When asked why, he replied: 'That way he would not give me advice and then immediately add: "On the other hand . . . ".' Another interesting insight into economic projections is the reference John Dorfman made to Peter Lynch, the former Fidelity Magellan Fund manager, whose predictive power regarding stock market trends had more or less gone unchallenged in the 1980s, till the saga ended. Lynch
once said that anyone who spends 10 minutes a year predicting the stock market is spending 5 minutes too much. In spite of such aphorisms, people make a profession out of all sorts of prognostications, and sometimes they also make a small fortune – or lose one. Neither is the imitation of what others have done to forecast future events, or are supposed to have done, a good policy. My principle is that a more or less accurate prediction of the future does not really occur through extrapolation of known phenomena. Rather it is made by means of conceiving and exploring nonlinearities, including discontinuities, interruptions and loopbacks. In short, this means the knowledge and skill to understand that: • Living systems, including those man-made, find themselves quite often at the edge of chaos. • But there is an underlying pattern of behaviour that we need to detect and analyse – if possible. When we are able to do so, we can position ourselves against the forces of the future. Otherwise prognostication is daydreaming and the hypotheses made to sustain it are no more than wishful thinking. A large number of forecasts are not based on facts that can be verified as a given situation unfolds, neither are many of the proposed solutions able to adjust themselves to the real world.
8.3 Predictive trends, evolutionary concepts and rocket scientists In early 1999, a senior analyst of Merrill Lynch predicted that stocks would not perform well that year. Instead, he said, a 10 per cent or better return on investment could be obtained in bonds. The market did not oblige. In spite of Russia’s bankruptcy and LTCM’s meltdown, 1999 was a bumper year for equities, while investors were negative to bonds. Nobody should be under the illusion that what financial analysts and rocket scientists say is foolproof. To appreciate the difficulties associated with prediction we should recall that rocket scientists and other financial experts are faced with situations full of uncertainties, as they struggle to find evidence for what they perceive to be future events. To be able to do a decent job, they need all the independent strands of evidence they can muster, as well as first-class tools. Only then they stand a chance to:
• see through certain patterns; and
• develop plausible hypotheses that can stand on their own.
This is what physicists and engineers have been doing as an integral part of their profession. What is new is that such methods are now being used in economics and finance – or, more precisely, in non-traditional financial research. This has happened because of the need to analyse complex situations, as well as because of cross-fertilisation by engineers, physicists and mathematicians who were working in biology, nuclear science, space research and rocket design before becoming financial analysts. However, the laws of physics are not necessarily the laws of finance, even if some similarities exist. The laws of physics are universal, while the laws of men (including the rules of finance) change in time and space. Normally, a supply-and-demand cycle corrects itself because there are a number of buyers and sellers. In a market landscape dominated by institutional investors, however, big players move as one in the same direction:
• when there are too many sellers, there are almost no buyers, and
• the result is that disparities between supply and demand become very large.
In the aftermath of such a reaction, old market mechanisms, which are based on the assumption that some buyers will be available, fail to deliver. One of the theories has been that computerised trading systems could avert these problems by making it possible for a dealer to design a client's trade on a computer and execute it by computer. After the investigation that followed the October 1987 market crash, this theory, too, fell into disfavour. Program trading was found to be the problem rather than the solution. A basic question that is common to both forecasting and the science of evolution – in short, in connection with prediction theory – is how to determine the macroscopic variables that characterise the behaviour of a complex system. This has many similarities with the quest to identify and quantify notions connected to unpredictability in the study of chaos, whether the system under study is:
• the behaviour of financial markets;
• astronomical observations in the cosmos; or
• patterns connected to weather prediction.
In visual pattern recognition connected to any of these cases, the typical formulation of the problem is intended to classify information in a manner amenable to interpretation. This requires identifying, then handling in a competent manner, independent variables and dependent variables, as well as their range of variation and other characteristics. Usually in any system, whether natural or man-made, the independent variables provide the inputs from which, given the model we have made of the real world, we aim to obtain outputs that tell us something significant about the behaviour of the dependent variables. The outputs may, for instance, be the:
• yield curve;
• volatility curve;
• cap curve; or
• some other pattern.
We have spoken of the yield curve in Chapter 6, and of the effect of rising interest rates in section 8.2 of this chapter. Estimates of expected volatility are extremely important to financial research projects across a wide array of issues – ranging from the pricing of instruments to risk control. Quite often theoretical volatility smiles guide the trader's hand into mispricing – as happened in March 1997, when NatWest sustained huge losses because of the mispricing of options. The more pain we suffer from wishful thinking and unverified hypotheses, the more we appreciate that the serious study and development of predictive models is closely associated with that of evolutionary adaptive systems. This is so true that, many experts believe, in the long run these two research areas will probably merge:
• Prediction technology is already fairly well developed in the natural sciences, by means of modelling.
• Evolutionary technology, by contrast, does not currently exist in an applicable form involving complex adaptive mechanisms and autonomous systems.
It is the opinion of several people, arrived at by guesstimating rather than through rigorous experimental principles, that one of the primary effects of predictive technology in financial markets will be to make the markets more efficient. Critics say that this is evidently nonsense – a statement made by people who confuse the use of a scientific tool, like modelling, with the domain of science as a whole (see Chapter 1).
As section 8.2 demonstrated, the argument of the efficient market theorists is self-defeating, not only because the market is not efficient but also because tradable predictability can be found in a market if it is inefficient, as most markets are. If the market were efficient, then prognostication would be quite easy for everybody, not only for the entities with the:
• best rocket scientists;
• best organisation; and
• most modern tools.
Tradable predictability can be unearthed not as a matter of luck but through steady effort and hard work. This is true for any business and for any purpose: hard work, patience and thoroughness are the primary means used to build a strong project, or an unassailable case. Prognostication is research, and researchers must:
• seek out all evidence;
• overlook no clue, however small; and
• use the latest technology in doing their work.
Experience teaches that it is unlikely that the real reasons, or the underlying laws of physics or of economics, will be discovered in a straightforward manner. Nor is it likely that a dramatic discovery will wrap things up neatly, as often happens in fictional stories with a happy ending. What is needed is to construct a web of circumstantial evidence so strong that the conclusion is inescapable. The proof of every hypothesis entering into a prediction should be a matter of basic concern, and the strands of this web should be as independently anchored and as firmly expressed as possible. In this way, if one strand breaks the whole web of a prognostication will not unravel. In conclusion, predictive technology is very important to finance, and it can have a large impact on many business and economic sectors. In any market, obvious applications are situations that involve supply and demand, as well as those requiring the adjustment of parameters in a complex process, such as the one characterising stock markets, money markets and bond markets. Predictive technology, however, has a cost – and this cost is not only financial. Beyond the implementation of prediction theory in well-defined situations concerning 'this' or 'that' financial product or market sector, evolutionary approaches are a two-edged sword, because they elucidate
changes in long-term market behaviour, such as business cycles. On the one hand, they might shed light on gradual evolution in market agents’ strategies; and on the other they are instrumental in promoting such changes and in accelerating their happening.
8.4 A prediction theory based on the underlying simplicity of systems

Prediction processes exist because even chaos has underlying rules of behaviour, but they need a theoretical background on which it is possible to explore opportunities as well as to build further notions that can serve as stepping stones. Emmy Noether was a mathematician at the University of Göttingen when, in the 1920s, she suggested that the conservation of momentum might indicate an underlying simplicity and symmetry in nature:
• This concept of symmetry was destined to have a profound effect on particle physics and in other domains.
• Noether's idea was extended to mean that nature tends to prefer simplicity, and so should designers of systems.
At the heart of the universe, beneath the apparent chaos of the world as we perceive it, there exist underlying laws. This is what Mitchell Feigenbaum said when he conceived the principles of chaos theory. Beneath what we perceive as the random movement of submicroscopic quantum physics, simple regularities and symmetries tend to occur. Some people suggest a similar statement can be made about the behaviour of financial markets when approached at the atomic level. Based on this concept, which is clearly a hypothesis, astute financial analysts, as well as physicists, biologists, engineers and mathematicians, work hard to develop a prediction theory. They use algorithms and heuristics and target not the whole universe but a small area, or subject, at a time – for instance, volatility expressed in percentage price changes. To calculate the volatility of prices of a given commodity or derivative financial instrument in the market, we usually take percentage price changes. For any given commodity these are thought to form a normal distribution, which is, of course, an approximation. A better approach to mapping actual price changes is the lognormal distribution, shown in Figure 8.2. A variable has a lognormal distribution if the natural logarithm ln of that variable is normally distributed. For instance, when it is said that
Figure 8.2 A lognormal distribution for option pricing reflecting volatility and maturity (cap value per year plotted against maturity in years)
a bond price is lognormally distributed, what is being suggested is that the rate of return of the bond is normally distributed. The latter is a function of the natural logarithm of the bond price. In their paper on option pricing, published in the early 1970s, Dr Fischer Black and Dr Myron Scholes used the lognormal distribution of prices. If P1 and P2 respectively represent the prices of a commodity, or of an option, at two different measurements taken tick-by-tick – which may be second-by-second, minute-by-minute, or on any other basis – then:
• The percentage change is given by: (P2 − P1)/P1 · 100.
• The lognormal change is calculated by: ln(P2/P1), where ln is the natural or Napierian logarithm.
For both percentage changes and lognormal changes, the fewer the observations the less reliable the inferences drawn by the model. Hence the wisdom of using high-frequency financial data (HFFD), which is now becoming the norm among tier-1 financial institutions. Figure 8.3 dramatises the difference between fine-grain observation, made tick-by-tick in regard to market prices, and the classical coarse grain. The argument that too many measurements tend to impose too much of a workload on traders and analysts only holds true for technologically underdeveloped banks, since the calculations should be done in real time through models and computers. When I hear this argument, I tend to think that:
• the organisation of the financial institution is deficient; and
• its management looks at the future through the rear-view mirror.
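As a minimal illustration of the two change measures just defined, the following sketch computes percentage changes, lognormal changes and a tick-level volatility figure for a short, hypothetical series of prices; the numbers are invented for the example.

```python
import math

# Hypothetical sequence of tick-by-tick prices P1, P2, ...
prices = [100.00, 100.05, 99.98, 100.12, 100.07, 100.20]

# Percentage change: (P2 - P1)/P1 * 100; lognormal change: ln(P2/P1)
pct_changes = [(p2 - p1) / p1 * 100 for p1, p2 in zip(prices, prices[1:])]
log_changes = [math.log(p2 / p1) for p1, p2 in zip(prices, prices[1:])]

# Volatility of the fine-grain series: sample standard deviation of log changes
mean_lc = sum(log_changes) / len(log_changes)
variance = sum((x - mean_lc) ** 2 for x in log_changes) / (len(log_changes) - 1)
volatility = math.sqrt(variance)

print("percentage changes:", [round(c, 3) for c in pct_changes])
print("log changes:       ", [round(c, 5) for c in log_changes])
print("tick-level volatility (std of log changes):", round(volatility, 5))
```

The same calculation run on hourly or daily averages would use far fewer observations over the same period, which is the point Figure 8.3 makes about the grain of the data.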
Figure 8.3 Between fine grain and coarse grain financial data the difference is orders of magnitude (fine grain: subsecond, second, subminute; intermediate grain: minute, 5-minute; coarse grain: hourly, daily; unreasonably coarse: weekly, monthly)
Therefore, the entity is incapable of handling HFFD requirements. There are many institutions whose management is wanting. What is more surprising is that those banks and other companies that feature no rigorous analytical skills and low technology are the most reckless. Their managers and traders engage in potentially dangerous transactions, putting their firms at risk. At the same time, because they lack the navigational instruments a modern company should have, these firms:
• are missing a golden hoard of market opportunities; and
• become prey to adversity in volatile environments.
An example can be taken from the commodities market. Copper is the world's third most widely used metal, after iron and aluminium. Because it is primarily employed in highly cyclical industries, such as construction and industrial machinery, its price is influenced by
economic cycles. Since the market fluctuates widely, profitable extraction of the metal depends on cost-efficient, mass-mining, high-yield techniques. So much for the supply side. On the demand side, copper's importance in world markets and its responsiveness to world events make its futures and options risky. Any leveraged instrument can turn to toxic waste when volatility increases. This exposure is evidently greater in times of high volatility, not just as a trend but also as an intraday event. Therefore yearly averages, or even monthly, weekly or daily averages, are totally inadequate to:

• pinpoint business opportunity; or
• permit gauging the amount of assumed risk.

What is needed is high-frequency financial data both for the upside and the downside, for profitable trades and for money-losing trades – in short, for control of exposure purposes. This is very important for copper futures and options, as they are designed to provide a trading device for industrial producers and users of the metal, as well as for private investors and speculators who seek to profit by correctly anticipating price changes. They all aim at earning trading profits by taking advantage of:

• a (questionable) ability to foresee trends;
• affordable margin requirements, hence leveraging (illustrated in the sketch below);
• a relatively high level of market liquidity; and
• a fluctuating level of copper price volatility.
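A minimal numerical sketch of the leverage point, using hypothetical contract terms (25,000 lb of copper and a US$2,500 initial margin, neither taken from the text), shows how a small price move translates into a large percentage gain or loss on the margin posted.

```python
def futures_return_on_margin(entry_price, exit_price, contract_size, initial_margin):
    """Profit or loss on one long futures contract, in money terms and
    as a percentage of the margin actually posted."""
    pnl = (exit_price - entry_price) * contract_size
    return pnl, 100.0 * pnl / initial_margin

# Hypothetical contract: 25,000 lb of copper, US$2,500 margin, prices in US$ per lb
for exit_price in (0.98, 1.00, 1.02):
    pnl, pct = futures_return_on_margin(1.00, exit_price, 25_000, 2_500.0)
    print(f"exit at {exit_price:.2f}: P&L = {pnl:+8.0f} USD, {pct:+6.1f}% of margin")
```

A move of two cents per pound, barely visible on a daily chart, already amounts to a fifth of the posted margin in either direction.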
It follows from the four points above that copper futures trading is not for everyone. Since the leverage inherent in the futures market can work against traders as well as for them, losses can mount up quickly if the market moves adversely – and such risks may prove substantial unless we are able to follow them tick-by-tick. The coarse grain information conveyed by many market price charts, or through daily averages, provides no basis for analytical studies in copper trading. At best, it only suggests that during certain time periods volatility in copper prices was high, while in others it was low. This serves no purpose in terms of predictability. It follows that if we wish to be able to prognosticate:

• our theories must be correct; and
• our data streams must be fine grain.
Figure 8.4 From low yield stability through chaos to higher yield stability (12-month forward P/E ratio versus yield of 10-year Treasuries)
We need high-frequency financial data to be able to extract significant information from historical statistics. We also need algorithms to massage HFFD in real time and make the data streams confess. One of their confessions may be that the level of reference at which we work has significantly changed. This is another manifestation of non-linearities in financial markets, and we are just beginning to understand how to handle such non-linearities in finance.

An example is provided in Figure 8.4, which restructures the statistics presented in Figure 8.1. The yield of 10-year treasuries finds itself at two levels of reference:

• interest rates of less than 6 per cent; and
• interest rates of more than 6 per cent.

Six per cent is the cut-off point that allows us to look in a more homogeneous and meaningful way at the cluster around the trend line characterising each of the two levels. Many business problems show this type of non-linearity, but few people understand that as the level of reference changes so does the model. A model made for the higher-up landscape of interest rates is invalid for the lower one, and vice versa.
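The idea that a different model belongs to each level of reference can be sketched in a few lines of Python; the yield/P/E observations below are made up for illustration, and a separate least-squares trend line is fitted on each side of the 6 per cent cut-off.

```python
def fit_line(points):
    """Ordinary least-squares fit y = a + b*x for a list of (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in points)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    b = sxy / sxx
    return mean_y - b * mean_x, b

# Hypothetical (10-year Treasury yield %, 12-month forward P/E) observations
data = [(4.8, 19.5), (5.2, 18.9), (5.6, 18.1), (5.9, 17.8),   # low-rate regime
        (6.2, 15.0), (6.7, 14.1), (7.3, 12.8), (7.9, 11.5)]   # high-rate regime

CUTOFF = 6.0  # per cent: the level-of-reference change discussed in the text
low = [p for p in data if p[0] < CUTOFF]
high = [p for p in data if p[0] >= CUTOFF]

for label, regime in (("below 6%", low), ("above 6%", high)):
    a, b = fit_line(regime)
    print(f"{label}: P/E = {a:.2f} + {b:.2f} * yield")
```

Fitting one line across both regimes would blur exactly the structure that the cut-off makes visible.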
8.5 Undocumented hypotheses are in the background of many model failures

Rocket scientists brought the concept of hypotheses into financial modelling in a big way. They did so from their experience in aerospace and nuclear engineering but, down to the fundamentals, hypotheses have always been in the background of financial analysis. What has been missing is a methodology for making and subsequently testing tentative statements that are made to explain some facts or lead to the investigation of others.

To be useful, hypotheses have to be reasonable, and they should not be unrelated to real life. They must be based on the background and experience of whoever makes them, and they must always be tested. The testing of hypotheses is most vital both after they are made and when the implementation conditions change. One of their basic characteristics is that they are never permanent. The way to bet is that valid hypotheses are based, among other factors, on experience, valid definitions and the mining of databases.

Webster's Dictionary states that a definition is a statement expressing the essential nature of something, by differentiation within a class. A good definition ought to help us distinguish between pertinent and sound financial factors and their opposites. Once we have determined that a given variable is important to our model, we have to subject it to a screening test in regard to:

• its pertinence to our problem;
• its relative weight; and
• the way it combines with other variables.

This can be successfully done through database mining. The advisable procedure is shown in Figure 8.5, taking intraday exchange rates as an example. A sound procedure would see to it that the same data used to develop a hypothesis is not employed again in its testing; reusing it is one of the most frequent mistakes I have observed in my 55 years of practice. Mathematics is of little help in solving problems if we are not careful with what we do, if we are too theoretical or hard-headed, or if we do not use the experimental approach. It is obviously not possible through models to mathematically define the one best solution, do away with uncertainty, or leave aside the many practical considerations usually associated with financial problems.
Figure 8.5 A procedure for online generation of hypotheses regarding intraday currency exchange rates: study of data streams and database mining; making of a tentative statement; checking online for antecedents; real-time screening of hypotheses based on antecedents; evaluation of a currency exchange trend (step-up, constant, step-down); prediction, its probability and justification
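To respect the rule that the data used to develop a hypothesis must not be reused to test it, a simple chronological split can be applied. The sketch below, with hypothetical intraday exchange-rate ticks, forms a tentative uptrend statement on the earlier part of the series and checks it only on the later part.

```python
def split_by_time(observations, fraction=0.7):
    """Split a time-ordered series: the earlier part serves to form the
    hypothesis, the later part is reserved strictly for testing it."""
    cut = int(len(observations) * fraction)
    return observations[:cut], observations[cut:]

def hypothesis_uptrend(series):
    # Tentative statement: 'the exchange rate is in a step-up trend'
    # if the mean tick-to-tick change is positive.
    changes = [b - a for a, b in zip(series, series[1:])]
    return sum(changes) / len(changes) > 0

# Hypothetical intraday exchange-rate ticks
rates = [1.0850, 1.0853, 1.0851, 1.0857, 1.0860, 1.0858,
         1.0862, 1.0865, 1.0863, 1.0868, 1.0871, 1.0869]

develop, test = split_by_time(rates)
print(f"hypothesis formed on development data: {hypothesis_uptrend(develop)}, "
      f"confirmed out of sample: {hypothesis_uptrend(test)}")
```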
Failure to appreciate that even a valid hypothesis may be invalidated because of changing conditions can lead to unusual results. For instance, during the surge in gold prices that started in April 1993, one of the algorithms being used factored a 12-year trend both in the gold market and in the New York stock market (both are dollar-denominated). This forced correlation made little sense:

• In 1982 the Dow Jones Index stood at the 800 level while gold fluctuated around US$500 per ounce (though at a peak it did cross the US$800 mark). The ratio was 1.6:1.
• In April 1993 the Dow Jones was at 3,450 while an ounce of gold languished at US$350 – nearly a 10:1 gap.
There is no reason why the Dow Jones and an ounce of gold should be more or less equivalent, move in tandem, or move in opposite directions. But some financial analysts made the hypothesis that they had found a passing anomaly on which they could capitalise. To exploit this hypothesis they built scenarios, with ratio analysis as the pivot point.

Ratio analysis can be tricky, and historical data are not always the best guide. In Roman times the price of an ounce of gold and of an ounce of silver differed by less than an order of magnitude, while today the difference is nearly two orders of magnitude. The old silver-to-gold ratio never returned, and had the Byzantines used the widening trend as an 'anomaly' on which they could capitalise, they would have been badly deceived.

On this same issue of gold prices, some financial analysts feel that a more fundamental solution is in order as a predictor; one that ultimately relates to the size of the bullion and equities market in gold. If enough calls on bullion are purchased, someone eventually has to buy the bullion and this, their hypothesis goes, will push up the capitalisation of gold mines. In formulating this statement, little attention has been paid to the fact that gold is a tiny asset class compared to the trading that goes on in other asset classes, such as the trillions of dollars of equities at large, bonds and currencies. The correlation supposed to be present has not materialised, nor is there any indication that it will.

Another undocumented hypothesis has been floated around as well: if a tiny percentage of the money that trades in currencies and fixed-income instruments each day, plus some equity money, were to move toward gold (whether into gold derivatives, bullion or stocks), the impact would be huge. Seen from an early 1993 perspective, when this hypothesis was advanced, it was suggesting that:

• the two-year-long depression in the price of gold had bottomed out and big profits loomed ahead;
• but apart from the gains made because of the scam of the gold mine in Indonesia, other profits did not come forward.

Rumours too play a key role in prognosticating the future behaviour of commodities and other assets. In April 1993 the rumour was that there was a huge short position, in excess of 350 tons of gold, sold by some Middle East sheikhs. It was said as well that the producers had sold forward at unprecedented rates, because all forward sales were good deals in a declining market.
Since short trades will have to be covered, from where will the sellers come? Most sales of above-ground gold are controlled by governments, banks and corporate treasuries – an ownership pattern leading to a seller's market. This hypothesis of profits to be made because too many people had gone short worked for a couple of months, particularly in the April/May 1993 timeframe, but for the rest of that year and for the rest of the 1990s those who gambled failed:

• Undocumented hypotheses by analysts and investors can cause havoc, and they are the antipode of analysis.
• It serves very little to use more rigorous analytical tools when the theories and statements on which our work is based are tentative or unreliable.

Another challenge is the case of extreme events. A growing number of financial analysts who use modelling techniques for prognostication ask whether the appropriate response to extreme market movements is to develop a new forecasting methodology that assumes the inevitability of more economic storms and outliers. Examples are often taken from weather forecasting. Though weather examples have a rationale of their own, there is a significant difference between those and financial events. No matter how good or how poor the meteorological forecast, one is safe in assuming that the prediction will not affect the weather in any significant manner sometime down the line. By contrast, a financial or economic forecast that is perceived to be good will influence the subsequent evolution of business transactions and of the economy, while one that is unjustifiably gloomy will do the opposite.

Therefore, financial models have to be both sophisticated and well-documented, making predictions but also endowed with means to feed back the conclusions of the forecast and test them. This must be done repeatedly. Real-life backtesting is also highly recommended, as the Basle Committee implied with the 1996 Market Risk Amendment. Otherwise, embedding extreme events in frequently used models may have the unwanted effect of increasing the chance of further financial storms.
8.6 Investment horizon and the arrow of time
Siegmund Warburg used to evaluate his fellow bankers, treasurers of corporations, major investors, as well as chiefs of state, on the basis of
whether or not they knew the difference between Kairos, the short-term time, and Chronos, the long-term time. He did the same with his associates and assistants, impressing upon them that important events happen in the longer term, and benefits come to those who can wait.

The time horizon one adopts is most critical to practically every type of investment. More often than not, the time horizon is a deciding factor in whether an investment turns out to be fruitful or turns sour. The choice of a time horizon has much to do with the personality of a professional, as well as with the job one is doing:

• Speculators have a very short time horizon; they move fast in and out of the market.
• Traders generally have a short time horizon; they hold securities in the trading book to sell them.
• Bankers are used to having a longer time horizon, as evidenced by their holding of loans to maturity. This changed with securitisation.
• Central bankers have a very long time horizon. They issue the money; their investments and their returns can wait.

Other things being equal, a longer time horizon – therefore Chronos – can do with less high technology, while old technology strangles the trader and the very active investor who targets the concept of Kairos. Just as important is the management of time, a subject to which the different economic theories and market postulates (which come and go) pay only scant attention. One of the problems with Kairos is that nobody seems to have enough of it, yet everyone has all the time that is available. People make uneven use of their time. A surprisingly large number of people lack the concept of time management, while time moves faster for those who work faster. This is the concept of intrinsic time, shown in Figure 8.6.

• Time moves fast in the New York market. American traders take sandwiches for lunch and eat them at their desks while working.
• Time moves slowly in Tokyo. The Japanese trader will go to a sushi bar and wait to be served, while business opportunities are gone.

Because time moves faster for those who work faster, people with higher intellectual productivity accomplish more in the same amount of time than people who work at a normal pace or slower. The different economic theories typically fail to account for the arrow of time; an error
Figure 8.6 Intrinsic time can be shorter or much longer than clock time (intrinsic time moves slower or faster relative to clock time)
as significant as that of always relying on the normal distribution of measurements and events. In connection with distributions, section 8.4 brought under perspective the lognormal distribution as an alternative, while Chapter 6 explained the interest analysts should have in studying the fourth moment of a distribution, known as kurtosis, and in the related Hurst exponent. Hurst was a British engineer who, when studying the floods of the Nile, observed that the events he was looking for were not normally distributed. They had a pattern of their own, as if nature had a memory:

• a flood tended to be followed by another flood; and
• a drought tended to be followed by another drought.

This is a good example of what we may observe when we consider the arrow of time. It may well be that more recent events have greater impact than distant events, because of some residual influence. A system that exhibits a significant Hurst coefficient, or fat tails, is the result of a long stream of interconnected events, which we do not quite understand:

• Where we are now is a result of where we have been in the past, particularly in the more recent past.
• With this type of happening, the short term seems to be more important than the long term.

This is a different way of saying that time matters. In this case, kairos might hold the upper ground over chronos. Events ripple forward in time but the size of the ripple diminishes until, so to speak, the ripple vanishes. Eventually chronos wins, which is another way of saying that the long term is a symmetric pattern of the short term.

This concept of a time arrow runs contrary to classical econometrics, which assumes that time series are invariant with respect to time. Few economists indeed appreciate that behind this undocumented hypothesis lies the notion that the present does not influence the long-term future, no matter what the shape of the underlying distribution, though it can have short-term effects. Hurst's findings refute the linear theories of classical economics and the notions underpinning them. Not only may distributions of events be leptokurtic or platykurtic, but the time series may also be antipersistent or ergodic. The output of the underlying system can be mean reverting. If that system has been up over a certain timeframe
a mean reverting process sees to it that it will be down in the next timeframe, and vice versa. Markets follow this antipersistent behaviour quite closely, switching between positive and negative correlation of key factors. This sort of time series is choppy, reflecting the fact that the market is volatile. Mean reversals are not the only force; there is also the widening standard deviation, which makes events in a certain setting less predictable.

Modern Portfolio Theory (see section 8.2), and other prize-winning constructs, pay no attention to mean-reverting time series and the Hurst exponent. But forgetting about them does not mean they are not present, or that they do not take their toll. Economists like series that conform to a normal distribution. They consider them a more interesting class for their work, even if such series are not plentiful in nature and are, more or less, alien to the behaviour of capital markets.
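For readers who want to experiment with the persistence and antipersistence ideas just discussed, the following sketch estimates a rough Hurst exponent by rescaled-range analysis on a synthetic return series. It is a simplified teaching aid, not a production estimator, and the window sizes are arbitrary.

```python
import math
import random

def rescaled_range(series):
    """R/S statistic for one window: range of cumulative mean-adjusted sums
    divided by the standard deviation of the window."""
    n = len(series)
    mean = sum(series) / n
    deviations = [x - mean for x in series]
    cumulative, running = [], 0.0
    for d in deviations:
        running += d
        cumulative.append(running)
    r = max(cumulative) - min(cumulative)
    s = math.sqrt(sum(d * d for d in deviations) / n)
    return r / s if s > 0 else 0.0

def hurst_exponent(series, window_sizes=(8, 16, 32, 64)):
    """Slope of log(R/S) versus log(n): H near 0.5 suggests a random walk,
    H > 0.5 persistence, H < 0.5 antipersistent (mean reverting) behaviour."""
    xs, ys = [], []
    for n in window_sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs_values = [rescaled_range(c) for c in chunks if len(c) == n]
        xs.append(math.log(n))
        ys.append(math.log(sum(rs_values) / len(rs_values)))
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))

random.seed(1)
returns = [random.gauss(0.0, 1.0) for _ in range(512)]  # synthetic, uncorrelated returns
print(f"estimated H for an uncorrelated series: {hurst_exponent(returns):.2f}")
```

A value near 0.5 indicates no memory; values above 0.5 suggest persistence, and values below 0.5 the antipersistent, mean-reverting behaviour described above.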
9 Reliable Financial Reporting and Market Discipline
9.1 Introduction
When a company wishes to have access to the capital markets, it must accept and fulfil certain obligations necessary to protect the interests of investors, supervisory authorities and the general public. One of its most basic responsibilities is full and fair public disclosure of corporate information, including financial results, whether these are positive or negative. On these financial results will be based not only current analysis but also future efforts at prognosis.

Financial disclosure is, so to speak, the opposite of prognostication, because it is based on facts and verifiable accounting records, not on hypotheses and estimates. But at the same time publicly available financial statements assist in evaluating the health of a company. The analysis of data provided in public disclosures helps in demonstrating good financial management or in identifying an impending disaster.

The company's senior management is the main party in financial disclosure, but it is not the only one responsible for the accuracy of financial statements. According to a decision by the US Supreme Court, when an independent public accountant expresses an opinion on a public company's financial statements, he or she assumes a public responsibility that transcends the contractual relationship with his or her client. The independent public accountant's responsibility extends to all of the corporation's stakeholders:

• stockholders;
• employees;
• creditors;
• customers; and
• the investing public.

A third major party in reliable financial reporting is the government and its regulatory and supervisory agencies. Rules and regulatory standards for financial reporting, as well as for auditing public companies, must be in place to safeguard public trust, and everybody must adhere to those standards. Compliance is a core issue in prudential supervision. But not every company is keen on compliance. From time to time reliable financial reporting standards are relaxed, or simply not observed, while the government may not be exercising draconian corrective action. At the same time, new financial instruments change the playing field, and old rules are no longer able to control the dependability of reported financial information.

A good example in this connection is derivative financial instruments, which since the early 1980s have been written off-balance sheet. While the rules regarding derivatives reporting started to change in the early 1990s, it was really by the end of that decade that rigorous rules were established in the UK, the US, Switzerland, Germany and other countries of the Group of Ten. The result of these rules has been to reintegrate assets and liabilities derived from derivatives into the balance sheet.

In the UK, the cornerstone regulation is the one which established the Statement of Total Recognised (but not realised) Gains and Losses (STRGL). Financial Reporting Standard 13, 'Derivatives and Other Financial Instruments: Disclosures', was released in 1998 and became effective for accounting years ending after 23 March 1999. In the US, the current rules have been established by the Statement of Financial Accounting Standards (SFAS) 133. Unfortunately, in their fine print the rules outlined by STRGL and SFAS 133 are not the same, but they do have common elements. One of their most important common basic rules is the concept of management intent. Is the entity entering into a derivatives transaction doing so for reasons of trading, or as a hedge (or investment) intended to be kept to maturity?

In this chapter I use STRGL, as a label, quite liberally. I do not treat it as a British standard but as a universal reporting norm for the case where management intent with derivatives transactions (or, at least, with some of them) is trading. The concept I present is essentially a merger of the STRGL by the UK Accounting Standards Board (ASB), SFAS 133 by the US Financial Accounting Standards Board (FASB), and the Swiss derivatives reporting
regulations. The framework into which this expanded STRGL is embedded is another real-life, rigorous financial reporting system, known as COSO, which has been implemented in the US.
9.2 Committee of Sponsoring Organisations (COSO) of the Treadway Commission and implementation of COSO

The Treadway Commission was a private initiative started in the late 1980s by James C. Treadway, Jr, formerly a Commissioner of the Securities and Exchange Commission (SEC). The Treadway Commission has been jointly sponsored and funded by the American Institute of Certified Public Accountants (AICPA), the American Accounting Association (AAA), the Financial Executives Institute (FEI), the Institute of Internal Auditors (IIA) and the National Association of Accountants (NAA):

• Originally, COSO was an abbreviation of Committee of Sponsoring Organisations of the Treadway Commission.

In sponsoring the implementation of the conclusions on reliable reporting policies and practices reached by the Treadway Commission, the aforementioned professional organisations have been joined by the US regulators. In fact, in 1998 the Federal Reserve banks were the first to implement COSO:

• Today COSO has become a generic term identifying the pillars on which reliable financial reporting rests.

The first of these pillars is the control environment. As defined by COSO, it includes integrity, ethical values, competence, organisational structure, management's philosophy, operating style, assignment of authority and responsibility, and human resource policies and practices. Another pillar is risk assessment, which incorporates risk analysis of external and internal sources, as well as risks associated with change and senior management's ability to manage change.

A third pillar of reliable financial reporting is control activities, defined by COSO as the policies and procedures identified by management to meet established objectives. A fourth is information and communication prerequisites. These concern the ability to capture and disseminate relevant information to ensure people carry out their job responsibilities. The fifth pillar is steady and accurate monitoring, not only of 'normal' events but also of stresses on activities connected to the internal control
system. The integral control system must allow management to ensure that objectives are met and that corrective action takes place.

Figure 9.1 The reliable reporting structure created by COSO: reliable financial reporting resting on the pillars of control environment, risk assessment, control activities, information and communication, and steady monitoring, grounded in financial facts and figures, accounting reports, the banking book, the trading book and other documents

All five pillars, COSO says, are necessary to support a sound management control system. Figure 9.1 gives a snapshot of COSO's reporting structure. As I mentioned, the rules and financial reporting system it defines were implemented in 1998 at the Federal Reserve Banks of New York, Boston and Chicago, by decision of the Federal Reserve Board. This was followed by their implementation in the other nine Federal Reserve Banks, then in all US commercial banks with assets of more than US$500 million.

'What we attempt to do in the COSO implementation', said William McDonough, Executive Vice President of the Federal Reserve Bank of Boston, 'is to go through the COSO-defined process of risk assessment
and communications. We also implement risk tools and decide on the sort of mechanism which will best serve these goals. In this, COSO serves as a framework which permits us to look into different types of risk and check the mission-critical processes.'

According to this definition by the Federal Reserve Bank of Boston, internal control is the overall goal into which credit risk, market risk and other risks are integrated, along with personal accountability, compliance and account reconciliation. The concept is shown in Figure 9.2.

Figure 9.2 The viewpoint of the SEC and of the Austrian National Bank: internal control encompassing risk management, compliance and account reconciliation, and personal accountability

Every risk faced by the financial institution must be monitored and controlled through an agile and reliable internal control framework, whose channels are:

• open in a communications sense;
• capillary in capturing information; and
• fast and accurate in terms of reported data.
According to the definition given by the US Committee of Sponsoring Organisations of the Treadway Commission, internal control is a process brought into being and fully sustained by the institution's board of directors, senior management, its professionals and other personnel. It is designed to provide reasonable assurance that the institution will achieve the following internal control objectives:

• safeguarding of its assets;
• efficient and effective operations;
• reliable financial reporting; and
• compliance with laws and regulations.
Notice that safeguarding of the entity's assets is at the top of the list. No matter which activities the financial institution engages in – loans, derivatives, investments, fund management or any other – its assets must be safeguarded. The concept of the STRGL, discussed in sections 8.5–8.7, is an integral part of this responsibility. The feedback is provided by internal control, which consists of many components that are part of the management process. The five most important are: control environment, risk assessment, control activities, information and communication, and steady monitoring. The effective functioning of these components is essential to achieving internal control objectives, which help in the survivability of the organisation.
9.3 Qualitative and quantitative disclosures by financial institutions

The regulators ensure that qualitative and quantitative disclosures by financial institutions provide an overview of the company's business objectives, its risk-taking philosophy, and how its different business activities fit into those objectives. An integral part of such disclosure is the types of internal control procedures that are in place for managing what is often a diverse business environment served by a wide-spanning range of instruments. Qualitative disclosures, for example, provide the bank's management with the opportunity to elaborate on, and add depth to, statements made in quantitative disclosures in the annual report. Within the evolving regulatory framework, banks, securities firms and other financial institutions are encouraged to include an overview of
key aspects of organisational structure central to the institution's risk management and control process for all types of business activities:

• lending;
• investments;
• trading; and
• innovative derivative instruments.
Another evolving guideline in prudential regulation is the inclusion of a description of each of the major risks arising from an institution's daily business – credit risk, market risk, liquidity risk, operational risk and legal risk – together with the methods used to measure and manage these risks. For instance:

• an IRB solution for credit risk;
• limit policies for exposures to market risk and credit risk; and
• value-at-risk measures of exposure to market risks (see Chapters 10 and 11).

Regulators increasingly require a discussion of how the institution assesses its performance in managing the risks which it faces. Equally important is information about the overall objectives and strategies of trading activities involving all on-balance sheet and off-balance sheet components. These components will be subjected to four processes that support them and scrutinise them at the same time. Accounting rules and regulations look after recordkeeping. Auditing examines the accounting books. Risk management uses accounting and other information to estimate exposure. The internal control system capitalises on the findings of both auditing and risk management. Figure 9.3 shows the partly overlapping nature of these activities.

If the financial reports are manipulated, they serve no purpose. To appreciate what it takes to develop and maintain dependable financial reporting practices, it is necessary to examine the stages through which the life of a business system and its reporting requirements move:
9.3.1 Study and design
Concepts and rules associated with an existing financial reporting system are reviewed to gain an understanding of the evolution in prudential requirements. New rules are developed and old rules are updated; then specifications are written and a new solution is designed to meet the evolving requirements.
Figure 9.3 The areas covered by accounting, auditing, risk management and internal control overlap: each also has its own sphere of interest
9.3.2 Implementation
The newly designed financial reporting system is detailed in terms of its integration into the institution's working environment. This is precisely what the COSO of the Treadway Commission did with the new rules – and what the Federal Reserve Board did with the regional Federal Reserve banks.
9.3.3 Operation
This is the day-to-day routine running and managing that must be carried out efficiently. Operation is a steady business, while study and design is a periodic affair – except for the process of upkeep, which is continuous. When isolated parts of specific procedures relating to particular problems develop over time, an integrative approach must be examined and tested. This is exactly what COSO has done, working on the principle that before a new reliable accounting system can be implemented it must be completely understood.
Hidden between the different phases I am discussing is another aspect that is frequently overlooked: the analysis of the true requirements that should satisfy the institution's board, senior management, shareholders, auditors, financial examiners and regulators. These parties have different viewpoints, and if this phase is skipped the new system will often be just a bigger, faster way of repeating existing mistakes – and of inviting fraud. Let me therefore briefly review some more of the fundamentals of the case I am making.

Study and design, a goal met by COSO, divides into three phases. In the first, the aim is to understand what the accounting system does, and to what degree it can be manipulated. This is expressed in terms of activities that thread through the business of the institution. In the second phase, COSO analyses what is specified as necessary for dependable accounting management, to accomplish the goals of reliable financial reporting. In the third phase, the accounting system that meets those requirements is described in a manner that responds to reliability requirements at different levels of detail, including as inputs:

• financial reports; and
• supporting databases.

One of the contributions of COSO is that reliability enhancements assure that different accounting methods and tools work together to provide a coherent description of the credit institution's financial condition. Moreover, they compose an inspecting instrument of magnifying power, since they can be used at several levels of detail – and, most particularly, they can serve the internal control system in an effective way.

What many institutions are missing today is the ability to display the dynamic mechanism of an activity dependably, in a way that provides a fairly detailed operational view. This is where STRGL's concepts and reporting tools come in. The methodology implied by COSO and the use of the STRGL as a reporting tool permit a critical look at operations, as well as observation and analysis of information inputs, outputs and the contents of database resources.

Methodologies and tools are important, but their use requires management resolve. According to the proverb, it is not worth mounting a high horse unless one is willing to perform some acrobatics to remain safely in the saddle. If one gets thrown off, it was hardly worth mounting the horse in the first instance. What many people in the financial industry fail to appreciate is that hiding the facts will magnify the bad news.
By contrast, sharpening the focus on gains and losses, as required by the new reporting rules, serves their institution’s survival.
9.4 Proactive regulation and the use of an accounting metalanguage

COSO is a methodology. STRGL and similar regulatory reporting tools or frameworks in the US, Switzerland and other G-10 countries are accounting metalanguages (as discussed in Chapter 2). A metalanguage for reliable financial statements must make the governance of the institution:

• transparent; and
• accountable.

British, American and Swiss reporting standards have recognised that, under the old system, the stumbling block in integrating risks resulting from both off-balance sheet and on-balance sheet operations is the lack of adequate procedures for identifying different types of exposure in a way which is homogeneous and manageable. Associated with this are two other facts:

• the absence of an internal culture which appreciates the need for full transparency of exposure; and
• the lack of the skills needed to develop specific models and implement an integrative approach to recognised and realised gains and losses.

The next major problem is a cross between organisational challenges and the existence of an accounting metalanguage for exposure. Risk managers must be able to cross departmental lines, creating a landscape that is open even if many managers consider that parcels of that landscape are their personal property. Still another issue is technical, and it concerns the challenge of unbundling the significant complexity and diversity:

• of financial operations; and
• of the exposure these operations involve.

Promoted by innovation, the complexity of financial transactions has changed the nature of risk. It has also extended the timeframes within which financial instruments mature and their results (profits and losses) can be effectively reported. Loans have classically been kept in the banking
book till maturity, and banks thought the only risk they were carrying was that of the counterparty. By contrast, today we appreciate that loans involve significant:

• interest rate risk;
• liquidity risk; and
• foreign exchange risk.

All this adds to reporting challenges, and it is compounded by the fact that, in a surprising number of cases, top management does not yet fully appreciate the importance of derivatives risk – and its intrusion into loans risk. Yet banks now rely much more on trading than ever before, which makes obsolete old regulatory policies and reporting practices that were not made for an intensive trading environment.

One of the major benefits provided by the STRGL, nicknamed 'struggle' by accountants, is the bifurcation of P&L because of future market behaviour affecting the derivatives contracts and other financial instruments in the bank's portfolio. The same is true of SFAS 133 and of regulations advanced in the late 1990s by the Swiss authorities. The principles underpinning these approaches are that:

• Commercial and investment banks must be entrepreneurial, because they compete with each other.
• But while they may scare their competitors, they should not take so many risks, or use so much gearing, that they scare the regulators.

Publicly, no commercial banker or investment banker wishes to see new, tougher reporting practices that assure a greater transparency of transactions. But privately, those financial executives who are sensitive to systemic risk appreciate the necessity for the new accounting metalanguage. They also understand that today, and in the future, their competitiveness is measured by their ability to:

• participate in the globalisation of business;
• proceed with rapid product innovation; and
• use sophisticated technology in an able manner.

These preliminaries to the discussion on a metalevel of financial reporting are necessary for background reasons. The effects of this multiple evolution have to be understood and managed most actively. Internal systems and procedures should ensure accurate reporting on all three
Table 9.1 Three types of risk to which the board and senior management must pay attention

Transaction risk: calculated at the time of the transaction; measured on a transaction, counterparty and instrument basis; reflecting the consequence of a deal on the bank's risk profile; till now not subject to prudential insurance, but this could change.

Position risk: accumulated in the trading book and banking book; measured cross-instrument and cross-counterparty; involving exposure in the short, medium or longer term; considered as part of P&L (when realised) and of future gains and losses when recognised.

Default risk: considered as a loss at the time of default; measured on a counterparty basis and in total exposure; accompanying the counterparty's choice, its grading, country and political risk; till now taken as a dry hole, though this may change with credit derivatives.1
types of risk outlined in Table 9.1, relating to transactions being made, positions being taken, and the likelihood of default. These exposures should be closely monitored and integrated into a comprehensive figure, which is of great interest to the board, investors and regulators.

Financial statements steadily evolve, and bankers must appreciate that they will always be in a transition phase. The old type of reporting stability is gone. Yet we must have a fairly good idea of what lies ahead, including how to value our holdings at market rates. This is increasingly critical for deciding about appropriate forward premiums or discounts and for prognosticating coming events. For instance:

• how different types of risk might affect the bank's performance and financial condition; and
• in which way the senior management in charge of credit risks and market risks should position the bank, to be able to sustain future shocks.

Narrative disclosures are required to deal with the role financial instruments have in changing the risks the company faces, or creating new ones, as well as with what the bank's objectives and policies should be in using financial instruments to manage those risks through hedging activities. Numerical disclosures need to be given for expected and unexpected volatilities of interest rates, currency rates and market liquidity. The
market value of inventoried instruments should be compared with book value, and issues must be raised relating to the effects of hedge accounting. The role of the numerical disclosures is to show the aftermath:

• historical;
• present; and
• hypothetical, in the future.

New regulations have moved in that direction. However, in the beginning the banking industry was not supportive of the financial reporting issues that led to COSO, SFAS 133 and STRGL. Much of the resistance was due to misinformation. But cultural issues change. Let me take as an example the change in opinion, as well as in substance, regarding balance sheet and off-balance sheet items, the way it was discussed in a meeting in London.

When supervisory authorities reproached the CEO of a major company because of concerns about the huge off-balance sheet exposure of his firm, the CEO answered: 'It is nonsense to look off-balance sheet. You should only control the balance sheet.' A few months down the line, the same CEO wrote to his shareholders: 'Looking at the balance sheet is not enough. You have also to appreciate the positions your company has off-balance sheet.'

Regulators and accounting standards boards cannot wait until the different CEOs change their opinion about derivatives after having suffered significant losses. They have to be ahead of the curve, because derivative instruments significantly increase systemic risk, while at the same time they reduce the ability of governments to manage currency exchange and interest rates. Quite often risks, and most specifically systemic risk, are perverse. Some bankers are confident that new financial products can be handled safely. To them, complexity does not equal risk. But others point to the large losses incurred over the eight years from 1994 to 2002, and are concerned about their ability to understand all the risks they are taking.
9.5 Defining the territory where the new regulations must apply

The emergence of new regulations is promoted by the abundance of more sophisticated and more risky derivatives products, and by the losses being incurred when entering into contracts whose further-out aftermath is unknown. Both reasons have rekindled concerns about how well the
markets are being supervised. As American, British and many other examples document, the hold governments have over market regulation needs to be rethought and resettled. The same is true of the imposition of financial discipline on governments that act as:

• borrowers;
• issuers of money;
• taxing authorities; and
• regulators
at the same time. One of the major regulatory challenges is that banking at large, and most specifically derivatives, are global processes propelled by the openness of borders and the territorial sway of regulation. Not only are borders falling between countries, but also between different financial sectors, such as banking and insurance. This makes it so much more difficult to draw and redraw the territory where new regulations must apply.

Defining the territory of cross-border rules and regulations, as well as of national regulations with cross-border impact, is much more difficult when markets exist primarily on computer databases and networks, unless we have available (or develop) models for simulation and experimentation. Besides modelling, one of the challenges in experimentation is that the old way of describing physical locations is no longer valid. This has strange effects on the power of governments, including the power to surprise the markets by setting new policies:

• The financial authorities are often astonished at how fast markets react to their policies and their moves.
• But the players also begin to wonder who, if anyone, is in charge of regulation in a cross-industry way, as industry barriers fall.

Precisely for this reason it would have been best if FASB and ASB, as well as the SEC, the Federal Reserve, OCC, FDIC, OTS and FSA, had come together to establish a common set of regulatory reporting standards for the US and the UK. This has not been the case, to my knowledge. As I have already pointed out, the procedures defined by FRS 13, which established the STRGL, and those established by SFAS 133 converge only up to a point.

For example, a basic concept underpinning the STRGL is that some gains and losses are not reported in the classical profit and loss account because they do not satisfy current accounting practices, where what is reported in P&L must have been both recognised and realised. This being
the case, there remained only two other classical accounting places where recognised but not realised P&L might be put:

• the balance sheet, within assets and liabilities other than shareholders' funds (as the Swiss are doing); and
• the balance sheet, within shareholders' funds – the traditional equity – which not only does not make sense but can also be highly misleading.

In terms of accounting principles, what is stated by the first point above is all right. It is also a good way to integrate assets and liabilities off-balance sheet with those on-balance sheet. The problem lies in the fact that the same item would change from assets to liabilities, and vice versa, depending on market value. It is different with the equities notion described by the second point above. Embedding derivatives gains and losses into shareholders' funds essentially amounts to reserve accounting and leads to new creative accounting gimmicks. This contradicts other established reporting practices in the financial industry, particularly in the UK and the US.

The British solution is that, for example, the STRGL should include gains and losses on interest rate derivatives used to manage the interest basis of borrowings, fixed rate borrowings and currency borrowings. It should also include currency derivatives where these are used to manage currency risk arising from capital invested in overseas operations and other reasons. Reporting is streamlined since:

• the realised changes in value are reported in the P&L account; and
• the recognised but not realised value changes are reported in the STRGL.

For derivative instruments, the concept underpinning the STRGL in its broader sense, discussed in the introduction, can be simplified into a four-layered structure such as the one shown in Figure 9.4. (See section 9.6 for a more detailed discussion of the headlines.) This approach has the merit of bypassing the historical costs dear to the accruals method, leading to a distinction in reporting requirements depending on whether an item is:

• out-of-current-earnings, hence to be reported in the STRGL; or
• in-current-earnings, to be reported in the income statement.

The terms 'out-of-current-earnings' and 'in-current-earnings' are my own. There is no contradiction between the meaning of these terms and the
194 Market Discipline OUT – OF – CURRENT EARNINGS, IN STRGL
A . HEDGES OF CASH FLOW EXPOSURE
B . HEDGES OF FOREIGN CURRENCY EXPOSURE
IN – CURRENT EARNINGS, IN P&L
C. HEDGES OF FAIR VALUE EXPOSURE
D. ALL OTHER TRADES THAN A,B,C
Figure 9.4
Measuring derivatives at current value and reporting gains and losses
practice with STRGL detailed by FRS 13, but the STRGL/P&L bifurcation has some interesting effects. For instance, provided a fixed rate borrowing is held to maturity, any changes in its value arising from interest rate movements that are initially recorded in STRGL would reverse in later years in the P&L. Hence there is continuity in reporting. One can look at the STRGL/P&L bifurcation as a major renewal and revamping of past accounting practices. While the notion of historical cost is familiar, the fact of measuring financial instruments at present value leading to a reporting practice like STRGL, is a big conceptual change. A growing number of financial experts see this change as a leap forward in prudential accounting. But not everybody agrees with this statement because past practices and obsolete ways of thinking die very slowly.
9.6 Measurement practices, reporting guidelines and management intent

The message the careful reader should retain from section 9.5 is that accounting standards and principles for internal control put in place by the regulators should be followed all the way by the supervised entity
and its senior management. Take settlements as an example. Are the settlements people fully segregated from those authorising the payments? Major frauds often happen because the person responsible for settlements is careless about security, leaving, for instance, an identification number and password on a sticker on his or her computer.

Another proof of carelessness on the metalinguistic level, and therefore to be covered by COSO-type principles, is the indiscriminate use of models because models are in vogue. True enough, as banks become involved in more complex products they need to use sophisticated algorithms and heuristics for instrument design, pricing and risk control. But senior management must look at these models, test them and take a critical view before they are approved. Otherwise it is likely that the models' output will not be dependable. Model risk (see Part 5) can combine with accounting risk, making financial reporting practices most unreliable, as documented by the near bankruptcy of Long-Term Capital Management (LTCM).2 Sound reporting practices like those promoted by COSO, SFAS 133 and FRS 13 reduce the likelihood of this happening. The name of the game is greater prudence in the management of financial resources.

To avoid the synergy of creative accounting and model risk, the guiding principle in the implementation of the STRGL and similar reporting regulations is that all derivatives should be recognised in the balance sheet as assets or liabilities, and they should be measured at present value. Whether realised or unrealised, gains and losses should be reported in a manner that depends on management's stated reason for holding the instrument:

• ASB's reference to a stated reason for holding the instrument is a different way of saying management intent, as SFAS 119 and 133 explicitly require.
• A clear statement of reasons and of management intent is crucial in explaining why management did a transaction, held a given position, or made some other commitment.

As an example of the second point above, Class A in Figure 9.4 includes derivatives designated as hedges of cash flow exposures. These may be hedges of uncontracted future transactions and floating rate assets or floating rate liabilities. In these cases, the gain or loss on the derivative should be reported, in the first instance, in comprehensive income but outside of current earnings. This can be done through the STRGL.
Notice that eventually, as the instrument reaches maturity, this gain or loss would be recycled: transferred from comprehensive income into profit and loss. The date of such transfer is the one management designates at the origination of the hedge, in connection with when the projected cash flow is expected to occur. The same out-of-current-earnings classification applies if the derivative is designated as a hedge of foreign currency exposure, for instance in a net investment in foreign assets. In this case too:

• The gain or loss on the hedge would be reported in comprehensive income.
• But it will be outside of current earnings, though it would offset the translation loss or gain on foreign assets.

By contrast, in-current-earnings accounting covers derivatives designated as hedges of fair value exposure. This means hedges of firm commitments, whether assets or liabilities. According to this rule, the gain or loss on such transactions should be included in the profit and loss account. Also to be recognised and included in earnings is the offsetting loss or gain on the firm contract – whether asset or liability. The reader will also observe in Figure 9.4 that the in-current-earnings class has two subclasses. The first is that of hedges of fair value exposure. The second is all other trades than those in the above category, or in the out-of-current-earnings class.

In the research meetings I have held with supervisory authorities in six different countries, I observed that both they and accounting standards bodies believe that a homogeneous treatment of all items in the trading book is a much better solution than measuring all derivatives at current value but leaving non-derivative instruments at historical cost:

• such a split would be irrational; and
• the use of two different reporting standards creates opportunities for loopholes.

An accounting practice is streamlined only when all financial instruments are reported in the same structured manner in terms of form and content. This statement is true whether one follows accruals or fair value. A bifurcation would make sense only if the use of historical costs were limited to those instruments explicitly designated, through management intent, as held to maturity because they are true hedges, while everything else is quoted at fair value.
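As a schematic of the routing logic described above (not a restatement of the standards themselves), the following sketch classifies a position's recognised but unrealised gain or loss either into an STRGL-type comprehensive income bucket or into current P&L, driven by a pre-announced management intent; the class and position names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Intent(Enum):
    # Hypothetical labels mirroring classes A-D of Figure 9.4
    CASH_FLOW_HEDGE = auto()         # A: hedge of cash flow exposure
    FOREIGN_CURRENCY_HEDGE = auto()  # B: hedge of a net investment in foreign assets
    FAIR_VALUE_HEDGE = auto()        # C: hedge of a firm commitment
    TRADING = auto()                 # D: all other trades

@dataclass
class Position:
    name: str
    intent: Intent           # stated, pre-announced management intent
    unrealised_gain: float   # marked-to-market gain (negative = loss)

def reporting_bucket(position: Position) -> str:
    """Route recognised but unrealised gains/losses by management intent."""
    if position.intent in (Intent.CASH_FLOW_HEDGE, Intent.FOREIGN_CURRENCY_HEDGE):
        return "comprehensive income outside current earnings (STRGL-type statement)"
    return "current earnings (profit and loss account)"

book = [
    Position("interest rate swap hedging floating-rate debt", Intent.CASH_FLOW_HEDGE, -1.2),
    Position("forward hedging a firm purchase commitment", Intent.FAIR_VALUE_HEDGE, 0.8),
    Position("speculative copper option", Intent.TRADING, 2.4),
]

for p in book:
    print(f"{p.name}: {p.unrealised_gain:+.1f} -> {reporting_bucket(p)}")
```

The point of keeping the intent as a declared, immutable attribute of the position is precisely the one made in the text: intent must be pre-announced and applied throughout the portfolio, not adjusted after the fact.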
There are practical issues as well, associated with having two different ways of financial reporting without making a distinction based on management intent. For instance, a dichotomy between derivative and non-derivative instruments is sometimes difficult to draw. Most transactions are essentially contracts for cash flows, and they have embedded in them rewards and risks that cross the thin red line separating derivatives from other instruments.

Management intent can be brought a notch further, as the Securities and Exchange Commission did by advancing another piece of market regulation: the accounting treatment for swaps should depend upon the way in which the trading desk has hedged its position. This would make the difference between treating a derivatives transaction by marking to market or through accruals accounting. Interestingly enough, this rule, which is part of SFAS 133, has been a compromise. The original discussion document asked for marking to market all provisions, but many institutions objected, saying that it was their intent to keep some long-term trades in their book till maturity; hence the accruals method would do. This led to the regulation differentiating trades along the line of management intent.

On the bottom line, the problem with management intent is political. Under the different ramifications of crony capitalism, management intent can be manipulated to distort financial reporting by providing a loophole: 'Yes, we intended to…but this was changed because of the law of unforeseen circumstances.' Therefore, management intent:

• should not be used selectively according to the way circumstances change; and
• should be pre-announced in writing and applied throughout the portfolio.

Another question that arises in connection with accounting procedures is what happens when an asset is only partly hedged. Should hedging percentages be used? Or should the entire asset be measured and reported at present value? Still another difficulty is how to treat gains and losses that were there before the asset began to be hedged:

• if these are recognised when hedging begins,
• then there will be opportunities for creative accounting.

A crucial issue, as well, is whether current value should be extended to all cases where the hedge is not a derivative. The list of queries does not
end here, but those I have presented tend to be among the more important. As can be seen from these references, there are no easy answers or linear solutions. Central banks, commercial banks, investment banks, insurance companies and other institutions still have to do a lot of homework.
9.7 Why fair value in financial reporting is a superior method

Supporters of historical cost contend that it reflects actual transactions and the money terms associated with them; therefore, it should be retained as a basis for reporting. The counter-argument by supporters of marking to market is that accounting practice has long found that historical cost is not enough; therefore, it is better to adopt fair value. This notion of fair value is slipping down the management layers of many credit institutions and other organisations. Bankers, treasurers and financial analysts have discovered that, in connection with risk management, historical costs do not adequately reflect capital at risk. Such failure in showing actual cost is important inasmuch as the effective control of exposure has become a foremost preoccupation among well-managed companies – as well as among the G-10 regulators.

For instance, in examining compliance by the institutions it supervises, the UK Financial Services Authority (FSA) specifically looks at clearly defined responsibilities. Such responsibilities are easy to identify when both the accounting language and metalanguage are crisp. The FSA bases its opinion on the entity's:
organisation chart; reporting lines; understanding of risk; ways used to control exposure; and reliability in reporting practices.
Another focal point of the FSA’s examination is a well-defined segregation of duties, which is assisted through rigorous accounting and reporting norms. The regulators are after a comprehensive application of this principle, particularly as regards the segregation of front-desk and back-office duties – ironically, one of the major weaknesses financial institutions are not quite ready to correct. Accounting rules and regulations written for the current financial landscape are so important because, as cannot be repeated too often, many types of risk must be brought into perspective: market risk,
credit risk, funding and liquidity risk, as well as many issues connected to operational risk, such as legal risk, information technology risk and reputational risk.3 For every one of these risks the supervised entity is expected to exercise an effective risk control that requires:

• proactive management;
• a strong internal control culture;
• streamlined organisation; and
• accounting rules which are modern and comprehensible.
In the course of the Third International Conference on Risk Management,4 Sanjiv Shah, of the FSA, gave an excellent example of the role good organisation plays in risk control. One British bank employed 30 people to take care of risk management, but they were located 30 miles away from London. Another had five people, but they sat on the trading floor, were close to the deals and could even look the traders in the eye. The second was much more effective in internal control.

A great deal of what goes into the internal control culture originates in accounting and organisation. Is the board watching over the shoulders of senior management in connection with financial reporting? Is the board alert to exposure control? Do the risk managers challenge the traders on their observance of limits? Do they ask focused questions, or do they only look at reports?

It is the responsibility of the bank’s board and executive committee to approve limits. Has the bank put in place a limits control methodology to make sure that they are observed down the line? Do the board members understand the risks being taken? Do some of the board members sit on the credit risk and market risk committee(s)? These are important guidelines that have to do with the COSO methodology rather than SFAS 133 and FRS 13 per se.

Other questions target the accounting metalanguage that we are using. What is the policy when the institution’s treasurer, the desk or the trader break(s) the limits? Some banks have a strict policy about limits overruns; others take it lightly. Limits are tolerances that should be observed. If breaking the limits is not welcome, neither is it good policy to stay too far below them. The bank must know what to do about both:

• breaches of limits; and
• their consistent underutilization.
These questions cannot be answered in the abstract. The role of the board in risk management has to be properly established: defining the bank’s risk appetite, providing the technology that can answer established challenges, and assuring that risk control is integral to business strategy.

Outsourcing of vital services is another subject over which the board must exercise direct control. This is as true of auditing as it is of information technology and other issues. Some credit institutions consider information technology not to be a core issue in banking. They are wrong. Regarding the practice of outsourcing, new directives by the Basle Committee require that:

1. An institution must be very careful when entering into an outsourcing arrangement, because it increases its operational risk.
2. The outsourcing vendor must be competent and financially sound, with appropriate knowledge and expertise.
3. Senior management must assure that the bank concludes an outsourcing contract that can remain valid over a long time period.
4. The contract should define the outsourcer’s assignments and responsibilities, including risk analysis.
5. The bank must analyse the impact outsourcing will have on its risk profile and internal control system.
6. The overall accountability for outsourced services remains with the board and senior management of the bank.

While the board should delegate authority to the executive committee for day-to-day risk management, it is important to assure that the role and responsibility of the executive committee, as well as of every board member, point towards personal accountability. This accountability includes what must be expected of risk controllers at the peak of the organisation, and it is part of the framework promoted by COSO.
Part Four What to do and not to do with Models
10 The Model’s Contribution: Examples with Value at Risk and the Monte Carlo Method
10.1 Introduction

A company is exposed to interest rate, foreign currency and equity price risks. A portion of these risks is hedged, but volatility can negate the hedges and have a negative impact on the company’s financial position. Entities usually hedge the exposure of accounts receivable, and a portion of anticipated revenue, to foreign currency fluctuations, primarily with option contracts. Models help in monitoring foreign currency exposures daily or, even better, intraday, to assure the overall effectiveness of hedged positions.

A similar statement is valid for interest rate risk, even if the portfolio is diversified and consists primarily of investment-grade securities to minimise credit risk. Companies hedge their exposure to interest rate risk with options against the event of a catastrophic increase in interest rates. Many securities held in the equities portfolio are subject to equity price risk. Companies hedge equity price risk on certain highly volatile equity securities with options.

Typically, companies have used sensitivity analysis to estimate interest rate and equity price risk. One of the benchmarks is how much a 20, 50, 100 or 200 basis point increase (or decrease) in interest rates would have changed the carrying value of interest-sensitive securities. For equities, a sensitivity test is how much a 5 per cent or 10 per cent decrease (or increase) in market values would have reduced (or increased) the carrying value of the company’s equity securities. As an alternative, or a complement, to the tests I just mentioned, companies use a value at risk (VAR) model to estimate and quantify market risks. With the 1996 Market Risk Amendment by the Basle Committee on Banking Supervision, credit institutions must use VAR and report the results for supervisory reasons.
Table 10.1 The time schedule of major regulatory measures by the Basle Committee on Banking Supervision

July 1988            Publication of the Basle Capital Accord (Basle I)
End 1992             Implementation of Basle I
January 1996         Market Risk Amendment
June 1999            First consultative paper on the New Capital Adequacy Framework (Basle II)
October 1, 2002      Third Consultation Paper (Responses by Dec. 31, 2001)
May 2003             Basle last Consultation Paper, and EU Third Consultation Document
End July 2003        End of Consultation
End October 2003     Publication of New Basle Accord
March 2004           Proposal for EU Directive
September 2004       EU Co-Decision Process Starts
January 1, 2006      Parallel Running of New Capital Adequacy Framework
December 31, 2006    Basle II comes alive
These delays should be used by banks to prepare themselves in a more rigorous way.
Supervisory controls and reporting practices change over time. As an example, Table 10.1 outlines the time schedule of G-10 agreements from the 1988 Capital Accord (Basle I) to the implementation of Basle II in 2006.

The VAR model, which is taken as an example in this chapter, is not intended to represent all of the bank’s actual losses in fair value. Neither should VAR numbers be interpreted as predictors of future results. Similarly, VAR should not be used as an alternative to setting limits; yet, as we will see in Chapter 11, several banks do just that.

All three examples in the preceding paragraph point to the wrong use of value at risk. In the general case, models are misused either because senior management, and the institution as a whole, do not appreciate what they stand for, or because they are found to be an easy (and cheap) way of avoiding something else that has to be done. Prudent companies don’t fall into these traps of model usage.

Every instrument has its rules. The best way to use value at risk is as a management tool providing an (approximate) snapshot of some exposures. The implementation of VAR includes:

• assumptions about a normal distribution of market values (and conditions) or, alternatively, Monte Carlo simulation;
• a 99 per cent confidence interval;
• a 10-day estimated loss in fair value for each market risk category; and
• backtesting procedures to gauge the model’s accuracy.
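To make these elements concrete, here is a minimal sketch of a parametric VAR calculation under the stated assumptions – a normal distribution of market values, a 99 per cent confidence level and a 10-day holding period. The position size and volatility are hypothetical figures chosen only for illustration.

# A minimal sketch of parametric value at risk, assuming normally
# distributed returns. Position value and volatility are hypothetical.
from math import sqrt
from statistics import NormalDist

position_value = 100_000_000      # hypothetical trading position, in dollars
daily_volatility = 0.012          # hypothetical 1-day standard deviation of returns
confidence = 0.99                 # 99 per cent, one-tailed
holding_period = 10               # 10-day horizon per the Market Risk Amendment

z = NormalDist().inv_cdf(confidence)           # about 2.33 for 99 per cent
var_1day = position_value * daily_volatility * z
var_10day = var_1day * sqrt(holding_period)    # square-root-of-time scaling

print(f"1-day VAR at 99%:  {var_1day:,.0f}")
print(f"10-day VAR at 99%: {var_10day:,.0f}")

The same skeleton applies to each market risk category; what changes is the volatility estimate fed into it, and the backtesting that follows to gauge whether the resulting numbers hold up against actual profit and loss.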
10.2 Concepts underpinning value at risk and its usage

As far as financial institutions are concerned, the 1996 Market Risk Amendment offered the possibility of using, as an alternative to VAR, the standardised approach to market risk management and regulatory
reporting. This was intended for less sophisticated banks, but within a surprisingly short timeframe many banks adapted themselves to model usage. This is good news, if we account for the fact that, for the majority of institutions, models were first introduced through the Basle Committee’s discussion paper, released in April 1993, which became the 1996 Market Risk Amendment.1

The parametric version of VAR is particularly appealing to banks that have no rocket scientists to develop their own proprietary models. One of the capital errors some banks made was to use VAR for credit risk as well, unaware of the fact that it simply does not work with debt instruments. Also, VAR shows insufficient recognition of risk diversification, and it pays no attention to internal controls. The misuse of VAR, and other models, by bringing them into domains for which they were not intended is a good example of how credit institutions and other companies employing low technology end up cornering themselves. While there is model risk, as will be demonstrated through plenty of practical examples in Part V, not using models and high technology (as well as misusing models) is by far the worse strategy, because today, through derivatives and other sophisticated instruments, banks navigate in uncharted waters full of all types of exposure.

Before explaining what VAR is and is not, let me present some fundamental concepts. The treasury’s value at risk is the expected loss from an adverse market movement, with a specified probability, over a period of time. Based on a simulation of a large number of possible scenarios, we can determine at the 99 per cent level of confidence that any adverse change in the portfolio value over 24 hours will not exceed a calculated amount. This is essentially what is meant by value at risk in the 1996 Market Risk Amendment. Once this basic notion is understood, value at risk can be used by any entity or investor in managing the risk of derivatives and other financial instruments.

The VAR model is far from being perfect. In essence, what it provides is a snapshot of exposure, and I discourage the notion that value at risk has some strong relation to a minimum safe level of capital:
• VAR is an approximate concept, but it can be useful as a shot in the arm.
• Clear-eyed managers and financial analysts recognise that – and they use VAR for what it is, not for what it might be.
Section 10.3 elaborates on this issue in greater detail. To do so, it capitalises on the fact that VAR models aggregate several components of price risk
Figure 10.1 The money at risk increases as the level of confidence increases (market risk, just note difference, from very low to very high, plotted against maturity in years from 1 to 10, for confidence intervals of 90, 95 and 99 per cent)
into a single quantitative measure of the existing potential for loss – more precisely, an estimate of the ‘maximum’ potential loss expected over a fixed time period, at a given level of confidence. Beyond that level of confidence, there can be more losses. Hence, the word ‘maximum’ in the above definition is imprecise. It could be replaced by ‘minimum’, in that same context, and still say the same thing. This looks like a self-contradiction, but it is not. The difference comes from the fact that beyond, say, the 99 per cent level of confidence there is a probability the losses will be greater – much greater. Figure 10.1 explains this concept:

• VAR at 90 per cent is the maximum loss at that level of confidence.
• But it is the minimum loss when three different levels of confidence are considered: 90 per cent, 95 per cent and 99 per cent.

VAR at 90 per cent will be exceeded in 10 per cent of all cases; therefore, it is hardly worth computing. VAR at 99 per cent will only be exceeded in 1 per cent of all cases (in a statistically significant sense). Still, this 1 per cent of losses can be huge compared to the estimated 99 per cent level. (More about this later on.) Even with these exceptions and approximations, value at risk provides a useful tool for measuring exposure in regard to those financial instruments it can handle (Section 10.4 explains which instruments). What the reader should appreciate is that, from a statistical viewpoint, VAR is:

• a scalar estimate of an unknown parameter of the loss distribution; and
• subject to sampling errors that can affect the accuracy of a point estimate – therefore, we have to be careful with VAR.

Confidence intervals for VAR can be derived in a parametric context within a portfolio structure with normally distributed returns. Some rocket scientists prefer to compute VAR through interval estimates, and they choose to use asymptotic confidence intervals. We are not going to be concerned, in this text, with the merits and demerits of these approaches. I only wish to underline that the correct use of VAR requires:

• a statistical framework for reference purposes;
• understanding of time variations in volatility;
• establishing the behaviour of returns in assets and liabilities;
• the use of the Monte Carlo method as a better alternative to parametric assumptions;
• a sense of extreme events and breakdowns in correlations; and
• the correct use of scenario analysis (see Chapter 7).

The use of VAR as a risk management tool should be part of daily business. The Basle Committee suggests that, in order to control risk effectively, not only should the tools we use be well understood, but an independent review of the risk measurement system should also be carried out regularly by the bank’s internal auditing. Besides auditing the risk system and the observations made through it, senior management must be actively involved in risk control and review the daily and intraday reports produced by an independent risk management unit. In other terms, risk control models must be closely integrated into the day-to-day running of the bank. The results of experimentation on exposure should both:

• be reviewed by senior management; and
• be reflected in policies and limits set by the board of directors.

In order to avoid conflicts of interest, the bank should have an independent risk control unit responsible for the design and implementation of risk management systems and procedures. VAR estimates should be fed through the internal control channels (see Chapter 9). A sound practice should see to it that this unit reports directly to senior management and is given the authority to evaluate relationships between:

• trading limits; and
• risk exposure.

VAR alone will not produce miracles. Policies and procedures make the difference between success and failure. Also, given that model risk is always present, a company should conduct regular backtesting as well, comparing the risk measures generated by models to actual profit and loss. Notice should also be taken of the fact that there exists not one, but many VARs. One of the problems with eigenmodels is that they tend to be heterogeneous, making the job of supervisors and regulators – also of the company’s own management – so much more difficult. To reduce such differences between models, the Basle Committee has fixed a number of
the parameters that affect the way the models are specified and developed. Examples are:

• a 99 per cent one-tailed distribution defining the level of confidence;
• a minimum period of statistical sampling of one year in terms of historical information; and
• the use of price changes over a 2-week period, regarding fluctuation in price volatility.

In daily practice, however, there exist conflicting requirements, like that of accounting for the non-linear behaviour of option prices. Accounting for them makes the model more sophisticated but not standard; not accounting for them makes the results of VAR questionable. Another issue leading to diversity in output is the historical correlations used in value at risk models. The Basle Committee permits banks to use the correlations within and between markets that they deem appropriate, provided that their supervisory authority is satisfied with this process.
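The following sketch illustrates the point made around Figure 10.1 – that the money at risk grows with the level of confidence – using the parametric assumption of normally distributed returns and a one-tailed quantile. All figures are hypothetical.

# Illustration of how the money at risk grows with the confidence level.
# Position value and volatility are hypothetical.
from statistics import NormalDist

position_value = 50_000_000     # hypothetical position
daily_volatility = 0.015        # hypothetical 1-day return volatility

for confidence in (0.90, 0.95, 0.99):
    z = NormalDist().inv_cdf(confidence)   # one-tailed quantile of the standard normal
    var = position_value * daily_volatility * z
    print(f"VAR at {confidence:.0%}: {var:,.0f}")

# The 99 per cent figure is the largest of the three; beyond it, losses
# can still be greater, which is why 'maximum loss' is an imprecise term.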
10.3 What VAR is and what it is not

Quite often value at risk and other models are mishandled because what they can and cannot do is so much misunderstood. VAR has frequently been subjected to wrong assumptions regarding the area of its implementation, and therefore to improper usage. Originally developed to handle market risk, it has been extended to instruments it cannot effectively address, as well as converted for credit risk. Some institutions even talk of using VAR for certain types of operational risk. This is not VAR’s role.

Even those banks that limit the use of VAR to market risk fail to appreciate that it addresses only between one-third and two-thirds of all instruments they are handling. The exact number depends on the type of bank, its products and the exact method it chooses. We should always be keen to examine whether a model handles an instrument in a meaningful sense. As shown in Figure 10.2, a survey made in late 1998 by Middle Office documented that only 7 per cent of financial institutions responded that, for market risk, the concept of VAR alone is a sufficient measure.2 VAR was originally developed by the Morgan Bank as a way to inform its senior management about current exposure. But it does not handle
Figure 10.2 According to the majority of banks, even for market risk, the concept of VAR alone is not a sufficient measure (is VAR sufficient? No: 93 per cent; yes: 7 per cent)
instruments with asymmetric tails. Neither do some institutions take note of the fact that:

• The simple, parametric VAR aggregates recognised losses under conditions characterised by a normal distribution of events.
• In no case does VAR address extreme events or circumstances, nor does it take account of leptokurtic or platykurtic distributions.

Market risk events characterised by a leptokurtic distribution cannot be handled through VAR. One of the difficulties in the proper implementation of value at risk metrics is the misunderstanding, on the part of its implementers, of its limitations as well as of the prerequisites to be fulfilled. To be used as a measure of exposure, the model must be fed with:

• market prices;
• volatilities; and
• correlations.

If they wish to achieve meaningful results through VAR, the implementers must satisfy themselves that the above three factors are well measured, databased and tracked to assure they are stable. In fact, this is in itself
a contradiction, because volatility is not stable, correlations are not well established, and over-the-counter (OTC) market prices are usually not available. This is, of course, part of model risk, which we will examine in Part V, but it is also part of mismanagement because it means that credit institutions and other companies use mathematical tools that they do not quite understand. Not understanding a tool’s implications leads to miscalculating the size of exposure. Down to basics, the model is not at fault. It is precisely for this reason that companies, their executives and their professionals should become model-literate. The benefits to be derived from simulation and experimentation are in direct proportion to the skills of:

• the people developing the models; and
• the people using the models as assistants in their daily work.

Understanding the benefits derived from models means, at the same time, appreciating their output and knowing their limitations. A correct way to look at value-at-risk is as a probability-weighted aggregation of the current status of exposure embedded in a trading portfolio under current conditions – or under various market scenarios. VAR typically assumes that:

• no trading occurs during the pricing exercise; and
• hence, there is zero liquidity in the market.

The daily estimate of value-at-risk can take the form of a daily revenues and exposure graph, which maps a holding period with (theoretically) no risk-reducing trades. Basle specifies a ten-day holding period, but the algorithm converting the one day into ten days is weak. The hypothesis underpinning it is not fulfilled in real life. What I have just stated is another reason why the computation of exposure made at the 99 per cent confidence level is not highly dependable. Typically, empirically derived correlations are used for aggregation within each of three basic risk classes:

• currency exchange;
• interest rates; and
• equities.

Reference to this was briefly made in the introduction. Across risk classes, cumulative exposure is computed assuming zero correlation among risk
classes. This reflects the diversification more or less existing among a universal bank’s business areas – but, again, such diversification in no way characterises all institutions and their distribution of assets and liabilities.

What kind of assistance are central banks and other supervisory authorities providing to commercial banks confronted with compliance with the Market Risk Amendment? According to the results of my research, this is mainly encouragement, the availability of some skills in discussing the development of models with user organisations, and an effort to examine the model’s adequacy largely based on the results of backtesting. I would think that equally important in terms of assistance is the development of a methodology for the evaluation of models being developed by commercial banks by means of knowledge artefacts, which can make backtesting a more detailed and more sophisticated enterprise. A great deal in this connection can be contributed by supervisory authorities with good experience in quantitative solutions addressing problems in the capital markets.

How are banks positioning themselves against the modelling challenge? Again, according to the results of my research, they take one of two approaches: either they develop the models that they need in-house, or they buy such models from software houses. In both cases, the reserve bank has a role to play in terms of assistance to the commercial banks, as well as in regard to auditing the output of off-the-shelf models for goodness of fit. In 1998 and 1999, in connection with the Year 2000 (Y2K) problem, the three main US supervisors – the Federal Reserve, OCC and FDIC – developed an excellent policy of auditing software producers and service bureaux acting as outsourcers to credit institutions. I believe that this policy should be reactivated and introduced to all G-10 countries as models become commodities available off-the-shelf for:

• market risk;
• credit risk; and
• operational risk.

Emphasis should also be placed on model-literacy and the preparedness of credit institutions for using models available as commodities. For many banks, buying ready-made models is far from being the optimal approach. Bought software tends to be used in a way that is inaccurate or too static. By contrast, for management control purposes, the value is in the dynamics of the solution we apply.
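Returning to the aggregation across the three basic risk classes described earlier in this section, the sketch below contrasts zero-correlation aggregation with aggregation through an assumed correlation matrix and with the fully additive case. The stand-alone VAR figures and the correlations are hypothetical.

# Hypothetical stand-alone VAR figures for the three basic risk classes, in $ millions
import math

var_by_class = {"currency exchange": 4.0, "interest rates": 6.0, "equities": 5.0}
v = list(var_by_class.values())

# Zero correlation across classes: square root of the sum of squares
agg_zero_corr = math.sqrt(sum(x * x for x in v))

# Fully additive aggregation (perfect correlation) as the conservative bound
agg_additive = sum(v)

# An assumed (hypothetical) correlation matrix between the classes
corr = [[1.0, 0.3, 0.2],
        [0.3, 1.0, 0.4],
        [0.2, 0.4, 1.0]]
agg_corr = math.sqrt(sum(v[i] * v[j] * corr[i][j]
                         for i in range(3) for j in range(3)))

print(f"Zero correlation:     {agg_zero_corr:.1f}m")
print(f"Assumed correlations: {agg_corr:.1f}m")
print(f"Fully additive:       {agg_additive:.1f}m")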
Much has to do with computer-literacy and model-literacy at top management level. Also, many board members don’t understand the derivative financial instruments with which their banks presently work. This is indeed regrettable, because if we don’t understand a product we cannot control its negative effects. Other things being equal, senior management’s involvement in risk control is the cornerstone of the successful implementation of a model-based methodology.
10.4 Historical correlation and simulation with VAR models

Originally, the 1996 Market Risk Amendment by the Basle Committee identified two types of models: historical analysis based on the normal distribution and on correlation (variance-covariance), which is parametric (VAR/P); and non-parametric simulation (VAR/S), which typically uses the Monte Carlo method.3 Nearly five years of implementation, however, have led to a bifurcation of the parametric approach (bootstrapping is discussed in section 10.5).

Starting with the more sophisticated of the above solutions, VAR/S, Monte Carlo simulation makes it possible to analyse the exposure embedded in a number of derivative instruments. Results rely on the generation of a distribution of returns. These may be asset price paths or exposure estimates; the computation uses random numbers:

• By means of values drawn from a multivariate normal distribution, rocket scientists develop future risk scenarios.
• They employ a pricing methodology to calculate the value of the portfolio, from which the VAR estimates are computed.

Note that while the outputs might have the pattern of a normal distribution, a normal distribution is not taken a priori as an original hypothesis. Basically, this method is non-parametric, because it makes no assumptions about the population’s parameters. The avoidance of a priori statements about the parameters of the distribution under study helps in gaining greater accuracy in terms of results. Another main advantage of VAR/S is that a whole range of derivative instruments, from plain vanilla to path-dependent products and other exotics, can be included in the computation of exposure.

VAR/S, however, has two downsides. One is that it is computation-intensive, and therefore time consuming, particularly so for banks that still use low technology. The other constraint is that while there are plenty of people masquerading
as rocket scientists, few really know what they are doing. Those who are knowledgeable appreciate that:

• Monte Carlo is based on random sampling within the boundaries set for the variables of the problem.
• Its most attractive feature is that it often affords a direct model of the problem being studied, but at a higher level of sophistication than VAR/P.

The Monte Carlo method is an excellent tool, but it should be used only when it is supported by appropriate expertise, when computer power is no problem, and when the company features a large database bandwidth. What we are really after with VAR/S is to have the full portfolio, or a big chunk of it, revalued on each simulation of exposure:

• A bank using VAR will be well advised to organise itself and upgrade its technology so that it can properly implement Monte Carlo.
• The user organisation must appreciate that short-cuts lead nowhere. They are also dangerous because they mislead management in terms of exposure estimates.

Whether within a VAR framework or outside of it, simulation solutions have been devised to approximate the proportional change in the value of the portfolio over one day as the values of the market variables change. These new market values are used to calculate the moments of the portfolio change and reduce the time-window of calculations. Such approximations, however, have given only average results in terms of realistic risk estimates.

Alternatively, one can use the more limited variance/covariance method and the correlation matrix which it implies. These were at the origin of the value at risk computation developed by J.P. Morgan. The reader must be aware that such an approach has two major deficiencies which see to it that the measures of exposure it provides are not so appropriate:

1. The most talked-about approximation is the undocumented assumption of a normal distribution.

In an analytical sense, this assumption of normality reduces the value of VAR’s output, even if it features the benefit of simplifying VAR calculations because it permits the use of established tables. With this approach, all percentiles are assumed to be known multiples of the
standard deviation, a process that provides only one estimate of exposure.

2. Less talked about, but just as serious, is that – as Dr Alan Greenspan has underlined – we are still in the infancy of variance/covariance computation in finance.

Behind this second and most important constraint of parametric VAR lies the question of serial independence of observed values. To make matters more complex, the VAR method is quite uncertain on how long the reporting period should be. The 10-day holding period advanced by the Basle Committee, obtained by scaling up from one day, presupposes serial independence. Serial independence means that one day’s risk results do not affect the next day’s. Evidently, this is not the case with a portfolio of securities. As an assumption, this is most unrealistic and contributes to the weakness of VAR output.

Don’t blame the model; blame the people who do not appreciate that only when the hypothesis of independence is tested and found to be valid can a longer horizon be computed by multiplying the daily standard deviation by the square root of time – in this case 10 days. Even then, the 10 days make little sense, because nobody has bothered to answer the query of why 10 days – and not 20 days, 30 days, or even 6 months or a year – should be used. Indeed, some banks and other companies use a 20-day holding period just by multiplying one day’s results by √20. These silly twists to mathematical theory are made by people and companies who think that 20 years of experience means one year’s experience repeated 20 times. With models we don’t aim to be perfect, but we should not be crazy either. Taking liberties with computational procedures is exactly what should not be done, whether with models or with bread-and-butter accounting.
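As a complement to the discussion of VAR/S in this section, here is a minimal sketch of Monte Carlo simulation with full revaluation: scenarios are drawn at random, the portfolio is revalued under each, and the 99 per cent loss fractile is read off the simulated distribution rather than assumed from a normal curve. The positions, sensitivities and pricing function are all hypothetical.

# A minimal sketch of VAR/S by Monte Carlo with full revaluation.
import random

random.seed(7)

def revalue_portfolio(rate_shock, equity_shock):
    """Hypothetical revaluation: a bond-like leg plus a non-linear equity leg."""
    bond_leg = 60_000_000 * (1 - 4.5 * rate_shock)                     # duration-style sensitivity
    equity_leg = 40_000_000 * (1 + equity_shock + 2.0 * equity_shock ** 2)
    return bond_leg + equity_leg

base_value = revalue_portfolio(0.0, 0.0)
losses = []
for _ in range(100_000):
    rate_shock = random.gauss(0.0, 0.004)      # hypothetical daily rate move
    equity_shock = random.gauss(0.0, 0.015)    # hypothetical daily equity return
    losses.append(base_value - revalue_portfolio(rate_shock, equity_shock))

losses.sort()
var_99 = losses[int(0.99 * len(losses))]       # 99th percentile of simulated losses
print(f"1-day 99% VAR by simulation: {var_99:,.0f}")
# Scaling this one-day figure to a 10-day horizon by multiplying by the square
# root of 10 presupposes serial independence, as discussed above.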
10.5 The bootstrapping method and backtesting

User organisations have developed two alternatives in connection to the version of VAR which rests on historical correlation. The newer approach is known as bootstrapping, and it constitutes a separate way of dealing with parametric VAR from the older variance/covariance method. The big challenge with bootstrapping is the need for a very good database. Few banks are able to fulfil, in an able manner, the database bandwidth requirements which it imposes.
A ‘very good database’ is one incorporating, in a detailed and accurate manner, daily movements in all market variables over an extended period of time. This requires both a first-class methodology and a lot of database bandwidth, but it also permits the computational procedures to assure that:

• the percentage changes in each market variable are accurately calculated; and
• such accuracy persists day after day, as new transactions are executed, changing the pattern of inventoried positions.

Through bootstrapping, VAR is computed as the fractile of the probability distribution of portfolio changes, which can be obtained either by revaluing the portfolio and its embedded pricing schemes, or by approximate methods. Of course, approximate methods increase the likelihood of errors. Bootstrapping relies on a stationary statistical environment in which returns rather than prices are being used. The problem is that, because there are significant non-stationary events regarding volatilities and correlations, this stationarity hypothesis is weak – let alone the fact that one should update the historical return distribution practically in real time. As with Monte Carlo simulation, few banks are able to do so.

The advantage of the method is that bootstrapping can help to move away from the assumption of a normal distribution. Also, it rather accurately reflects the historical multivariate probability distribution of market variables. Historical simulation incorporates correlations between assets, but not any autocorrelation in the data. Its limits lie in the fact that:

• some tests, like sensitivity analysis, are difficult to do with a number of instruments;
• variables for which there is no market data cannot be included in the computation; and
• for prognostication purposes one still needs to assure serial independence.

Another limitation of the method comes from the fact that what happened, say, three years ago is not representative of what happens now. Historical data may reflect completely different economic circumstances from those that currently apply. Yet long time series are most important in any analytical study, and most particularly in bootstrapping.
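A minimal sketch of the historical-simulation idea follows: today's position is revalued under each historically observed daily change, and the loss fractile is read from the resulting distribution. The return series below is synthetic, generated only so that the code runs; in practice it would come from the bank's database of daily market moves, which is precisely where the method's real difficulty lies.

# A minimal sketch of historical-simulation ('bootstrapping') VAR.
import random

random.seed(11)
# Synthetic stand-in for about three years of observed daily percentage changes
historical_returns = [random.gauss(0.0002, 0.011) for _ in range(750)]

position_value = 25_000_000      # hypothetical position marked to market today

# Revalue the position under each historical daily change and sort the losses
simulated_losses = sorted(-position_value * r for r in historical_returns)

confidence = 0.99
var_hist = simulated_losses[int(confidence * len(simulated_losses))]
print(f"1-day 99% historical-simulation VAR: {var_hist:,.0f}")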
Some rocket scientists suggest the use of GARCH models based on stochastic volatility.4 In these, returns are conditionally normal, while unconditional returns are not normally distributed. Suitably conditioned returns can be made to approximate normality, provided we know the variance, which is not often the case. Still another approach, like the one by City University, London, employs a mixed jump-diffusion function. Returns are normal, conditional on the hypothesis that there are no spikes. This model embeds crashes as endogenous variables.

The conclusion is that, whether we talk of a plain vanilla parametric solution, of bootstrapping, of the Monte Carlo method, or of any other artefact, mathematical models must be tested twice:

• as part and parcel of their development; and
• against real-life data as these become available.

Let's never forget that, as explained in Part One, models are based on abstraction and hypotheses. We may err on both counts. The abstraction might be too coarse-grained, and one or more of our hypotheses might be wrong. Also, the algorithms we write may be inappropriate. Testing is an integral part of model development. Testing is also an integral part of model usage. This is starting to be generally appreciated. With the Market Risk Amendment, the Basle Committee on Banking Supervision not only promoted marking-to-model of a bank's trading book but also specified backtesting throughout the lifecycle of the model. The rules implied by this procedure are as follows:

• With 4 or fewer exceptions per year – a quality level of 98.4 per cent or better – an eigenmodel stays in the green zone. In the green zone, there is no change in capital requirements beyond the multiplier of 3 required by the Market Risk Amendment.
• With 5 to 9 mismatches – a quality level of 96.4 per cent to 98.3 per cent – the model is in the yellow zone. An increase in capital reserves for market risk may be required, at the discretion of the central bank.
• With 10 or more exceptions over 260 days – a quality level of 96.3 per cent or less – the model is in the red zone.
The Basle Committee projects an increase in capital requirements if backtesting results find themselves in the red zone. The central bank, which has authority over the commercial bank using a VAR model, may also decommission that model if it finds itself in the red zone. With this in mind, credit institutions target improvements in model performance. An example is given in section 10.6.
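The zone rules lend themselves to a very simple check, sketched below: count the days on which the actual loss exceeded the VAR the model reported for that day over roughly 260 trading days, and map the count of exceptions to the green, yellow or red zone. The daily VAR and loss figures are hypothetical.

# A minimal sketch of the backtesting traffic-light classification.
def backtesting_zone(exceptions: int) -> str:
    """Classify roughly one year (about 260 observations) of backtesting results."""
    if exceptions <= 4:
        return "green"       # no change beyond the multiplier of 3
    if exceptions <= 9:
        return "yellow"      # capital add-on at the supervisor's discretion
    return "red"             # higher capital charge; model may be decommissioned

# Hypothetical daily VAR figures and realised losses, in $ millions
daily_var = [1.0] * 260
daily_loss = [0.4] * 252 + [1.3] * 8          # eight days on which losses exceed VAR

exceptions = sum(loss > var for loss, var in zip(daily_loss, daily_var))
print(exceptions, backtesting_zone(exceptions))   # -> 8 yellow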
10.6 Level of confidence with models and operating characteristics curves

Within the framework of the Market Risk Amendment, the computation of value at risk is based on the output of a statistical model that targets what an inventoried position is worth in the market. As we saw in sections 10.4 and 10.5, results can be gained parametrically, by estimating the variance/covariance and using this to infer the probability distribution of a position. Or, they can be computed through simulation of the actual values that the position would have taken if it had been held over the time interval under study.

With parametric VAR we assume a normal distribution with parameters µ and σ², for mean and variance, estimated through the statistics x̄ and s². To read the position's value at the chosen confidence level, we use the operating characteristics (OC) curve shown in Figure 10.3. A key contribution of the OC curve is that it offers a pattern of performance that is easily visualised and understood. The careful reader will recall what Figure 10.1 showed: that less capital is at risk at the 90 per cent level of confidence than at the 99 per cent level. This can be better appreciated in Figure 10.3:

• The value at risk at 99 per cent is A.
• The value at risk at 90 per cent is B, and B is less than A.

In banking, OC curves have been used since the mid-1980s in conjunction with expert systems for loans, where the abscissa expresses probabilities and the ordinate maps the values of a scoring system. The probabilities give the numerical value of the confidence level, in a manner similar to that shown in Figure 10.3. Operating characteristics curves have been effectively used to visualise trends in quality control and in risk taking. In industrial engineering, for instance, for any given percentage defective p in a submitted lot, the OC curve allows us to compute the probability P that this lot will be accepted by a given sampling plan. The OC curve can as
Figure 10.3 The operating characteristics curve of a statistical distribution (vertical axis: per cent probability of acceptance – the level of confidence, or type I error – with 90 and 99 per cent marked; horizontal axis: value at risk/capital at risk, with A high at 99 per cent and B low at 90 per cent)
well be used in estimating the probability of accepting lots from a flow of products with fraction defective p. Graphically, the OC curve resembles an inverted curve of a probability distribution in cumulative form. The process is one of adding up the area under the distribution at a particular value. Whether we talk of industrial production or of banking, the probability of acceptance of a lot of quality p (per cent defective) is shown on the vertical axis:

• P represents the percentage of lots accepted by an inspection plan when many lots of quality p are submitted.

In banking these lots may be loans or, alternatively, one single loan when p is the rating of the counterparty in terms of creditworthiness. These examples are no different from the first we saw in this section, where the OC curve helped in calculating value at risk in the trading book at a probability P known as the producer's risk, α, or type I error. This probability α is the level of confidence: α = P = 99 per cent in Figure 10.3. Notice that a worst-case scenario, or calculation, at a given level of confidence is not a catastrophic scenario.

On the other hand, because it is based on the hypothesis of a normal distribution, the smooth OC curve in Figure 10.3 has parametric characteristics. Say, then, that we would like to derive it through simulation without making parametric assumptions; we can do so using the Monte Carlo method. The several references made in connection to Monte Carlo have probably given the reader the message that this type of stochastic simulation allows us to elaborate, in a realistic manner, functions in domains still difficult to treat through an analytical approach. Because mathematical problems arise in fields yet unexplored or difficult to approach in an algorithmic way, we use simulation to obtain synthetic data. Application domains range:

• from cases involving the lifespan of a system, if we know only the life curves of its components;
• to the option-adjusted spread of a pool of mortgages, or likely repayment patterns of loans in a portfolio.

The following paragraphs consider practical applications of the Monte Carlo method. Let's start with the following assumption: we have mapped into the model the current positions in the trading book, and made a hypothesis about how these positions are likely to vary over some future interval, which we define. For this purpose we specify either a
deterministic or stochastic law of motion for our positions going forward. If, for instance, this is an interest rate risk portfolio, we:

• subject it to a number of interest rate shocks; and
• follow paths drawn randomly from an assumed distribution.

The Monte Carlo model will calculate the interest income in each period of each path. Such stochastic behaviour of interest rates can be based around the existing yield curve, using projected variation in forward rates or some other scenario. Alternatively, we can test different assumed yield curves.

Dr Brandon Davies, Barclays' treasurer of global banking operations, presented a practical example with Monte Carlo, experimenting with net interest income.5 He assumed future interest rates that have been mapped through 300 different interest rate paths. The upper half of Figure 10.4 reflects the distribution of annual product income derived from this simulation, with variation of rates and volumes. The plot shows the probability of achieving at least some specified income level. As indicated by the results of the simulation, the position is virtually certain of yielding at least A hundred million, but may earn B hundred million and as much as C hundred million (where C > B > A).

An added advantage of Monte Carlo is that, after having generated the probability distribution of earnings, we can ascertain the minimum level of income at some confidence level, such as 99 per cent. This could be very loosely interpreted as earnings at risk (EAR). However, to calculate earnings at risk correctly, it is necessary to look at the difference between:

• actual income; and
• expected income (mean value).

Such evaluation must be done at a specified confidence level. If the expected income is C hundred million, and this corresponds to P = 50 per cent on the probability scale (see Figure 10.4), then earnings at risk at the 99 per cent level of confidence is D hundred million (D < A). Notice that a similar approach can be followed by using fuzzy engineering.6 The resulting platykurtic distributions will permit us to accumulate the outcomes. A significant benefit to be obtained either through fuzzy engineering or Monte Carlo is that it becomes possible to
Figure 10.4 The use of Monte Carlo simulation in connection with income from interest rate instruments (upper panel: a fractal diagram of net interest income – per cent probability P against level of income, with points D, A, B and C from low to high; lower panel: a fractal diagram of hedged versus unhedged income)
quantify, in a dynamic manner, the impact and benefits of specific hedging strategies. We do so:

• by modelling the position before and after hedging; and
• by comparing the resulting change in earnings at risk.

The same approach to a quantitative solution can be used for benchmarking hedging performance. This, too, has been presented by Brandon Davies in his lecture, and is shown in the second half of Figure 10.4. By comparing actual hedges against benchmark hedges, it is possible to measure the value added by specific hedging strategies. The Monte Carlo simulation of hedged and unhedged positions shows a rather significant benefit connected to hedging, as the OC curve steepens. A different way to interpret this is that in the mid-range (P = 40 per cent to P = 60 per cent) the hedged income profile is narrower, which indicates that there is less volatility around mean values. Notice also that the concepts entering into this solution are simple, and the mathematics is relatively easy to handle. Monte Carlo simulators can be bought off the shelf. What is needed is the will to use them, the skill to make them work for our cause, and the technology to back them up.
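As an illustration of the earnings-at-risk idea discussed above, the sketch below simulates a number of interest rate paths, computes net interest income on each, and takes the difference between the expected income and the income level achieved in roughly 99 per cent of the paths. The balance-sheet figures and rate dynamics are hypothetical, and are not those of the Barclays example.

# A minimal sketch of earnings at risk by Monte Carlo simulation of rate paths.
import random
from statistics import mean

random.seed(3)

def net_interest_income(rate_path):
    """Hypothetical: income from floating-rate assets less a fixed funding cost."""
    assets, funding_cost = 2_000_000_000, 55_000_000
    return sum(assets * r / len(rate_path) for r in rate_path) - funding_cost

incomes = []
for _ in range(300):                          # 300 rate paths, as in the example above
    rate, path = 0.04, []
    for _ in range(12):                       # monthly steps over one year
        rate = max(0.0, rate + random.gauss(0.0, 0.002))
        path.append(rate)
    incomes.append(net_interest_income(path))

incomes.sort()
expected = mean(incomes)
worst_1pct = incomes[int(0.01 * len(incomes))]   # roughly the level exceeded in 99% of paths
print(f"Expected income:        {expected:,.0f}")
print(f"Earnings at risk (99%): {expected - worst_1pct:,.0f}")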
11 Is Value at Risk an Alternative to Setting Limits?
11.1 Introduction

The core of the research that led to this book has been ways and means for more efficient internal control, and improvements in the role models play in the management of risk; so it is self-evident that the subject of limits has been discussed both with the regulators and with commercial and investment bankers in Germany, Switzerland, the United Kingdom and the US. The overwhelming opinion has been that limits should be set by the board, and that the system of internal controls should assure that they are observed. There was, as well, a convergence of opinions on two other important points:

• globalisation makes the setting and observance of limits both more urgent and more complex; and
• while value at risk is a valuable means of measuring exposure, it is in no way a substitute for limits.

Among other regulators, the Swiss Federal Banking Commission, for instance, stated that it is incorrect to substitute limits by VAR. Banks need both limits set by the board and value at risk calculations. For trading risk, a bank might use VAR if the board decides to set a value at risk limit on a daily basis. But this is not usually done, and in any case the middle and lower levels in a credit institution require:

• liquidity limits; and
• concentration limits.
Diversification in risk taking will not come as a matter of course. Therefore, the study and establishment of prudential limits and of diversification goals is a core function in any financial institution. Attention should as well be paid to the fact that sometimes concentration may migrate to other domains. For instance, limits placed on equities and on indices might induce risk-taking, with no limits, in instruments where none have been placed, such as interest rates and commodities; hence the need for safeguards to be established by senior management, assuring that:

• there is an enterprise-wide limits strategy;
• limits are observed on an individual basis; and
• they are not taken as a sort of cumulative value at risk.

Several credit institutions expressed the opinion that the desks and field operations should be partners in the establishment of limits by senior management. Some said the best approach is bottom-up; but in others the policy being followed is definitely top-down. Figure 11.1 reflects both concepts. With either one of them, the crucial question is who makes the final decision. In my book, this should be a basic responsibility of the board.

Another key element in good governance – where models can help – is transparency. As a result of globalisation and of an acceleration in product innovation promoted by technology, new concepts are now evolving on how internal control can become more efficient. A growing number of bankers believe that:

• limits and their observance should be transparent to all authorised people; and
• assisted through models, internal control should provide management with feedback on the observance of limits.

The more technologically advanced banks said that, for better supervision, this feedback should be available in real time, helped by the best tools science and technology can provide. Credit institutions with experience in knowledge engineering use agents (knowledge artefacts) to track interactively the observance of limits, bringing trends and possible violations to senior management's attention.1 Tier-1 institutions should also see to it that limits for private and institutional investments, derivatives trades, and credit lines are steadily followed up, with all deviations immediately channelled through the internal control system. The same should be the case with feedback on
Figure 11.1 Establishing limits: top-down or bottom-up? (board and executive committee at the top, desks in field operations at the bottom; top-down decision versus bottom-up proposal)
the observance of risk policies concerning clients and correspondent banks, as well as brokerage operations and all aspects of assets/liabilities management.
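A minimal sketch of the kind of limit tracking described in this section is given below: it flags both breaches of limits and their consistent underutilization, so that exceptions can be channelled through the internal control system. The desks, limits, exposures and the underutilization threshold are all hypothetical.

# A minimal sketch of an agent-like limit monitor. All figures are hypothetical.
limits = {"equity desk": 25_000_000, "fx desk": 15_000_000, "rates desk": 40_000_000}
exposure = {"equity desk": 27_500_000, "fx desk": 3_000_000, "rates desk": 32_000_000}

UNDERUTILIZATION_THRESHOLD = 0.30    # hypothetical: flag usage below 30% of limit

for desk, limit in limits.items():
    used = exposure[desk]
    utilisation = used / limit
    if used > limit:
        print(f"ALERT  {desk}: limit breached ({used:,} vs {limit:,})")
    elif utilisation < UNDERUTILIZATION_THRESHOLD:
        print(f"NOTICE {desk}: limit consistently underused ({utilisation:.0%})")
    else:
        print(f"OK     {desk}: {utilisation:.0%} of limit used")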
11.2 Establishing a policy of prudential limits

Next to the possibility of huge losses, which can wreck a credit institution, reputational risk is a key reason behind the establishment and observance of prudential limits. Once a company's reputation is ruined, its ability to generate income evaporates. Yet limits are resented by traders, loan officers and bankers, because they conflict with their independence of action and they enable senior management to look over their shoulders. In many credit institutions limits also pose cultural problems. Without limits, every business opportunity looks like a chance to make a small fortune – a chance one may not have had for a long time. Nick Leeson
must surely have thought so when, through the exposure that he assumed on behalf of Barings, he brought his bank to bankruptcy. Another cultural challenge is that senior managers must be able to:

• take the information raised by transactions;
• compare it to all sorts of limits set by the board;
• make sense out of these tests; and
• take immediate corrective action.
A credit institution most definitely needs limits for credit risk (see Chapters 4 and 5). The surest way to ruin a business is to run it out of cash, capital and reserves. Day-in and day-out, year-in and year-out, the risks a bank is taking can be divided into two large classes:

• The dealing room, which represents roughly two-thirds of the total exposure, and involves both the counterparty's creditworthiness and market risk.
• Credits and loans, which typically are under the commercial division and concern the other third of the global risk the institution is taking.

Limits must evidently be set both for credit risk and for market risk (eventually also for operational risk). For instance, equity derivatives require primarily market risk limits, but credit derivatives2 need both – though, of the two, credit risk limits are the more important. Credit risk and market risk limits always have to be managed:

• within broader overall limits established by the board; and
• by area of operations, with the centre having the power to override local decisions.

Banks that use high technology go through frequent re-examinations of creditworthiness and of market price risk. In connection to equities, they do so several times a year, by stock exchange, by the industry where investments are made, and by the individual equity and currency involved. In connection to exposure to equity derivatives, this control process must be carried out much more frequently than for equity investments that are not leveraged:

• Only the most clear-eyed financial institutions appreciate that limits for leveraged and for non-leveraged instruments are not at all the same thing; the former must be much more rigorous than the latter.
• The amount of leverage should be used as an indicator of how fast a limit can be reached through a transaction that looks as if it is within established norms but can quickly run out of control as the market changes.

Whether we talk of loans, bonds, equities or other commodities, both detailed and compound exposure are important. With derivatives, including those transactions done for hedging, exposure can grow at an exponential rate; and because VAR maps assumed risks with a latency, it cannot be relied upon for prudential management control. By the time the VAR message reaches the CEO and the board, it might be too late.

Compound risk is another challenge. In daily practice banks always hold more than one risky asset at a time. Furthermore, the composition of the portfolio is constantly changing. To be in charge, it is necessary to calculate the risks on individual assets and liabilities properly, but though necessary this is not enough. We also need a methodology to aggregate risks into a compound exposure measure for the whole trading book and banking book. Well-managed banks appreciate that the job of establishing limits, and implementing testing procedures to assure they are observed, is unrelenting. Figure 11.2 presents the four quarter-spaces of a methodology that covers credit risk and market risk. It addresses both tick-by-tick intraday activities and the exposure accumulating in the bank's portfolio.

An important component of any study that computes compound risk is how commitments in the bank's portfolio correlate with one another. Institutions usually say that they follow a policy of diversification. This is rarely supported by the facts brought to attention in the aftermath of an analytical study that targets the concentration of risk. To calculate compound exposure it is necessary to understand how prices move in relation to each other. Two assets move in concert if, when the price of one rises, the other's price rises too. They move in opposite directions when the price of one rises while that of the other falls. This is what theory says, but practice often reserves surprises:

• In the past, when interest rates rose equity prices fell, and vice versa.
• Today, leveraging, derivatives and simultaneous massive going short/going long by hedge funds see to it that, within certain ranges, interest rates and equity prices might rise or fall at the same time (see also section 11.6 on non-linearities).
Figure 11.2 Establishing limits and testing procedures for all main types of risk (quarter-spaces by type of risk – credit risk, market risk – and area of control – tick-by-tick intraday limits, portfolio; covering counterparty limits, instrument limits, country limits, client-by-client simulation of AA, BB, default and rescheduling, and 'what if' analysis for interest rates, exchange rates, equities, bonds and derivatives; if the bank is leveraged, then stress scenarios are a 'must')
This has two results. First, models built according to the first point above are no longer able to prognosticate or control exposure when non-linearities set in. Secondly, limits should be set in a way that is much more sophisticated than in the past, including liquidity limits that can act as a common denominator in going short and going long.

We have to be able to establish which assets and liabilities in our portfolio are correlated, and which are not. If the prices of all assets in our portfolio are correlated, then the risks are additive. This is the concept that underpins the European Union's Capital Adequacy Directive. In real life, no assets are perfectly correlated all the time, but to reap benefits from diversification requires a very steady watch indeed. An old principle in investments is that:

• if the assets in a portfolio move together, the risks are added together;
• if they move in opposite directions, the risks are netted.

Netting, however, is an imperfect art at best. In the spectrum between these two cases, risk needs to be progressively defined. The problem is that the number of correlations that have to be
calculated goes up rapidly with an increase in the number of assets: with n assets there are n(n – 1)/2 pairwise correlations to estimate. Doubts also exist as to whether managing risk through correlation is conservative enough. Correlations are not stable over time, and can be subject to sudden change. This upsets past calculations, because concentration and diversification are important considerations in the study of limits. It was not always this way but, starting in 2006, with the New Capital Adequacy Framework, concentration and diversification will also play a crucial role in the calculation of capital requirements. The 8 per cent reserves of the 1988 Capital Accord will give way to flexible levels that are going to depend on a number of critical factors. Among other things, a reserve of 12 per cent is being discussed for highly leveraged institutions because of their lending to hedge funds.
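The sketch below illustrates the compound exposure calculation this section refers to: the aggregate figure moves between the fully additive case and a netted figure, depending on the signs and sizes of the pairwise correlations. The stand-alone risk amounts and the correlation matrix are hypothetical.

# A minimal sketch of compound exposure aggregation with correlations.
import math

standalone_risk = [3.0, 4.0, 2.5]          # hypothetical stand-alone risks, in $ millions
corr = [[1.0, 0.8, -0.2],
        [0.8, 1.0,  0.1],
        [-0.2, 0.1, 1.0]]

n = len(standalone_risk)
print("pairwise correlations to estimate:", n * (n - 1) // 2)

compound = math.sqrt(sum(standalone_risk[i] * standalone_risk[j] * corr[i][j]
                         for i in range(n) for j in range(n)))
additive = sum(standalone_risk)            # assets moving together: risks simply add

print(f"Additive (perfectly correlated):    {additive:.1f}m")
print(f"Compound with assumed correlations: {compound:.1f}m")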
11.3 Limits, VAR and market risk

The bank's policy should state explicitly that market risk within the bank's trading book must be managed in accordance with the limits set by top management. As section 11.2 underlined, this must be done in detail, taking account of leverage. It must also reach the level of desk and trader. In terms of compound exposure, it should be expressed as a maximum overnight value at risk – which, post-mortem, is controllable through the computation of VAR, keeping in mind that:

• The maximum overnight value at risk limit is dynamic, and therefore reconfigurable over time.
• Its test by means of the VAR algorithm must be done daily at the 99 per cent confidence level, with intraday being a better option.

Figure 11.3 dramatises the reason for this statement. Mid-session the equity index may well have broken through the equity exposure limit set by the board, but if the reporting cycle is daily, damage control cannot be done till the next day's session at the stock exchange. By that time it may be too late. (Let's recall, among other facts, that on 4 September 2001 Hewlett-Packard's stock price fell by 18.5 per cent after the announcement of its take-over of Compaq. The next day it fell by another 7.5 per cent.)

Both planned and actual exposure are necessary to provide a basis for damage containment. The two together make a good tool for management control. The daily value at risk may increase as a result of additional assets being managed on a marked-to-market basis, and the ongoing
Figure 11.3 Intraday changes in an equity index, and therefore in market risk, may totally upset equity limits (axes: market price (just note difference) versus intraday change)
process of volatility affecting the types of risk being measured and incorporated into the plan versus actual model. If a plan versus actual evaluation of exposure is going to be worth its salt, then the figures we are using must be dependable. Apart from the limits that must always be considered in the planning phase, the actual value at risk may lead to re-evaluation of capital requirements. The VAR amount is essentially subtracted from the capital base. Regulators don't fail to take action in this regard:
• The 1988 Capital Accord specified 8 per cent of assets, and the Bank of Japan has defined 6 per cent of assets as a crisis level.
• At the end of 1996 the Japanese Finance Ministry ordered the Hanwa Bank to suspend operations because its capital was down to 4 per cent of assets.
Provided the figures that are computed by banks and reported to the authorities are dependable, this approach to combining VAR with capital adequacy also makes feasible an effective linkage between market risk and credit risk. Until 2006, therefore, it closes one of the loopholes in the 1996 Market Risk Amendment, which did not combine the two risks into one integrated figure. After 2006 the New Capital Adequacy Framework takes care of that loophole. Notice, however, that the correlation between marking-to-market and marking-to-model, through VAR or any other artefact, will never be perfect. It is therefore important to examine periodically the correlation
Figure 11.4 The correlation between marking-to-market (just note difference, JND) and marking-to-model (JND) is not perfect, but it can be revealing
coefficient between the two results, as shown in Figure 11.4. No rules can be expressed as a general case, and each bank should compute its own correlation coefficient for each homogeneous group of its instruments. As a guideline, correlations of less than 0.80 indicate unreliability between the two sets of values. While I strongly advise tuning the system of marking-to-market and marking-to-model, the careful reader will notice that I am in no way suggesting a procedure through which VAR replaces limits. The concept that I advance is that of their complementarity in a management control system based on plan versus actual evaluation. On the contrary, I find the statement by some institutions that 'assigning risk limits through VAR has several advantages' to be wrong. It becomes right if the word 'advantages' is replaced by 'disadvantages'. Institutions that put the system of individually assigned limits on the back burner and rest their risk control hopes only on VAR damage their management control policies and practices. While there might be some gain if limits are made comparable across products, desks and units through a quantitative expression of exposure, the disadvantages that result from abandoning individual limits outweigh the advantages by a wide margin. VAR does not necessarily facilitate the formation of a common risk language across the organisation, as is alleged, because – as already explained in Chapter 10 – it only answers between 35 per cent and 65 per cent of computational requirements regarding exposure. (The exact figure depends on the type of institution, its products, its markets, its model(s), the data on exposure which it uses, and other factors.) Besides:
• VAR is so often manipulated to cover risks rather than make them transparent that it ends up doing risk control a disservice; and
• few bank executives really understand what they should do about the VAR figures they receive, while they usually appreciate the barrier presented by classical limits and the fact that this barrier is being broken.
True enough, the use of static limits is no longer rewarding. Slowly evolving instruments, as loans used to be, can be controlled by static figures. Dynamic instruments in a fast-changing market are a totally different ballgame. What the bank needs is a polyvalent system with real-time internal control as an integral part of the solution (see Chapter 9). This is precisely the reason why I promote both the plan and the actual aspects of a valid approach. As I never tire of repeating, VAR is a metric after the fact, while limits are set before actual operations to guide the hand of managers, traders and loans officers. The goals and functions of VAR and limits are different, and they should not be confused unless one wants to escape internal control. Nor does the argument hold water that VAR is superior to limits because it is frequently updated. The most frequent VAR computation by banks is daily, but as we have already seen, events happen and limits are broken intraday, tick by tick. By the time VAR catches up with the iceberg of an inordinate exposure, the bank might have sunk like the Titanic. Let's not forget either that many institutions feed into daily VAR not daily but monthly data – as happened with UBS in regard to its LTCM exposure.3 Finally, statistics can be revealing about how the majority of experts think about VAR's contribution to risk control. In its November 1998 issue, Middle Office published the response to a question posed to professionals on whether, for market risk, the concept of VAR alone is a sufficient measure of exposure: 93 per cent of respondents answered: No! Only a mere 7 per cent said: Yes!
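A minimal sketch can make the before-the-fact role of limits concrete. The desk names, limit amounts and trades below are hypothetical; the point is simply that a limit check can run on every tick, whereas the VAR figure arrives only with the next reporting cycle.

from collections import defaultdict

DESK_LIMITS = {"equity_desk": 50_000_000, "fx_desk": 20_000_000}   # hypothetical notional limits
positions = defaultdict(float)

def on_trade(desk, notional):
    """Update the desk position on every tick and flag a breach immediately."""
    positions[desk] += notional
    if abs(positions[desk]) > DESK_LIMITS[desk]:
        print(f"LIMIT BREACH on {desk}: {positions[desk]:,.0f} "
              f"against limit {DESK_LIMITS[desk]:,.0f}")

# Intraday stream of trades; the breach is caught when it happens,
# not at the next day's VAR report.
for desk, notional in [("equity_desk", 30_000_000),
                       ("equity_desk", 25_000_000),
                       ("fx_desk", 5_000_000)]:
    on_trade(desk, notional)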
11.4 The impact of level of confidence on the usability of VAR
The reader is by now well aware that value at risk is a statistic that quantifies the likelihood of downside exposure of an asset or portfolio to market factors. It is the expected loss over a given time interval, under normal but dynamic market conditions, at a given level of confidence – for instance 99 per cent. As we have seen in Chapter 10, when the operating
characteristics (OC) curves were discussed, this corresponds to a Type I error (or producer's risk) of α = 0.01. The reason why I reiterate these notions is that many banks using VAR don't really appreciate the impact of a level of confidence on the meaning of the numbers they receive. This being the case, they don't really know how to read VAR, nor do they understand its real significance in terms of exposure. Let me repeat a concept I have already explained:
• The expected loss at 99 per cent is not the worst case.
• The real losses might be far higher than that, depending on the shape of the distribution underpinning the VAR computation.
In all likelihood, sometime in the future when VAR is generally considered an obsolete model, two things will have been gained from this exercise: model literacy and the culture of confidence intervals. Confidence intervals should be used not just in connection with value at risk but in every implementation of statistical models where a resemblance to a normal distribution can be retained as acceptable. On the other hand, experience and hard-learned lessons will condemn other practices, like using VAR instead of limits. To appreciate a little more the message given by VAR, and its irrelevance as a limit, it should be recalled that this algorithm was originally developed to evaluate trading portfolio risk after Dennis Weatherstone, then chairman of JP Morgan, demanded to know the total market risk exposure of his bank at 4:45 p.m. every day. This request was met with a daily VAR report at that specified time – with no particular connection to limits set desk-by-desk. VAR was conceived as a snapshot that informs the CEO about the level of exposure. At the extreme, it tells whether the institution is still a going concern or has gone under. The VAR algorithm is not intended to be precise but, rather, to serve as a quick reference number – an approximate message-giver.4 Like its predecessor RiskMetrics, also developed by JP Morgan, VAR's methodology involves two steps:
• A portfolio's exposure to prescribed risk sources is identified. These sources are transactions in interest rates, foreign exchange, equities and derivative instruments.
• The portfolio VAR is estimated using database mining. Exploiting existing data makes it possible to measure a company's risk exposure across its dealings and portfolio assets.
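To illustrate these two steps and the role of the confidence level, the following sketch computes a one-day VAR by historical simulation on invented daily profit-and-loss data. It is only an illustration of the mechanics, under the assumption of a simple percentile-based calculation, not a description of any particular bank's model.

import numpy as np

rng = np.random.default_rng(0)
daily_pnl = rng.normal(0, 1_000_000, 750)   # three years of hypothetical daily P&L

def value_at_risk(pnl, confidence):
    # one-day VAR: the loss not exceeded on a fraction `confidence` of days
    return -np.percentile(pnl, 100 * (1 - confidence))

print(f"99.0% one-day VAR: {value_at_risk(daily_pnl, 0.99):,.0f}")
print(f"97.5% one-day VAR: {value_at_risk(daily_pnl, 0.975):,.0f}")
print(f"95.0% one-day VAR: {value_at_risk(daily_pnl, 0.95):,.0f}")
# The lower the confidence level, the smaller the reported exposure;
# the cases left out of the calculation may hold the extreme values
# that matter most.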
The Basle Committee adopted VAR in connection with the 1996 Market Risk Amendment because it was the best model publicly available at that time. But it is not perfect. G-10 regulators consider VAR an acceptable yardstick to meet risk reporting requirements. Other metrics which can express exposure are cash flow at risk, capital at risk and earnings at risk. A quick computation of risk is useful to management only when a level of confidence is observed. Chapter 10 presented the reasons why, other things being equal, if we reduce the level of confidence we also reduce the reported exposure. This reduction of the level of confidence is self-deceiving, yet it is what some banks do when they report to their senior management:
• VAR at the 97.5 per cent confidence level, with 2.5 per cent of all likely cases left out of the calculation, or
• VAR at the 95 per cent level, with 5 per cent of all possible loss cases not being taken into account.
It is like lying to one's own lawyer or doctor. Management without an understanding of what the level of confidence means does not have the know-how to appreciate the difference – and the damage being done. The same is true of anyone who looks at reported VAR figures and is satisfied with what he or she sees without really knowing what lies behind these figures. If there were no other reason for not using VAR as a substitute for limits (and there are plenty of other reasons, as already explained), what the last few paragraphs have stated would give plenty of justification for keeping the time-honoured system of detailed limits in place. This does not diminish the need for a daily VAR; it simply avoids confusion (and wishful thinking) in management's mind. Model literacy should be complemented by literacy in the interpretation of the different faces of risk, in a market more challenging than ever. Today, in the general case, investors and shareholders lack information about how to interpret risk exposures. Analysts' estimates do not necessarily mirror the true company risk, hence practically everybody might experience unpleasant surprises on earnings, profits and losses – in fact, all the way to the survival of a company as a going entity. In appreciation of that, some banks have been working to improve the classical VAR methods for internal risk measurement purposes. Certain institutions are now using a method known as Time-Until-First-Failure (TUFF), which is considered to be more dependable than VAR.
With TUFF, a sequence of days is plotted and the outlier is the occurrence of a loss exceeding a predetermined value; hence, a limit. This method is interesting because it permits using either or both of two techniques already popular from quality control and reliability engineering:
• mean time between failures (MTBF), leading to the use of the Weibull reliability algorithm (see the following paragraphs); and
• statistical quality control (SQC) charts by attributes, which permit putting in place a system for detecting limit abuses.
Based on pioneering statistical research in the 1920s, and on developments in connection with the Manhattan Project during World War II (which have been thoroughly tested for over 50 years in the manufacturing industry), quality control charts make it possible to bring to the immediate attention of the board, the CEO and senior management whether each desk (and each trader) keeps within authorised limits.5 The pattern established over time by quality control charts helps to convey the seriousness and spirit of risk management, and its underlying culture, which not only sets guidelines but also ensures these are enforced. Yet, as my research documents, few financial institutions make effective use of statistical quality control charts, because they lack appropriate skills and their management is not versatile in advanced statistical tools. A similar statement is valid regarding the use in finance of the Weibull distribution, developed for reliability engineering studies. The exploitation of mean time between failures offers many advantages, among them the ability to use TUFF metrics and to prognosticate the reliability of the organisation's internal control system. Walodi Weibull's reliability algorithm, which dates back to the missile studies of the 1950s, is:

R = e^(–t/T)
where: R = projected reliability; T = mean time between failures (MTBF); e = the base of natural (Napierian) logarithms; t = the time over which we project a system's reliability under pre-established operating conditions. TUFF can be used as the best estimate of MTBF (that is, T) in a financial trading environment, while t can be set equal to one day, one week or one month, depending on the timeframe we are examining. For equal T, the larger the timeframe t, the lower will be the resulting reliability. This reliability algorithm can also be used to estimate risk given the
return we are seeking (in this case t) and the return T available with an instrument free of credit risk (such as Treasury bonds).6 Such a calculation is of great help in connection with precommitment studies.
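A short numerical sketch shows how the formula behaves when a hypothetical TUFF observation is taken as the estimate of T. The figures are invented, purely for illustration.

import math

def reliability(t_days, mtbf_days):
    """Projected probability of no failure (no limit breach) over t_days: R = exp(-t/T)."""
    return math.exp(-t_days / mtbf_days)

# Time-until-first-failure: trading days observed before the first loss
# exceeded the predetermined value, i.e. before the first limit breach.
tuff_days = 40.0            # hypothetical TUFF estimate of T (MTBF)

for horizon in (1, 5, 20):  # one day, one week, one month of trading
    print(f"R over {horizon:2d} days: {reliability(horizon, tuff_days):.3f}")
# For equal T, the longer the horizon t, the lower the projected reliability.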
11.5 Can we use eigenmodels for precommitment?
The senior management of credit institutions must understand that the 1 per cent beyond the 99 per cent confidence limit in no way corresponds to a 1 per cent error in calculation. Rather, it leaves out of the computational procedure regarding exposure 1 per cent of all risk cases. This can be devastating when they represent extreme values. Extreme values can easily sneak in as spikes and outliers. One of the basic problems with the so-popular normal distribution is that it says nothing of extreme values, whose aftermath may well invalidate, and by a wide margin, even the most carefully computed risks. A different way of looking at this problem is that management should not confuse the general case with specific cases that arise for a number of reasons, and have the nasty habit of hitting hard in nervous markets. Banks with experience in the quantitative evaluation of value at risk and other algorithmic approaches, as well as with the existing pitfalls (see Part V), will appreciate that the principle of competence is also present. The job of managing risk based on VAR is for risk controllers, not for traders. Usually, the traders don't care for what VAR says, though they may care about delta and gamma hedging:7
• The best tools for the trader are those he or she can work with, because he or she understands them and sees their impact.
• Limits are understood by the trader and are easy for senior management to supervise. Hence, it is possible to work with them in a consistent manner.
The principle of competence suggests that managers and professionals should not use VAR as a substitute for good sense, which comes from experience and from understanding the business. Qualitative criteria and quantitative approaches, therefore models, are aids, not a substitute for thinking. For instance, with experience the plan versus actual concept discussed in section 11.3 can be put to advantage in computing forward capital requirements – a sort of precommitment. During the research I did in July 1998 in New York, Boston and Washington, commercial and investment bankers observed that the Federal
Reserve is moving towards reliance on eigenmodels for capital adequacy. Indeed, the Federal Reserve did so, but the other members of the G-10 did not particularly appreciate the idea of precommitment because they were afraid that it would increase the amount of risk assumed by credit institutions. This being the case, the concept of precommitment as such was abandoned by late 1999, but it did leave a legacy. This is seen in the advanced internal ratings-based (IRB) solution which, as Chapter 4 has documented, is incorporated into the New Capital Adequacy Framework and becomes effective in 2004. It pays, therefore, to briefly consider what an internal precommitment procedure through IRB might mean. Starting with the fundamentals, technologically advanced banks have been working for a considerable time with eigenmodels both for market risk and for credit risk. The same is true of independent rating agencies. Standard & Poor's, for instance, is using models in its rating. 'We rely on models to do our work, but we approach a bank's in-house models with some scepticism,' said Clifford Griep, 'because financial institutions are overleveraged entities. They don't have much capital, by definition'. True enough, but banks try to optimise their capital. When it comes to an internal precommitment, tier-1 commercial and investment bankers are fairly confident that this solution can offer them a serious competitive advantage. 'We are headed towards a situation where each financial institution will have its own way of measuring its capital requirements', said a cognisant executive. 'This will impose quite a bit on regulators because they will have to test these in-house models.' 'Precommitment is one of the subjects where opinions are divided', suggested Susan Hinko, of the International Swaps and Derivatives Association (ISDA). 'It is a very intriguing idea but many regulators say no.' Some of the regulators to whom I spoke think that precommitment has merits, but it will take a lot more development to make it a reality. Two of the problems are that:
• risks being taken by banks are not linear; and
• most of the current models are in fact an oversimplification of real life.
Not only are nonlinearities alien territory to the large majority of banks, but many problems in financial life also change from linearity to nonlinearity, depending on the area in which the values fall. This is shown through a pattern in Figure 11.5. Between −A and A the behaviour of the system mapped in this graph is linear, but it would be a mistake to think that linearity prevails beyond this limit. In fact:
Figure 11.5 Characteristic behaviour of a nonlinear system
• As a whole (between −B and B) the system is non-linear.
• But in each of the regions −B to −A, −A to A, and A to B a linear representation would do.
Nonlinearities are increasingly seen as important elements of analytical financial studies. One of the realities that has sunk in among the technologically advanced credit institutions is that there exists a nonlinearity in valuing big positions at prevailing market prices. For instance, in principle:
• the bigger the position,
• the worse the price the bank would get if it tries to unwind it.
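A small sketch can make this size effect concrete. The impact parameters below are hypothetical, chosen only to show that the shortfall from unwinding grows faster than the position itself.

def unwind_proceeds(position, mid_price, linear_bp=2.0, quadratic_bp=0.05):
    """Proceeds from liquidating `position` units at a mid price, after market impact.

    Within a small region the impact is roughly linear in size; beyond it the
    quadratic term dominates and the behaviour turns clearly nonlinear.
    """
    impact_bp = linear_bp * position + quadratic_bp * position ** 2
    avg_price = mid_price * (1 - impact_bp / 10_000)
    return position * avg_price

mid = 100.0
for size in (1, 10, 50, 100):                      # position in millions of units
    shortfall = size * mid - unwind_proceeds(size, mid)
    print(f"size {size:3d}m: mark-to-market shortfall {shortfall:10.2f}m")
# Doubling the position more than doubles the shortfall: valuing big positions
# at prevailing market prices overstates what the bank would actually get.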
Very few credit institutions have in-house the skills necessary to follow nonlinear approaches, which is one of the reasons why simplification through linearities seems to be attractive. Besides this, regulators are concerned both about the lack of experience in optimising a bank's capital and about the difficulties posed by an effective integration of market risk and credit risk. Hence, even those who are for precommitment think that many years will pass before it becomes a reality. Regulators are also concerned about the fact that significant changes in volatility pose their own risks. While today the tools within the banker's reach are much sharper than they were 10 or even 5 years ago, much depends on how the management of investment banks and commercial banks uses them. One can only hope that, as happened with VAR, IRB might in time provide a totally new culture in the computation of counterparty risk. 'We commercial bankers will love precommitment, but my guess is the regulators will be uncomfortable about it', said a senior commercial banker in New York. Other credit institutions are concerned about what has been contemplated as a penalty for a bank which misses its precommitment by a margin, since there is always present the possibility of:
• wrong hypotheses being made;
• limitations in modelling capabilities; and
• the likelihood of a modelling error.
This can be stated in conclusion. While the concept of precommitment has not been adopted, the studies associated with it by some of the institutions that were vocal about its wisdom have not been sterile. With hindsight, they have been instrumental in developing technology-based alternative approaches to the flat rate of 8 per cent capital requirements for credit risk. The notions underpinning these approaches have been embedded into the New Capital Adequacy Framework as the Internal Ratings-Based solution.
11.6 Using the warning signals given by value at risk
The classical definition of a derivative financial instrument is that it is a future, forward, swap or option contract. But the Statement of Financial Accounting Standards 133, by the Financial Accounting Standards Board
(FASB), amplifies and restructures this concept. It redefines derivatives as financial instruments with the following characteristics:
• They have one or more underlyings, and one or more notional amounts, payment provisions or both.
• Usually, they require no initial net investment and, if needed, this is rather small.
• They require or permit net settlements, or provide for delivery of an asset that puts the buyer in a net settlement position.
The underlying notions embedded in each one of the three points above can be nicely modelled and handled by computers. This is significant inasmuch as it enhances the accuracy of negotiating, processing and inventorying derivatives products. It also makes possible a better control over assumed risk by promoting ways and means for management supervision in real time. Having been explicitly designed for market exposure, the value at risk algorithm, and the models that complement its reach in a portfolio-wide sense, can contribute the computational procedures and, therefore, the means for marking-to-model different derivative financial products. Not just one but several models will be needed to cover equity indices, call options on common shares, foreign currency options, a variety of instruments connected to fixed income securities, and many other complex products. The common thread must be a sound methodology which permits:
• applying models and their metrics effectively; and
• providing the means to control positions whose exposure is found to be outside established limits.
The careful reader will appreciate that in this exercise accuracy is much more important than precision. A similar statement is valid regarding deviations from expected value. In any process, whether physical or man-made, the smaller the standard deviation, the higher the quality. This fact has been ingeniously exploited by General Electric in its Six Sigma methodology. Among financial organisations that have successfully applied Six Sigma are GE Capital and JP Morgan Chase.8 The issues of the quality of a process and of its confidence intervals correlate. Rather than reducing the level of confidence from 99 per cent to 95 per cent to shrink the computed capital at risk (which practically means to lie to themselves), banks should be keen to reduce significantly
the standard deviation by improving the quality of their products and processes. Among the ways for doing so are:
• diversification of the portfolio through experimental design, to reduce the standard deviation of exposure; and
• hunting down errors and loopholes, which impact negatively on daily exposure figures.
Let's look into the implications of the second point above through an example that has the VAR model in the background. If value at risk is computed at the closing of the business day and then again at the opening the next day, theoretically it could be expected that:

VARt,opening = VARt–1,closing

It so happens, however, that this is not true, because of differences, or leaks, due to a number of factors. Banks have evidently tried to find excuses for these discrepancies, which originally were supposed to be due to unknown factors, but little by little the background reasons started becoming more concrete. For instance, one of the dealers might have entered a trade with the wrong value of an underlier, or made a new booking, or manipulated the value of one of the transactions in the database. All that could have taken place without notifying the desk, backoffice or risk manager. Targeting the leaks is a good way to find out if somebody:
• is massaging the bank's database, or
• is proceeding with transactions beyond authorised levels.
An equally important family of differences comes from the fact that various departments in the institution feel the need to 'adapt' the VAR algorithm to reflect the nature of their business area – for example, fixings of instruments such as average rate options. In doing so, these departments may be choosing a value somewhat different from the spot, outside the realm of the company's policies. In fixed income, accounting gimmicks may be used for fixing any of the floating rates that are part of the portfolio, through internal fixed-rate/floating-rate swaps. Or, in the case of equities, some busybodies might be adding a variable to represent the payment of dividends, to help themselves in discovering how prices of futures and options change ex-dividend.
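The opening-versus-closing check just described lends itself to automation along the following lines. The figures and the tolerance threshold in this sketch are invented, purely for illustration.

TOLERANCE = 0.02   # a 2 per cent unexplained difference triggers an investigation

var_reports = [
    # (date, closing VAR of previous day, opening VAR of current day)
    ("2001-03-05", 14_200_000, 14_250_000),
    ("2001-03-06", 14_600_000, 16_900_000),   # a leak: late booking? manipulation?
]

for date, prev_close, today_open in var_reports:
    gap = abs(today_open - prev_close) / prev_close
    if gap > TOLERANCE:
        print(f"{date}: unexplained VAR leak of {gap:.1%}, "
              "check late bookings, re-marked underliers and database changes")
    else:
        print(f"{date}: opening VAR reconciles with the previous closing VAR")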
To find discrepancies which might be at the borderline of legal transactions, or outside their domain, we must decompose our data into subsets at as fine a grain as needed to highlight different events. Hunting discrepancies is a different ballgame from targeting volatility and liquidity to help ourselves in explaining differences between actual and theoretical exposures. Computational discrepancies may be due to:
• systematic; or
• random reasons.
These can be found with nearly all financial instruments, and with IT operations as well. In one of the banks for which I was consultant to the board, management discovered a loophole that permitted a programmer to round off the cents in clients' accounts and credit them to an account opened in his wife's maiden name. What is different with derivatives is that discrepancies are magnified because of high leveraging, and small swindles can turn into big ones.
Part Five
Facing the Challenge of Model Risk
12 Errors in Prognostication
12.1 Introduction
The general rule discussed throughout this book, and explained through practical examples, is that we construct conceptual models in order to simplify, idealise, understand, gain insight, exercise foresight and take action. But as we have also seen, like anything else in life, models are not fool-proof. In fact, the same is true of every theory ever invented. One more reason why the process of modelling can go wrong, and the prognostication we do may be off the mark, is that for most people this is a new experience. New experiences should be integrated with other acquired know-how, but this is not always the case. As a word of advice, we should be very careful in assimilating the results of experimentation and of prognostication:
• Nothing should be immune from testing and proof.
• We should always question the 'obvious' and challenge what seems to be an evident truth.
William Schneider, a political analyst, has coined Schneider's Law of election-forecasting: 'The models work, except when they don't work.' Non-believers in models point out that even the 90 per cent confidence interval (see Chapter 10) is a target that few models care to use, because of the inadequacy of the data samples they employ. Yet, as already stated, a 90 per cent level means that 10 per cent of possible cases, representing facts of real business life, are not taken care of in estimating exposure. Worse yet, there is the likelihood of failure for several other reasons, some of which fall in the gaps of our knowledge. In 1992 a Harvard University
professor organised a control group of graduate students to test their combined power of prediction against that of the models involved in the tests. This Harvard group performed as poorly as any of the models. If models are no improvement over pundits, concludes Jay Greene, who conducted the experiment, we might do as well talking to a group of graduate students as feeding data into a computer. Chapter 8 addressed precisely this problem. A prognostication method like Delphi, which is based on the opinion of experts and the distillation of that opinion through successive refinements, cannot give a better result than the quality of that 'expert' opinion. Chapter 9 made the point that the use of models for forecasting, based on hypotheses that might be wrong to start with, cannot give a better output than that of the hypotheses, algorithms and data they use. It is appropriate to underline that:
• Personally I am for the responsible development, testing and use of models – not against models – even if I bring to the reader's attention weaknesses associated with quantitative solutions.
• The real enemies of models are not the people who criticise their weak points, but those who develop them and use them in an irresponsible manner.
One of the reasons for irresponsibility in modelling and in prognostication is lack of skill. This is true throughout information technology. In 1990, the Policy Division of the National Science Foundation (NSF) predicted a shortfall of 191,000 information technology professionals in the coming years. Ten years down the line this estimate was exceeded. The problem with lack of skill is that it leads to weak structures. There is no alternative to lifelong learning, and to intensive programmes which aim to close gaps in knowledge and skills. Experts must learn that a great deal of what is done with models tends to look into the future. Therefore, forecasts should account for the rapid changes taking place in business, technology and society as a whole. But as Chapter 11 has underlined, there is no linear progression. If we project the evolution of a major branch of industry we must also anticipate the downsizings, buyouts, consolidations and mergers that often accompany industrial activity and may radically change the business landscape. These are non-linearities of which many prognosticators seem to be unaware.
12.2 'For' and 'against' the use of models for forecasting
While in many cases the use of models provides useful insight and helps in the understanding of the problem, we must guard ourselves against being overexposed to the assumptions that underpin the model we use. As we saw on several occasions, hypotheses change over time, calling into question some of the assumptions being made. We must also steer away from sterile mathematical models coming to the market in increasing numbers, but short of well-documented deliverables. There is a counterpart to what I just said in academia. Particularly worrisome is the fact that fledgling economists cannot get tenure at major universities without proving some sort of theoretical virtuosity, which they try to map into a model. They do so even if they have to abandon relevance along the way. As contrasted to these attitudes:
• The making of models is not a goal, but a means to better perception, provided there is conception and perception to start with.
• We should not lose sight of the fact that financial modelling is still an art rather than a science, and its prerequisite is comprehension of the problem.
A senior industrial executive participating in a seminar and workshop I was giving in Berlin on the development and use of expert systems built, during the workshop, a curious model that practically led nowhere. He said this was an expert scheduler of board meetings. While I was trying to understand what this expert system intended to do, the executive commented that he and his colleagues on the board found it hard to agree on the Order of the Day. Therefore, he designed an expert system to do it for them. Another useless exercise with models is the current craze among speculators (as well as some traders and investors) who try to develop algorithms able to flush out 'anomalies' in the market. The Nobel Prize winners of Long-Term Capital Management (LTCM) seem to have excelled in this art.1 'I have always found the word "anomaly" interesting', says Warren Buffett. 'What it means is something the academicians can't explain, and rather than re-examine their theories, they simply discard any evidence of that sort.' Alternatively, they may overexploit it. Contrary to this attitude, when an analytical mind finds information that contradicts existing beliefs, he or she feels the obligation to look at it rigorously and quickly. This has to be done directly and in a concentrated manner by knowledgeable
people. It cannot be done by models. The problem with the majority of people, however, is that:
• Their mind is conditioned to reject contradictory evidence.
• They are scared to express disbelief, and hope a model will do it for them.
This is not something models can do of their own will, or as proxies. Another major flaw with today's culture of modelling is what I call the battle between:
• arithmetical results, which can be predicted; and
• momentum, which cannot be prognosticated at a reasonable level of certainty.
To make their arithmetic seem believable, many prognosticators bet on undocumented statements about precision. Industry analysts and research firms supposedly predict that a market – any market will do – will be worth, say, US$6,395,200,000 by 2005 and have 28.72 million customers by that time. This questionable precision down to the last digits of the 'worth' aims to convince that the figures have been produced only by the most advanced mathematical tools and that they are based on plenty of research. These are false pretensions that render a very bad service to modelling. Such precise-looking digits are pure guestimates, and they are mostly useless. By and large, they rest on some sort of logic aiming to show much greater precision than an order of magnitude about where a certain sector may be heading. These policies are usually part of a rather widespread belief that:
• undocumented guestimates are really interesting; and
• nobody understands that the assistance they offer in planning for the future is zero.
Other frequent errors with modelling are connected to the information available at the time the model is built. In many cases this information is either inaccurate or obsolete. Besides that, systematic expectation errors often reflect irrational hypotheses (see section 12.3) made by mathematicians who don't understand the business they are modelling, or by business people who know nothing about mathematics.
These are not the only causes of mistakes. Another major risk is that of market players at different echelons of the organisation who are not sufficiently informed and react to false signals. Also, in many cases people using 'this' or 'that' projection don't understand that expectations derived from price trends are not representative of the underlying market forces. Practical experience indeed suggests that in a great many cases systematic expectation errors may occur if traders and investors misjudge future shifts in the structure of economic policy or in the market's behaviour. The learning curve also plays a role. As long as the process of learning a new computer-based system has not sufficiently progressed:
• expectations tend to be distorted in a particular direction; and
• events may become autocorrelated, while they were assumed to be independent.
As Chapter 1 brought to the reader's attention, in order to talk in a serious manner about models in finance, and to discuss questions such as whether they contribute anything to investments, loans, trading or other business, we have to be clear about the scientific theory on which our prognostication will rest. A theory, any theory, is just a model of something, for instance a market, or a restricted part of it, and a set of rules that relate quantities in the model to observations that we make. A theory on which we will base our work exists only in our minds: it does not have any other existence. Yet, we must trust its construct(s) and its deliverables, otherwise there is no point in having them. What we expect from this theory is to:
• accurately describe a large class of observations, on the basis of a model that contains only a few arbitrary elements; and
• be able to make reliable predictions about the results or conclusions of future observations, which we essentially simulate.
Notice that a good theory does not need to be complex. Quite to the contrary, Newton's theory of gravity was based on a very simple model, in which bodies attracted each other with a force proportional to a quantity called their mass and inversely proportional to the square of the distance between them. Even if simple, Newton's theory predicts the motions of the sun, moon and planets to a very accurate level. Another
very simple but powerful theory relating energy, mass and the speed of light is Albert Einstein's most famous equation:

E = mc²

In the original version Einstein's equation was written EL = mc²; there was a constant 'L' in it. Then in 1912 Albert Einstein decided this equation was weighty enough without superfluous constants, so he crossed the 'L' out. We should simplify, whenever this is possible, the models and prognostications that we make. This is not yet generally appreciated. Let me conclude with this thought. Because bankers know very little about models and rocket scientists are not necessarily versatile in banking, neither of them may clearly understand the full predictive content (or lack of it) of a financial market indicator. The goal of an early indicator is to predict the future movement of one or more economic variables. This is not in doubt. The question is how well this is being done, and whether the model we develop is simple enough so that it can be fully appreciated and thoroughly tested.
12.3 Faulty assumptions by famous people and their models
By now, the reader should fully appreciate that, in a general sense, model building rests on underlying assumptions that are either explicitly stated or inferred from the structure of the artefact. Many statements on predictive characteristics are made on the basis of statistical criteria. For instance, regression analysis is used to test how two or more variables correlate with one another:
• first, these relationships are analysed;
• then, they are modelled through appropriate algorithms.
Correlation and covariance are among the analytical tools whose object in an analytical study is to identify reliable financial market indicators that help in forecasting the future development of one or more variables. For instance, the object of the analysis may be the volatility of interest rates within a given term structure, and how this correlates with changing economic conditions. Chapters 10 and 11 made a number of references to the correlation between daily exposure computed from VAR and trading losses experienced in real business life. Figure 12.1 presents a comparison between
Figure 12.1 A comparison between one-day VAR (just note difference) and daily trading loss in $ million (just note difference) at Crédit Suisse First Boston. Source: Crédit Suisse 1998 Annual Report.
daily value at risk estimates and trading losses at Crédit Suisse First Boston (CSFB), based on statistics included in the 1998 Annual Report of Crédit Suisse. The knowledgeable reader will observe that VAR tended to overestimate the bad news. The evident goal is to obtain a perfect correlation – this is, however, more theoretical than real. One of the problems created by imperfect correlation is the selection of some other appropriate statistical procedure for assessing predictive fitness. Another challenge is that of obtaining a statistically valid sample. The assessment of the predictive content of indicators is often geared to statistical descriptions and comparisons of values which, among themselves, help to define predictive accuracy. Many of the preceding chapters brought the reader's attention to an even greater challenge than those I just mentioned: the formulation of basic assumptions which, if proved correct, might lead to a sort of tentative theory; but if they are incorrect, they ruin the model and the
prognostication. With marketing savvy, a factual tentative theory can make breaking news. There are cases when this happens even if the basic assumptions themselves are rather hollow. Take as an example the now famous paper by Franco Modigliani and Merton Miller, which won them a Nobel prize. Their hypotheses on cost of capital, corporate valuation and capital structure are flawed. Either explicitly or implicitly, Modigliani and Miller made assumptions which are unrealistic, though they have also shown that relaxing some of them does not really change the major conclusions of the model of company behaviour. In a nutshell these assumptions state that:
• Capital markets are frictionless. The opposite is true.
• Individuals can borrow and lend at the risk-free rate. Only the US and UK Treasuries can do that.
• There are no costs to bankruptcy. The cost of bankruptcy can be major.
• Firms issue only two types of claims: risk-free debt and (risky) equity. For companies, risk-free debt has not yet been invented.
• All firms are assumed to be in the same risk class. Credit ratings range from AAA to D,2 a 20-step scale.
• Corporate taxes are the only form of governmental levy. This is patently false. There are wealth taxes on corporations and plenty of personal taxes.
• All cash flow streams are perpetuities. Not so. There is both growth and waning of cash flow.
• Corporate insiders and outsiders have the same information. If this were true, the SEC and FSA would not have been tough on insider trading.
• Managers always maximise shareholders' wealth. Not only are there agency costs, but there also exists a significant amount of mismanagement.
With nine basic assumptions that do not hold water, this is far from being a scientific method. Some critics say that people who are aware they make false assumptions are antiscientists. I would not go so far, but would point out that, as a matter of sound professional practice, when confronted with research results that challenge a basic tenet of theory, scientists must look in the most rigorous way at the data before stating hypotheses or making pronouncements. Also, under no condition can a hypothesis be tested with the same data that helped in making it. True enough, sometimes scientists are not as careful as that. This is evidently the case when the peer review
process looks at assumptions and data, and comes to the conclusion that something is seriously amiss. In many other cases, however:
• hypotheses full of flaws are accepted; and
• faulty selection procedures pass through.
It is neither unusual nor unheard of that expediency and incorrectness creep into scientific investigation, eventually discrediting the researcher(s) and debasing the results. Then, unless the organisation cleans up its act, it can kiss its credibility good-bye. Even to this statement, however, there are exceptions. Sometimes, faulty assumptions are allowed to remain because of what is known as confirmation bias. Cognitive psychologists have coined this term to identify the partly unconscious tendency to value evidence that confirms a hypothesis that has been made. This is done no matter how unreliable the basic assumptions may be when confronted with real-life evidence, or by ignoring other evidence that contradicts the hypothesis. Researchers, for instance, may be expecting some type of observations, but currently available data streams are incomplete and the accept/reject decision process is rushed. Senior management is under pressure and the researchers tend to perceive what they thought was confirming evidence:
• when this happens, sensitivity is greatly reduced; and
• no consistent effort is made to improve the dependability of confirmation.
Wishful thinking is another reason blurring research results, and leading to false premises or conclusions. Wishful thinking may influence both the hypotheses connected to background factors and the assumptions made about deliverables and their timetable. A good example is the failure of the Japanese Real-World Computing Program,3 which was established in 1992, and fully funded, with the goal of promoting individual novel functions and advanced solutions to:
• recognition and understanding;
• inference and problem solving;
• human interfaces and simulation;
• autonomous control.
Cornerstones to the support of novel functions and their integration were the breakthroughs outlined in Figure 12.2, expected to take place in
Figure 12.2 Schedule of research and development: real-world computing (two five-year periods covering the theory of novel functions and their integration, massively parallel and neural prototype systems, and optical computing and device technology, with mid-term and final evaluations)
the 1992 to 2002 timeframe. The theory of novel functions, supposed to cover a void of algorithmic insufficiency, never really took off. The massively parallel system and the neural networks are no longer the sexy subjects they used to be; and optical computing is another decade or two away (if it ever comes).4 Whether in finance or in science and technology, people have a tendency to be overoptimistic and oversimplistic with their assumptions and with the way they plan their work. Failures become unavoidable when the necessary testing is wanting and the tendency to promise something and have it budgeted overrides the responsibility of analysts, economists, designers and engineers for high-quality deliverables.
12.4 The detection of extreme events
In a meeting in London, MeesPierson said that their scenario on extreme events is dependent on market movements and on triggers associated with volatility and liquidity. For reasons of controlling exposure, the institution's executive committee also looks individually at high-risk clients, while it regularly datamines the entire client base to discover exceptions or changes in client relationships. High-risk clients are followed daily, but control activities can go on at an hourly pace if necessary. Some types of control involve Monte Carlo simulation under stress testing. The test is for exceptions and outliers. Particularly targeted are changes in volatility as well as swings in derivatives prices, whether these go up or down. Data analyses and simulations are necessary because extreme events can seriously affect any institution. Also known as bolts out of the blue (BOB), they are often associated with a large loss that, in many institutions, leads those responsible to remark: 'I never thought of that!' The best way to learn about outliers and their aftermath is to examine in the most critical manner:
• the meaning behind exceptional moves;
• what has happened to others because of such events;
• how they restructured their risks; and
• how they altered their management practices to take care of similar happenings in the future.
Extreme events are uniquely informative about when, where and how assumptions that underlie business opportunity analysis and risk management practices become invalid. The very existence of outliers provides
the basis for rejection of the implicit hypothesis that an institution understands the nature and significance of the risks it is facing. Only practical and factual analysis, not out-of-the-blue theories, helps in understanding the background factors of extreme events. The reader should as well appreciate that in science we are much more sure when we reject something than when we accept it:
• We reject a hypothesis or a finding because we have evidence to the contrary.
• We accept when there is no evidence available for its rejection.
Whether we talk of outliers or of 'normal' behaviour, acceptance, including the acceptance of theories, is always tentative. This is a basic tenet of the scientific method that does not serve the status quo at all; instead, it supports change. Because of this, any theory, including physical theories, is always provisional, in the sense that it is only a hypothesis until disproved by a nasty new fact:
• We can disprove a theory by finding even a single observation that disagrees with the predictions of that theory.
• No matter how many times real-life observations, or the results of experiments, agree with some theory, we can never be sure that the next time there will be no facts contradicting it.
The fact that new observations disagree with a given theory and we have to modify it, or outright abandon it, is very positive. It provides the management of financial institutions, and other enterprises, with a unique opportunity to extract value from past experience and put it into effect. This requires:
• a policy of investigating and reporting surprises and failures in prognostication; and
• a factual, documented basis for improving future performance in our ability to foresee coming events.
One of the reasons behind the failure in detecting outliers is that current policies and practices in data collection are not that reliable; therefore, events that are out of the ordinary go undetected till it is too late. For this reason, practitioners in time-series modelling have developed the concept of the so-called broken leg cue to characterise a common modelling failure.
According to this concept, the time series model represents the performance of a thoroughbred racehorse. It predicts accurately the outcomes of its races, until the horse breaks its leg because of unexpected obstacles that it encounters. In finance, these unexpected obstacles might be spikes. Figure 12.3 presents as an example the late September 1999 surge in gold prices. Initially a rise in the price of bullion was accompanied by rising bond yields, but gold is no longer a good leading indicator of consumer price inflation – which is itself a key bond yield determinant. Instead, this gold price jump was a reaction to the 26 September announcement by the European Central Bank and 14 other nations that in the next five years they would not sell or lease any more of their gold holdings beyond previously scheduled transactions. Gold's formerly decent track record in inflation forecasting, in the 1970s and early 1980s, is itself an example of a discarded theory. Its value in prognosis broke down in the late 1980s and in the 1990s. Confronted with this spike, however, and given the fact that gold is still viewed by some investors as an inflation hedge, economists asked the question: 'Do inflationary expectations of gold buyers "pan out" in subsequent consumer price inflation statistics?' This query brings into the picture the issue of correlations of which we spoke in section 12.2. In the 1974–85 period, there was a positive 78 per cent correlation between the percentage change in gold prices and the annual Consumer Price Index (CPI) inflation rate in the following year. Traditional thinking would lead one to believe this would be repeated. However, in the subsequent years through 1998 that correlation fell sharply, to just 21 per cent. Let me add at this juncture, in hindsight, that correlations with the rate of inflation are higher using percentage changes in gold prices rather than actual price levels. Also, the correlations used to be higher with a one-year lag in gold prices as opposed to concurrent prices. All these theories, however, changed in the 1990s as gold fell out of favour with investors. Another interesting phenomenon with time series is splines, which has to do with the algorithms we use. Say that we want to approximate the unknown structure of a function of which we only have some scatter points, and are generally unfamiliar with the most likely functional form of this curve. Working by approximation, continuous functions can be approximated, at least in some interval, to within an arbitrary error by some polynomial defined over that interval. Greater accuracy can be obtained by employing higher-degree polynomials. This, however, presents a particular problem if a high-order polynomial is used in connection with data not uniformly distributed over the
Figure 12.3 Volatility in daily gold prices (US$ per troy ounce, January to October 1999): a short-lived spike
interval. If a higher-order term is added to improve the fit, the form of the function might be affected over the entire interval. To avoid such an effect, numerical analysis advises directing the explanatory power of an additional parameter to a chosen subinterval. This leads to splines, or piecewise polynomials:
• an approximation interval is divided into a number of subintervals; and
• separate polynomials are estimated for each of these subintervals.
Usually, constraints are added to make the separate polynomials blend together, because continuity and smoothness of the curve are prerequisites. If a polynomial is of degree f, such constraints lead to a function which is f − 1 times continuously differentiable at the breakpoints separating the chosen subintervals. This is a snapshot of the mathematics which should be in place to support greater accuracy in a prognostication. Both spikes in data collection and splines, as well as other functions built with polynomials, are of significant importance to the work of the analyst, and to the interpretation we make of certain phenomena which seem to be outliers, whether we want to position ourselves against their effects or to capitalise on the likelihood of bringing them back under control.
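As an illustration of the piecewise idea, the sketch below fits separate low-order polynomials on two subintervals of a synthetic series containing a spike. The data, breakpoints and polynomial degrees are hypothetical, purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = np.where(x < 7, 270 + 2 * np.sin(x), 300 + 5 * (x - 7)) + rng.normal(0, 1, x.size)

breakpoints = [0, 7, 10]            # chosen subintervals (a spike appears after x = 7)
pieces = []
for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
    mask = (x >= lo) & (x <= hi)
    pieces.append((lo, hi, np.polyfit(x[mask], y[mask], deg=3)))   # cubic fit per piece

def piecewise_eval(xv):
    for lo, hi, coeffs in pieces:
        if lo <= xv <= hi:
            return np.polyval(coeffs, xv)
    raise ValueError("outside the approximation interval")

print(piecewise_eval(3.0), piecewise_eval(8.5))
# Note: a true spline would also impose the continuity and smoothness
# constraints at the breakpoints so the separate polynomials blend together.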
12.5 Costly errors in option pricing and volatility smiles
One of the uses to which prognostication is put is in determining the right price for financial instruments. One of the problems encountered by financial institutions with their pricing models is the guestimate of future volatility. This is compounded by the fact that the over-the-counter (OTC) market is immense, and complex derivatives deals in the trading book don't have an active market. Typically, over-the-counter financial products are only priced twice:
• when they are done; and
• when they expire.
Therefore, banks and other entities find themselves obliged to do pricing by model. In this connection, model risk can be of two kinds: the accuracy of the model itself in mapping real life, and the hypotheses concerning key variables, most specifically future volatility (more about this in Chapter 12).
Figure 12.4 Volatility smile and volatility valley with interest rate products (volatility plotted as a function of time and liquidity, showing a family of volatility smiles)
Volatility, product pricing and the risk being assumed correlate. Not only might the model fail to represent real life, but volatility and liquidity can also, up to a point, be manipulated, with a resultant bias in pricing financial instruments. The expectation of lower future volatility, the so-called volatility smile, is essentially guesswork:
• A trader can use it to distort the value of options, swaps and other instruments.
• The usual strategy is to convince management that it is appropriate to expect a cheerful volatility smile.
The guestimate behind a volatility smile is that future volatility will be low. Applied to options, this means that they can be sold at a lower price because the risk assumed by the writer is not that high. Because the same mistake (whether random or intentional) is repeated with repricing, volatility smiles sometimes turn into volatility valleys, as shown in Figure 12.4. Theoretically, but only theoretically, wrong guesses about volatility should be caught by internal control and reported to senior management.5
This is particularly important when such wrong guesses are repeated; successive volatility smiles can bring a bank to ruin, as happened with NatWest Markets in March 1997. Practically, however, spotting wrong guesses and correcting the input to the model is not that easy. Top flyers in any profession have a sort of immunity, and most traders tend to use more sophisticated models than the controllers in the middle office who check their books. They therefore have the upper hand over risk managers in justifying poorly documented prices, which often result in deliberate mispricing initiatives:
• Low options pricing helps in closing deals with fat commissions.
• But it also represents an inordinate amount of exposure for the institution.
It is not that difficult to take management for a ride by using highly technical language, because by and large senior executives don't have an inkling about computers and models. Even when they do, they don't have at their disposal the data they need to exercise control. The middle office sometimes employs external agents to check pricing. In London, these agents include Eurobrokers, Harlow Butler, Tullet & Tokyo, and others. The possible conflict of interest comes from the fact that most of these companies put deals together: while they may be knowledgeable about expected volatility, they also have a direct interest in the deals. Therefore, the cardinal principle that frontdesk and backoffice should be separated by a thick wall in terms of responsibility is violated through outsourcing. In plain terms:
• Using brokers as consultants presents a problem of conflict of interest.
• Brokers have incentives to lean towards volatility estimates which assist in concluding deals.
NatWest Markets, the investment banking arm of National Westminster Bank, paid dearly for the mispricing of its options. In March 1997, the institution's controllers found a £50 million (US$82 million) gap in its accounts that eventually grew, allegedly, to many times that amount. With the announcement of the losses it was said that:
• risk management did not have good enough computer models; and
• the bank had accepted brokers' estimates of value that turned out to be overgenerous.
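To make the cost of an over-cheerful volatility guess concrete, here is a minimal sketch – my own illustration with made-up numbers, not NatWest's model – of how the Black–Scholes value of a plain European call falls when the volatility input is understated:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, maturity):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol * vol) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# Hypothetical at-the-money call: spot 100, strike 100, 5 per cent rate, one year to expiry
fair = bs_call(100.0, 100.0, 0.05, 0.25, 1.0)    # priced with a 25 per cent volatility estimate
smile = bs_call(100.0, 100.0, 0.05, 0.18, 1.0)   # priced with a 'cheerful' 18 per cent guess

print(f"price at 25% volatility: {fair:.2f}")
print(f"price at 18% volatility: {smile:.2f}")
print(f"underpricing per option: {fair - smile:.2f}")
```

Written across a large book of options, a few points of underpricing per contract adds up to the kind of gap that the controllers eventually find.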
If financial institutions should have learned anything over the past seven years, it is that volatile instruments like derivatives – but also traditional ones like bonds – produce risks that can resonate throughout the entire firm. The red ink increases in direct proportion to the leveraging. One should not blame the model for such failures in prognostication; they are the management's own. At the same time, it is no less true that risk exposure is compounded because the communications, computers, databases and software technology of most banks leaves much to be desired. Their information elements are sitting all over the world, but cannot be readily integrated, if this is feasible at all:
• Major banks typically have different sites with incompatible platforms.
• They also use heterogeneous models, hence different ways to model risk in each location.
System integration is a massive technology problem, made more complex by the fact that many institutions still have legacy programs at the 10 to 30 billion lines of code level that use various types of basic software running on incompatible machines. Quite often a bank's trading and position-keeping records will be based on different technologies. I have already brought to the reader's attention the fact that another problem faced by banks when they try to understand their exposure is that the model culture itself is new and not well entrenched. This spells disaster as the inventory of different models grows and their frequency of use accelerates. Under present-day conditions a system of mathematical constructs would include:
• Monte Carlo simulation;
• credit risk constructs;
• VAR modules;
• pricing algorithms;
• tracking routines, and so on.
This list grows with time but, as knowledgeable bankers appreciate, even the richest models library is only a partial answer to the problem. The other side of risk control is decisions reached by computer-literate top management, and the existence of rigorous internal controls to flush out deviations and exceptions (see Chapter 8). Ideally, pricing, tracking and risk control should be done at both macro- and micro-level. The macro-level is stress analysis. The micro-level is
detail. Steadily exercised sound policies and first-class technological support should see to it that senior management is able to look at different levels at the same time: the trader, the instrument, the trading unit and the corporation as a whole. Top management's eyes must work like the beam of a lighthouse.
12.6 Imperfections with modelling and simulation
Model-based systems that permit the exploitation of business opportunities and the identification of risk factors are very much in demand these days, but criteria for choosing them and for using them are not so well established. Also, many companies expect too much from a bought model, which they only partly understand and whose capabilities they have not properly tested. Companies with good experience in modelling are better positioned to appreciate what an artefact can and cannot offer. The Zurich-based bank Julius Baer has years of experience in the use of a sophisticated model for foreign exchange purchased from a rocket science firm. The executive responsible for foreign exchange commented that this is one of the better approaches to currency trading that he has tested, because it spreads the risk widely. But even under these conditions the assistance the model offers traders goes only up to a point; not further. 'The model is strong in big trends like those which have happened in the January to March 1995 timeframe in currency exchange', said Peter Gerlach. 'When there is no real trend but wild swings and volatility dominates, models are not that good.' The answer to the query regarding the model's ability to forecast market moves has been interesting: 'Yes, but . . .'. The 'but' depends not only on the signals given by the model but also on how the user reads the model's response, and how he or she understands the way the system works – hence feels confident about its output. This is, indeed, a very basic issue in interactive computational finance. The counterpart of this argument is that just having a model able to carry out prognosis is far from synonymous with forecasting market moves, let alone making profits. Models are no substitute for experience and for alertness in trading. The opposite is also true: imperfections in human skills compound the imperfections in models. The user, generally the trader, needs to have long experience in:
• reading the system; and
• acting on a moment's notice.
Peter Gerlach pointed out that interactive visualisation supported by a good model has many similarities with reading charts for technical analysis. Another similarity between interactive visualisation and charting is that they both perform well when there is a market trend. In any case, it is always wise to understand the limitations of the tools we are using. A similar argument is valid in connection with the limitations of human judgement. While models can be effective assistants, we should remember that modelling and simulation may at times be incomplete, which does not help to improve the quality of analysis. This particularly happens when key factors get overlooked or the mapping fails to represent real-life behaviour.
The careful reader will take notice that this does not happen only in finance but in engineering as well. In 1991, after a Titan 4 upgraded rocket booster blew up on the test-stand at Edwards Air Force Base, the program director noted that extensive 3-dimensional computer simulations of the motor's firing dynamics did not reveal subtle factors that apparently contributed to the failure. This was an oversight in building the model. As another example, in simulation an F-16 flew upside down because the program deadlocked over whether to roll to the left or to the right. Another F-16 simulator caused the virtual aeroplane to flip over whenever it crossed the Equator, because of the program's inability to handle southern latitudes.
The perfect model has not yet been invented. Test methods to help in gauging a model's fit are of course available, but in some cases these too have an error (see also the discussion on value at risk in Chapter 10). Even repetitive tests may not reveal all the flaws. In one major engineering project three different tests came up with seemingly consistent results. Further testing showed each one of them to be wrong, but for a different reason:
• a wind-tunnel model had an error relating to wing stiffness and flutter;
• low-speed flight tests were incorrectly extrapolated; and
• the results of a resonance test were erroneously accommodated in the aerodynamic equations.
Preparing through simulation testing for the second Shuttle mission, the astronauts attempted to abort and return to their simulated Earth while in orbit, but changed their mind and tried to abort the abort. When on the next orbit their decision was to abort the mission after all, the program got into a two-instruction loop. It seems that the designers had not anticipated that the astronauts would abort twice on the same flight.
There have also been some spectacular collapses of public buildings due to incomplete or incorrect simulations. One example is the collapse of the Hartford Civic Centre Coliseum roof under heavy ice and snow in January 1978, apparently due to the wrong model being selected for the beam connections. Another example has been the collapse of the Salt Lake City Shopping Mall. That program was tested, but the tests ignored extreme conditions; some incorrect assumptions were present as well. It is interesting to note that in the case of the Civic Centre Coliseum the program was rerun with the correct model, and the results reflected the collapse that had actually occurred. I have not yet seen something similar happen in banking; but I have seen plenty of models with wrong assumptions. That is why I bring all these real-life cases of imperfections to the reader's attention. In conclusion, past failures are not a reason why we should not try again. After all, to try and to fail is at least to learn – while to fail to try is to miss the opportunity that might have been. But we should be very careful with our assumptions, our algorithms, our data and the way we use our models. The fact is that 90 per cent of all failures are created by people.
13 Model Risk is Part of Operational Risk
13.1 Introduction
One reason finance and economics have been littered with broken forecasts is that as market conditions change old models prove to be unreliable, but they are still being used. With the New Economy, the obsolescence of the factors that come into modelling, and of their range of variation, has accelerated. The irony is that:
• it is precisely the New Economy that demands a great deal of modelling; and
• it is also the New Economy that has the most to lose from the downside of prognostication.
Let me explain what I mean by this statement, which looks like a contradiction. Stocks used to respond predictably to a set of variables with which analysts were familiar for years: interest rates, inflation, price/earnings ratios, and growth of corporate profits. Up to a point, these four variables still remain important. But the increasingly technology-driven economy has introduced new, powerful factors into the equation of finance. The rapidly evolving, technology-intense economic and financial environment greatly impacts both corporate profits and investor behaviour, in a way to which traditional guideposts do not apply. Financial analysts often miss the market pulse because they tend to rely on static economic models developed decades ago, whereas the New Economy calls for a set of different perspectives. Among them is the treatment of model risk as an operational risk.1
One issue which puzzled experts in the late 1990s was the US economy's resilience. In the fall of 1997, and in mid-1998, when the meltdown in emerging markets spread and took down US equities, analysts and business publications quickly concluded that, as Fortune put it, 'The Crash of '98' had arrived. But the market proved them wrong. The crash came two years later, and models had not foreseen it. Because so much depends these days on computers, networks and speed of response, analysts are starting to appreciate that problems may arise if the computer model is even slightly inaccurate. For instance, with volatile collateralised mortgage obligations (CMO) a small error can be expensive. In principle:
• the more leveraged the instrument is,
• the more fatal becomes an error in the input.
What makes model errors especially dangerous is that although some derivatives instruments are extremely risky, they can appear quite safe to the untrained eye, and some unscrupulous salesmen exploit this fact. The so-called alternative investments are an example.2 Major changes in market sentiment, and therefore in prices, can be lethal to the holder of alternative investment securities. In an effort to increase the soundness of their policies, the accuracy of their hypotheses, and therefore the dependability of their models, some financial institutions have been changing the criteria guiding their strategic moves. In 1997, Fidelity Investments abandoned its practice of not buying companies it saw as overpriced. Instead, it decided to pay up for companies with strong, sustainable profit growth. Other money managers, too, have been looking at ways and means to identify the best companies in any industry, basically buying into them at any price. Yet even with this choosy attitude investors suffered a great deal of pain as the New York Stock Exchange and the NASDAQ fell in 2000 and 2001. The old models could not foretell NASDAQ's meteoric rise, but when all hell broke loose at the NASDAQ and the torrent took along some of technology's blue chips, the new models, too, failed. In the April 2000 to September 2001 timeframe their predictability was wanting, because the fundamental criteria for investment choice changed while, for some time, the algorithms and hypotheses behind them, and their range of variation, stayed put.
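As a back-of-the-envelope sketch of the leverage point – a stylised first-order approximation with invented numbers, not a CMO pricing model – the damage done by the same small error in a yield forecast scales directly with the gearing of the position:

```python
def pnl_error(notional, duration, leverage, yield_miss):
    """First-order approximation: P&L missed = duration * yield error, scaled by notional and leverage."""
    return duration * yield_miss * notional * leverage

NOTIONAL = 10_000_000    # hypothetical position size
DURATION = 6.0           # assumed effective duration, in years
YIELD_MISS = 0.0010      # the model's input was wrong by 10 basis points

for leverage in (1, 5, 10):
    miss = pnl_error(NOTIONAL, DURATION, leverage, YIELD_MISS)
    print(f"leverage {leverage:2d}x -> P&L estimate off by about {miss:,.0f}")
```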
13.2 The risk you took is the risk you got
In any business the First Law of Exposure is that the risk you took is the risk you got. Write it down as the Chorafas Law. A trade or a loan that seemed reasonable and prudent by some classical criterion (or criteria) turns out to be very bad indeed, if not altogether catastrophic. Something big changed in the market and we were not able to see it. We usually pay for our failures. If a model was supposed to give a warning signal and it did not do so, many people say that this is the model's failure. This is an incorrect assumption. For any practical purpose it is the developer's and the user's failure, not the model's. Maybe the risk should not have been taken in the first place, but it was taken because of poor analysis, bad judgement, or too much greed:
• There is an entire class of assumed risks connected to a factor named asleep at the switch.
• These factors characterise cases in which exposure monitoring is ineffective, with loan officers, traders, managers and auditors violating prudential guidelines.
Among the people who are asleep at the switch many say 'they were not alerted in time'. But who should alert them? They are professionals, and therefore they should rely on their own acuity and professional conscience. Often credit institutions fall victim to an underestimation of the severity of the down cycle in a given market, as happened from September 2001 to late 2001 in the US. Even the concept that a great deal of diversification comes with globalisation does not hold water any more. There is a general belief that by operating in many markets a bank achieves a portfolio effect that reduces overall risk. This is not always true. Hence the wisdom of:
• having real-time data on exposure at one's fingertips;
• using an intraday global-risk chart that highlights worst-case scenarios; and
• making sure there are limits on risk-taking and that the right people control these risks.
These are policy and organisational issues, not model issues. If the concepts underpinning rigorous management take a leave, then the downside is unavoidable. This is quite independent of how good or bad
the mathematical artefacts written for prognostication, optimisation, or risk control reasons might be. But there is also model risk. In the context of this chapter I will use model risk, as a term, to describe how different models can produce very different results, with the net effect of confusing rather than informing managers and professionals – for instance, different prices for a given derivative, which create more opportunities for losses than for gains in the market. Let me immediately add that to a considerable extent model risk reflects the trader's and the manager's own right or wrong hypotheses. 'All models are wrong!', says Tanguy Dehapiot, of Paribas. 'A volatility smile error could cost $5 million to $10 million.'3 (See the discussion on volatility smiles in Chapter 12.) There is nothing unusual in this statement. Just like cars in crashes, models can go wrong, but in the majority of cases the faulty party is the man behind the wheel. There are also cases where something is faulty with the car: the airbag explodes inadvertently, or there is a failure in the brakes. Still, while in the general case the model's complexity and/or algorithmic insufficiency can lead to erroneous valuations for any instrument, the greatest enemy is inaccurate assumptions underlying volatility, liquidity, market resolve and other factors – particularly during nervous markets. In analogical thinking terms, the faulty-airbag paradigm can be found in the model's structure. When they are built in an inflexible way, models do not cope well with sudden alterations in the relations among market variables. Examples where many models fall short are:
• the effect of illiquidity in the market; and
• a change in the normal trading range between, say, the dollar and the pound.
The opportunity that comes when the majority of financial institutions are exposed to model risk is that astute traders with better models can capitalise on mispricings in another trader's model to sell, for instance, an overvalued option. This practice is known as model arbitrage, and it can lead to good profits for the winners and major losses for the losers. Usually, however, model risk is translated into red ink. The estimates which are made of how much money is gained and lost because of model risk vary widely, and none of them is documented well enough to be taken seriously. One estimate is that model risk losses amounted to
20 per cent of the money lost in 2000 with derivatives – but this is an assertion that does not hold water because:
• so many reasons are behind model risk, largely having to do with traders' assumptions, that losses would have happened anyway even if there was no model at all; and
• there is no zero-sum evidence that what one trader lost because of coarse-grain models another trader gained supported by fine-grain models and high technology.
The fact that well-researched models are at the origin of gains, not losses, does not mean that artefacts based on algorithms can match (or, even more so, overtake) an expert trader's skill, daring and intuition about the market's whims. On the other hand, even the best trader needs assistance to cope with the dynamics of the market, and this is what models and knowledge artefacts (agents) should provide. The reference I have just made to artefacts not overtaking top human skill brings to mind another, which dates back to the late 1950s and has to do with computer programming. When Fortran, the engineering computer language, was released, the question arose: 'Will its use produce superprogrammers able to beat the best available assembler language programmers in quality and productivity?' Decades of experience with Fortran have taught that its usage does not make a superprogrammer out of an average programmer. All it does is help him or her produce above-average results. For any practical purpose, that is what we should target with models: to help average traders, loan officers and other professionals improve the work that they do.
13.3 Model risk whose origin is in low technology
Section 13.2 has explained that model risk is a new type of risk that is particularly important with derivatives, and that it is part of operational risk. It is also an unavoidable factor in modern business, because trading in derivative products depends heavily on the use of valuation models which are susceptible to error from:
• incorrect assumptions about the underlying asset price and the way it changes;
• estimation errors about volatility and other factors, which may be overoptimistic; and
• data inputs that are unreliable because they are erroneous, incomplete, or obsolete.
One frequent flaw in the use of models is feeding them obsolete and/or inaccurate data. Poor information also correlates with small database bandwidth, depriving the user(s) of the rich information elements necessary to perform in a fiercely competitive market. Like the deadly mainframes, database narrowband strangles the bank. There are also mistakes made in implementing the theoretical aspects of the algorithmic form we choose. Algorithmic mistakes or insufficiency see to it that significant differences might develop between market prices and computed values – differences that cannot be easily reconciled. (See Chapter 11 on the need for correlation between marking-to-model and marking-to-market.) Another major reason for model risk is the use of low technology: for instance, mainframes or workstations of insufficient power to execute a cycle-consuming model, like Monte Carlo, in real time. Credit institutions with a compromised technology base – and there are plenty of them – should not expect any significant results from models. Even the most brilliant algorithmic mapping would be of no avail to the trader if the model runs on a slow mainframe. For all these reasons, practical evidence drawn from financial institutions shows that model error can be quite large, and it can lead to significant risks in pricing derivatives and other instruments. It also reduces the punch of risk management – and therefore increases the amount of exposure. The other side of the coin is that derivatives and other innovative financial instruments cannot be handled without technology and models. The combination of technology risk and model risk may be particularly disruptive when input comes from different sources and is expected to affect the outcome of, say, pricing by model. As we have seen, model-based pricing requires reliable estimates of future volatility, not optimistic forecasts involving wishful thinking. The evaluation of volatility assumptions boils down to two queries:
• What are the risk and return characteristics of the trade and of the commitment to be assumed?
• What is the market, technology and model risk exposure faced by our bank in doing this transaction?
Besides pricing errors there are also many hedging errors, due to imperfections associated with management's estimates, the model's flexibility,
and the sophistication of the technology we employ. Using models without understanding their limits creates sizeable exposure for option writers, and for market players at large. Another domain where model risk manifests itself is in the inaccurate or incomplete definition of the market to which the instrument is addressed. As I never tire of repeating, models have locality. They cannot be good for everything and for every place. Relevant to this particular issue are the following questions:
• What pricing level and structure may be appropriate for this market and instrument?
• What is the level of risk for which we will have to be compensated, and how is this risk calibrated and priced?
• What other concerns need to be considered in developing a strategy for this instrument, and how should they be handled?
The type of hedging that may need to be done is one of these concerns. This is not a one-off affair, but one which shows up repeatedly. The best hedge can become invalid if the market takes a sharp U-turn, or if the political climate changes in a radical way. Hedging is not a matter of divine revelation. It needs a great deal of modelling and experimentation. The reason why low technology strangles the bank is precisely that real-time responses and facilities for experimental design (let alone the culture which should accompany them) are nowhere to be seen. One of the best examples of how much support valid models and high technology can give is that of Bankers Trust in the 1990 Kuwait affair. How much high technology contributes to the timely and accurate management of exposure is documented by this event. On 2 August 1990, at 1.00 a.m., Kelly Doherty, Bankers Trust's worldwide trading manager, got a call from his No. 2 person in Tokyo: Iraq was invading Kuwait. Still at home, Doherty:
• dialled into the bank's high-tech foreign exchange trading aggregate, the Resources Management On-line System (REMOS); and
• immediately gained access, in real time, to Bankers Trust's updated trading positions worldwide.
Helped by his institution's powerful models, after sizing up the situation Doherty phoned Charles Sanford Jr, the Bankers Trust chairman, and
Eugene Shanks Jr, its president, to alert them to potential problems. Both credit risk and market risk worried top management, because of the newly developed situation in the Middle East. Carefully crafted for a relatively peaceful environment, the bank's positions looked really bad after the Iraqi invasion. The bank was short on sterling because the UK was in recession, short in dollars for the differential to deutsche marks, and long in long bonds because rates were expected to decline. Bankers Trust had one good position: the long yen yield curve. The way it was reported at the time, senior management called up interactively on the screen what Middle East counterparties owed the bank and when those payments were due; then it put these counterparties on credit alert. In an early-morning conference call, Doherty, Shanks and Sanford reviewed the bank's exposure and the findings about being on the wrong side of the balance sheet. Then, within a mere two hours, they had completely repositioned their institution, using high technology and the bank's London subsidiary as agent. This speedy action allowed Bankers Trust to avoid losses and even make a tidy profit. Its competitors may have enjoyed a sound sleep that night, but then had to labour long into the next days to sort out and correct their Middle East exposures. Many of these competitors found themselves in a sea of red ink, paying dearly for their reliance on low technology in a market more dynamic than ever.
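The reconciliation between computed values and market prices, raised earlier in this section, lends itself to simple automation. The following is a minimal sketch with hypothetical instruments and prices – not a production control and not the REMOS system – which flags positions where marking-to-model drifts too far from marking-to-market:

```python
# Hypothetical (model price, market price) pairs per instrument
positions = {
    "IRS_5Y_USD":  (1_020_500.0, 1_018_900.0),
    "FX_OPT_GBP":  (412_300.0, 395_750.0),
    "CORP_BOND_A": (980_100.0, 979_400.0),
}

TOLERANCE = 0.01   # flag a gap of more than 1 per cent between model and market

def reconcile(book, tolerance):
    """Return the instruments whose model price deviates from the market beyond the tolerance."""
    flagged = []
    for name, (model_px, market_px) in book.items():
        gap = abs(model_px - market_px) / market_px
        if gap > tolerance:
            flagged.append(f"{name}: model/market gap of {gap:.2%}")
    return flagged

for line in reconcile(positions, TOLERANCE):
    print("REVIEW", line)
```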
13.4 The downside may also be in overall operational risk
Since the mid-1980s central bankers and many professional associations have repeatedly highlighted the importance of agreeing on a common benchmark for risk measurement. The first significant transborder effort was the 1988 Capital Accord on credit risk, followed in 1996 by the Market Risk Amendment, which established VAR as the metric. As we saw in Chapter 10, VAR is expressed as a simple number of recognised gains and losses, at an established degree of confidence. There is, however, another very important kind of risk to be addressed besides credit risk and market risk. This is operational risk, and it should be regarded as a separate domain because of the complexity involved in its identification and management. The New Capital Adequacy Framework published in 2001 by the Basle Committee not only promotes more sophisticated methods for calculating credit risk, but also pays a great deal of attention to operational risk. In my research I found operational
risk to be a composite of many exposures, including among other factors:
• management risk;
• legal risk;
• payments and settlements risk; and
• technology risk.
Some central bankers believe that of the amount of money to be put up as reserves under Basle II, about 20 per cent would correspond to operational risk. Monitoring operational risk requires a look at a large number of parameters, which range from the possibility of a computer breakdown (including the year 2000 problem) to the risk of substandard internal controls. Low technology does not enable senior management to detect the odd rogue trader who has devised a way to execute trades without reporting them. Tracking operational risks in an institution will by no means be an easy business. Experts say that while it is relatively easy to set up a system that singles out 85 to 90 per cent of operational risk, the remaining 10 to 15 per cent is very difficult and costly to spot. Part of the complexity lies in the fact that operational risk is, more or less, an all-or-nothing type of event. Many types of operational risk can be tracked by using to the fullest extent what technology makes available, but this presupposes appropriate skills and advanced solutions in computers, communications and software. Statistical quality charts can be of significant assistance,4 but not everything is amenable to statistical measures of the probability that something will or will not happen. What matters is the ability to:
• rapidly allocate resources to the areas that are most likely to suffer an operational risk type of problem; and
• immediately correct the problem in spite of internal organisational inertia.
This leads us to another domain where the downside in prognostication is not that easy to redress: the projection of forthcoming operational risk(s) and the associated deployment of methods to confront them in an able manner. A major element in the control of operational risk is the ability to define the areas which need monitoring. This is not as straightforward as it might appear because:
• a number of hidden factors come into play; and
• the detection of operational risk is sometimes clouded by internal company politics.
To face operational risk, a number of businesses are designing minimum control standards, partly sparked by the Washington-based Group of Thirty (which is chaired by the former US Federal Reserve chairman Paul Volcker). In the early 1990s the Group of Thirty, a think tank, captured the essence of this type of risk and what is at stake – particularly how operational risk may compound other exposures, such as derivatives risk. Compound risk has always been a worry in banking. The issues raised by the Group of Thirty have been followed up by the Derivatives Policy Group (DPG), comprising representatives from US investment banks. This investigation was taken one step further in Germany. A new regulatory framework, applicable from the start of 1997, imposed minimum requirements in terms of risk management:
• These are inspired by other recommendations, like the Generally Accepted Risk Principles and the DPG's work.
• But the German study also offered the advantage of being more explicit than earlier work on operational risk and its aftermath.
My own research has revealed that still today most banks do not have a clear view of the exposure they take on because of different types of operational risk – or of the needed countermeasures. Neither do they appreciate that at the core of the concern about operational risk should be the fact that there is much more to the exposures a bank faces than a given probability. The basic distinction is between:
• known risks; and
• uncertainty.
Known risk can be measured through probabilities and statistics. Uncertainty is trickier because it admits that management does not have full control of the situation, and this includes the corrective action which must be taken in many cases. Asked after his retirement what had been the most frustrating experience of his career, President Truman said that it was to sit in the Oval Office, give an order, expect that the order of the president would be immediately executed, and then see that nothing happens. Models and computers help in establishing a feedback loop, like the one shown in Figure 13.1, which answers some of Harry Truman's worries.
Figure 13.1 A feedback mechanism is fundamental to any process in engineering, accounting, management and internal control (input and output are linked by forward functions, with feedback providing on-line control)
But as we have seen, they also introduce operational risks into the equation, like:
• model risk; and
• technology risk.
We do not eliminate model risk by getting the models out of the system, just as we do not eliminate payments and settlements risk by stopping all payments. Both types of operational risk are part of the core business of banking. At a time when heavy-duty computers and the arrival into finance of rocket scientists have made risk management more than ever a numbers game, model risk is here to stay.
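As a toy illustration of the loop in Figure 13.1 – a deliberately simplified sketch with invented exposure figures, not a description of any bank's internal control system – the output of the forward function is fed back against a threshold, and an out-of-control condition triggers a call for corrective action:

```python
LIMIT = 5_000_000.0   # hypothetical exposure limit for one trading desk

def forward_function(trade_exposures):
    """Forward path: aggregate the desk's individual exposures into one output figure."""
    return sum(trade_exposures)

def feedback(measured, limit):
    """Feedback path: compare the output with the threshold and signal corrective action if needed."""
    if measured > limit:
        return f"OUT OF CONTROL: exposure {measured:,.0f} exceeds limit {limit:,.0f} - corrective action required"
    return f"in control: exposure {measured:,.0f} within limit {limit:,.0f}"

# One pass through the loop with made-up trade exposures
desk_trades = [1_200_000.0, 2_750_000.0, 1_900_000.0]
print(feedback(forward_function(desk_trades), LIMIT))
```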
13.5 Operational risk in the evaluation of investment factors
TIAA/CREF is America's largest pension fund, the owner of more than 1 per cent of the current capitalisation of the stock exchange. To help itself better manage its assets, TIAA/CREF monitors 25 good-governance issues connected to the companies in which it is investing. These range from board independence and diversity to the age of directors and their potential conflicts of interest. Among the 1,500 companies making up its US$92 billion equity investment, those that fall short under a point system devised by the fund will get inspection visits regardless of market performance. The effort to define good-governance principles and to monitor and encourage
them is costing the fund about US$1 million a year. This cost is peanuts in comparison to:
• the close look at operational risk this system makes possible; and
• the documented basis it provides for keeping management risk under lock and key.
Few banks, pension funds, mutual funds, or other financial institutions have in place control systems like that of TIAA/CREF. The assistance this solution provides is invaluable, and it is a competitive advantage. Business intelligence on good governance is rare, even if its contribution to the control of a major component of operational risk is most important. There is a great deal to learn from those companies that have taken firm steps to curb operational risk. On one side, there is the danger of overdependence on models, compounded by the fact that the industry has not yet succeeded in modelling operational risks. On the other, the best examples which exist in curbing some operational risks use models and computers, but they are little known or rarely followed. The best a company can do today is to learn from the leaders about how they address the risks of:
• mismanagement;
• faulty internal controls;
• secrecy and fraud; and
• payments and settlement, and others.
Analytical approaches and a thorough examination should also be used with legal risk, including issues relating to compliance and taxation. Politics is another operational risk, nearly inseparable from legal risk. Here is an example reported in the French press.5 A thorough restructuring of property and habitat taxation laws was decided by the French government in 1989, largely based on a revaluation of real estate properties. Ten years down the line, in 1999, it had not yet been implemented. The real reason behind these interminable delays was political. The state would not have made any money out of it, since the restructuring targeted only a redistribution of charges, but it would have made an enemy of one out of two voters, whose charges would have been increased by up to 30 per cent. Doing nothing is a solution with many 'advantages'.
As usual, an excuse had to be found for inaction. The Ministry of Finance found the excuse, for ten years of delays, in endless simulation. A model was built to permit the study of how the new taxation system could best be applied. The first tests were not conclusive. This led to new simulations that continued through the 1990s. In 1999 it was said that more simulations would be necessary for the redistribution of property and habitat taxes to be 'equitable'. Errors do not only enter into calculations based on models, but also into the methodology and into the guts to apply a new solution. This happens for a number of reasons, and it is part of management risk. The most potent of these reasons is the hypothesis that a new and different solution is indeed applicable and would sail seamlessly through the system. To the contrary:
• the solution might be inappropriate on political grounds, or incomplete; or
• ways and means for its implementation have not been studied in any detailed manner.
Other errors are present on account of some common practices which are out of tune with rigorous modelling, and which therefore expose the proposed solution to severe criticism: for instance, relying heavily on guestimates, some of which come from past experience that has nothing to do with the new conditions. Guestimates and averages can be deadly in evaluating specific risks associated with new conditions. Another mistake that can be ascribed to the operational risk of modelling is deriving asset risk premiums from tenuously related estimates of aggregate debt and equity costs – which typically have non-compensating errors. So is using heterogeneous database elements (a cross between technology risk and model risk) or too much interpolation to cover the gaps in time series. As I have already mentioned, still another major source of error lies in the frequently used assumption that events are normally distributed. The fact is that in many situations they are not. For instance, in daily practice, because of systematic risk induced by correlation, the portfolio value distribution generally takes on a non-Gaussian character. Similarly, correlation between borrowers causes the portfolio value distribution to become skewed, or leptokurtic, in the lower tail. There are sound management rules to be observed, whether we estimate the shape of the probability of events, good-governance practices in companies in which we are investing, or any other operational issue
we confront. The company that does not observe these rules is a company that has not lost its ability to disappoint. Its defences against operational risk are down, because of organisational lethargy and bureaucracy.
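To give the non-Gaussian point made above a concrete form, here is a minimal sketch – using simulated daily profit-and-loss figures, not data from any portfolio discussed in this book – which compares the sample skewness and excess kurtosis of a fat-tailed series with the zero values a normal distribution would imply:

```python
import random

random.seed(7)

def moments(sample):
    """Return (skewness, excess kurtosis); both are zero for a normally distributed sample."""
    n = len(sample)
    mean = sum(sample) / n
    m2 = sum((x - mean) ** 2 for x in sample) / n
    m3 = sum((x - mean) ** 3 for x in sample) / n
    m4 = sum((x - mean) ** 4 for x in sample) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

# Simulated daily P&L: mostly quiet days, plus occasional large correlated losses
pnl = [random.gauss(0.0, 1.0) for _ in range(1000)]
pnl += [random.gauss(-8.0, 2.0) for _ in range(20)]   # the fat lower tail

skew, excess_kurt = moments(pnl)
print(f"skewness:        {skew:+.2f}   (normal distribution: 0)")
print(f"excess kurtosis: {excess_kurt:+.2f}   (normal distribution: 0)")
```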
13.6 How far can internal control reduce operational risk?
The concept of internal control has been discussed in Chapter 9, but it has not been associated there with operational risk. Therefore, prior to answering the query posed by the title of this section, it is wise to briefly redefine the meaning of internal control from an operational risk viewpoint. In the opinion of talented people in the financial industry, including central banks, commercial banks, investment banks, brokers and members of trade associations, the following five points give a comprehensive picture:6
• Internal control is a dynamic system covering all types of risk, addressing fraud, assuring transparency and making possible reliable financial reporting.
• The chairman of the board, the directors, the chief executive officer, and senior management are responsible and accountable for internal control.
• Beyond risks, internal control goals are the preservation of assets, account reconciliation, and compliance. Laws and regulations impact on internal control.
• The able management of internal control requires policies, organisation, technology, open communications, access to all transactions, real-time response, quality control, and corrective action.
• Internal control must be regularly audited by internal and external auditors to ensure its rank and condition, and to see to it that there is no cognitive dissonance at any level.
If internal control covers all types of risk, then operational risk and all its component parts are included. The difficulty is in the great diversity of operational risk constituents and in the fact that many of them have not been properly analysed so far in order to develop:
• early indicators of their forthcoming happening; and
• ways and means to be in charge of them before they become a torrent.
In this sense, a great deal more is needed in terms of research that helps in understanding, for example, what constitutes evidence of mismanagement
(see in section 13.5 the reference to what TIAA/CREF is doing), and which are the indicators that investments in information technology, and the services provided by IT, are substandard and mismanaged. The common frontier between internal control and operational risk management can be expressed through a simple block diagram like the one in Figure 13.2. Each of the seven milestones in this diagram should be analysed in greater detail, within the operating environment of the company designing an operational risk control system. It should be kept in mind throughout the study that in many cases internal control and management control coincide to a significant extent. Management control is the process by which the board, senior executives and other levels of supervision assure themselves, insofar as it is possible, that the actions of the organisation conform to plans and policies and that operational risks are kept under lock and key. There are two ways of looking into this subject:
• One is through new tools, indices and methods which, as I have already mentioned, still need to be researched and developed in practical terms.
• The other is by means of more classical approaches applied to component parts of operational risk, such as accounting information and its in-depth analysis.
A little-appreciated fact is that, since the time of Luca Paciolo at the close of the 15th century,7 accounting has been a mathematical model. Accounting information is useful in every business process as a means of motivation, communication, and appraisal. The standard model of accounting helps headquarters to keep track of costs and financial results in the business units under its control. Unless the business is a one-man enterprise, its management does not personally design, manufacture, sell and service the product(s) that it makes. Rather, this is the responsibility of specialised business units. It is central management's duty to see to it that the work gets done in a cost-effective manner. This requires that personnel are hired and formed into the organisation, and that these personnel are motivated in such a way that they will do what management wants them to do. Accounting information can help in measuring personal accountability, promoting or hindering the motivation process depending on the way in which it is used. Part and parcel of internal control is to assure that the accounting model is observed and that it is free of bias. Also that the contents of
Figure 13.2 The common frontier between internal control and operational risk management. The seven milestones shown are: definition of operational requirements; determination of design specifications per operational factor; definition of feedback and thresholds; system design, testing and implementation; control system operation and maintenance; out-of-control information by operational factor; and management action to right the balances.
accounting reports are correct. As a means of communication, accounting reports can assist in informing the organisation about management's plans and policies. They also reflect the actions the organisation takes by market segment, product line and other criteria; in short, they provide feedback:
• Without accounting information internal control is nearly impossible.
• But at the same time, internal control requires much more than classical accounting.
For appraisal reasons, for instance, management periodically needs to evaluate how well employees are doing their job. The aftermath of an appraisal of performance may be a salary increase, promotion, a training programme, reassignment, corrective action of various kinds, or dismissal. Accounting information assists in this process, although, with the exception of extreme cases, an adequate basis for judging a person's performance cannot be obtained solely from the quantitative information revealed by accounting records. The operational risk associated with this process is that accounting records may be manipulated; this is true of any type of record – for instance schedules, execution plans, plan-versus-actual analyses, or whole processes covered by enterprise resource planning procedures. Hence the need to perform both regular and exceptional review activities with regard to accounts – a policy which should be followed with all models. Auditing must:
• look into the methodology for the preparation of financial statements;
• assure that there are no creative accounting gimmicks and no double books; and
• evaluate the proper utilisation of such statements by all persons expected to learn from them and use such knowledge in their work.
The consistency of application of planning and control procedures, accounting policies, and financial policies depends on this kind of usage of accounting records (and their auditing), which integrates with internal control. The reconciliation between business unit practices and the consolidated group results presented in the audited financial statements is a cornerstone in prognosticating where a business unit will go from 'here' to 'there'. Operational risk alters this landscape, reducing visibility and leading to serious errors.
13.7 The contribution that is expected from auditing
The systematic verification of books of account, vouchers, other financial and legal records, as well as of plans, their execution and the associated reports, for the purpose of determining their accuracy and integrity, is known as auditing. The auditing of accounts, for example, aims to assure that they show the true financial conditions which result from operations:
• Auditing is certifying to the statements rendered.
• Or, alternatively, its mission is that of bringing discrepancies to senior management's attention.
Audits may be conducted externally by hired professionals, therefore through outsourcing, or internally by regular employees of the organisation. In both cases, audits must be factual and documented, able to convince that the accuracy and integrity of the accounts and records have been determined. A similar statement is valid in regard to the auditing of models and their use. Certified reports submitted by auditors serve as the basis for:
• determining sound financial condition and prevailing trends;
• deciding on creditworthiness and the extension of credit;
• planning the action to be taken in bankruptcy and insolvency;
• providing information to stockholders, bondholders and other stakeholders;
• guarding against poor methods, employee carelessness, and inadequate records;
• flushing out fraud and determining action in fraud cases; and
• assisting in the preparation of tax returns and in compliance with regulations.
286 Facing the Challenge of Model Risk
their usage may be different, the methodology has many common characteristics. As I have often underlined, analogical thinking is of invaluable assistance to business people, who cannot hope to have all the various phases of operations at their fingertips at all times, or to reinvent the wheel of management control if left to their own devices. Senior managers have to rely not only upon their associates and employees to advise them and to consult with them in a great many matters of daily operation, but also on:
• a sound methodology that has passed the test of time, like auditing;
• disinterested, competent outsiders, able to check on financial, accounting and model-making problems; and
• external experts able to provide sound advice, comment and analyses to their client.
Both internal and external expertise are important. In practically every company, the progress of business in general contributes to the development of an increasingly polyvalent internal auditing function, because the latter provides the basis for judging the dependability of financial records as well as of models and other artefacts. The board of every company faces an acute need for qualified reports permitting it to determine that policies and procedures are being observed and that the interests of the firm are adequately protected. Internal auditing has been used as an investigating agency and for making special surveys. This has the effect of:
• drawing the internal auditor closer to top management; and
• assigning additional responsibilities to the auditor in support of the internal control system.
Both points ensure that the internal auditing job itself is in full evolution, which may be expressed both as a systematic company-wide effort and as a largely custom-made project, patterned and moulded to satisfy particular needs. In both cases, internal auditing remains an independent appraisal activity within the organisation for the review of the accounting, financial, information technology and other operations. Among the factors that contributed largely to accelerating the development of internal auditing have been the social and economic problems that pyramided during the past 40 years, and the associated management
challenges. In most businesses today, normal and routine operations are administered by remote control:
• reports;
• statistics; and
• directives.
These have replaced personal observations, evaluations and instructions. To give added assurance that these reports are accurate, that delegated duties are faithfully performed, that directives are properly interpreted and executed, and that matters requiring consideration are promptly brought to attention, management finds it necessary to maintain an inspection and reporting service provided by internal auditing. In fact, the formerly quantitative aspect of auditing has been augmented by qualitative requirements like the evaluation of the internal control structure. Over the years, practitioners and writers have adopted various strategies in an endeavour to distinguish between the more restricted and the broader type of auditing. The title that best describes current practice, and the one that appears to predominate at the present time, is managerial auditing, indicating an activity utilised as an aid to management rather than as a clerical function. In conclusion, the best way of looking at internal auditing and its contribution is as a type of control which functions by measuring and evaluating the effectiveness of other types of management control activity. While it deals primarily with accounting and financial matters, it may also address issues of an operational risk nature. The aim always is to assist management in achieving the most efficient administration of the company's operations, pointing out existing deficiencies and calling for appropriate corrective action.
Notes
1 Science and the solution of real-life business problems
1. D.N. Chorafas, Rocket Scientists in Banking (London and Dublin: Lafferty Publications, 1995). 2. Albert Einstein, Essays in Science (New York: Philosophical Library, 1934). 3. Scientists believe that Copernicus came across the work of Aristarchos and his advanced model of the heliocentric system. 4. Stephen Hawking, A Brief History of Time (New York: Bantam Books, 1998). 5. Carl Sagan, Cosmos (London: MacDonald, 1988). 6. Wallace and Karen Tucher, The Dark Matter (New York: William Morrow, 1988). 7. D.N. Chorafas, Financial Models and Simulation (London: Macmillan, 1995). 8. Joseph Wechsberg, The Merchant Bankers, Pocket Books (New York: Simon & Schuster, 1966).
2 Is the work of financial analysts worth the cost and the effort?
1. José Ortega y Gasset, What Is Philosophy? (New York: W.W. Norton, 1960). 2. D.N. Chorafas, Chaos Theory in the Financial Markets (Chicago: Probus, 1994).
3 The contribution of modelling and experimentation in modern business
1. D.N. Chorafas, Operations Research for Industrial Management (New York: Reinhold, 1958). 2. D.N. Chorafas, Financial Models and Simulation (London: Macmillan, 1995). 3. Alfred P. Sloan, My Years with General Motors (London: Pan Books, 1969). 4. D.N. Chorafas, The 1996 Market Risk Amendment. Understanding the Marking-toModel and Value-at-Risk (Burr Ridge, McGraw-Hill, IL: 1998). 5. Sun Tzu, The Art of War (New York: Delacorte Press, 1983). 6. D.N. Chorafas, Reliable Financial Reporting and Internal Control: a Global Implementation Guide (New York: John Wiley, 2000). 7. D.N. Chorafas, Agent Technology Handbook (New York: McGraw-Hill, 1998). 8. D.N. Chorafas, Chaos Theory in the Financial Markets (Chicago: Probus, 1994).
4 Practical application: the assessment of creditworthiness
1. For the tables necessary for sequential sampling see MIL.STAND.105A, US Government Printing Office, Washington, DC.
2. D.N. Chorafas, Managing Risk in the New Economy (New York: New York Institute of Finance, 2001). 3. D.N. Chorafas, Implementing and Auditing the Internal Control System (London: Palgrave Macmillan, 2001). 4. Federal Reserve Board and Bank of England, Potential Credit Exposure on Interest Rate and Exchange Rate Related Instruments (1987).
5 Debts and the use of models in evaluating credit risk 1. D.N. Chorafas, Agent Technology Handbook (New York: McGraw-Hill, 1998). 2. D.N. Chorafas, Credit Derivatives and the Management of Risk (New York: New York Institute of Finance, 2000). 3. D.N. Chorafas, Managing Risk in the New Economy (New York: New York Institute of Finance, New York, 2001). 4. D.N. Chorafas, Credit Derivatives and the Management of Risk (New York: New York Institute of Finance, 2000). 5. See the effective use of statistical quality control charts in D.N. Chorafas, ‘Reliable Financial Reporting and Internal Control: A Global Implementation Guide’ (New York: John Wiley, New York, 2000). 6. GE and Goldman Sachs have been buyers of non-performing loans from bankrupt Thai finance companies. 7. D.N. Chorafas, Credit Derivatives and the Management of Risk (New York: New York Institute of Finance, 2000). 8. D.N. Chorafas, Chaos Theory in the Financial Markets (Chicago: Probus 1994).
6 Models for actuarial science and the cost of money 1. Dimitris N. Chorafas, How to Understand and Use Mathematics for Derivatives, Vol. 2 (London: Euromoney, 1995). 2. Basle Committee on Banking Supervision, Working Paper on Risk Sensitive Approaches for Equity Exposures in the Banking Book for IRB Banks (Basle: Bank for International Settlements, 2001). 3. D.N. Chorafas, Credit Derivatives and the Management of Risk (New York: New York Institute of Finance, 2000). 4. See D.N. Chorafas, Chaos Theory in the Financial Markets (Chicago: Probus/ Irwin, 1994). 5. D.N. Chorafas, Liabilities, Liquidity and Cash Management. Balancing Financial Risk (New York: John Wiley, 2002). 6. D.N. Chorafas, Integrating ERP, Supply Chain Management and Smart Materials (New York: Auerbach/CRC Press, 2001).
7 Scenario analysis and the Delphi method 1. Roger Lowenstein, Buffett, the Making of an American Capitalist (London: Weidenfeld & Nicolson, 1996). 2. D.N. Chorafas, Integrating ERP, Supply Chain Management and Smart Materials (New York: Auerbach/CRC Press, 2001).
3. D.N. Chorafas, Implementing and Auditing the Internal Control System (London: Palgrave Macmillan, 2001).
4. It has been a deliberate choice not to deal with fuzzy engineering in this book, but readers familiar with it will find many similarities to the Delphi method. See also D.N. Chorafas, Chaos Theory in the Financial Markets (Chicago: Probus, 1994).
8 Financial forecasting and economic predictions
1. Still, the interest rate theory has its merits. This will be discussed more fully later.
2. The International Herald Tribune, 27–28 Sunday, 2000.
9 Reliable financial reporting and market discipline
1. D.N. Chorafas, Credit Derivatives and the Management of Risk (New York: New York Institute of Finance, 2000).
2. D.N. Chorafas, Managing Risk in the New Economy (New York: New York Institute of Finance, 2001).
3. See Chapter 13 below and D.N. Chorafas, Managing Operational Risk: Risk Reduction Strategies for Investment Banks and Commercial Banks (London: Euromoney, 2001).
4. Geneva, 22–23 March 1999.
10 The model's contribution: examples with value at risk and the Monte Carlo method
1. D.N. Chorafas, The 1996 Market Risk Amendment: Understanding Marking-to-Model and Value-at-Risk (Burr Ridge, IL: McGraw-Hill, 1998).
2. Middle Office, November 1998.
3. D.N. Chorafas, Chaos Theory in the Financial Markets (Chicago: Probus, 1994).
4. D.N. Chorafas, How to Understand and Use Mathematics for Derivatives, Vols 1 and 2 (London: Euromoney, 1995).
5. At the First International Conference on Risk Management in Banking, London, 17–19 March 1997.
6. See D.N. Chorafas, Chaos Theory in the Financial Markets (Chicago: Probus/Irwin, 1994).
11 Is value at risk an alternative to setting limits?
1. D.N. Chorafas, Agent Technology Handbook (New York: McGraw-Hill, 1998).
2. D.N. Chorafas, Credit Derivatives and the Management of Risk (New York: New York Institute of Finance, 2000).
3. D.N. Chorafas, Managing Risk in the New Economy (New York: New York Institute of Finance, 2001).
4. D.N. Chorafas, The 1996 Market Risk Amendment: Understanding Marking-to-Model and Value-at-Risk (Burr Ridge, IL: McGraw-Hill, 1998).
5. D.N. Chorafas, Reliable Financial Reporting and Internal Control: A Global Implementation Guide (New York: John Wiley, 2000).
6. D.N. Chorafas, How to Understand and Use Mathematics for Derivatives, Volume 2 – Advanced Modelling Methods (London: Euromoney Books, 1995).
7. See D.N. Chorafas, Advanced Financial Analysis (London: Euromoney Books, 1994).
8. D.N. Chorafas, Integrating ERP, Supply Chain Management and Smart Materials (New York: Auerbach/CRC Press, 2001).
12 Errors in prognostication
1. D.N. Chorafas, Managing Risk in the New Economy (New York: New York Institute of Finance, 2001).
2. D.N. Chorafas, Managing Credit Risk, Vol. 1: Analysing, Rating and Pricing the Probability of Default (London: Euromoney, 2000).
3. D.N. Chorafas, Network Computers versus High Performance Computers (London: Cassell, 1997).
4. D.N. Chorafas, Enterprise Architecture and New Generation Information Systems (Boca Raton, FL: St Lucie Press, 2002).
5. D.N. Chorafas, Implementing and Auditing the Internal Control System (London: Palgrave Macmillan, 2001).
13 Model risk is part of operational risk
1. D.N. Chorafas, Managing Operational Risk: Risk Reduction Strategies for Investment Banks and Commercial Banks (London: Euromoney, 2001).
2. D.N. Chorafas, Alternative Investments and the Management of Risk (London: Euromoney, 2002).
3. Futures & OTC World, June 1999.
4. D.N. Chorafas, Reliable Financial Reporting and Internal Control: A Global Implementation Guide (New York: John Wiley, 2000).
5. 'L'Enfer du Fisc', Les Dossiers du Canard Enchaîné, Paris, November 1999.
6. D.N. Chorafas, Implementing and Auditing the Internal Control System (London: Palgrave Macmillan, 2001).
7. D.N. Chorafas, Financial Models and Simulation (London: Macmillan, 1995).
Index
accounting metalanguages, 188, 189 Accounting Standards Board (ASB), 180, 192 accounting system, 186 accruals accounting, 197 actuarial credit risk analysis, 110 actuarial science, 115 Actuarial Society of America, 114 actuaries, 122 Advanced System Group (ASG), 46 agents (knowledge artefacts), 61, 93, 225 aggregate derivatives exposure, 36 algorithmic insufficiency, 257 algorithms, 33, 82 Allais, Maurice, 159 American Institute of Actuaries, 114 analogical reasoning, 54, 158, 286 analogical thinking, 33 analogous systems, 35 analysis, 5, 9, 35, 38, 43 financial, 24, 25, 27, 36, 38, 43 growth-based, 26 value-based, 26 analysts, financial, 24, 25, 26, 27, 28, 36, 38 analytical method, 7, 42 analytical study, 35 analytical tools, 14 appraisal of performance, 284 assessment of creditworthiness, 77, 90 backtesting, 90, 212, 217 Bacon, Francis, 24 balance sheet analysis, 100 Banca di Roma, 106, 107 Bankers Trust, 73, 74, 77, 274, 275 banking, 4, 45
banking problems, 49 Bank for International Settlements, 101 Bank of England, 85 bank operations, 49 Banque de France, 96 Barings, 227 Baruch Spinosa, 20 Basle Committee on Banking Supervision, 16, 19, 57, 59, 68, 78, 80, 101, 151, 204, 205, 208, 217, 218, 235, 275 Basle II, 59, 76, 78 Bayesian theory, 123, 140 Bell Telephone Laboratories, 46 benchmark values, 85 Bernard, Claude, 39, 42 Black, Fisher, 167 block diagram, 48 bond department, 49 bootstrapping, 215, 216 Buffett, Warren, 42, 115, 141, 249 business life, 42, 43 business opportunity analysis, 109 calculated risks, 77 Capital Accord, 68, 79 Capital Adequacy Directive, 229 capital at risk, 77, 93, 241 capital requirements, 80 cash flow models, 126, 127 cash flows, discounted, 127–9 Casualty Actuarial Society, 114 chaos theory, 55, 166 Chorafas Law, 270 City University London, 217 classification system, 133, 134 cognition, principles of, 3 collateralised mortgage obligations (CMOs), 269 commercial banks, 57, 89
commercial loans, 49 Commission Bancaire, 101 complexity principle, 50 complexity theory, 50 compound interest, 124 compound risk, 228 computational finance, 34 computer program, 33 conceptual models, 38, 247 conditional probabilities, 122 confidence intervals, 158, 205, 234, 247 confidence limits, 34 connectivity, 52 constraints, 30 consumer price index, 259 contemporary science, 15 control limits, 34 corporate memory facility (CMF), 139 COSO, 181, 182, 186, 187, 200 cost, 47 cost/effectiveness, 47 cost function, 158 counterparty risk, 79, 80, 82, 90, 98, 117 counterparty-to-counterparty evaluation of exposure, 36 creative accounting, 195 creativity, 11 credibility theory, 122 credit at risk, 91 credit derivatives, 97, 98 credit deterioration, 91 credit improvement, 93 CreditMetrics, 96 credit model, 70 CreditPortfolioView, 96 credit ratings, 90, 100 credit rating system, 67 CreditRisk+, 96 credit risk, 17, 67, 68, 72, 80, 91, 101, 185, 275 credit risk analysis, 19 credit risk assessment, 74 credit risk computation, 80 credit risk models, 60, 74, 90, 97, 110 credit risk rating, 86
Crédit Suisse First Boston, 253 Credit VAR, 111 credit volatility, 81 cumulative default probabilities, 87 cumulative exposure, 211 cumulative payoff, 51 Davies, Brandon, 221 default likelihood, 141, 142, 145 Dehapiot, Tanguy, 271 Delphi method, 138, 139, 141, 149, 151–53, 157 derivative financial instruments, 58, 62, 104, 122, 180, 191, 269 Derivatives Policy Group (DPG), 277 determinism, 50 deterministic models, 120 Deutsche Bank, 77 discounted value, see intrinsic value dissension, 51 Doherty, Kelly, 274 Dorfman, John, 161 Dow Jones Index, 172, 173 Drucker, Peter, 47, 49 earnings at risk, 221 Edison, Thomas Alva, 39, 40, 42, 143 Edison Electric Light, 40, 42 efficient market theory, 159 eigenmodels, 98, 111 Einstein, Albert, 252 Epicurus, 20, 21 equity price risk, 203 Eratosthenes, 41, 42 Estimated Default Frequency (EDF), 95 European Central Bank, 60, 259 European Organisation for Cooperation and Development (OECD), 78 evolutionary technology, 164 experimentation, 39 expert systems, 21, 80, 249 exposure, control of, 25
extreme events, 174, 257, 258 extreme values, 237 Faculty of Actuaries, 114 Federal Reserve Board, 85, 160, 186, 212 Feigenbaum, Dr Mitchell J., 31, 166 Fidelity Investments, 269 Financial Accounting Standards Board (FASB), 180, 191, 198, 199, 240 financial engineering, 26 financial events, 9 financial forecasting, 44 financial modelling, 54 financial reporting standards, 180 financial research and analysis, 54 financial technology, 32 fluid dynamics, 40 Ford motors, 51 Ford, Henry, 28 future interest rates, 114 fuzzy engineering, 123, 221 Galbraith, John Kenneth, 151 Galilei, Galileo, 39 GARCH, 217 GE Capital, 107, 108, 241 General Electric, 19, 40, 42, 241 General Motors, 28, 51 generally accepted risk principles, 277 Gerlach, Peter, 265, 266 German Landesbanken, 88 globalisation, 62 Goldman Sachs, 107 Greene, Jay, 248 Greenspan, Alan, 215 Griep, Clifford, 238 Group of Thirty, 277 Harvard University, 247 hedge accounting, 191 hedge funds, 83 hedges of cash flow exposure, 195 hedges of fair value exposure, 196 Heisenberg, Dr Werner, 33, 50 heuristics, 82
high frequency financial data (HFFD), 36, 167, 169, 170 Hinko, Susan, 238 Hubble, Edwin, 12 in-current-earnings, 193, 194 identification, 57 information technology support, 76 instalment loans, 49 Institute of Actuaries, 114 instrument-by-instrument analysis, 36 intangibles, 58 integrated system, 15 interactive computational finance, 56 interactive visualisation, 266 interest rate predictions, 119 interest rate risk, 203 interest rate swaps, 119 internal auditing, 285–7 internal control (IC), 57, 60, 85, 194, 199, 224, 225, 233, 281–3 internal control framework, 183, 185 Internal Ratings-Based (IRB) solution, 16, 19, 60, 68, 76, 80, 100, 117, 240 intrinsic time, 175, 176 intrinsic value (discounted value), 119, 123, 124 inventing, 7 J.P. Morgan, 214, 234 JP Morgan Chase, 241 Johnson, Lyndon, 160 Julius Baer, 265 Kepler, Johannes, 55 knowledge artefacts, 21 Koontz, Harold D, 153 kyrtosis, 177 Laplace, Honoré de, 50 Leeson, Nick, 226 legal risk, 185 Lehman Brothers, 22
Leibniz, Gottfried, 46 leptokyrtotic distribution, 120 leverage, 82, 228 lifelong learning, 60 limits, 227, 229, 233 liquidity, 36 Loan Advisor Systems (LAS), 96 logical perception, 17 lognormal distribution, 85, 166, 177 London Mercantile Exchange, 155 Long-Term Capital Management (LTCM), 81, 82, 195, 233, 249 Lorenz, Dr Edward, 31 loss thresholds, 82 low technology, 273 Lynch, Peter, 161 management control, 282, 286 management control system, 182 management intent, 195, 196 management of change, 42 management risk, 280 Mandelbrot, Dr Benoit, 31, 158 market-by-market risk exposure, 36 market risk, 17, 67, 185, 209, 210, 275 Market Risk Amendment, 57, 203, 204, 213, 217, 218, 275 market risk models, 60, 109 marking-to-market, 196, 232 marking-to-model, 111, 232 Markowitz, Henry, 159 mathematical analysis, 115 mathematical model, 16 mathematical model-making, 36 mathematical representation, 48 mathematical tests, 38 Maxwell risk, 83 Mazur, Paul, 22 McNamara, Robert, 51 mean reverting, 177, 178 mean time between failures (MTBF), 236 measurement, 57 MeesPierson, 97, 257 Merrill Lynch, 162
metafinance, 30 metaknowledge, 30 metalevel, 30, 31 metaphors, 32, 33, 35, 36 metaphysics, 30 Miller, Merton, 254 model arbitrage, 271 model-based system, 265 model building, 57 model literacy, 212, 213, 235 model risk, 84, 97, 195, 211, 261, 268, 269, 271, 278 modelling, 7, 17, 36, 45, 53, 56, 59, 60, 61, 97 models, 7, 16, 17, 21, 43, 45, 51, 53, 57, 59, 60, 70, 81, 84 dynamic, 61 financial, 50 Modigliani, Franco, 254 Monte Carlo simulation, 62, 111, 204, 208, 213, 216, 220, 221, 257, 273 Moody’s Investors Service, 80, 81, 88, 89 Morgan Guaranty, 45 Morgan Stanley, 46 multimarket exposure, 36 narrative disclosures, 190 NASDAQ, 157, 161, 269 National Science Foundation (NSF), 248 NatWest Markets, 263 negation and reconstruction, 13 net present value, 74 neural networks, 257 New Capital Adequacy Framework, 16, 57, 68, 78, 79, 81, 101, 102, 151, 230, 231, 238, 240, 275 new economy, 268 Newton, Isaac, 46 New York Stock Exchange, 269 nonlinearities, 162, 238 null hypothesis, 122 numerical analysis, 56, 138 numerical disclosures, 191, 192
object knowledge, 30 Olivetti, 21 operating characteristics (OC) curves, 68, 80, 218, 219, 234 operational risk, 185, 199, 200, 268, 272, 275–9, 281 operations research, 45 Ortega y Gasset, José, 29, 30 out-of-current-earnings, 193, 194 outsourcing, 200, 263, 285 over-the-counter (OTC) derivatives, 86, 261 Paciolo, Luca, 18, 46, 282 parametric VAR, 210, 213 Pareto, Vilfredo, 52 Pareto’s Law, 52, 53 Paribas, 107, 271 Parkinson’s Law, 139 pattern recognition, 164 perception, 29 plan versus actual, 231 Plank, Max, 67 platokyrtotic distribution, 221 portfolio management, 109 possibility theory, 123 prediction theory, 164, 165 present value, 111, 123, 124 pricing errors, 273 principle of confidence, 237 probability of default, 117 problem definition, 59 productivity of capital, 113 profit, 47 profit margins, 47 profitability evaluations, 81 prognostication, 158 program trading, 163 programme, 33 programming language, 51 protocol machine, 48 prototype, 17 prototyping, 13, 156 qualitative criteria, 55, 56 quality control, 218 quantitative approaches, 55, 56
Rand Corporation, 143 real estate mortgages, 49 real world computing program, 255, 256 reliability engineering, 236 reputational risk, 199, 226 research, non-traditional, 55 reserve accounting, 193 Resources Management On-line System (REMOS), 274 return on investment (ROI), 105 rigorous analysis, 43 rigorous analytical concepts, 43 risk and return, 104 risk control, 67 risk control models, 57 risk coverage, 83 risk council, 68 risk factors (RF), 84, 85, 86 risk management, 56, 70, 74 risk ratings, all-inclusive, 90 risk weights, 80 risk-adjusted approaches, 73 Risk-Adjusted Return on Capital (RAROC), 70, 73, 74, 77, 81, 83 RiskCalc, 95 RiskMetrics, 234 rocket scientists, 4, 8, 26, 27, 28, 32, 57, 162, 171 Ruelle, Dr David, 31 rules, 30 Sanford, Charles, 274 scenario analysis, 159 scenarios, 137, 138, 146, 147, 154 Schneider, William, 247 Scholes, Myron, 167 science, 11, 12 science researchers, 27 scientific methodology, 5, 13, 14 scientific thought, 3, 11 scientific truth, 30 scientists, 4, 8, 9, 13 Securities and Exchange Commission (SEC), 197 sensitivity analysis, 203, 216
sequential sampling, 73 serial independence, 216 Shah, Sanjiv, 199 Shanks, Eugene, 275 simulated environment, 35 simulation testing, 266 simulation VAR, 213 simulations, 36 Sloan, Alfred, 28, 51 spikes, 259, 261 splines, 259, 261 Standard & Poor’s, 68, 88, 89, 238 Statement of Financial Accounting Standards (SFAS), 180 Statement of Total Recognised Gains and Losses (STRGL), 102, 180, 184, 186–8, 193, 194 statistical quality control, 236 statistical quality control chart, 34, 60 statistical theory, 73 Steinmetz, Dr Charles P., 42 stochastic, 50 stochastic models, 120 stochastic processes, 56, 62 stratified risk assessment, 73 stress analysis, 264 Swiss Federal Banking Commission, 224 syntactical analysis, 142 system, 26 system analysis, 36 system engineer, 27 system engineering, 26 system integration, 264 systematic risk, 82 systemic risk, 191 systems thinking, 38
tangible, 58 taxonomical system, 133 technologists, 4, 8
technology, 5 telecoms debt defaults, 130 term structure, 124, 125 test of hypothesis, 158 test of the mean, 158 test of the variance, 158 thinking, 6, 7, 8 TIAA/CREF, 278, 279 time, concept of, 9 time horizon, 120 time series, 9, 259 time until first failure (TUFF), 235 tolerances, 34 Torricelli, Evangelista, 54 Treadway Commission, 181, 186 Treadway, James Jr, 181 Truman, Harry, 161, 277 trust department, 49
UCLA, 120, 153 uncertainty, 50 uncertainty principle, 50 University of Lausanne, 52 value at risk (VAR), 101, 110, 120, 203–5, 209, 228, 232, 233, 237, 240, 241 variance/covariance, 215 Verkerk, Arjan P., 97 volatility, 36 volatility smile, 105, 164, 262 Volcker, Paul, 277 von Neumann, John, 11, 45 Warburg, Sigmund, 174 Watson, Thomas Sr, 142, 143 Weatherstone, Dennis, 234 Weibull, Walodi, 236 WestLB, 88, 89, 90 Wittgenstein, Ludwig, 21
zero sum game, 159 zero tolerance, 34 Zwicky, Fritz, 12