Improving Operational Risk Management
Improving Operational Risk Management

Written by Dr. ing. Jürgen H.M. van Grinsven

Jürgen van Grinsven works in the professional services industry. He helps financial institutions solve their complex risk management problems. In his opinion, solutions for risk management need to be effective, efficient and lead to satisfaction when implemented in the business (first line of defense). This book helps financial institutions to understand operational risk. Why? Operational risk does not lend itself to traditional risk management approaches because almost all instances of operational risk losses result from complex and nonlinear interactions among risk and business processes. Therefore, we need to deal with the issues in operational risk management. In this book, a highly structured approach for operational risk management is prescribed and explained. The approach can operate with scarce data and enables financial institutions to understand operational risk with a view to reducing it, thus reducing economic capital within the Basel II regulations.
For John Lueb †
© Copyright 2009 by Jürgen H.M. van Grinsven and IOS Press. All rights reserved. ISBN 978-1-58603-992-9 Second revised edition. First edition published by Jürgen H.M. van Grinsven in 2007. Cover design and photo: Wilfred Geerlings Dr. ing. Jürgen H.M. van Grinsven www.jurgenvangrinsven.com Tel: +31.6.15.586.586
Published by IOS Press under the imprint Delft University Press Publisher IOS Press BV Nieuwe Hemweg 6b 1013 BG Amsterdam, The Netherlands Tel: +31-20-688 33 55. Fax: +31-20-687 00 19 Email:
[email protected] www.iospress.nl www.dupress.nl Legal notice The publisher is not responsible for the use which might be made of the following information. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form by any means, electronic, mechanical, photocopying, recording, or otherwise, without the written permission of the author. Printed in The Netherlands
Key words

Basel II, operational risk management, risk assessment, scenario analysis, expert judgment, financial institution, collaboration, facilitation recipe, group support systems. All rights reserved.
Table of contents

1. ISSUES IN OPERATIONAL RISK MANAGEMENT ... 1
   1.1. Introduction ... 1
   1.2. Importance of operational risk management ... 3
   1.3. Benefits of operational risk management ... 5
   1.4. Difficulties and challenges in operational risk management ... 6
        1.4.1. Difficulties with loss data ... 6
        1.4.2. Difficulties with expert judgment ... 7
        1.4.3. Difficulties with techniques, technology and software tools ... 9
   1.5. Research objective ... 10
2. RESEARCH APPROACH ... 11
   2.1. Research motivation and questions ... 11
   2.2. Research philosophy and strategy ... 12
        2.2.1. Research philosophy ... 12
        2.2.2. Research strategy ... 12
   2.3. Research instruments ... 15
        2.3.1. Literature review ... 15
        2.3.2. Case studies and action research ... 15
        2.3.3. Research design ... 16
   2.4. Research outline ... 19
3. LITERATURE REVIEW ... 21
   3.1. Operational risk management ... 21
        3.1.1. Defining operational risk ... 21
        3.1.2. Dimensions of operational risk ... 24
        3.1.3. Management of operational risk ... 28
   3.2. Expert judgment ... 31
        3.2.1. Preparation ... 32
        3.2.2. Risk identification ... 35
        3.2.3. Risk assessment ... 36
        3.2.4. Risk mitigation ... 38
        3.2.5. Reporting ... 39
   3.3. Group support systems ... 39
        3.3.1. Patterns of group tasks ... 40
        3.3.2. GSS support capabilities ... 43
        3.3.3. GSS benefits and aspects ... 44
   3.4. Conclusions ... 49
4. OPERATIONAL RISK MANAGEMENT IN PRACTICE ... 51
   4.1. Bank Insure ... 51
   4.2. Work process ... 53
        4.2.1. Preparation phase ... 54
        4.2.2. Risk identification, assessment and mitigation phase ... 54
        4.2.3. Reporting phase ... 55
   4.3. Support ... 55
        4.3.1. Preparation ... 56
        4.3.2. Risk identification, assessment and mitigation ... 56
        4.3.3. Reporting ... 56
   4.4. Expert estimations ... 56
   4.5. Problems ... 57
   4.6. Starting points for improvement ... 58
5. MULTIPLE EXPERT ELICITATION AND ASSESSMENT ... 61
   5.1. Way of thinking ... 62
        5.1.1. View on operational risk management ... 62
        5.1.2. Nature of the design problem ... 64
        5.1.3. Position of the researcher ... 65
        5.1.4. Design guidelines ... 65
   5.2. Way of working ... 70
        5.2.1. Understanding phase ... 70
        5.2.2. Design phase ... 72
   5.3. Way of modeling ... 83
   5.4. Way of controlling ... 84
   5.5. Evaluation of MEEA ... 85
        5.5.1. Data sources ... 86
        5.5.2. Method of analysis ... 86
6. EMPIRICAL TESTING OF MEEA AT ACE INSURE ... 91
   6.1. Understanding Ace Insure ... 91
   6.2. Designing ORM processes for Ace Insure ... 95
   6.3. Results and discussion ... 108
   6.4. Learning moments ... 127
        6.4.1. Way of thinking ... 127
        6.4.2. Way of working ... 128
        6.4.3. Way of modeling ... 130
        6.4.4. Way of controlling ... 130
7. EMPIRICAL TESTING OF MEEA AT INTER INSURE ... 131
   7.1. Understanding Inter Insure ... 131
   7.2. Designing ORM processes for Inter Insure ... 138
   7.3. Results and discussion ... 148
   7.4. Learning moments ... 165
        7.4.1. Way of thinking ... 165
        7.4.2. Way of working ... 166
        7.4.3. Way of modeling ... 168
        7.4.4. Way of controlling ... 168
8. EPILOGUE ... 169
   8.1. Research findings ... 169
        8.1.1. Research question one ... 169
        8.1.2. Research question two ... 171
        8.1.3. Research question three ... 174
        8.1.4. Research question four ... 177
   8.2. Research ... 178
        8.2.1. Application domain ... 178
        8.2.2. Research approach ... 179
        8.2.3. Future research ... 180
REFERENCES ... 181
SUMMARY ... 203
SAMENVATTING ... 213
CURRICULUM VITAE ... 223
Preface and Acknowledgements

Operational risk is possibly the largest threat to financial institutions. The operational risks that financial institutions face have become more complex, more potentially devastating and more difficult to anticipate. The Credit Crunch indicates that operational risk does not lend itself to traditional risk management approaches. This is because almost all instances of operational risk losses result from complex and nonlinear interactions among risk and business processes. In this book we focus on an alternative to improve operational risk management that is more effective, efficient and satisfying. Many people and several organizations supported me in producing the content required for this second revised edition. Special thanks to drs. Wilfred Geerlings, ir. Jeroen Schuuring and dr. ir. Corné Versteegt for being friends and for the constructive discussions we had. Last, but certainly not least, I want to thank my family for their love and support.
Jürgen van Grinsven, Hedel, April 2009
List of abbreviations

BIS II     - New Capital Accord
CA         - Cronbach's Alpha test
CAS        - Corporate Audit Services
FI         - Financial Institution
GORM       - Group Operational Risk Management
GSS        - Group Support System
IS         - Information System(s)
KS         - Kolmogorov-Smirnov test
Loss data  - Recorded losses using a number of properties
MEEA       - Multiple Expert Elicitation and Assessment
OR         - Operational Risk
ORM        - Operational Risk Management
Pt         - T-test
Pwx        - Wilcoxon test
UML        - Unified Modeling Language
#          - Number
In these volatile times, what you want from risk management is a little less risk and a lot more management. The Chase Manhattan Bank
1. Issues in operational risk management
In this chapter, a number of issues in operational risk management are described. These issues, together with our initial thoughts on risk management (Grinsven, 2001), provide us with guidance and starting points for conducting this research.
1.1. Introduction

Operational risk management supports decision-makers in making informed decisions based on a systematic assessment of operational risk (Brink, 2001; Cumming & Hirtle, 2001; Brink, 2002; Cruz, 2002). In this chapter, operational risk management is defined as the preparation, identification, assessment, mitigation and reporting of operational risks in a financial institution and its context, and an operational risk is defined as any factor or event that could impact the financial institution's ability to meet its business objectives. Most business decisions in financial institutions are made under both risk and uncertainty (Murphy & Winkler, 1977; Brink, 2001; Turban, Aronson et al., 2001). A decision made under risk is one in which the decision-maker considers various feasible outcomes for each alternative, each with a given frequency of occurrence. This distinguishes risk from uncertainty, where the decision-maker does not know, or cannot estimate, the frequency of occurrence and the possible impact (Turban, Aronson et al., 2001). Decision-makers usually try to avoid making decisions under uncertainty as much as possible; rather, they try to acquire more information so that the problem can be treated under certainty or risk (Lyytinen, Mathiassen et al., 1998). Since most business decisions are made at the operational level, the management of Operational Risks
(OR) appears to be an important topic for financial institutions that want to survive in a competitive market (Clausing, 1994; Cruz, Coleman et al., 1998; Brink, 2001). Operational risks relate to people, processes, systems and external events (BCBS, 1998; Carol, 2000; Brink, 2001; Cruz, 2002; BCBS, 2003b). This broad range of factors makes it difficult for financial institutions to manage these operational risks (Anderson, 1998; Brink, 2001). Moreover, they face difficulties that are closely related to the compliance requirements of the New Capital Accord (BIS II) regulations: data collection, data tracking and a robust internal risk-control system, see e.g. (BCBS, 1998; Cooper, 1999; Cumming & Hirtle, 2001; Haubenstock, 2001; Harmantzis, 2003; Yasuda, 2003). BIS II emphasizes the importance of collecting comprehensive data for estimating a financial institution's exposure to operational risk (Bryn & Grimwade, 1999; Doerig, 2000; BCBS, 2003b). Regardless of the weight that institutions place on the data collection method, BIS II mandates that financial institutions must be able to prove that their data collection methods are robust and can be audited (Young, 1999; Harmantzis, 2003; BCBS, 2003b). Failure to comply can be addressed under BIS II with measures such as increased oversight, senior management changes and additional capital charges (BCBS, 1998; Cooper, 1999; Brink, 2001; Brink, 2002). These issues motivate financial institutions to improve their operational risk management in an adequate manner. Financial institutions, however, will make a trade-off between the expenses and benefits involved when improving their operational risk management. Developments in using expert judgment in various settings (Cooke, 1991) and the relatively low costs of using group support systems (Fjermestad & Hiltz, 2000) make it increasingly possible to improve operational risk management. In practice, however, financial institutions often rely on the completion of detailed questionnaires, open-ended interviews and manual group-facilitated workshops. Financial institutions have a difficult time deciding whether and how to proceed, and those that do carry out such initiatives indicate that they are often disappointed with the results, see e.g. (Cruz, Coleman et al., 1998; Finlay & Kaye, 2002). In this research, we focus on improving operational risk management by utilizing expert judgment and group support systems in the financial service sector. The remainder of this chapter is structured as follows. In section 1.2 we discuss the importance of operational risk management. The benefits are discussed in section 1.3.
Thereafter, we discuss the difficulties and challenges in section 1.4. Finally, the research objective is considered in section 1.5.
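Returning to the distinction between decisions under risk and decisions under uncertainty, the sketch below is a purely illustrative example (the process names and all figures are hypothetical, not taken from this book): when the frequency of occurrence and impact of each outcome can be estimated, an expected annual loss can be computed and alternatives can be compared.

    # Hedged illustration (not from the book): a decision under risk, where each
    # alternative has outcomes with an estimated frequency of occurrence and impact.
    # All process names and figures are hypothetical.

    def expected_annual_loss(outcomes):
        """outcomes: list of (annual_frequency, impact_in_euros) tuples."""
        return sum(frequency * impact for frequency, impact in outcomes)

    # Two hypothetical ways of running the same payment process.
    manual_process = [(12.0, 5_000), (0.5, 250_000)]    # frequent small errors, rare large one
    automated_process = [(2.0, 5_000), (0.8, 250_000)]  # fewer small errors, more system failures

    print(expected_annual_loss(manual_process))     # 185000.0
    print(expected_annual_loss(automated_process))  # 210000.0

Under uncertainty, the frequencies in this sketch would be unknown or could not be estimated, and such a comparison could not be made; this is why decision-makers first try to acquire more information.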
1.2. Importance of operational risk management

By the end of the 1990s many financial institutions increasingly focused their risk management efforts on operational risk management. This was mainly motivated by the volatility of today's marketplace, costly catastrophes, e.g. Metallgesellschaft, Barings, Daiwa, Sumitomo, Enron and WorldCom, and regulatory-driven reforms such as the New Basel Accord (Young, Blacker et al., 1999; RMA, 2000; Andersen, 2001; Haubenstock, 2001; Cruz, 2002; Harmantzis, 2003; Lum, 2003a; BCBS, 2003a; Seah, 2004). Three other dynamics that drive this change in focus are (Connolly, 1996; CFSAN & Nutricion, 2002; Karow, 2002; BCBS, 2003a):

• decentralization and employee empowerment: organizational structures that become flatter make decision-making authority more widely distributed across the financial institution, and more significant decisions are made at the operational level. This increases the need for management to understand the risk posed by these isolated decisions
• market pressure: forces financial institutions to broaden and adapt their product and/or service offerings to the rapidly changing market, thereby exposing the institution to greater risk
• e-commerce: has made business activities more transparent to the customer, while increasing the need to achieve speed to market with products and services, gain efficiencies in business processes, and allocate capital to activities that have a higher return/risk ratio
These changes are likely to increase the level of exposure to operational risk for financial institutions. Over the past few decades many financial institutions have capitalized on these dynamics and have developed new business services for their clients. On the other hand, the operational risks that these institutions face have become more complex (BCBS, 2001a; Yasuda, 2003), more devastating and more difficult to anticipate (Cruz, Coleman et al., 1998; Harmantzis, 2003). For example, providing e-commerce services to clients via the Internet, such as Internet banking, introduces operational risks that can harm the reputation of a financial institution. Table 1-1 illustrates this by presenting a number of publicized losses due to operational risk over almost two decades (Cooper, 1999; Kaur, 2002; Questa, 2002; Shyan, 2003; Lum, 2003a; Lum, 2003b). Note that these losses only represent the publicized losses; the actual losses due to operational risk are presumably a multiple of that amount.

Table 1-1: publicized losses due to operational risk in different sectors

Year | Organization           | Sector    | Losses ($) / (€)
2008 | Delta Lloyd            | Financial | €300 million by dissemination of incorrect information
2008 | Nationale Nederlanden  | Financial | €365 million by dissemination of incorrect information
2008 | Fortis ASR             | Financial | €750 million by dissemination of incorrect information
2003 | SMI management         | IT        | 1.5 million by a cheque scam
2003 | NASA                   | Aerospace | Space Shuttle Columbia
2002 | Allied Irish Bank      | Financial | 750 million by unauthorized trading
2002 | DBS bank               | Financial | 62,000 by online fraud
2002 | WorldCom               | IT        | 7.6 billion by inflated profits and concealed losses
2002 | Bank of America        | Financial | 12.6 million by illegal transactions (fraud)
2001 | HIH                    | Financial | Collapse
2001 | Enron                  | Energy    | 250 billion
2000 | Asia Pacific Breweries | Food      | 116 million by cheating banks
2000 | SembLog                | Logistics | 18.5 million by false invoices, bogus accounting entries
1995 | Barings Bank           | Financial | 1.4 billion by unauthorized trading
1994 | US Navy                | Defense   | 31 million, casualties
1994 | Metallgesellschaft     | Financial | 1.5 billion on oil futures
1986 | NASA                   | Aerospace | Space Shuttle Challenger
Table 1-1 illustrates that such losses are not isolated incidents; rather, they occur with some regularity in companies of all sizes and sectors. Given these high-profile events, it is not surprising that the financial service sector is increasingly aware of the commercial significance of operational risk management and dedicates significant resources to it (Bryn & Grimwade, 1999). Two reasons might underpin this: the New Basel Accord, with which financial institutions presumably have to comply by 2007 (Bryn & Grimwade, 1999; Cumming & Hirtle, 2001), and the fact that directors and managers increasingly face personal liabilities (Young, Blacker et al., 1999). Several important lessons can be learned from these publicized losses and the ever-increasing attention to operational risk management (Young, Blacker et al., 1999; McDonnell, 2002): (a) financial institutions have to realize that they have a responsibility, not only to their board of directors and management, but more importantly to their employees, shareholders and the taxpayer, to manage their operational risks (Cruz, Coleman et al., 1998; Cruz, 2002), and (b) awareness and dedication of resources to operational risk management can lead to significant benefits for financial institutions (Brink, 2001; Brink, 2002; Axson, 2003). The benefits of operational risk management are discussed in the following section.
1.3. Benefits of operational risk management

Research on the benefits of operational risk management yields a somewhat inconclusive picture. Although the most obvious benefit generally seems to be preventing catastrophic losses, other less obvious benefits are that it prevents rework and stimulates win-win situations. Table 1-2 presents the results of a survey among 55 leading financial institutions that was conducted by the Risk Management Association (RMA). The results indicate that these institutions attach most value to improving their performance (83%) and reducing operational losses (73%). The RMA study also indicates that these institutions expect to benefit the least (36%) from meeting the Basel regulatory requirements (RMA, 2000; Martin, 2003).

Table 1-2: expected benefits of operational risk management for financial institutions (RMA, 2000)

Expected benefit of operational risk management    | Relative importance (%)
Improving performance                              | 83
Reducing operational losses                        | 73
Increasing accountability and improving governance | 70
Protecting against loss of reputation              | 66
Meeting Sarbanes-Oxley requirements                | 52
Optimizing the allocation of capital               | 51
Combating the threat of business disruption        | 44
Meeting Basel regulatory requirements              | 36
Yet, results from another study among banks in the Netherlands indicate that Dutch banks expect to benefit most from reducing operational losses and optimizing the allocation of capital (Ernst & Young, 2003). Similar results were found in the military sector, see e.g. (Airforce, 1997; Scarff, 2003), the aviation sector, see e.g. (Bigün, 1995; FAA, 2000), and the industrial, technology, energy and governmental sectors, see e.g. (Andersen, 2001). Moreover, these studies also stress that operational risk management can: add value to the organization, support corporate governance requirements, give a clear understanding of organization-wide risk, help drive management accountability for risk, and allow managers to make informed decisions based on a systematic assessment of operational risk. These expected benefits make operational risk management an increasingly important subject for many financial institutions. In the past five to ten years operational risk management has evolved into its own scientific discipline. Not surprisingly, much of this literature is aimed at the financial service sector, see e.g. (Cruz, Coleman et al., 1998; Bier, Haimes et al., 1999; Brink, 2001; Brink, 2002; FSA, 2003; Medova, 2003). Yet, despite the progress made in operational
risk management, the next section indicates that financial institutions also encounter difficulties and challenges before they can utilize these benefits.
1.4. Difficulties and challenges in operational risk management

In contrast to the benefits, there are a number of difficulties and challenges that most financial institutions face. The major difficulties are closely related to the identification and estimation of the level of exposure to operational risk (Young, Blacker et al., 1999; Carol, 2000). Financial institutions can use loss data and expert judgment as input to estimate their exposure to operational risk (Carol, 2000; Brink, 2001; Cruz, 2002). Exposure to operational risk can be defined as: 'an estimation of the potential operational losses that the institution faces at a soundness standard consistent with a 99.9 percent confidence level over a one-year period' (BCBS, 1998; BCBS, 2003b).
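Purely as an illustration, such an exposure figure can be read as a high quantile of a simulated one-year loss distribution. The sketch below assumes a simple frequency/severity model with hypothetical parameters; it is not a calculation prescribed by Basel II or by this book.

    # Hedged illustration (not the Basel II calculation itself): estimating a 99.9%
    # one-year exposure figure as a high quantile of a simulated annual loss
    # distribution. The frequency/severity model and all parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n_years = 100_000

    # Poisson number of loss events per simulated year, lognormal severities.
    events_per_year = rng.poisson(lam=25, size=n_years)
    annual_losses = np.array([
        rng.lognormal(mean=9.0, sigma=2.0, size=n).sum() for n in events_per_year
    ])

    # Potential loss at a soundness standard of 99.9% over a one-year period.
    exposure_999 = np.quantile(annual_losses, 0.999)
    print(f"99.9% one-year exposure estimate: {exposure_999:,.0f}")

Whatever model is chosen, its inputs have to come from loss data, expert judgment or both, which is where the difficulties discussed next arise.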
1.4.1. Difficulties with loss data

Loss data forms the basis for the measurement of operational risk (Cruz, Coleman et al., 1998; Brown, Jordan et al., 2002; Hoffman, 2002; Ramadurai, Beck et al., 2004). Loss data are losses due to operational risk that are recorded using a number of properties. A distinction can be made between internal loss data and external loss data. Although internal loss data is considered to be the most important source of information (Brink, 2001; Haubenstock, 2001; Harmantzis, 2003), it is generally insufficient for measuring operational risk because:

• there is a lack of internal loss data: in the past many financial institutions avoided gathering internal loss data because of, e.g., economic reasons, strategic behavior or cultural issues, or because the objective was simply to identify and manage operational risk rather than to gather high-quality quantitative internal loss data (RMA, 2000; Hoffman, 2002; Harmantzis, 2003; Tripp, Bradley et al., 2004). Additionally, it is unlikely that a financial institution has experienced a sufficiently large number of loss events for the measurement of operational risk (O'Brien, Smith et al., 1999; Young, Blacker et al., 1999; Hiwatashi & Ashida, 2002)
• the internal loss data often has poor quality: even when a financial institution recorded internal loss data, the quality is usually too low because the losses are not associated with enough contextual information (Karow, 2002; Harmantzis, 2003). Moreover, the past is not necessarily a good predictor of the future (Toft & Reynolds, 1997).
To overcome these problems, financial institutions can increase the sample size by supplementing their internal loss data with external loss data. However, using external loss data for measuring operational risk raises a number of methodological issues such as:

• reliability: the reliability of external loss data is often too low because there might be a lack of similarity in, e.g., the size of the financial institution, business processes, scope and culture (O'Brien, Smith et al., 1999; Hulet & Preston, 2000; Frachot & Roncalli, 2002)
• consistency: external loss data often provides a simplified view of a complex situation and is subject to truncations and biases (Young, Blacker et al., 1999; Baud, Frachot et al., 2002)
• aggregation: it is often difficult to aggregate external loss data due to issues in, e.g., availability (Sih, Samad-Khan et al., 2000; Ernst & Young, 2003), quality (Carol, 2000; Hiwatashi & Ashida, 2002; Rosengren, 2003), relevance (Carol, 2000; Walter, 2004) and applicability (Harris, 2002).
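As a minimal sketch of what "recorded losses using a number of properties" can look like in practice, the record below uses fields that are typical of loss databases; the exact set of properties, the figures and the example events are assumptions for illustration and are not prescribed by this book or by Basel II.

    # Hedged illustration: a minimal internal loss-event record. The fields are
    # typical examples of the "properties" a recorded loss might carry; the exact
    # set is an assumption for illustration, not prescribed by the book or Basel II.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class LossEvent:
        event_date: date
        business_line: str        # e.g. "Retail banking"
        event_type: str           # e.g. "External fraud"
        gross_loss: float         # in euros
        recovered: float          # recoveries and insurance payouts
        description: str          # contextual information needed for later analysis
        source: str = "internal"  # "internal" or "external" (e.g. consortium data)

    events = [
        LossEvent(date(2008, 3, 14), "Retail banking", "External fraud",
                  62_000.0, 0.0, "Online fraud case"),
        LossEvent(date(2008, 7, 2), "Payments", "Execution errors",
                  15_500.0, 5_000.0, "Duplicate settlement"),
    ]
    total_net_loss = sum(e.gross_loss - e.recovered for e in events)  # 72500.0

The lack of contextual fields such as the description, and the incomparability of the source field across institutions, are exactly the quality and aggregation issues listed above.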
1.4.2. Difficulties with expert judgment

Expert judgment can be extremely important where internal loss data and external loss data do not provide a sufficient, robust and satisfactory input to estimate the financial institution's exposure to operational risk (Carol, 2000; RMA, 2000; Cruz, 2002; BCBS, 2003b). Expert judgment is defined as: "the degree of belief, based on knowledge and experience, that an expert makes in responding to certain questions about a subject" (Clemen & Winkler, 1999; Cooke & Goossens, 2004). Expert judgment is increasingly advocated in various sectors for identifying and estimating the level of uncertainty about risk. For the aviation sector see e.g. (Bigün, 1995; FAA, 2000), for the financial sector see e.g. (Muermann & Oktem, 2002; Ramadurai, Beck et al., 2004), for the IT sector see e.g. (Kaplan, 1990; Genuchten, Dijk et al., 2001), for the nuclear sector see e.g. (Cooke & Goossens, 2002), for the meteorology sector see e.g. (Murphy & Winkler, 1974) and for the defense sector see e.g. (Airforce, 1997). Moreover, expert judgment can be used as a means to incorporate forward-looking activities and to prevent financial institutions from catastrophic losses (Carol, 2000; Hiwatashi & Ashida, 2002; Chappelle, Crama et al., 2004). Expert judgment is usually elicited either individually, often referred to as individual self-assessments, or group-wise with more than one expert, often referred to as group-facilitated self-assessments. A self-assessment must not be confused with asking a single expert what he or she thinks is the financial institution's exposure to operational risk in a
particular business process or project (Coleman, 2000; Cruz, 2002). Even an experienced expert is highly unlikely to be able to foresee all the operational risks involved, given their multidimensional characteristics. Rather, it requires the judgment of multiple experts to provide a financial institution with the input to derive an estimate of exposure to operational risk at a given confidence level. Table 1-3 presents different examples of how expert judgment can be utilized in operational risk management, see also (Coleman, 2000; Brink, 2002; Finlay & Kaye, 2002).

Table 1-3: examples of utilizing expert judgment in operational risk management

Individual: completion of detailed questionnaires; completion of open-ended questions; interviews
Group-wise: group-facilitated manual face-to-face workshop; group-facilitated computer-supported face-to-face workshop
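To illustrate what combining the judgments of multiple experts can mean in its simplest form, the sketch below pools several (hypothetical) experts' estimates with an equal-weight average. The figures, the expert set and the weighting scheme are assumptions for illustration only; the elicitation and aggregation approach developed in this book (MEEA) is presented in later chapters.

    # Hedged illustration: pooling several (hypothetical) experts' judgments on one
    # operational risk with an equal-weight linear average. The figures, experts and
    # weighting are assumptions; the book's own elicitation and aggregation approach
    # (MEEA) is developed in later chapters.

    def pooled_estimate(judgments, weights=None):
        """judgments: per-expert (annual_frequency, impact_in_euros) estimates."""
        if weights is None:
            weights = [1.0 / len(judgments)] * len(judgments)
        frequency = sum(w * f for w, (f, _) in zip(weights, judgments))
        impact = sum(w * i for w, (_, i) in zip(weights, judgments))
        return frequency, impact, frequency * impact

    # Three experts' judgments on "settlement errors" in one business process.
    judgments = [(4.0, 80_000), (6.0, 50_000), (5.0, 65_000)]
    frequency, impact, expected_loss = pooled_estimate(judgments)
    print(frequency, impact, expected_loss)  # 5.0 65000.0 325000.0

Equal weighting is only one of many possible choices; how experts are selected, calibrated and weighted is part of what makes utilizing expert judgment difficult in practice.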
While Coleman (2000) indicates that 70% of the surveyed organizations use self-assessments, Andersen (2001) in another study indicates that 82% use individual self-assessments and 52% use group-facilitated face-to-face assessments. Similar results are found by Raft International plc (2002), who conducted a study among practitioners in the financial service sector across the globe. Their results indicate that 81% conduct self-assessments. Unfortunately, this study does not specify whether they are individual or group-wise. These findings are somewhat in line with GAIN (2004), which indicates that 51.2% use facilitated workshops and/or interviews. Moreover, another study conducted by Ernst & Young that focused on the Dutch banks indicates that 50% of the large-sized banks use individual self-assessments and 75% of the large- and medium-sized banks use group-facilitated face-to-face self-assessments (Ernst & Young, 2003). While individual self-assessments are currently the leading practice, the trend is more towards group-facilitated self-assessments (Andersen, 2001). Despite the high scores in individual and group-wise self-assessments, very few financial institutions deploy self-assessments to provide them with the input for an estimate of their exposure to operational risk (Carol, 2000; Brink, 2001; Finlay & Kaye, 2002). The reasons for this represent a multitude of challenges and include, but are not limited to, the following (Anderson, 1998; Young, Blacker et al., 1999; Carol, 2000; Finlay & Kaye, 2002; Karow, 2002; Ernst & Young, 2003; Harmantzis, 2003; Ramadurai, Beck et al., 2004; Tripp, Bradley et al., 2004):
• value of the output: the results from a self-assessment are perceived to be subjective, value laden and often of poor quality due to, e.g., organizational culture, internal politics and individual interpretations, which can lead to bias
• inconsistent use of self-assessments across departments and business units: this makes consolidation, analysis and aggregation of the data difficult
• static view of self-assessments: because assessments are time consuming and labor intensive, they are conducted with a relatively low frequency and therefore do not provide a dynamic view of the operational risks in a fast changing business environment.
1.4.3. Difficulties with techniques, technology and software tools

Techniques, technology and software tools can be used to support expert judgment activities in operational risk management (Hulet & Preston, 2000; Brink, 2001; Karow, 2002). Facilitation techniques, interviews, and software tools to record the results and provide reporting can support the expert judgment activities (Hulet & Preston, 2000; Haubenstock, 2001). Several difficulties with manual techniques such as interviews and questionnaires relate to not understanding the biases of interviewees, interfering with the conduct of the project, taking too much time and chasing changing data (Hulet & Preston, 2000). Further, a study among practitioners in the financial service sector indicates that 47% use manual techniques such as detailed questionnaires, open-ended questionnaires and interviews (Finlay & Kaye, 2002). The study further indicates that 35% use software tools that have been developed in-house. Moreover, the results indicate that a relatively high proportion of financial institutions develop in-house solutions (71%) as opposed to buying off-the-shelf solutions. Contrary to these results, GAIN (2004) indicates that only 4.2% of the software tools used by internal auditors are developed in-house. Further, this study indicates that 19.5% of the current practices are often not shared in the organization. Moreover, the results indicate that 22% are dissatisfied and 11% are very dissatisfied with the quality of their organization's information technology services. Eeten (2001) uses a group support system to support experts in a risk analysis and concludes that it is not possible to use a 'general' risk analysis; rather, Eeten suggests that several options are needed to meet the wishes of the organization. In general, it seems that it is hard to piece together data from multiple experts, and high volumes of data derived from multiple experts require coordination (Hulet & Preston, 2000).
1.5. Research objective

This research project is based on the main hypothesis that an approach to improve the process of utilizing expert judgment can improve operational risk management and deal with a number of issues mentioned in this chapter. We further believe that such an approach can help financial institutions whose internal loss data and external loss data do not provide them with a sufficient, robust and satisfactory input to estimate their exposure to operational risk. Moreover, we believe that such an approach can help to incorporate forward-looking activities to prevent financial institutions from catastrophic losses. To our knowledge, there were no such approaches available when we started this research. The issues described in the preceding sections provide the starting point for our research project and result in the following research objective: develop an approach to improve the process of utilizing expert judgment in operational risk management.

Scientific relevance

The scientific contribution comes from the results of our research project and can be used to provide insight into a number of issues: (1) how one can design and utilize expert judgment in operational risk management, and (2) how to make best use of the available techniques and supporting technology to support expert judgment activities in operational risk management. We believe this research contributes to the literature on operational risk management by providing detailed insights on how to utilize expert judgment. We also believe it contributes to the expert judgment literature by providing detailed insight into the application of expert judgment in the financial service sector. We further believe it contributes to the group support systems literature by providing insights on how to utilize software tools in the context of operational risk management.

Societal relevance

The societal relevance of this research project becomes clear when the benefits and challenges of ORM are viewed. The societal contribution is aimed at helping to solve issues involving the use of expert judgment in financial institutions to improve operational risk management. These issues hinder the further development of operational risk management and the implementation of expert judgment in many financial institutions.
Experience is a dear teacher, and only fools will learn from no other. Benjamin Franklin
2. Research approach
The research approach defines the strategy that is followed within this research, in which a set of research instruments is employed to collect and analyze data on the phenomenon studied, guided by a certain research philosophy.
2.1. Research motivation and questions

In chapter one it was argued that the financial service sector is a challenging domain for researching operational risk management. Despite the potential for using expert judgment, very few financial institutions utilize expert judgment to provide them with the input to estimate their exposure to operational risk. Our choice for financial institutions is grounded in the expectation that they will benefit significantly from improvements to expert judgment, see chapter 1. We formulate our research objective by combining the scientifically and societally relevant issues presented in chapter one: 'develop an approach to improve the process of utilizing expert judgment in operational risk management'. Based on the literature described in chapter one, we formulate the following research questions.

• Research question 1: what are the generic characteristics of utilizing expert judgment in operational risk management?
• Research question 2: what concepts can be used to improve operational risk management by utilizing expert judgment?
• Research question 3: what does an approach to improve operational risk management by utilizing expert judgment look like?
• Research question 4: how do we evaluate the improvements that were made to operational risk management? A part of this question is to identify the performance indicators of operational risk management.
2.2. Research philosophy and strategy

2.2.1. Research philosophy

A research philosophy underlines the way in which data on the phenomenon of interest is collected and analyzed (Orlikowski & Baroudi, 1991). When studying the application of information technology within organizations, a distinction can be made between a positivist and an interpretivist philosophy (Galliers, 1991; Trauth & Jessup, 2000). Interpretive research focuses on the complexity of human decision-making as the situation emerges, in contrast to positivist research that focuses on predefining dependent and independent variables (Janssen, 2001). Although a great deal of GSS research has been conducted within the tradition of a positivist research philosophy (Connolly, Jessup et al., 1990; Dennis, Nunamaker et al., 1991; Trauth & Jessup, 2000), recent Information Systems (IS) literature indicates an interpretivist movement. In this research project an interpretive philosophy is chosen. The objective of this philosophy is to join, in a coherent manner, knowledge that is gained only through social constructions such as contextual data, shared meanings and documents (Trauth & Jessup, 2000; Janssen, 2001). Interpretivists claim that reality can only be understood by subjectively interpreting observations of reality ('t Hart, Van Dijk et al., 1998), and they focus on the phenomenon of interest in its natural setting, maintaining that researchers have an impact on this phenomenon. We choose this philosophy because we think it is appropriate for investigating how we can improve operational risk management. It will be extremely difficult to answer our research questions without entering the reality of financial institutions, where multiple interpretations of this reality are possible.
2.2.2. Research strategy

A research strategy consists of a rough overall plan for conducting a research project (Meel, 1994), in which a sequence of activities is described (Herik, 1998). In research strategies, a distinction can be made between theory building and theory testing (Galliers, 1991). Strategies for theory building are often based on inductive reasoning and use qualitative research instruments, e.g. case studies and action research, for data collection (Meel, 1994; 't Hart, Van Dijk et al., 1998). Strategies for theory testing are often based on deductive reasoning and use
quantitative instruments such as laboratory experiments and field experiments ('t Hart, Van Dijk et al., 1998). The choice of a research strategy depends on the nature of the research problem and the status of theory development within the research field (Sol, 1982). Following Sol (1982) we argue that utilizing expert judgment in operational risk management represents an ill-structured problem because (1) there can be many financial institutions with various business units and actors involved who have opposing perspectives on improving operational risk management, (2) there are many alternative courses of action or solutions available to improve operational risk management and (3) the outcome of these courses of action cannot be evaluated by only using numerical data. Further, as argued in chapter one, the scientific literature on utilizing expert judgment in operational risk management was scarce at the moment we started this research. Therefore, we argue that it is very difficult to solve this ill-structured problem in a purely deductive manner because there is no appropriate theory available for improving operational risk management. We argue that it is imperative to inductively identify and explore the requirements of our approach in reality. Therefore, we propose an inductive-hypothetic model cycle as our research strategy. This strategy is based on Churchman's Singerian inquiring system, see e.g. (Churchman, 1971; Bosman, 1977; Sol, 1982). The main benefits of this inductive-hypothetic model are (Sol, 1982):

• it stresses the inductive specification, testing, and expanding of a theory
• it offers possibilities for interdisciplinary research
• it makes space for the generation of various solutions for a problem situation
• it emphasizes learning by considering analysis and synthesis as interdependent activities
The inductive-hypothetic research strategy consists of five activities: initiation, abstraction, theory formulation, implementation and evaluation, see Figure 2-1.
[Figure 2-1: inductive-hypothetic research strategy. The cycle runs through five activities: 1. initiation, 2. abstraction, 3. theory formulation, 4. implementation, and 5. evaluation and comparison, linking a descriptive empirical model, a descriptive conceptual model, a prescriptive conceptual model and a prescriptive empirical model.]
1. Initiation: one or more problem situations are selected for the initial study on the basis of a set of initial theories. A description of the relevant aspects of the problem situation is made using descriptive empirical models. This first step is primarily used to gain a better understanding of the problem domain.
2. Abstraction: the descriptive empirical models are abstracted into a descriptive conceptual model. This model can be used to describe all the relevant elements and aspects of the problem situation.
3. Theory formulation: a prescriptive conceptual model can be built from the descriptive conceptual model and the literature review. The theory formulated in this step should be able to solve the observed problems. Note that in the context of the inductive-hypothetic research strategy, the term theory is used in a broad sense, for example a set of guidelines, modeling concepts (Bots, 1989) or a way of working.
4. Implementation: the prescriptive conceptual model is implemented in one or more practical problem situations. The result of this step is a set of alternatives that should provide solutions to the original problems and is described in a prescriptive empirical model.
5. Evaluation: the theory is evaluated by comparing the descriptive empirical model and the prescriptive empirical model. Ideally, all observed problems are solved in the final prescriptive empirical model. Moreover, the inductively expanded theory can be used as initial theory in empirical situations to start a new cycle.
2.3. Research instruments

Research instruments are used to describe the way the data on the phenomenon studied will be collected and analyzed. The set of research instruments chosen depends on the research philosophy used, the amount of existing theory available, the nature of the research problem, aspects of the research objects and on the research questions (Yin, 1994). The primary research instruments used in this research project are the case study and action research; such instruments underpin an interpretive philosophy (Sol, 1982).
2.3.1. Literature review

We start with a literature review to establish a starting point, using our initial thoughts on risk management (Grinsven, 2001). Building on this, we decide to gain a deeper understanding of the problems by using the case study as an instrument. In this context we contrast the findings from the literature review with our experiences from the case study. The development of the theory is based on the literature review and the case. After the theory development, a comparison is made to support our findings.
2.3.2. Case studies and action research

Case study research is intended to investigate a contemporary phenomenon within its natural setting, especially when the boundaries between the phenomenon and its context are unclear (Yin, 1994). Case study research pays attention to the 'why?' and 'how?' questions (Yin, 1994; 't Hart, Van Dijk et al., 1998; Swanborn, 2000) and as such, it can be explanatory, descriptive, or exploratory (Swanborn, 2000). Further, it can be characterized as qualitative and observational, using predefined research questions (Yin, 1994). The main critique of case study research centers on criticism of interpretivist research instruments: it is argued that it offers little basis for scientific generalization as one cannot generalize from a single case, that researchers may have little control over their experimental conditions and that they may rely too much on subjective interpretation (Yin, 1994; 't Hart, Van Dijk et al., 1998; Swanborn, 2000). The major strengths of case study research are that it involves the most direct form of observation, it allows researchers to capture reality in greater detail and subjects forget that they are the subject of a research project and thus act naturally (Checkland, 1981; Galliers, 1991).
15
Action research can be seen as a subset of case study research (Galliers, 1992). In action research, the researcher actively participates in the application of theory and the testing of improvements ('t Hart, Van Dijk et al., 1998). Action research focuses on intervention and on actually designing the processes used in the phenomena under study. It can be used for theory building, testing and expanding (Galliers, 1991). The main critiques of action research are that it fails to fulfill the requirements for repeatability, experimental control and objective observation, and that it is sometimes seen as a permit for consultancy (Meel, 1994). The major strengths of action research are that it allows the researcher(s) to develop close relationships with the subjects and permits the simultaneous application and evaluation of theory (Galliers, 1991; Herik, 1998). Following Yin (1994) we argue that the case study and action research as research instruments closely fit our inductive-hypothetic research strategy because the phenomenon needs to be studied in its natural setting, the focus of our research should be on the process, i.e. on the 'how' and 'why' questions, and few previous studies have been carried out in our research area.
2.3.3. Research design

Consciously designing a research approach allows us to combine the strengths of the approach and helps us deal with the weaknesses of case study research and action research (Herik, 1998). Thus, following an inductive research strategy, we first formulate an initial framework (see chapters one and three) before beginning the descriptive case study (chapter four). This framework is used to guide the data collection process, confine the possible level of generalizability and help us avoid ending up with 'story telling' rather than theory building and testing (Yin, 1994). We use the following case study guidelines, as defined by (Yin, 1994; 't Hart, Van Dijk et al., 1998; Swanborn, 2000), to help us design our research strategy: site selection, choice of data collection instruments and method of analysis.

Site selection

A case study should be a conscious and formalized step in the design of a research project. Yet the choice of a case study is often based on opportunism rather than on rational grounds (Yin, 1994). Using our research field, questions and objective as a basis, we formulate the following criteria for selecting our case studies:
• the financial institution must already use, or plan to use, expert judgment in operational risk management
• the financial institution must attach high importance to improving operational risk management by using expert judgment
• the financial institution is interested in the added value of improving operational risk management
• the business processes and/or projects of the financial institution(s) involved must require multiple actors.
The number of sites used for case studies is an important issue in the design of a case study or action research (Yin, 1994; 't Hart, Van Dijk et al., 1998; Swanborn, 2000). Using multiple cases allows us to contrast and compare the results of individual cases and to build our theory irrespective of a particular organization. Based on our criteria we select three cases.

1. Bank Insure: its department Corporate Operational Risk Management (CORM) supports the business and management by making operational risks visible and by recommending improvements to processes to control identified operational risks.
2. Ace Insure: insures individuals against a loss of income should they become unfit for work.
3. Inter Insure: provides integrated financial products and services using insurance advisers, the Internet and the banking structure of which it is part.

Table 2-4 shows the nature of each case study, the primary instrument used and the chapter in which the report on the case can be found. Bank Insure is our first case, which is descriptive in nature; the instrument used is the case study. The aim of our first case is to sharpen our view on the problem domain and derive starting points for improvement. During the two test case studies, the researcher is actively involved in theory application and in the testing of improvements. Our aim in these test case studies is to: (1) show that our approach works according to the pre-established specifications and (2) evaluate whether there was an improvement when a comparison was made with the contemporary situation. Ace Insure was selected as our first test case study. We decide on a second test case study at Inter Insure to sharpen our approach for the improvement of operational risk management.
Table 2-4: case studies

Name         | Sector    | Chapter | Nature       | Instrument
Bank Insure  | Financial | 4       | Descriptive  | Case study
Ace Insure   | Financial | 6       | Prescriptive | Action research
Inter Insure | Financial | 7       | Prescriptive | Action research
Data collection instruments Qualitative and quantitative data collection instruments are used in this research project to create a rich picture of the phenomena of interest, contrast and compare the data. This is called triangulation (Yin, 1994; 't Hart, Van Dijk et al., 1998; Swanborn, 2000). The qualitative instruments included observation of the work processes in operational risk management, semistructured and open interviews, studying of organizational documents, meeting minutes, reviewing of relevant literature and observations made by researchers. The quantitative instruments include questionnaires, expert estimations and meeting logs of computer-supported meetings. Being aware of the strengths and weaknesses of these instruments (Yin, 1994; 't Hart, Van Dijk et al., 1998; Swanborn, 2000) we use the same data collection procedures for the test case studies to overcome the negative effects of using different instruments in different cases. The precise use of these procedures is explained in chapter five, six and seven. Method of analysis Following Meel (1994) a project plan is made before the start of each case study. This project plan describes the goal and steps to be taken in the project, the approach to be used and its execution. A report is written for these case studies and the results had been discussed with the problem owners of the participating financial institution and other researchers, see e.g. (Grinsven & Vreede, 2002). Several researchers were involved in our case studies. The input of these researchers is used to improve the validity of the case studies and to counterbalance the effects of subjective interpretations. Prior and additional to this research, a number of papers and articles were presented covering different aspects of our approach. These papers and articles are used to reflect on the approach we are developing. Our initial ideas on risk management and using software tools are presented in (Grinsven, 2001), guidelines for risk management are presented in (Grinsven & Vreede, 2002a; Grinsven & Vreede, 2002b), a repeatable process for risk management is presented in (Grinsven & Vreede, 2003a), using a group support system for operational risk management is 18
presented in (Grinsven, 2003; Grinsven & Vreede, 2003b), operational risk management as a shared business process is presented in (Grinsven, Janssen et al., 2005), risk management for financial institutions is presented in (Grinsven, Ale et al., 2006), a number of laws in risk are presented in (Grinsven & Santvoord, 2006) and collaboration methods and tools for operational risk management are presented in (Grinsven, Janssen et al., 2007).
2.4. Research outline
Figure 2-2 shows the outline of this research project. This outline follows the inductive-hypothetic cycle presented in this chapter, see Figure 2-1. Our research project starts in chapter one with an investigation and description of the relevant issues in operational risk management, our research objective, societal relevance and scientific contribution. The research approach, questions, philosophy, strategy and instruments are elaborated upon in this chapter. The theoretical background of this research project is described in chapter three. The initial theories presented in chapters one and three are used to guide our observations in the first case, which is presented in chapter four. Using the initial theories and the observations made in the first case study, we formulate an approach to improve operational risk management, which is presented in chapter five. In chapters six and seven the approach is applied to two test case studies and evaluated. Chapter eight concludes this research by summarizing the main results and offering suggestions for future research.
[Figure 2-2: research outline. 1. Introduction; 2. Research approach and 3. Relevant literature (initiation); 4. Bank Insure (abstraction); 5. Approach (implementation); 6. Ace Insure and 7. Inter Insure (evaluation and comparison); 8. Conclusions & epilogue.]
Genius is rare because the means of becoming one have not been made commonly available. Luis Alberto Machado
3. Literature review
Important concepts drawn from the literature on operational risk management, expert judgment and group support systems are discussed in this chapter. The initial theories presented in chapter one and the literature described in this chapter provide the starting point for our first case study, which is presented in chapter four. Literature on operational risk management is introduced in section one. Expert judgment is discussed in section two, wherein the important issues of using multiple experts are discussed. Finally, group support systems are discussed in section three.
3.1. Operational risk management
3.1.1. Defining operational risk
In order to accurately measure and manage risk, it is necessary to properly define it (Geus, 1998; Young, Blacker et al., 1999; Power, 2003). Risk for a financial institution is defined in terms of earnings volatility (Kuritzkes & Scott, 2002). Earnings volatility creates the potential for loss, which in turn needs to be funded. It is this potential for loss that imposes a need for financial institutions to hold capital that will enable them to absorb losses. Risk can be divided into two main sources of earnings volatility: financial risk and non-financial risk (Brink, 2001; Medova, 2003; Chappelle, Crama et al., 2004), see Figure 3-3. Financial risks are risks that a financial institution assumes directly in its role as a financial intermediary and these can be broadly classified into credit, market, asset/liability mismatch, liquidity and insurance underwriting risk, see e.g. (Carol, 2000; Medova & Kyriacou, 2001; Kuritzkes & Scott, 2002; Tripp, Bradley et al., 2004). In this research project we focus on non-financial risk. Non-financial risks arise when a financial institution incurs an operating loss due to
non-financial causes, see the section on dimensions of operational risk. Unlike financial risk, non-financial risk is common to all organizations (Carol, 2000; Kuritzkes & Scott, 2002). Moreover, a benchmarking study conducted by Oliver, Wyman & Company of ten large internationally active U.S. and European financial institutions shows that non-financial risk already accounts for 25-30% of the losses. Non-financial risk can be subdivided into internal and external event risk and business risk, see Figure 3-3.
[Figure 3-3: taxonomy of risk in financial institutions (Kuritzkes & Scott, 2002). Risk is divided into financial risk (credit, market, ALM) and non-financial risk (internal, external, business); operational risk is situated within non-financial risk.]
• Internal event risk: losses due to internal failures such as fraud, human errors, system failures, legal liability and compliance costs. A classic example of an internal risk is the bankruptcy of the Barings Bank as a result of unauthorized trading in a subsidiary; see Table one in chapter one.
• External event risk: losses due to an uncontrollable external event such as terrorism or natural disasters. An example of an external risk is the loss reported by the Bank of New York as a result of the terrorist attack on September 11, 2001.
• Business risk: residual risk not attributable to internal or external risk, such as a drop in volumes, a shift in demand or regulatory changes. A recent example is the enormous loss reported by Credit Suisse First Boston due to a collapse in investment banking activity.
We enumerate a number of definitions of operational risk (OR) to explicate our view on the main characteristics. Most definitions of an OR include possible causes and consequences; see Table 3-5 for an overview. An important debate is ongoing about defining loss, which can be direct or indirect. This debate is significant for financial institutions because it reflects the nature of the losses which are relevant to operational risk (Carol, 2000; Power, 2003)
and it restricts or extends the efforts that need to be undertaken to control operational risk (Cruz, Coleman et al., 1998; Cruz, 2002). A well-known and frequently used definition is: “the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events” (RMA, 2000; Medova & Kyriacou, 2001; BCBS, 2003b). This definition explicitly excludes indirect losses because they are difficult to measure in practice (RMA, 2000; Kuritzkes & Scott, 2002; BCBS, 2003b). Although this definition is well known, the semantic debate on defining an operational risk is still ongoing (Medova & Kyriacou, 2001). Table 3-5: definitions of operational risk
Author: Definition of OR
(BCBS, 2003b): The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events.
(McDonnell, 2002): Risks deriving from a company’s reliance on systems, processes and people. These include succession planning, human resources and employment, information technology, accounting, auditing and control systems and compliance with regulations.
(Karow, 2002): An operational risk is the risk of loss caused by deficiencies in information systems, business processes or internal controls as a result of either internal or external events.
(King, 2001): A measure of the link between a firm’s business activities and the variation in its business results.
(Medova & Kyriacou, 2001): A consequence of critical contingencies, most of which are quantitative in nature; many questions regarding economic capital allocation for operational risk continue to be open.
(BCBS, 2001b): An operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events.
(RMA, 2000): An operational risk is the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events.
(Doerig, 2000): Operational risk is the risk of adverse impact to business as a consequence of conducting it in an improper or inadequate manner and may result from external factors.
(Pyle, 1997): Operational risk results from costs incurred through mistakes made in carrying out transactions such as settlement failures, failures to meet regulatory requirements, and untimely collections.
Losses resulting from an operational risk can be either direct or indirect (RMA, 2000). Direct
losses often have a direct, visible influence on the profit and loss account of a financial institution (Brink, 2002; Hiwatashi & Ashida, 2002). Capital is needed to cover these losses (BCBS, 2003b). In contrast, indirect losses, e.g. reputational damage that might lead to a loss of customers or potential clients, may not be directly visible (Carol, 2000; Brink, 2002). Capturing indirect losses objectively is perceived to be more difficult, but indirect losses may be far higher than direct losses and may therefore be the most significant source of non-financial risk (Brink, 2001;
Brink, 2002; Hiwatashi & Ashida, 2002). Following this argumentation, we propose to use the following definition of an OR: “the risk of direct or indirect loss resulting from inadequate or failed internal
processes, people and systems or from external events” (RMA, 2000). It is important to mention that in this definition the internal processes include the procedure and the embedded internal controls (Brink, 2001; Brink, 2002). We define a loss as: the financial impact associated with an operational event.
3.1.2. Dimensions of operational risk
The definition of OR is based on the underlying causes of such risks and seeks to identify why an OR loss happened (BCBS, 2001c; Cruz, 2002). A causal based definition can be used by experts to identify, assess and manage operational risk (Brink, 2002). Figure 3-4 illustrates that a loss is caused by an operational event, which in turn is caused by four different factors: processes, people, systems or external events (BCBS, 1998; RMA, 2000; Brink, 2002; Hiwatashi & Ashida, 2002; Axson, 2003; Harmantzis, 2003; BCBS, 2003b). A recent study conducted by the Risk Management Association (RMA) indicates that processes (64%) and people (25%) represent the primary causes of an OR as compared to systems (2%) and external events (7%) (RMA, 2000; Harmantzis, 2003).
[Figure 3-4: dimensions of operational risk (RMA, 2000; Brink, 2002; Hiwatashi & Ashida, 2002). Processes, people, systems and external events cause an operational event, which leads to a loss.]
Processes
Employees working in financial institutions carry out business processes and apply internal control procedures to prevent operational risks from materializing. These business processes and procedures have become increasingly complex, especially in the financial service sector (Brink, 2002). People often apply these internal control procedures without even noticing it, for example when they enter a password to log on to the computer network, or when they leave the building in which they work by signing out with their employee card. Nevertheless, losses may
occur due to a wrongly designed procedure, a deficiency in an existing procedure, an absence of a procedure, or a wrongly executed procedure (Brink, 2002). Losses in this category can result from the errors that people make or failure to follow an existing procedure (Harmantzis, 2003).
People
Operational risks associated with people can be classified into: concentration problems, overtime, insufficient knowledge of products or procedures and fraud by employees (Cooper, 1999; O'Brien, Smith et al., 1999; Brink, 2002; Harmantzis, 2003). Whether we like it or not, people forget things or are unable to draw clear boundaries between their business and private lives (Brink, 2002). For example, the passing away of a dear friend may cause concentration
problems for the employee, resulting in errors (O'Brien, Smith et al., 1999; Brink, 2002). Structural overtime might also cause a higher rate of errors. For example, unfit employees typically have a higher proportion of mishandled transactions on a Monday and Friday, see e.g. (O'Brien, Smith et al., 1999) for an elaborate overview. With respect to people, insufficient knowledge may lead to three different patterns of behavior (Brink, 2001; Brink, 2002):
• the employee does not recognize that important knowledge is lacking and executes his task
• the employee recognizes that important knowledge is lacking but is uncomfortable explaining that he or she is not familiar with the task or situation
• the employee recognizes the lack of important knowledge and tries to take advantage of that lack.
In the first two cases, losses, e.g. claims or a loss of reputation, due to insufficient knowledge often occur unintentionally (Brink, 2002; Harmantzis, 2003). In the last case, losses caused intentionally are called fraud; this is especially so when internal procedures or control measures are implemented wrongly or do not correspond to reality. If a fraud case becomes public, the trustworthiness of a financial institution can be affected (Brink, 2002).
Systems
Losses stemming from systems can be caused by breakdowns in existing systems or technology (RMA, 2000; Harmantzis, 2003). Following Brink (2001; 2002) the operational risks associated with systems can be classified into: general risks, application-oriented risks and user-oriented risks. General risk can be disaggregated into risks caused by physical access to hardware, logical access to IT systems, change management, capacity management, emergency management and insufficient backup recovery measures. Application-oriented risk affects the quality of the
processed information directly. Attention should be paid to data recording, data storage, the inclusion of data in reports and calculations, and the timely availability of data. Finally, user-oriented
risks are strongly related to people risk, particularly in the area of communication between staff and computers. In particular, attention should be focused on the controls that are ultimately executed by staff members. If these controls are ineffective the processing of data may be affected (Brink, 2002).
External events
Losses occurring as a result of external events can be classified into external services and suppliers, disasters or criminal activities (Brink, 2002; Cruz, 2002; Harmantzis, 2003). Risks regarding external services and suppliers have become more important in recent years. Outsourcing can be a risk-mitigating measure and a source of risk at the same time. Risks may, for example, originate from not meeting a pre-specified service level agreement. Bankruptcy of a third party is an example of an extreme situation where an OR becomes an acute credit risk, see Brink (2002) for an elaboration. External criminal activities such as terrorism, misuse of websites, money laundering and internal and external fraud may cause losses for financial institutions.
Event
An event can be characterized by its frequency of occurrence and impact (Medova & Kyriacou, 2001; Kuritzkes & Scott, 2002; Chappelle, Crama et al., 2004). Note: it is important not to confuse an event with an external event as described above. Table 3-6 presents a possible classification scheme for mapping events.
Table 3-6: classification of events (Medova & Kyriacou, 2001; Kuritzkes & Scott, 2002)
                  Low impact                                  High impact
High frequency    Expected loss, e.g. data entry errors      Not applicable
                  or routine processing errors
Low frequency     Expected loss, e.g. branch robbery         Unexpected / catastrophic loss,
                                                             e.g. rogue trading or the 9/11 event
Financial institutions usually experience high frequency, low impact events, such as small cash losses and small payment errors, and sometimes low frequency, low impact events such as branch robberies (Medova & Kyriacou, 2001; Harmantzis, 2003). Financial institutions rarely experience low frequency, high impact events such as the rogue trade discovered in the Barings Bank. High frequency, high impact
events are assumed to be not applicable because repeated high losses would put almost any financial institution out of business.
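To make the mapping of Table 3-6 concrete, the following is a minimal sketch in Python of how events could be bucketed by frequency and impact; the thresholds and example figures are our own illustrative assumptions, not values taken from the literature.

    def classify_event(frequency_per_year, impact_eur,
                       freq_threshold=10.0, impact_threshold=1_000_000.0):
        """Map an event onto the quadrants of Table 3-6.

        The thresholds are purely illustrative; a financial institution would
        calibrate them to its own risk appetite and loss history.
        """
        high_frequency = frequency_per_year >= freq_threshold
        high_impact = impact_eur >= impact_threshold
        if high_frequency and high_impact:
            return "not applicable (would put the institution out of business)"
        if high_impact:
            return "unexpected / catastrophic loss"
        return "expected loss"

    # Examples: a routine data entry error versus a rogue-trading scenario.
    print(classify_event(frequency_per_year=250, impact_eur=500))
    print(classify_event(frequency_per_year=0.01, impact_eur=500_000_000))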
Loss
Losses can be divided into expected losses, unexpected losses and catastrophic losses (O'Brien, Smith et al., 1999; Brink, 2002; Kuritzkes & Scott, 2002; Medova, 2003; Chappelle, Crama et al., 2004), see Figure 3-5. This division has significant implications for operational risk management in the financial service sector (BCBS, 2001c; Ramadurai, Beck et al., 2004).
[Figure 3-5: loss distribution (Carol, 2000; Brown, Jordan et al., 2002). Frequency is plotted against impact: expected losses lie in the body of the distribution, unexpected losses extend up to a chosen confidence level, and catastrophic losses lie in the tail beyond it.]
• Expected losses (EL), resulting from e.g. data entry errors or a branch robbery and the operational risks associated with them, can be reduced by setting up internal control procedures. The costs of such procedures will be accounted for in the operations budget (Medova & Kyriacou, 2001; Cruz, 2002).
• Unexpected losses (UL), resulting from e.g. rogue trading, have to be covered using capital (BCBS, 2001c; Harmantzis, 2003; Medova, 2003). The aim of capital is to absorb unexpected swings in earnings (Kuritzkes & Scott, 2002) and to ensure the capacity of a financial institution to continue to operate (Medova, 2003) up to a particular confidence level (Ramadurai, Beck et al., 2004), also see chapter one.
• Catastrophic losses (CL), e.g. the bankruptcy of the Barings Bank, are any losses in excess of the budgeted loss plus the capital of a financial institution. Financial institutions usually attempt to transfer this type of loss through insurance (Harmantzis, 2003), which mitigates the resulting absolute losses for the institution.
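A minimal numerical sketch of this division, assuming a Poisson frequency and a lognormal severity purely for illustration (these distributional choices and all parameters are our own assumptions, not part of the approach described in this research):

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_annual_losses(n_years=100_000, events_per_year=20.0,
                               severity_mu=8.0, severity_sigma=1.5):
        """Simulate total operational losses per year.

        Frequency ~ Poisson, severity ~ lognormal: a common textbook choice,
        used here only to illustrate the EL / UL / CL division of Figure 3-5.
        """
        counts = rng.poisson(events_per_year, size=n_years)
        return np.array([rng.lognormal(severity_mu, severity_sigma, size=c).sum()
                         for c in counts])

    losses = simulate_annual_losses()
    confidence = 0.999                           # e.g. a Basel II style confidence level
    expected_loss = losses.mean()                # EL: covered by the operations budget
    loss_at_confidence = np.quantile(losses, confidence)
    unexpected_loss = loss_at_confidence - expected_loss   # UL: covered by capital
    # Losses beyond loss_at_confidence fall in the catastrophic region (CL),
    # which institutions usually try to transfer through insurance.
    print(f"EL = {expected_loss:,.0f}, UL = {unexpected_loss:,.0f}, "
          f"CL threshold = {loss_at_confidence:,.0f}")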
3.1.3. Management of operational risk
The management of operational risk (OR) is motivated by various factors including, but not limited to, catastrophic losses, the regulatory capital charge and corporate governance requirements (Oldfield & Santomero, 1997; BCBS, 2001c; Brink, 2002), also see chapter one. The management of operational risk involves a multitude of techniques that serve two main purposes: loss reduction and avoidance of catastrophic losses, see Figure 3-5. Some of these techniques help to reduce losses, some to avoid the event, and some both (BCBS, 2001c; Chappelle, Crama et al., 2004). The management of OR is economically bound, i.e. the expenses for risk-mitigating control measures should not exceed the operational risks to which the financial institution is exposed. Generally, there are four possible courses of action to manage OR (Oldfield & Santomero, 1997; Young, Blacker et al., 1999; Brink, 2002; Hoffman, 2002):
• accept the operational risk: accepting an OR is only compelling when the consequences are controllable and the impact is low. Internal control measures may go beyond the actual exposure to operational risk.
• avoid the operational risk: the practice of OR avoidance involves actions to reduce the chances of idiosyncratic losses by eliminating risks that are redundant to the institution's business goal (Oldfield & Santomero, 1997). Although according to Brink (2002) avoidance is equal to closing the business down, an OR can also be avoided by using simple business practices such as underwriting standards, hedges or asset-liability matches, diversification and due diligence investigation (Oldfield & Santomero, 1997; Cruz, 2002).
• transfer the operational risk: if the financial institution has no advantage in mitigating the OR, it can be transferred to a third party by outsourcing or insurance. Outsourcing should be accompanied by an adequate service level agreement. However, one should bear in mind that the regulatory authorities do not allow financial institutions to fully delegate all responsibilities (Chappelle, Crama et al., 2004). Insurance organizations exist for the claims issued by many financial institutions; they can buy or sell financial claims to diversify or concentrate the OR in their portfolios (Axson, 2003; Harmantzis, 2003).
• mitigate the operational risk.
In this research we further discuss the mitigation of operational risk because we believe that the other possibilities do not actually reduce the OR; rather, the risk remains.
Mitigation of operational risk
Operational risks can be mitigated by a set of internal control measures. However, these control measures will only function properly if the control environment in which they are embedded is established appropriately (Haubenstock, 2001; Brink, 2002). A control environment does not come into existence simply through being described in the financial institution's manual (Haubenstock, 2001; Brink, 2002). The following aspects are generally considered to be important for the implementation of a control environment (RMA, 2000; Brink, 2002; Cruz, 2002; Hoffman, 2002; Muermann & Oktem, 2002; BCBS, 2003a).
• Control awareness is considered an important factor in the control of OR. Senior management must be familiar with the consequences and with the causes of OR. They must have an active role in increasing the financial institution’s awareness of internal controls and OR. Someone must also be responsible for making other organizational levels aware (Haubenstock, 2001). The result should be that all management levels acknowledge the importance of internal control measures and implement them effectively. To achieve this, it is important to create a culture that encourages people to speak freely about controls and OR that worry them (Ramadurai, Olson et al., 2004). Cultural aspects such as the ‘tone at the top’, communication, clear ownership of objectives and knowledge sharing help to set the expectations for decision making (RMA, 2000).
• Result driven: desired results should be formulated as targets in a functioning control environment. Achievement of these targets must be checked regularly at the tactical and strategic level. The result should be an increased level of transparency, i.e. responsibilities are clear for the duties delegated. To achieve this, it is important that the same rules apply to all levels in the financial institution.
A set of internal control measures can mitigate the operational risk. The definition of internal control measures is broad and can consist of many components (Haubenstock, 2001). Following Brink (2001; 2002) we discuss those we consider important to this research project.
• Policies are general guidelines that describe how a financial institution covers certain areas. Policies form the basis for the description of procedures. Examples of policies are: a remuneration policy, an appraisal policy, an IT policy and a treasury policy. Procedures describe the way in which products or information are processed by the various business functions. Examples are: procedures for trading, and for the drafting of management information.
• Segregation of duties is based on the principles of division of labor and specialization (Simon, 1977). If employees specialize in particular activities they may achieve better results together than if they handle all the tasks themselves (Smith, 1776, 1963). Segregation of duties is based on conflicting interests of employees. For example, if the purchase and sales functions are combined in one function, the integrity of revenues cannot be guaranteed because there are possibilities for the staff member to sell without involving the organization. Well-known segregations are (Carol, 2000): front office versus back office, back office/front office versus accounting and controlling, and static data maintenance versus transaction processing. Brink (2002) identifies the following segregations: authorization, safeguarding, recording and execution.
• Dual control is different from segregation of duties because it means that two employees contribute to the performance of one task. As such, dual control can be an effective internal control measure to mitigate operational risk. Two implementations of dual control are well known. One, the data entry and verification function when transactions are being processed. Two, the instruction that two authorized staff members must sign important documents which make the institution legally responsible. Verification is, for example, important when an account number is entered wrongly in the system. As a result, all cash flows will be routed in the wrong direction and the institution is liable for all interest claims issued by the client.
• People represent a large portion of the primary causes of operational risk; therefore, recruiting reliable personnel is one of the most important internal control measures to mitigate operational risk. Reliable personnel who are involved in the institution will try to eliminate sources of error and will try to improve the procedures for which they are responsible. Guidelines such as background screening, checking of references and employee interview techniques form a basis for an adequate control environment.
3.2. Expert judgment
Expert judgment can be defined as: “the degree of belief, based on knowledge and experience, that an expert makes in responding to certain questions about a subject” (Clemen & Winkler, 1999). In this research project we consider the problem of utilizing expert judgment, individually and/or group-wise, in the context of operational risk management in financial institutions. In this context, expert judgment is used to provide input for e.g. reducing capital (Brink, 2002; Hiwatashi & Ashida, 2002; BCBS, 2003b), improving the performance of people, processes and systems (O'Brien, Smith et al., 1999; Brown, Jordan et al., 2002; Martin, 2003), optimizing the allocation of resources (Cruz, 2002; Adel, 2003; Rosengren, 2003) and preventing future losses (Bryn & Grimwade, 1999; Axson, 2003; Scarff, 2003). Moreover, expert judgment can be utilized in situations in which costs, technical difficulties, internal and external data issues, regulatory requirements or the uniqueness of the situation make it impossible to otherwise provide the input for measuring exposure to operational risk. Decision-makers want to take, and want to be perceived as taking, decisions in a rational manner (Cooke & Goossens, 2000; Goossens & Cooke, 2001; Cooke & Goossens, 2004). The question for us is: how can a decision-maker in a financial institution take rational decisions using expert judgment? Literature on expert judgment indicates that using a structured process provides better results than using unstructured processes (Clemen & Winkler, 1999; Arkes, 2001; MacGregor, 2001; Cooke & Goossens, 2002). Although numerous structured processes for expert judgment exist, see e.g. A Procedures Guide for Structured Expert Judgment (Cooke & Goossens, 2000; Goossens & Cooke, 2001), the Nominal Group Technique (Delbecq, Ven et al., 1975), and Delphi (Linstone & Turoff, 1975; Rowe & Wright, 2001), no single process is best in all circumstances (Harnack, Fest et al., 1977; Clemen & Winkler, 1999; Rowe & Wright, 2001; Cooke & Goossens, 2004). However, an understanding of all the pros and cons and key principles will help us to produce a well-founded approach for improving operational risk management and as such help decision-makers take decisions in a rational manner. Generally, the process of utilizing expert judgment in operational risk management can be divided into five phases: preparation, OR identification, assessment, mitigation and reporting, see Figure 3-6. Each phase can be carried out sub-optimally, wherein inconsistency and bias play an important role (Armstrong, 2001a). Inconsistency is defined as a random or a-systematic deviation from the optimal, whereas bias involves a systematic deviation from the optimal
(Bolger & Wright, 1994; Arkes, 2001; Harvey, 2001; Stewart, 2001). Following Cooke and Goossens (2000) we briefly explain the aim of each phase, what it represents and what the main activities are. Further, we present existing principles to carry out each phase and its activities as optimally as possible. The principles aim to minimize inconsistency and bias and are presented in a logical order.
[Figure 3-6: expert judgment process (Cooke & Goossens, 2000). The process runs from input to output through the phases preparation, risk identification, risk assessment, risk mitigation and reporting.]
3.2.1. Preparation
The preparation phase is used to provide the frame for the experts, taking into account that all the important activities prior to the expert judgment exercise should be considered (Goossens & Cooke, 2001). It is important to note that this phase must take place before the identification, assessment and mitigation phases are started (Harvey, 2001). The activities can be divided into: describe the context and objectives, choose the method, identify and select experts, define the experts’ roles, feedback and reporting, and the dry run exercise.
Describe the context and objectives
This activity is used to describe the problem, context and aim of the particular exercise. The background to the problem should be clearly stated (Harvey, 2001; Cooke & Goossens, 2002), as should what is to be expected from the experts, who should be familiarized with the major issues of the problem so that they have a common understanding (Clemen & Winkler, 1999). Relevant information should be provided on where the results of the exercise will be used
(Goossens & Cooke, 2001), and this phase should be used to help the experts to focus on the relevant information (Armstrong, 2001a). Several principles can be used during the preparation phase to carry out the context and objectives activity; they aim to ensure procedural consistency and may help to prevent experts with stakes in the outcomes from introducing biases. First, use checklists for all the relevant information; the aim is to improve consistency regarding the problem context and objectives (Hulet & Preston, 2000; Harvey, 2001). Checklists are useful because experts can rarely bring to mind all the information relevant to the task when they need to do so (Stewart, 2001). Second, checklists should be made from the accumulated wisdom within an organization. Examples are organizational documents, fault trees, computer programs and interviews with business experts (Harvey, 2001). Third, it appears to be important to select relevant and topical subjects that matter for the organization (Hulet & Preston, 2000; Weatherall & Hailstones, 2002).
Choose the method
This activity provides the rationale behind the selection of appropriate means to elicit and combine the results derived from multiple experts. These means can consist of methodologies and/or hardware and software tools to support the exercise. It must be clear why methods such as multiple expert elicitation (Rowe & Wright, 2001), methods for aggregating the results (Clemen & Winkler, 1999; Goossens & Cooke, 2001) and weighting methods for the experts and expert results (Cooke, 1991; Goossens & Gelder, 2002) are chosen. Several principles can be used to carry out the choice of method activity, all with the aim of ensuring procedural consistency and preventing experts with stakes in the outcomes from introducing biases. First, make a list of criteria, e.g. accurateness, timeliness, cost savings, anonymity, ease of interpretation, flexibility and ease of use, and use them to judge whether a method is suitable or not. Second, use these criteria to assess and select a method (Clemen & Winkler, 1999; Armstrong, 2001a). Third, use simple methods because they perform better than more complex methods: simple methods increase experts’ understanding, ease implementation, reduce the frequency of mistakes and are less expensive (Kaplan, 1990; Clemen & Winkler, 1999; Armstrong, 2001a). Fourth, match the method to the situation at hand (Arkes, 2001).
Identify and select the experts
This activity is used to provide the rationale for identifying and selecting the experts. This activity concerns the experts’ knowledge width (Stewart, 2001), gender (Karakowsky & Elangovan, 2001) and the number of experts to be used (Delbecq, Ven et al., 1975; Hulet & Preston, 2000). Several principles can be used for identifying and selecting experts, all of which are aimed at minimizing inconsistency and bias. First, identify all possible experts (Goossens & Cooke, 2001). Second, use experts with appropriate and disparate domain knowledge (Rowe & Wright, 2001) and process knowledge (McKay & Meyer, 2000). Third, list criteria, e.g. experience or reputation, diversity in background, interest in the project, availability and familiarity with the context, and then use these criteria to select experts (Cooke & Goossens, 2000). Fourth, the number of experts should vary between five and twenty, see e.g. (Rowe & Wright, 2001); in general, the largest possible number of experts, with a minimum of four, should be used (Goossens & Cooke, 2001). Fifth, where possible, use mixed-gender expert groups to avoid internal politics, see e.g. (Karakowsky & Elangovan, 2001).
Define the experts’ roles
This activity aims to denote the roles that each expert has in the process and sub-activities. It should be clear how they must act or interact with other experts in the specific problem situation at hand (Armstrong, 2001b). Several principles exist that can be used to support the definition of the experts’ roles, which mainly aim to counteract overconfidence (Arkes, 2001), groupthink (Janis, 1972) and conflict (Armstrong, 2001b; Dahlbäck, 2003). First, experts should take on roles that are similar to their normal working role (Armstrong, 2001b). Second, a facilitator with good facilitation skills and relevant business knowledge should be used (Hulet & Preston, 2000; Weatherall & Hailstones, 2002). Third, make use of a devil’s advocate during group interaction (Quaddus, Tung et al., 1998; Arkes, 2001). Fourth, make sure that all the roles are made explicit to the experts (Armstrong, 2001b).
Feedback and reporting
This activity describes all relevant information and result data that needs to be presented to the decision makers and to the experts in a formal report after completing all the phases (Cooke & Goossens, 2000; Goossens & Cooke, 2001). Feedback to the experts after the exercise has been
completed is found to be effective (Harvey, 2001) and enables experts to learn from their tasks (Bolger & Wright, 1994), although it is difficult to obtain feedback for some tasks that are executed by the experts (Arkes, 2001). Several principles exist that can be used to obtain feedback and produce documentation, which reduce both inconsistency and bias. First, keep records and use them appropriately to obtain feedback (Harvey, 2001; Rowe & Wright, 2001). Second, treat the results anonymously (Goossens & Cooke, 2001). Third, each expert should have access to feedback information, e.g. his/her assessment and passages in which his/her name is used (Goossens & Cooke, 2001). Fourth, remember that all feedback is valuable; do not belittle it (Arkes, 2001).
Dry run exercise
This activity provides the decision makers and a representative number of experts with the opportunity to try out all phases or the most important parts of the phases (Cooke & Goossens, 2000). An important principle for the dry run exercise is to fit the real situation as closely as possible in order to reduce inconsistency.
3.2.2. Risk identification
The risk identification phase is used to provide the frame for the experts, taking into account that the most important issues should have been considered. The issues can be divided into information gathering and information processing.
Information gathering
The information gathering activity is key to a good risk assessment (Hulet & Preston, 2000). It should be used to describe how the operational risks are gathered (Rowe & Wright, 2001), how questions are presented to the experts (Rowe & Wright, 2001) and how the information is organized (Stewart, 2001). Several principles exist that can be used for information gathering, aiming to reduce inconsistency and bias. First, organize and present the information in a form that clearly emphasizes all the relevant information (Hulet & Preston, 2000; Stewart, 2001). Second, use a small number of really important cues (MacGregor, 2001; Stewart, 2001). Third, in phrasing these cues, use clear, brief and balanced wording (Rowe & Wright, 2001). Fourth, start with identifying events anonymously and continue identifying events until the responses show
stability; generally, three structured rounds are enough (Rowe & Wright, 2001). Fifth, search for the causes of these events (Rowe & Wright, 2001; Brink, 2002; Hiwatashi & Ashida, 2002).
Information processing
This activity refers to the processing of the information by experts. This activity deals with the amount of information that can be processed when the number of cues cannot be limited (Stewart, 2001). Several principles exist that can be used for the information processing activity, which aim to reduce inconsistency and bias. First, limit the amount of information (Armstrong, 2001a). Second, decompose complex tasks into several simpler tasks (MacGregor, 2001), especially when uncertainty is high (Armstrong, 2001a). Third, use computers to support experts during information processing (Rowe & Wright, 2001; Stewart, 2001).
3.2.3. Risk assessment
The risk assessment phase aims to provide a decision-maker with assessment results derived from the experts. Experts usually assess identified risks by evaluating their frequency of occurrence and the impact associated with the possible loss (Keil, Wallace et al., 2000; Rowe & Wright, 2001; Brink, 2002). But how does one get the best assessment from experts? In some fields experts give good assessments, e.g. bankers and meteorologists (Murphy & Winkler, 1977; Hammitt & Shlykhter, 1999), physicians (Winkler & Poses, 1993) and industrial hygienists (Hawkins & Evans, 1989), while others, e.g. financial analysts, do not (Chatfield, Moyer et al., 1989; Bigün, 1995; Dechow & Sloan, 1997; Hammitt & Shlykhter, 1999). A number of studies have examined the accuracy of group judgments, see e.g. (Clemen & Winkler, 1999; Karakowsky & Elangovan, 2001; Rowe & Wright, 2001; Dahlbäck, 2003; Cooke & Goossens, 2004). Looking only at quantity estimation, which is relevant for operational risk management studies, the conclusion is that a group of experts is slightly more accurate than the average individual expert (Clemen & Winkler, 1999) and that the best individual expert in the group often outperforms the group as a whole (Hill, 1982). Consulting multiple experts can be viewed as increasing the sample size for providing the input to estimate a financial institution's exposure to operational risk. The issues can be divided into the assessment of operational risk and the aggregation of the results.
Assessment of operational risk
Experts usually first assess the identified operational risks by evaluating the frequency of occurrence and the impact associated with the possible loss (Hulet & Preston, 2000; Keil, Wallace et al., 2000; Brink, 2002; Weatherall & Hailstones, 2002). Several principles exist that can be used for the assessment of operational risk, aiming to reduce inconsistency and bias. First, decompose the problem into smaller, more manageable problems. Second, the facilitator should guide the experts and help them to make more accurate estimates from partial or incomplete knowledge. Third, enable the use of different sets of experts for different assessment tasks, thereby matching expertise to the task at hand (MacGregor, 2001; Armstrong, 2001a).
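As a minimal illustration of what such an assessment might produce, the following Python sketch turns assessed frequency and impact into an expected annual loss per risk and ranks the risks; the risk names and figures are invented for the example and are not taken from the cases in this research.

    # Illustrative frequency / impact assessments for three hypothetical risks.
    risks = {
        "data entry errors":    {"frequency_per_year": 200.0, "impact_eur": 300.0},
        "branch robbery":       {"frequency_per_year": 0.5,   "impact_eur": 50_000.0},
        "unauthorized trading": {"frequency_per_year": 0.01,  "impact_eur": 5_000_000.0},
    }

    def expected_annual_loss(assessment):
        """Expected loss = assessed frequency of occurrence x assessed impact."""
        return assessment["frequency_per_year"] * assessment["impact_eur"]

    # Rank the risks so the decision-maker sees the largest exposures first.
    ranked = sorted(risks.items(), key=lambda item: expected_annual_loss(item[1]), reverse=True)
    for name, assessment in ranked:
        print(f"{name}: expected annual loss = {expected_annual_loss(assessment):,.0f}")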
Aggregate the results
Although it is sometimes reasonable to provide a decision-maker with individual expert assessment results, it is often necessary to aggregate the experts’ assessments into a single one. This is founded on the fundamental principle that underlies the use of multiple experts: a set of experts can provide more information than a single expert (Clemen & Winkler, 1999). Aggregation methods can be divided into mathematical aggregation methods and behavioral aggregation methods (Clemen & Winkler, 1999; Goossens & Cooke, 2001; Cooke & Goossens, 2004).
• Mathematical aggregation methods: a single ‘combined’ assessment is constructed per variable by applying procedures or analytical models that operate on the experts’ individual assessments. The methods range from simple, e.g. an equally weighted average, to more sophisticated, e.g. Bayesian methods. For reviews of the literature on mathematical methods, see e.g. (Cooke, 1991; Clemen & Winkler, 1999; Harvey, 2001; Rowe & Wright, 2001).
Several principles exist that can be used for mathematical aggregation methods, all of which are aimed at reducing inconsistency and bias. First, use simple aggregation methods because they perform better than more complex methods (Winkler & Poses, 1993; Clemen & Winkler, 1999; Armstrong, 2001a). Second, use equal weights unless you have strong evidence to support unequal weighting of the experts’ estimations (Armstrong, 2001c); a simple sketch of both options follows this list. This is in contrast with Cooke and Goossens (2000; 2004), who advocate weighting the experts’ estimations. Finally, according to Armstrong (2001a) it is important to match the aggregation method to the current situation in the organization.
• Behavioral aggregation methods require experts, who have to make an estimate from partial or incomplete knowledge, to interact in some fashion (Clemen & Winkler, 1999; Cooke & Goossens, 2004). Some possibilities include face-to-face ‘manual’ group meetings or ‘computer supported’ group meetings. Emphasis is sometimes placed on attempting to reach agreement or consensus, or just on sharing information (Winkler & Poses, 1993; Goossens & Cooke, 2001; Rowe & Wright, 2001).
Several principles exist that can be used to facilitate behavioral aggregation methods, all of which are aimed at reducing inconsistency and bias, more specifically overconfidence (Heath & Gonzales, 1995; Arkes, 2001). First, use experienced facilitators to guide the experts through the assessment (Clemen & Winkler, 1999). Second, structure the experts’ interaction (Linstone & Turoff, 1975; Rowe & Wright, 2001). Specific procedures exist that can be used to structure and facilitate the expert interaction, e.g. the Delphi method (Linstone & Turoff, 1975; Rowe & Wright, 2001), the Nominal Group Technique (Delbecq, Ven et al., 1975) and Kaplan’s expert information technique (Kaplan, 1990). Third, use group interaction only when needed, e.g. to discuss relevant information (Clemen & Winkler, 1999). Fourth, when interaction is needed, use a devil’s advocate (Heath & Gonzales, 1995; Quaddus, Tung et al., 1998; Arkes, 2001) to feed the experts with additional challenging information (Stewart, 2001), also see the preparation phase. Fifth, enable experts to share information: when information is shared, it is expected that the better arguments and information that result will be more important for influencing the group and that information proved to be redundant will be discounted (Clemen & Winkler, 1999).
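The following is a minimal sketch of the mathematical route in Python; the expert names, estimates and weights are invented for the example, and the weighted variant is only an indication of the direction advocated by Cooke and Goossens, not their actual performance-based weighting scheme.

    import statistics

    # One estimate per expert of, say, the impact in euros of a given operational risk.
    expert_estimates = {"expert A": 40_000, "expert B": 55_000, "expert C": 70_000}

    # Equal-weight combination: the simple mean of the individual estimates.
    equal_weight = statistics.mean(expert_estimates.values())
    print(f"Equally weighted combined estimate: {equal_weight:,.0f}")

    # With performance-based weights the combination becomes a weighted mean;
    # the weights below are purely illustrative.
    weights = {"expert A": 0.5, "expert B": 0.3, "expert C": 0.2}
    weighted = sum(weights[name] * value for name, value in expert_estimates.items())
    print(f"Weighted combined estimate: {weighted:,.0f}")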
3.2.4. Risk mitigation
Risk mitigation usually involves identifying alternative control measures which aim to minimize the frequency of occurrence and/or impact of the operational risks (Keil, Wallace et al., 2000; Jaafari, 2001), re-assessing the operational risks by (again) evaluating their frequency of occurrence and the impact associated with the possible loss (Carol, 2000; Brink, 2002; Weatherall & Hailstones, 2002), assigning problem owners who are responsible for implementing the control measures (Hammitt & Shlykhter, 1999) and defining action plans (Weatherall & Hailstones, 2002). Several principles can be used for risk mitigation, aiming to reduce inconsistency and bias. First, each individual operational risk needs its own re-assessment and, as a result, a specific set of
alternative internal control measures, see (Hulet & Preston, 2000; Brink, 2002). Second, the implementation of control measures should be effective and efficient (Haubenstock, 2001). Third, control measures should stay within a reasonable relationship to the losses incurred in case of non-mitigated operational risks (Hulet & Preston, 2000; Brink, 2002). Fourth, the implementation of controls should start with awareness of control measures at all management levels. Fifth, the desired outcome must be formulated as a target, and the realization of these targets should be checked on a regular basis. Principles from risk identification and assessment also apply here, since some parts of the risk mitigation phase are identical, e.g. information gathering and processing and the aggregation methods.
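The third principle can be expressed as a simple check: a control measure is only worth implementing when the expected loss reduction it brings exceeds its cost. The following is a minimal sketch with purely illustrative figures.

    def control_is_justified(expected_loss_before, expected_loss_after, annual_control_cost):
        """A control measure stays in a reasonable relationship to the losses it
        mitigates when the expected loss reduction exceeds the cost of the control."""
        return (expected_loss_before - expected_loss_after) > annual_control_cost

    # Illustrative figures: a verification step that halves the expected loss.
    print(control_is_justified(expected_loss_before=120_000,
                               expected_loss_after=60_000,
                               annual_control_cost=25_000))   # True: the control pays for itself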
3.2.5. Reporting
Reporting concludes the exercise; in this phase all the relevant information regarding the problem and the data derived from the experts should be noted down in a formal report and presented to the decision makers and to the experts (Goossens & Cooke, 2001). It is important to note that this step should not be confused with the feedback and reporting activity in the preparation phase, which describes what is needed for the report. Although the level of reporting will depend on the requirements of the problem owners (Cooke & Goossens, 2000), several principles exist for this phase. First, check relevant reporting standards such as those issued by the Financial Services Authority or by the Basel Committee and apply them to your situation (BCBS, 2001c; Finlay & Kaye, 2002; FSA, 2003). Second, communicate all relevant (future) events and operational risks to the business line and higher management in order to help them understand and control the operational risks (RMA, 2000; BCBS, 2003b). Finally, there needs to be a structured process for feeding back the results to the experts in order to leverage the experiences gained (Cooke & Goossens, 2000; Cooke & Goossens, 2002; Goossens & Gelder, 2002) and maintain continuity (Weatherall & Hailstones, 2002).
3.3. Group support systems
A group support system (GSS) can be defined as a socio-technical system consisting of software, hardware, meeting procedures, facilitation support, and a group of meeting participants engaged in intellectual collaborative work (Jessup, Connolly et al., 1990; Vreede, Vogel et al., 2003). GSS are designed to improve group work (Vogel, Nunamaker et al., 1990;
Chapter 3 Nunamaker, Dennis et al., 1991; Herik, 1998; Vreede, 2000). Such improvements are achieved by using information and communication technology to further structure group members exchange of ideas, opinions and preferences (Herik, 1998; Fjermestad & Hiltz, 2001; Turban, Aronson et al., 2001). GSS are used in various settings e.g. face to face and/or distributed (Fjermestad & Hiltz, 2001), and the application areas are diverse, including for example: business process redesign (Vreede, 1995; Kock & McQueen, 1998), policy formulation (Herik, 1998), creativity sessions (Nunamaker, Applegate et al., 1988), strategic planning (Dennis, Tyran et al., 1997), process quality assessment and improvement (DeSanctis, Poole et al., 1992; Easley, Devaraj et al., 2003), software inspections (Genuchten, Cornelissen et al., 1998; Genuchten, Dijk et al., 2001), risk assessment (Weatherall & Hailstones, 2002), innovation processes (George, Nunamaker et al., 1992), project management (Romano, Chen et al., 2002) and coordination of distributed work (Laere, 2003). In this thesis we consider using a GSS in the context of operational risk management in financial institutions. Recent developments have seen Web based GSS become increasingly popular as a means to support the coordination of distributed groups (Qureshi & Vogel, 2001; Romano, Chen et al., 2002). These Web-based GSS allow experts to work from almost any location any time. These distributed groups are usually occupied in various projects and they may work on different tasks at the same time (Herik, 1998). A common situation is that groups work on the same task but individual members will work at different times and places. All these tasks always have some form of inter-dependency (Robbins, 1998) and have to be coordinated (Sol, 1982).
3.3.1. Patterns of group tasks
Most GSS offer support for a common collection of group tasks (Nunamaker, Dennis et al., 1991). Several taxonomies have been suggested to categorize these group tasks into easily communicable and distinctly supportable categories (Mennecke & Wheeler, 1993). The most widely used taxonomies are those of McGrath (1984) and Bostrom et al. (1992); the taxonomy of Briggs and Vreede (2001a) is more recent. McGrath's taxonomy is based on two dimensions (McGrath, 1984; Gallupe, 1990; Herik, 1998). The first dimension classifies tasks on the basis of outcome: cognitive or intellectual tasks. The second dimension distinguishes between the social participation of the members: collaboration and conflict resolution tasks. McGrath emphasizes
the function of technology as a means to structure meeting interaction and suggests four categories of activities:
• generate: idea generation and plan development
• choose: selection of a correct answer or a preferred solution
• negotiate: resolve conflicting viewpoints or conflicting interests
• execute: perform a competitive task or a task based on external standards.
The advantage of this task categorization is its explicit focus on the conflict versus collaboration aspect of group tasks: people's interests in group meetings are not always the same, which commonly results in mixed motive tasks (Herik, 1998). Further, McGrath has had a significant impact on the GSS field and recognizes the importance of participants' goals and interests. Bostrom et al. (1992) take a general goal attainment process as a starting point and suggest a division into four activities: generation, organization, communication and evaluation. Meetings often take place in the order given below, but other variations also exist (Herik, 1998).
• Generation: participants at a meeting can generate, anonymously or not, a large number of ideas in a relatively short time using the parallel input facility of the GSS. Practical applications vary from brainstorming on risks to generating alternative control measures.
• Organization: after a list of ideas has been produced using the idea generation facilities, ideas can be organized into clusters and further analyzed. More insight into the issue can be gained by structuring, clustering and/or categorizing.
• Communication: the GSS can also be used as a platform for information dissemination and exchange. Communication between participants is facilitated by shared access to the individual, often anonymous, responses given during the meeting and to external information sources such as a historical database.
• Evaluation: the evaluation task serves two main purposes: selection and evaluation. Based on voting tools, selection procedures can be designed creatively, e.g. to prioritize a list of risks.
The advantage of this classification is that it represents an elegant categorization that can be understood and communicated more easily than the categorization presented by McGrath.
Building on the above, Briggs and Vreede (2001a) argue that a group moves through some combination of patterns of collaboration when working towards a common goal. They present five patterns of collaboration: diverge, converge, organize, evaluate and build consensus.
• Diverge: to move from a state of having fewer concepts to a state of having more concepts. For example, when a group has to identify risks, e.g. by brainstorming, it moves from having a few risks to having more risks.
• Converge: to move from a state of having many concepts to a state of having a focus on, and understanding of, the few worthy of further attention. For example, when a group has identified 120 risks but not all of them can be taken into account.
• Organize: to move from less to more understanding of the relationships among concepts. For example, when a group has to classify risks into relevant impact areas, e.g. front office, back office, IT and headquarters. This will give them more understanding of the impact of the identified risks.
• Evaluate: to move from less to more understanding of the possible consequences of concepts. For example, when risks are assessed in terms of frequency of occurrence and impact, the group moves towards more understanding of the possible consequences of the risks.
• Build consensus: to move from having less to having more agreement on courses of action. This holds true when people disagree about e.g. the level of assessed risks. It might be the case that one stakeholder voted high impact while another participant voted low impact for the same risk. Building consensus then moves the stakeholders to more agreement by e.g. group discussion.
The advantage of this classification scheme is that it is expected to be more easily understood than the previous schemes and to be more easily implemented in practice (Lowry & Nunamaker, 2002a). Some researchers, e.g. Huber (1980), have tried to combine certain combinations of facilitation prompts, tools and their configuration into a facilitation recipe that can be used for a group task such as brainstorming. As such, a classification scheme based on patterns of collaboration and facilitation recipes appears to be useful to our research project.
3.3.2. GSS support capabilities
GSS offer a number of support capabilities, such as communication support, deliberation support and information access support, see e.g. (Nunamaker, Applegate et al., 1988; Briggs, 1994; Vreede & Briggs, 1997; Briggs, 1998). Following Herik (1998) we argue that a GSS facilitates communication and cognitive tasks, both for process and content, see Table 3-7. Communication refers to the support provided by electronic messaging between group members via networked PCs. Cognitive tasks by nature tend to be intellectually difficult tasks, such as diverging, converging, organizing, evaluating and building consensus. The process dimension refers to the structuring of well-prepared and scheduled activities. The content dimension deals with supporting the actual substance of the communication or the cognitive task.
Table 3-7: GSS capabilities in meetings (Herik, 1998)
                                  Content                                   Structure
Communication support or          Parallel communication; anonymity;        Process and agenda structuring;
information input, storage        problem visualization; electronic         process visualization
and access support                data storage
Cognitive support or patterns     Electronic voting; easy data              Structured analysis techniques;
of collaboration support          manipulation; convenient information      decision making techniques
                                  display; computational support
                                  e.g. calculator
GSSs offer communication content support by supporting parallel communication and allowing anonymous contributions to be made (Vreede & Briggs, 1997). Experts in a GSS meeting can interact simultaneously, thereby reducing airtime fragmentation (Briggs, 1994). Communication process support is offered by the computer communication platform that facilitates interaction between the participants. A communication structure, e.g. a strict agenda, can be imposed on this interaction for specific tasks, also see the next paragraph. GSSs offer support for cognitive tasks using the computational and information storage and retrieval abilities of a computer. All input entered by the participants is automatically and safely stored, creating a group memory which can be accessed at all times (Vreede & Briggs, 1997; Briggs, 1998). Further, a GSS can also be used to support the structuring of tasks, since group interaction often requires a step-by-step procedure to reach a desired goal, see the next paragraph.
3.3.3. GSS benefits and aspects
The GSS features described above can lead to a number of effectiveness, efficiency and satisfaction benefits for meetings (Herik, 1998; Briggs, Vreede et al., 2003). Yet, research into the effects of GSS on these benefits is not unequivocal; it is ambiguous and sometimes conflicting (Briggs, Vreede et al., 2001). In general, GSS seem to have a positive effect on group effectiveness (Genuchten, Cornelissen et al., 1998; Easley, Devaraj et al., 2003), efficiency (Vreede & Dickson, 2000; Fjermestad & Hiltz, 2001) and satisfaction (Mejias, Shepherd et al., 1997; Briggs & Vreede, 1997b; Reinig, 2003). Meta-analyses of lab experiments present mixed results, see e.g. (Dennis, Nunamaker et al., 1991; Dennis & Gallupe, 1993; Fjermestad & Hiltz, 2001). Field research results consistently paint a more positive picture (Nunamaker, Vogel et al., 1989a; Fjermestad & Hiltz, 2001; Genuchten, Dijk et al., 2001; Vreede, Vogel et al., 2003): using a GSS can save up to 50% of person hours and increase effectiveness by up to 80% when compared to regular meetings, while high levels of participant satisfaction are achieved (Fjermestad & Hiltz, 2001). However, the extent to which these beneficial effects occur appears to depend on a variety of variables (Valacich, Vogel et al., 1989). Improving effectiveness, efficiency and satisfaction can be pursued through the improvement of a variety of variables, which can include, but are not limited to: facilitation, goals, tasks, structure, group composition, group size and anonymity (Nunamaker, Dennis et al., 1991).
Facilitation Facilitation of a GSS meeting is perceived to be one of the most important variables for a high quality meeting outcome (Bostrom, Watson et al., 1992; Anson, Bostrom et al., 1995; Romano, Nunamaker et al., 1999; Vreede, Davison et al., 2003). Facilitation is a dynamic process that involves many functions, such as: managing relationships between meeting participants, designing meeting procedures, promoting ownership, presenting information to the group, selecting and preparing appropriate technology, structuring tasks and focusing the group on the need for a high quality product as the outcome of the meeting (Limayem, Lee-Partridge et al., 1993; Romano, Nunamaker et al., 1999; Hengst & Adkins, 2004). Recent research has recognized the importance of the facilitator and the focus is now increasingly on supporting the facilitator, see e.g. (Briggs & Vreede, 1997a; Briggs, Vreede et al., 2001; Briggs & Vreede, 2001a; Vreede, Boonstra et al., 2002). There are two main reasons for this.
•
The facilitator may be a bottleneck for the widespread diffusion of GSS (Briggs & Vreede, 2001a; Briggs, Vreede et al., 2003). Factors such as the high cognitive load that GSS users
face make it difficult for them to understand what the system is supposed to do for them. The need to deal effectively with system complexity, combined with organizational economics, means that experienced facilitators are scarce and increasingly difficult to keep in place (Briggs, Vreede et al., 2003). Moreover, several studies in which satisfaction was explored as one of the dependent variables when dealing with the effect of an intervention show that GSSs are often used with the help of professional facilitators (Limayem, Lee-Partridge et al., 1993; Anson, Bostrom et al., 1995; Mittleman, Briggs et al., 2000).
•
Distributed GSS meetings are becoming increasingly popular (Romano, Nunamaker et al., 1999; Hengst & Adkins, 2004) and such distributed meetings are more complicated for both the facilitator and the participants (Romano, Nunamaker et al., 1999; Mittleman, Briggs et al., 2000; Lowry & Nunamaker, 2002b). Facilitators, for example, lack the immediacy of feedback and the ability to monitor and control the meeting (Niederman, Beise et al., 1993; Romano, Nunamaker et al., 1999). Participants in distributed meetings have more difficulties in e.g. following the process of the meeting, lack non-verbal cues and experience more problems during convergence activities (Rutkowski, Vogel et al., 2002; Hengst & Adkins, 2004). Several studies, in which satisfaction with the process, the outcome and the GSS system was explored, have shown that distributed groups are less satisfied than face-to-face groups (Valacich & Schwenk, 1995; Burke & Chidambaram, 1996; Romano, Nunamaker et al., 1999). Other studies found that distributed groups are more effective and efficient for certain group tasks (Valacich, Nunamaker et al., 1994; Valacich & Schwenk, 1995).
Several ways to support the facilitator have been addressed in the literature, all of which can increase the effectiveness, efficiency and satisfaction of participants in (distributed) GSS meetings (Briggs, 1998; Hengst & Vreede, 2004). Some researchers place their emphasis on the technology and tools that are used, e.g. storing and retrieving shared information, making communication less expensive and providing tools such as pen-based interfaces and shared applications (Briggs, 1993; Andriessen, 2000; McQuaid, Briggs et al., 2000; Davison & Vreede, 2001; Rutkowski, Vogel et al., 2002; Easley, Devaraj et al., 2003). Other researchers provide guidelines for the facilitator, e.g. start with a kick-off session, establish goal congruence, decrease cognitive load, select tasks in which participants have high vested interests and contact participants directly (Vreede & Muller, 1997; Mittleman, Briggs et al., 2000; Vreede, Davison et al., 2003; Santanen, Briggs et al., 2004). Some researchers combine facilitation prompts, tools and their configuration into a facilitation recipe that can be used
for a group task such as brainstorming (Huber, 1980), and building on this work: (Vreede & Briggs, 2001; Briggs & Vreede, 2001a; Grinsven, 2007). McGoff et al. emphasize the critical pre-meeting design role of the facilitator (McGoff, Hunt et al., 1990).
Goals The goals of a meeting are important for any team project (Mittleman, Briggs et al., 2000). A goal can be defined as a desired outcome, the object or aim of an action (Locke & Latham, 1990). Research has shown the importance of clarity in setting goals (Vreede & Wijk, 1997a; McQuaid, Briggs et al., 2000). In general, most studies show that the lack of a clear goal often results in ineffective and inefficient meetings (Vreede & Muller, 1997; Herik, 1998; Vreede, 2000). Several ways to improve goal attainment have been addressed in the literature, all of which can increase the effectiveness, efficiency and satisfaction of (distributed) GSS meetings. Grohowski et al. (1990) argue that the pre-planning of meetings is taking on increased importance, and that it includes participant and tool selection as well as expectation management. Vreede et al. (2003) emphasize that in the pre-planning phase the following have to be identified: the goal of the meeting, the deliverables that the group is expected to create and use, the sequence of steps that has to be followed to create these deliverables, and the prompts and questions for each step. Vreede et al. (2003) further suggest that sub-goals should be pre-planned and spelled out clearly to the group before starting the session. Romano et al. (1999) suggest that the participants need to have a vested interest in these goals and that the goals must be measurable, so that progress toward them can be tracked. This is in line with Vreede et al. (1997), who suggest that group goals need to be congruent with the goals of individual group members.
Task Most GSS support the tasks of a group meeting (Nunamaker, Dennis et al., 1991). A task can be defined as the behavior requirements for accomplishing stated goals, via some process, using given information (Zigurs & Buckland, 1998). Complex tasks place high cognitive demands on the task performer (Campbell, 1988). Task and task complexity have received a lot of attention in GSS research (Pinsonneault & Kraemer, 1989; Dennis & Gallupe, 1993). Most of this research indicates that a lack of clear tasks often results in ineffective and inefficient meetings (Nunamaker, Briggs et al., 1997; Vreede & Muller, 1997; Herik, 1998; Vreede, 2000).
Several ways to improve task performance have been addressed in the literature, all of which can increase the effectiveness, efficiency and satisfaction of (distributed) GSS meetings. Some researchers have focused on applying structured procedures, e.g. providing instructions to group members for tasks (Bostrom, Anson et al., 1993; Vreede & Wijk, 1997a) and separating idea generation from evaluation (Delbecq, Ven et al., 1975). Other researchers emphasize encouraging effective task behaviours, e.g. discussing task procedures (Kaplan, 1990), and some emphasize encouraging effective relational behaviours, such as applying active listening techniques (Bostrom, Anson et al., 1993). Further, a number of studies suggest that GSS may be more appropriate for complex tasks than for simple tasks (Gallupe, DeSanctis et al., 1988; Dennis & Gallupe, 1993).
Structure The structure of a group meeting provides the road map of group processes and group interaction toward a common goal (Nunamaker, Briggs et al., 1997). A distinction can be made between process and task related structures (Nunamaker, Briggs et al., 1997). Process structure refers to process techniques or rules that direct the pattern, timing or content of communication. Task structure refers to techniques, rules or models for analysing task-related information to gain new insight (Nunamaker, Dennis et al., 1991). Structuring group meetings has received a lot of attention in GSS research. Generally, groups that used a structured GSS meeting were found to be more effective, efficient and satisfied with the process and the outcomes than traditional groups (Dennis & Gallupe, 1993; Ocker, Hiltz et al., 1996). However, a poor structure can be disastrous for a group meeting, especially in distributed sessions (Mittleman, Briggs et al., 1999; Lowry & Nunamaker, 2002b). Several efforts made by researchers to improve structure have been addressed in the literature, all of which can increase the effectiveness, efficiency and satisfaction of (distributed) GSS meetings (Bostrom, Anson et al., 1993). Designing an agenda and then imposing it on the group meeting is a form of process structure and is often suggested in the literature as a means to improve the structure of a group meeting (Nunamaker, Vogel et al., 1989b). An example of an agenda is to first diverge on alternatives and weigh factors, and then to evaluate these to get the best opinions. In this way, the agenda is used to structure a strictly applied time division over each subject (Dennis, Valacich et al., 1996). Another way to improve the structure of a group meeting is to use a task-related structure such as the Delphi method (Linstone & Turoff, 1975; Rowe & Wright, 2001; Cho, Turoff et al., 2003), the Nominal Group Technique (Delbecq, Ven et
al., 1975; Huber, 1980) and Kaplan's expert information technique (Kaplan, 1990). Further, Dennis and Gallupe (1993) show that the use of structure appears to be case specific; a structure that 'fits' the task can improve performance, but an incorrect structure for the task can reduce performance.
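As a purely illustrative aside, the sketch below (in Python) runs a strongly simplified Delphi-style procedure: anonymous estimates are collected, the group median is fed back, and experts may revise their estimates. It is not drawn from any of the cited studies; the function names, the revision behaviour and the numbers are assumptions made only to show how such a task-related structure can operate.

    # Simplified Delphi-style rounds (illustrative only): collect anonymous estimates,
    # feed back the group median, and let experts revise until estimates stabilise.
    from statistics import median

    def delphi_rounds(initial_estimates, revise, max_rounds=3, tolerance=0.05):
        """initial_estimates: one numeric estimate per expert.
        revise: callable(own_estimate, group_median) -> revised estimate (assumed expert behaviour)."""
        estimates = list(initial_estimates)
        for _ in range(max_rounds):
            feedback = median(estimates)
            revised = [revise(e, feedback) for e in estimates]
            # stop when no expert moves by more than the relative tolerance
            if all(abs(r - e) <= tolerance * max(abs(e), 1) for r, e in zip(revised, estimates)):
                return revised, feedback
            estimates = revised
        return estimates, median(estimates)

    # Hypothetical behaviour: each expert moves one third of the way toward the group median.
    revised, consensus = delphi_rounds([10, 14, 30, 12], lambda e, m: e + (m - e) / 3)
    print(revised, consensus)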
Group composition Group composition significantly influences group processes and has received a lot of attention in GSS research. Variables such as background, personal goals, education, age and gender seem to play an important role in group processes, see e.g. (Herik, 1998; Vreede, 2000; Karakowsky & Elangovan, 2001). Several ways to improve group composition have been addressed in the literature, all of which can be used to increase the effectiveness, efficiency and satisfaction of (distributed) GSS meetings. Since some participants may rebel, the facilitator should make sure they are well informed about the composition of the group (Vreede & Bruijn, 1999). Vreede and Briggs (1997) suggest selecting group members with a diversity of knowledge and experience. Fjermestad and Hiltz (2001) propose using professionals such as managers, senior managers or professional staff.
Group size Group size considerably influences group processes. The optimal group size in a face-to-face GSS supported group is usually found to be between ten and twenty (Dennis, Heminger et al., 1990; Dennis, Nunamaker et al., 1991; Dennis & Gallupe, 1993). This has been compared to a manual face-to-face group, which typically consists of 3-5 members (Shaw, 1981; Herik, 1998). A number of studies have shown that group size affects effectiveness, efficiency and satisfaction with GSS use. The results may be summarized as follows. In general, large face-to-face GSS supported groups outperform small unsupported teams (Nunamaker, Dennis et al., 1991; Cho, Turoff et al., 2003). Further, satisfaction increases with group size (Dennis, Heminger et al., 1990; Dennis & Gallupe, 1993), but overcrowded face-to-face groups do not satisfy their members with their decision-making process (Miller, 1950; Hackman & Vidmar, 1970). Larger groups benefit more from GSS use than do smaller groups (Dennis & Gallupe, 1993). However, as group size increases, asynchronous groups may have more difficulties in coordinating the work (Cho, Turoff et al., 2003).
Several ways to improve group size have been addressed in the literature, all of which can be used to increase the effectiveness, efficiency and satisfaction of (distributed) GSS meetings. In general, groups of varying sizes may collaborate asynchronously or synchronously and may move between these two modes during different phases of a project (Romano, Nunamaker et al., 1999). Vreede and Briggs (1997) suggest selecting the GSS technology appropriate to the group size and further suggest using groups larger than seven or eight, because these seem to benefit more from the GSS than do smaller groups.
Anonymity The principal effect of anonymity is a reduction of characteristics such as member status, internal politics, fear of reprisals (Grohowski, McGoff et al., 1990) and groupthink (Janis, 1972). The anonymity offered by a GSS can be used by group members to contribute anonymously (Nunamaker, Applegate et al., 1988). A great number of studies show that teams using anonymous GSS technology are more effective when they are allowed to enter both positive and negative comments (Nunamaker, Briggs et al., 1997). A study by Romano et al. (1999) indicates that anonymity requirements may be different for distributed sessions and not as important as for face-to-face sessions, since, for example, anonymity makes free riding easier. Several ways to improve the use of anonymity have been addressed in the literature, all of which can be used to increase the effectiveness, efficiency and satisfaction of (distributed) GSS meetings. One way is to introduce partial anonymity, e.g. by asking participants to use an alias or by introducing some verbal discussion (Nunamaker, Briggs et al., 1997). Another way is to use subgroups, so that participants know which group made a comment, but not who made it (Romano, Nunamaker et al., 1999). Further, it can help distributed participants to remember who is attending the meeting if the facilitator reflects the names of the attendees while facilitating (Mittleman, Briggs et al., 2000).
3.4. Conclusions Literature on operational risk management is discussed in this chapter. For the purpose of this research project, we define operational risk as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events (RMA, 2000). It is important to mention that in this definition the internal processes include the procedures and the embedded internal controls. We define a loss as: the financial impact associated with an operational event. Considerable attention in our discussion is given to
losses, caused by an event, which in turn is caused by four primary dimensions of an operational risk: processes, people, systems, and external events. We conclude that these events, together with the primary dimensions, can be used to identify operational risks. We further conclude that alternative control measures have to be identified to minimize the frequency of occurrence and/or the impact of the operational risks. An overview of the literature on expert judgment is presented in this chapter. From this literature it becomes apparent that operational risk management can benefit from expert judgment when the process is designed and used in a structured manner. Considerable attention is given to the activities and principles in the phases preparation, risk identification, risk assessment, risk mitigation and reporting. We conclude that important principles exist to carry out these activities as optimally as possible, thereby aiming to minimize inconsistency and bias. We furthermore conclude that these activities and principles are rather abstract and need to be further specified and applied to operational risk management. This will help financial institutions to arrive at an accurate estimate of their exposure to operational risk. In chapter four, a case study is used to analyze the activities and principles used in practice. Group support systems literature is also discussed in this chapter. From this literature it becomes apparent that operational risk management can benefit from Group Support Systems. Considerable attention is given to the support of group tasks, support capabilities and the most important variables that seem to influence the effects of using GSS. Several ways have been presented to improve these variables, thereby aiming to increase effectiveness, efficiency and satisfaction. Despite the fact that the causes of these effects are difficult to determine, we conclude that dedicated preparation, appropriate use of GSS technology and good facilitation can improve operational risk management. We further conclude that the variables and ways need to be specified in the context of operational risk management. In chapter four, a case study is used to sharpen our view on how expert judgment is utilized in the context of operational risk management within a large financial institution.
The fact is that bankers are in the business of managing risk. Pure and simple, that is the business of banking. Walter Wriston
4.
Operational risk management in practice
A case study is used to investigate Operational Risk Management (ORM) in practice. For the development of an approach to improve ORM it is important to study the problems in practice. As such, our underlying motives for this investigation are to derive starting points for the improvement of operational risk management and to sharpen our view on ORM. Bank Insure, a large financial institution, is selected as our first case study because it attaches high importance to the improvement of its ORM and is willing to cooperate in this research. We use our initial view on risk management and the literature review from chapters one and three to guide our observations in this study. The research method used is exploratory in nature. We use interviews, study several internal documents, search the Internet and analyze project plans. Then, we document our investigation in an initial report and interview two employees from Bank Insure by phone to provide us with their comments. Finally, the employees from Bank Insure approved this case study as a description of their contemporary situation.
4.1. Bank Insure Bank Insure is a financial institution offering banking, insurance and asset management to corporate and institutional clients. With a diverse workforce, Bank Insure comprises a broad spectrum of prominent companies that increasingly serve their clients under its brand. Key to Bank Insure is its distribution philosophy 'click–call–face'. This is a flexible mix of internet, call centers, intermediaries and branches with which Bank Insure can fully deliver what today's clients expect: unlimited access, maximum convenience, immediate and accurate execution, personal advice, tailor-made solutions and competitive rates. Bank Insure’s strategy is to achieve stable growth while maintaining healthy profitability. Bank Insure’s financial strength, its broad
range of products and services, the wide diversity of its profit sources and the good spread of risks form the basis for Bank Insure's continuity and growth potential. Bank Insure's shareholders, board, rating agencies, and its international¹ and national² regulators require that Bank Insure consistently and periodically identifies, measures and monitors the key operational risks that the business runs in achieving its objectives. The Risk Policy Committee of Bank Insure decided early in 2001 to set up the function Group ORM (GORM). This function exists next to functions such as internal control, business control and corporate audit services (CAS). GORM predominantly aims to support general management with raising operational risk awareness and creating early insight. Other important goals of GORM are: increasing operational risk and loss transparency, e.g. incident reporting; improving risk processes, e.g. to identify and control operational risks; preparing Bank Insure for Basel II; and improving GORM itself. The recommendations made by GORM to the management committees and the business units are mainly based on information derived from, for example, historical loss data, expert judgment, critical incidents and near misses. As stated in chapter one, in this research project we are particularly interested in how a financial institution utilizes experts' judgment to provide the input for estimating its exposure to operational risk. Before the actual expert judgment activities take place, there is an underlying motive for GORM to initiate an inquiry. The documents that we studied and the interviews we conducted indicate that it is important to have an understanding of these motives, because they can influence: the facilitation, the goal of the business process or organization under investigation, the selection of experts and the required number of experts for providing the input to estimate a financial institution's exposure to operational risk (also see chapter three). The motives to initiate an inquiry can be classified in the following categories:
•
ongoing system and process audit. This audit is initiated by GORM and is an ongoing process assessment, which occurs every four years for less important business processes and usually every year for important business processes. It is important to note that a system and process audit is time consuming and therefore usually takes place once every four years
¹ The international regulator is the Basel Committee on Banking Supervision. The committee is part of the Bank for International Settlements (BIS), an international organization that fosters co-operation among central banks and other agencies in pursuit of monetary and financial stability. The Basel Committee formulates broad supervisory standards and guidelines.
² The national regulator is De Nederlandsche Bank (DNB). DNB is represented in the Basel Committee on Banking Supervision.
•
compliance audits enforced by both internal and external rules and regulations. This activity takes place on a yearly basis. It is important to note that external rules dominate and that a compliance audit only takes place where it is required
•
signals from the business units: because GORM holds office in Bank Insure's subsidiaries, it is able to pick up signals that might lead to the start of an audit
•
information systems audits, these audits are specifically aimed at the IT platform and infrastructure
•
audits requested by the business unit management, board of directors, or GORM itself
•
special investigations e.g. fraud are usually requested by the management.
4.2. Work process We studied several internal documents and interviewed five employees from GORM to find out how relevant information was gathered, processed and how they made recommendations to business unit management and the relevant stakeholders. A distinction can be made between primary and secondary activities for information gathering and processing, see Table 4-8. The primary activities are concerned with the actual operating activities and the secondary activities are not directly related to the actual operating activities. Table 4-8: primary and secondary activities
Primary activities: Desk research; Interviews with initiators; Interviews with key experts
Secondary activities: Work assignment; Preparation; Integrating interview results; Reporting
The flow of the activities is depicted in Figure 4-7. We present the activities following the phases described in chapter three. The start represents the motive for an inquiry. After the start, the activities: work assignment, preparation, and desk research are carried out in parallel. The activities design questionnaire, interviews, integrating the interview results, and finalizing the report are carried out sequentially. Finally, a report with recommendations to business unit management and relevant stakeholders is made.
[Figure 4-7: flow of activities. Input, followed by the parallel activities preparation, work assignment and desk research, then design questionnaire, interviews, integrating interview results and finalizing the report, resulting in the report.]
4.2.1. Preparation phase First there is a Work assignment discussion; in this activity the work assignment is discussed between employees of GORM and the responsible manager. Second, a Plan is made for how to approach the problem. Third, the Scope of the project is determined and preliminary appointments are made with the initiators and key experts. Two employees of GORM usually do this and divide the tasks. Fourth, the activity Desk research is performed. In this activity, employees of GORM do fact finding on historical loss data, critical incidents, and near misses. Moreover, management information is analyzed and financial, audit and regulator reports are studied. Finally, using this information, a questionnaire is designed to guide initial interviews with the responsible managers, often referred to as initiators. The questions are based on the outcome of the desk research step and usually differ for each inquiry. The outcome of these interviews sharpens the inquiry.
4.2.2. Risk identification, assessment and mitigation phase These phases consist of two main activities. The first activity consists of Interviews with the
initiator(s). During these interviews, the initiators are asked about their perception of the operational risks their business is running. The initiators usually consist of one or more managers, who are often also the problem owners. During these interviews, the initiators are
confronted with questions and challenged with the relevant facts derived from the desk research activity. In the second activity, Interviews with the key experts, the key experts who work in the business process under investigation are interviewed. These experts are asked to voice their opinion about the operational risks. Usually four to six experts are interviewed, but when the scope becomes more complex, more experts are involved in the interview process. The interviews are usually conducted face to face and sometimes in a workshop.
4.2.3. Reporting phase In this phase, the activity Detail check is performed first. A detailed check is made to validate the interview results and the information found in the desk research activity. One employee usually does this and, when all the information is gathered, discusses the findings with his or her interviewing partner. Second, all the different views are integrated and presented in a Formal
report with recommendations, e.g. to business unit management. The formal report includes an assessment matrix, which sometimes consists of more than thirty pages. Summaries of all findings are presented to Bank Insure's board twice a year.
4.3. Support Group Operational Risk Management uses supporting software tools and techniques to help it carry out its operational risk management work process and activities. Although our main focus is on software tools that support expert judgment activities, we also observe that interview techniques and a devil's advocate are used to challenge the experts during the activities. We want to point out that simple software tools and techniques are used at Bank Insure to support the activities. The support for each step is mapped to the activities and is presented in Table 4-9. Table 4-9: supporting tools and techniques
Activity | Tool | Technique
Initiation / preparation | Phone and organizer | -
Work assignment discussion | Microsoft Word | Computer skills
Desk research | Loss database, Microsoft Word | Computer skills
Design questionnaire | Microsoft Word | -
Interviews | Phone, email and organizer; pen and paper | Interview techniques, devil's advocate
Integrating the interview results | Microsoft Word | Computer skills
Finalizing the report | Microsoft Word, Excel, email | Computer skills
4.3.1. Preparation A phone, organizer and Microsoft Word are used to support the planning and scoping of the project under investigation. The phone is used to contact the initiators and/or the experts, who are often distributed in time and place, e.g. experts who work in another subsidiary or part time. When contact is made, an organizer is mainly used as a support tool for scheduling. A special software tool, the loss database, is used to support the desk research activity. The loss database contains relevant information about historical loss data, critical incidents, and near misses. The information derived is then used to design the questionnaire.
4.3.2. Risk identification, assessment and mitigation A phone, organizer, email, pen and paper are used to support the interviews. The interviews with the initiators and experts are scheduled and then conducted face to face. A phone and email are used occasionally when it is not possible to travel to a geographically distributed location to conduct an interview.
4.3.3. Reporting For reporting purposes Microsoft Word and Excel are used to support the detail check and
integration of the interview results. Excel is used to calculate the estimated losses due to operational risks. Email is used for internal communication.
4.4. Expert estimations Table 4-10 shows a conservative estimation of the man-hours and throughput time needed to execute the primary and secondary activities. These estimations are based on using two GORM employees, one initiator and six experts. It is important to note that we do not count the time needed to schedule appointments with the initiators and the key experts, which is usually done by a secretary. We also did not include a feedback session where the interview results are presented to initiators and experts. The column, Activity, represents the activity which is performed by the employee(s). The column, Man hours GORM, represents the estimated time in man-hours needed for two employees of GORM to execute the activity. The column, Man hours
initiator/expert, represents the estimated time in man hours needed for the initiator and/or experts. The column, Throughput time in weeks, represents the total throughput time in weeks needed to execute the activities.
Table 4-10: expert estimations
Activity | Man hours Group ORM | Man hours initiator/expert | Throughput time in weeks
Initiation / preparation | 8 | 0 | 4
Work assignment discussion | 8 | 4 | -
Desk research | 8 | 0 | -
Design questionnaire | 8 | 0 | 1
Interviews | 18 | 9 | 3
Integrating the interview results | 8 | 0 | 1
Finalizing the report | 30 | 0 | 1
Total | 88 | 13 | 10
The conservative estimates show that preparation with an initiator immediately doubles the time for GORM, because two employees of GORM are involved in the interview. Further, an average interview with an expert takes 1.5 hours, not including travel time and losses due to e.g. coffee breaks; since these interviews are also conducted by two GORM employees, the time doubles here as well. Busy agendas of initiators and experts are an important reason why the average throughput time for the interviews is three weeks. Finalizing the report takes up an enormous amount of time because the interview results have to be validated and an assessment matrix of the estimated frequencies and impacts of the operational risks has to be made.
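To make the arithmetic behind these figures explicit, the following sketch (illustrative Python; the per-activity numbers are simply those of Table 4-10, together with the stated assumptions of two GORM interviewers and six experts at roughly 1.5 hours each) recomputes the totals. Activities with a throughput of zero weeks are those carried out in parallel with the preparation.

    # Illustrative recomputation of the effort estimates in Table 4-10.
    activities = {
        # activity: (man-hours Group ORM, man-hours initiator/expert, throughput in weeks)
        "Initiation / preparation":          (8,  0, 4),
        "Work assignment discussion":        (8,  4, 0),  # 2 GORM employees x 4 h with the initiator
        "Desk research":                     (8,  0, 0),  # runs in parallel with the preparation
        "Design questionnaire":              (8,  0, 1),
        "Interviews":                        (18, 9, 3),  # 6 experts x 1.5 h, doubled for 2 GORM interviewers
        "Integrating the interview results": (8,  0, 1),
        "Finalizing the report":             (30, 0, 1),
    }

    total_gorm = sum(v[0] for v in activities.values())     # 88 man-hours for Group ORM
    total_experts = sum(v[1] for v in activities.values())  # 13 man-hours for initiator/experts
    total_weeks = sum(v[2] for v in activities.values())    # 10 weeks throughput

    print(total_gorm, total_experts, total_weeks)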
4.5. Problems We interviewed five experts, of whom three were managers, made direct observations and studied six confidential internal reports to derive the current problems at Bank Insure. From this, it is concluded that the current way of utilizing expert judgment is not satisfactory for GORM, Bank Insure's management and the initiators from the business unit(s) involved. We present the problems in three categories: operational risk management, expert judgment and support, see Table 4-11. These categories are chosen because they correspond with the structure of our literature review in chapter three. As such it is easier to contrast our findings with the existing literature. We categorized the main perceived problems with the help of five experts, two from CAS and three from GORM, all of them employees of Bank Insure. Finally, we used two feedback rounds to validate the main perceived problems and to make sure no confidential information was used.
Table 4-11: main perceived problems

Operational risk management:
• The outcomes are often too biased for the initiators and managers to take effective decisions
• Initiators often feel punished by the interviewers: they feel that when they identify and assess operational risks that are not in line with the interviewers' findings, they receive a bad rating
• The total throughput time is too long to respond in time to possible operational risks and to gain benefits from possible opportunities
• Structuring the problem is recurring, difficult and time consuming

Expert judgment:
• Initiators and experts do not have a clear insight into the process of expert judgment
• Initiators and experts do not share the outcomes of the expert judgment process
• Initiators feel forced into political behavior to 'not' identify certain operational risks due to possible bad ratings
• Initiators often do not feel committed to the operational risks and controls found in the desk research activity

Support:
• Time and costs constrain all phases and activities
• Scheduling and involving the initiators and experts is a burden
• Manual processing of all the interview results causes long reporting times and many hours of manual labor
• The questions in the questionnaire need to be specified and scaled correctly for every new business process under investigation
Table 4-12 presents a summary of the research activities which are described in this chapter. Table 4-12: summary of the research activities
Research activity and output summary:
• Internet search: Bank Insure's background
• Interview with one employee: importance of ORM is addressed; Bank Insure uses manual self assessments; focus is on the process of the workshop
• Document study: Risk & Control Self Assessment framework document is studied
• Interview with two employees: motives for starting an ORM investigation; estimations of the ORM process
• Document study and interviews with four employees: discussion of phases and activities in operational risk management; key risk library is studied; identification of the main perceived problems; contemporary situation of ORM at Bank Insure
4.6. Starting points for improvement We identified a broad range of issues in operational risk management, expert judgment and support in this chapter. We can conclude from these issues that an approach to improve operational risk management could be of great use for financial institutions. Our research
objective is to develop an approach to improve the process of expert judgment used as input for estimating the level of exposure to operational risk. This does not mean that other issues are not important. In the following chapter we present an approach for improving operational risk management. Based on the literature review in chapters one and three and the issues identified in the inductive case study, we derived the following starting points for our approach.
•
For initiators and managers to take effective decisions, the results need to be free from biases and accepted by experts and initiators as well. It is important that the results are sufficient, reliable and robust to enable an accurate estimation of a financial institution’s exposure to operational risk. Our literature review and inductive case indicate that increased acceptance of the results can be achieved by reducing a number of biases that play an important role in using expert judgment to estimate exposure to operational risk. Further, one of the main reasons for a low acceptance of the results seems to be a lack of understanding about the benefits provided by ORM. Providing initiators and experts with these insights might help them to accept the results.
•
There is a need to formulate a clear ORM process to provide the initiators and experts with a detailed insight into the process and activities they have to perform. The process and activities must be easy to communicate, so using a consistent terminology might help them to create a shared understanding of the situation.
•
In order for ORM to add value to the institution, the process and outcomes should meet the sometimes conflicting goals of the institution, initiators, experts and stakeholders as closely as possible. It seems essential that all actors can identify themselves with ORM.
•
This case study taught us that scheduling initiators and experts, reporting time and labor costs hinder the success of the ORM activities. Possibilities to speed up the ORM process, e.g. by dynamic participation of initiators and experts or by faster reporting, might be found in the application of Information and Communication Technology (ICT) to support the ORM process and the activities performed by the experts.
•
Since business processes vary in complexity, the ORM process must be flexible. For example, a detailed identification of existing control measures might not be required for future business processes, while it can be required for existing business processes.
These starting points describe our approach in a rough sense and need to be specified. This is done in chapter five.
Every man is born with a live computer…but without the instruction manual. The most important job of science today is to draw up that manual. Luis Alberto Machado
5.
Multiple Expert Elicitation and Assessment
We formulated a number of important starting points for our approach in chapter four. Using these starting points, our literature review in chapters one and three, and our personal experiences, we present an approach to improve operational risk management, called Multiple Expert Elicitation and Assessment and abbreviated to MEEA. MEEA is structured following the framework of Seligman et al. (1989) and Sol (1990), see Figure 5-8. This framework has proven its value in many different application domains and research projects, see e.g. (Verbraeck, 1991; Meel, 1994; Eijck, 1996; Herik, 1998; Janssen, 2001). MEEA aims to help financial institutions to improve their operational risk management. MEEA consists of a way of thinking, a way of working, a way of modeling and a way of controlling. In the way of thinking we present our view on the problem domain of operational risk management and how we think that the specific elements of this domain should be interpreted, the nature of the design problem, the position of the researcher, and a number of design guidelines. The way of thinking predominantly determines the main beliefs followed in the way of working, modeling and controlling. The way of modeling and the way of working are closely related, and are therefore situated in a small dashed-line box, see Figure 5-8. In the way of working, we discuss the steps that need to be taken to deal with the issues and problems identified in the previous chapters and to improve operational risk management. In the way of modeling we discuss the modeling concepts that are constructed when following a methodology. In the way of controlling, we discuss the managerial aspects of the problem solving process. Finally, we discuss how we evaluate the improvements made to operational risk management. The main focus of this chapter is on the way of working.
[Figure 5-8: analytical framework for design methodologies (Seligman, Wijers et al., 1989; Sol, 1990): way of thinking, way of working, way of modeling and way of controlling, with the way of working and the way of modeling closely related.]
5.1. Way of thinking The way of thinking encompasses our 'Weltanschauung', which is a particular, non-absolute world image that we take for granted and through which we interpret reality (Churchman, 1971; Checkland, 1981). The way of thinking reflects our view on operational risk management (ORM), provides an underlying structure, sets the overall tone, delineates how we think that the specific elements should be interpreted and provides the design guidelines on which MEEA is based (Lohman, 1999; Janssen, 2001). The assumptions made in our way of thinking are closely related to our initial thoughts on risk management (Grinsven, 2001) and our first case study, see chapter four. Following Babeliowsky (1997) and Herik (1998) we address: our view on the problem domain, the nature of the design problem, our position as a researcher and a number of design guidelines.
5.1.1. View on operational risk management Design The objective of our research is to improve ORM. We aim to achieve this by developing an approach in which expert judgment is utilized. Design involves how artificial things with desired properties should be made and/or should be achieved (Checkland, 1981). According to
Churchman (1971), design is thinking behavior that abstractly selects, from a set of alternatives, the alternative that leads to the desired goal. Following Churchman (1971) and Cross (2000), we believe that, in design, three characteristics have to be taken into account. First, design attempts to differentiate between different sets of behavior patterns. Second, design attempts to estimate the fit between each alternative set of behavior patterns and a specified set of goals. Third, part of design is communicating thoughts to other minds.
Problem solving perspective We view the improvement of ORM from a problem solving perspective. The accent in our problem solving perspective is on the structuredness of the processes in which expert judgment is utilized. These processes can be viewed as a sequence of interrelated activities and each process forms the input for the next process. We argue that improving ORM can be considered as solving an ill-structured problem. For a definition of an ill-structured problem we refer to (Sol, 1982, p. 5). Many actors are involved in the design process, i.e. problem owners, experts and stakeholders. All these actors have their own goals and objectives. It is therefore difficult, or even impossible, for these actors to rationally find an optimal solution. Therefore, we follow Simon (1977) and use a bounded rationality view. We stress that it is not our goal to arrive at the solution; rather we aim for a model that is appropriate for the designer and leads to an acceptable solution for the financial institution(s) involved.
Structuring processes Following Clemen and Winkler (1999), we think that ORM can be improved by structuring processes in which expert judgment is utilized. A structured process can be viewed as an explicitly designed process of sequentially interrelated activities. This process can help the facilitator, initiators and experts focus on solving the relevant problem at hand. Two examples are given. First, the facilitator can use this process to help the initiators in the preparation phase to determine the exact goal and sub-goals, select the experts and determine the structure of the agenda. Without a structured process, the initiators can pursue the wrong goals, or select the wrong participants, which can lead to an ineffective and inefficient outcome. Second, the facilitator can use a structured process to elicit the opinions of experts in the risk identification, assessment and mitigation phase, in which the gathering, processing and aggregation of the information plays an important role. If a structured process is lacking, experts can wrongly identify and estimate the level of exposure to operational risk because they do not understand the operational risks and/or leave out important control measures in their estimation.
Expert judgment Expert judgment can be defined as the degree of belief, based on knowledge and experience, that an expert makes in responding to certain questions about a subject (Clemen & Winkler, 1999). We believe that utilizing multiple experts' judgment can improve ORM. First, the elicitation of multiple experts' judgment can be viewed as increasing the sample size, see section 1.4 in chapter one. Further, we believe that multiple experts are more likely than a single expert to foresee the operational risks involved, given the multidimensional character of operational risk. Second, utilizing multiple experts' judgment can enable a more precise assessment of a financial institution's exposure to operational risk, see chapter three. Finally, using multiple experts can make it possible to share the identified operational risks and control measures, which enables commitment to the outcome. Having this commitment is important because the control measures have to be implemented in the financial institution. The initiators and managers are responsible for ensuring that experts carry out such implementation tasks in their daily operations.
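As a purely illustrative aside, and not a prescription of MEEA, the sketch below shows one simple way in which multiple experts' frequency and impact estimates for a single operational risk could be combined, using an equal-weight linear opinion pool; the figures and the function name are hypothetical.

    # Minimal sketch: combining multiple experts' frequency/impact estimates
    # with an equal-weight linear opinion pool (illustrative only).

    def pooled_exposure(estimates):
        """estimates: list of (frequency per year, impact in EUR) pairs, one per expert."""
        n = len(estimates)
        mean_frequency = sum(f for f, _ in estimates) / n
        mean_impact = sum(i for _, i in estimates) / n
        return mean_frequency * mean_impact  # expected annual loss for this risk

    # Hypothetical judgments of four experts on one operational risk.
    experts = [(2.0, 50_000), (1.5, 80_000), (3.0, 40_000), (2.5, 60_000)]
    print(pooled_exposure(experts))  # pooled expected annual loss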
Group support Our literature review and inductive case indicate that ORM can benefit from group support. Group support can be used to further structure the sequentially interrelated activities such as information gathering and processing. Group support, in the broadest sense, can include e.g. facilitation techniques and recipes, group methods, and software tools. We believe that group support can be applied to all phases of ORM to support the information exchange between managers, initiators and experts. Group support can help speed up activities in which multiple experts need to gather and/or process information. Information gathering activities are activities such as identifying operational risks, and information-processing activities are activities such as assessing operational risks. These activities can be sped up through extensive agenda structuring and an increased focus on the activities. An increased focus on the activities can, for example, be achieved by using standardized facilitation recipes. We believe that such recipes can help the facilitator to structure the ORM process even further, thereby clarifying the activities for the experts.
5.1.2. Nature of the design problem Improving operational risk management is considered to be a complex activity because many actors are involved who have their own goals, goals that are sometimes conflicting. Moreover, there are many different interdependent processes, tasks and activities that have to be carried
out to arrive at a model that is appropriate for the designer and leads to an acceptable solution for the financial institution(s) involved. MEEA should support the designer, e.g. a risk manager, in reducing this complexity or dealing with it when it cannot be reduced.
5.1.3. Position of the researcher Our action research design, which was discussed in chapter two, largely determines the position of the researcher. We argue that this position can range from a prescribing role to an observant role. We further argue that there are limits to the action-research character of the cases presented here. Although we expect that both management and the expert participants determine the boundaries of an action researcher and influence the design, we argue that the researcher should actively participate in the application and testing of MEEA. Further, we think that it is difficult to impose a standard process for operational risk management; rather it should be prepared, discussed intensively and iteratively adapted, involving the important organizational stakeholders.
5.1.4. Design guidelines A number of important design guidelines for improving ORM are presented in this section. The design guidelines are broader than the way of working and as such deal with the philosophy behind it. We use these guidelines in our understanding phase, design phase and test case studies. The design guidelines vary in detail and are underpinned with references to the difficulties and challenges in ORM, see chapter one, our literature review, see chapter three, and observations in our inductive case, see chapter four.
Design guideline 1 The first guideline is founded in the ORM literature discussed in chapters one and three, and in our first case study. This guideline emphasizes the focus on compliance with relevant standards such as policies, legal requirements and best practices. Complying with relevant standards is an important aspect of ORM because it directly affects the competitive position of a financial institution (BCBS, 1998; Carol, 2000; Brink, 2001). Compliance with relevant standards is, however, not a clear-cut question; the distinction between qualified and not qualified is subject to a set of criteria to be assessed by internal and/or external auditors (BCBS, 2001c).
║ Comply with relevant standards when utilizing expert judgment in ORM. This first guideline can be used for the understanding phase and the design phase, see section 5.2. From our literature review and first case study we have learned that expert judgment is often utilized inconsistently throughout a financial institution. This can lead to outcomes that are difficult to compare, difficulties for the experts in the exercises, and a low chance of successfully repeating the same process. Several standards can be used when utilizing expert judgment. Examples are the standards provided by the Bank for International Settlements (BIS), the Australian and New Zealand Risk Management Standard (AS/NZS 4360: 1999), abbreviated to ANZ, and best practices in other industries, see e.g. (Cooke & Goossens, 2000; Hulet & Preston, 2000; Brink, 2001; BCBS, 2001c; Brink, 2002; Hoffman, 2002). Using such standards enables internal and external auditors to reassure management and the supervisory authorities that the processes in which expert judgment is utilized are well designed and adhered to. Although there is currently a strong push from supervisory authorities such as the BIS, we believe that the pull of business benefits should be the most important force for complying with relevant standards.
Design guideline 2 The second guideline is founded in the expert judgment literature discussed in chapters one and three and in our first case study. This second guideline anticipates human behavior in processes, activities and tasks. Decision makers want to take decisions, and want to be perceived as taking them, in a rational manner (Winkler & Poses, 1993; Clemen & Winkler, 1999; Cooke & Goossens, 2000; Goossens & Cooke, 2001). Because decision-makers often base their actions on the judgments of experts, the processes followed by these experts are important. From our first case study we learned that experts do not always share the results and support the outcome of these processes. Moreover, we believe that when the wishes, needs and motivation of individual experts regarding these processes change from time to time, decisions cannot be made in a rational manner. Given the level of complexity associated with human behavior, we argue that these processes need to be as rational as possible.
║ Ensure procedural rationality when utilizing expert judgment in ORM.
Procedural rationality is attainable if decision-makers, initiators, experts and stakeholders commit in advance to the process, method and tools by which multiple experts are elicited and their views combined. The process, method and tools need to take into account all possible inconsistencies and biases with which they are associated. Once committed to the method and tools used, it is impossible for decision-makers, initiators, experts and stakeholders to rationally reject the results without breaking this commitment. Following Cooke and Goossens (2000) we propose to use the following principles as conditions for achieving procedural rationality.
•
Scrutability/accountability: all data, including experts' names and assessments, and all processing tools are open to peer review and the results must be reproducible by competent reviewers.
•
Empirical control: quantitative expert assessments are subject to empirical quality controls.
•
Neutrality: the method for eliciting, combining and evaluating expert opinions should encourage experts to state their true opinions, and must not bias results.
•
Fairness: experts are not pre-judged prior to processing the results of their assessments.
Design guideline 3 The third guideline is founded in the GSS literature discussed in chapter three and our first case study presented in chapter four. This guideline emphasizes that building a shared understanding about the outcome is very different from building 100% consensus, especially in the case of operational risks. In our first case study we observed that many stakeholders, e.g. managers, initiators and multiple experts, are involved in ORM. It is therefore likely that this wide range of views cannot be drawn together into a 100% consensus about the outcome. We think it is important that these disparate views, e.g. of potential operational risks and control measures, are not lost in a drive to build 100% consensus.
║ Ensure that the processes in which expert judgment is utilized support the building of a shared understanding about the outcome between stakeholders. Building a shared understanding is possible when disagreements are not lost by averaging them out; instead, they should be continually revisited and explored. For this, several procedures can be used that require experts to interact in some fashion. Examples of such procedures are the Delphi method, Nominal Group Technique and Kaplan's information approach (Delbecq, Ven et al., 1975; Linstone & Turoff, 1975; Kaplan, 1990). This interaction
can include face-to-face group meetings, interaction supported by computers, or sharing information in other ways. From our literature review and first case study we observed that it is important to structure the interaction carefully using extensive facilitation.
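As a toy illustration of keeping disagreement visible rather than averaging it away, the following sketch (hypothetical Python; the threshold, names and figures are assumptions, not part of the guideline itself) flags a risk for renewed discussion when expert estimates diverge strongly.

    # Sketch: flag risks where expert estimates diverge, so the disagreement is revisited
    # instead of being silently averaged away (illustrative threshold).
    from statistics import mean, stdev

    def needs_discussion(estimates, threshold=0.5):
        """estimates: one expert judgment per element (e.g. impact in EUR).
        Returns True when the relative spread suggests the group should revisit the item."""
        m = mean(estimates)
        if m == 0:
            return True
        return stdev(estimates) / m > threshold  # coefficient of variation above threshold

    impact_estimates = [40_000, 45_000, 250_000, 50_000]  # hypothetical judgments on one risk
    print(needs_discussion(impact_estimates))  # True -> revisit and explore the outlier view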
Design guideline 4 The fourth guideline is founded in our literature review discussed in chapters one and three and in our first case study. This guideline is introduced to keep the complexity limited for the facilitator while the relevant issues regarding utilizing multiple experts are considered. Our literature review and case study indicate that it is imperative for financial institutions to periodically update their risk assessment, or at least a part of it, to see if plans should be changed. However, in our first case study we observed that this update is often carried out infrequently because of problems with e.g. budget, scheduling of experts, facilitation of the meetings and the reliability of the ORM processes in which experts are utilized. Moreover, we observed that these processes are often impractical and that experts are often used to justify an already existing view on operational risk rather than to improve business performance. We believe that when a financial institution only utilizes experts to justify an existing view on operational risk, it misses an important opportunity to learn from its mistakes and improve its business performance.
║ Make the process of utilizing expert judgment practical and flexible. The GSS literature suggests several ways to make the processes in which expert judgment is utilized more practical and flexible. First, emphasis can be placed on the technology and tools that are used. For example, tools such as pen-based interfaces and shared applications can make storing and retrieving shared information more practical (Briggs, 1993; Andriessen, 2000; McQuaid, Briggs et al., 2000; Davison & Vreede, 2001; Rutkowski, Vogel et al., 2002). Second, emphasis can be placed on providing guidelines for the facilitator. For example, guidelines such as: start with a kick-off session, establish goal congruence, decrease cognitive load, select tasks in which participants have high vested interests and contact participants directly, can make the processes more practical (Vreede & Muller, 1997; Mittleman, Briggs et al., 2000). Third, emphasis can be placed on using standard facilitation recipes to make the processes more flexible. For example, facilitation recipes such as thinkLets can be used to design processes in which multiple experts need to be utilized, see chapter three.
Design guideline 5 This fifth guideline is founded in the expert judgment literature review discussed in chapter three and the case study presented in chapter four. This guideline emphasizes the assignment of relevant roles. From our literature review we observed that roles represent different sources of information and accountability in ORM. Moreover, individual behavior is influenced by the role an individual has assumed, knowingly or unknowingly (Armstrong, 2001b). In our inductive case we found that various roles often need to be combined to describe a problem situation. Therefore, we believe that the assignment of roles in all phases of the ORM process can be considered important. A clear description and assignment of roles can help to understand the interaction between the decision-makers, initiators, experts and other stakeholders. Moreover, it can help the facilitator to have more control over the interaction between the initiators, managers and experts.
║ Consider the relevant roles in the processes in which expert judgment is utilized and assign them explicitly. Following Pulkkinen and Simola (2000), we propose using the following roles and basic functions to delineate responsibilities in ORM.
• "Decision-maker, presents the strategic view, status of the process, and the objective of the outcome, is responsible for the decisions based on the risk assessment, identifies and selects stakeholders, defines the resources needed in the process and provides the decision criteria.
• Referendary (equal to the initiator in our case study), selects the experts, describes the case, comments on the formats of the experts' judgments, takes part in the discussion, asks the opinion of stakeholders on the quality of results, accepts the summary report, and explains the content and conclusions to the decision-maker.
• Normative expert (equal to the facilitator in our case study), is an expert in expert judgment methods, is responsible for expert training, elicitation and combination of judgments, responsible for the elicitation of stakeholders' preferences and reporting, and draws conclusions.
• Domain expert(s), is familiar with the issue, responsible for the analysis of the issue and giving qualitative/quantitative judgments on it.
• Stakeholder(s), are affected by the decision, give feedback during the process, affirm the scope and completeness of the issue".
The assignment of roles can be initiated by the decision-maker and complemented by the initiator(s) who are accountable for the overall ORM project. The facilitator can help to identify and select the experts.
5.2. Way of working The way of working describes the process, activities and steps that need to be executed in the phases preparation, risk identification, risk assessment, risk mitigation, and reporting of operational risk management. Following Mitroff, Betz et al. (1974) and Sol (1982) we adopt a problem-solving perspective to achieve a better understanding of the problem situation and to define a set of possible solutions. Following Janssen (2001) we argue that the problem-solving cycle can be divided into an understanding phase and a design phase, see Figure 5-9. In the design phase, work is carried out in parallel on several activities at different levels of abstraction, and various iteration cycles are possible (Checkland, 1981). Below, we elaborate on the understanding and design phase and discuss the subsequent steps.
Figure 5-9: problem-solving cycle (Mitroff, Betz et al., 1974; Sol, 1982)
5.2.1. Understanding phase We present three steps for the understanding phase: conceptualize the problem situation, specify the problem situation, and validate the problem situation. We elaborate on each step below. These steps are summarized in Table 5-13.
Step one: conceptualize the problem situation The understanding phase starts with the conceptualization of the contemporary problem situation; the aim of this step is to arrive at a broad qualitative understanding of this problem situation. In practice, this means creating a first draft description of the real problems. The description of the contemporary problem situation includes an overview of topics such as the current problem issues, the goals, the ORM standards used, potential interruptions, and a list of all the possible stakeholders involved. Particular attention should be paid to the goal of the ORM exercise, which should be stated as clearly as possible. Another important aspect is the roles played by the stakeholders involved. Information about roles is important to select experts for the ORM exercise and to delegate tasks. Constructing a visual model of the contemporary situation can help in describing the elements of the problem. For example, the model and its description can address which activities are carried out by whom and in what order. An example of such a model is depicted in Figure 4-1. In this first step, attention should also be paid to the supporting techniques, technology and software tools used in the main phases of ORM.
Step two: specify the problem situation The second activity concerns the specification of a descriptive empirical model. In this activity, a model is constructed and empirical data is gathered. Using this model and data we can analyze and diagnose the main obstacles in the contemporary problem situation. Our first case pointed out that it should be clear what the motive is to start an operational risk investigation, see chapter four. We found that this motive can be 'intrinsic', i.e. come from the business unit under investigation, or 'extrinsic', i.e. come from outside the business unit. In our first case we observed that the motive underlies the commitment that the initiators have to an ORM investigation. Special attention should further be given to finding relevant facts by studying e.g. audit reports, financial reports, the internal operational losses in the database, and management information. Further, the description of the contemporary situation should include the perceived problems, which should be stated as clearly as possible. The problems should be made explicit in terms of content, e.g. inadequate insight into the operational risks, and in terms of a lack of alternative solutions, e.g. techniques and technologies to elicit multiple experts. The initiators and experts are needed to formulate these problems because they possess the necessary knowledge. Moreover,
the initiators and experts have to execute and implement the designed solution, for which commitment is often needed. Commitment to a solution is achieved by involving the initiators and expert participants in the specification of the problem (Herik, 1998). We observed that the goals of decision-makers, initiators and expert participants often show a discrepancy. In addition to step one, it is essential to formulate a clear and unambiguous goal before starting the ORM exercise. The goal should be formulated in terms of (1) tangible output, such as an estimation of the institution's exposure to operational risk and the applicability and quality of the results, and (2) intangible output, such as increased shared understanding and satisfaction with outcome and process.
Step three: validate the problem situation In the third activity, the correspondence test, we aim for a validation of the descriptive empirical model. Involving two or more initiators and several expert participants can help to validate the model. Ideally, experts who did not participate in the problem description should be selected. The validation can be achieved by walking through the model step by step to find out whether the model corresponds with their reality, solves the main problems, addresses the key operational risk areas, and is able to reach the stated goals. The validation step should conclude with a work assignment signed by the responsible manager. This helps to ensure procedural rationality.
Table 5-13: summary of the understanding phase
1. Conceptualize the problem situation. Points to consider: current problem issues, goals, ORM standards used, stakeholders; contemporary work process; supporting techniques and tools. How: group wise. Who: initiator, experts, facilitator.
2. Specify the problem situation. Points to consider: motives for starting ORM; internal data, external data (facts); explicitly defined problems; commitment; unambiguously defined goals. How: group wise. Who: initiator, experts, facilitator.
3. Validate the problem situation. Points to consider: correspondence with reality; solving the main problems; reaching the stated goals. How: individually with one or two experts. Who: experts, facilitator, manager.
5.2.2. Design phase We organized the design phase following the five phases of the expert judgment process, see chapter three. For each phase we present a number of steps in a logical order. Each phase and
step within that phase is underpinned with references to our literature review, observations in our first case study, way of thinking and design guidelines.
Preparation phase The preparation phase initiates the design phase and provides the framework for the experts, taking into account the most important activities prior to the identification, assessment, mitigation, and reporting of operational risks. The preparation phase is divided into the following steps: determine the context and objectives, identify the experts, select the experts, assess the experts, define the roles, choose the method and tools, try out the exercise, and train the experts. The steps in this phase may require several iterations. Each step aims to minimize inconsistency and bias. These steps are summarized in Table 5-14.
Step one: determine the context and objectives This first step is used to describe the context, relevant problems, and objectives of the particular ORM exercise in which expert judgment is utilized. Although these issues are for a large part described in the understanding phase we emphasize once more that determining the objectives of the exercise is crucial for the design and a successful execution. Often, the initiator suggests one or more objectives for the ORM exercise. Then, after several iterations, the initiator and the facilitator can determine the definite objective. We suggest consulting several experts in these iterations and selecting relevant topics and subjects that matter for the institution. To execute this step, we suggest using organizational documents, fault trees, computer programs and interviews with initiators and experts. The results of this step should be written up in a document and distributed to management and the experts in the expert training session, see step eight.
Step two: identify the experts This step is used to provide the rationale for selecting experts. It concerns a broad identification of a pool of experts using a predefined list of broad criteria such as the required breadth of experts' knowledge, gender, the number of experts to be used and internal politics; see also chapter three. Where possible, use mixed-gender expert groups to avoid e.g. internal politics and groupthink (Karakowsky & Elangovan, 2001). Although the expert judgment literature suggests that the largest possible number of experts should be used, the ideal number of experts supported by GSS can be between 10 and 20. Although a GSS can be used to support large expert groups, the group size might become a problem when executing convergence or
evaluation patterns. Therefore, we suggest using 15 experts as a rule of thumb for relatively simple patterns of collaboration and dividing the experts into subgroups when convergence or evaluation patterns need to be executed. Further, following Cooke and Goossens (2000) we suggest using a minimum of five experts.
Step three: select the experts This step is used to narrow down which experts should attend the exercise. The experts should be selected by the initiator(s). First, we suggest selecting experts primarily based on their substantive and process knowledge. Substantive knowledge relates to having knowledge of the science and the relevant facts. Process knowledge relates to having the necessary modelling, computational and analytical skills (McKay & Meyer, 2000). We expect that substantive knowledge is required to identify operational risks and control measures, and we expect that process knowledge is required to assess the operational risk in terms of frequency and impact. Second, we suggest fine-tuning the list of criteria to select experts. Criteria such as the availability of experts, commitment to the method, experience, reputation in the field, familiarity with the context, and interest in the ORM project can be used for this. Third, we suggest considering the added value of selecting more experts by making a cost-benefit analysis. Adding more experts does not necessarily mean a more precise estimation of a financial institution's exposure to operational risk. Moreover, using more experts is more expensive.
Step four: assess the experts Although it is sometimes reasonable to provide a decision-maker with the individual assessments of operational risks, it is often necessary to aggregate the assessments into a single one. The assessment of experts aims to weigh the performance of each selected expert so that his or her assessment of operational risks can be aggregated more accurately into one combined assessment. Moreover, it helps experts to calibrate their subjective sense of uncertainty against quantitative measures of performance. Following Cooke (1991) and Cooke and Goossens (2000) we suggest using performance variables to be assessed by the experts. Performance variables are questions to which the initiator and facilitator already know the answers. The questions need to be identified in advance and should cover the entire spectrum of issues in the ORM exercise sufficiently. In general, domain-related questions are expected to be the most meaningful because they are contiguous to the business process under
investigation. The performance of the experts can be measured in this way, enabling a more accurate aggregation of the individual assessments. We suggest assessing the experts prior to the risk identification phase.
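To make the idea of performance variables concrete, the sketch below shows one simple way to turn the error on such seed questions into expert weights. It is a minimal illustration in Python, not Cooke's classical model in full; the questions, answers and expert names are hypothetical.

```python
# Minimal sketch: weigh experts by their error on seed (performance) questions.
# All data is hypothetical; this is a simplification of performance-based
# weighting, not Cooke's classical model.

seed_truth = {"Q1": 12.0, "Q2": 0.5, "Q3": 200.0}   # answers known to the facilitator

expert_answers = {
    "expert_A": {"Q1": 10.0, "Q2": 0.6, "Q3": 180.0},
    "expert_B": {"Q1": 25.0, "Q2": 0.2, "Q3": 400.0},
    "expert_C": {"Q1": 13.0, "Q2": 0.5, "Q3": 220.0},
}

def relative_error(answers, truth):
    """Mean absolute relative error over the seed questions."""
    return sum(abs(answers[q] - t) / abs(t) for q, t in truth.items()) / len(truth)

# Smaller error -> larger weight; weights are normalized to sum to one.
raw = {name: 1.0 / (relative_error(ans, seed_truth) + 1e-9)
       for name, ans in expert_answers.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name}: weight {w:.2f}")
```

Weights derived in this way can later be reused when the individual assessments are aggregated in the risk assessment phase.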
Step five: define the roles This step aims to denote all the roles necessary to accomplish the activities in the ORM phases. The definition of roles can be used for: transferring knowledge to experts in our test cases (see step eight), constructing the building blocks for our explicitly defined process of sequential interrelated ORM activities, and quickly modeling processes in which expert judgment is utilized, such as in our test cases. The definition of roles aims to counteract overconfidence, groupthink, and conflict, see chapter three. Moreover, it enables us to add new tasks to existing roles. We suggest that the following roles are essential: manager, initiator, expert, and facilitator. The manager is usually responsible for a business unit or department, and is expected to have the least active role during the execution of the phases and/or processes. He or she is mainly responsible for the work assignment, see step three of the understanding phase. This actor should not interfere in the ORM process for reasons of objectivity, assurance, and commitment to the outcome. The initiator receives the work assignment from the manager and actively participates in the design phase. We suggest using experts who have substantive and process knowledge. An expert is commonly someone who is regarded and accepted as more knowledgeable than others. Besides the initiator, we suggest inviting one or two experts to participate in the design phase to validate the final ORM process. The facilitator is involved in all ORM phases. He or she is responsible for functions such as: managing relationships between experts and initiators, designing ORM procedures, promoting ownership, presenting information to the group, selecting and preparing appropriate methods and tools, structuring tasks and focusing the group on the need for an accurate estimation of an organization's exposure to operational risk. See also chapter three.
Step six: choose the method and tools The sixth step aims to choose the right method, in combination with supporting tools, and to map them in such a way that synergy is created. A method can consist of an explicitly designed process of sequential interrelated activities and facilitation recipes. Supporting tools can consist of a group support system that
supports the communication between the experts and their cognitive tasks. Tools can also consist of functions such as word processing, calculation or presentation. The choice of the method and tools depends on the context, the objectives and the setting in which the ORM exercise takes place. Following Herik (1998) and Clemen & Winkler (1999) we suggest using simple methods and tools because this increases the experts' understanding, eases implementation, reduces the frequency of mistakes, and is less expensive. Moreover, it requires fewer skills from the experts. We have chosen not to prescribe explicit methods and tools for specific activities; rather, we suggest analyzing the situation at hand before deciding. We acknowledge that the choice of a specific recipe is not always easy; it depends on a myriad of activity characteristics and the task involved.
Step seven: try out the exercise This step aims to find out if the design is appropriate to reach the objectives and solve the problems defined in the understanding and design phase. We suggest trying out the entire operational risk management exercise when possible. This should include a walk-through of all the activities in the identification, assessment, mitigation and reporting phase. If this is not possible, e.g. due to scheduling problems, then dry-run the most critical activities in the identification, assessment and mitigation phase with the initiator and several experts. Special attention should be paid to the specific context and objectives of the exercise, the clarity of the ORM process, the steps to be followed, and the scales used for estimation of the frequency and impact of operational risks.
Step eight: train the experts This step aims to make the experts familiar with the operational risk management exercise and/or project. It is recommended to execute this step with all the experts present at the same time and place. In the training session, attention should be paid to the context and objectives of the operational risk management exercise. The experts should be presented with a brief theoretical overview of ORM and its importance, the ORM process that will be followed, the roles and responsibilities, the method and tools used for the exercise, how the results will be processed and used, and the feedback and reporting. The experts should commit in advance to the entire process, methods and tools used for the exercise. The way of working for the preparation phase is summarized in Table 5-14.
Table 5-14: summary of the preparation phase
1. Determine the context and objectives. Points to consider: business process under investigation; organizational level where ORM takes place; stakeholders (broadly). How: checklists, interviews, study documentation, select relevant subjects that matter. Who: management, initiator, facilitator.
2. Identify the experts. Points to consider: required substantive and process knowledge; gender, number of experts, internal politics. How: interviews. Who: initiator, facilitator.
3. Select the experts. Points to consider: variables such as expertise, availability, and reputation; costs/benefits. How: interviews. Who: initiator, facilitator.
4. Assess the experts. Points to consider: performance variables. How: interviews, checklists, documentation. Who: initiator, facilitator.
5. Define the roles. Points to consider: tasks, activities, responsibilities, substantive and process knowledge, objectivity, assurance, commitment. How: analysis, interviews. Who: initiator, facilitator.
6. Choose the method and tools. Points to consider: complexity of activities; required resources; required skills for tools and techniques. How: interviews. Who: experts, facilitator.
7. Try out the exercise. Points to consider: specific context and objectives; goal and problems; scales used for frequency/impact. How: group wise, using method and tools. Who: initiator, experts, facilitator.
8. Train the experts. Points to consider: specific context and objectives; ORM process, roles and responsibilities, method and tools, processing of the results, feedback and reporting. How: group wise, using method and tools. Who: initiator, experts, facilitator.
Risk identification phase The aim of the risk identification phase is to provide a reliable information base, which provides the input for an accurate estimation of the frequency and impact of operational risks in the risk assessment phase. In this phase we discuss the following steps: identify the operational risks, categorize the operational risks, and perform a gap analysis. These steps may require several iterations. Each step aims to minimize inconsistency and bias. These steps are summarized in Table 5-15.
Step nine: identify the operational risks The aim of this step is to identify the operational risks that are relevant to the predefined context and objectives. An explicitly designed process of sequential interrelated activities, combined with standard facilitation recipes and GSS tools, should be used to support this step
to increase the chance of a comprehensive identification and decrease the likelihood that an unidentified risk becomes a potential threat to the institution. We suggest using two different roles to execute this step. First, a facilitator who is experienced with the issues in the project, ORM, utilizing expert judgment, and facilitation can be used to guide the experts through the process. Second, a substantive expert, who is familiar with the domain, acts as an advocate when interaction about the content is necessary. Based on our literature review, we recommend starting with identifying events anonymously. When identifying the events, the facilitator and substantive expert should present the information to the experts using a small number of relevant, important cues and clear, brief and balanced wording to help the experts in identifying the events. When the number of cues cannot be limited, two recommendations can be made. First, decompose a complex task into smaller, simpler tasks. The aim of decomposition is to improve the reliability of information processing and to limit the mistakes caused by distraction by irrelevant cues. We expect to benefit most from decomposition when uncertainty in a task is high, for example when experts need to be accurate in describing an operational risk, see chapter three. Second, use computers to support the experts in the processing of information. For example, when an operational risk event needs to be identified, an expert receives a high number of cues. Computers can help experts not to miss important cues by providing communication support, see chapter three. When the events are identified, the causes of these events can be explored in a similar manner and framed together with an event in a clear formulation of an operational risk. Based on the Delphi method, see e.g. Linstone and Turoff (1975), we expect that three structured rounds will be enough. For assessment purposes it is essential that all experts interpret the identified operational risks in the same way; the descriptions should not leave room for different interpretations. To establish this, we recommend eliciting multiple experts group wise, e.g. in a face-to-face manual workshop or a computer-supported workshop, see chapter one. We expect that group-wise elicitation will enable a shared understanding, clear insight, and commitment towards the outcome.
Step ten: categorize the operational risks This step is used to categorize the operational risks. Categorization is important for assessment, mitigation and reporting purposes. We suggest categorizing operational risk based on its causes
rather than on its effects, using an explicitly designed process of sequential interrelated activities supported with facilitation recipes and GSS tools to speed up this activity. There are three reasons for categorization (Carol, 2000). First, it enables the facilitator to provide a frame of reference to the experts when they need to assess the operational risks. Second, it enables a financial institution to take control measures aimed at the causes of operational risk. Third, it complies with regulatory standards such as the event types suggested by Basel II, see e.g. (BCBS, 1998; Brink, 2002).
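As a simple illustration of cause-based categorization against the Basel II event types, the sketch below maps identified risks to a predefined category list and groups them for assessment and reporting. The risk descriptions and the mapping itself are invented examples.

```python
# Minimal sketch: categorize identified operational risks into predefined
# categories, here the seven Basel II level-1 event types. The risks and the
# mapping are hypothetical examples agreed in a workshop.

BASEL_II_EVENT_TYPES = [
    "Internal fraud",
    "External fraud",
    "Employment practices and workplace safety",
    "Clients, products and business practices",
    "Damage to physical assets",
    "Business disruption and system failures",
    "Execution, delivery and process management",
]

categorized_risks = {
    "Policy data entered incorrectly by back office": "Execution, delivery and process management",
    "Claims system unavailable during peak hours": "Business disruption and system failures",
    "Intermediary submits falsified claims": "External fraud",
}

# Group risks per category for assessment, mitigation and reporting purposes.
per_category = {event_type: [] for event_type in BASEL_II_EVENT_TYPES}
for risk, event_type in categorized_risks.items():
    per_category[event_type].append(risk)

for event_type, risks in per_category.items():
    if risks:
        print(f"{event_type}: {len(risks)} risk(s)")
```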
Step eleven: perform a gap analysis This step is used to compare the identified operational risks with the relevant internal and external loss data, which is found in the understanding phase. A detailed comparison by several experts may be difficult due to the issues with using loss data, see chapter one. Moreover, expert judgment is often utilized in situations for which additional data is required. However, when it is possible for the experts to make a comparison, there can be two possible consequences: (1) the identification of operational risk has to be reconsidered, and (2) the understanding phase has to be reviewed. Therefore, we advise involving the initiator and a substantive expert in the identification of operational risks to find out quickly if the ORM exercise meets the expectations. A spreadsheet with loss data or an internal loss database can be used to execute this step. The way of working for the risk identification phase is summarized in Table 5-15.
Table 5-15: summary of the risk identification phase
9. Identify the operational risks. Points to consider: facilitation skills; substantive expertise; identification of events; causes of the events; clear description of operational risk. How: group wise, using a well-defined process in combination with a method and GSS. Who: initiator, experts, facilitator.
10. Categorize the operational risks. Points to consider: pre-defined categories; categories suggested by regulators. How: group wise, using a well-defined process in combination with a method and GSS. Who: initiator, experts, facilitator.
11. Perform a gap analysis. Points to consider: expectations of management and initiator. How: individually with one or two initiators, using a spreadsheet or loss database. Who: initiator, facilitator.
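Returning to step eleven, the sketch below illustrates a minimal gap analysis between the identified risk categories and the categories that actually appear in an internal loss spreadsheet or database. The category sets are hypothetical examples.

```python
# Minimal sketch of step eleven: compare identified risk categories with the
# categories present in internal loss data. All data is hypothetical.

identified_categories = {
    "Execution, delivery and process management",
    "Business disruption and system failures",
    "External fraud",
}

# Categories observed in the internal loss database / loss-data spreadsheet.
loss_data_categories = {
    "Execution, delivery and process management",
    "Internal fraud",
}

# Losses recorded but not covered by the identification -> reconsider step nine.
missed_in_identification = loss_data_categories - identified_categories
# Identified but without recorded losses -> may still be valid, given scarce data.
identified_without_losses = identified_categories - loss_data_categories

print("Reconsider identification for:", sorted(missed_in_identification))
print("Identified but no recorded losses:", sorted(identified_without_losses))
```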
Risk assessment phase The risk assessment phase aims to provide the manager and initiator with an accurate quantification of the frequency of occurrence and the impact associated with the potential loss of the identified operational risks and control measures. We distinguish between two steps: assess the operational risks and aggregate the results. Each step aims to minimize inconsistency and bias. These steps are summarized in Table 5-16.
Step twelve: assess the operational risks This step is used to quantify the operational risks in terms of their frequency of occurrence and the impact associated with the possible loss. The aim of this step is to arrive at an accurate quantification of the operational risks, see chapter one, section 1.3. We suggest using two important sub-steps. First, assess the absolute level of exposure to operational risk. In this assessment, the experts disregard the existing control measures. Although it might be difficult for the experts to cognitively leave out the existing controls in their assessment, it is expected to provide the initiators and experts with a clear understanding of the business process under investigation. Second, assess the managed level of exposure to operational risk. In this assessment, the experts take into account the existing control measures. We suggest using experts who have substantive knowledge of the existing control measures and operational risks to enable a correct assessment. It should be noted that the form of the assessment, e.g. group wise or individually, can influence the results. For both sub-steps we advise letting the experts assess the operational risks individually to prevent possible inconsistencies and biases from being combined in the assessment results.
Step thirteen: aggregate the results This step is used to aggregate the results, which are derived from the individual assessments. The aim of this step is to arrive at an accurate estimation that provides a financial institution with the input to measure its exposure to operational risk. We advocate using a mathematical method combined with a behavioral method to aggregate the results. Combining these aggregation methods is desirable to reduce errors when there is uncertainty about the situation or about which method is most accurate. First, we suggest using a simple mathematical method because this usually performs better than a more complex method (Clemen & Winkler, 1999; Hulet & Preston, 2000). When there are uncertainties regarding the experts' substantive knowledge, we recommend using an equal-weighted average rule to combine the individual results. We advise using a weighted average rule if the experts have excellent substantive
knowledge. Second, the behavioral aggregation method should be designed explicitly for the information available to the experts. We propose that the facilitator leads the experts through a discussion of the available information. The objective of this discussion should be sharing information on the variable of interest. The experts should be encouraged by the facilitator and substantive expert to provide the rationales behind their individual assessments. This will help to clarify the substantive issues. The way of working for the risk assessment phase is summarized in Table 5-16.
Table 5-16: summary of the risk assessment phase
12. Assess the operational risks. Points to consider: relevant existing controls; substantive and process expertise; scale used for the frequency; scale used for the impact. How: individually, using a well-defined process in combination with a method and tool. Who: initiator, experts, facilitator.
13. Aggregate the results. Points to consider: complexity of the method; substantive expertise; structure of the interaction. How: using a mathematical and behavioral aggregation method. Who: experts, facilitator.
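For the mathematical side of step thirteen, the sketch below combines individual frequency and impact scores with either an equal-weighted or a weighted average. The scores (on five-point scales, as used in the case studies) and the weights are invented for illustration; the weights could, for example, come from the expert assessment in step four.

```python
# Minimal sketch of the mathematical aggregation in step thirteen: combine
# individual expert assessments with an equal-weighted or a weighted average.
# Scores (1-5 scales for frequency and impact) and weights are hypothetical.

assessments = {                      # per expert: (frequency, impact)
    "expert_A": (3, 4),
    "expert_B": (2, 5),
    "expert_C": (4, 3),
}

# Performance-based weights, e.g. derived from the seed questions in step four.
weights = {"expert_A": 0.5, "expert_B": 0.3, "expert_C": 0.2}

def aggregate(assessments, weights=None):
    """Return the (frequency, impact) average, weighted if weights are given."""
    if weights is None:
        weights = {name: 1.0 / len(assessments) for name in assessments}
    frequency = sum(weights[name] * f for name, (f, _) in assessments.items())
    impact = sum(weights[name] * i for name, (_, i) in assessments.items())
    return round(frequency, 2), round(impact, 2)

print("Equal-weighted:      ", aggregate(assessments))
print("Performance-weighted:", aggregate(assessments, weights))
```

The equal-weighted call reflects the advice for situations with uncertain substantive knowledge; the weighted call reflects the case where the experts' performance is well established.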
Risk mitigation phase The risk mitigation phase aims to mitigate the operational risks that, after assessment, still have an unacceptable level of frequency and/or impact. We distinguish between three steps: identify alternative control measures, re-assess the residual operational risks and aggregate the results. The latter two are for a large part equal to the steps described above. Therefore only the differences are discussed. These steps are summarized in Table 5-17.
Step fourteen: identify alternative control measures This step is used to mitigate the operational risks that are not sufficiently managed by the existing control measures. Identifying and applying alternative control measures can mitigate such operational risks. Each operational risk to be mitigated needs its own specific set of control measures. To achieve this, several suggestions can be made. First, use an existing control framework, database or benchmark study to check for relevant control measures. Second, match the relevant controls to each operational risk by using experts who have substantive knowledge of the controls and operational risks to enable a correct matching. If a control framework does not exist or is insufficient, we advise identifying mitigating control measures that stay within a reasonable relationship to the expected losses, e.g. the cost of the controls should not exceed the expected loss. For this identification we recommend using a facilitator and a substantive expert to guide the elicitation process.
Step fifteen: re-assess the residual operational risks This step has similarities with step twelve and is used to quantify the residual operational risks in terms of their frequency of occurrence and the impact associated with the possible loss. The difference is that in this step, for the re-assessment, the experts need to take into account both the existing control measures and the alternative, not yet implemented, control measures simultaneously. We expect that this is a difficult task for the experts. Therefore, we suggest decomposing the task into smaller, more manageable tasks. For example, a paired comparison of the alternative controls can be made by the experts using realistic criteria, e.g. effectiveness, efficiency and applicability. The results of this sub-activity might simplify the re-assessment task. For re-assessment we suggest using experts who have substantive knowledge and process knowledge. We expect that substantive knowledge is needed in the re-assessment with respect to the existing control measures, the alternative control measures and the operational risks. Process knowledge is expected to enable a correct assessment. For re-assessment we advise letting the experts re-assess the operational risks individually to prevent possible inconsistencies and biases in the re-assessment results.
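As a sketch of the paired-comparison sub-activity mentioned above, the snippet below counts pairwise "wins" to rank alternative controls before the re-assessment. The controls and the experts' pairwise preferences are hypothetical examples; in practice the preferences would be elicited against criteria such as effectiveness, efficiency and applicability.

```python
# Minimal sketch: rank alternative control measures by counting pairwise wins.
# Controls and the experts' pairwise preferences are hypothetical examples.

from itertools import combinations

controls = ["four-eyes check", "automated input validation", "weekly reconciliation"]

# For each pair, the control the experts preferred overall.
preferences = {
    ("four-eyes check", "automated input validation"): "automated input validation",
    ("four-eyes check", "weekly reconciliation"): "four-eyes check",
    ("automated input validation", "weekly reconciliation"): "automated input validation",
}

wins = {control: 0 for control in controls}
for pair in combinations(controls, 2):
    wins[preferences[pair]] += 1

for control, score in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{control}: {score} pairwise win(s)")
```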
Step sixteen: aggregate the results This step is similar to step thirteen and is used to aggregate the results, which are derived from the individual re-assessments. For a discussion we refer to step thirteen. The way of working for the risk mitigation phase is summarized in Table 5-17.
Table 5-17: summary of the risk mitigation phase
14. Identify alternative control measures. Points to consider: relevant alternative controls; substantive and process expertise. How: group wise, using a well-defined process in combination with a method and GSS. Who: initiator, experts, facilitator.
15. Re-assess the residual operational risks. Points to consider: relevant existing controls; substantive and process expertise; scale used for the frequency; scale used for the impact. How: individually, using a well-defined process in combination with a method and tool. Who: initiator, experts, facilitator.
16. Aggregate the results. Points to consider: complexity of the method; substantive expertise; structure of the interaction. How: using a mathematical and behavioral aggregation method. Who: experts, facilitator.
Reporting The reporting phase aims to provide the manager, initiator and experts with the relevant information regarding the ORM exercise. It is important to note that this phase should not be
confused with the understanding phase, wherein the requirements for reporting purposes are defined. We distinguish between two steps: document the results and provide feedback to the experts. These steps are summarized in Table 5-18.
Step seventeen: document the results This step is used to formally document the results of the ORM exercise. The aim of this step is to present the managers and initiators with all the relevant information and data in a formal report. This report should help to communicate the results to higher management levels. We suggest basing the report, at minimum, on the requirements set by the managers and initiators. These requirements should be made clear in the preparation phase. In this step it is recommended that the facilitator checks the latest relevant reporting standards as issued by several financial service authorities, discusses them with the managers and initiators, and applies them appropriately.
Step eighteen: provide feedback to the experts This step is used to provide feedback to the experts. The aim of this step is to enable experts to leverage the experiences gained and to maintain continuity. We advise presenting and discussing the results group-wise. We also recommend sending the formal document to each expert. The way of working for the reporting phase is summarized in Table 5-18.
Table 5-18: summary of the reporting phase
17. Document the results. Points to consider: relevant reporting standards; substantive and process expertise. How: individually. Who: manager, initiator, facilitator.
18. Provide feedback to the experts. Points to consider: scale used for the frequency; scale used for the impact; complexity of the method; substantive expertise; structure of the interaction. How: group-wise, individually. Who: initiator, facilitator.
5.3. Way of modeling The way of modeling concerns the modeling techniques used to construct models in the methodology we follow, see Figure 5-8. Several modeling techniques are used within the field of utilizing expert judgment in operational risk management, see e.g. (Bigün, 1995; Cruz, Coleman et al., 1998; Bier, Haimes et al., 1999; Cruz, 2002; Hoffman, 2002; Chappelle, Crama et al., 2004). Many of these models focus on modeling the output of expert judgment activities such
as extreme values, severity models, frequency models, and causal models, see e.g. (Cruz, Coleman et al., 1998; Medova & Kyriacou, 2001). Because these models often have a quantitative nature, we argue that they are not very suitable for dealing with situations that are characterized by multiple experts, complex technology and a bounded rationality view. Rather, we need modeling techniques that support the modeling of the process and activities of the understanding phase and the design phase. A distinction can be made between conceptual and empirical models (Sagasti & Mitroff, 1973; Mitroff, Betz et al., 1974).
• Conceptual models are characterized by high levels of abstraction and fuzziness and help us to structure perception, representation and reasoning regarding a problem situation (Sol, 1982). Moreover, they can be used as a vocabulary or a vehicle of communication (Bots, 1989; Cross, 2000).
• Empirical models enable us to analyze and diagnose a problem situation and find possible solutions. Additionally, they are formalized representations of reality and capture more detail and time-ordered dynamics of interdependent activities (Sagasti & Mitroff, 1973).
Following the above, we propose to use models with different degrees of abstraction to support the understanding and design phase, see Figure 5-9. From an immense collection of modeling techniques we choose a small subset. We propose to use visual models to support structuring the problem situation (Baron, 1994; Simons, 1994). Visual models can be used for clarification and illustration of the problem situation. An example of a visual model is presented in chapter two: the research outline. We propose to use activity diagrams and sequence diagrams, which are available in the Unified Modeling Language (UML), to support analyzing, diagnosing and finding possible solutions. UML is a widely accepted, graphically oriented standard representation technique. Activity diagrams, whose core element is the activity, can be used to describe a sequence of activities such as identifying events or assessing operational risks, see e.g. (Bots, 1989; Verbraeck, 1991; Wijers, 1991). Sequence diagrams can be used to describe how experts collaborate. These diagrams model dynamic aspects of the system and emphasize the sequence or order in which activities take place, see e.g. (Haan, Chabre et al., 1999; Versteegt, 2004).
5.4. Way of controlling The way of controlling describes the control of the way of working and the models that we use to design a process to improve operational risk management. This implies a project
management approach involving e.g. project design, checkpoints, documentation, decisions that have to be made and time management (Turner, 1980a; Eijck, 1996). We advise using a widely accepted project management approach, for example PRINCE2, see e.g. (Turner, 1999b; Akker, 2002; Onna & Koning, 2004). Following Sol (1992) we advise using a 'middle-out' and incremental point of view in carrying out the way of working and the modeling process. In this way, the focus is on a small but critical part of the financial institution that is recognizable. This facilitates quick feedback, which is useful for learning and the quality of the design, and it helps to strengthen management and employee support for the change process. The steps that we presented in this chapter are not carried out sequentially; rather, they are carried out using this middle-out and incremental point of view. We suggest treating the way of working and the way of modeling as a combined task wherein the manager, initiator and facilitator are responsible for various activities at different times. For examples of the roles, see the column 'who' in the tables presented in this chapter. We further advise using an adaptive control strategy to facilitate learning for both the facilitator and the experts of the client organization (Meel, 1994). The facilitator should be motivated to bring forward his or her knowledge with respect to techniques, tools, methods and possible alternatives. The experts should be motivated to bring forward their views and knowledge of the organization, business process, bottlenecks and possible improvements.
5.5. Evaluation of MEEA When we apply MEEA in practice, we need an evaluation framework to help us understand the research findings of our test case studies. Moreover, this helps us to evaluate MEEA. We structure our evaluation following the Input, Process, Outcome (IPO) framework of Nunamaker, Dennis et al. (1991) to evaluate the improvements made by MEEA to operational risk management. Further, we aim to use survey questions that have previously been applied in financial institutions and as such have a close relation to our research project. We do this to be able to compare our research findings and observations to other studies in the financial service sector, expert judgment studies and Group Support Systems (GSS) studies, see Figure 5-10. Below, we discuss the data sources and the method of analysis used in this research.
Figure 5-10: evaluation framework (Nunamaker, Dennis et al., 1991)
5.5.1. Data sources During this research project, we collect both quantitative and qualitative data from our test case studies to enable a rich presentation on each construct. Following Wijk (1996a; 1996b), we use the following specific data collection instruments to analyze the results from the test case studies:
• survey: managers, initiators and all experts are asked to fill out a survey to measure their perception of a number of constructs related to the ORM session;
• interview: the managers and initiators are interviewed before and immediately after the sessions; in addition, a number of experts are interviewed by telephone;
• expert estimation: the initiators and experts are asked to give estimations with respect to the construct outcome before each ORM session;
• direct observation: during the preparation, planning, and execution of each session, the researchers make notes of all the events they consider important to this research;
• system logs: we use the electronically stored results of each ORM session to reconstruct the session tasks in great detail, track the time spent on each task, and record other important events.
5.5.2. Method of analysis Following Nunamaker, Dennis et al. (1991) and Dennis & Gallupe (1993) we consider the following constructs to be important for evaluating improvements made to operational risk
management. The constructs reflect important aspects in operational risk management, expert judgment and GSS meetings. The constructs and the data recorded on each construct are briefly presented in Table 5-19. We use a quantitative and a qualitative analysis to get indications of the improvements made by MEEA to operational risk management.
Table 5-19: constructs used for evaluating improvements to operational risk management
Context
• Importance: context and the level of importance of the problem that was addressed
Team
• Demographical: size, gender, average age, work experience, experience with GSS
• Composition: suitability of the group for reaching the goals
• Collaboration: collaboration with other group members
Technology
• Anonymity: extent to which the ability to enter data anonymously was valued and functional
Process
• Structure: enough time spent on important topics
• Involvement: extent to which participants are involved and feel encouraged to participate
• Interaction: extent to which participants react to each other and collaborate
• Facilitation: extent of influence of the facilitator on the group, level of knowledge
Outcome
• Effectiveness: extent to which the outcomes of a session fit with the planned outcomes
• Efficiency: extent to which the session time was actually used for achieving the outcomes
• Satisfaction: satisfaction with outcome, satisfaction with process
A number of questions are used to measure each construct. The questions are presented to the experts on an ordinal scale with three categories, ranging from most negative, through equal, to most positive. The questions used to measure the constructs are all originally in Dutch and based on (Wijk, 1996a; Wijk, 1996b). To measure the constructs context, technology, process, and outcome effectiveness and efficiency, we use the questions presented in (Vreede & Wijk, 1997a), who built on Wijk's research. To measure the constructs outcome effectiveness and efficiency from the participants' point of view, we used questions presented in the same study and a risk management GDR evaluation questionnaire. To measure satisfaction with outcome we used the first three questions presented in (Briggs, Reinig et al., 2003) and the last question from the short version of the general meeting assessment survey (Reinig, Briggs et al., 2003). To measure satisfaction with process we used four questions presented in (Briggs, Reinig et al., 2003) and one question presented in (Reinig, Briggs et al., 2003).
Quantitative analysis of the results We designed the flow of activities presented in Figure 5-11 to quantitatively analyze the results from the test case studies. First, we verify whether more than one question is used to measure the underlying construct. When two or more questions are used to measure the same underlying construct, we use the Cronbach's Alpha (CA) test to investigate whether the questions are reliable enough to measure the same underlying construct ('t Hart, Van Dijk et al., 1998). The CA test is a measurement of how well a set of questions measures a single unidimensional latent construct. When data have a multidimensional structure, the CA will usually be between 0.70 and 0.80 (UCLA, 2002). We have set the threshold for CA to 0.80 to test the inter-item correlation because it is an acceptable value in most social science studies. There are then two options to consider. Option 1: if CA is higher than 0.80, the inter-item correlations are high, and there is an indication that the questions are measuring the same underlying construct. This means that there is high reliability, because the items measure a single unidimensional latent construct ('t Hart, Van Dijk et al., 1998; Vocht, 1999). Option 2: if CA is equal to or lower than 0.80, we consider the questions not reliable enough to measure the same underlying construct and test the questions individually. We use the independent Kolmogorov-Smirnov (KS) test to analyze the normality of the results for both options (Vocht, 1999). Following Vocht (1999) and UCLA (2002) we consider that if the Asymp Sig (1-tailed) value is lower than 0.025, there is an indication that the construct does not have a normal distribution; otherwise there is an indication that the construct does have a normal distribution. The KS test results are presented with the indicator Pks below the tables in both test cases. In the case of a non-normal distribution and a sample size larger than thirty, a T-test can be used, see the dotted-line arrow n>30 in Figure 5-11. However, we choose to use the Wilcoxon test because our sample size was only 34 and in this case we need to be more careful (Darlington, 2002). Depending on whether or not there is a normal distribution, either the Wilcoxon test or the T-test is used to test whether H0 can be accepted or rejected. The Wilcoxon test results are presented with the indicator Pwx and the T-test results with the indicator Pt. Following 't Hart (1998), Vocht (1999) and Darlington (2002), H0 is formulated as 'there is no significant difference between the contemporary situation and the improved situation', and H1 as 'there is a significant positive improvement compared to the contemporary situation'. Either way, if the Asymp Sig (1-tailed) value is lower than 0.025, H0 can be rejected. Then, the conclusion is that the construct or the individual question is significantly improved compared to the contemporary situation. Moreover, we tested whether the average value on the
ordinal scale is larger than 'being equal'. If this is the case, we also have a strong indication that there is a significant positive improvement.
Figure 5-11: flow of activities used to quantitatively analyze the results from the test case studies
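The decision flow of Figure 5-11 can be sketched in code as follows. This is a minimal illustration in Python with numpy and a recent scipy; the survey responses are randomly generated stand-ins, and coding the ordinal answers as 1 (most negative), 2 (equal) and 3 (most positive) is an assumption made only for this example.

```python
# Minimal sketch of the quantitative analysis flow in Figure 5-11.
# Responses are hypothetical; ordinal answers are coded 1, 2, 3.

import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: respondents x questions matrix for one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 4, size=(34, 3)).astype(float)  # 34 experts, 3 questions

alpha = cronbach_alpha(responses)
if alpha > 0.80:
    scores = responses.mean(axis=1)   # questions measure one construct
else:
    scores = responses[:, 0]          # otherwise analyze questions individually

# Normality check: KS test against a normal with the sample's mean and std.
_, p_ks = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))

# H0: no improvement over 'being equal' (= 2 on the ordinal scale).
if p_ks < 0.025:                      # not normally distributed -> Wilcoxon
    _, p_value = stats.wilcoxon(scores - 2, alternative="greater")
else:                                 # normally distributed -> one-sample T-test
    _, p_value = stats.ttest_1samp(scores, 2, alternative="greater")

print(f"alpha={alpha:.2f}, p={p_value:.3f}, reject H0: {p_value < 0.025}")
```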
Qualitative analysis We use qualitative data sources such as observations made by the researcher and interviews to elaborate on the quantitative results from the survey, see also chapter two and section 5.5.1. Qualitative information will help us to enrich our quantitative analysis by providing a more holistic insight into the dynamics between the constructs and variables used in this research ('t Hart, Van Dijk et al., 1998; Swanborn, 2000). Further, using qualitative data enables us to describe the phenomena of interest and then compare and contrast them with the existing literature and with the contemporary situation. Moreover, we use our qualitative data in combination with our quantitative data to obtain indications of causal relationships between the constructs used in this research, see Figure 5-10.
Data! Data! Data! I cannot make bricks without clay. Sherlock Holmes
6. Empirical Testing of MEEA at Ace Insure
The concepts of MEEA, which are presented in chapter five, are tested using two case studies. We selected these case studies based on the criteria discussed in chapter two. The main theme of the case studies is to test our approach MEEA. The second case study was carried out at Inter Insure; in this case study, described in chapter seven, MEEA is applied to design, execute and evaluate an ORM process for Inter Insure. The first case study, described in this chapter, is aimed at evaluating MEEA using the constructs presented in chapter five. This case study was carried out at Ace Insure, a department of a business unit, which in turn is part of Bank Insure, a large financial institution. This first case was performed between February 2002 and August 2002 and has two main goals: first, to apply MEEA to design an ORM process for Ace Insure; second, to evaluate MEEA. This case study is described in a linear fashion for readability purposes, using the way of working described in chapter five. However, in reality certain activities are performed in parallel and iteratively.
6.1. Understanding Ace Insure
Step one: conceptualize the problem situation To conceptualize the problem situation at Ace Insure, we interviewed two initiators in a series of three meetings. The contemporary situation regarding the work process and support was equal to the situation described in chapter four. Moreover, Ace Insure is a department of a business unit, which is in turn part of Bank Insure, the financial institution described in chapter four. Therefore, we only address the issues that were different in this first case. We respectively address the description of the organization, the current problem issues, the ORM standards used, potential stakeholders, potential interruptions and the goal description.
The description of the organization
We refer to chapter four for a more elaborate description of Bank Insure. Bank Insure is divided into several business units, each responsible for a special kind of service. Ace Insure is a department of one of these business units and is selected for our first test case. Ace Insure insures individuals against loss of income should they become unfit for work. This insurance product is also referred to as workmen's compensation insurance for individuals.
The current problem issues
Ace Insure operates in a competitive market and has to deal with a number of problems. Despite the strong reputation of Ace Insure, several competitors have entered the market. These competitors sell their insurance policies directly to the customer, offer lower prices and respond faster than Ace Insure. It seemed that Ace Insure suffered losses because their operational processes were not performing as well as the processes of their competitors. Ace Insure's management therefore decided to initiate an ORM project to investigate the performance of their primary operational processes. However, at the time it was not precisely clear to management how best to achieve this.
ORM standards
When performing ORM, Ace Insure wanted to use two main parts of the Australian/New Zealand Standard AS/NZS 4360:1999 (ANZ), see chapter five. The first part was the generic guidelines for establishing and implementing ORM processes. The second part was a 5x5 matrix that classified the frequency of occurrence and the impact on a five-point scale. Ace Insure further wanted to comply with the standards provided by the Basel Committee on Banking Supervision. The committee formulates broad supervisory standards and guidelines and is part of the BIS, see chapters four and five. These standards can be found in BCBS (2003b).
Possible stakeholders
Together with Ace Insure's management the following possible groups of stakeholders were identified: Bank Insure's management, group ORM (GORM), Ace Insure management, initiators and experts. The management of Ace Insure reports the performance to the financial institution's management. The stakeholder GORM is a corporate and function-specific organization that focuses on the development, implementation and rollout of operational risk management for Bank Insure.
Potential interruptions
Several interruptions were identified. The first interruption was that several important experts could possibly not attend the planned training session due to a holiday or another business meeting. The second interruption was that two experts from Corporate Audit Services (CAS) wanted to participate in the ORM sessions. According to the Ace Insure initiators this could be an interruption because CAS had written a negative audit report about Ace Insure that resulted in a low rating. They furthermore argued that the presence of CAS experts during the ORM sessions could negatively influence the other experts.
Goal description
The goal descriptions of the stakeholders varied. It was therefore important for all stakeholders to formulate the goals before the ORM sessions. Together, they formulated three goals. One, develop a shared plan for Ace Insure to improve their performance on four themes: client focus (growth), operational excellence (cost management), return on investment (risk and damage management), and continuity (solvency). Two, raise the low rating to a higher rating. Three, evaluate the use of Group Support Systems, see (Grinsven & Vreede, 2002b).
Step two: specify the problem situation The specification of the problem situation was carried out by investigating the following documents: the income report Ace Insure 2001, the development of Ace Insure in the first quarter of 2002, the bad cells investigation 2002, the results of the quality research / low rating from CAS, and the annual account. All these documents were classified. Additionally, we interviewed three managers and four experts, examined a PowerPoint presentation of the main goals and studied four internal memos. To specify the problem situation we respectively address the motive, the perceived problems, the overall goals, performance indicators, model reductions and the collection of input data.
The motive
At the time of this first test case, Ace Insure's return on investment and market share were under heavy pressure. In a period of just five months Ace Insure had dramatically lost market share to competitors and seen an enormous decrease in its return on investment. On top of this, CAS gave the department a low rating, which means that CAS considered the operational risks in relation to the performance to be poor. The motives to start an ORM investigation are the decreased return on investment and the low rating from CAS. The motives can be
compared with the motives mentioned in chapter four: a signal from the business unit and a request by management.
The perceived problems
The problems of the contemporary ORM process were identified by two managers and two experts, see Table 6-20. After identification, the problems were categorized into four predefined categories: (1) preparation of the ORM sessions, (2) the process that is followed with the experts in the ORM sessions, i.e. risk identification, assessment and mitigation, (3) the outcome of the ORM sessions, and (4) support such as methods and tools.
Table 6-20: perceived problems
Category Preparation Process
Outcome
Support
Description of the perceived problem Determining the goals and scope of the ORM project is time consuming Desk research is time consuming and difficult because relevant loss data is lacking The process to identify and assess operational risks is not clear to the expert participants CAS interviewers experience difficulties not to get caught up with details and therefore often miss out the important operational risks Ace Insure’s management and do not share the outcome of the findings by CAS Expert participants believe the final report is biased by CAS Employees do not support the recommendations made by CAS Ace Insure’s management is not able to take effective decisions based on the CAS report The results are not applicable to the daily practice The overall quality of the results is low It is difficult to guide the process of interviewing managers and expert participants Integrating all the results of the interviews with MS word / Excel is a burden Reporting takes too long to respond timely to possible operational risk
The overall goals
The overall goal, formulated by Ace Insure's management, GORM and two experts, is to regain return on investment and increase market share. There are three goals: (1) identify and assess the goals, operational risks and control measures for Ace Insure, (2) reach consensus and focus at Ace Insure's management level about the operational risks, control measures, desired direction of change and responsibilities, and (3) comply with the corporate governance guidelines used by Bank Insure.

Performance indicators
In this first test case, performance indicators serve as a substitute to deduce the performance of the contemporary operational risk management process and to compare this performance with the improved process. We used the constructs described in chapter five as performance indicators in this case study.

Model reductions
Two model reductions were made to reduce the complexity at Ace Insure and to arrive at a model that corresponded most closely to the essential characteristics of the contemporary situation at Ace Insure. First, we chose not to build a process model of the preparation phase, since our primary interest was in utilizing multiple experts' judgment in the risk identification, assessment and mitigation phases. Second, we chose not to model the reporting phase, as Ace Insure only needed the output results from the ORM sessions.

Collecting input data
Input data was collected using expert estimations, see chapter five. We used ex-ante and ex-post interviews with initiators to collect data about the efficiency of the contemporary ORM situation with respect to man-hours and throughput time, see also (Grinsven & Vreede, 2002b). We used this data to compare the efficiency of the contemporary situation with the improved situation. Further, Ace Insure executes its primary and secondary operational risk management activities similar to the situation described in chapter four. The section 'Results and discussion' elaborates on this.
Step three: validate the problem situation The problem situation was validated by asking three Ace Insure managers and one expert, who were familiar with the situation, if our description of the situation reflected their reality. Moreover, a cost offer was prepared for Ace Insure by researchers of the faculty Technology, Policy and Management (TPM), section Systems Engineering. Included in this cost offer were e.g. the goals, approach, deliverables, planning, research activities and budget. Ace Insure approved this offer.
6.2. Designing ORM processes for Ace Insure
In this section, we present the design phase following MEEA, see chapter five.
Preparation phase
This phase describes the following steps: determine the context and objectives, select the experts, assess the experts, define the roles, choose the method and tools, try out the exercise and train the experts.
Step one: determine the context and objectives
The context and objectives were already for a large part described in the understanding phase. In this step we emphasize those aspects that are particularly important to ensure that the experts identify, assess and mitigate operational risks within the context and objectives set by the managers and initiators. The Ace Insure context and objectives were determined by the facilitator and two initiators and consisted of three aspects. The first aspect consisted of determining the main Ace Insure objective. This objective was important because it helped to focus the experts, and it was explained and discussed with them before the ORM sessions started. The main objective was to identify, assess and mitigate the operational risks for Ace Insure in the themes client focus, return on investment (ROI), operational excellence (OE) and continuity, see Figure 6-12. The main objective was made more practical for the experts by presenting them a figure with several examples for each theme. For reasons of confidentiality only two made-up examples, e.g. image and satisfaction, are depicted for each theme.
Figure 6-12: operational risk themes (client focus: image, satisfaction; return on investment: cost reduction, volume; operational excellence: quality, organization; continuity: employees, processes)
The second aspect consisted of a further delineation of the objectives, which were stated by GORM. These objectives were explained and discussed by Ace Insure managers with the experts before the ORM sessions started. First, the outcome of the ORM sessions should enable the development of a shared plan for Ace Insure to improve the main Ace Insure objective. Second, the outcome of the ORM sessions should lead to an improvement of the low rating to a higher rating. Third, the experts were asked to fill out questionnaires after the scheduled ORM sessions, which were used to test Group Support Systems, see (Grinsven & Vreede, 2002b), and MEEA, see chapter five.
The third aspect consisted of focusing the selected experts on the relevant topics and subjects that mattered to Ace Insure. This helped the facilitator to design and execute the sequence of activities in the ORM sessions. To focus the selected experts in the ORM sessions, four levels were identified at Ace Insure in which operational risks could occur. The levels were detailed with a practical example to clarify each level to the experts, see Table 6-21. A senior manager of Ace Insure presented these four levels to the selected experts before the ORM sessions started.

Table 6-21: levels in which operational risk can occur

Level           Examples
Event           A person who is insured can be silent about an occurred event; segregation of duties is not clear
Indicator       Workloads that become higher; percentage of damage claims
Process         Making offers to clients; accepting clients; monitoring clients
Organizational  Knowledge management; quality assurance
Step two: identify the experts
The initiators made a first broad identification of the experts that were needed for the ORM sessions. The identification was done on the basis of the experts' knowledge, role in the organization, costs of using a group support system, and availability. An attendance list was kept and used for the training session and the two ORM sessions. Eighteen experts, consisting of two females and sixteen males, were identified and invited for the training session and the two ORM sessions. The number and gender of experts were identified using the criteria discussed in step two of paragraph 5.2.2, see chapter five.
Step three: select the experts
Although eighteen experts were identified, only nine experts confirmed that they could participate in the scheduled training session, while the others responded that they could not participate because of a holiday, another business meeting or a day off. Both females and seven males could not participate in the scheduled training session. Three male experts responded that they possibly could not participate in the first ORM session. One male and one female expert responded that they possibly could not participate in the second ORM session. Other criteria such as substantive knowledge and process knowledge were not used to select the experts. For reasons of progress, the initiators decided to start the training session on the planned date and to brief separately the experts that could not participate. Further, it was decided to agree on the
scheduled dates for the first and second ORM session. The final selection of experts consisted of two females and fifteen males. Two of them were from CAS, two were from group ORM and two of them were the initiators. All of them were able to attend both ORM sessions.
Step four: assess the experts
The performance of the experts was not assessed. There were several reasons for this. First, the initiators argued that the costs would be too high. Second, since no relevant loss data was available, it seemed difficult for the initiators to prepare the questions that were needed for the assessment of the experts.
Step five: define the roles
The following roles were defined to accomplish the activities in the ORM process: manager, initiator, expert and facilitator. Two persons fulfilled the role of manager: a manager from Bank Insure and a manager from Ace Insure. The manager from Bank Insure was actively involved in the preparation phase and the work assignment. The Ace Insure manager was responsible for the work assignment and was actively involved in the preparation phase. Two persons also fulfilled the role of initiator. Both persons were employed by Ace Insure and actively participated in all phases of the ORM process. Eighteen persons fulfilled the role of expert. One external expert participated in the design of the ORM process while seventeen experts participated in the execution of the ORM sessions. A researcher from TPM fulfilled the role of facilitator, was involved in all phases of the ORM process and was responsible for the functions mentioned in chapter five plus the outcome of the sessions.
Step six: choose the method and tools
The managers and initiators had to make a choice of method and tools. Microsoft Excel, Word and PowerPoint were chosen for the preparation phase and the reporting phase. Excel was chosen to aggregate the results derived from the experts' assessments because it was faster and simpler to use than the tools in GroupSystems. Word was used for reporting purposes because the layout format of the reporting tool in GroupSystems did not match the reporting standards of Ace Insure and Bank Insure. PowerPoint was used to present information to the experts in the training session and at the start of each ORM session. It was also used during the sessions to visualize the intermediate aggregated results from the risk assessment phase. Email was used to inform and prepare the experts before the start of the ORM sessions. The first rough process draft was prepared by one of the initiators. Then, a first agenda consisting of the
following process steps was made: opening & introduction, risk identification, risk assessment, risk focusing and closure. This draft and agenda were then fine-tuned by the TPM facilitator in a sequence of two meetings with the initiators and one with two persons from GORM, for more details see (Grinsven & Vreede, 2003a). Patterns of collaboration and facilitation recipes were chosen to help the facilitator guide the activities in the phases risk identification, assessment and mitigation. The facilitation recipes were mapped to each activity and used in combination with the software tool GroupSystems workgroup edition / professional suite from GroupSystems.com. See the subsequent sections for the resulting process models and a description.
Step seven: tryout the exercise
The facilitator suggested trying out the ORM exercise by using a 'step-by-step walk through' with two initiators and one expert. During this tryout, one of the initiators found that using GroupSystems could lead to usability problems because not all the experts had prior experience in using such a tool. One expert noted that the scales used for estimating the frequency and impact were not linear. Although the initiators had no intention to change the scales, it was noted that this could make the final results less clear.
Step eight: train the experts
A group-facilitated training session was prepared and scheduled by the facilitator. In this session, the initiators explained and discussed the context and objectives, the importance of operational risk management and the process to be followed with the experts who were able to participate in the meeting. Those who could not participate in the training session were briefed through several emails. The roles and responsibilities, which were discussed in step five, were also made clear to the experts. Further, the initiators explained for whom the results were intended and how the results of the ORM sessions would be used.
Risk identification phase
We used several meetings with the initiators and one expert to design a sequential process of interrelated risk management activities for the risk identification phase. Drawing on the description of patterns of collaboration in chapter three, we chose the appropriate pattern for each activity. Based on the activity and the pattern, we chose the facilitation recipe that was most suitable for that specific activity and our context. We refer to Briggs and Vreede (2001b) for an elaborate overview of facilitation recipes for each pattern of collaboration. Figure 6-13 depicts
the process model symbols, which are based on the first designs of Ace Insure's sessions and the observations made from these sessions (Grinsven & Vreede, 2002b). Figure 6-14 depicts the resulting process model for risk and control identification. We numbered each activity for quick reference. Table 6-22 summarizes the mapping of the pattern and choice of a recipe to each activity. Note that we refer to a facilitation recipe instead of the word thinkLet, see also chapter 3, paragraph 3.3.1.
Figure 6-13: process model symbols (Grinsven & Vreede, 2002b; Grinsven, 2007). The legend distinguishes a pattern of collaboration, a thinkLet, an activity, a compound activity, a decision with its outcome, and the flow direction.
Figure 6-14: process model for risk & control identification (Grinsven & Vreede, 2002b; Grinsven, 2007). The model contains nine numbered activities: (1) determine participants for risk identification, (2) determine (extra) risks for relevant themes (DirectedBrainstorm), (3) define most important risks (FastFocus), (4) categorize risks in appropriate themes (PopcornSort), (5) check placement and clarity of risks (BucketWalk), (6) determine participants for inventory of existing controls, (7) define current controls for 'known' risks (LeafHopper), (8) check and complete current controls for 'known' risks (LeafHopper) and (9) formulate a clear collection of controls per risk (ExpertChoice). A decision point asks whether the risk list 'feels' complete; if not, the process returns to activity 2.
Step nine: identify the operational risks
Activities 1 to 3 were used to identify extra operational risks and to define the most important operational risks. The first activity was used to determine which experts, labeled participants in Figure 6-14, were able to contribute to the identification of operational risks. The second activity was used to identify extra operational risks. For this activity, a DirectedBrainstorm facilitation recipe was used. During the execution of this activity, the facilitator used the four relevant themes, client focus, return on investment, operational excellence and continuity, to trigger the experts. When the second activity had finished, it turned out that the experts had identified 143 extra operational risks, which were then added to the existing CAS list. The third activity was used to
define the most important operational risks. For this activity, a FastFocus facilitation recipe was used. In this activity, each expert had an overview of a subset of all the identified operational risks. Each expert had to abstract or quote and clearly describe the most important risk from this subset to the group. The third activity resulted in 27 clearly described operational risks.
Step ten: categorize the operational risks
Activities 4 and 5 were used to categorize the identified operational risks and to check the clarity of their descriptions. The fourth activity was used to categorize the 27 operational risks into the appropriate themes using the PopcornSort facilitation recipe. We refer to Briggs and Vreede (2001b) for an elaborate overview of facilitation recipes. This resulted in the following categorization: 6 risks for operational excellence, 8 risks for return on investment, 7 risks for client focus and 6 risks for continuity. The fifth activity was executed with the initiators and experts to check whether the categorization of the operational risks for each theme was done correctly and to determine whether some descriptions of risks were still unclear.
Step eleven: perform a gap analysis
A decision was used to perform a gap analysis. The initiators and experts were asked whether they 'felt' that the risk list was incomplete or unclear. If this was the case, the process could start again at the second activity. Note: we refer to Briggs and Vreede (2001b) for an elaborate overview of the facilitation recipes mentioned below. Activities 6 to 9 were used to define and match the existing control measures to each operational risk. Note that this is actually a sub step of step thirteen of the risk assessment phase, see chapter five. All experts were selected in the sixth activity. Using the seventh and eighth activities, two LeafHoppers, the experts identified 85 existing controls. The LeafHopper enabled the experts to work in subgroups. This sped up the process because it enabled the experts to start defining existing controls for those operational risks of which they had the most knowledge. Where needed, the experts added comments. Finally, in the ninth activity, an ExpertChoice facilitation recipe was used to formulate a clear collection of existing controls for each operational risk. This resulted in 19 controls for operational excellence, 22 controls for return on investment, 20 controls for client focus and 24 controls for continuity.
Table 6-22: summary activities risk & control identification process

Activity                                       Pattern of collaboration   Facilitation recipe
1. Determine participants                      -                          -
2. Determine (extra) risks                     Diverge                    DirectedBrainstorm
3. Define most important risks                 Converge                   FastFocus
4. Categorize risk in themes                   Organize                   PopcornSort
5. Check placement and clarity                 Evaluate                   BucketWalk
6. Determine participants for controls         -                          -
7. Define current controls                     Diverge                    LeafHopper
8. Check and complete current controls         Diverge                    LeafHopper
9. Formulate a clear collection of controls    Converge                   ExpertChoice
Risk assessment phase
The design, modeling and choice of facilitation recipes were done in a similar way as for the risk identification phase. We refer to Briggs and Vreede (2001b) for an elaborate overview of facilitation recipes for each pattern of collaboration. To fit the Ace Insure situation, several modifications had to be made to MEEA, which is presented in chapter five. First, to save time and reduce costs, the estimation of the absolute level of operational risk should not be used. Moreover, it would be difficult for the experts to leave out the existing controls because these were already identified. Second, the initiators needed the experts to agree on their estimations as an alternative for Building Consensus, which is advised in chapter five. Third, identifying and matching existing controls could be left out since this was already done in the previous risk & control identification process. Fourth, the initiators wanted to label managed risk as residual risk. Fifth, the initiators were not sure which assessment method to use. Therefore it was decided to use a simple voting tool before the actual MultiCriteria facilitation recipe was used. This enabled the initiators to compare the assessment results from both tools. The simple voting tool was not modeled in Figure 6-15. Figure 6-15 depicts the resulting process model for risk assessment. We numbered each activity for quick reference. Table 6-23 summarizes the mapping of the pattern and choice of a recipe to each activity.
Figure 6-15: process model for risk assessment (Grinsven & Vreede, 2002b; Grinsven, 2007). The model contains three activities: (1) determine participants for estimating residual risk, (2) determine residual risk (MultiCriteria) and (3) target for agreement for risks with a low consensus value (RedLightGreenLight / CrowBar). A decision point asks whether there are risks with a too low consensus value.
Step twelve: assess the operational risks
Activities 1 and 2 were used to estimate the managed level of operational risk. The first activity was used to determine which experts, labeled participants in Figure 6-15, were able to contribute to the estimation of the residual risk. The second activity was used to determine the residual risks in terms of their frequency of occurrence and impact. For this, a five-point scale was used with 1 being insignificant, 2 being minor, 3 being moderate, 4 being major and 5 being catastrophic. A realistic value was attached by the initiators to each number but is not given for reasons of confidentiality. This activity resulted in an estimated value for each residual operational risk. The level of consensus among the experts was calculated using the standard deviation, for which a threshold value was set. A high standard deviation indicated that some of the estimations were extremely distant from each other and needed to be discussed further to target for agreement.
Step thirteen: aggregate the results
Activities 2 and 3 were used to aggregate the results. Aggregation was completed using a combination of a mathematical aggregation method and a behavioral aggregation method. First,
we used the equal-weighted average rule to combine the individual assessment results from the experts. Then, the individual results were calculated and presented to the experts. Using the standard deviation function it was determined whether there were operational risks with a too low consensus value that needed to be discussed to target for agreement. Second, we used the third activity, a RedLightGreenLight and a CrowBar facilitation recipe, for the behavioral method. In this activity, the experts were encouraged to provide the rationales behind their individual assessments. The first facilitation recipe was used to keep constant track of changing patterns of consensus while the second was used to target for agreement and share information. We used both facilitation recipes to guide the discussion about the operational risks. When there were no risks left to discuss, the risk assessment ended. We refer to Briggs and Vreede (2001b) for an elaborate explanation and discussion of these facilitation recipes.

Table 6-23: summary activities risk assessment process

Activity                                       Pattern of collaboration   Facilitation recipe
1. Determine participants for residual risk    -                          -
2. Determine residual risks                    Evaluate                   MultiCriteria
3. Target for agreement                        BuildConsensus             RedLightGreenLight / CrowBar
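To make the mechanics of steps twelve and thirteen concrete, the sketch below shows how the equal-weighted average rule and the standard-deviation consensus check could be implemented. The expert estimates and the threshold are made up for illustration; the values actually used at Ace Insure are confidential.

```python
import numpy as np

# Hypothetical frequency (or impact) estimates of five experts on the 1-5 scale.
estimates = {
    "risk_a": [3, 4, 3, 4, 3],
    "risk_b": [1, 5, 2, 5, 1],
    "risk_c": [2, 2, 3, 2, 2],
}
THRESHOLD = 1.0  # assumed consensus threshold on the standard deviation

for risk, scores in estimates.items():
    scores = np.asarray(scores, dtype=float)
    aggregated = scores.mean()      # equal-weighted average rule (mathematical aggregation)
    spread = scores.std(ddof=1)     # standard deviation as the consensus measure
    discuss = spread > THRESHOLD    # flag risks that need behavioral aggregation (discussion)
    print(f"{risk}: aggregated={aggregated:.2f}, sd={spread:.2f}, discuss further={discuss}")
```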
Risk mitigation phase
The design, modeling of processes and choice of facilitation recipes were done in a similar way as for the risk identification phase. We refer to Briggs and Vreede (2001b) for an elaborate overview of facilitation recipes for each pattern of collaboration. To fit the Ace Insure situation, several modifications had to be made to MEEA, which was presented in chapter five. First, the initiators decided that five operational risks had to be added to the list, bringing the total to 32 operational risks. This resulted in 1 extra risk for operational excellence, 1 extra risk for return on investment, 0 extra risks for client focus and 3 extra risks for continuity. Second, the initiators wanted the experts to identify the existing controls for these operational risks. This resulted in 31 existing control measures for those five operational risks, bringing the total to 116. Third, the initiators needed the experts to agree on their estimations regarding residual risk as an alternative for Building Consensus, which was advocated in chapter five. Fourth, categorizing and verifying alternative control measures should be left out. Figure 6-16 depicts the resulting process model for risk mitigation. We numbered each activity for quick reference. Table 6-24 summarizes the mapping of the pattern and choice of a recipe to each activity.
Figure 6-16: process model for risk mitigation (Grinsven & Vreede, 2002b; Grinsven, 2007). The model contains seven activities: (1) determine participants for identification of new controls, (2) select most important risks for which new controls must be considered (BroomWagon), (3) consider more effective controls than the current ones (ComparativeBrainstorm), (4) determine participants for estimating residual risk, (5) determine residual risk (MultiCriteria), (6) target for agreement for risks with a low consensus value (RedLightGreenLight / CrowBar) and (7) determine risk owner. Decision points ask whether it is necessary to limit the identification of new controls to a subset of the risks, whether there are risks with a too low consensus value and whether there are risks for which additional controls have to be considered.
Step fourteen: identify alternative control measures
Activities 1, 2 and 3 were used to identify alternative control measures. The first activity was used to determine which experts, labeled participants in Figure 6-16, were able to contribute to the identification of new controls. The second activity was used to select the most important risks for which new controls had to be considered. The third activity was used to identify new, not yet existing controls. For this activity, the ComparativeBrainstorm facilitation recipe was used. Using this recipe, the facilitator 'prompted' the experts to stimulate them in their identification of new controls. The goal was to identify controls that reduced the impact and/or likelihood and as such were more effective than the existing controls. This activity resulted in 8 extra controls for operational excellence, 25 extra controls for return on investment, 12 extra controls for client focus and 4 extra controls for continuity.
Step fifteen: re-assess the residual operational risks
Activities 4, 5 and 6 were used to re-assess the residual operational risks. The fourth activity was used to determine which experts, labeled participants in Figure 6-16, were able to contribute to the estimation of residual risk. The fifth activity was used to re-evaluate the risks and their set of controls. The same five-point scale was used as in the assessment phase.
Step sixteen: aggregate the results
Activities 5 and 6 were used to aggregate the results. Aggregation was completed using the same mathematical method and behavioral method as in the risk assessment process. Once these activities were completed, the seventh activity concluded the process. In this activity, the initiators determined the risk owner. The risk owner was responsible for the implementation of the controls and for determining which resources were needed to achieve this.

Table 6-24: summary of the risk mitigation process

Activity                              Pattern of collaboration   Facilitation recipe
1. Determine participants             -                          -
2. Select most important risk         Converge                   BroomWagon
3. Consider more effective controls   Diverge                    ComparativeBrainstorm
4. Determine participants             -                          -
5. Determine residual risk            Evaluate                   MultiCriteria
6. Target for agreement               BuildConsensus             RedLightGreenLight / CrowBar
7. Determine risk owner               -                          -
6.3. Results and discussion
Below, we discuss the results of the improvements that were made by applying MEEA to operational risk management at Ace Insure. An evaluation of using a group support system in combination with Bank Insure's risk & control self assessment process is presented in Grinsven and Vreede (2002b). The results of applying MEEA to Ace Insure are described following an input, process, output model (Nunamaker, Dennis et al., 1991) and the constructs presented in the method of analysis described in chapter five. We tested MEEA using the survey questions mentioned in chapter five. All questions were presented to the participants on an ordinal scale. One part of the survey consisted of questions using a seven-point scale with 1 being most negative, 4 being equal and 7 being most positive; another part of the survey consisted of questions using a five-point scale with 1 being most negative, 3 being equal and 5 being most positive. To analyze and compare the results with each other and with the existing literature, we coded the ordinal answer categories from the five-point scale to a seven-point scale (Hildebrand, Laing et al., 1977; Johnson & Albert, 1999; DeVellis, 2003; Netemeyer, Bearden et al., 2003). The ordinal answer categories from the seven-point scale remained the same. The questionnaire items used for satisfaction with outcome are, on a seven-point scale, statistically validated and presented in a working paper, see (Briggs, Reinig et al., 2003). Table 6-25 presents the coding of the ordinal five-point answer categories. The facilitator asked the participants to compare the improved situation to their contemporary situation, see Figure 5-9 in chapter five. Seventeen experts reflected their opinion on the survey in both case studies. The results of the case studies are analyzed following the method of evaluation described in chapter five. See (Grinsven & Vreede, 2002b) for the questionnaire items in Dutch.

Table 6-25: coding the ordinal five-point answer categories

Five-point scale:   1 (most negative)   2     3 (equal)   4     5 (most positive)
Seven-point scale:  1                   2.5   4           5.5   7
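As a small illustration of this coding step, the sketch below recodes five-point answers onto the seven-point scale of Table 6-25. The function name is ours; the mapping itself is taken directly from the table and is equivalent to 1 + 1.5 × (answer − 1).

```python
# Recoding of five-point ordinal answers to the seven-point scale (Table 6-25).
FIVE_TO_SEVEN = {1: 1.0, 2: 2.5, 3: 4.0, 4: 5.5, 5: 7.0}

def recode_five_point(answer: int) -> float:
    """Map an ordinal five-point answer to its seven-point equivalent."""
    return FIVE_TO_SEVEN[answer]

assert [recode_five_point(a) for a in (1, 2, 3, 4, 5)] == [1.0, 2.5, 4.0, 5.5, 7.0]
```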
Team We measured and respectively discuss the elements of the construct team: demographical data, composition and collaboration.
Demographical data
The results in Table 6-26 represent the demographical data with respect to team size, gender, average age, work experience and experience with ORM tools and techniques. The same experts, two female and fifteen male, were selected for both ORM sessions. Our interviews revealed that these experts had dissimilar knowledge about the problem domain. The results in the table further indicate that the experience with similar tools and techniques used for ORM was very low. This is in sharp contrast with the number of years of working experience of the experts. A logical explanation was found in our interview results, which indicated that at the time of this case study, ORM was at a low maturity level in the institution.

Table 6-26: demographical data

                                                                   Session 1   Session 2   μtot
Team size                                                          17          17          17
Male                                                               15          15          15
Female                                                             2           2           2
Average age                                                        40.76       40.29       40.53
Average number of years working experience                         18.76       19.03       18.90
Average number of years experience with similar tools/techniques   1.41        2.00        1.71
We used seventeen experts for both our ORM sessions. This is in line with the GSS literature, which states that the optimal group size in a face-to-face GSS-supported group is usually between ten and twenty experts (Dennis, Heminger et al., 1990; Dennis, Nunamaker et al., 1991; Dennis & Gallupe, 1993). However, this is in contrast with the literature on expert judgment and operational risk management, which argues that, in principle, the largest possible number of experts should be used to increase the sample size and to minimize inconsistency and bias (Clemen & Winkler, 1999; Goossens & Cooke, 2001). This literature, however, does not point out how large that number should be. From our interview results we found that using a mixed-gender group in our ORM sessions avoided internal politics and prevented the experts in the risk assessment phase from groupthink and biases. These results are respectively in line with (Karakowsky & Elangovan, 2001) and (Janis, 1972). Moreover, feedback from the experts indicated that the women, in contrast to the men, were more cautious and less prone to overconfidence while assessing the operational risks in terms of frequency of occurrence and impact. These results are in line with (Heath & Gonzales, 1995; Karakowsky & Elangovan, 2001).
Composition
The results in Table 6-27 indicate that the suitability of the group's composition for reaching the goals significantly improved compared to the contemporary situation. Moreover, the average value of the construct composition (μ = 5.28 > 4) indicates that there is a positive significant improvement.

Table 6-27: composition

Item: The composition of this group was suitable to reach the goals
Session 1: 5.50 (σ = 1.59)   Session 2: 5.06 (σ = 0.88)   Total: 5.28 (σ = 1.29)
Distribution: not normal   Improved: yes
pKS = 0.004   pWX = 0.000   μ = 5.28
The overall positive results are, however, somewhat in contrast with the results from our observations and interviews. We selected the experts mainly on the basis of their appropriate substantive expertise. This was mainly because the initiators wanted to prevent internal politics. This is in line with Fjermestad and Hiltz (2000), who propose to use professionals. The results from session two point out that the match between the experts and their task was not optimal. Our interviews and observations reinforced this finding and indicate that, specifically in the risk assessment phase, relevant managerial knowledge was lacking. Experts mentioned that it was difficult and sometimes even impossible to assess the operational risks on the basis of the knowledge that they had. This indicates that some of the experts lacked the necessary analytical skills to estimate the frequency of occurrence and impact of operational risks. This finding is in line with McKay (2000), who states that there is a necessity to deploy substantive expertise and process expertise in any risk analysis. Generally, from this we can conclude that substantive expertise is needed more than process expertise in the risk identification phase. We can also conclude that both substantive and process knowledge are needed in the risk assessment and mitigation phases.
Collaboration
The results in Table 6-28 indicate that collaboration with other group members significantly improved compared to the contemporary situation. Moreover, the average value of the construct collaboration (μ = 5.50 > 4) indicates that there is a positive significant improvement.

Table 6-28: collaboration

Item: Collaboration with other group members went well
Session 1: 5.76 (σ = 0.79)   Session 2: 5.24 (σ = 0.59)   Total: 5.50 (σ = 0.74)
Distribution: not normal   Improved: yes
pKS = 0.000   pWX = 0.000   μ = 5.50
Briggs et al. (2003) argue that a group moves through some combination of patterns of collaboration when working towards a common goal. Building consensus is one of these patterns, see chapter three. Despite the fact that the experts evaluated collaboration as improved, there were some negative experiences with respect to building consensus in the risk assessment phase. Two logical explanations for this were found in our observations and interviews. First, experts often did not want to change their opinion about the estimated frequency and impact values by building consensus with each other. They held on to their own estimation because by nature experts differ in opinion. This finding is in line with Cooke and Goossens (2000), who argue that one subjective probability is as good as another and that there is no rational mechanism for persuading individuals to adopt the same degrees of belief. Second, the experts felt that exchanging arguments would not necessarily improve their estimation. Our findings are also in line with Clemen and Winkler (1999), who argue that after a discussion a group will typically advocate a riskier course of action than they would if acting individually or without discussion. Moreover, they argue that when experts do modify their opinion to be closer to the group, the accuracy of the estimation decreases. Generally, from the above we can conclude that collaboration can be risky while experts estimate the frequency and impact of the operational risks.
Technology
We measured and discuss the element of the construct technology: anonymity.

Anonymity
The results in Table 6-29 indicate that the questions are not reliable enough to measure the underlying construct anonymity. However, the results from the individual questions indicate that the option to work anonymously in ORM significantly improved compared to the contemporary situation and that the functionality of anonymity significantly improved compared to the contemporary situation.

Table 6-29: anonymity

Item                                               Session 1 (σ)   Session 2 (σ)   Total (σ)     Distribution   Improved
The option to work anonymously was well received   5.32 (1.17)     5.06 (0.70)     5.19 (0.96)   Not normal     Yes
Anonymity is functional                            5.94 (1.03)     5.41 (0.83)     5.68 (0.96)   Not normal     Yes
CA = 0.6014
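The reliability judgment above is based on Cronbach's Alpha (CA). The sketch below shows how such a two-item alpha could be computed; the ratings are randomly generated stand-ins, not the actual survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_variances / total_variance)

# Hypothetical ratings of 17 experts on the two anonymity questions (1-7 scale).
rng = np.random.default_rng(0)
ratings = rng.integers(4, 8, size=(17, 2))
print(round(cronbach_alpha(ratings), 4))
```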
• Question 1: the option to work anonymously. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.004 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the option to work anonymously was received better than in the contemporary situation. Moreover, the average value of this specific question (μ = 5.19 > 4) indicates that there is a positive significant improvement.
• Question 2: anonymity is functional. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.001 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the functionality of anonymity significantly improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.68 > 4) indicates that there is a positive significant improvement.
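For readers who want to reproduce this style of analysis, the sketch below mirrors the procedure used throughout this section: a KS check for normality, followed by a one-sample Wilcoxon signed-rank test (or a t-test when the data are normal) against the neutral midpoint of 4. The ratings are invented, and the assumption that the tests are run one-sample against the midpoint is ours; the SPSS values reported in the tables will not be reproduced exactly.

```python
import numpy as np
from scipy import stats

# Hypothetical ratings of 17 experts on one question (1-7 scale, 4 = "equal to
# the contemporary situation"); the real survey data are not reproduced here.
ratings = np.array([5, 6, 5, 4, 7, 5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5], dtype=float)

# Normality check: Kolmogorov-Smirnov test against a fitted normal distribution.
_, p_ks = stats.kstest(ratings, "norm", args=(ratings.mean(), ratings.std(ddof=1)))

if p_ks < 0.025:
    # Not normal: one-sample Wilcoxon signed-rank test against the neutral value 4.
    _, p = stats.wilcoxon(ratings - 4, alternative="greater")
else:
    # Normal: one-sample t-test against the neutral value 4.
    _, p = stats.ttest_1samp(ratings, 4, alternative="greater")

improved = p < 0.025 and ratings.mean() > 4
print(f"p_ks={p_ks:.3f}, p={p:.3f}, improved={improved}")
```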
The principal effect of anonymity should be a reduction of characteristics such as member status, internal politics, fear of reprisals and groupthink (Janis, 1972; Grohowski, McGoff et al., 1990). Our results indicate that anonymity in the risk identification phase, see session 1, was valued higher than in the risk assessment phase, see session 2. The results also indicate that anonymity in the risk identification phase is more functional than in the risk assessment phase. One expert stated: 'now I can truly say what the risks are that our business runs without being punished'. This finding is in line with Rowe and Wright (2001), who state that events should be identified anonymously, and also with Nunamaker et al. (1988), who state that a GSS supports anonymous contributions. However, several experts mentioned that, in the risk assessment phase, anonymity diminished because of the verbal discussions. They mentioned that (1) they feared being punished if management knew 'who' made a certain statement, and (2) they felt that the discussions often pressed them to change their estimation of the frequency and/or impact. Generally, from the above we can conclude that to ensure anonymity in the risk identification and assessment phases fewer verbal discussions are needed and/or anonymity needs to be safeguarded during those discussions.
Process We measured and respectively discuss the elements of the construct process: structure, involvement and participation, interaction and facilitation.
Structure
The results in Table 6-30 indicate that the structure of the ORM process significantly improved compared to the contemporary situation. Moreover, the average value of the construct structure (μ = 4.97 > 4) indicates that there is a positive significant improvement.

Table 6-30: structure

Item: Enough time was spent on important topics
Session 1: 5.24 (σ = 0.95)   Session 2: 4.71 (σ = 0.94)   Total: 4.97 (σ = 0.97)
Distribution: not normal   Improved: yes
pKS = 0.002   pWX = 0.000   μ = 4.97
These results are, however, somewhat in contrast with the results from our observations and interviews. In session 1 the experts reflected that enough time was spent on important subjects. Our results and observations reinforce these findings. They indicate that the process structure in session 1 was clear and the activities relatively simple. In session 2, however, the experts indicated that the process structure was less clear and the activities more difficult to execute. They reflected that the structure in this session was less clear because the initiators doubted the accuracy of the operational risks that were identified in session 1. Our interviews showed that most experts were 'annoyed' by the fact that the results from session 1 had to be supplemented with new operational risks. The interviews also indicated that the activities in session 2 were more difficult because more verbal discussions were needed to reach a certain level of consensus. One initiator mentioned: 'I wanted less verbal discussion in session 2'. Our findings are in line with Clemen and Winkler (1999) and Cooke and Goossens (2002), who argue that using structured methods provides better results than using unstructured methods. The findings are also in line with Dennis and Gallupe (1993), who argue that the use of structure appears to be case specific; a structure that 'fits' the task can improve performance, but an incorrect structure for the task can reduce performance. Moreover, our interviews revealed that the experts were more distracted from their goal in session 2 than in session 1, which influenced the execution of the activities negatively. This is in line with literature that argues that a lack of a clear goal often results in ineffective and inefficient meetings (Herik, 1998). From the above we can conclude that the structure in the risk identification phase had a better fit with the activities than in the risk assessment phase. The activities in the risk assessment phase are perceived to be more difficult and managerial knowledge is needed to perform them. We also conclude that enough time has to be spent on the process structure, the task structure and the goals in the risk assessment phase.
Involvement and participation
The results in Table 6-31 indicate that the questions are not reliable enough to measure the underlying construct involvement and participation. However, the individual questions indicate that using a GSS in MEEA significantly increased the experts' involvement compared to the contemporary situation and that using a GSS with MEEA significantly encouraged participation compared to the contemporary situation.

Table 6-31: involvement and participation

Item                                                      Session 1 (σ)   Session 2 (σ)   Total (σ)     Distribution   Improved
The GSS meeting increased the participants' involvement   4.97 (1.29)     4.71 (0.77)     4.84 (1.06)   Not normal     Yes
The GSS meeting encouraged participation                  5.41 (1.24)     5.15 (0.84)     5.28 (1.05)   Not normal     Yes
CA = 0.7002
• Question 1: the GSS increased the participants' involvement. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.006 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the GSS increased the participants' involvement compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.84 > 4) indicates that there is a positive significant improvement.
• Question 2: GSS improves participation. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.002 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the GSS significantly improved the level of participation compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.28 > 4) indicates that there is a positive significant improvement.
Our observations and interviews indicate why the involvement and participation slightly decreased in session 2. The experts and initiators indicated that their involvement decreased by a lack of anonymity in session 2, the risk assessment phase. This finding is in contrast with the mixed results presented in Wijk (1996b), Vreede and Wijk (1997a) where it is argued that involvement increased by a lack of anonymity in discussions. Our observations indicate that the ability to meet other expert participants face to face increased participation in both sessions.
However, the level of participation was higher in session 1 than in session 2. Our interviews indicate that, specifically in session two, a lack of anonymity and process structure caused the experts to feel that the discussions were too mechanical. This finding is somewhat in line with Wijk (1996b) and Vreede and Wijk (1997a), who argue that the ability to ventilate ideas anonymously can increase the sense of participation in the session. From the above we can conclude that using a GSS in MEEA can increase the experts' involvement and the level of participation. However, caution is needed because experts have different preferences with respect to the process structure and anonymity.
Interaction
The results in Table 6-32 indicate that the questions are not reliable enough to measure the underlying construct interaction. However, the individual questions indicate that the interaction among experts significantly improved by using the GSS in MEEA compared to the contemporary situation and that the exchange of information and ideas improved significantly compared to the contemporary situation.

Table 6-32: interaction

Item                                                                    Session 1 (σ)   Session 2 (σ)   Total (σ)     Distribution   Improved
The GSS meeting encouraged interaction among the participants           5.24 (1.09)     4.88 (0.93)     5.06 (1.01)   Not normal     Yes
The exchange of information and ideas between participants increased   4.88 (1.19)     4.79 (0.77)     4.84 (0.99)   Not normal     Yes
CA = 0.7889
• Question 1: the GSS meeting encourages interaction. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.000 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the GSS meeting improves interaction among participants compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.06 > 4) indicates that there is a positive significant improvement.
• Question 2: the exchange of information and ideas. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.011 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the exchange of information and ideas improved significantly compared to the contemporary situation. Moreover, the average
value of this specific question (μ = 4.84 > 4) indicates that there is a positive significant improvement.
The results in Table 6-32 indicate that the interaction slightly decreased in session 2. Interviews with experts revealed that some of them preferred electronic interaction over verbal interaction. This finding is in line with Wijk (1996b) and Vreede and Wijk (1997a). Moreover, feedback from the experts taught us that the verbal discussions in session 2 were perceived as negative because they aimed at reaching more agreement about frequency and impact estimations. This finding is in line with Cooke and Goossens (2000), who argue that one subjective probability is as good as another and that there is no rational mechanism for persuading individuals to adopt the same degrees of belief. In our electronic meeting logs we found that the experts identified 143 operational risks during session 1. This indicates that the experts exchanged a great deal of information and ideas electronically. This is in line with Rowe and Wright (2001) and Stewart (2001), who argue that computers should be used to support experts during information processing. In session 2, however, the exchange of information and ideas was mainly verbal due to the nature of the activities. These activities were mainly aimed at aggregating the estimated frequency and impact results by using group interaction. The experts perceived this as negative because the group interaction did not lead to more agreement about the conflicting viewpoints. This is in contrast with Clemen and Winkler (1999), who argue that sharing information should lead to better arguments and that redundant information would be discounted. Our findings are, however, somewhat in line with Heath and Gonzales (1995) and Quaddus et al. (1998), who argue for using a devil's advocate when interaction is needed. From the above we can conclude that using MEEA in combination with a GSS is valuable to gather information and process operational risks and that it supports sharing of information during the risk identification phase. On the other hand, we conclude that a GSS is less suitable to support the operational risk assessment phase, where aggregation of the results is needed.
Facilitation The results in Table 6-33 indicate that the facilitation of the ORM sessions significantly improved compared to the contemporary situation. Moreover, the average value of the construct facilitation (μ = 5.04 > 4) indicates that there is a positive significant improvement.
Table 6-33: facilitation

Item                                                                                          Session 1 (σ)   Session 2 (σ)   Total (σ)     Distribution   Improved
The facilitator had a positive influence on this process                                      5.50 (1.06)     4.44 (1.03)     4.97 (1.16)   Not normal     Yes
The facilitator had sufficient understanding of the meeting subject to support the process   5.41 (1.12)     4.79 (1.20)     5.10 (1.19)   Not normal     Yes
CA = 0.9090   pKS = 0.077   pT = 0.000   μ = 5.04
• Question 1: the facilitator had a positive influence. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.017 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the facilitator's influence on the ORM process significantly improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.97 > 4) indicates that there is a positive significant improvement.
• Question 2: the facilitator's understanding of the subject. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.010 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the facilitator's understanding of the meeting subject, to support the ORM process, significantly improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.10 > 4) indicates that there is a positive significant improvement.
Our results indicate that the facilitator had a positive influence on the process and are as such in line with (Bostrom, Watson et al., 1992; Anson, Bostrom et al., 1995). Moreover, several positive comments from experts related to the process, such as 'good in front of a group', 'confident', 'only an assistant is lacking' (Grinsven & Vreede, 2002b). Furthermore, our observations and interviews indicate that the facilitator was important to reach the goals. This is in line with Vreede et al. (2002). The results and feedback also point out that, in the risk identification phase, the process structure and the momentum were better than in the risk assessment phase because the activities were clearer. Further, Vreede and Wijk (1997a) suggest that extensive subject matter expertise is not critical to successfully support an electronic meeting. Unfortunately, they do not make clear to what level subject matter expertise is needed. Our results are somewhat in contrast with their findings for two reasons. First, our results indicate that the facilitator had sufficient understanding of the meeting subject to support the process. Second, our observations and interviews point out that, especially in the risk assessment phase, knowledge of the subject matter helps the facilitator to challenge the experts and to feel more confident with respect to the content. This finding is somewhat in contrast with Heath and Gonzales (1995) and Quaddus et al. (1998), who suggest using a devil's advocate when interaction is needed to feed the experts with additional challenging information. From the above we can conclude that a certain level of subject matter expertise is necessary for the facilitator to support ORM meetings. This expertise can be established by extensive preparation and can subsequently be divided over certain roles, for example the facilitator and a devil's advocate. However, we recommend weighing the extra costs against the expected benefit of using a devil's advocate in ORM sessions.
Outcome
Following Nunamaker et al. (1989a) we measured and discuss the constructs effectiveness and efficiency from the initiators' and the participants' points of view respectively. Following George et al. (1990) we argue that if experts dislike our approach MEEA, they are less likely to use it, even if it might help them to improve the effectiveness and efficiency of the identification, assessment and mitigation of operational risks. Following Fjermestad and Hiltz (2000) we measure and discuss satisfaction with outcome and process respectively.
Effectiveness initiators
The results in Table 6-34 indicate that the effectiveness of ORM improved. Note that Cronbach's Alpha (CA) is not determined because only one manager and two initiators were interviewed. Therefore the columns 'distribution' and 'improved' are not applicable (n.a.).

Table 6-34: effectiveness - initiators

Item                                        Session 1 (σ)   Session 2 (σ)   Total (σ)     Distribution   Improved
Goal of the session is achieved             6.5 (0.86)      6 (0.87)        6.21 (0.76)   n.a.           n.a.
Applicability of the results is high        5.5 (1.5)       5 (1.73)        5.21 (1.34)   n.a.           n.a.
GSS increased the quality of the outcomes   6 (0.86)        5.5 (1.5)       5.71 (1.04)   n.a.           n.a.
Fjermestad and Hiltz (2000) evaluated 54 case and field studies and found that 89% of the studies that measured effectiveness report that effectiveness was improved using GSS technology in comparison to other methods. Our results and interviews support these findings. The interviews with the initiators indicate that MEEA helped to improve ORM. It enabled the initiators to achieve their goals while maintaining a high quality of the results. More specifically,
one of the initiators argued that the results were very 'useful' for their organization because (1) the experts accepted the identified operational risks and control measures, (2) the descriptions of the risks and controls were much clearer than before and (3) it increased the experts' awareness about operational risks and control measures. Another interview revealed that it was most likely the entire structure of the ORM process that helped them so much, not the GSS itself. Moreover, the initiators felt that this really helped them to improve their business processes. From the above we can conclude that our approach MEEA enabled the initiators (1) to achieve their goals more effectively, (2) to apply their results, and (3) to increase the quality of the outcomes, i.e. the description of operational risks and their assessment in terms of frequency and impact.
Effectiveness participants
The results in Table 6-35 indicate that the effectiveness of operational risk management significantly improved compared to the contemporary situation. Moreover, the average value of the construct effectiveness (μ = 5.10 > 4) indicates that there is a positive significant improvement.

Table 6-35: effectiveness - participants

Item                                                                                      Session 1 (σ)   Session 2 (σ)   Total (σ)     Distribution   Improved
The GSS session was more effective than a manual session                                  5.59 (0.99)     4.88 (0.93)     5.24 (1.01)   Not normal     Yes
The GSS session helped the group to generate the most important ideas and alternatives   5.50 (0.75)     5.15 (0.66)     5.32 (0.72)   Not normal     Yes
The GSS session increased quality of outcomes of the session                              5.50 (0.75)     4.53 (0.74)     5.01 (0.88)   Not normal     Yes
The outcomes of the sessions met my expectations                                          5.41 (1.12)     4.26 (0.95)     4.84 (1.18)   Not normal     Yes
CA = 0.8302   pKS = 0.182   pT = 0.000   μ = 5.10
• Question 1: GSS session more effective than a manual session. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.001 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the GSS ORM sessions are more effective than the manual ORM sessions in the contemporary situation. Moreover, the average value of this specific question (μ = 5.24 > 4) indicates that there is a positive significant improvement.
• Question 2: GSS helped to generate ideas. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.000 (< 0.025). The Wilcoxon-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the GSS improved the identification of operational risks and control measures compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.32 > 4) indicates that there is a positive significant improvement.
• Question 3: GSS session increased quality. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.001 (< 0.025). The Wilcoxon-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the use of a GSS session increased the quality of outcomes compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.01 > 4) indicates that there is a positive significant improvement.
• Question 4: outcomes met my expectations. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.013 (< 0.025). The Wilcoxon-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.001 < 0.025). Therefore, we can conclude that an improvement was made to the expectations of participants towards the outcomes of the ORM sessions compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.84 > 4) indicates that there is a positive significant improvement.
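Each question above is tested with the same procedure: a KS test decides whether the responses are approximately normally distributed, after which a Wilcoxon-test or a T-test is applied against the neutral scale midpoint of 4. A minimal sketch of that procedure, assuming the per-participant scores for one question are available; the response vector and the scipy-based tests are illustrative, and the asymptotic significance values reported in this chapter may differ slightly from this approximation:

```python
# Minimal sketch of the per-question test procedure: KS test for normality,
# then Wilcoxon signed-rank or one-sample t-test against the neutral value 4.
# The scores below are illustrative, not the original questionnaire data.
from scipy import stats
import numpy as np

responses = np.array([5, 6, 5, 5, 6, 5, 5, 7, 5, 6, 5, 5, 6, 5, 5, 6, 5])
NEUTRAL = 4.0
ALPHA = 0.025  # one-tailed significance level used in the text

# Step 1: Kolmogorov-Smirnov test for normality of the (standardized) responses.
z = (responses - responses.mean()) / responses.std(ddof=1)
p_ks = stats.kstest(z, "norm").pvalue

if p_ks > ALPHA:
    # Normally distributed: one-sample t-test against the neutral value.
    p_test = stats.ttest_1samp(responses, NEUTRAL, alternative="greater").pvalue
else:
    # Not normal: Wilcoxon signed-rank test against the neutral value.
    p_test = stats.wilcoxon(responses - NEUTRAL, alternative="greater").pvalue

improved = p_test < ALPHA and responses.mean() > NEUTRAL
print(f"p_KS={p_ks:.3f}, p_test={p_test:.3f}, improved={improved}")
```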
Our results and interviews with experts indicate that MEEA improved the identification of operational risks and control measures. As such, these findings support the findings from the initiators. The experts' results show that the quality of the outcome significantly increased. Our interviews revealed that most experts interpreted quality as the quality of the descriptions of the operational risks and control measures. As such, we support the findings of Genuchten et al. (1998) and Easley et al. (2003), who argue that using a GSS has a positive effect on effectiveness. Further, a number of GSS studies stress the importance of a fit between task and technology to increase the effectiveness of a meeting, see e.g. (Diehl & Stroebe, 1987; DeSanctis, Poole et al., 1993; Dennis, Hayes et al., 1994; Howard, 1994). Our results, observations and interviews, however, indicate that this fit was not always optimal. Observations and feedback from experts taught us that it was difficult to use the GSS software to support the assessment of operational risks.
From the above we can conclude that our approach MEEA, which was used in these ORM sessions, is more effective than the approach used in the contemporary situation (see chapter four).
Efficiency initiators
Table 6-36 presents the average estimated values when alternative approaches are used for ORM. The results indicate that our approach saves on average 55.5% on man-hours and 63.4% on throughput time. Following Grinsven and Vreede (2002b), the alternative approaches were estimated by the expert initiators and are elaborated upon below the table.
Table 6-36: efficiency – initiators (Grinsven & Vreede, 2002b)
          | Man-hours per session | | | Throughput time (in days) | |
          | Session 1 | Session 2 | μ | Session 1 | Session 2 | μ
Estimated | 226.0 | 281.0 | 253.5 | 6.5 | 7.5 | 7.0
Real      | 113.5 | 109.5 | 111.5 | 3.0 | 2.0 | 2.5
Savings   | 49.9% | 61.0% | 55.5% | 53.8% | 73.0% | 63.4%
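The savings percentages in Table 6-36 follow directly from the estimated and realised figures; a minimal sketch of the calculation for the man-hours columns (throughput time works the same way):

```python
# Minimal sketch of how the savings in Table 6-36 are derived from the
# estimated and realised man-hours per session. Figures are taken from the
# table; small rounding differences against the table are possible.
estimated = {"session 1": 226.0, "session 2": 281.0}
real = {"session 1": 113.5, "session 2": 109.5}

savings = {s: (estimated[s] - real[s]) / estimated[s] for s in estimated}
for session, saving in savings.items():
    print(f"{session}: {saving:.1%}")        # roughly 50% and 61%

mean_saving = sum(savings.values()) / len(savings)
print(f"mean: {mean_saving:.1%}")            # about 55.4% (55.5% in the table)
```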
“We used ex-ante and ex-post interviews to ask three initiators about the efficiency with respect to man-hours and throughput time. The initiators were asked to reflect on the alternative scenarios they normally used. We specifically asked the initiators how many man-hours and how much throughput time it would take them to reach similar results as in these two sessions. The first initiator stated that he normally conducted interviews with a number of experts using interview techniques for ORM. This work process usually consists of working out the interviews, summarizing the interviews and giving feedback to the interviewees in a general workshop. He mentioned that a minimum of twenty-five to thirty people would have to be interviewed to build in more assurance and synergy. The second initiator stated that he usually used a brown paper workshop. To reach similar results, he mentioned that the workshop would take at least one and a half days. The third initiator stated that he used two different approaches to reach similar results in ORM. The first approach was a brown paper workshop similar to that of the second initiator, but he argued that one full day would normally be enough. The second approach was the use of thematic subgroups that prepare a certain part of the ORM session, after which all results are summarized and presented to the group as a whole. He argued that there should be a minimum of seven thematic subgroups and that each subgroup should be about three times as big.”
Our positive findings are in line with most GSS research, which states that using a GSS can save up to 50% of person hours (Nunamaker, Vogel et al., 1989b; Fjermestad & Hiltz, 2000). Notwithstanding these positive findings, our observations and interviews with the initiators point out that the GSS particularly contributes to efficiency gains when much interaction is required, e.g. when identifying operational risks. They also indicate that most efficiency was gained in the preparation phase. Moreover, the efficiency gains were measured in man-hours and throughput time within the sessions. This means that the efficiency gains reduce significantly when we take into account (1) the actual costs of using a group support system to support multiple experts, (2) the actual time spent by the facilitator and initiators in the preparation phase and (3) the actual time spent by the experts and initiators in the sessions. From the above we conclude that efficiency gains are primarily achieved in the preparation phase, i.e. when a GSS is not used. Note that in the preparation phase a variety of variables are structurally addressed by MEEA to improve ORM in all phases. This finding is in line with Valacich et al. (1989), who state that efficiency gains can be pursued through the improvement of a variety of variables. Although efficiency seems to depend on a number of variables, we conclude that our approach improves efficiency.
Efficiency participants
The results in Table 6-37 indicate that the efficiency of the ORM sessions significantly improved compared to the contemporary situation. Moreover, the average value of the construct efficiency (μ = 4.53 > 4) indicates that there is a positive significant improvement.
Table 6-37: efficiency - participants
ORM session | 1 | σ1 | 2 | σ2 | μtot | σtot | Distribution | Improved
The available time has been used well | 5.68 | 0.73 | 4.00 | 1.19 | 4.84 | 1.29 | Not normal | Yes
The agenda has been executed efficiently | 5.59 | 0.83 | 4.26 | 1.21 | 4.93 | 1.23 | Not normal | Yes
CA = 0.9323, pKS = 0.195, pT = 0.003, μ = 4.53
• Question 1: available time has been used well. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.002 (< 0.025). The Wilcoxon-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the usage of the available time improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.84 > 4) indicates that there is a positive significant improvement.
• Question 2: agenda has been executed efficiently. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.001 (< 0.025). The Wilcoxon-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.001 < 0.025). Therefore, we can conclude that the efficiency of executing the agenda improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.93 > 4) indicates that there is a positive significant improvement.
In general, our interviews with the experts indicated that using a GSS has a positive effect on efficiency and that using a clear agenda, and sticking to it, helped them to execute the meeting. As such, our findings are in line with Fjermestad and Hiltz (2000), who state that a GSS increases the efficiency of group meetings. However, we note that the experts filled out the questionnaire after the sessions and only reflected on the risk identification, assessment and mitigation phases. They evaluated the efficiency of session 2 lower than that of session 1. From the above we conclude that, although in general our findings with respect to efficiency are positive, the extent of this positive effect appears to depend on a variety of factors, such as the preparation of the ORM sessions and the structure of the process.
Satisfaction with outcome
The results in Table 6-38 indicate that the satisfaction with outcome is significantly improved compared to the contemporary situation. Moreover, the average value of the construct satisfaction with outcome (μ = 4.57 > 4) indicates that there is a positive significant improvement.
Table 6-38: satisfaction with outcome
ORM session | 1 | σ1 | 2 | σ2 | μtot | σtot | Distribution | Improved
I liked the outcome of today's meeting | 5.00 | 0.87 | 4.06 | 1.25 | 4.53 | 1.16 | Normal | Yes
I feel satisfied with the things we achieved in today's meeting | 5.24 | 0.83 | 4.24 | 1.09 | 4.74 | 1.08 | Normal | Yes
When the meeting was finally over, I felt satisfied with the results | 5.35 | 0.93 | 3.94 | 1.20 | 4.65 | 1.28 | Normal | Yes
Our accomplishments today give me a feeling of satisfaction | 5.18 | 1.01 | 4.06 | 1.25 | 4.62 | 1.26 | Normal | Yes
I am happy with the results of today's meeting | 4.65 | 1.41 | 4.00 | 1.22 | 4.32 | 1.34 | Normal | No
CA = 0.9516, pKS = 0.916, pT = 0.006, μ = 4.57
• Question 1: I liked the outcome. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.186 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.012 < 0.025). Therefore, we can conclude that the level of appreciation of the outcome improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.53 > 4) indicates that there is a positive significant improvement.
• Question 2: satisfaction with the achievements. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.088 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the satisfaction with the participants' achievements in the meeting improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.74 > 4) indicates that there is a positive significant improvement.
• Question 3: satisfaction with the results. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.226 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.006 < 0.025). Therefore, we can conclude that the participants' satisfaction with the results after the meeting improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.65 > 4) indicates that there is a positive significant improvement.
• Question 4: satisfaction with the accomplishments. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.229 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.007 < 0.025). Therefore, we can conclude that the feeling of satisfaction with respect to the accomplishments significantly improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.62 > 4) indicates that there is a positive significant improvement.
• Question 5: happiness with the results. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.092 (> 0.025). The T-test indicates that H0 cannot be rejected (Asymp Sig (1-tailed): 0.169 > 0.025). Therefore, we can conclude that the participants' feeling of happiness concerning the results of the meeting did NOT significantly improve compared to the contemporary situation.
Fjermestad and Hiltz (2000) suggest that groups using a GSS are more satisfied with the outcome compared to manual or face-to-face meetings. Our results above support these findings. However, they also indicate that the satisfaction in session 1 is higher than in session 2. More specifically, although the overall feeling of happiness with the results improved slightly, it did not improve significantly. Interviews with the expert participants revealed that they were less satisfied when estimating the frequency of occurrence and impact of operational risks than when identifying operational risks. The expert participants reported that this was because they had to deliberate and take minor decisions with other experts about the aggregation of the frequency and impact results of operational risks. This finding is in line with Shaw (1998), who states that groups who use a GSS are more satisfied when completing idea-generating tasks than decision-making tasks. Further, our interviews with expert participants revealed that they were satisfied with the quality of the outcome: the quantity and accuracy of descriptions regarding the identified operational risks, identified control measures and overall estimations of the frequency of occurrence and impact. These findings are in line with Fjermestad and Hiltz (2000), who suggest that a perceived greater quality of the results is one of the reasons for overall improvement in satisfaction.
Satisfaction with process
The results in Table 6-39 indicate that satisfaction with the ORM process is significantly improved compared to the contemporary situation. Moreover, the average value of the construct satisfaction with process (μ = 4.80 > 4) indicates that there is a positive significant improvement.
Table 6-39: satisfaction with process
ORM session | 1 | σ1 | 2 | σ2 | μtot | σtot | Distribution | Improved
I feel satisfied with the way in which today's meeting was conducted | 5.53 | 0.72 | 4.06 | 1.14 | 4.79 | 1.20 | Not normal | Yes
I feel good about today's meeting process | 5.53 | 0.87 | 4.00 | 1.22 | 4.76 | 1.30 | Normal | Yes
I found the progress of today's session pleasant | 5.65 | 0.79 | 4.47 | 1.23 | 5.06 | 1.18 | Normal | Yes
I feel satisfied with the procedures used in today's meeting | 5.53 | 1.07 | 3.88 | 1.32 | 4.71 | 1.45 | Normal | Yes
I feel satisfied about the way we carried out the activities in today's meeting | 5.47 | 0.80 | 3.88 | 1.27 | 4.68 | 1.32 | Not normal | Yes
CA = 0.9595, pKS = 0.201, pT = 0.000, μ = 4.80
• Question 1: satisfied with the meeting. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.012 (< 0.025). The Wilcoxon-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.001 < 0.025). Therefore, we can conclude that the satisfaction with the way in which the meeting was conducted improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.79 > 4) indicates that there is a positive significant improvement.
• Question 2: feeling good about the meeting process. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.030 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.002 < 0.025). Therefore, we can conclude that the feeling of goodness concerning the meeting process improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.76 > 4) indicates that there is a positive significant improvement.
• Question 3: progress of the session. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.085 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the pleasantness of progress in the ORM sessions significantly improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.06 > 4) indicates that there is a positive significant improvement.
• Question 4: satisfaction with the procedures. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.287 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.008 < 0.025). Therefore, we can conclude that the satisfaction with the procedures used improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.71 > 4) indicates that there is a positive significant improvement.
• Question 5: satisfaction with the activities. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.012 (< 0.025). The Wilcoxon-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.010 < 0.025). Therefore, we can conclude that the satisfaction concerning the execution of the activities during the meeting improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.68 > 4) indicates that there is a positive significant improvement.
The literature indicates that using a GSS seems to have a positive effect on satisfaction with the process (Fjermestad & Hiltz, 2000). Our results support these findings and indicate that the expert participants were more satisfied with the 'new' ORM process than with their contemporary situation. The results also indicate that the expert participants were more satisfied with the ORM process in session 1 than in session 2. Analysis of the interviews revealed that this was mainly due to the procedures used in session 2. As mentioned before, the initiators wanted to identify operational risks again before assessing them, in order to broaden the operational risks identified in session 1. This disturbed the ORM process significantly because most experts were annoyed by the fact that the initiators wanted to push their operational risks into the assessment phase. We can conclude that using MEEA in combination with a GSS improves the satisfaction with the process.
6.4. Learning moments
Three ORM processes were prepared, designed and evaluated for Ace Insure: a risk identification process, a risk assessment process and a risk mitigation process. In this section, we discuss the learning moments for MEEA following the way of thinking, way of working, way of modeling and way of controlling as described in chapter five.
6.4.1. Way of thinking
Using a bounded rationality view helped us to design and choose an appropriate and acceptable solution. Numerous stakeholders were involved in the design and evaluation at Ace Insure. Although their goals and objectives were different and sometimes even conflicting, we learned that fine-tuning the common elements of these goals helps us to arrive at an acceptable solution for all stakeholders. We learned that ORM can be improved by structuring the phases of operational risk management: preparation, risk identification, risk assessment, risk mitigation and reporting. Structuring the preparation phase was found to be critically important because it largely determines the frame of reference for the experts. Design guideline two helped us to create commitment towards the outcome by ensuring procedural rationality among decision-makers, initiators, experts and stakeholders. Further, we learned that the decision-makers and experts were able to create a shared understanding by interacting face-to-face and exchanging viewpoints about operational risks. Design guideline three helped us in this. Design guideline four helped us in making the operational risk management processes, in which expert judgment is utilized, more practical. This was mainly achieved by guiding facilitation principles found in
the GSS literature and by using standard facilitation recipes. We learned that flexibility with these recipes was more difficult to accomplish because once a recipe was chosen, it was not changed during the sessions. However, during the design phase, different recipes can be tried out, thereby increasing the flexibility of choice for the facilitator.
6.4.2. Way of working
Understanding phase
Conceptualizing, specifying and validating the problem situation helped us to understand the situation at Ace Insure. Describing the, often different, goals helps to collect an adequate amount of information. Then, when these goals need to be attained, different parts of the collected information can be used to establish them. Further, we learned that by investigating the main perceived problems, the motive(s) and hidden assumptions became clearer. These motives and assumptions proved to be useful when eliciting the viewpoints from the experts. We also learned that initiators and decision-makers often want to achieve different goals in a relatively short timeframe. Then, compromises often have to be made with respect to the deliverables. It is important that the stakeholders commit to these compromises before entering the sessions.
Design phase
The case study taught us that the preparation phase is important to provide a frame of reference for the experts. We learned that specific attention needs to be paid here to a further clarification of the context and objectives. Practical examples help the experts to focus on the context and objectives, which have to be set by the initiators and/or decision-makers in the preparation phase. Further, seventeen expert participants of mixed gender were used to identify, assess and mitigate the operational risks. This quantity was used to minimize inconsistency and bias and was adequate to complete all the activities in the processes. We learned that using a mixed-gender group of expert participants can minimize internal politics and groupthink. Female experts seemed more cautious and less prone to overconfidence than male experts when estimating the frequency of occurrence and impact of operational risks. The composition of the expert participants was suitable to reach the goals. We learned that it is necessary to deploy appropriate process knowledge, next to substantive knowledge, to estimate the frequency of occurrence and impact of operational risks. Further, we learned that the collaboration between expert participants needs to be limited to safeguard the accuracy of their estimations of the frequency
of occurrence and impact of the operational risks. We found that this can be established by keeping the number of activities in which discussions are needed to a minimum. We learned that the GSS was primarily used to support the experts in the identification of operational risks and control measures. The GSS supported parallel communication and allowed the experts to make anonymous contributions. We learned that experts and initiators have different preferences with respect to anonymity. The expert participants received the anonymity functionality better in the risk identification process than in the risk assessment process. This was because operational risks could be identified anonymously, while anonymity diminished during the aggregation of operational risks, where discussion was needed. Experts valued anonymity more than the initiators because it reduced the fear of reprisals. Further, we learned that involvement increased due to the anonymity function in the GSS. Moreover, we learned that participation in the sessions was likely to be increased by the fact that experts had the possibility to meet and exchange opinions with other experts face to face rather than only through the GSS. With respect to facilitation, we learned that a facilitator with sufficient understanding of the meeting subject needs to facilitate the processes. This understanding can be achieved by extensive preparation using the approach MEEA, which is described in chapter five. Further, we learned that it is important for the facilitator to have a positive influence on the expert participants and the process. We also found out that the stakeholders want to achieve their goals effectively and efficiently while maintaining a high quality and applicability of the results. Moreover, the stakeholders want to be satisfied with the results. We learned that MEEA will be used in practice by financial institutions if improvements are made to the effectiveness, efficiency and satisfaction of operational risk management. We argue that this is because, if our approach compared to the contemporary situation is, for example, (a) only equally or slightly more effective, (b) more effective but less efficient, or (c) more effective and more efficient but leading to less satisfaction, financial institutions might not see the added value of using MEEA. Note that other combinations of a, b and c are also possible. Moreover, we argue that if financial institutions dislike MEEA, they are less likely to use it, even if it might help them to improve the effectiveness and efficiency of operational risk management. We can conclude that financial institutions are likely to use MEEA if improvements are made to the effectiveness, efficiency and satisfaction of operational risk management.
6.4.3. Way of modeling
We learned that using simple visual models helps to provide a frame of reference for the experts. These models should capture reality as closely as possible. In particular, modeling the problem situation helps to focus the experts on the problem at hand. We also learned that models regarding the problem situation should be constructed during the understanding phase, rather than during the design phase. In this way, we expect that a better understanding of the problem situation can be achieved. Further, several process models were presented for the identification, assessment and mitigation phases. These process models are based on the risk management activity, the pattern of collaboration and the facilitation recipe used. We learned that the activities principally determine the choice of an appropriate pattern and corresponding facilitation recipe. We also learned that these models help the facilitator in communicating the session structure to the decision-makers and in structuring the interaction between experts. The models also help the facilitator in executing the activities and creating insight into the dynamics of the interdependent activities.
6.4.4. Way of controlling
We learned that it is important to closely follow a project management approach when different stakeholders are involved. The different roles in a research project should be clear before starting a case study. We learned that it is important to build in enough time for checkpoints and decisions that have to be made by the different stakeholders. Moreover, we learned that it is imperative to carefully document the (intermediate) deliverables and to discuss and present them accordingly. The roles and deliverables should be made explicit before documenting the results. Further, we observed that using a 'middle out' and incremental point of view helps to gather support for a change process: improving operational risk management. By focusing on a small but essential part of the financial institution, Ace Insure, support was gathered to change the current way of working incrementally. We learned that it is imperative to have a 'champion' in the financial institution who can facilitate this change process.
... when you can measure what you are speaking about, and express it in numbers, you know something about it…
Lord Kelvin
7. Empirical Testing of MEEA at Inter Insure
The concepts of MEEA, which are presented in chapter five, are tested using two case studies. We selected these case studies based on the criteria discussed in chapter two. The main theme of the case studies is to test our approach to improve Operational Risk Management, abbreviated to ORM. In the first case study, described in chapter six, the concepts of MEEA were tested: MEEA was used to design and evaluate an Operational Risk Management (ORM) process for Ace Insure. The second case study, described in this chapter, is aimed at evaluating MEEA using the constructs presented in chapter five. This case study was carried out at Inter Insure, a business unit of Bank Insure. The case was performed between January 2003 and August 2003 and has two main goals: first, to apply MEEA to design an ORM process for Inter Insure; second, to evaluate MEEA. This second case study is described in a linear fashion for readability purposes, using the way of working described in chapter five. However, in reality certain activities were performed in parallel and iteratively.
7.1. Understanding Inter Insure
Step one: conceptualize the problem situation
To conceptualize the problem situation at Inter Insure, we interviewed two initiators, one manager and one assistant face-to-face in a series of three meetings. Furthermore, we studied several documents provided by Inter Insure. The contemporary situation regarding the ORM work process and supporting techniques and tools was equal to the situation described in chapter four. We tested this by formally asking the initiators and manager to react to, or make suggestions about, the contemporary situation as described in chapter four. No changes were made to the situation as described in chapter four. Moreover, Inter Insure is a business unit of Bank
Insure and formally complies with the ORM standards imposed by them. For this reason, we only address the issues that were different in this second test case. We respectively address the description of the organization, the current problem issues, the ORM standards used, the potential stakeholders, the potential interruptions and the goal description.
The description of the organization
Inter Insure is a large Dutch financial services provider offering integrated financial products and services using insurance advisers, the Internet and the banking structure of which it is part. Inter Insure holds office in two major cities in the Netherlands and is one of the market leaders. Its main activity is to provide financial solutions to its corporate and private customers. The firm's financial solutions include, among others, accident insurance, life and employee benefits, personal financial planning and personal insurances. The management process of Inter Insure's primary process is selected for our second test case.
The current problem issues
Inter Insure operates in a highly competitive and dynamic market and has to cope with a number of problems that have arisen recently. Despite its strong reputation and market brand, several other financial service providers have entered the Dutch market and gained market share. Moreover, these competitors offer lower prices and respond faster to customers than Inter Insure. It seemed that Inter Insure suffered losses because the management process of their primary operational processes was not performing well. The primary operational processes consist of making offers to clients, accepting new insurances, making changes in the financial administration, selling insurances and damage treatment. The management process consists of several sub-processes, in which the operational risks and control measures are identified from the primary process. These sub-processes lead to a report made by a specific business function. The organization Inter Insure consists of the business functions quality improvement, actuarial, demand & change, compliance, business process management, operational risk management, corporate audit services, and a fraud coordination team. Each business function performs its own specific task in the organization. These specific tasks are not described for reasons of confidentiality. The department quality improvement signaled that each business function views the primary process from its own point of view and then reports to management, see Figure 7-17. According to the interviewed initiators, manager and assistant, this hampers a shared view on the operational risks and control measures in the primary operational processes, which in turn
makes it extremely difficult for management to prioritize and plan actions. Moreover, according to them, the lack of a shared view complicates the allocation of resources and budgets to the business functions. An example of this problem was given by one of the initiators: "Employees from different departments report misuse of passwords to management. The departments who report in this example are compliance, ORM, and corporate audit services (CAS). The compliance officer reports that rules and regulations need to be followed properly, while ORM argues that several rules are impossible to implement in practice. As such, the report lacks a shared view of the operational risks and controls to be taken. Therefore, management is not able to take an effective decision".
[Figure 7-17 depicts the business functions (Quality Improvement, Fraud Coordination Team, Corporate Audit Services, Actuarial, Operational Risk Management, Demand & Change, Compliance and Business Process Management) each reporting separately to Management.]
Figure 7-17: each business function separately reports to management (Grinsven, 2007)
The department quality improvement suggested to Inter Insure's management that all business functions should provide a shared view on the operational risks and control measures of the primary operational processes, see Figure 7-18. This shared view would then help Inter Insure's management to prioritize, plan actions, allocate resources and assign budgets more easily. Inter Insure's management approved this suggestion and decided to initiate an ORM project. In their opinion, this project should investigate the operational risks that hamper this shared view. However, at that time Inter Insure's management was not precisely clear as to how best to achieve this.
[Figure 7-18 contrasts the current view of each business function (Quality Improvement, Fraud Coordination Team, Demand & Change, Actuarial, Compliance, Operational Risk Management, Corporate Audit Services and Business Process Management) with the desired shared view.]
Figure 7-18: desired shared view from different business functions (Grinsven, 2007)
ORM standards
Management of Inter Insure advised complying with the Australian New Zealand Standard AS/NZS 4360:1999 (ANZ) when performing an ORM project, see chapters five and six. The first component of this standard suggests a number of generic guidelines for establishing and implementing ORM processes. The second component suggests using a 5x5 matrix to classify the frequency of occurrence and impact on a five-point scale. The management of Inter Insure, together with GORM, further suggested complying with the standards provided by the Basel Committee on Banking Supervision. This committee formulated a number of supervisory standards and guidelines and is part of the BIS, see chapters four and five. For these standards and guidelines we refer to BCBS (2003b).
Possible stakeholders
The initiators of Inter Insure identified the following possible groups of stakeholders: (1) Bank Insure's management in the Netherlands, (2) Group-ORM (GORM), (3) Inter Insure's management, (4) Inter Insure's initiators and experts; the latter are also referred to as participants or expert participants. The first and second groups of stakeholders have a specific function in relation to ORM projects and are explained in more detail in chapter six.
Potential interruptions
Several potential interruptions were identified using several face-to-face interviews and documents provided by Inter Insure. The first possible interruption was that the focus of the ORM project should be on the processes regarding the management layer instead of the internal processes that lead to that layer. The second interruption was that all experts needed to collaborate in order to create a shared view on the operational risks in the management process. According to the assistant of Inter Insure's initiators this could be an important aspect because eight different disciplines are involved. The initiators furthermore argued that the presence of corporate audit services (CAS) during the ORM project could negatively influence the behavior of other experts.
Goal description
Inter Insure's initiators formulated three goals. The facilitator refined these goals with the manager in a one-hour face-to-face session. The resulting goals are formulated as follows. One, design a process that helps the business functions to identify the operational risks and (mitigating) control measures in the primary operational processes. Two, help the business functions to create a shared view about the identified operational risks and control measures. Finally, help the business functions to work more effectively and efficiently in an ORM project by identifying potential collaboration possibilities. These goals were communicated to the expert participants using a face-to-face briefing and email well before the ORM sessions started.
Step two: specify the problem situation
By investigating internal memos, internal reports, two PowerPoint presentations and the annual report, we carried out the specification of the problem situation. The initiators and assistant provided this information to the researcher. The internal memos addressed the business functions quality improvement, actuarial, demand & change, compliance, business process management, operational risk management, corporate audit services and the fraud coordination team. The internal reports addressed the first quarter of 2003 and internal compliance. The presentations addressed the possible collaboration of the business functions in relation to Inter Insure's policy and strategy. The annual report addressed the organization Inter Insure. All these documents were classified. Additionally, we interviewed one manager, two initiators and one assistant. To specify the problem situation we respectively address the motive, the perceived problems, the overall goals, the performance indicators, the model reductions and the collection of input data.
The motive
At the time of this second test case, Inter Insure's management was under heavy pressure. In a relatively short period Inter Insure's market share had decreased dramatically. Moreover, the large financial institution required Inter Insure to increase their return on investment. The motives
to start this ORM project were identified as enabling management to make informed decisions and decreasing costs. The ORM project should help management to prioritize, plan actions, allocate resources and assign budgets. These motives can be compared with the motives mentioned in chapter four: a signal from the business unit (quality improvement) and a request by management (Inter Insure).
The perceived problems
In a third interview, one manager, two initiators and one assistant identified the perceived problems with the contemporary ORM process at Inter Insure, see Table 7-40. After identification, they categorized the problems into four pre-defined categories: (1) preparation of the ORM sessions, (2) the process that is followed with the experts in the ORM sessions, i.e. risk identification, assessment and mitigation, (3) the outcome of the ORM sessions and (4) support such as methods and tools. The researcher prepared and guided this interview.
Table 7-40: perceived problems
Category | Description of the perceived problem
Preparation | Determining the stakeholders and an accurate objective of the ORM project is perceived to be difficult and requires the capacity and competence of employees
Preparation | Setting up distinct criteria to work more effectively and efficiently is difficult because a shared view on these criteria is lacking
Process | The process to identify and assess operational risks is often carried out differently by each business function; moreover, it is often perceived as "not uniform" by the expert participants
Process | Interviewers from Inter Insure often miss the big picture because in the interviews they get caught up in too much detail
Outcome | Business functions do not share the outcome of their findings with other functions
Outcome | Managers consider the report made by Corporate Audit Services (CAS) as "leading"
Outcome | Other business functions do not support the recommendations made by CAS
Outcome | Management is not able to take effective decisions based on the individual reports
Outcome | The results from the individual reports are difficult to aggregate
Outcome | The overall quality of the results is low
Support | Enabling a shared view between managers and expert participants is difficult
Support | Aggregating the intermediate interview results and holding expert feedback rounds is a burden
Support | Reporting throughput time is too long for decisions to be made
The overall goals Inter Insure’s management and two experts formulated the overall goals. There are three overall goals: (1) create a shared view about the identified operational risks and control measures of the primary operational processes (2) reach consensus and focus at the Inter Insure management level and (3) comply with standards and guidelines issued by internal and external regulations.
Performance indicators
Performance indicators serve in this test case as a substitute to deduce the performance of the contemporary ORM process and to compare that performance with the designed and improved process. We proposed to Inter Insure to use the constructs described in chapter five as performance indicators in this test case.
Model reductions
One model reduction was made to reduce the complexity at Inter Insure and to arrive at a model that corresponded most closely to the essential characteristics of the contemporary situation at Inter Insure. We chose not to build and present a process model of the reporting phase because Inter Insure had its own specific way of reporting. Moreover, our primary research interest was in utilizing multiple experts' judgment in the risk identification, assessment and mitigation phases.
Collecting input data
Collecting input data was done using expert estimations, see chapter five. We used ex-ante and ex-post interviews with the manager, initiators and assistant to collect data about the efficiency of the contemporary ORM situation with respect to man-hours and throughput time. We used this data to compare the efficiency of the contemporary situation with the improved situation, see the problem solving cycle in chapter five. Further, our interviews indicated that the contemporary situation at Inter Insure was similar to the situation described in our first case study in chapter four. Therefore, we refer to chapter four for input data about the primary and secondary ORM activities.
Step three: validate the problem situation
We validated the problem situation by asking two initiators, one manager and one assistant whether the identified problems reflected their reality. Due to scheduling problems it was not possible to use experts other than these to validate the problem situation. The problem situation as described was confirmed by all persons involved. Moreover, a cost offer was prepared for Inter Insure by the researcher. Included in this cost offer were, among other items, the goals, approach, deliverables, planning, research activities and budget. Inter Insure approved the offer, thereby validating part of the problem situation.
7.2. Designing ORM processes for Inter Insure
Preparation phase
This phase is used to determine the context and objectives, select experts, assess experts, define roles, choose the method and tools, try out the exercise and train the experts.
Step one: determine the context and objectives
The context and objectives were already roughly discussed in the understanding phase. In this step we highlight those aspects that are predominantly important to assure that the experts who are involved in the ORM sessions identify, assess and mitigate the operational risks within the context and objectives set by the manager and initiators. The initiators, together with the facilitator, determined the main context and objective for the ORM sessions. The context and objectives help to focus the experts on the subject of the ORM sessions. The main objective was stated as: "all business functions should provide Inter Insure management with a shared view on the operational risks and control measures of the primary operational processes, see Figure 7-18. This shared view should help Inter Insure's management to prioritize, plan actions, allocate resources and assign budgets more easily". This objective was explained and discussed with the experts before the ORM sessions started. The objective was made more practical for the expert participants by presenting them several figures with textual examples, see Figure 7-17, Figure 7-18 and Figure 7-19 below. The examples are not depicted in the figures for reasons of confidentiality.
[Figure 7-19 depicts the six themes used to create a collaborative view on operational risks: policy & strategy, problem & risk identification, measuring & evaluation, control measures, information & communication, and monitoring & improving.]
Figure 7-19: creating a collaborative view on operational risks (Grinsven, 2007)
Step two: identify the experts
The initiators broadly identified the experts that were needed for the ORM sessions. This was done based on the experts' knowledge, role in the organization and availability. Gender was not used as a criterion. Eighteen experts, consisting of five females and thirteen males, were identified and invited for the training session and two ORM sessions. An attendance list was made and kept for all sessions. For the training session, see step eight. The number of experts was identified following the criteria suggested in step two of the preparation phase, see chapter five.
Step three: select the experts
Although seventeen experts were identified, only eight experts confirmed that they could participate in the scheduled training session. Four females and five males could not participate in the scheduled training session due to reasons such as a holiday, another business meeting or a day off. Criteria such as substantive knowledge and process knowledge were used to select the experts. For reasons of progress, the initiators decided to start the training session on the planned date and to brief the experts that could not participate via email. Further, it was decided to keep to the scheduled dates for the first and second ORM session. The final selection of experts consisted of four females and thirteen males. They represented the business functions of Inter Insure, see Figure 7-17 and Figure 7-18. One of them was a manager, two were the initiators and one was the assistant. All of them were able to attend both ORM sessions.
Step four: assess the experts
The assessment of experts aims to weigh the performance of each selected expert in order to aggregate his or her assessment of operational risk more accurately into one combined assessment. For various reasons, the performance of the experts was not assessed. First, the manager and initiators argued that there was not enough time to assess the experts. Second, the costs would be too high. Third, since no information was available, it seemed to be difficult for the manager, initiators and assistant to prepare the necessary questions for the assessment of the experts.
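For illustration, the idea behind this step is that performance weights, had they been determined, could be used to pool the individual expert estimates into one combined assessment (a linear opinion pool). The sketch below uses hypothetical scores and weights; at Inter Insure the experts were not weighted, which reduces to the equal-weighted case:

```python
# Minimal sketch of performance-weighted pooling of expert estimates.
# All numbers are hypothetical; unweighted pooling is the fallback.
def pooled_estimate(estimates, weights=None):
    """Weighted average of expert estimates; equal weights when none are given."""
    if weights is None:
        weights = [1.0] * len(estimates)
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

impact_scores = [3, 4, 2, 4, 3]          # five experts, five-point impact scale
performance = [0.9, 0.6, 1.0, 0.7, 0.8]  # hypothetical calibration weights

print(pooled_estimate(impact_scores))              # equal-weighted: 3.2
print(pooled_estimate(impact_scores, performance)) # performance-weighted: ~3.08
```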
Step five: define the roles
We defined the following roles to complete the activities in the ORM process: manager, initiator, expert, devil's advocate, facilitator and chauffeur. All roles were made clear to the group before the sessions started. One person from Inter Insure fulfilled the role of manager. The
manager was actively involved in the preparation phase, the work assignment and the ORM sessions. Two persons fulfilled the role of initiator and actively participated in all performed phases of the ORM process. One of them fulfilled the role of devil's advocate. Seventeen persons fulfilled the role of expert and represented the Inter Insure business functions. One researcher from TPM fulfilled the role of facilitator and was involved in all phases of the ORM process. Moreover, he was responsible for the functions mentioned in chapter five plus the outcome of the sessions. The chauffeur supported the facilitator by operating the Group Support System (GSS) used for the ORM sessions.
Step six: choose the method and tools
The facilitator suggested a variety of methods and tools to the initiators. Based on these suggestions the initiators had to make a choice for the method and tools. Microsoft Word, Excel, PowerPoint and email were chosen for the preparation phase and the reporting phase. Word, in combination with Excel, was chosen for reporting purposes because the layout format of the reporting tool in GroupSystems did not match Inter Insure's risk management reporting standards. Excel was chosen to aggregate and present the results derived from the experts' assessments because from our first case we learned that it was faster and simpler to use than the multi-criteria tools in GroupSystems. PowerPoint was used to present information such as examples and figures to the experts in the training session and before the start of the ORM sessions. Excel and PowerPoint were also used during the sessions to visualize the intermediate aggregated results from the risk assessment phase. Email was used in the preparation phase to present information to the experts who could not attend the training session. The facilitator suggested using MEEA to design a sequential process of activities for the phases risk identification, assessment and mitigation. The facilitator furthermore suggested using thinkLets as standard facilitation recipes to support the facilitation of the experts. The recipes were mapped to each risk management activity and used in combination with the software tool GroupSystems workgroup edition / professional suite from GroupSystems.com. See the subsequent sections for the resulting process models and a description.
Step seven: try out the exercise
The facilitator suggested practicing the activities of the ORM exercise using a 'step-by-step walk-through' with one initiator and one assistant. During this practice run, Inter Insure's assistant found that using high-level examples in the risk assessment phase could lead to problems because not all the experts had exactly equal experience with the management process. The initiator found that
the scales used for estimating the frequency and impact were not linear and clear enough. The following changes were made: (1) the assessment examples were made more practical and (2) the scales were made linear and explicit in terms of 'time' and 'money'.
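To illustrate what such explicit scales amount to, the sketch below encodes a 5x5 frequency-impact classification in the style suggested by the AS/NZS 4360 standard mentioned earlier. The band boundaries are hypothetical examples for illustration only, not the scales agreed at Inter Insure:

```python
# Illustrative 5x5 frequency-impact classification: each expert estimate is
# mapped to a score from 1 to 5; the product gives the cell in the matrix.
# The band boundaries below are hypothetical examples.
FREQUENCY_BANDS = [            # (upper bound in occurrences per year, score)
    (0.1, 1), (1, 2), (10, 3), (100, 4), (float("inf"), 5),
]
IMPACT_BANDS = [               # (upper bound of loss in euro, score)
    (10_000, 1), (100_000, 2), (1_000_000, 3), (10_000_000, 4), (float("inf"), 5),
]

def score(value, bands):
    for upper, points in bands:
        if value <= upper:
            return points

def risk_rating(freq_per_year, loss_eur):
    """Cell of the 5x5 matrix: frequency score times impact score."""
    return score(freq_per_year, FREQUENCY_BANDS) * score(loss_eur, IMPACT_BANDS)

print(risk_rating(5, 250_000))   # frequency score 3 x impact score 3 = 9
```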
Step eight: train the experts
A group-facilitated training session was prepared, scheduled and carried out one week before the sessions were performed. In this training session, the context and objectives, the importance of operational risk management to Inter Insure, the process to be followed with the experts, and the tools were explained and discussed. The experts who could not participate in the training session were briefed using email, see step three above. The roles and responsibilities were also made clear to the experts. Further, the initiator explained how Inter Insure would use the results of the ORM sessions.
Risk identification phase
We used a sequence of three face-to-face meetings with two initiators and one assistant to design a sequential process of interrelated risk management activities for the risk identification phase. We used MEEA and the process symbols presented in Grinsven and Vreede (2002b) to design this process. Using the description and order of risk management activities, we chose the appropriate pattern of collaboration and facilitation recipe that was most suitable for the activity and our context. We refer to Briggs and Vreede (2001b) for an elaborate discussion of these facilitation recipes. We numbered each activity for quick reference. Figure 7-20 depicts the resulting process model for risk identification and Table 7-41 summarizes the mapping of the pattern and choice of a recipe to each activity.
[Figure 7-20 depicts the risk identification process model: (1) identify events (Diverge, DirectedBrainstorm), with a loop back when it is necessary to add operational risks; (2) formulate the operational risk (Converge, FastFocus); (3) categorize the operational risk (Organize, PopcornSort); and (4) perform a gap analysis.]
Figure 7-20: process model for risk identification (Grinsven, 2007)
Step nine: identify the operational risks
Following MEEA, the first activity was used to identify events. Note that for all facilitation recipes we refer to Briggs and Vreede (2001b). During the execution of this activity, the facilitator used six themes (policy & strategy, problem & risk identification, measuring & evaluation, control measures, information & communication, and monitoring & improving) to trigger the experts, see Figure 7-19. For this activity, a DirectedBrainstorm facilitation recipe was used. When the activity had finished, it turned out that the experts had identified 217 events. The second activity was used to explore the causes of each event and frame them into a clear formulation of an operational risk. For this activity, a FastFocus facilitation recipe was used. When the second activity had finished, it turned out that the experts had formulated 18 operational risks.
Step ten: categorize the operational risks
The third activity was used to categorize the operational risks into predefined impact areas. For this activity, a PopcornSort facilitation recipe was used. Before the activity started, the facilitator presented the intermediate results to the experts. During the execution of the activity, each expert had an overview and a clear description of the impact areas: corporate management, business unit management team, business unit operational management, and business unit staff. All operational risks were categorized into the impact area corporate management.
Step eleven: perform a gap analysis
The fourth activity was used to perform a gap analysis. The manager and initiators, and more specifically the devil's advocate, were asked whether the operational risk list was complete. Using relevant facts from the desk research activity, the devil's advocate and manager added 8 operational risks to the list. The facilitator checked with the experts whether they understood these risks before adding them to the list. The experts categorized the risks. In total, 26 operational risks were identified.
Table 7-41: summary of activities in the risk identification process
Activity | Pattern of collaboration | Facilitation recipe
1. Identify the events | Diverge | DirectedBrainstorm
2. Formulate the operational risk | Converge | FastFocus
3. Categorize the operational risks | Organize | PopcornSort
4. Perform a gap analysis | - | -
Risk assessment phase
We used the same three face-to-face meetings as described in the risk identification phase to design a sequential process of risk management activities for the risk assessment phase. MEEA was used to design this process. Based on the description and order of risk management activities, we chose the appropriate pattern of collaboration for each activity. Then, we mapped the facilitation recipe that was most suitable for the activity and our context. We numbered each activity for quick reference. Figure 7-21 depicts the resulting process model for risk assessment and Table 7-42 summarizes the mapping of the pattern and choice of a recipe to each activity.
[Figure 7-21 depicts the risk assessment process model: (1) assess the operational risks (Evaluate, StrawPoll); if discussion is needed, (2) discuss the operational risk (Build consensus, Crowbar); (3) identify control measures (Diverge, LeafHopper); (4) provide examples for control measures (Diverge, LeafHopper); (5) prioritise the operational risks (Evaluate, MultiCriteria); if discussion is needed, (6) discuss the operational risk with control measures (Build consensus, Crowbar); and (7) prioritize the operational risks.]
Figure 7-21: process model for risk assessment (Grinsven, 2007)
Step twelve: assess the operational risks
The first activity was used to prioritize the operational risk list. This was done because the manager argued that it was not necessary to identify control measures for all risks. For prioritization, a five-point scale, with one being 'strongly disagree' and five 'strongly agree', was used. For this activity, a StrawPoll facilitation recipe was used. The facilitator framed several criteria during prioritization to focus the experts. The experts carefully prioritized each risk using the criteria presented by the facilitator. When the activity had finished, it turned out that the experts had prioritized 26 risks. The results from this activity indicated that control measures had to be identified for 10 operational risks. The facilitator presented the intermediate results to the experts. The second activity was used to discuss the operational risks that had a high
standard deviation. For this activity, a Crowbar facilitation recipe was used. The level of consensus between the experts was calculated using the standard deviation, for which the facilitator set a threshold value of 0.9. A higher standard deviation indicated that the estimations were widely dispersed and needed to be discussed further. As a result, three operational risks were discussed. Further, it was agreed with the experts that the other 16 risks would not be discussed further in the session. The third activity was used to identify existing control measures. This was done in two phases: the first phase covered the top five operational risks and the second phase covered the operational risks ranked five to ten. For this activity, a LeafHopper facilitation recipe was used. The fourth activity was used to provide realistic examples for the existing control measures; for this activity, a LeafHopper facilitation recipe was used as well. When the activity had finished, it turned out that the experts had identified 192 control measures and provided a number of realistic examples for each of them. The fifth and
sixth activities were performed with the manager and two initiators in a separate, third ORM session at Inter Insure's head office. In the fifth activity, the managed level of exposure to operational risk was assessed using a MultiCriteria facilitation recipe. The sixth activity was used to discuss the operational risks that needed special attention. For this activity, a Crowbar facilitation recipe was used. The facilitator guided the discussion with the manager and initiators. Five operational risks were discussed. Based on the discussion, the seventh activity was used by the manager to prioritize the risk list again. Three operational risks were chosen as priority one, two and three.
Step thirteen: aggregate the results
The first, second, fifth and sixth activities were used to aggregate the intermediate results. Following MEEA, aggregation was completed using a combination of a mathematical method and a behavioral method. For the first activity, the mathematical method, we used the equal-weighted average rule to combine the individual results from the experts. Then, the calculated results were presented to the experts using Excel in combination with a projector. In the second activity, the behavioral method used the standard deviation to determine whether there were operational risks that needed to be discussed. The experts provided the rationales behind their individual assessments and discussed three operational risks in detail. For the fifth activity, a MultiCriteria facilitation recipe was used. We used the equal-weighted average rule to combine the individual results from the experts. The results were discussed verbally with the manager and two initiators using a Crowbar facilitation recipe.
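To make this aggregation step more concrete, the following sketch (in Python) illustrates how the equal-weighted average rule and the standard-deviation-based consensus check could be combined. The 0.9 threshold is taken from the session described above; the function, the risk names and the scores are illustrative assumptions, not the tool actually used in the sessions.

import statistics

# Illustrative sketch: each risk has one score per expert. Risks are aggregated
# with the equal-weighted average rule and flagged for discussion (Crowbar) when
# the standard deviation of the expert scores exceeds the facilitator's threshold.
CONSENSUS_THRESHOLD = 0.9  # threshold value set by the facilitator in the session

def aggregate_assessments(assessments):
    """assessments: dict mapping a risk description to a list of expert scores."""
    results = {}
    for risk, scores in assessments.items():
        mean = sum(scores) / len(scores)       # equal-weighted average rule
        spread = statistics.stdev(scores)      # level of consensus between experts
        results[risk] = {
            "mean": round(mean, 2),
            "stdev": round(spread, 2),
            "discuss": spread > CONSENSUS_THRESHOLD,
        }
    return results

# Hypothetical example: three experts scoring two risks on a five-point scale
example = {
    "unauthorized transactions": [2, 5, 1],
    "system outage during batch processing": [4, 4, 5],
}
for risk, summary in aggregate_assessments(example).items():
    print(risk, summary)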
Table 7-42: summary of activities in the risk assessment process
Activity | Pattern of collaboration | Facilitation recipe
1. Prioritize the operational risks | Evaluate | StrawPoll
2. Discuss the operational risks | BuildConsensus | Crowbar
3. Identify control measures | Diverge | LeafHopper
4. Provide examples for control measures | Diverge | LeafHopper
5. Assess the operational risks | Evaluate | MultiCriteria
6. Discuss the risk with control measures | BuildConsensus | Crowbar
7. Prioritize the operational risks | - | -
Risk mitigation phase
Following MEEA, in a separate third ORM session, a sequential process of risk management activities for the risk mitigation phase was designed. Based on the description and order of the risk management activities we chose the appropriate pattern of collaboration for each activity. Then, we mapped the facilitation recipe that was most suitable for the activity and our context. We numbered each activity for quick reference. Figure 7-22 depicts the resulting process model for risk mitigation and Table 7-43 summarizes the mapping of the pattern and the choice of a recipe for each activity.
Figure 7-22: process model for risk mitigation (Grinsven, 2007)
Step fourteen: identify alternative control measures
The first activity was used to determine the expert who would re-assess the operational risks. Based on consensus, the manager from Inter Insure was chosen to perform this activity. The
second activity was used to identify alternative control measures for the top three operational risks. The manager and initiators were chosen to identify more effective and efficient control measures because together they had the substantive knowledge that was needed. For this activity, a ComparativeBrainstorm facilitation recipe was used. When the activity had finished, it turned out that the experts had identified 8 alternative control measures.
Step fifteen: re-assess the residual operational risks
In the third activity, the manager re-assessed the three operational risks in combination with the existing and alternative control measures. For this activity, a MultiCriteria facilitation recipe was used. Although only one expert performed the assessment, a Crowbar was used to provide the rationales behind the individual assessment. The rationales were used in the report written by the facilitator.
Step sixteen: aggregate the results
Aggregation was completed in the third and fourth activities by using the same mathematical and behavioral methods as described in the risk assessment process. First, we used the equal-weighted average rule to combine the individual results from the experts. Then, the calculated results were presented to them. In the behavioral method, the standard deviation was used to focus the discussion. Rationales behind the individual assessments were provided by the experts. The aggregation was completed in the fourth activity, using the Crowbar facilitation recipe, in which three operational risks were discussed.
Table 7-43: summary of activities in the risk mitigation process
Activity | Pattern of collaboration | Facilitation recipe
1. Determine the expert for re-assessment | - | -
2. Identify alternative control measures | Diverge | ComparativeBrainstorm
3. Re-assess the operational risks | Evaluate | MultiCriteria
4. Discuss the risk with new control measures | BuildConsensus | Crowbar
Reporting phase
Following MEEA, the results of all ORM sessions were documented and feedback was provided to the experts.
Step seventeen: document the results
The initiators asked the facilitator to write a report using Inter Insure's reporting standards. The facilitator prepared a first draft and wrote two separate reports. In the first, formal report, the relevant background, method, process, steps, choices, recommendations and all results of the ORM sessions were documented. Two iteration loops were made with the initiators before the report was approved. This document was used to communicate the results to higher management. The second report was written to provide feedback to the experts; see step eighteen.
Step eighteen: provide feedback to the experts
The facilitator wrote a report to provide feedback to the experts. This report was based on the agendas, the process and the output of the ORM sessions. One iteration loop was made before the report was approved. Since a group-wise discussion with the experts was not possible, this document was communicated to them by email.
7.3. Results and discussion
Below, we discuss the results of the improvements that were made by applying MEEA to operational risk management at Inter Insure. The results are described following the structure presented in chapter 6, paragraph 6.3, results and discussion.
Team
We measured and discuss the elements of the construct team: demographical data, composition and collaboration.
Demographical data
Table 7-44 presents the demographical data with respect to team size, gender, average age, work experience and experience with similar tools and techniques. Four females and thirteen males participated in both sessions. The results in the table indicate that the experience with similar tools and techniques used for ORM was very low. This finding is in line with the contemporary situation as described in chapter four. However, it is in contrast with the average number of years of working experience of the experts. One of the interviews indicated that Inter Insure used different (external) facilitators, each applying their own techniques and tools.
Table 7-44: demographical data
ORM session | 1 | 2 | μtot
Team size | 17 | 17 | 17
Male | 13 | 13 | 13
Female | 4 | 4 | 4
Average age | 39.82 | 39.82 | 39.82
Average number of years working experience | 18.93 | 19.82 | 19.38
Average number of years experience with similar tools / techniques | 1.60 | 1.70 | 1.65
Most GSS literature indicates that the optimal group size in a face-to-face GSS-supported group is usually between ten and twenty experts (Dennis, Hamminger et al., 1990; Dennis, Nunamaker et al., 1991; Dennis and Gallupe, 1993). In line with this literature and MEEA, we used seventeen experts. However, this number is in contrast with the literature on expert judgment, which argues that, in principle, the largest possible number of experts should be used to increase the sample size and to minimize inconsistency and bias (Clemen and Winkler, 1999; Goossens and Cooke, 2001). Unfortunately, this literature does not specify what the largest possible number of experts is. Further, four females participated in the sessions. Our interview results indicated that using a mixed-gender group in our ORM sessions helped to prevent the experts in the risk assessment phase from groupthink and biases. Similar to the results in chapter six, our interview results indicate that the women, in contrast to the men, were more cautious about being overconfident while assessing the operational risks. This is in line with Heath and Gonzales (1995) and Karakowsky and Elangovan (2001), who state that, where possible, a mixed-gender group should be used to avoid internal politics. Our findings are also in line with Janis (1972), who argues that groupthink can be minimized by using a mixed-gender group. Generally, from the above we conclude that a group of ten to twenty experts, preferably of mixed gender, should be used when performing an ORM session supported with GSS.
Composition
The results in Table 7-45 indicate that the suitability of the group's composition to reach the goals significantly improved compared to the contemporary situation. Moreover, the average value of the construct composition (μ = 5.37 > 4) indicates that there is a positive significant improvement.
Table 7-45: composition
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
The composition of this group was suitable to reach the goals | 5.68 (1.17) | 5.06 (1.27) | 5.37 (1.25) | Normal | Yes
pKS = 0.077; pT = 0.000; μ = 5.37
Observations and interview results taught us that the fit between participants and tasks was quite good in the risk identification phase as well as in the risk assessment phase. As one participant mentioned, 'the risks that we identified and assessed are very close to our personal experience'. Our findings are in line with McKay and Meyer (2000), who argue that it is important to deploy substantive and process expertise in any risk analysis. Generally, from the above we can conclude that the criteria, as discussed in chapter five, are useful for selecting the experts.
Collaboration
The results in Table 7-46 indicate that collaboration with other group members significantly improved compared to the contemporary situation. Moreover, the average value of the construct collaboration (μ = 5.24 > 4) indicates that there is a positive significant improvement.
Table 7-46: collaboration
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
Collaboration with other group members went well | 5.50 (0.92) | 4.97 (0.91) | 5.24 (0.94) | Not normal | Yes
pKS = 0.02; pWX = 0.000; μ = 5.24
The values indicate that collaboration with the other experts in session 2 was not as good as in session 1. We found a logical explanation for this in our observations and interviews. The experts in session 2 had fewer discussions than in session 1. One expert argued that 'not much collaboration was needed in the first session, and this was experienced as being positive'. Another expert argued that a discussion after risk assessment is not always valued. Moreover, in session 2 it was not possible to change the individual assessment results based on the discussion. Although the experts experienced providing the rationales behind their assessments as positive, they responded that they would take a riskier course of action if they could re-assess the risks. Our finding is in line with Cooke and Goossens (2000), who argue that one subjective assessment is as good as another and that there is no rational mechanism for persuading individual experts to adopt the same degrees of belief. Our findings are also in line with Clemen and Winkler (1999), who argue that after a discussion a group will typically advocate a riskier course of action than they would if acting individually or without discussion. Therefore, in our situation, re-voting was made impossible. Moreover, following Clemen and Winkler (1999), we argue that when experts modify their opinion to be closer to the group, the accuracy of the estimation decreases. Generally, from the above we can conclude that collaboration between experts after
assessing the operational risks can help to provide the rationale behind the individual assessments. We can furthermore conclude that collaboration can lead to a less accurate estimation when the experts re-assess the risks after a discussion.
Technology
We measured and discuss the element anonymity of the construct technology.
Anonymity
The results in Table 7-47 indicate that the questions are not reliable enough to measure the underlying construct anonymity. However, the individual questions indicate that the option to work anonymously significantly improved compared to the contemporary situation and that the functionality of anonymity significantly improved compared to the contemporary situation.
Table 7-47: anonymity
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
The option to work anonymously was well received | 5.06 (0.70) | 4.79 (0.91) | 5.01 (0.80) | Not normal | Yes
Anonymity is functional | 5.50 (0.75) | 5.95 (0.99) | 5.54 (0.86) | Not normal | Yes
CA = 0.3702
• Question 1: the option to work anonymously. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.000 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the option to work anonymously was better received compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.01 > 4) indicates that there is a positive significant improvement.
• Question 2: anonymity is functional. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.001 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the functionality of the anonymity significantly improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.54 > 4) indicates that there is a positive significant improvement.
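The testing pattern used for these questions, a normality check followed by a Wilcoxon test or T-test against the neutral benchmark of 4, recurs for every questionnaire item in this chapter. The sketch below illustrates one possible implementation of such a procedure with SciPy; the specific test variants, the fitted-normal KS check, the one-tailed handling and the example scores are assumptions and may differ from the analysis actually performed.

from scipy import stats

ALPHA = 0.025     # significance level used in this chapter (one-tailed)
BENCHMARK = 4.0   # neutral value of the seven-point scale

def test_improvement(scores):
    """Sketch of the per-question procedure: check normality with a KS test,
    then compare the scores against the neutral benchmark with a T-test
    (normal data) or a Wilcoxon signed-rank test (non-normal data)."""
    mean = stats.tmean(scores)
    std = stats.tstd(scores)
    # Kolmogorov-Smirnov test against a normal distribution fitted to the data
    _, ks_p = stats.kstest(scores, "norm", args=(mean, std))
    normal = ks_p > ALPHA
    if normal:
        _, p = stats.ttest_1samp(scores, BENCHMARK)
    else:
        _, p = stats.wilcoxon([s - BENCHMARK for s in scores])
    improved = (p / 2) < ALPHA and mean > BENCHMARK  # one-tailed interpretation
    return {"normal": normal, "p_two_tailed": round(p, 4), "improved": improved}

# Hypothetical scores of seventeen experts for one questionnaire item
print(test_improvement([5, 6, 4, 5, 5, 6, 4, 5, 7, 5, 4, 6, 5, 5, 6, 4, 5]))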
Our results indicate that working anonymously was well received in the risk identification and risk assessment phases. Our results further indicate that the functionality in the risk identification phase
was valued higher than in the risk assessment phase. These results further indicate that anonymity in ORM sessions is highly appreciated. Our interview results indicate that the experts appreciated anonymity because some events and operational risks that they identified could be seen as negative towards Inter Insure. The anonymity assured the experts that they did not have to fear reprisals. Our findings are in line with Janis (1972) and Grohowski, McGoff et al. (1990), who argue that the principal effect of anonymity should be a reduction of characteristics such as member status, internal politics, fear of reprisals and groupthink. Our results are furthermore in line with Rowe and Wright (2001), who argue that events should be identified anonymously. Our results are also in line with Nunamaker et al. (1988), who argue that a GSS supports anonymous contributions. Generally, from the above we can conclude that anonymity is functional in an ORM session when member status, internal politics, fear of reprisals and groupthink need to be minimized.
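The reliability of each multi-item construct in this chapter is summarized with a Cronbach's alpha (CA) value, such as the CA = 0.3702 reported for anonymity above. The sketch below shows one common way such a coefficient can be computed; the respondent scores and the use of NumPy are illustrative assumptions, not a reproduction of the original analysis.

import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array-like, rows = respondents, columns = construct items."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                                # number of items in the construct
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses of five experts to the two anonymity questions (seven-point scale)
responses = [
    [5, 6],
    [4, 5],
    [6, 6],
    [5, 7],
    [3, 5],
]
# Constructs with a low alpha (such as the 0.3702 reported above) are treated as unreliable.
print(round(cronbach_alpha(responses), 4))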
Process
In this section we discuss the elements of the construct process: structure, involvement and participation, interaction, and facilitation.
Structure
The results in Table 7-48 indicate that the structure significantly improved compared to the contemporary situation. Moreover, the average value of the construct structure (μ = 5.14 > 4) indicates that there is a positive significant improvement.
Table 7-48: structure
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
Enough time was spent on important topics | 5.31 (1.21) | 4.79 (1.05) | 5.14 (1.13) | Not normal | Yes
pKS = 0.02; pWX = 0.000; μ = 5.14
Our results and observations from the first session indicate that the process structure was clear. The interviews indicated that sufficient time was spent on important subjects. The task structure was less clear in one of the risk identification activities. For example, the experts reflected that adding extra risks to the list did not disturb them; however, observations indicate that the task structure became a little less clear because one of the initiators interfered with the facilitation process by trying to change the rules for identifying risks. Our results are in line with Arkes (2001) and MacGregor (2001), who argue that using a structured method provides better results than using unstructured methods. The findings are also in line
with some GSS researchers who argue that groups that use a structured meeting are more effective, efficient and satisfied with the process, but that an incorrect structure for the task can reduce performance, see e.g. (Bostrom, Anson et al., 1993; Dennis & Gallupe, 1993; Ocker, Hiltz et al., 1996). Generally, from the above we can conclude that the process structure needs to fit the activities.
Involvement and participation
The results in Table 7-49 indicate that involvement and participation significantly improved compared to the contemporary situation. Moreover, the average value of the construct involvement and participation (μ = 4.97 > 4) indicates that there is a positive significant improvement.
Table 7-49: involvement and participation
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
The GSS meeting increased the participants' involvement | 4.97 (1.29) | 4.88 (1.60) | 4.93 (1.43) | Not normal | Yes
The GSS meeting encouraged participation | 5.06 (1.03) | 4.97 (1.40) | 5.01 (1.21) | Not normal | Yes
CA = 0.8695; pKS = 0.158; pT = 0.000
• Question 1: the GSS increased the participants' involvement. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.004 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.001 < 0.025). Therefore, we can conclude that the GSS increased the participants' involvement compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.93 > 4) indicates that there is a positive significant improvement.
• Question 2: the GSS improves participation. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.000 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (2-tailed): 0.000 < 0.025). Therefore, we can conclude that the GSS significantly improved the level of participation compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.01 > 4) indicates that there is a positive significant improvement.
Our interviews indicated that the ability to meet other expert colleagues face-to-face increased participation in the sessions. In contrast, our interviews indicated that some experts felt less
involved and less willing to participate when anonymity was lacking. This was the case when verbal discussions were involved. As such, we support the findings of Vreede and Wijk (1997a), who argue that the ability to ventilate ideas anonymously can increase the sense of participation in the session. From the above we can conclude that using an expert face-to-face meeting supported with GSS can increase the involvement and the level of participation.
Interaction
The results in Table 7-50 indicate that the questions are not reliable enough to measure the underlying construct interaction. However, the individual questions indicate that the interaction among participants significantly improved compared to the contemporary situation and that the exchange of information and ideas improved significantly compared to the contemporary situation.
Table 7-50: interaction
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
The GSS meeting encouraged interaction among the participants | 5.06 (0.88) | 4.62 (1.51) | 4.84 (1.24) | Not normal | Yes
The exchange of information and ideas between participants increased | 4.97 (1.49) | 4.79 (1.41) | 4.88 (1.44) | Not normal | Yes
CA = 0.7208
• Question 1: the GSS improves interaction. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.006 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.001 < 0.025). Therefore, we can conclude that the GSS improves interaction among participants compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.84 > 4) indicates that there is a positive significant improvement.
• Question 2: the exchange of information and ideas. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.024 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.003 < 0.025). Therefore, we can conclude that the GSS improved the information and idea exchange significantly. Moreover, the average value of this specific question (μ = 4.88 > 4) indicates that there is a positive significant improvement.
A GSS was used to electronically support the experts during the exchange of information and ideas. For example, in the risk identification phase, the experts identified 217 events, indicating that the experts exchanged a great deal of information electronically. This is in line with Rowe and Wright (2001), who argue that computers should be used during information processing. Further, a devil's advocate was used to 'trigger' the experts while entering information in the GSS. This is in line with Heath and Gonzales (1995), who argue for using a devil's advocate when interaction is needed. From the meeting logs it became clear that the experts exchanged a large amount of information electronically in session 1. The results in Table 7-50 indicate that this exchange slightly decreased in session 2. This was likely due to the nature of the risk management activity: in session 1 a large number of operational risks were identified, while in session 2 the risks were assessed and control measures identified. For the assessment, less information exchange was needed between the experts. Moreover, interviews with the experts indicated that some experts preferred to interact electronically rather than verbally. Generally, from the above we can conclude that a GSS is valuable for supporting the identification and sharing of operational risks and control measures and for processing the results.
Facilitation
The results in Table 7-51 indicate that the questions are not reliable enough to measure the underlying construct facilitation. However, the individual questions indicate that the influence of the facilitator on the process significantly improved compared to the contemporary situation and that the facilitator's understanding of the meeting subject, needed to support the ORM process, significantly improved compared to the contemporary situation.
Table 7-51: facilitation
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
The facilitator had a positive influence on this process | 5.59 (0.83) | 5.06 (0.70) | 5.32 (0.81) | Not normal | Yes
The facilitator had sufficient understanding of the meeting subject to support the process | 5.50 (0.75) | 5.24 (1.09) | 5.37 (0.93) | Not normal | Yes
CA = 0.6731
• Question 1: the facilitator had a positive influence. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.000 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (2-tailed): 0.000 < 0.025). Therefore, we can conclude that the facilitator's influence on the process significantly improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.32 > 4) indicates that there is a positive significant improvement.
• Question 2: the facilitator's understanding of the subject. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.000 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (2-tailed): 0.000 < 0.025). Therefore, we can conclude that the facilitator's understanding of the meeting subject, needed to support the ORM process, improved. Moreover, the average value of this specific question (μ = 5.37 > 4) indicates that there is a positive significant improvement.
Our results and feedback from the experts indicate that the facilitator had a significant positive influence on the process. They also indicate that the facilitator had sufficient understanding of the meeting subject to support the process. Our findings are in line with several GSS and facilitation researchers who argue that facilitation is one of the most important variables for a high-quality meeting outcome, see e.g. (Bostrom, Watson et al., 1992; Anson, Bostrom et al., 1995). Further, we observed that the role of a devil's advocate was important because, in this case, the facilitator guided the process and did not have to interfere with the content. This is in line with Quaddus, Tung et al. (1998), who suggest using a devil's advocate when interaction is needed to feed the experts with additional challenging information. From the above we can conclude that a certain level of subject matter expertise is desirable for the facilitator to support ORM meetings. However, as we have argued above, this expertise can be divided over different roles.
Outcome
Following Nunamaker et al. (1989a) we measured and discuss the constructs effectiveness and efficiency from an initiator's and a participant's point of view, respectively. Following George et al. (1990) we argue that if experts dislike our approach, MEEA, they are less likely to use it, even if it might help them to improve the identification, assessment and mitigation of operational risks. Following Fjermestad and Hiltz (2000) we measure and discuss satisfaction with the outcome and with the process, respectively.
Effectiveness initiators
The results in Table 7-52 indicate that the goal of the session was achieved and that the applicability of the results is high. They also indicate that the GSS did not increase the quality of the results (4 was neutral). Because we could only interview one manager and two initiators, the columns 'distribution' and 'improved' are not applicable (n.a.).
Table 7-52: effectiveness - initiators
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
Goal of the session is achieved | 6.3 (0.58) | 6.7 (0.29) | 6.5 (0.45) | n.a. | n.a.
Applicability of the results is high | 6.2 (0.76) | 6.2 (0.58) | 6.2 (0.61) | n.a. | n.a.
GSS increased the quality of the outcomes | 4.8 (0.76) | 3.8 (1.04) | 4.3 (0.98) | n.a. | n.a.
Interviews with the manager and initiators indicated that our structured approach, MEEA, helped them to achieve their goals more effectively compared to their contemporary situation. One initiator mentioned 'usually our group meetings are not so highly structured'. The manager argued that the results could be easily applied to his daily practice because the experts provided realistic examples for the identified control measures. The initiators felt that the GSS did not increase the quality of the outcomes; rather, they argued that structuring the processes contributed positively to this effect. This finding is in contrast with Fjermestad and Hiltz (2000), who found that 89% of the studies measuring effectiveness report improved effectiveness when using GSS technology.
Effectiveness participants
The results in Table 7-53 indicate that the effectiveness of utilizing a GSS in ORM significantly improved compared to the contemporary situation. Moreover, the average value (μ = 4.90 > 4) indicates that there is a positive significant improvement.
Table 7-53: effectiveness - participants
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
The GSS session was more effective than a manual session | 5.32 (1.04) | 5.15 (1.13) | 5.24 (1.07) | Not normal | Yes
The GSS session helped the group to generate the most important ideas and alternatives | 4.94 (1.21) | 5.24 (0.95) | 5.09 (1.04) | Not normal | Yes
The GSS session increased quality of outcomes of the session | 4.84 (1.09) | 4.71 (1.08) | 4.77 (1.07) | Not normal | Yes
The outcomes of the sessions met my expectations | 4.56 (1.21) | 4.44 (1.03) | 4.50 (1.10) | Not normal | Yes
CA = 0.801; pKS = 0.188; pT = 0.000; μ = 4.90
• Question 1: the GSS session is more effective than a manual session. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.004 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that GSS-supported ORM sessions are more effective than manual ORM sessions. Moreover, the average value of this specific question (μ = 5.24 > 4) indicates that there is a positive significant improvement.
• Question 2: the GSS session helped the group to generate the most important ideas and alternatives. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.001 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the GSS improved the generation of important ideas and alternatives compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.09 > 4) indicates that there is a positive significant improvement.
• Question 3: the GSS session increased quality of outcomes. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.002 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.001 < 0.025). Therefore, we can conclude that the use of a GSS increased the quality of outcomes compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.77 > 4) indicates that there is a positive significant improvement.
• Question 4: the outcomes of the sessions met my expectations. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.015 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.016 < 0.025). Therefore, we can conclude that the outcomes of the ORM sessions better met the participants' expectations compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.50 > 4) indicates that there is a positive significant improvement.
Our results, observations and interviews with the experts indicate that using a GSS in combination with MEEA improves the effectiveness of ORM sessions. These findings are in line with Genuchten et al. (1998) and Easley et al. (2003), who argue that a GSS has a positive
effect on effectiveness. These findings are, however, somewhat in contrast with the findings from the initiators. The results from the experts indicate that using a GSS increased the quality of the outcome, while the initiators argued that structuring the processes applied by MEEA contributed positively to this effect. However, as our interviews indicate, the structure of the sessions did not always have an optimal fit with the task and technology used. This finding is in line with a number of studies that argue that the fit between task and technology is important to increase the effectiveness of a meeting, see e.g. (Diehl & Stroebe, 1987; DeSanctis, Poole et al., 1993; Dennis, Hayes et al., 1994; Howard, 1994). Generally, we can conclude that using a GSS in combination with MEEA for ORM has a positive effect on effectiveness.
Efficiency initiators
Table 7-54 indicates the conservatively estimated values when approaches other than MEEA are used for performing the activities in operational risk management. We used ex-ante and ex-post interviews to ask one manager and two initiators about the efficiency with respect to man-hours. The activities performed by Inter Insure are matched to the activities from MEEA and compared with each other, see chapter five. The results indicate that our approach can save up to 36 man-hours (227 - 191 = 36), which equals approximately 16%.
Table 7-54: efficiency - initiators
Activity Inter Insure (man-hours) | Activity MEEA (man-hours) | Difference (hours)
Initiation / preparation (12), Work assignment (7), Desk research (36) | Preparation (39) | -16
Interviews (4), Workshop (136) | Session design and execution (136) | -4
Integrating the results (8), Finalizing the report (24) | Reporting (16) | -16
Total: 227 | Total: 191 | -36
The manager and initiators estimated the number of man-hours that they would need to achieve results similar to the outcome of the current ORM sessions. Their conservative estimation is based on using 17 experts. The activities of Inter Insure consisted of initiation, work assignment, desk research, interviews, a workshop, integrating the results and finalizing the report. Notice that this work process has similarities to the contemporary situation, which is described in chapter four. The manager and initiators argued that the initiation and work assignment would consist of discussions with higher management about the scope of the
project. Once the work assignment was signed, desk research would be performed to find relevant facts. Further, they would use a combination of interviews and a workshop with seventeen experts. The interviews would be used to verify the findings from desk research against the statements of the interviewed experts. Typically, four experts would be interviewed individually in one-hour sessions. Then, a one-day manual workshop with seventeen experts would be used to assess and discuss the risks. Our findings in Table 7-54 indicate that most savings are achieved during the preparation and reporting phases. During session design and execution not much time seemed to be saved, likely because all experts need to be present in the sessions. Our research results are in contrast with most GSS research, which indicates that a GSS can save up to 50% of person hours, see e.g. (Nunamaker, Vogel et al., 1989b; Fjermestad & Hiltz, 2000). A possible explanation for this could be that those researchers compared using a GSS with rather ill-structured sessions. Because MEEA structures the operational risk management process through a number of variables, it can be argued that the savings from GSS are reduced. This is in line with Valacich (1989), who argues that efficiency can be pursued through the improvement of a variety of variables. Such variables include, among others, the context, composition of the experts, anonymity, structure, interaction and facilitation.
Efficiency participants
The results in Table 7-55 indicate that the efficiency of the ORM sessions significantly improved (Asymp Sig (2-tailed): 0.000 < 0.025) compared to the contemporary situation. Moreover, the average value (μ = 5.09 > 4) indicates that there is a positive significant improvement.
Table 7-55: efficiency - participants
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
The available time was used well | 5.32 (0.73) | 4.88 (1.19) | 5.10 (1.00) | Not normal | Yes
The agenda was executed efficiently | 5.22 (0.82) | 4.79 (0.91) | 5.09 (0.86) | Not normal | Yes
CA = 0.9134; pKS = 0.000; pWX = 0.000; μ = 5.09
• Question 1: the available time has been used well. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.000 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that usage of the available time has been
improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.10 > 4) indicates that there is a positive significant improvement.
• Question 2: the agenda has been executed efficiently. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.000 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (2-tailed): 0.000 < 0.025). Therefore, we can conclude that the efficiency of executing the agenda improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.09 > 4) indicates that there is a positive significant improvement.
Our results indicate that the available time was used well and the agenda was executed efficiently. Interviews with the experts indicated that the preparation and structure imposed by the facilitator contributed most to the overall efficiency. The experts further argued that using a GSS increased the efficiency when large amounts of information had to be processed. This finding is in line with Fjermestad and Hiltz (2000), who argue that, in general, a GSS increases the efficiency of group meetings. Generally, we can conclude from the above that the preparation and structure of the process applied by MEEA, in combination with a GSS, have a positive effect on efficiency.
Satisfaction with outcome
The results in Table 7-56 indicate that satisfaction with the outcome significantly improved compared to the contemporary situation. Moreover, the average value (μ = 4.46 > 4) indicates that there is a positive significant improvement.
Table 7-56: satisfaction with outcome
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
I liked the outcome of today's meeting | 4.69 (1.14) | 4.35 (1.11) | 4.52 (1.12) | Normal | Yes
I feel satisfied with the things we achieved in today's meeting | 4.63 (1.15) | 4.35 (0.93) | 4.48 (1.03) | Not normal | Yes
When the meeting was finally over, I felt satisfied with the results | 4.56 (1.15) | 4.35 (1.11) | 4.45 (1.12) | Normal | No
Our accomplishments today give me a feeling of satisfaction | 4.38 (1.03) | 4.47 (1.07) | 4.42 (1.03) | Normal | Yes
I am happy with the results of today's meeting | 4.44 (1.03) | 4.41 (1.06) | 4.42 (1.03) | Normal | Yes
CA = 0.9582; pKS = 0.453; pT = 0.012; μ = 4.46
• Question 1: the level of appreciation of the outcome. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.041 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.013 < 0.025). Therefore, we can conclude that the level of appreciation of the outcome improved. Moreover, the average value of this specific question (μ = 4.52 > 4) indicates that there is a positive significant improvement.
• Question 2: satisfaction with the achievements. The independent KS test indicates that the results of this question are not normally distributed; the Asymp Sig (1-tailed) value is 0.006 (< 0.025). The Wilcoxon test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.014 < 0.025). Therefore, we can conclude that the participants' satisfaction with their achievements in the meeting improved. Moreover, the average value of this specific question (μ = 4.48 > 4) indicates that there is a positive significant improvement.
• Question 3: satisfaction with the results. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.136 (> 0.025). The T-test indicates that H0 cannot be rejected (Asymp Sig (1-tailed): 0.026 > 0.025). Therefore, we can conclude that the participants' satisfaction with the results after the meeting did not improve.
• Question 4: satisfaction with the accomplishments. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.067 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.024 < 0.025). Therefore, we can conclude that the participants' feeling of satisfaction with their accomplishments in the meeting improved. Moreover, the average value of this specific question (μ = 4.42 > 4) indicates that there is a positive significant improvement.
• Question 5: satisfactory feeling with the results. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.067 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.024 < 0.025). Therefore, we can conclude that the participants' feeling of happiness concerning the results of the meeting improved. Moreover, the average value of this specific question (μ = 4.42 > 4) indicates that there is a positive significant improvement.
In general, a GSS seems to have a positive effect on group satisfaction (Mejias, Shepherd et al., 1997; Fjermestad & Hiltz, 2000). Although our results support these overall findings, they also indicate that other variables contribute to satisfaction. For example, our interviews indicate that managing the variety of expectations from experts is important, both well before and during the sessions. They further indicate that the experts were not always satisfied with the task they had to perform. One of the experts mentioned that he liked the identification of operational risks and control measures, but he did not like the outcome of the discussions that followed the assessment of operational risks. This finding is in line with Shaw (1981; 1998), who argues that groups using a GSS are more satisfied when completing idea-generation tasks than decision-making tasks. Generally, we can conclude from the above that the satisfaction with the outcome significantly improved compared to the contemporary situation.
Satisfaction with process
The results in Table 7-57 indicate that satisfaction with the process significantly improved compared to the contemporary situation. Moreover, the average value of the construct satisfaction with process (μ = 4.84 > 4) indicates that there is a positive significant improvement.
Table 7-57: satisfaction with process
ORM session | 1: μ (σ1) | 2: μ (σ2) | μtot (σtot) | Distribution | Improved
I feel satisfied with the way in which today's meeting was conducted | 5.06 (0.90) | 4.53 (1.01) | 4.79 (0.98) | Normal | Yes
I feel good about today's meeting process | 5.12 (0.93) | 4.65 (0.93) | 4.88 (0.95) | Normal | Yes
I found the progress of today's session pleasant | 4.65 (1.00) | 4.35 (0.86) | 4.50 (0.93) | Normal | Yes
I feel satisfied with the procedures used in today's meeting | 5.29 (0.92) | 4.94 (0.83) | 5.12 (0.88) | Normal | Yes
I feel satisfied about the way we carried out the activities in today's meeting | 5.00 (0.94) | 4.76 (1.09) | 4.88 (1.01) | Normal | Yes
CA = 0.9471; pKS = 0.626; pT = 0.000; μ = 4.84
• Question 1: satisfied with the meeting. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.054 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that satisfaction with the way in which the meeting was conducted improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.79 > 4) indicates that there is a positive significant improvement.
• Question 2: feeling good about the meeting process. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.062 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that the participants' feeling about the meeting process improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.88 > 4) indicates that there is a positive significant improvement.
• Question 3: progress of the meeting. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.116 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.004 < 0.025). Therefore, we can conclude that the pleasantness of progress in ORM sessions significantly improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.50 > 4) indicates that there is a positive significant improvement.
• Question 4: satisfaction with the procedures. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.095 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that satisfaction with the procedures used improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 5.12 > 4) indicates that there is a positive significant improvement.
• Question 5: satisfaction with the activities. The independent KS test indicates that the results of this question are normally distributed; the Asymp Sig (1-tailed) value is 0.026 (> 0.025). The T-test indicates that H0 can be rejected (Asymp Sig (1-tailed): 0.000 < 0.025). Therefore, we can conclude that satisfaction with the execution of the activities during the meeting improved compared to the contemporary situation. Moreover, the average value of this specific question (μ = 4.88 > 4) indicates that there is a positive significant improvement.
Our research results indicate that the expert participants were more satisfied with the ORM process compared to their contemporary situation. This was established by using MEEA in combination with a GSS. This finding is in line with Fjermestad and Hiltz (2000), who show that using a GSS seems to have a positive effect on satisfaction with the process. Further, from Table 7-57 we observe that the experts were more satisfied with the process in session 1 than in session
2. A possible explanation for this is that the process in session 1 was clearer to the participants and the activities to be executed were easier. We also note that fewer discussions were needed in session 1. Based on our observations, we believe that discussions are often perceived as 'negative' and can therefore reduce satisfaction in a session. From the above we can conclude that satisfaction with the process improved by using MEEA.
7.4. Learning moments
Three ORM processes were prepared, designed and evaluated for Inter Insure: a risk identification process, a risk assessment process and a risk mitigation process. In this section, we discuss the learning moments for MEEA following the way of thinking, way of working, way of modeling and way of controlling as described in chapter five.
7.4.1. Way of thinking
The accent on the structuredness of the processes in which expert judgment is utilized helped us to design an acceptable solution and improve operational risk management at Inter Insure. Numerous stakeholders, with different goals and objectives, were involved in the design of a sequence of interrelated activities and its evaluation. We learned that using a bounded rationality view helps to arrive at an acceptable solution rather than an optimal solution. From our first test case we learned that the preparation phase was very important. Therefore, in the second case study, we structured the preparation phase more carefully. We also structured the understanding phase more carefully. Models were constructed to gain a better understanding of the problem situation and examples were provided to clarify the problems even further. We learned that this improved the building of a frame of reference for the experts. We learned that facilitation techniques, group methods and GSS software tools help to improve operational risk management. Software tools helped to speed up activities in which large amounts of information need to be processed, e.g. identifying operational risks and/or control measures. We learned that standard recipes help the facilitator in constructing a mental image of the sessions and experimenting with them before they actually take place. Further, design guideline one helped us to comply with relevant standards. We learned from other financial institutions how they utilize expert judgment in operational risk management by investigating their working methods.
Similar to our first test case, design guideline two helped us to create commitment towards the outcome by ensuring procedural rationality among decision-makers, initiators, experts and stakeholders. Further, we learned that the decision-makers and experts were able to create a shared understanding by interacting face-to-face and exchanging viewpoints about operational risks and control measures. Design guideline three helped us in achieving this. Design guideline four helped us in making the operational risk management processes more practical for the experts. This was, similar to our first test case, mainly achieved by applying facilitation principles found in the GSS literature and by using standard facilitation recipes. We learned, however, that flexibility with these recipes was more difficult to accomplish because once a recipe was chosen, it was not changed during the sessions. However, during the design phase, different recipes can be tried out, thereby increasing the flexibility of choice for the facilitator.
7.4.2. Way of working
Understanding phase
Conceptualizing, specifying and validating the problem situation helped us to understand the situation at Inter Insure. Describing the often different and sometimes conflicting goals helps to collect an adequate amount of valuable information. Then, when these goals need to be attained, different parts of the gathered information can be used to achieve them. Further, similar to our first test case, we learned that by investigating the perceived problems, the motive(s) and hidden assumptions of the financial institution's management became much clearer. These motives and assumptions proved to be useful when eliciting the viewpoints of experts. We also learned that initiators and decision-makers often want to achieve many different goals in a relatively short timeframe. Then, compromises often have to be made with respect to the deliverables. It is important that all the stakeholders, e.g. the experts, commit to these compromises before entering the sessions.
Design phase
This test case taught us that the preparation phase is important to provide a frame of reference for the experts. This finding is in line with our first test case. We also learned that specific attention needs to be paid here to a further clarification of the context and objectives. Practical examples help the experts to focus on the context and objectives, which have to be set by the managers, initiators and/or decision-makers in the preparation phase. Further, seventeen mixed-gender expert participants were used to identify, assess and mitigate the operational
risks. This quantity was used to minimize inconsistency and bias and was adequate to complete all the activities in the processes. We learned that using a mixed-gender group of expert participants can minimize internal politics and groupthink. Female experts seemed, once again, more cautious about being overconfident than male experts while estimating the frequency of occurrence and impact of operational risks. The composition of the experts was also suitable to reach the goals. We learned that it is necessary to deploy appropriate process knowledge, next to substantive knowledge, to estimate the frequency of occurrence and impact of operational risks. Further, we learned that the collaboration between expert participants needs to be limited to safeguard the accuracy of their estimations of the frequency of occurrence and impact of the operational risks. Similar to our first test case, we found that this can be established by keeping the number of activities in which discussions are needed to a minimum. We learned that the GSS was primarily used to support the experts in identifying operational risks and control measures, i.e. when not much verbal discussion is needed. The GSS supported parallel communication and allowed the experts to make anonymous contributions. Further, we learned that managers, experts and initiators have different preferences with respect to anonymity. Once again, the experts received the functionality of anonymity better in the risk identification process than in the risk assessment process. This was because operational risks could be identified anonymously, while anonymity diminished during the aggregation of operational risks, i.e. when discussion was needed. Experts valued anonymity more than the managers and initiators because it reduced the fear of reprisals. Further, we learned that involvement increased due to the anonymity function in the GSS. Moreover, we learned that participation in the sessions was likely to be increased by the fact that experts had the possibility to meet and exchange opinions with other experts face-to-face rather than by using a GSS. With respect to facilitation, we learned also in this test case that a facilitator with sufficient understanding of the meeting subject needs to facilitate the processes. This understanding can be achieved by extensive preparation using the approach described in chapter five. Further, we learned that it is imperative for the facilitator to have a positive influence on the experts and the process. We also learned that the stakeholders want to attain their goals effectively and efficiently while maintaining a high quality and applicability of the results. Moreover, the stakeholders want to be satisfied with the results. We once more learned in this test case that MEEA will be used in practice by financial institutions if improvements are made to the
effectiveness, efficiency and satisfaction of operational risk management. Moreover, we argue that if financial institutions dislike MEEA, they are less likely to use it even if it might help them to improve the effectiveness and efficiency.
7.4.3. Way of modeling
We learned that using simple visual models helps to provide a frame of reference to the managers, initiators and experts. These models should capture the reality of the ORM situation as closely as possible. In particular, modeling the problem situation helps to focus the managers, and later the experts, on the problem at hand. We also learned that models regarding the problem situation should be constructed during the understanding phase, rather than during the design phase. In this way, we believe that a better understanding of the problem situation is achieved. Further, several process models were presented for the identification, assessment and mitigation phases. These process models are based on the risk management activity, pattern of collaboration and facilitation recipe used. Similar to our first test case, we learned that the activities principally determine the choice for an appropriate pattern and corresponding facilitation recipe. We also learned that these models help the facilitator in communicating the session structure to the decision-makers and in structuring the interaction between experts. The models also help the facilitator in executing the activities and creating insight into the dynamics of the interdependent activities.
7.4.4. Way of controlling
It is important to follow a project management approach when different stakeholders are involved. Different roles in a research project should be made explicit before the start. We learned that it is important to build in enough time for checkpoints and decisions that have to be made by the different stakeholders. Similar to our first test case, we learned that it is imperative to document the deliverables and to discuss and present them accordingly. The deliverables should be made explicit before documenting the results. Further, we observed that using a 'middle out' and incremental point of view helps to gather support for a change process: improving operational risk management. By focusing on a small but essential part of the financial institution, Inter Insure, support was gathered to change the current way of working incrementally. We also learned here that it is imperative to have a 'champion', e.g. an experienced facilitator, in the financial institution who can facilitate this change process.
…the art of making alternative choices, an art that properly should be concerned with the anticipation of future events rather than reaction to past events.
Felix Kloman
8.
Epilogue
We started our research with the observation that recent developments in utilizing expert judgment ask for a new approach in operational risk management (ORM). The objective of our research project was to develop an approach to improve the process of utilizing expert judgment in ORM. The concepts of our approach were applied and tested using two case studies in the financial service sector. We selected these case studies based on the criteria discussed in chapter two. The first case study was performed at a large insurance organization, Ace Insure. The second case study was performed at Inter Insure. Both are part of Bank Insure, which is a large Dutch financial institution. In this chapter we reflect upon our research project by discussing our research findings and our research approach, and by providing starting points for future research.
8.1. Research findings
In this section we discuss our research findings by answering our research questions. The Multiple Expert Elicitation and Assessment (MEEA) approach and the modeling of expert judgment activities are also discussed.
8.1.1. Research question one
Our first research question was formulated as: what are the generic characteristics of utilizing expert judgment in operational risk management? This research question helps us to understand the problems, issues and challenges of utilizing expert judgment in ORM. We answered this research question by discussing literature on ORM and expert judgment (chapters one and three) and by conducting a case study (chapter four).
Our literature review and case study indicate that developing an approach to improve the process of utilizing expert judgment in ORM is a complex and challenging activity. Expert judgment is used as an input for estimating the level of exposure to operational risk in financial institutions. From the highly fragmented literature we identified a broad range of issues in operational risk management, expert judgment and supporting software tools and techniques. The case study helped us to sharpen, compare, contrast and extend our literature findings. The main issues that hinder the use of expert judgment in financial institutions and constrain the development of an approach are related to the following generic characteristics:
• regulations: financial institutions face difficulties that are closely related to the compliance requirements of the New Capital Accord (BIS II) regulations: data collection, data tracking and a robust internal control system
• loss data: there is a lack of internal loss data and, when it is available, it is often of too poor quality to accurately estimate the exposure to operational risk. Using external loss data raises methodological issues such as reliability, consistency and aggregation
• value of the output: results from risk self assessments are perceived to be subjective and value laden, and often have a poor quality
• inconsistent use of expert judgment in risk self assessments: this makes analysis and aggregation of data more difficult
• a static view of risk self assessments: because risk self assessments are time consuming and labor intensive, they are often conducted with a low frequency and as such do not provide a dynamic view of the operational risks
• software tools: it is hard to piece together data derived from multiple experts. Moreover, there is a lack of synergy between methodologies and tools to structure expert judgment activities.
Because of these issues, it is challenging to develop an approach to improve the process of utilizing expert judgment in ORM. First, for decision makers to take effective decisions, the results need to be free from biases and accepted by the expert participants, the initiators and the institution as well. Our literature review indicates that biases are often caused by incompleteness of goals. Framing the goals in different ways can help decision makers to acquire enough information to treat the problem under certainty or risk. Second, in order for expert judgment to add value to the financial institution, the process and outcomes should meet the sometimes
conflicting goals of the institution, decision makers, initiators, expert participants, and stakeholders as closely as possible. Although situational factors often encourage attention to a single goal, considering more than one goal will help to build agreement between groups and the preferences of individuals. Third, there is a need to formulate a clear process to provide the decision makers with detailed insight into the process and activities that have to be performed by the experts. The process and activities must be easy to communicate, so using a consistent terminology helps them to create a shared understanding. Moreover, it helps to implement the preferred solution in the financial institution. Fourth, scheduling initiators and expert participants, reporting time and labor costs hinder the success of implementing expert judgment in financial institutions. Fifth, the process in which multiple experts are utilized must be flexible enough to anticipate dissimilar business processes in financial institutions. For example, a new client take-on business process is more dynamic than selling an insurance policy. In the first case, forward-looking risk identification is more important than in the second case. A flexible process helps to anticipate dissimilar business processes by establishing an increased focus on a certain phase in ORM. Sixth, the approach to improve the process of utilizing expert judgment in ORM should comply with the regulatory requirements as issued. Moreover, most regulators require financial institutions to have a process that is auditable.
8.1.2. Research question two
Our second research question was formulated as: what concepts can be used to improve operational risk management when utilizing expert judgment? This second research question helps us to find out what relevant concepts can be used to improve ORM when utilizing expert judgment. We answered this question by studying literature on operational risk management, expert judgment and group support systems (chapters one and three). Although the concepts discussed in this fragmented literature have different objectives, they have all been used in the context of risk management. We discussed a number of definitions of operational risk, see chapter three. From the risk management literature it becomes evident that ORM can benefit from concepts such as causal-based definitions and event classifications. Definitions that include possible causes and consequences are promising concepts to accurately measure and manage operational risk (Young, Blacker et al., 1999; RMA, 2000; Brink, 2002). Operational risk is defined as ‘the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events’ (RMA, 2000). This causal-based definition of operational risk helps experts to identify, assess and manage operational risk. More specifically, the concept of an
event, combined with the primary causes (processes, people, systems and external events), will help experts to identify operational risk. Further, an event is characterized by its frequency of occurrence and impact. This helps experts to classify events into distinct loss categories such as expected, unexpected or catastrophic loss. Using a classification of events has its own drawbacks. From our test cases we found that experts tend to overestimate the probability of low frequency, high impact events while underestimating high frequency, low impact events. The Barings case is a classic example of this. Availability of such extreme news may lead to an overestimation of such risks. This has implications for ORM given that high frequency, low impact events can also lead to a catastrophic loss (see also table 3-2). Training the experts in risk management thinking before the sessions take place and focusing the experts within the sessions, e.g. by using facilitation recipes, are promising concepts to prevent under- and overestimation. To manage operational risk, there are generally four possible courses of action: acceptance, avoidance, transfer and mitigation. Mitigation of operational risk is the most compelling because the other possibilities do not actually reduce the risk; rather the risk remains. For example, many insurance products exist today to transfer the risk to an insurance firm. We have to bear in mind that such insurance policies also exclude a number of factors, so that not all of the risk is covered. Moreover, when you transfer the risk, you do not actually reduce the risk. As such, operational risk can be mitigated by a set of internal control measures. These will only function properly if the internal control environment, such as procedures and working processes, in which they are embedded is established appropriately. We also discussed literature on expert judgment. Expert judgment is defined as ‘the degree of belief, based on knowledge and experience, that an expert makes in responding to certain questions about a subject’ (Clemen & Winkler, 1999; Cooke & Goossens, 2004). In this view, a probability judgment of an operational risk can be based on an expert’s belief and knowledge. This view allows financial institutions to assess probabilities about operational risk when they need them for decision making. It also allows us to use all the information we have available, including knowledge about frequencies, a set of logical possibilities and knowledge from previous experiences. However, as the risk management literature indicates, experts tend to overestimate the probability of events with a low frequency and underestimate the probability of events with a high frequency. From the expert judgment literature it becomes evident that operational risk management can benefit from concepts such as using structured methods to utilize expert judgment (Baron, 1994). Using structured methods provides better
results than using unstructured methods (Clemen & Winkler, 1999; Cooke & Goossens, 2002). Moreover, the existing expert judgment methods look promising to structure the operational risk management process into a number of phases and activities. This structuring allows initiators, experts, stakeholders and the facilitator to work simultaneously on activities in different phases. Each phase can be further divided into activities. In essence, this is the classic theory of division of labor and organizing, see e.g. (Bosman, 1995). Literature, however, indicates that each activity can be carried out suboptimally, wherein inconsistency and bias play an important role (Armstrong, 2001a). Inconsistency is a random or a-systematic deviation from the optimal, whereas bias involves a systematic deviation from the optimal. Numerous principles are discussed that help to carry out each phase and activity as optimally as possible, thereby aiming at minimizing inconsistency and bias (see chapter three). Most principles, however, are aimed at minimizing bias. From our test cases we found that insufficient search, i.e. the failure to consider alternative possibilities, additional evidence and goals, is the central bias that prevents expert judgment activities in operational risk management from being carried out optimally. In this research, we have interpreted the biases and principles found in literature from a prescriptive point of view because we aimed to prescribe a theory for operational risk management, see chapter five. We argue that these prescriptive principles will help the facilitator to have more control over the activities in each phase and enable him or her to better control the whole process. In chapter three we also discussed concepts from the group support systems literature. A group support system is defined as ‘a socio-technical system consisting of software, hardware, meeting procedures, facilitation support, and a group of meeting participants engaged in intellectual collaborative work’ (Jessup, Connolly et al., 1990). From this literature it becomes evident that operational risk management can benefit from concepts such as expert judgment meeting procedures and how to facilitate communication and cognitive tasks for both process and content (Nunamaker, Dennis et al., 1991). In the operational risk management context it is the
process dimension that refers to the structuring of well-prepared and scheduled expert judgment activities, and the content dimension that refers to supporting the actual substance of the communication or cognitive task. Several concepts were discussed to help us categorize cognitive tasks into easily communicable and distinctly supportable categories. We found patterns of collaboration to be a useful concept for this because each pattern can be easily communicated. Moreover, facilitation recipes exist for each pattern that support the facilitator in his or her tasks. From our test cases
we conclude that these facilitation recipes, in combination with the ORM context, help the facilitator to focus the experts on variables that should be considered, thereby improving their judgments. However, these facilitation recipes have their own drawbacks. From our case studies we conclude that they are not always easy to use. We argue that the facilitation recipes must be tailored to the situation at hand. Further, we discussed a number of concepts for the variables facilitation, goals, task, structure, group composition, group size and anonymity. From our test cases we conclude that these concepts help to increase the effectiveness, efficiency and satisfaction of ORM. We also conclude that these variables need to be taken into account in the preparation phase. We ended our literature study with the observation that dedicated preparation and structuring of the expert judgment activities, appropriate use of GSS technology and good facilitation improve operational risk management.
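To make two of the concepts from this section more concrete, the minimal sketch below illustrates, first, how an event characterized by its frequency of occurrence and impact might be classified into the loss categories mentioned above and, second, the difference between bias and inconsistency in a set of expert frequency estimates. The thresholds, event figures and estimates are hypothetical assumptions for the illustration; they are not taken from our case studies.

# Illustrative sketch only: thresholds, event figures and expert estimates
# are hypothetical and do not come from the case studies.
from statistics import mean, stdev

def classify_event(frequency_per_year, impact_per_event,
                   unexpected_threshold=100_000, catastrophic_threshold=1_000_000):
    """Classify an operational risk event by its expected annual loss."""
    expected_annual_loss = frequency_per_year * impact_per_event
    if expected_annual_loss >= catastrophic_threshold:
        return "catastrophic"
    if expected_annual_loss >= unexpected_threshold:
        return "unexpected"
    return "expected"

# A low frequency, high impact event and a high frequency, low impact event
# can end up in the same loss category, which is why both deserve attention.
print(classify_event(0.05, 25_000_000))  # low frequency, high impact
print(classify_event(5_000, 250))        # high frequency, low impact

# Bias is a systematic deviation of the estimates from a reference value;
# inconsistency is the random spread of the estimates around their own mean.
estimates = [0.8, 1.2, 0.9, 1.5, 1.1]    # estimated frequency (events per year)
reference = 1.0                          # e.g. frequency observed in loss data
print("bias:", round(mean(estimates) - reference, 2))
print("inconsistency:", round(stdev(estimates), 2))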
8.1.3. Research question three
Our third and main research question was formulated as: what does an approach to improve operational risk management by utilizing expert judgment look like? This research question helped us to develop an approach to improve operational risk management. The approach is described in chapter five. Based on our literature review, the starting points from our first case study and our own experiences, we developed an approach to improve operational risk management, called Multiple Expert Elicitation and Assessment, abbreviated to MEEA. We use this name because we elicit the opinions of multiple experts and they assess the operational risk. MEEA was structured using the framework of Seligman, Wijers et al. (1989) and Sol (1990). This framework seemed promising to structure MEEA and is discussed in a way of thinking, way of working, way of modeling, and way of controlling. In the way of thinking we presented our view on operational risk management, provided an underlying structure, set the overall tone and provided design guidelines on which MEEA is based. The guidelines are prescriptive in nature and help to make a choice when designing. We defined our view on the problem domain in chapter five. We argued that operational risk management can be improved by designing processes in which expert judgment is utilized in a structured manner. Based on our literature review and first case study we expected that structuring the expert judgment processes is the most important variable for improvement. From our test case studies we can confirm this expectation. Further, improving operational risk management was considered an ill-structured problem because many alternative solutions can be proposed to improve operational risk management, and it is difficult to quantitatively
evaluate the effect of these solutions. Based on our test cases we partly agree with this: although actors value these effects differently, we developed an evaluation framework to quantitatively and qualitatively evaluate the effects of the solution; we discuss this under research question four. We further believed that using multiple experts improves operational risk management (see chapter five). From our literature review and first case study we conclude that sharing thoughts about operational risks and control measures enables a commitment towards the outcome. Further, it was difficult to determine from the literature and our first case study whether using multiple experts would enable a more precise assessment. We confirm that this is complicated because there was no alternative assessment method with which to compare the results. We used a number of design guidelines to deal with the philosophy behind the way of working and to organize our design ideas (see chapter five). Design guidelines were developed on compliance with relevant standards, procedural rationality, enabling a shared understanding, flexibility of the process, and the roles to be considered. From our test case studies we conclude that our guidelines were broad and also specific enough to guide the design process (see chapters six and seven). However, we would like to note that these guidelines were tested in a financial institution operating within the European Union (EU). Different guidelines might apply outside the EU. Further, based on our literature review and observations in our test case studies, we argue that an understanding of how experts in general think about risk helps to improve operational risk management. In the way of working, we prescribed the process and activities that need to be executed in the phases preparation, risk identification, risk assessment, risk mitigation and reporting of operational risk management. We adopted a problem solving approach to achieve a better understanding of the problem situation and define a set of possible solutions (Sol, 1982; Bots, 1989). The problem solving approach was divided into an understanding phase and a design phase (see chapter five). From our test case studies we conclude that this problem solving approach helps us to focus more on the important aspects in operational risk management. The understanding phase helped us in the test cases to conceptualize, specify and validate the problem situation (see chapters six and seven). Specifically, from our test case studies we conclude that validation of the problem situation by a signed work assignment helps the responsible manager to sharpen his or her view. Moreover, we conclude that framing the goals in different ways helps decision makers to acquire enough information to sharpen the view and scope and find appropriate solutions. We also conclude that the understanding phase is
critical for providing a frame of reference for the multiple experts, but it is also time consuming. The design phase was concerned with finding appropriate solutions. Alternative solutions were worked out in a number of models of possible solutions. After evaluation, one of these models was chosen and implemented as an appropriate solution in the financial institution. We note that the evaluation of a possible solution is not an easy task. We observed that this was mainly because the evaluation was qualitative and decisions had to be made based on this. In the way of modeling we prescribed the modeling concepts that we constructed following Sol’s (1990) analytical framework. We made a distinction between conceptual models and empirical models. Conceptual models are characterized by high levels of abstraction and fuzziness and helped us to structure perception, representation and reasoning regarding the problem situation. We used simple visual models to give a clear qualitative understanding of the problem situation. From our test case studies we conclude that these models help us to communicate on a managerial level. Empirical models enabled us to analyze and diagnose a problem situation and find possible solutions. They are more formalized representations of reality and capture more detail and the time-ordered dynamics of interdependent activities. For this, we used activity diagrams supplemented with patterns of collaboration and facilitation recipes. We conclude that the activity diagrams help us to diagnose the problem situation and find alternative solutions. However, we found that our empirical models are difficult to quantify, which makes it difficult for a decision maker to quantitatively evaluate a particular solution. We also conclude that the activity diagrams help us to communicate the chosen solution at the operational level. We also prepared a number of sequence diagrams for each phase of operational risk management. We made these sequence diagrams from a process, facilitator and actor point of view. We experienced that drawing these diagrams is very time consuming and that they are difficult to communicate to stakeholders. On the positive side, they do provide a detailed and dynamic insight into the activities and the sequence of activities. These models are presented in a confidential report. In the way of controlling we prescribed the control of the way of working and the models that we used to design a process to improve operational risk management. We used a project management approach involving project design, checkpoints, documentation, decisions to be made and time management, see e.g. (Akker, 2002; Onna & Koning, 2004). Based on our experiences with the first test case study we argue that following such a project management approach can prevent future conflicts about documentation and decisions made. We used a
‘middle out’ and incremental point of view in carrying out the way of working and modeling process. Based on our experiences in the two test case studies we argue that focusing on a small but critical part of the financial institution is useful for design and implementation. The first test case study formed one of the starting points for implementing a way of working with multiple experts in the financial institution Bank Insure. Numerous risk managers were trained at Bank Insure. We treated the way of working and way of modeling as a combined task wherein the manager, initiator and facilitator are responsible for various activities at different times. From our test case studies we conclude that the steps taken in the way of working need to be carried out iteratively. As mentioned previously, the operational risk management process must be flexible enough to be applied to different business processes. Our test case studies furthermore indicated that choosing the vocabulary in the process models needs to be carried out iteratively. This was done using a number of operational risk management training sessions at Bank Insure.
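As an illustration of how the way of working and the way of modeling fit together, the minimal sketch below represents phases, activities, patterns of collaboration and facilitation recipes as a simple data structure that a facilitator could use when preparing a session. The activity names, patterns and recipes listed are illustrative assumptions, not the exact ones used in the test cases.

# Hypothetical outline of the MEEA phases and activities; for illustration only.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    collaboration_pattern: str   # e.g. generate, reduce, clarify, organize, evaluate
    facilitation_recipe: str     # recipe the facilitator follows for this activity

meea_phases = {
    "preparation": [
        Activity("define scope and goals", "clarify", "structured interview"),
    ],
    "risk identification": [
        Activity("identify operational risks", "generate", "anonymous brainstorming"),
        Activity("aggregate duplicate risks", "reduce", "facilitated discussion"),
    ],
    "risk assessment": [
        Activity("estimate frequency and impact", "evaluate", "individual scoring"),
    ],
    "risk mitigation": [
        Activity("propose control measures", "generate", "anonymous brainstorming"),
    ],
    "reporting": [
        Activity("document and present results", "organize", "report template"),
    ],
}

# Print the session outline, phase by phase.
for phase, activities in meea_phases.items():
    print(phase, "->", [a.name for a in activities])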
8.1.4. Research question four
Our fourth research question was formulated as: how do we evaluate the improvements that were made to operational risk management? This research question helped us to evaluate the improvements that were made to ORM. We were able to implement and test MEEA against the contemporary situation of the financial institution under investigation. We evaluated MEEA by testing the constructs context, team, technology, process and outcome, which were identified and described in chapter five, section 5.5.2. The goal of the evaluation was to make it reasonable that using our approach improves ORM and is ‘better’ than using the contemporary way of working in the financial institution under investigation, or than not using such an approach at all. We acknowledge that it is difficult to measure improvements since previous experience establishes a reference point for the experts, against which future experiences are compared. Moreover, it is expected that experts benefit more from improvement than from decline, thereby possibly biasing the results. Nevertheless, we made an effort to make it reasonable that ORM improved by collecting the opinions of experts, initiators, facilitators and stakeholders regarding our approach. We were able to collect both quantitative and qualitative data from our test case studies by making use of the data collection instruments (1) survey, (2) interviews, (3) expert estimations, (4) direct observations, and (5) system logs. We used the qualitative findings in support of our statistical tests and to enable a rich presentation on each construct. Based on our experiences in the test cases we argue that using quantitative and qualitative data enables us to provide a detailed insight and make it reasonable that using our approach improves ORM. From our test case
studies we can conclude that MEEA is more effective, efficient and leads to more satisfaction when implemented as compared to the contemporary situation in the financial institution under investigation. MEEA can be used to support risk managers and decision makers in their efforts to provide a financial institution with the input to estimate their exposure to operational risk. In addition, MEEA can operate with scarce data and enables financial institutions to understand operational risk with a view to reducing it, thus reducing economic capital within the Basel II regulations.
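The quantitative part of such an evaluation can be illustrated with a small sketch: comparing, for example, satisfaction scores gathered with the survey instrument under the contemporary way of working with scores obtained in MEEA sessions. The scores below are invented, and the use of a Mann-Whitney U test (and of SciPy) is our assumption for ordinal survey data; the sketch is not the actual analysis performed in the case studies.

# Hypothetical 5-point satisfaction ratings; not the actual case-study data.
from statistics import mean
from scipy.stats import mannwhitneyu  # nonparametric test suited to ordinal survey scores

contemporary = [3, 2, 3, 4, 3, 2, 3]   # contemporary way of working
with_meea = [4, 4, 5, 3, 4, 5, 4]      # MEEA sessions

# Test whether MEEA scores tend to be higher than the contemporary scores.
stat, p_value = mannwhitneyu(with_meea, contemporary, alternative="greater")
print("mean contemporary:", round(mean(contemporary), 2))
print("mean MEEA:", round(mean(with_meea), 2))
print("p-value:", round(p_value, 3))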
8.2. Research
8.2.1. Application domain
The topic of our research project was improving ORM. Three ORM processes were studied during this research. We reflect on our application domain, the financial service sector, in this section. Our first case study was at Group Operational Risk Management at Bank Insure. In this case study, we explored the phenomena involved when developing an approach to improve the process of utilizing expert judgment in ORM. We argue that this case study has all the characteristics of ORM. The case study at Bank Insure helped us to identify the perceived problems in ORM. Bank Insure will use expert judgment as an input to estimate their level of exposure to operational risk because they face similar difficulties and challenges with loss data to those mentioned in chapter one. It is expected that, in the future, expert judgment will also be used to add valuable information to existing loss data and provide insights into possible future catastrophic losses. Further, it can be concluded that this case study provided good starting points for the development of MEEA to improve ORM at Bank Insure. The second and third case studies were carried out at different business units of Bank Insure. Both cases involved an insurance firm. The involved managers and experts of Bank Insure found that these case studies were representative for applying and testing MEEA to improve operational risk management. We conclude that the criteria to select the cases were appropriate. Both cases concerned the phases preparation, risk identification, risk assessment, risk mitigation and reporting. We also tested MEEA outside Bank Insure to investigate its generalizability. We applied MEEA to a large Dutch logistics firm and a large Dutch investment firm. Although the results from these cases are confidential, first research results indicate that MEEA is usable
outside the financial service sector (Grinsven, 2006). Moreover, the results indicate that the effectiveness, efficiency and satisfaction are improved.
8.2.2. Research approach
The output results of our research are discussed above. In this section we discuss the research approach that we used to accomplish these results. We reflect on our research philosophy, strategy and instruments in this section. An interpretive research tradition was followed as our research philosophy. The necessity to arrive at a better understanding of the development of an approach to improve the process of utilizing expert judgment in operational risk management was an important reason to choose this philosophy. Further, based on the limited availability of literature, the issues presented in chapters one and four, and our test case studies presented in chapters six and seven, we argue that it is justified to choose an interpretive research tradition for the development of an approach to improve the process of utilizing expert judgment in operational risk management. An inductive-hypothetic research strategy was followed as our research strategy. As no other theories were available at the time we started our research, we argue that it is difficult or even impossible to conduct this research in a solely deductive way. Therefore we argue that the inductive-hypothetic strategy was appropriate. Literature research, case studies and action research were used as the main research instruments to gather empirical data. We used literature research to provide us with guidance and starting points for conducting this research. We used the existing literature to compare and contrast our research findings from our test case studies. Using the literature, we also sharpened our view from the first case study. We complemented our literature review by using case studies and action research. Our case studies support our observation that active participation of the researcher enables him or her to capture the reality in great detail. A disadvantage, however, is the generalizability of the research findings. Since the findings are based on experiences in three case studies, the results cannot be generalized outside the scope of the context in which they occurred.
8.2.3. Future research
This research project answered many questions regarding the development of an approach to improve the process of utilizing expert judgment in operational risk management. We also gained a number of indications for future research. In this section we introduce our recommendations for future research. MEEA was applied by a small group of researchers and experts working in the field of operational risk management in the financial service sector. MEEA was developed to support multiple experts in the field of operational risk management within a financial institution. This leads us to our first recommendation.
• Research the application of MEEA by other experts in other financial institutions.
These other experts should be independent of the developers of MEEA. MEEA can be further improved or extended based on the experiences of these experts. MEEA has recently also been applied to a logistics firm, a governmental organization and a pension fund organization. First research results indicate that MEEA is applicable and useful in these other organizations. The goal of this research project was to develop an approach to improve the process of utilizing expert judgment in operational risk management. We argued that the financial service sector is a challenging domain for improving operational risk management. MEEA was applied in the financial service sector and for this we used a specific group support system: GroupSystems. This leads us to our second recommendation.
• Research the generalizability of MEEA by using other group support systems.
Hypothetically, MEEA can be applied by using a different group support system. High failure rates associated with group support system implementations indicate that other coordination mechanisms should be considered (Grinsven & Vreede, 2003a). During our research we found support for this. For example, we experienced several software failures and hardware problems that hampered the progress of the operational risk management sessions. In our experience in applying MEEA in settings other than those described in this research, we found that a well-facilitated brown paper session can also lead to effective, efficient and satisfying results. These first results will be published in (Grinsven, 2006).
References
Adel, M. v. d. (2003). Bazel Kapitaal Accoord Implicaties voor het Bankwezen. Paper presented at the IIR Conference Basel II: Best Practices in risk management and -measurement. February 4th and 5th, Amsterdam (in Dutch). Airforce. (1997). Airforce Civil Engineers: Operational Risk Management (ORM) Handbook: US Airforce. Akker, A. (2002). Prince 2 compact: methode voor projectmanagement: LAGANT Management Consultants BV (in Dutch). Andersen, A. (2001). Risk Management: an enterprise perspective. Results of FEI Research Foundation. Andersen survey. Anderson, A. (1998). Operational Risks and Financial Institutions. Andriessen, J. H. T. H. (2000). How to Work Apart Together? An Integrated Approach to Explain and
Design Work with Collaboration Technology. Paper presented at the 9th European Congress on Work and Organizational Psychology, Helsinki. Anson, R., Bostrom, R. P., & Wynne, B. E. (1995). An Experiment Assessing GSS and Facilitator Effects on Meeting Outcomes. Management Science, 41, 189-208. Arkes, H. R. (2001). Overconfidence in Judgmental Forecasting. In J. S. Armstrong (Ed.),
Principles of Forecasting: A handbook for Researchers and Practitioners (pp. 495-515). Boston/Dordrecht/London: Kluwer Academic Publishers. Armstrong, J. S. (2001a). Selecting Forecasting Methods. In J. S. Armstrong (Ed.), Principles of
Forecasting: A handbook for Researchers and Practitioners (pp. 364-403).
Boston/Dordrecht/London: Kluwer Academic Publishers. Armstrong, J. S. (2001b). Role Playing: A Method to Forecast Decisions. In J. S. Armstrong (Ed.), Principles of Forecasting: A handbook for Researchers and Practitioners (pp. 364-403). Boston/Dordrecht/London: Kluwer Academic Publishers. Armstrong, J. S. (2001c). Combining Forecasts. In J. S. Armstrong (Ed.), Principles of Forecasting:
A handbook for Researchers and Practitioners (pp. 364-403). Boston/Dordrecht/London: Kluwer Academic Publishers.
Axson, D. (2003). Operational Risk Management: A New Performance Management Imperative. Business Performance Management, 34-44. Babeliowsky, M. N. F. (1997). Designing Interorganizational logistic networks: A simulation based
interdisciplinary approach. Unpublished Doctoral Dissertation, Delft University of Technology, Delft. Baron, J. (1994). Thinking and Deciding (2nd ed.): Cambridge University Press. Baud, N., Frachot, A., & Roncalli, T. (2002). Internal data, External data and consortium data for operational risk measurement: how to pool data properly. Working paper: Groupe de Recherche Operationnelle, Credit Lyonnais. BCBS. (1998). Operational Risk Management: Basel Committee on Banking Supervision. BCBS. (2001a). Risk Management Practices and Regulatory Capital: Cross sectoral comparison: Basel Committee on Banking Supervision. BCBS. (2001b). Sound Practices for the Management and Supervision of Operational Risk: Bank for International Settlements. Basel Committee Publications. BCBS. (2001c). Working Paper on the Regulatory Treatment of Operational Risk: Bank for International Settlements. Basel Committee Publications. BCBS. (2003a). Sound Practices for the Management and Supervision of Operational Risk: Bank for International Settlements. Basel Committee Publications No 96. BCBS. (2003b). Supervisory Guidance on Operational Risk: Advanced Measurement Approaches for Regulatory Capital: Office of the Comptroller of the Currency (OCC). Bier, V., Haimes, Y. Y., Lambert, J. H., Matalas, N., & Zimmerman, R. (1999). A Survey of Approaches for Assessing and Managing the Risk of Extremes. Risk Analysis, 19(1), 8376. Bigün, E. S. (1995). Risk analysis of catastrophes using experts' judgements: An empirical study on risk analysis of major civil aircraft accidents in Europe. European Journal of Operational
Research, 87, 599-612. Bolger, F., & Wright, G. (1994). Assessing the Quality of Expert Judgment. Decision Support
Systems, 11, 1-24.
Bosman, A. (1977). Een Metatheorie over Het Gedrag Van Organisaties. Leiden: Stenfert Kroese (in Dutch). Bosman, A. (1995). Werkverdeling en Organiseren. Universiteitsdrukkerij, Groningen (in Dutch). Bostrom, R. P., Anson, R., & Clawson, V. K. (1993). Group Facilitation and Group Support Systems. In J. Valacich (Ed.), Group Support Systems: New Perspectives (pp. 146-168). New York. Bostrom, R. P., Watson, R. T., & Kinney, S. T. (1992). Computer Augmented Teamwork. New York: Van Nostrand Reinhold. Bots, P. W. G. (1989). An environment to support problem solving. Unpublished doctoral dissertation, Delft University of Technology, the Netherlands. Briggs, R. O. (1994). The Focus Theory of Group Productivity and its application to the development and testing of electronic group support systems. Unpublished Doctoral Dissertation, The University of Arizona, Tucson, A.Z. Briggs, R. O., Dennis, A. R., Beck, B. S. & Nunamaker, J. F. Jr. (1993). Whither the Pen-Based Interface? Journal of Management Information Systems, 9(3), pp. 71-90. Briggs, R. O., Nunamaker, J. F. Jr., & Sprague, R. (1998). 1001 Unanswered Research Questions in GSS. Journal of Management Information Systems, 14(3), pp. 3-21. Briggs, R. O., Reinig, B. A., & Vreede, G. J. d. (2003). Satisfaction Attainment Theory and its Application To Group Support Systems Meeting Satisfaction: University of Nebraska at Omaha. Briggs, R. O., & Vreede, G. J. d. (2001b). ThinkLets: Building Blocks for Concerted Collaboration: GroupSystems.com. Briggs, R. O., & Vreede, G. J. d. (1997a). Meetings of the Future: Enhancing Group Collaboration with Group Support Systems. Journal of Creativity and Innovation Management,
6(2), 106-116. Briggs, R. O., & Vreede, G. J. d. (1997b). Measuring Satisfaction in GSS Meetings. Paper presented at the Proceedings of the 18th International Conference on Information Systems.
Briggs, R. O., & Vreede, G. J. d. (2001a). Thinklets and Repeatable Processes: Keys to Sustained Success
with GSS. Paper presented at the Proceedings of the 2001 Group Decision & Negotiation Conference, La Rochelle, France. Briggs, R. O., Vreede, G. J. d., & Nunamaker, J. F. (2003). Collaboration Engineering with ThinkLets to Pursue Sustained Success with Group Support Systems. Journal of
Management Information Systems, 19(4), 31-64. Briggs, R. O., Vreede, G. J. d., Nunamaker, J. F. J., & Tobey, D. (2001). ThinkLets: Achieving
Predictable, Repeatable Patterns of Group Interaction with Group Support Systems (GSS). Paper presented at the Proceedings of the 34th Hawaii International Conference on System Sciences, Maui, Hawaii. Brink, G. J. v. d. (2001). Operational Risk, Wie Banken das Betriebsrisiko beherrschen. Stuttgart (in German). Brink, G. J. v. d. (2002). Operational Risk The New Challenge for Banks. New York: Palgrave. Brown, M., Jordan, J. S., & Rosengren, E. (2002). Quantification of Operational Risk, 239-248. Bryn, H. K., & Grimwade, M. (1999). Everything has its place. Risk Magazine. Burke, K., & Chidambaram, L. (1996). Do mediated contexts differ in information richness? A comparison
of collocated and dispersed meetings. Paper presented at the Proceedings of the Twenty-Ninth Hawaii Interantional Conference on Systems Sciences, Hawaii. Campbell, D. J. (1988). Task Complexity: A Review and Analysis. Academy of Management Review,
13(1), 40-52. Carol, A. (2000). Bayesian Methods for Measuring Operational Risk. Reading, UK: The University of Reading. CFSAN, & Nutricion, C. f. F. S. a. A. (2002). Initiation and Conduct of All 'Major' Risk Assessments within a Risk Analysis Framework: U. S. Food and Drug Administration. Chappelle, A., Crama, Y., Hubner, G., & Peters, J. P. (2004). Basel II and Operational Risk:
Implications for risk measurement and management in the financial sector (No. NBB Working Paper no. 51): National Bank of Belgium. Chatfield, R. E., Moyer, C. R., & Sisneros, P. M. (1989). The Accuracy of Long-Term earnings Forecasts for Industrial Firms. Quarterly Journal of Business Economics, 28, 91-104.
Checkland, P. (1981). Systems Thinking, Systems Practice. Chichester: Wiley. Cho, H.-K., Turoff, M., & Hiltz, S. R. (2003). The Impacts of Delphi Communication Structure on
Small and Medium Sized Asynchronous Groups: Preliminary Results. Paper presented at the Proceedings of the Thirty-Sixth Hawaii International Conference on System Sciences, Hawaii. Churchman, C. W. (1971). The Design of Inquiring Systems: Basic Concepts of Systems and Organization. New York: Basic Books. Clausing, D. (1994). Total Quality Development: a step-by-step guide to World-Class Concurrent Engineering. New York: Asme Press. Clemen, R. T., & Winkler, R. L. (1999). Combining Probability Distributions From Experts in Risk Analysis. Risk Analysis, 19(2), 187-203. Coleman, R. (2000). Using Modelling in Operational Risk Management. Paper presented at the Conference "Operational Risk in Retail Financial Services", London. Connolly, J. M. (1996). Taking a Strategic Look at Risk: Marsh & McLennan Companies. Connolly, T., Jessup, L. M., & Valacich, J. S. (1990). Effects of Anonymity and Evaluative Tone on Idea Generation in Computer-Mediated Groups. management Science, 36(6), 689-703. Cooke, R. M. (1991). Experts in uncertainty; opinion and subjective probability in science. New York: Oxford University Press. Cooke, R. M., & Goossens, L. H. J. (2000). Procedures Guide for Structured Expert Judgement (No. EUR 18820). Brussels-Luxembourg: European Commission. Cooke, R. M., & Goossens, L. H. J. (2002). Procedures Guide for Structured Expert Judgement in Accident Consequence Modelling. Radiation Protection Dosimetry, 90(3), 303-309. Cooke, R. M., & Goossens, L. H. J. (2004). Expert judgement elicitation for risk assessments of critical infrastructures. Journal of Risk Research, 7(6), 643-156. Cooper, L. (1999). Operational Risk - Fear the worst. Risk Magazine. Cross, N. (2000). Engineering design methods: strategies for product design: Wiley & Sons. Cruz, M. (2002). Modeling, measuring and hedging operational risk: Wiley Finance. Cruz, M., Coleman, R., & Salkin, G. (1998). Modeling and Measuring Operational Risk. Journal
of Risk, 1, pp. 63-72.
Cumming, C., & Hirtle, B. (2001). The Challenges of Risk Management in Diversified Financial
Companies: Federal Reserve Bank of New York Economic Policy Review. Dahlbäck, O. (2003). A Conflict Theory of Group Risk Taking. Small Group Research, 34(3), 251-289. Darlington, R. B. (2002). Some Works of Richard B. Darlington. Davison, R., & Vreede, G. J. d. (2001). The Global Application Of Collaborative Technologies.
Communications of the ACM, 44(12), 69-73. Dechow, P. M., & Sloan, R. G. (1997). Returns to Contrarian Investment Strategies: Tests of Naive Expectations Hypotheses. Journal of Financial Economy, 43, 3-27. Delbecq, A. L., Ven, A. H. v. d., & Gustafson, D. H. (1975). Group Techniques for Program
Planning: Glenview, IL: Scott Foresman. Dennis, A. R., & Gallupe, R. B. (1993). A History of GSS empirical research: Lessons Learned and Future Directions. In L. in: Jessup & J. Valacich (Eds.), Group Support Systems: New
Perspective (pp. 59-77). New York. Dennis, A. R., Hayes, G. S., & Daniels, R. M. (1994). Re-Engineering business process modeling. Proceedings of the Twenty-Seventh Hawaii International Conference on Systems Sciences,
4, 244-253. Dennis, A. R., Heminger, A. R., Nunamaker, J. F., & Vogel, D. R. (1990). Bringing automated support to large groups: the Burr–Brown experience. Information & Management, 18, 111-121. Dennis, A. R., Nunamaker, J. F., & Vogel, D. R. (1991). A Comparison of Laboratory and Field Research in the Study of Electronic Meeting Systems. Journal of Management Information
Systems, 7(3), 107-135. Dennis, A. R., Tyran, C. K., Vogel, D. R., & Nunamaker, J. F. (1997). Group Support Systems for Strategic Planning. Journal of Management Information Systems, 14(1), 155-184. Dennis, A. R., Valacich, J. S., Connolly, T., & Wynne, B. E. (1996). Process Structuring in Electronic Brainstorming. Information systems research, 7(2), 268-277.
DeSanctis, G., Poole, M. S., Dickson, G. W., & Jackson, B. M. (1993). Interpretive analysis of team use of group technologies. Journal of Organizational Computing and Electronic Commerce,
3(1), 1-29. DeSanctis, G., Poole, M. S., Lewis, H., & Desharnais, G. (1992). Using computing in quality team meetings: initial observations from the IRS–Minnesota project. Journal of
Management Information Systems, 8(3), 7-26. DeVellis, R. F. (2003). Scale Development: Theory and Applications. Beverly Hills and London: Sage Publications. Diehl, M., & Stroebe, W. (1987). Productivity Loss in Brainstorming Groups: Toward the Solution of a Riddle. Journal of Personality and Social Psychology, 53(3), 497-509. Doerig, H. U. (2000). Operational risk in financial services: an old challenge in a new environment. Easley, R. F., Devaraj, S., & Crant, M. (2003). Relating Collaborative Technology Use to Teamwork Quality and Performance: An Empirical Analysis. Journal of Management
Information Systems, 19(4), 247-268. Eeten, R. v. (2001). Implementatie van de Methode risicoanalyse in een Group Decision Room. Den Haag, The Netherlands: Rijksgebouwendienst (in Dutch). Eijck, D. T. T. (1996). Desiging Organizational Coordination. Unpublished Ph.D., Delft University of Technology, Delft. Ernst, & Young. (2003). Eindrapportage ORM survey: De status van de ORM Implementaties bij in Nederland opererende banken en de keuzes die daarbij gemaakt zijn: Ernst & Young Treasury & Financial Risk management (in Dutch). FAA. (2000). System Safety Handbook: Operational Risk Management. In System Safety
Handbook: Federal Aviation Authority. Finlay, M., & Kaye, J. (2002). Emerging Trends in Operational Risk within the financial services industry. London: Raft International. Fjermestad, J., & Hiltz, S. R. (2000). A Descriptive Evaluation of Group Support Systems Case and Field Studies. Journal of Management Information Systems, 17(3), 115-159.
Fjermestad, J., & Hiltz, S. R. (2001). Group Support Systems: A Descriptive Evaluation of Case and Field Studies. Journal of Management Information Systems, 17(3), 115-160. Frachot, A., & Roncalli, T. (2002). Mixing Internal and External data for managing operational risk.
Working paper: Groupe de Recherche Operationnelle, Credit Lyonnais. FSA. (2003). Building a framework for operational risk management: the FSA's observations. London: The Financial Services Authority. GAIN. (2004). COSO Impact on Internal Auditing, 2005, from www.gain2.org Galliers, R. (1992). Information systems research; issues, methods and practical guidelines. Oxford: Blackwell Scientific. Galliers, R. D. (1991). Choosing appropriate information systems research approaches: a revised taxonomy. In H. E. Nissen, H. K. Klein & R. Hirscheim (Eds.), Information systems
research: Contemporary approaches and emergent traditions (pp. 327-345). North-Holland: Elsevier Science Publishers. Gallupe, R. B. (1990). Suppressing the Contribution of the Group's Best Member: is GDSS Use Appropriate for All Group Tasks? Gallupe, R. B., DeSanctis, G., & Dickson, G. W. (1988). Computer-Based Support for Group Problem-Finding: An Experimental Investigation. MIS Quarterly, 12(2), 277-296. Genuchten, M. v., Cornelissen, W., & Dijk, C. v. (1998). Supporting Inspections with an Electronic Meeting System. Journal of Management Information Systems, 14(3), 165-178. Genuchten, M. v., Dijk, C., Scholten, H., & Vogel, D. (2001). Using Group Support Systems for Software Inspections. IEEE Software, 60-65. George, J. F., Easton, G. K., Nunamaker, J. F., & Northcraft, G. B. (1990). A Study With Collaborative Group Work with and without Computer-Based Support. Information
systems research, 1(4), 394-415. George, J. F., Nunamaker, J. F., & Valacich, J. S. (1992). Electronic meeting systems as innovation: a study of the innovation process. Information & Management, 22, 197-195. Geus, A. d. (1998). The living company. London: Nicholas Brearley Publishing. Goossens, L. H. J., & Cooke, R. M. (2001). Expert Judgement Elicitation in Risk Assessment. Paper presented at the Assessment and Management of Environmental Risks.
Goossens, L. H. J., & Gelder, P. H. A. J. M. v. (2002). Fundamentals of the Framework for Risk Criteria of Critical Infrastructures in The Netherlands. In Probabilistic Safety Assessment
and Management (pp. 1929-1934). Porto Rico / USA: Elsevier. Grinsven, J. H. M. v. (2001). Collaborative Engineering: Managing the Product Creation Process on a Global Scale (Master thesis). Amsterdam: Vrije Universiteit. Grinsven, J. H. M. v. (2007). Improving Operational Risk Management (doctoral dissertation). Delft University of Technology, Faculty of Technology, Policy and Management. Systems Engineering group. Grinsven, J. H. M. v. (2009). Risk Management in Financial Institutions (Vol. 1). Amsterdam: (forthcoming). Grinsven, J. H. M. v., Ale, B., & Leipoldt, M. (2006). Ons Overkomt Dat Niet: risicomanagement bij financiele instellingen. Finance Incorporated, 6, pp.19-21 (in Dutch). Grinsven, J. H. M. v., Janssen, M., & Houtzager, M. (2005). Operationeel Risico Management als Shared Business Process. Integrale View of een Papieren Tijger. IT Monitor, 7, pp. 1316 (in Dutch). Grinsven, J. H. M. v., Janssen, M., & Vries, H. d. (2007). Collaboration Methods and Tools for Operational Risk Management. In N. Kock (Ed.), Encyclopedia of E-collaboration: Idea Group Reference (forthcoming). Grinsven, J. H. M. v., & Santvoord, K. (2006). Operational Risk Management en SOx: Wetmatigheid in risico's. Finance Incorporated, 8, pp.7-9 (in Dutch). Grinsven, J. v. (2003). Collaborative Distributed Risk Management. Paper presented at the Euro / Informs Operations Research International Meeting, Istanbul / Turkey. Grinsven, J. v., & Vreede, G. J., de.,. (2002a). Collaborative Engineering: Towards Design Guidelines for Risk Management in Distributed Software Engineering. Paper presented at the Design 2002, Dubrovnic, Croatia. Grinsven, J. v., & Vreede, G. J. d. (2002). Evaluatie R&CSA Pilot: Nationale Nederlanden / AOV -
Individueel: Technische Universiteit Delft, Faculteit Techniek Bestuur en Management, Sectie Systeemkunde.
Grinsven, J. v., & Vreede, G. J. d. (2002b). Evaluatie R&CSA Pilot: Nationale Nederlanden / AOV
- Individueel: Technische Universiteit Delft, Faculteit Techniek Bestuur en Management, Sectie Systeemkunde (in Dutch). Grinsven, J. v., & Vreede, G. J. d. (2003a). Addressing Productivity Concerns
through Repeatable Distributed Collaboration Processes. Paper presented at the 36th Annual Hawaii International Conference on System Sciences, Big Island, Hawaii. Grinsven, J. v., & Vreede, G. J. d. (2003b). Productief Operationeel Risico Management op afstand met Group Support Systems. IT-monitor, 6, pp. 12-15 (in Dutch). Grohowski, R., McGoff, C., Vogel, D., Martz, B., & Nunamaker, J. F. (1990). Implementing Electronic Meeting Systems at IBM: lessons learned and success factors. MIS Quarterly,
14(4), 369-382. Haan, C. B. d., Chabre, G., Lapique, F., Regev, G., & Wegmann, A. (1999). Oxymoron, a Non-
Distance Knowledge Sharing Tool for Social Science Students and Researcher. Paper presented at the International ACM SIGGROUP Conference on Supporting Group Work, Phoenix, Arizona, USA. Hackman, J. R., & Vidmar, N. (1970). Effects of Size and Task Type on Group Performance and Member Reactions. Sociometry, 33, 37-54. Hammitt, J. K., & Shlykhter, A. I. (1999). The Expected Value of Information and the Probability of Surprise. Risk Analysis, 19(1), 135-152. Harmantzis, F. (2003, February). Operational Risk Management: Risky Business. ORMS Today,
30, 30-36. Harnack, R. V., Fest, T. B., & Jones, B. S. (1977). Group Discussion: theory and technique. Englewood Cliffs: Prentice-Hall. Harris, R. (2002). Emerging Practices in Operational Risk Management. New York: Federal Reserve Bank of Chicago. Harvey, N. (2001). Improving Judgment in Forecasting. In J. S. Armstrong (Ed.), Principles of
Forecasting: A handbook for Researchers and Practitioners (pp. 59-80).
Boston/Dordrecht/London: Kluwer Academic Publishers. Haubenstock, M. (2001). The Evolving Operational Risk Management Framework. The RMA
Journal (December 2001 - January 2002), 18-21.
Hawkins, N. C., & Evans, J. S. (1989). Subjective Estimation of Toluene Exposures: A Calibration Study of Industrial Hygienists. Applied Industrial Hygiene, 4, 61-68. Heath, C., & Gonzales, R. (1995). Interaction with others increases decision confidence but not decision quality: Evidence against information collection views of interactive decision making. Organizational Behavior and Human Decision Processes, 61, 305-326. Hengst, M. d., & Adkins, M. (2004). The Demand Rate of Facilitation Functions. Paper presented at the Proceedings of the Thirty-Eighth Hawaii International Conference on Systems Sciences, Hawaii. Hengst, M. d., & Vreede, G. J. d. (2004). Collaborative Business Engineering: A Decade of Lessons from the Field. Journal of Management Information Systems, 20(4), 85-113. Herik, C. W. v. d. (1998). Group support for policy making. S.l.: S.n. Hildebrand, D. K., Laing, J. D., & Rosenthal, H. (1977). Analysis of ordinal data (Vol. 07-008). Beverly Hills and London: Sage Publications. Hill, G. W. (1982). Group vs Individual Performance: Are N + 1 heads better than one?
Psychological Bulletin, 91, 517-539. Hiwatashi, J., & Ashida, H. (2002). Advancing Operational Risk Management Using Japanese Banking Experiences: Federal Reserve Bank of Chicago. Hoffman, D. D. (2002). Managing Operational Risk: 20 Firmwide Best Practice Strategies (1 ed.): Wiley. Howard, M. S. (1994). Quality of Group Decision Support Systems, a comparison between GDSS and traditional group approaches for decision tasks. TU Delft, Delft. Huber, G. P. (1980). Managerial Decision Making. Management Application Series. Hulet, D. T., & Preston, J. Y. (2000). Garbage In, Garbage Out? Collect Better Data for Your Risk
Assessment. Paper presented at the Proceedings of the Project Management Institute Annual Seminars & Symposium, Houston, Texas, USA. Jaafari, A. (2001). Management of risks, uncertainties and opportunities on projects: time for a fundamental shift. International Journal of Project Management, 19, 89-101. Janis, I. L. (1972). Victims of groupthink. Boston: Houghton Mifflin.
Janssen, M. (2001). Designing Electronic Intermediaries. Unpublished Ph.D, Delft University of Technology, Delft. Jessup, L. M., Connolly, T., & Galegher, J. (1990). The Effects of Anonymity on GDSS Group Process With an Idea-Generating Task. MIS Quarterly, 24(4), 313-321. Johnson, V. E., & Albert, J. H. (1999). Ordinal data modeling. New York: Springer. Kaplan, S. (1990). Expert information versus expert opinions: Another approach to the problem of eliciting/combining/using expert knowledge in probabilistic risk analysis.
Journal of Reliability Engineering and System Safety, 39. Karakowsky, L., & Elangovan, A. R. (2001). Risky Decision Making in Mixed-Gender Teams: Whose Risk Tolerance Matters? Small Group Research, 32(1), 94-111. Karow, C. (2002). Operational Risk: Ignore It at Your Peril Kaur, S. (2002, July 9). Online Fraud: Hacker moved $62,000 in just an hour - China national then withdrew money, fled to Malaysia. The Straits Times. Keil, M., Wallace, L., Turk, D., Dixon-Randall, G., & Nulden, U. (2000). An investigation of risk perception and risk propensity on the decision to continue a software developement project. The journal of Systems and Software, 53, 145-157. King, J. L. (2001). Operational risk: measurement and modelling. New York: Wiley Finance. Kock, N., & McQueen, R. J. (1998). An action research study of effects of asynchronous groupware support on productivity and outcome quality in process redesign groups.
Journal of Organizational Computing and Electronic Commerce, 8(2), 149-168. Kuritzkes, A., & Scott, H. S. (2002). Sizing Operational Risk and the Effects of Insurance: Implications
for the Basel II Accord. Paper presented at the International Financial Colloquium on Capital Adequacy. Laere, J. v. (2003). Coordinating distributed work Exploring situated coordination with gamingsimulation. Delft: University of Technology. Limayem, M., Lee-Partridge, J. L., Dickson, G. W., & DeSanctis, G. (1993). Enhancing GDSS Effectiveness: Automated versus Human Facilitation. 95-101. Linstone, H. A., & Turoff, M. (1975). The Delphi Method: Techniques and Applications: Reading, MA: Addison-Wesley.
192
References Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. NJ: Englewood Cliffs: Prentice-Hall. Lohman, F. A. B. (1999). The effectiveness of management information a design approach to contribute to organizational control. Delft: Delft University of Technology. Lowry, P. B., & Nunamaker, J. F. J. (2002a). Using the thinkLet framework to improve distributed
collaborative writing. Paper presented at the Proceedings of the 35th Hawaii International Conference on System Sciences, Hawaii. Lowry, P. B., & Nunamaker, J. F. J. (2002b). Synchronous, Distributed Collaborative Writing for Policy Agenda Setting. Using Collaboratus, an Internet-Based Collaboration Tool. Paper presented at the Proceedings of the 35th Hawaii International Conference on System Sciences, Hawaii. Lum, S. (2003 a, September 11th). Brewery man now faces charges totalling $116m. The Straits
Times. Lum, S. (2003 b, February 9). Secretary stole $1.5m from boss in cheque scam. The Sunday Times. Lyytinen, K., Mathiassen, L., & Ropponen, J. (1998). Attention Shaping and Software Risk - A Categorical Analysis of Four Classical Risk Management Approaches. Information systems
research, 9(3), 233-255. MacGregor, D. G. (2001). Decomposition for Judgmental Forecasting and Estimation. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners (pp. 107-123). Boston/Dordrecht/London: Kluwer Academic Publishers. Martin, P. H. (2003). Qualitative vs Quantitative Approaches to Operational Risk: Building an Effective Qualitative Operational Risk Structure to gain Competetive Advantage in the Market. Paper presented at the GARP's 3rd Operational Risk Seminar, London. McDonnell, W. (2002). Managing Risk: Practical lessons from recent "failures" of EU insurers (Vol. 20): Financial Services Authorithy. McGoff, C., Hunt, A., Vogel, D., & Nunamaker, J. F. (1990). IBM’s experiences with GroupSystems. Interfaces, 20(6), 39-52. McGrath, J. E. (1984). Groups: interaction and performance. Englewood Cliffs: Prentice-Hall.
193
References McKay, M., & Meyer, M. (2000). Critique of and limitations on the use of expert judgements in accident consequence uncertainty analysis. Radiation Protection Dosimetry, 90(3), 325-330. McQuaid, M. J., Briggs, R. O., Gillman, D., Hauck, R., Lin, C., Mittleman, D. D., et al. (2000).
Tools for distributed facilitation. Paper presented at the Proceedings of the 33rd Annual Hawaii International Conference on Systems Sciences, Hawaii. Medova, E. A. (2003). Operational Risk Capital Allocation and Integration of Risks. In Advances
in Operational Risk: Firmwide issues for financial institutions (2nd ed., pp. ch 6). Medova, E. A., & Kyriacou, M. N. (2001). Extremes in operational risk management. In M. D. (ed). (Ed.), Judge Institute Working Paper, Published in: Risk Management: Value at Risk and
Beyond: Cambridge University Press. Meel, J. W. v. (1994). The Dynamics of Business Engineering: Reflections on two case studies within the Amsterdam Municipal Police Force. Dordrecht: Van Meel. Mejias, R., Shepherd, M. M., Vogel, D. R., & Lazaneo, L. (1997). Consensus and Perceived Satisfaction Levels: A Cross-Cultural Comparison of GSS and Non-GSS Outcomes within and between the United States and Mexico. Journal of Management Information
Systems, 13(3), 137-161. Mennecke, B. E., & Wheeler, B. C. (1993). Tasks Matter: Modeling Group Task Processes in Experimental CSCW Research. Miller, N. E. (1950). Effects of group size on group process and member satisfaction. University of Michigan, Ann Harbor. Mitroff, I. I., Betz, F., Pondy, L. R., & Sagasti, F. (1974). On Managing Science in the System Age: Two schemes for the study of science as a whole systems phenomenon. TIMS
Interfaces, 4(3), pp. 46-58. Mittleman, D. D., Briggs, R. O., & Nunamaker, J. F. J. (2000). Best practices in Facilitating Virual Meetings: Some Notes from Initial Experience. Group Facilitation: A research and
Applications Journal, 2(2), p. 5-14. Mittleman, D. D., Briggs, R. O., Nunamaker, J. F. J., & Romano, N. C. (1999). Lessons learned
From Synchronous Distributed GSS Sessions: Action Research at the U.S. Navy Third Fleet. Paper presented at the Proceedings of the 10th EuroGDSS Workshop.
194
References Muermann, A., & Oktem, U. (2002). The Near-Miss Management of Operational Risk. The
Journal of Risk Finance, 4(1). Murphy, A. H., & Winkler, R. L. (1974). Subjective Probability forecasting experiments in meteorology: some preliminary results. Bulletin of the American Meteorological Society, 55, 1206-1216. Murphy, A. H., & Winkler, R. L. (1977). Reliability of subjective probability forecasts of precipitation and temperature. Applied Statistics, 26, 41-47. Netemeyer, R. G., Bearden, W. O., & Sharma, S. (2003). Scaling Procedures: Issues and Applications. Beverly Hills and London: Sage Publications. Niederman, F., Beise, C. M., & Beranek, P. M. (1993). Facilitation Issues in Distributed Group Support Systems,. Communications of the ACM, 299-312. Nunamaker, J. F., Applegate, L. M., & Konsynski, B. R. (1988). Computer-aided Deliberation: Model Management and Group Decision Support. Journal of Operations Research, 36(6), 826-848. Nunamaker, J. F., Dennis, A. R., Valacich, J. S., Vogel, D. R., & George, J. F. (1991). Electronic Meeting Systems to Support Group Work. Communications of the ACM, 34(7), 40-61. Nunamaker, J. F., Vogel, D., Heminger, A., Grohowski, R., & McGoff, C. (1989a). Group support
systems in practice: experience at IBM. Paper presented at the Proceedings of the TwentySecond Hawaii International Conference on Systems Sciences. Nunamaker, J. F., Vogel, D., Heminger, A., Martz, B., Grohowski, R., & McGoff, C. (1989b). Experiences at IBM with Group Support Systems: A Field Study. Decision Support
Systems, 5(2), 183-196. Nunamaker, J. F. J., Briggs, R. O., Mittleman, D., Vogel, D., & Balthazard, P. A. (1997). Lessons from a Dozen Years of Group Support Systems Research: A Discussion of Lab and Field Findings. Journal of Management Information Systems, 13(3), 163-207. O'Brien, N., Smith, B., & Allen, M. (1999). Models: The case for quantification. Risk Magazine. Ocker, R., Hiltz, S. R., Turof, M., & Fjermestad, J. (1996). The Effects of Distributed Group Support and Process Structuring on Software Requirements Development Teams: Results on Creativity and Quality. Journal of Management Information Systems, 12(3), 127153. 195
References Oldfield, G., & Santomero, A. M. (1997). The Place of Risk Management in Financial Institutions (No. 95-05 B): Wharton School Center for Financial Institutions, University of Pennsylvania. Onna, M. v., & Koning, A. (2004). De kleine Prince 2: Gids voor projectmanagement (Derde, geheel herziene editie ed. Vol. 5): ten Hagen Stam (in Dutch). Orlikowski, W. J., & Baroudi, J. J. (1991). Studying information technology in organizations: research approaches and assumptions. Information systems research, 2(1), pp.1-28. Pinsonneault, A., & Kraemer, K. (1989). The Impact of Technological Support on Groups, An Assessment of the Empirical Research. Decision Support Systems, 5(2), 197-216. Power, M. (2003). The Invention of Operational Risk (No. Discussion paper no. 16). London: ESRC Centre for Analysis of Risk and Regulation. the London School of Economics and Political Science. Pulkkinen, U., & Simola, K. (2000). An expert panel approach to support risk-informed decision-making (No. STUK-YTO-TR172). Helsinki, Finland: Radiation and Nuclear Safety Authority of Finland (STUK). Pyle, D. H. (1997, May 17-19). Bank risk management: theory. Paper presented at the Conference on Risk Management and Regulation in Banking, Jerusalem. Quaddus, M. A., Tung, L. L., Chin, L., Seow, P. P., & Tan, G., C. (1998). Non-Networked Group
Decision Support System: Effects of Devil's Advocacy and Dialectical Inquiry. Paper presented at the Hawaiian Conference on System Sciences. Questa, G. S. (2002). Lessons from the Headlines Qureshi, S., & Vogel, D. (2001). Organizational Adaptiveness in Virtual Teams. Group Decision
and Negotiation, 10(1), 27-46. Ramadurai, K., Beck, T., Scott, G., Olson, K., & Spring, D. (2004). Operational Risk Management
& Basel II Implementation: Survey Results. New York: Fitch Ratings Ltd. Ramadurai, K., Olson, K., Andrews, D., Scott, G., & Beck, T. (2004). The Oldest Tale but the Newest Story: Operational Risk and the Evolution of its Measurement under Basel II. New York: Fitch Ratings Ltd. Reinig, B. A. (2003). Toward an Understanding of Satisfaction with the Process and Outcomes of Teamwork. Journal of Management Information Systems, 19(4), 65-84.
196
References Reinig, B. A., Briggs, R. O., & Vreede, G. J. d. (2003). General Meeting Assessment Survey: short
version (No. 09). RMA. (2000). Operational Risk: The Next Frontier. The Journal of Lending & Credit Risk
Management(March), 38-44. Robbins, S. P. (1998). Organizational Behavior (8 ed.): Prentice-Hall, Inc. Romano, N., Nunamaker, J. F. J., Briggs, R. O., & Mittleman, D. (1999). Distributed GSS
Facilitation and Participation: Field Action Research. Paper presented at the 32nd Hawaii International Conference on System Sciences, Hawaii. Romano, N. C., Chen, F., & Nunamaker, J. F. J. (2002). Collaborative Project Management Software. Paper presented at the Hawaii International Conference on System Sciences, Hawaii. Rosengren, E. (2003). Operational Risk: Presentation at World Bank Seminar: Assessing, Managing and Supervising Financial Risk: Federal Reserve Bank of Boston. Rowe, G., & Wright, G. (2001). Expert Opinions in Forecasting: The Role of the Delphi Technique. In J. S. Armstrong (Ed.), Principles of Forecasting: A handbook for Researchers and
Practitioners (pp. 125-144). Boston/Dordrecht/London: Kluwer Academic Publishers. Rutkowski, A. F., Vogel, D., Bemelmans, T. M. A., & Genuchten, M. v. (2002). Group Support Systems and Virtual Collaboration: The HKNET Project. Group Decision and Negotiation,
11, 101-125. Sagasti, F. R., & Mitroff, I. I. (1973). Operations research from the viewpoint of General Systems Theory. OMEGA; International jounal of management science, 1(6), pp.695-709. Santanen, E. L., Briggs, R. O., & Vreede, G. J. d. (2004). Causal Relationships in Creative Problem Solving: Comparing Facilitation Interventions for Ideation. Journal of
Management Information Systems, 20(4), 167-197. Scarff. (2003). ORM: Operational Risk Management: Naval Safety Center. Seah, T. (2004). Understanding and Auditing Operational Risk Management. Seligman, P. S., Wijers, G. M., & Sol, H. G. (1989). Analyzing the structure of I.S. Methodologies an
alternative approach. Paper presented at the First Dutch conference on Information Systems, Amersfoort, the Netherlands.
197
References Shaw, G. J. (1998). User Satifaction in Group Support Systems Research: A Meta-Analysis of Experimental Results. Paper presented at the Hiccs thirty first. Shaw, M. E. (1981). Group dynamics. New York: McGraw-Hill. Shyan, L. S. (2003, July 29). SembLog uncovers fraud at India unit. The Straits Times. Sih, J., Samad-Khan, A. H., & Medapa, P. (2000). Is the size of an operational risk related to firm size? Operational Risk(January). Simon, H. A. (1977). The New Science of Management Decision (revised edition). Englewood Cliffs: Prentice-Hall. Simons, G. F. (1994). Conceptual modeling versus visual modeling: a technological key to building consensus. Paper presented at the Consensus ex Machina Joint International Conference of the Association for Literary and Linguistic Computing and the Association for Computing and the Humanties, Paris, 19-23 April. Smith, A. (1776, 1963). The Wealth of Nations (Vol. 1). Illinois: Homewood. Sol, H. G. (1982). Simulation in information systems development. Z. pl.: Rijks Universiteit Groningen. Sol, H. G. (1990). Information Systems Development: A Problem Solving Approach. Paper presented at the Proceedings of the International Symposium on System Development Methodologies, Atlanta. Shifting Boundaries in Systems Engineering and Policy Analysis, Inaugural address, (1992). Stewart, T. R. (2001). Improving Reliability of Judgmental Forecasts. In J. S. Armstrong (Ed.),
Principles of Forecasting: A handbook for Researchers and Practitioners (pp. 81-106). Boston/Dordrecht/London: Kluwer Academic Publishers. Swanborn, P. G. (2000). Case-study's wat, wanneer en hoe? (2e dr. ed.). Amsterdam: Boom (in Dutch). 't Hart, H., Van Dijk, J., De Goede, M., Jansen, W., & Teunissen, J. (1998). Onderzoeksmethoden (3 ed.). Amsterdam: Boom (in Dutch). Toft, B., & Reynolds, S. (1997). Learning from Disasters: A Management Approach. Leicester: Perpetuity Press Ltd. Trauth, E. M., & Jessup, L. (2000). Understanding Computer-Mediated Discussions: Positivist and Interpretive Analysis of Group Support System Use. MIS Quarterly, 24(1), 43-79. 198
References Tripp, M. H., Bradley, H. L., Devitt, R., Orros, G. C., Overton G. L., Pryor, L. M., et al. (2004).
Quantifying Operational Risk in General Insurance Companies: Institute of Actuaries. Turban, E., Aronson, J. E., & Bolloju, N. (2001). Decision Support Systems and Intelligent Systems (6th ed. ed.). New Jersy: Prentice-Hall. Turner, J. R. (1999 b). The handbook of Project Based Management: McGraw-Hill, Maidenhead. Turner, W. S. (1980 a). Project Auditing Methodology, North Holland, Amsterdam. UCLA. (2002). Statistical Computing Resources. California: The University of California. Valacich, J., Nunamaker, J. F. J., & Vogel, D. (1994). Physical proximity effects on computermediated group idea generation. Small Group Research, 25(1), 83-104. Valacich, J., & Schwenk, C. (1995). Devil's Advocacy and Dialectical Inquiry Effects on Faceto-Face and Computer Mediated Group Decision Making. Organizational Behavior and
Human Decision Processes, 63(2), 158-173. Valacich, J. S., Vogel, D. R., & Nunamaker, J. F., Jr.,. (1989). Integrating Information Across Sessions
and Between Groups in GDSS. Paper presented at the 22nd Annual Hawaii International Conference on Systems Science, Hawaii. Verbraeck, A. (1991). Developing an adaptive scheduling support environment. Delft University of Technology, the Netherlands. Versteegt, C. (2004). Holonic Control For Large Scale Automated Logistic Systems. Delft University of Technology, the Netherlands. Vocht, A., de. (1999). Basishandboek SPSS 8&9 voor windows 95&98 (1 ed.). Utrecht: Bijleveld Press. Vogel, D., Nunamaker, J. F., Martz, B., Grohowski, R., & McGoff, C. (1990). Electronic meeting systems experience at IBM. Journal of Management Information Systems, 6(3), 25-43. Vreede, G. J. d. (1995). Facilitating organizational change the participative application of dynamic modelling. Delft: De Vreede. Vreede, G. J. d. (2000). A Field study into the Organizational Application of Group Support Systems. Journal of information Technology Cases & Applications, 2(4), 27-47.
199
References Vreede, G. J. d., Boonstra, J., & Niederman, F. (2002). What Is Effective GSS Facilitation? A
Qualitative Inquiry Into Participant's Perceptions. Paper presented at the Proceedings of the Hawaiian Conference on System Sciences, Hawaii. Vreede, G. J. d., & Briggs, R. O. (1997). Meetings of the Future, Enhancing Group Collaboration with Group Support Systems. Journal of Creativity and Innovation Management,
6(2), 106-116. Vreede, G. J. d., & Briggs, R. O. (2001, 4-7 June). thinkLets: Five examples of creating patterns of group
interaction. Paper presented at the Proceedings of the 2001 Group Decision & Negotiation Conference, La Rochelle, France. Vreede, G. J. d., & Bruijn, H. d. (1999). Exploring the Boundaries of Successful GSS Application: Supporting Interorganizatinal Policy Networks. The DATA BASE for
Advanced in Information Systems, 30(3,4), 111-129. Vreede, G. J. d., Davison, R. M., & Briggs, R. O. (2003). How a Silver Bullet May Lose Its Shine: Learning from Failures with Group Support Systems. Communications of the ACM,
46(8), 96-101. Vreede, G. J. d., & Dickson, G. W. (2000). Using GSS to Design Organizational Processes and Information Systems: An Action Research Study on Collaborative Business Engineering. The Netherlands: Kluwer Academic Publishers. Vreede, G. J. d., & Muller, P. (1997). Why Some GSS Meetings Just Don't Work:Exploring Success
Factors of Electronic Meetings. Paper presented at the Proceedings of the 7 th European Conference on Information Systems (ECIS), Cork, Ireland,. Vreede, G. J. d., Vogel, D., Kolfschoten, G., & Wien, J. (2003). Fifteen Years of GSS in the Field: A
Comparison Across Time and National Boundaries. Paper presented at the 36th Hawaii International Conference on System Sciences, Los Alamitos. Vreede, G. J. d., & Wijk, W. v. (1997a). A Field Study Into The Organizational Application Of Group
Support Systems. Paper presented at the Proceedings of the 1997 ACM SIGCPR conference on Computer personnel research, San Fransisco, California, United States. Walter, S. (2004). Outlining the Qualifying Criteria for the AMA approach and Understanding What the Regulators are looking for. New York: Federal Reserve Bank of New York.
200
References Weatherall, A., & Hailstones, F. (2002). Risk Identification and Analysis using a Group Support System
(GSS). Paper presented at the Proceedings of the 35th Hawaii International Conference on System Sciences, Hawaii. Wijers, G. M. (1991). Modelling Support in Information Systems Development. Delft University of Technology, the Netherlands. Wijk, W. B. v. (1996a). Onderzoeksopzet Value Analysis van Group Support Systemen binnen Nationale-
Nederlanden Schade/Zorg. Delft, The Netherlands: Delft University of Technology, Section Systems Engineering (in Dutch). Wijk, W. B. v. (1996b). Onderzoek naar Group Support Systemen binnen Nationale-Nederlanden
Schade/Zorg. (Master thesis). Delft, The Netherlands: Delft University of Technology (in Dutch). Winkler, R. L., & Poses, R. M. (1993). Evaluating and combining physicians probabilities of survival in an intensive care unit. management Science, 39, 1526-1543. Yasuda, Y. (2003). Application of Bayesian Inference to Operational Risk Management: University of Tsukuba. Yin, R. K. (1994). Case Study Research: Design and methods (2 ed. Vol. 5): Sage Publications. Young, B., Blacker, K., Cruz, M., King, J., Lau, D., Quick, J., et al. (1999). Understanding Operational Risk: A consideration of main issues and underlying assumptions.
Operational Risk Research Forum. Young, B. J. (1999). Operational Risk: Towards a Standard Methodology for Assessment and Improvement. Zigurs, I., & Buckland, B. K. (1998). A Theory of Task/Technology Fit and Group Support Systems Effectiveness. MIS Quarterly, September.
201
202
Summary
Operational risk management is an essential part of the economic activities and economic development in financial institutions. It supports decision makers in making informed decisions based on a systematic assessment of operational risk. Operational risk has only recently emerged as a major type of risk, and there are few methods and tools available to help identify, quantify, and manage it. Operational risk does not lend itself to traditional risk management approaches because almost all instances of operational risk losses result from complex and nonlinear interactions among risk and business processes.
By the end of the 1990s, many financial institutions focused their risk management efforts on operational risk management. This was mainly motivated by the volatility of today’s marketplace, costly catastrophes such as Barings and Daiwa, decentralization and e-commerce. Moreover, there is increasing regulatory, operational, and strategic pressure on financial institutions to manage their operational risk adequately within a reliable framework. In response to this, several initiatives have been taken to manage operational risk. Due to difficulties with loss data, most of these initiatives focus on using expert judgment to provide the input to estimate the level of exposure to operational risk. Although these initiatives have helped financial institutions to improve their operational risk management, we argue that the way in which the improvements are made is not effective, efficient and satisfying.
In this research project we focus on an alternative to improve operational risk management that is expected to be more effective, efficient and satisfying: Multiple Expert Elicitation and Assessment (MEEA). Multiple experts will be utilized in operational risk management. The value of their output will provide financial institutions with the input to estimate their exposure to operational risk. The method to achieve this will be used consistently.
Despite the potential for using expert judgment, very few financial institutions utilize multiple expert judgment to provide them with the input to estimate their exposure to operational risk. There are a number of issues that hinder the implementation of expert judgment. They can be summarized as: process issues, support issues and organizational issues. We try to solve a number of the issues regarding utilizing multiple expert judgment in operational risk management. Dealing with these issues should make it easier to develop and implement the use of expert judgment in financial institutions.
Research objective
Improving operational risk management is a complex and challenging activity. Many actors are involved in the design process, the data collection method must be robust and auditable, the processes, techniques and technology to support multiple expert judgment activities are new and not proven in practice, and the output from multiple experts has to be objective. We want to improve operational risk management for financial institutions. The research objective is formulated as: develop an approach to improve the process of utilizing expert judgment in operational risk management. Four research questions are formulated to achieve the research objective.
• What are the generic characteristics of utilizing expert judgment in operational risk management?
• What concepts can be used to improve operational risk management by utilizing expert judgment?
• What does an approach to improve operational risk management by utilizing expert judgment look like?
• How do we evaluate the improvements that were made to operational risk management?
Research approach
We used the inductive-hypothetic model cycle as our research strategy because it has proven its use in emerging research fields in which theory is scarce. With this strategy we answer our research questions and achieve our research objective. The inductive-hypothetic strategy consists of five steps. In the first step a number of initial theories are identified and used to study a number of problem situations. To describe the relevant aspects of these situations, descriptive empirical models are used. In the second step these models are abstracted into a descriptive conceptual model, which in turn is used to describe all the relevant elements and aspects of the problem situation. In the third step a theory is formulated to solve the observed problems. The theory is presented in a prescriptive conceptual model and defines how to address the observed problems. Then, in the fourth step, the prescriptive conceptual model is implemented in several practical situations. The result of this step is a number of alternatives that provide solutions for the identified problems and are presented in prescriptive empirical models. In the fifth step, the developed
Summary theory is evaluated by comparing the descriptive empirical models to the prescriptive empirical models.
Literature review
We have conducted a literature review to identify a number of initial theories. We studied literature on operational risk management, expert judgment and group support systems. Operational risk management supports decision-makers in making informed decisions based on a systematic assessment of operational risk. Operational risk is defined as the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events. To manage operational risk, there are generally four possible courses of action: (1) accept, (2) avoid, (3) transfer and (4) mitigate. Mitigation of operational risk is the most compelling because the other possibilities do not actually reduce the risk; rather, the risk remains. Operational risk can be mitigated by a collection of internal control measures, which will only function properly if the internal control environment in which they are embedded is established appropriately in the financial institution.
We also addressed the expert judgment literature. Expert judgment is defined as the degree of belief, based on knowledge and experience, that an expert expresses in responding to certain questions about a subject. Using structured methods to utilize expert judgment provides better results than using unstructured methods. Dividing the operational risk management process into a number of phases allows initiators, experts, stakeholders and the facilitator to work simultaneously on different parts. Moreover, control over these phases is easier than controlling the whole process. Each phase can be further divided into activities. Each phase and/or activity can be carried out suboptimally, wherein inconsistency and bias play an important role. Inconsistency is a random or a-systematic deviation from the optimal, whereas bias involves a systematic deviation from the optimal. Numerous principles are discussed to carry out each phase and activity as optimally as possible, thereby minimizing inconsistency and bias.
We also discussed the group support systems literature. A group support system is defined as a socio-technical system consisting of software, hardware, meeting procedures, facilitation support, and a group of meeting participants engaged in intellectual collaborative work. Group support systems offer support for a common collection of group tasks such as diverge, converge, organize, evaluate and build consensus. Moreover, a group support system facilitates communication and cognitive tasks, for both process and content. Communication refers to the
support provided by electronic messaging between experts via networked PCs. The process dimension refers to the structuring of well-prepared and scheduled activities. The content dimension deals with supporting the actual substance of the communication or cognitive task. Cognitive tasks tend to be intellectually difficult tasks. Several taxonomies are discussed to categorize these cognitive tasks into easily communicable and distinctly supportable categories, e.g. diverge, converge, organize, evaluate and build consensus. Using a group support system can lead to a number of effectiveness, efficiency and satisfaction benefits. However, the extent to which these beneficial effects occur depends upon a number of variables such as: facilitation, goals, tasks, structure, group composition, group size and anonymity. We discussed a number of concepts that aim to advance these variables. These concepts aim to increase the effectiveness, efficiency and satisfaction of group support systems meetings. We ended our literature study with the observation that recent developments in operational risk management, expert judgment and group support systems suggest that synergy needs to be created between the patterns of group tasks in operational risk management and the technology used.
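To make the distinction between inconsistency and bias concrete, the short sketch below separates an expert's systematic deviation from the random scatter in a set of repeated estimates. The figures are hypothetical and chosen only to illustrate the definitions; in operational risk practice realized values are often unavailable, which is precisely why structured elicitation principles matter.

```python
# Illustrative sketch: separating bias (systematic deviation from the optimal)
# from inconsistency (random, a-systematic deviation) in an expert's estimates.
# All numbers are hypothetical and serve only to make the distinction concrete.

from statistics import mean, stdev

# Hypothetical pairs of (expert estimate, realized value) for the annual
# frequency of one type of operational loss event.
estimates = [12.0, 9.5, 14.0, 11.0, 13.5]
realized = [10.0, 10.0, 10.0, 10.0, 10.0]

errors = [e - r for e, r in zip(estimates, realized)]

bias = mean(errors)            # systematic deviation (here: overestimation)
inconsistency = stdev(errors)  # random scatter around that deviation

print(f"bias (systematic deviation): {bias:+.2f}")
print(f"inconsistency (random scatter): {inconsistency:.2f}")
```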
Case study
A case study at Bank Insure, a large financial institution, was chosen to investigate operational risk management in practice and to acquire a better understanding for the development of an approach to improve operational risk management. Utilizing expert judgment at Bank Insure to provide the input to estimate its exposure to operational risk is complex and difficult. The outcomes of the expert judgment exercises are often too biased, and not sufficiently shared by the experts, management and other stakeholders, to support effective decisions. The throughput time is too long to respond in a timely manner to possible operational risks and to gain benefits from possible opportunities. Bank Insure decided to research the possibilities of utilizing multiple expert judgment to solve these problems.
We identified the following starting points for the improvement of operational risk management. First, for initiators and managers to take effective decisions, the results need to be free from biases and accepted by experts and managers as well. Further, it is important that the results are sufficient, reliable and robust to enable an accurate estimation of a financial institution's exposure to operational risk. Second, there is a need to formulate a clear and unambiguous operational risk management process to provide the initiators and experts with a detailed insight into the process and activities they have to perform. Moreover, the process and activities must be easy to communicate. Third, the process and outcomes should meet the
sometimes-conflicting goals of the institution, initiators, experts and stakeholders as closely as possible. Fourth, information and communication technology should be applied to speed up the operational risk management process. Fifth, the operational risk management process should be flexible so that it can be used in various business processes in the financial institution. These starting points have to be addressed and specified in the approach.
Multiple Expert Elicitation and Assessment (MEEA)
We develop an approach to improve operational risk management, labeled Multiple Expert Elicitation and Assessment, abbreviated to MEEA. MEEA can be used to support risk managers and decision makers in their efforts to provide a financial institution with the input to estimate its exposure to operational risk. In addition, MEEA can operate with scarce data and enables financial institutions to understand operational risk with a view to reducing it, thus reducing economic capital within the Basel II regulations. MEEA consists of a way of thinking, a way of working, a way of modeling and a way of controlling. In the way of thinking we present our view on operational risk management, how we think the specific elements of this domain should be interpreted, and a number of design guidelines. In the way of working, we discuss the steps that need to be taken to deal with the issues and identified problems to improve operational risk management. In the way of modeling we discuss the modeling concepts that are constructed when following a methodology. In the way of controlling, we discuss the managerial aspects of the problem solving process. Finally, we discuss how we evaluate the improvements made to operational risk management.
MEEA: way of thinking
The first part of our way of thinking presents our view on operational risk management.
• processes should be designed in which expert judgment is utilized in a structured manner: three characteristics have to be taken into account when designing. First, design attempts to differentiate different sets of behavior patterns. Second, design attempts to estimate the fit between each alternative set of behavior patterns and a specified set of goals. Third, part of the design is communicating thoughts to other minds.
• improving operational risk management should be viewed from a problem solving perspective using a bounded rationality view: the processes in which expert judgment is utilized can be viewed as a sequence of interrelated activities and each process forms the input for the next process. We aim for a model that is appropriate for the designer and leads to an acceptable solution for the financial institution(s) involved.
• operational risk management can be improved by structuring the processes in which expert judgment is utilized: this process can help the facilitator, experts and other stakeholders to focus on solving the relevant problems at hand.
• utilizing multiple expert judgment can improve operational risk management: the elicitation of multiple experts can be viewed as increasing the sample size. Further, multiple experts are more likely to foresee the operational risks involved as compared to a single expert, given the multidimensional characteristics of an operational risk.
• operational risk management can benefit from group support and group support systems: group support, in the broadest sense, can include facilitation techniques, recipes, group methods and software tools. Group support systems can help speed up the activities in which multiple experts need to gather and/or process information.
The following design guidelines can be followed when utilizing expert judgment in operational risk management. The first guideline emphasizes the focus on compliance with relevant standards such as policies, legal requirements and best practices. Compliance is important because it directly affects the competitive position of a financial institution. The second guideline states that procedural rationality should be ensured when utilizing expert judgment in operational risk management. This guideline anticipates human behavior in processes, activities and tasks because decision makers want to take, and want to be perceived as taking, decisions in a rational manner. Procedural rationality is attainable if decision makers, initiators, experts and stakeholders commit in advance to the process, methods and tools by which multiple experts are elicited and their views combined. The third guideline accentuates that the processes in which expert judgment is utilized should support the building of a shared understanding about the outcome between stakeholders. Building shared understanding is possible when disagreements are not lost through averaging them out but are continually revisited and explored. For this interaction several procedures, such as the Delphi method and the Nominal Group Technique, can be used; this interaction can further be supported with group support systems. The fourth guideline is introduced to make the process of utilizing expert judgment practical and flexible. A practical and flexible process combined with supporting tools enables application to the business processes and a widespread integration in the financial institution. The fifth and final guideline states that relevant roles in the processes in which expert judgment is utilized should be considered and assigned explicitly. A clear description and assignment of roles can help understand the interaction between the decision makers, initiators, experts and other stakeholders. Moreover, it can help the facilitator to have more control over the interaction between the initiators and experts.
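As an indication of how such structured interaction procedures can work, the sketch below mimics a Delphi-style feedback round in which anonymous estimates are fed back as a group median and revised towards it. The fixed revision weight and stopping rule are simplifying assumptions made for this illustration; they are not prescribed by the Delphi method or by MEEA, where experts revise on the basis of their own judgment after seeing the anonymous feedback.

```python
# Minimal sketch of a Delphi-style feedback loop: experts submit anonymous
# estimates, receive the group median as feedback, and revise partly towards it.
# The revision weight (0.3) and stopping threshold are illustrative assumptions.

from statistics import median

def delphi_rounds(estimates, weight=0.3, tolerance=0.5, max_rounds=10):
    for round_no in range(1, max_rounds + 1):
        group_view = median(estimates)
        # Each expert moves part of the way towards the anonymous group median.
        estimates = [e + weight * (group_view - e) for e in estimates]
        spread = max(estimates) - min(estimates)
        print(f"round {round_no}: median={group_view:.2f}, spread={spread:.2f}")
        if spread < tolerance:
            break
    return estimates

# Hypothetical first-round impact estimates (in EUR 1,000) for one event.
delphi_rounds([50.0, 120.0, 80.0, 200.0, 95.0])
```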
MEEA: way of working
The way of working follows a problem-solving perspective and describes the process, activities and steps that need to be executed in the phases preparation, risk identification, risk assessment, risk mitigation and reporting of operational risk management. Each phase has its own specific goal and is divided into a number of logical steps, which in turn aim to minimize inconsistency and bias. Moreover, each step defines what you need to do, what points you need to consider, how you can perform the step and who should be involved. Further, the way of working suggests using a particular combination of methods and tools for each particular step to improve the utilization of expert judgment in operational risk management, thereby aiming to improve effectiveness, efficiency and satisfaction.
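One way to make the phase and step structure of the way of working tangible is to record, per step, the "what, how and who" that the text prescribes. The snippet below is a simplified sketch only: the phase names come from the text, but the example steps, goals and role assignments are assumptions made for illustration, not the full MEEA prescription.

```python
# Simplified sketch of the way-of-working structure: each ORM phase carries
# steps that record what to do, how to do it and who is involved.
# The concrete entries are illustrative, not the full MEEA prescription.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    what: str
    how: str
    who: List[str]

@dataclass
class Phase:
    name: str
    goal: str
    steps: List[Step] = field(default_factory=list)

orm_process = [
    Phase("preparation", "scope the exercise and select experts",
          [Step("define scope", "facilitated kick-off meeting", ["initiator", "facilitator"])]),
    Phase("risk identification", "elicit a broad set of operational risk events",
          [Step("brainstorm events", "anonymous GSS divergence task", ["experts", "facilitator"])]),
    Phase("risk assessment", "estimate frequency and impact per event",
          [Step("score events", "structured elicitation with feedback", ["experts", "facilitator"])]),
    Phase("risk mitigation", "select internal control measures"),
    Phase("reporting", "report the exposure estimate to management"),
]

for phase in orm_process:
    print(phase.name, "->", phase.goal)
```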
MEEA: way of modeling
The way of modeling concerns the modeling techniques used to construct models in the methodology that we follow and is closely related to the way of working. The modeling techniques need to support the modeling of processes and activities of the understanding phase and design phase. We make a distinction between conceptual models and empirical models. Conceptual models such as visual models help us to structure perception, representation and reasoning regarding a problem situation. Moreover, they can also be used as a vehicle of communication. Empirical models such as activity diagrams and sequence diagrams enable us to analyze and diagnose a problem situation and find possible solutions. Activity diagrams, of which the core is the activity, can be used to describe a sequence of activities such as identifying events or assessing operational risk. Sequence diagrams can be used to model the dynamic aspects of the system and emphasize the sequence or order in which activities take place.
MEEA: way of controlling
The way of controlling describes the control of the way of working and the models that we use to design a process to improve operational risk management. We recommend using a widely accepted project management approach such as Prince 2. We further advise using a ‘middle out’ and incremental point of view in carrying out the way of working and the modeling process. This facilitates quick feedback and helps to strengthen management and employee support for the change process.
MEEA: evaluation
MEEA is evaluated using two case studies. In both cases, MEEA is used to design and evaluate a process to utilize expert judgment in operational risk management. The first case study was performed at Ace Insure and the second case study at Inter Insure. Both are part of a large financial institution. MEEA recommends using a combination of important aspects in operational risk management, expert judgment and group support systems to evaluate the improvements made. For measurement, an input – process – output framework is used in combination with quantitative and qualitative data sources. This enables us to compare and contrast our findings with the existing literature and strengthen our arguments. The framework is used to guide the data collection and analysis. Quantitative data sources are used to study the constructs in statistical detail. Qualitative data sources are used to elaborate on quantitative results and to obtain indications of the causal relationships between the constructs.
MEEA is evaluated in the first and second case study. Both case studies follow the design guidelines and steps prescribed by MEEA. Further, MEEA is used to design, execute and evaluate an ORM process. The results of the case studies are that operational risk management is more effective, efficient and satisfying as compared to the contemporary situation. Using a mixed-gender group in ORM avoids internal politics and prevents experts in the risk assessment phase from groupthink and biases. Substantive knowledge is more important in the risk identification phase than in the risk assessment phase. Both substantive and process knowledge are needed in the risk assessment phase to estimate the frequency of occurrence and impact of operational risks. A group of experts who collaborate in a risk assessment advocate a riskier course of action than they would if acting individually. Anonymity in operational risk management is important to reduce member status, internal politics, fear of reprisals and groupthink. Fewer verbal discussions are needed in the risk identification and assessment phases to safeguard anonymity. MEEA improved the following aspects of the ORM process: the structure, the involvement and participation of experts and initiators, the interaction between experts and initiators, and the facilitation of the ORM sessions. MEEA improved the outcome effectiveness, the outcome efficiency and the satisfaction with the outcome from both the initiators' and the participants' point of view. MEEA can be used to support risk managers and decision makers in their efforts to provide a financial institution with the input to estimate its exposure to operational risk. In addition, MEEA can operate with scarce data and enables financial institutions to understand operational risk with a view to reducing it, thus reducing economic capital within the Basel II regulations. Moreover, MEEA enables financial institutions to incorporate forward-looking activities to prevent catastrophic losses.
An important aspect in the evaluation of our approach is answering the question whether financial institutions are willing to use and implement MEEA to provide them with the input to estimate their exposure to operational risk. So far, our approach has been implemented in a large Dutch financial institution. More than 150 risk managers have been trained in using this approach. More recently, MEEA has also been applied to a large Dutch logistics firm, a Dutch investment firm and a large municipal governmental organization. Although the results from these cases are confidential, first research results indicate that financial institutions and other organizations are willing to use and implement MEEA. Numerous experts have been trained to implement MEEA in their daily practice. These results will be presented in several scientific papers. Moreover, the results indicate that our approach improves operational risk management.
We end our research with the notion that many issues still need resolving. These issues indicate directions for future research. The first possible issue is for other researchers or experts to apply MEEA in financial institutions other than those used in this research. In this case the researchers or experts should be independent of the developers of MEEA. The second possible issue is to research the generalizability of MEEA by using a different group support system, for example Web IQ, Grouputer or Meetingworks.
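To indicate what the "input to estimate exposure" can look like once frequency and impact estimates have been elicited, the sketch below combines hypothetical estimates from several experts into an expected annual loss per event. Equal weighting of experts and the figures themselves are assumptions made for this illustration; MEEA does not prescribe this particular combination rule, and other combination schemes from the expert judgment literature can equally be applied.

```python
# Illustrative combination of multiple experts' estimates into an expected
# annual loss per operational risk event. Equal expert weights and the
# figures themselves are assumptions for the sake of the example.

from statistics import mean

# Per event: each expert gives (frequency per year, impact in EUR).
judgments = {
    "unauthorised transaction": [(2.0, 40_000), (1.5, 60_000), (3.0, 35_000)],
    "system outage":            [(0.5, 250_000), (1.0, 150_000), (0.8, 200_000)],
}

for event, estimates in judgments.items():
    freq = mean(f for f, _ in estimates)      # combined frequency estimate
    impact = mean(i for _, i in estimates)    # combined impact estimate
    expected_loss = freq * impact             # expected annual loss for this event
    print(f"{event}: expected annual loss approx. EUR {expected_loss:,.0f}")
```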
Samenvatting
Operationeel Risico Management (ORM) is een essentieel onderdeel van de economische activiteiten en economische ontwikkeling in financiële instellingen. ORM ondersteunt besluitvormers bij het nemen van geïnformeerde beslissingen die gebaseerd zijn op een systematische assessment van het operationele risico. Operationele risico’s zijn recentelijk benoemd als een belangrijk type risico. Voor het operationele risico zijn slechts enkele methoden en tools beschikbaar om ze te identificeren, kwantificeren en managen. Operationele risico’s lenen zich niet voor traditionele risico management methoden. Dit komt omdat in vrijwel alle omstandigheden de verliezen hiervan resulteren uit complexe en niet-lineaire interacties tussen deze operationele risico’s en de bedrijfsprocessen van financiële instellingen.
Tegen het eind van de jaren negentig zijn veel financiële instellingen zich gaan focussen op het managen van operationele risico’s. Deze focus werd beïnvloed door de weerbarstigheid van de markt, catastrofes zoals Barings en Daiwa, decentralisatie en elektronische handel. Verder speelt de steeds sterker wordende wet- en regelgeving, operationele en strategische druk op financiële instellingen een belangrijke rol om operationele risico’s adequaat te managen binnen een betrouwbaar raamwerk. In reactie hierop zijn er diverse initiatieven ondernomen om deze operationele risico’s te managen. Door de problemen met historische data focussen veel van deze initiatieven zich op het inzetten van experts om een input te verkrijgen waarmee de blootstelling aan het operationele risico ingeschat kan worden. Hoewel deze initiatieven financiële instellingen geholpen hebben, zijn de verbeteringen in het managen van operationele risico’s niet effectief, niet efficiënt en leiden ze niet tot tevredenheid bij implementatie.
In dit onderzoek focussen we op een alternatief om de effectiviteit, efficiëntie en tevredenheid bij implementatie van operationeel risico management te verbeteren. Dit alternatief wordt Multiple Expert Elicitation and Assessment (MEEA) genoemd. Meerdere experts zullen worden ingezet in operationeel risico management. De waarde van de uitkomsten zal financiële instellingen in staat stellen om hun blootstelling aan operationele risico’s in te kunnen schatten. De methode waarmee dit bereikt wordt zal consistent worden gebruikt. Hoewel het potentieel van de inzet van experts groot is, gebruiken nog weinig financiële instellingen het als input om hun blootstelling aan operationele risico’s in te schatten. Er is een aantal issues dat de implementatie van het inzetten van experts in ORM hindert. Deze issues kunnen worden samengevat in: procesissues, ondersteuningsissues en organisatie-issues.
Wij proberen een aantal van deze issues in operationeel risico management op te lossen. Het verhelpen van deze issues maakt de ontwikkeling van een aanpak en implementatie van het inzetten van experts ten behoeve van ORM in financiële instellingen gemakkelijker.
Onderzoeksdoelstelling
Het verbeteren van Operationeel Risico Management (ORM) is een complexe en uitdagende activiteit. Veel actoren zijn betrokken in het ontwerpproces, de dataverzamelingsmethode moet robuust en auditbaar zijn, de processen, technieken en technologie om de experts te ondersteunen zijn nieuw en hebben zich nog niet bewezen in de praktijk, en de uitkomsten van de experts moeten objectief zijn. Wij willen Operationeel Risico Management verbeteren voor financiële instellingen. De onderzoeksdoelstelling is als volgt geformuleerd: ontwikkel een aanpak om het proces van het inzetten van experts in operationeel risico management te verbeteren. Vier onderzoeksvragen zijn geformuleerd om deze onderzoeksdoelstelling te behalen:
• Wat zijn de generieke karakteristieken van het inzetten van experts in operationeel risico management?
• Welke concepten kunnen worden gebruikt om operationeel risico management te verbeteren door het inzetten van experts?
• Hoe ziet een aanpak eruit om operationeel risico management door het inzetten van experts te verbeteren?
• Hoe evalueren we de verbeteringen die gemaakt zijn in operationeel risico management?
Onderzoeksaanpak
We kiezen om volgens een inductief-hypothetische onderzoeksstrategie te werken omdat deze strategie bijzonder geschikt is voor opkomende onderzoeksgebieden waar weinig theorie beschikbaar is. Met deze strategie beantwoorden wij onze onderzoeksvragen en doelstelling. De strategie bestaat uit vijf stappen. De eerste stap begint met het identificeren van initiële theorieën die worden gebruikt om een aantal probleemsituaties te onderzoeken. Om de relevante aspecten van deze situaties te beschrijven worden descriptieve empirische modellen gebruikt. In de tweede stap worden deze modellen geabstraheerd in een descriptief conceptueel model. Dit model wordt gebruikt om de relevante elementen en aspecten van de probleemsituatie te beschrijven. In de derde stap wordt een theorie geformuleerd om de geobserveerde problemen op te lossen. Deze theorie wordt gepresenteerd in een prescriptief conceptueel model en definieert hoe de geïdentificeerde problemen te adresseren. In de vierde stap wordt het prescriptieve conceptuele model geïmplementeerd in diverse praktijksituaties. Het resultaat van deze stap is een aantal alternatieven die oplossingen bieden voor de geïdentificeerde problemen. Dit wordt gepresenteerd in een prescriptief empirisch model. In de vijfde stap wordt de ontwikkelde theorie geëvalueerd door de descriptieve empirische modellen te vergelijken met de prescriptieve empirische modellen.
Theoretische achtergrond
Een literatuuronderzoek is uitgevoerd om een aantal initiële theorieën te identificeren. Literatuur over operationeel risico management, expert judgment en group support systems is bestudeerd. Operationeel risico management ondersteunt besluitvormers om geïnformeerde beslissingen te nemen gebaseerd op een systematische assessment van het operationele risico. Operationeel risico is gedefinieerd als het risico van directe of indirecte verliezen die resulteren uit inadequate of falende interne processen, mensen en systemen of van externe gebeurtenissen. Om operationele risico’s te managen worden vier generieke mogelijkheden onderscheiden: (1) accepteren, (2) vermijden, (3) verplaatsen en (4) mitigeren. Mitigeren van het operationele risico is het meest interessant omdat de andere mogelijkheden het risico niet echt verminderen; sterker nog, het risico blijft. Operationele risico’s kunnen gemitigeerd worden door een set van interne beheersmaatregelen. Het mitigeren functioneert echter alleen naar behoren als de interne beheersomgeving van de financiële instelling goed is opgezet.
Expert judgment literatuur is besproken. Expert judgment is gedefinieerd als de mate van zekerheid, die een expert tot uitdrukking brengt in zijn / haar kwantitatieve respons op vragen over een bepaald onderwerp, gebaseerd op kennis en ervaring in zijn / haar eigen kennisdomein. Bij het inzetten van experts levert het gebruik van gestructureerde methoden betere resultaten op dan wanneer ongestructureerde methoden worden gebruikt. Verdeling van het operationele risico management proces in een aantal fasen zorgt ervoor dat initiatoren, experts, belanghebbenden en de facilitator gelijktijdig aan verschillende onderdelen kunnen werken. Deze verdeling zorgt er tevens voor dat het proces beter beheersbaar blijft. Elke fase en/of activiteit kan suboptimaal worden uitgevoerd. Hierbij spelen inconsistentie en bias een belangrijke rol. Inconsistentie is een a-systematische afwijking van het optimale en bias is een systematische afwijking van het optimale. Er bestaan verschillende principes die de inconsistentie en bias verminderen zodat elke fase en activiteit zo optimaal mogelijk uitgevoerd kan worden.
We hebben ook de Group Support Systems (GSS) literatuur besproken. Een group support system is gedefinieerd als een socio-technisch systeem bestaande uit software, hardware, procedures, facilitatie-ondersteuning en een groep van participanten die zich bezighouden met het intellectuele werk. GSS biedt ondersteuning voor een verzameling van taken zoals divergeren, convergeren, organiseren, evalueren en consensus bouwen. Een GSS faciliteert communicatie en cognitieve taken voor zowel het proces als de inhoud. Communicatie refereert naar de ondersteuning die wordt geleverd door het versturen van elektronische berichten tussen experts. De procesdimensie refereert naar het structureren van goed voorbereide en geplande activiteiten. De inhoudsdimensie behelst de ondersteuning van de eigenlijke inhoud van de communicatie of cognitieve taak. Cognitieve taken zijn, intellectueel gezien, moeilijke taken. Verschillende taxonomieën zijn besproken om deze taken in gemakkelijk communiceerbare en onderscheidende categorieën in te delen. Het gebruiken van een GSS kan tot een aantal voordelen leiden op het gebied van effectiviteit, efficiëntie en tevredenheid. De mate waarin deze voordelen behaald kunnen worden hangt echter af van een aantal variabelen zoals: facilitatie, doelen, taken, structuur, groepssamenstelling, groepsgrootte en anonimiteit. We hebben een aantal concepten besproken die de effectiviteit, efficiëntie en tevredenheid met deze variabelen positief kunnen beïnvloeden. We eindigen onze literatuurstudie met de constatering dat recente ontwikkelingen in operationeel risico management, expert judgment en group support systems suggereren dat synergie gecreëerd dient te worden tussen de patronen van groepstaken in operationeel risico management en de technologie die daarbij gebruikt wordt.
Case study
Een case study bij Bank Insure, een grote Nederlandse financiële instelling, is gekozen om enerzijds operationeel risico management in de praktijk te onderzoeken en om anderzijds tot een beter begrip te komen voor de ontwikkeling van een aanpak om operationeel risico management te verbeteren. Het inzetten van experts die de input kunnen leveren om tot een inschatting te komen van de blootstelling aan het operationele risico is problematisch. De uitkomsten zijn vaak vooringenomen en worden niet gedeeld door de experts, management en andere stakeholders. Hierdoor kunnen geen effectieve beslissingen worden genomen. De doorlooptijd is te lang om tijdig te kunnen reageren op operationele risico’s en om de voordelen te kunnen benutten van mogelijke kansen. Bank Insure heeft daarom besloten om de mogelijkheid te onderzoeken van het inzetten van meerdere experts om deze problemen te verhelpen.
De volgende startpunten zijn geïdentificeerd voor de verbetering van operationeel risico management. Ten eerste dienen de resultaten vrij te zijn van bias en moeten ze worden gedeeld door de experts en initiatoren. Hiermee kunnen managers dan effectieve beslissingen nemen. Verder is het belangrijk dat er voldoende resultaten zijn qua aantal. De resultaten moeten betrouwbaar en robuust zijn om een accurate inschatting van de blootstelling aan operationeel risico mogelijk te maken. Ten tweede is het noodzakelijk dat er een helder en eenduidig proces voor operationeel risico management geformuleerd wordt. Hiermee kunnen we de initiatoren en experts een gedetailleerd inzicht geven in het proces en de activiteiten die daarin uitgevoerd moeten worden. Het proces en de activiteiten moeten daarnaast gemakkelijk communiceerbaar zijn. Ten derde moeten het proces en de uitkomsten de (soms) conflicterende doelstellingen van de financiële instelling, initiatoren, experts en andere stakeholders zo goed mogelijk benaderen. Ten vierde moet informatie- en communicatietechnologie worden toegepast om het operationeel risico management proces te versnellen. Ten vijfde moet het proces van operationeel risico management flexibel zijn zodat het gebruikt kan worden in verschillende bedrijfsprocessen van de instelling. Deze startpunten moeten worden geadresseerd en verder gespecificeerd in de aanpak.
Multiple Expert Elicitation and Assessment (MEEA)
We ontwikkelen een aanpak om operationeel risico management te verbeteren. We noemen deze aanpak Multiple Expert Elicitation and Assessment (MEEA). MEEA kan worden gebruikt om risicomanagers en besluitvormers te ondersteunen bij hun poging om een financiële instelling te voorzien van de input die nodig is om de blootstelling aan operationele risico’s in te schatten. MEEA kan worden gebruikt in financiële instellingen waar weinig data voorhanden is. MEEA geeft financiële instellingen de mogelijkheid om operationele risico’s beter te begrijpen zodat het economisch kapitaal verminderd kan worden. MEEA bestaat uit een manier van denken, manier van werken, manier van modelleren en manier van managen. In de manier van denken presenteren wij ons perspectief ten aanzien van operationeel risico management en hoe wij denken dat de specifieke elementen uit dit domein geïnterpreteerd dienen te worden. Tevens bespreken wij hierin een aantal ontwerprichtlijnen. In de manier van werken bespreken we de stappen die nodig zijn om de issues en problemen uit het domein te verhelpen. In de manier van modelleren bespreken we de modelleerconcepten die geconstrueerd worden als we een methodologie volgen. In de manier van managen bespreken we de managementaspecten van het proces van problemen oplossen.
MEEA: manier van denken
Het eerste deel van onze manier van denken bespreekt onze visie op operationeel risico management.
•
Processen dienen te worden ontworpen waarin experts op een gestructureerde manier worden ingezet: drie karakteristieken moeten hierbij in ogenschouw worden genomen. Ten eerste, ontwerpen probeert om onderscheid aan te brengen in verschillende sets van gedragspatronen. Ten tweede, ontwerpen probeert om een ‘fit’ in te schatten tussen elke alternatieve set van gedragspatronen en een gespecificeerde set van doelen. Ten derde, onderdeel van het ontwerpen is het communiceren van gedachten naar anderen.
•
Verbeteren van operationeel risico management moet gezien worden vanuit een perspectief van probleem oplossen met begrensde rationaliteit. De processen waarin experts worden ingezet kunnen worden gezien als een sequentie van aan elkaar gerelateerde activiteiten waarbij elk proces de input vormt voor het volgende proces. Wij richten ons op een model dat geschikt is voor de ontwerper en wat leidt tot een acceptabele oplossing voor de betrokken financiële instelling.
•
Operationeel risico management kan verbeterd worden door het structureren van de processen waarin experts worden gebruikt. Dit proces kan de facilitator, experts en andere stakeholders helpen om te focussen op de relevante problemen om die vervolgens op te lossen.
• Deploying multiple experts can improve operational risk management. Eliciting the opinions of multiple experts can be seen as increasing the number of data points. Given the multi-dimensional characteristics of operational risks, multiple experts are better at foreseeing which operational risks are important than a single expert. A minimal sketch of this pooling idea is given after the design guidelines below.
•
Operationeel
risico
management
kan
profiteren
van
groepsondersteuning
en
groepsondersteunende technologie. Groepsondersteuning, in de breedste zin, kan het volgende inhouden: facilitatie technieken, standaard recepten, groepsmethoden en software tools. Groepsondersteunende technologie kan de activiteiten verder doen versnellen. De volgende ontwerprichtlijnen kunnen worden gevolgd wanneer experts worden ingezet in operationeel risico management. De eerste ontwerprichtlijn benadrukt de focus op compliance met relevante standaarden zoals het beleid, juridische vereisten en best practices. Compliance is 218
Samenvatting belangrijk omdat het direct de positie van de financiële instelling beïnvloed. De tweede
ontwerprichtlijn stelt dat procedurele rationaliteit moet worden gewaarborgd als experts worden ingezet in operationeel risico management. Deze ontwerprichtlijn anticipeert op menselijk gedrag in processen, activiteiten en taken omdat besluitvormers geacht worden om beslissingen zo rationeel mogelijk te nemen. Procedurele rationaliteit is haalbaar als besluitvormers, initiatoren, experts en stakeholders zich vooraf committeren aan het proces, methoden en tools waarmee de meningen van de experts ontlokt en gecombineerd zullen worden. De derde
ontwerprichtlijn benadrukt dat de processen waarin experts ingezet worden het bouwen van gemeenschappelijk begrip ten aanzien van de uitkomst ondersteunt. Het bouwen van gemeenschappelijk begrip is mogelijk als men de onenigheden niet verloren laat gaan door ze uit te middelen maar juist door ze continu te blijven exploreren. Voor deze interactie kunnen procedures zoals Delphi en de Nominale Groeps Techniek worden gebruikt. Deze interactie kan verder ondersteunt worden door technologie die expert-groepen ondersteunt. De vierde
ontwerprichtlijn is geïntroduceerd om het proces van het inzetten van experts in operationeel risico management praktisch en flexibel te maken. Een praktisch en flexibel proces in combinatie met ondersteunende technologie maakt een brede toepasbaarheid in de bedrijfsprocessen van financiële instellingen mogelijk. De vijfde en laatste ontwerprichtlijn benadrukt dat relevante rollen in het proces expliciet moeten worden toegewezen aan experts. Een heldere omschrijving en opdracht betreffende deze rol kan de interactie tussen besluitvormers, initiatoren, experts en andere stakeholders verbeteren. Het kan daarnaast de facilitator helpen om meer controle te verkrijgen over de interactie tussen de initiatoren en experts.
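To make the idea of pooling multiple expert opinions and anonymously exploring disagreement concrete, the sketch below shows one possible, simplified form: anonymous estimates are combined with an equal-weight average, and the group is shown the spread so outliers can be revisited in a next round. The equal weighting, the feedback fields and the example values are illustrative assumptions; MEEA does not prescribe a specific combination rule.

```python
# Minimal sketch (illustrative, not prescribed by MEEA): pool anonymous
# expert estimates with an equal-weight average and feed the spread back
# to the group so disagreement can be explored in a next round.
from statistics import mean, median, pstdev

def pool(estimates):
    """Equal-weight linear opinion pool over anonymous point estimates."""
    return mean(estimates)

def feedback(estimates):
    """Summary shown back to the group after each anonymous round."""
    return {
        "pooled": pool(estimates),
        "median": median(estimates),
        "spread": pstdev(estimates),
        "range": (min(estimates), max(estimates)),
    }

# Example: five experts estimate the yearly frequency of a failure event.
round_1 = [2.0, 3.5, 2.5, 7.0, 3.0]   # one clear outlier
print(feedback(round_1))
# After the disagreement is discussed anonymously, experts revise.
round_2 = [2.5, 3.5, 2.5, 4.0, 3.0]
print(feedback(round_2))
```

Keeping the individual estimates anonymous while showing the group only the summary is one way to respect the guideline that disagreement should be explored rather than averaged away silently.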
MEEA: way of working
The way of working follows a problem-solving perspective and describes the process, activities and steps to be carried out in the phases: preparation, risk identification, risk assessment, risk mitigation and reporting. Each phase has its own specific goal and is divided into a number of logical steps, each of which aims to reduce inconsistency and bias. Each step specifies what to do, which aspects to take into account, how to carry out the step and who should be involved (a minimal sketch of such a step description follows below). For each step, the way of working prescribes a combination of methods and tools to improve the deployment of experts in operational risk management, focusing in particular on improving effectiveness, efficiency and satisfaction.
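As an illustration of how such a process description could be recorded, the sketch below encodes the five phases and the per-step fields named above. The field names, phase goals and example step are hypothetical paraphrases for illustration only, not definitions taken from MEEA itself.

```python
# Hypothetical sketch: recording the way-of-working structure. Phase
# goals and the example step are paraphrased, not prescribed by MEEA.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    what: str            # what to do in this step
    aspects: List[str]   # aspects to take into account
    how: str             # how the step is carried out (methods, tools)
    who: List[str]       # who should be involved

@dataclass
class Phase:
    name: str
    goal: str
    steps: List[Step] = field(default_factory=list)

orm_process = [
    Phase("preparation", "prepare and commit participants"),
    Phase("risk identification", "identify relevant operational risks",
          steps=[Step(what="elicit candidate risks",
                      aspects=["anonymity", "coverage of business processes"],
                      how="facilitated session with a group support system",
                      who=["facilitator", "experts", "initiators"])]),
    Phase("risk assessment", "estimate frequency and impact of each risk"),
    Phase("risk mitigation", "select and assign mitigating actions"),
    Phase("reporting", "report outcomes to decision makers"),
]
```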
MEEA: way of modelling
The way of modelling comprises the modelling techniques used to construct models within the methodology we follow. It is strongly related to the way of working. The modelling techniques support modelling the processes and activities of both the understanding phase and the design phase. We distinguish between conceptual models and empirical models. Conceptual models, such as visual models, help to structure and represent perceptions of the problem situation and to reason about it; they can also be used as a means of communication. Empirical models, such as activity and sequence diagrams, enable us to analyse the problem situation, diagnose it and find possible solutions. Activity diagrams can be used to model the dynamic aspects of the system; these diagrams emphasise the sequence in which the activities take place.
MEEA: way of managing
The way of managing describes the management aspects of the way of working. It also describes the models used to design a process with which operational risk management is improved. We advise using a widely applied project management method, for example Prince2. We further advise choosing a 'middle-out', incremental approach when executing both the way of working and the way of modelling. This ensures rapid feedback and helps to gain support for the change process from both management and employees in the financial institution.
MEEA: evaluation
MEEA has been evaluated in two case studies, in which it was used to design and evaluate a process for deploying experts in operational risk management. The first case study was carried out at Ace Insure, the second at Inter Insure; both are part of a large Dutch financial institution. MEEA recommends using a combination of relevant aspects of operational risk management, expert judgment and group support systems to be able to measure the improvements.
To carry out the measurements, an input-process-output framework is used in combination with quantitative and qualitative data sources. This makes it possible to compare and contrast our findings with the existing literature. The framework was used to guide data collection and analysis. Quantitative data sources were used to examine the constructs within the framework statistically; qualitative data sources were used to clarify and sharpen the quantitative results and to gain insight into the causal relations between those constructs. MEEA was evaluated in two cases, both following the prescribed design guidelines and steps, and was used to design and evaluate an Operational Risk Management (ORM) process. The results of both case studies show that ORM is more effective and more efficient, and leads to more satisfaction, than in the 'old' situation. Deploying a mixed group of experts in which both men and women participate reduces internal politics and prevents groupthink and bias in the risk assessment phase. Knowledge of the facts is more important in the risk identification phase than in the risk assessment phase. Both factual knowledge and knowledge of modelling, calculation and analysis are needed in the risk assessment phase to be able to estimate the frequency and impact of operational risks (see the sketch after this paragraph for one way such estimates can feed an exposure calculation). A group of experts collaborating during the risk assessment is more inclined to accept a higher risk than the same experts acting individually. Anonymity in operational risk management is important to reduce participants' status differences, diminish political behaviour, remove fear of reprisals and reduce groupthink. When anonymity is safeguarded, fewer discussions are needed in both the risk identification and the risk assessment phase. MEEA improves the following aspects of the operational risk management process: the structure, the involvement and participation of experts and initiators, the interaction between experts and initiators, and the facilitation of the sessions. MEEA improves the effectiveness of the outcomes, the efficiency with which the outcomes are reached, and the satisfaction with the outcomes. An important aspect of the evaluation of our approach is answering whether financial institutions are willing to use and implement MEEA. So far, MEEA has been implemented in a large Dutch financial institution, and more than 150 risk managers have been trained in the approach. Recently, MEEA has been used by a large Dutch logistics organisation, a Dutch pension fund and a large local government organisation. Although the results of these studies are still confidential, the first results show that both financial institutions and other organisations are willing to use and implement MEEA. MEEA can be used to support risk managers and decision makers in their effort to provide a financial institution with the input needed to estimate its exposure to operational risk. It can be used in financial institutions where little data is available, enables financial institutions to understand operational risk better so that economic capital can be reduced, and makes it possible to assess future scenarios that lead to catastrophic consequences. We end our research with possible directions for follow-up research. The first direction is the application of MEEA by other researchers in other financial institutions; in that case the researchers should be independent of the designer of MEEA. A second possible research direction is to examine the generalisability of MEEA by using another system for supporting groups, for example Web IQ, Grouputer or Meetingworks.
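To indicate how expert-elicited frequency and impact estimates can feed an exposure estimate, the sketch below simulates an annual loss distribution and reads off a high quantile. The Poisson frequency, lognormal impact and 99.9% quantile are common conventions in operational risk modelling, assumed here purely for illustration; they are not prescribed by MEEA, and the parameter values are invented.

```python
# Illustrative sketch only: turn elicited frequency/impact estimates into
# a simulated annual loss distribution. Distributional choices (Poisson
# frequency, lognormal impact) and the 99.9% quantile are assumptions.
import math
import random

def draw_poisson(lam):
    """Draw one Poisson-distributed event count (Knuth's method)."""
    threshold, count, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return count
        count += 1

def simulate_annual_losses(frequency, impact_mu, impact_sigma, years=50_000):
    """Total loss per simulated year: Poisson count, lognormal severities."""
    totals = []
    for _ in range(years):
        events = draw_poisson(frequency)
        totals.append(sum(random.lognormvariate(impact_mu, impact_sigma)
                          for _ in range(events)))
    return sorted(totals)

# Invented example values: about 3 events per year, median impact ~22,000.
losses = simulate_annual_losses(frequency=3.0, impact_mu=10.0, impact_sigma=1.2)
quantile_999 = losses[int(0.999 * len(losses)) - 1]
print(f"simulated 99.9% annual loss quantile: {quantile_999:,.0f}")
```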
Curriculum vitae
Dr. ing. Jürgen H.M. van Grinsven holds a Ph.D. from Delft University of Technology, an MSc degree (Drs.) in the social sciences, a BSc degree (ing.) in technology management and a BSc degree in electronics.
• Currently, Van Grinsven is a Director at Deloitte (Enterprise Risk Services), responsible for acquisition, service delivery and practice management. He also teaches at Nyenrode University (School of Accountancy & Controlling).
• From 2005 to 2008, Van Grinsven worked for Conquaestor B.V. (formerly known as PriceWaterhouseCoopers consulting), where he was responsible for building and managing the Risk Management Practice.
• In 2001 he founded Advanced Collaboration Services (ACS), a small Dutch consultancy firm focusing on risk management consulting. From 2001 to 2005, Dr. Van Grinsven supervised twenty-seven consultants working on several risk management projects in the financial services sector. Prior to ACS, Jürgen helped co-found several small companies (laatjerijden, work and get paid) and worked for several other companies.
Jürgen gained business experience at major banks, insurance companies and pension funds such as ABN Amro, ING, Postbank, Nationale Nederlanden, RVS verzekeringen, PGGM, Achmea, Achmea Avero, Interpolis, Cintrus, Staalbankiers and Banca di Roma. Outside the financial services sector he gained experience at ProRail, Vodafone, Philips, Het Oosten, Nationaal Archief, the Municipality of Amsterdam and Drietel. He gained research and teaching experience at Delft University of Technology, Nyenrode University and the Haagsche Hogeschool. His research has been published in several books, book chapters and articles, and has been presented at international conferences in, among others, the Netherlands, the United States, Croatia, Turkey, Germany and France.
For more information, please visit the website: www.jurgenvangrinsven.com