IT SUCCESS! Towards a New Model for Information Technology
Michael Gentle
Copyright © 2007 Michael Gentle

Published by John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England
Telephone (+44) 1243 779777
Email (for orders and customer service enquiries): [email protected]
Visit our Home Page on www.wiley.com

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to [email protected], or faxed to (+44) 1243 770620.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The Publisher is not associated with any product or vendor mentioned in this book.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices

John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN 978-0-470-72401-9

Typeset in 10.5/13 pt Times by Thomson Digital, India
Printed and bound in Great Britain by Bell & Bain, Glasgow
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.
Contents

Introduction
Acknowledgements
Abbreviations

PART I   BLINDED BY SPECS

1  In Search of Excellence – the Fundamentals
   • The more things change, the more they stay the same
   • A worldwide phenomenon
   • How the traditional IT model started
   • The construction industry trap
   • The free lunch trap
   • Houses of ill repute
   • A business problem rather than an IT problem
   • IT and original sin
   • No sacred cows

2  IT 101 – The Basics for Non-Specialists
   • The process breakdown for traditional IT activities
   • The process breakdown for business (i.e. non-IT) activities
   • The fundamental difference between IT and non-IT activities
   • ‘That’s not my problem!’ – process ownership and behaviour

3  The Flaws of the Traditional Model
   • The unintended consequences of the waterfall method
   • In search of a pizza parlour manager
   • Who provides process expertise – client or vendor?
   • When standard client–vendor relationships are possible
   • When standard client–vendor relationships pose problems
   • Is a standard client–vendor relationship possible for IT?
   • The ‘Statement of Requirements’ (SoR) trap
   • A poor to non-existent pricing model
   • Should IT be run like a business (i.e. an ESP)?
   • The limits of outsourcing
   • Current IT organizational trends
   • The ultimate litmus test to determine one’s business model
   • What model would be appropriate for IT?

PART II   BUILDING A NEW BUSINESS MODEL FOR IT

4  Managing Demand
   • Managing demand – traditional model
   • Managing demand – new model
   • Capturing demand and identifying opportunities
   • Prioritizing and approving demand
   • Planning approved demand
   • Linking demand to resource capability
   • Approving demand based on portfolios
   • The missing component in Project Portfolio Management
   • Business cases are in the eye of the beholder
   • Building the IT plan and budget
   • Demand from a customer perspective
   • Shaking off the chains of the construction industry
   • Funding approved demand
   • Roles and responsibilities

5  Managing Supply
   • Managing supply – traditional model
   • Managing supply – new model
   • Iterative development in practice
   • Why prototyping has never become mainstream
   • Is prototyping the answer to everything?
   • Project critical success factors
   • Maintenance – letting go of the M-word
   • Delivery and implementation
   • Service and support

6  Monitoring Costs and Benefits
   • Monitoring costs and benefits for traditional IT activities
   • Monitoring costs and benefits for business (non-IT) activities
   • Monitoring costs and benefits – new model
   • Ownership and accountability for costs and benefits
   • Cost–benefit analysis during the life of a project
   • It is normal for costs and benefits to change!
   • Portfolio performance monitoring
   • Cost–benefit analysis after project delivery

7  Financials
   • The main categories of IT costs
   • Ownership of IT costs for the regulation of supply and demand
   • Who has the final say for IT investments?
   • Allocations vs cross-charging
   • Capturing costs for allocations and cross-charging
   • Benefits as part of the P&L and annual planning
   • Ongoing cost–benefit analysis for applications
   • Reducing application lifetime costs
   • The limits of financial ROI when applied to IT

PART III   THE NEW MODEL IN PRACTICE

8  Players, Roles and Responsibilities
   • Players, roles and responsibilities – the business
   • Players, roles and responsibilities – IT
   • The new business–IT relationship
   • The changing role of the business analyst
   • The changing role of the developer
   • Towards the merging of the developer and analyst roles?
   • The changing role of the project manager
   • The changing role of the operations department
   • What role for PMOs?
   • The role of External Service Providers (ESPs)

9  Getting Started
   • The business challenge
   • The IT challenge
   • Where to start
   • How to start – from checklist to action plan
   • From the status quo to first results
   • From first results to asset management
   • The role of best-practice methodologies
   • How consulting companies can help
   • How tools can help
   • The costs of moving to the new model
   • In closing – addressing the three fundamental questions
   • Further reading

10  Case Study
   • The company
   • The business problem
   • The project context
   • Building an IT–business partnership
   • Kicking off the project
   • Feasibility study and defining a solution
   • Building the business case
   • Project approach
   • Product evaluation – buy or build decision
   • Building a prototype
   • Results
   • Timescales
   • Three months later
   • One year later
   • Two years later
   • Main lessons learnt (on the plus side)
   • Main lessons learnt (on the minus side)
   • Comments with respect to the new model
   • Reader feedback

Index
Introduction
Insanity is doing the same things over and over again and expecting different results. (Albert Einstein)
Why this book

I have been in the IT profession for over 25 years, during which time the industry has gone through one of the most rapid rates of technological change in its history – one could almost say in history, period. Today, systems are more often bought than built, and IT project cycles are measured in months rather than years. And yet, despite these monumental changes, I still feel as if I’m in a kind of time-warp: the issues we used to grapple with when I was a programmer-analyst in shorts are still very much with us today, namely how to deliver reliable solutions in acceptable time frames, at acceptable costs and with clear business benefits. That is very different from delivering solutions on time, within budget and to spec – concepts which, as we’ll see in this book, don’t lend themselves very well to the IT profession.

When I started out in IT in the early 1980s, we didn’t have all the answers. Today, though, we can be reasonably sure of what they are; while there will always be new technologies and management trends to learn, we can comfortably say that the four decades from the 1970s to the new millennium have provided us with a sufficiently critical mass of experience and lessons learnt. But this is where the logic breaks down: if success is the result of experience, trial and error, then IT should be one of the most successful sectors out there! Certainly, the average IT organization has had more than enough experience in terms of failed projects, budget overruns and user dissatisfaction. And yet, the time-warp continues.
Einstein once said that insanity is doing the same things over and over again and expecting different results. By this definition, IT as a profession is totally insane! We are ultimately like Bill Murray in the film Groundhog Day: we get up every morning and do essentially the same things, hoping that this time round our lives will get back on track. Like the hero in the film, I think that things will only change once we finally accept the truth – and do something about it. The result is this book.
What this book is

This book identifies the fundamental reasons why, despite the enormous technological progress in computers over the past 50 years, most IT departments remain unsuccessful, where success is defined as the ability to deliver reliable solutions in acceptable time frames, at acceptable costs and with clear business benefits.

Most books which try to explain this situation focus on one or more of the usual suspects like project management, governance or best-practice methodologies. As laudable and necessary as most of these topics may be, they don’t address the root causes of the problem. In other words, you can score full marks on any of these components, e.g. have excellent project management and a CMMi maturity level 3 rating, but still be unsuccessful in the overall scheme of things.

This book therefore takes a step back and challenges the fundamental model under which IT operates, which likens building software to the construction industry, with the IT department the equivalent of a contractor who is supposed to deliver systems on schedule, within budget and to spec. As we all know, this rarely happens, yet this premise drives the whole way that IT departments have been operating from the beginning, with an over-emphasis on contractual obligations and compliance rather than on the actual delivery of workable results.

This book proposes an alternative model by walking through the end-to-end processes of an IT department, covering subjects like demand management, investment planning, iterative development and application-level asset management – including the roles and responsibilities required to make it all happen, and proposals for how to get started. In a radical departure from conventional wisdom, one of the most fundamental things this book will show is that the delivery date of a project really represents the first key milestone after which the rest of the work starts, rather than the end point of a sacrosanct project plan after which any additional work is considered an anomaly.

The ultimate objective of this book is for readers 10–20 years from now to be able to look back on the traditional IT business model and say ‘to think we used to work like that in the twentieth century!’.
What this book is not

This book is not about concepts, management trends or strategy. Nor is it about IT value, governance strategies, running IT like a business, etc. Not directly in any case, because it would be impossible to talk about IT in the real world without referring to some of them. However, they will logically emerge at the appropriate time, usually indirectly as supporting arguments rather than as separate subjects needing to be developed. Finally, needless to say, this book is definitely not about technology.
Who this book is for

This book is for all those constituents – from ‘grunts to execs’ – with a vested interest in having an IT department that meets business needs in acceptable time frames and at acceptable costs:

• The whole IT department, and not just the CIO and senior IT management. It is unfortunate that the vast majority of books for IT target either executives, with a focus on management and strategy, or practitioners, with a focus on technology. This book tries to bridge that gap by talking to both segments, because successfully changing IT will require buy-in from across the board.
• The rest of the business, from executive sponsors to actual users of IT systems.
Qualifiers

• IT Projects vs Business Projects: Though all IT projects or IT-enabled projects are ultimately business projects, we will use the term IT projects to underline the fact that IT is the enabler (as opposed to ‘non-IT’ or business projects like, for example, building a new factory).
• IT vs IS: Though some organizations distinguish between IT (Information Technology, or the operations group that manages the infrastructure and ‘keeps the lights on’) and IS (Information Systems, or the development groups in contact with the business and responsible for developing and delivering production applications), such a distinction would be overkill for this book. We therefore use the general term IT to refer to the entire IT department. This happens to correspond to how the rest of the business views the IT department anyway.
• New model? What new model? In this book we will often refer to ‘the new model’ or ‘the proposed model’ as an alternative to the traditional model. Most of the recommendations, however – e.g. iterative development or portfolio-based investment planning – are anything but new. Indeed, most readers will probably already have come across them in the course of their work, by reading books or articles or by attending industry events. However, such tips, techniques and approaches are usually presented as ‘point solutions’ to a particular problem, e.g. to increase the quality of requirements or to increase business alignment – but rarely as part of an overall solution which tries to address the fundamental way in which we work from an end-to-end perspective. This explains why it is very difficult to find a book that talks about, for example, both iterative development and portfolio management as solutions to a common problem. Even those recommendations which I’d like to think originate in this book (e.g. investment planning based on estimates rather than contractual numbers, and application-based asset management) will no doubt already be in use somewhere in the world. In putting together this ‘new’ business model, therefore, what I have essentially done is to draw on my own experiences and on those of a whole industry, and to cherry-pick those approaches which I believe provide a good starting point for a fundamentally new way of working.
Reader feedback

In an Internet age, writing is fortunately no longer a one-way street. I therefore actively encourage your feedback: visit my website at www.michaelgentle.com, where you can share your thoughts – and even respond to a snap survey (one question only – we’re all busy people!) about how useful you found this book.
Acknowledgements
As any author knows, you can’t successfully bring a book to market without the help of others (besides the publisher). It’s one thing having a great idea and producing a draft manuscript from it, but it’s quite another thing altogether turning it into a finished product. The 80/20 rule applies to writing too: 80% of the total effort is required to produce the first draft of a manuscript – inevitably replete with inaccuracies, inconsistencies and bad grammar – and the remaining 20% is necessary to turn it into something which people will actually read.

I would therefore like to pay a sincere tribute to all those people who had the unenviable task of finding time in their already-busy schedules to critically read through all or part of the manuscript. Not only did they manage to highlight inconsistencies and areas of disagreement, but they also helped to round off some of the rough edges of my writing style. So many thanks to (in alphabetical order):

• Anne Barraque-Curie, Associate Program Director, Gartner Executive Programs, who somehow managed to squeeze a review into a very busy time of the year, and whose feedback on the first draft reassured me that the main thesis of the book rested on a firm foundation.
• Catherine Lewko, VP of Finance at Danone, whose unique combination of finance, business and IT experience in global companies (few people manage to successfully straddle all three areas) helped to firm up the financial elements in this book.
• Frederic Dufour, pharma industry consultant and former Business Planning and Reporting Manager at GlaxoSmithKline, whose real-world experiences in working with IT enabled him to validate the main themes of this book.
• John Mahoney, VP Distinguished Analyst at Gartner, who accepted my out-of-the-blue request to review the proposal, even though we had only spoken very briefly after one of John’s sessions at a Gartner ITExpo event.
• Mitch Betts, Executive Editor at Computerworld, who agreed to review the proposal at a very busy time of the newspaper year. Mitch has been present throughout my writing career, from rookie Computerworld columnist to the author of two books.
• Muriel Meyer, Senior Manager at Accenture, whose extensive international experience in multiple companies and sectors enabled her to relate to the key themes of this book.
• Rick Moreau, Changepoint Pre-Sales Director at Compuware, whose critical, end-to-end review of the first manuscript was instrumental in setting the scene for the rest of the book. And as if this were not enough, Rick was always present to act as a sounding board for the many ideas I was always coming up with. I could not have asked for a better advisor.
• Rob Austin, Associate Professor at Harvard Business School, who graciously accepted my out-of-the-blue email request to review the proposal, even though he didn’t know me from Adam (he probably had a poor spam filter…). Rob also unwittingly played a part in reassuring me that I was on firm ground when, halfway through the first draft, I read an article of his which dovetails well with the key themes in this book.
• Robert Gentle, my twin brother and fellow author, who probably had the worst task of all in that he had to review the ‘beta release’ that preceded the first draft – and my, did it need correcting from a style perspective! Robert then went on to help me produce the final proposal.
• Rosie Kemp, Commissioning Editor at Wiley, who, with a combination of patience and dogged insistence, succeeded brilliantly in getting the initially approved manuscript through to final production grade. I am particularly grateful to Rosie’s anonymous reviewers, each and every one of whom provided invaluable feedback, both from a content and a style perspective.
• Steve Beaumont, EMEA PPM Presales Manager at Hewlett Packard, whose insight from his current role, combined with his prior experience as a big-X consultant, enabled him to validate the overall thesis.
• Thomas Cronje, FTS Consultant at Compuware, for providing the title suggestion ‘IT Success!’ – though the clever word play that went with it unfortunately didn’t make it!
I would also like to thank the following people and organizations from a copyright perspective:

• A Distinguished Analyst at Gartner, for allowing me to use a quote of hers on what constitutes project success, which I heard at a session at Gartner PPM in Geneva in Dec 2006.
• Justin Hourigan, who allowed me to use the famous ‘What the customer asked for’ cartoon from his website www.projectcartoon.com
• John Mahoney, VP Distinguished Analyst at Gartner, who let me use references to his concepts about IT organizational trends, which I’ve heard John talk about at a number of sessions at various Gartner ITExpo events.
• Martin Curley, who let me use a paragraph from his book Managing Information Technology for Business Value (Intel Press, 2004).
• Read Fleming, Director of Technology at the aviation consulting firm SH&E, for allowing me to adapt his ‘Five ages of methodology sophistication’ maturity model, which for me is still the most readable and understandable way of explaining what a maturity model is.
• Wiley, the publisher of The Technology Garden (2007), for allowing me to use a paragraph from that book on the client–vendor relationship between IT and the rest of the business.

Finally, this book would never have seen the light of day without the support of my wife and children, who accepted the family and career constraints that were required for me to balance writing and my day job.
Abbreviations

ABC      Activity-Based Costing
APM      Application Portfolio Management
ASD      Adaptive Software Development
AUP      Agile Unified Process
B-to-B   Business-to-Business
B-to-C   Business-to-Consumer
BPR      Business Process Re-engineering
BU       Business Unit
Capex    Capital expenditure
CEO      Chief Executive Officer
CFO      Chief Financial Officer
CIO      Chief Information Officer
CMDB     Configuration Management Database
CMM      Capability Maturity Model
CMMi     Capability Maturity Model integration
CoBIT    Control Objectives for Information and Related Technology
CRM      Customer Relationship Management
CYA      Cover your Ass
DBA      Database Administrator
DSDM     Dynamic Systems Development Method
ERP      Enterprise Resource Planning
ESP      External Service Provider
FAQ      Frequently-Asked Question
FDD      Feature-Driven Development
HR       Human Resources
IS       Information Systems
IT       Information Technology
ITAM     IT Asset Management
ITIL     IT Infrastructure Library
JAD      Joint Application Design
MBMA     Management by Magazine Article
MD       Managing Director
Opex     Operating expense
PABX     Private Automatic Branch Exchange
PD       Participatory Design
PMBOK    Project Management Body of Knowledge
PMO      Project (or Programme or Portfolio) Management Office
PPM      Project Portfolio Management
Prince2  Projects in Controlled Environments
P&L      Profit and Loss
QA       Quality Assurance
RAD      Rapid Application Development
RFI      Request for Information
RFP      Request for Proposal
ROI      Return on Investment
RUP      Rational Unified Process
SaaS     Software as a Service
SFA      Sales Force Automation
SLA      Service Level Agreement
SME      Small and Medium Enterprise
SOA      Service-Oriented Architecture
SoR      Statement of Requirements
SoW      Statement of Work
XP       Extreme Programming
Part I Blinded by Specs
1
In Search of Excellence – the Fundamentals
Every truth passes through three stages before it is recognized. In the first it is ridiculed, in the second it is opposed, in the third it is regarded as self-evident. (Arthur Schopenhauer, German philosopher, 1788–1860)
The more things change, the more they stay the same

Technological change is usually associated with progress. Airbags, electronics and fuel-efficient engines have made cars safer, more comfortable and more reliable. Jetliners with the latest in engine technology, materials and avionics have made flying safer, cheaper and quieter. The modern world abounds with similar examples, from consumer electronics to mobile telephones.

There is one sector in which the rate of technological change has easily surpassed the above examples by orders of magnitude, and that of course is the computer industry. It has gone from mainframe to minicomputer (remember those?) to PCs, and now to the Internet. In the enterprise, commercial computing moved from the glass house to the desktop, dropping its price-tag a hundred million-fold in the process. And all of this was achieved within the short space of 25 years, as opposed to over 100 years for cars and planes. We’ve probably all heard the comparison about a Rolls-Royce costing a few dollars and getting a million miles to the gallon if it had followed the same rate of progress as computers.

So what has this new generation of faster, better and cheaper computers actually brought in terms of progress (in the workplace – this book is not about computers in the home)? Well, to start with, around 80% of employees in the average company in the developed world have a computer on their desks today, as opposed to less than 10% in the early 80s. They also use these computers for most of their working day, as opposed to only 1–2 hours previously. But probably most important, unlike our car drivers and plane passengers, who are essentially still performing the same activity they did a century ago – driving or flying from point A to point B, only with more modern technology – the vast majority of computer users in the enterprise today are doing things that were not even possible 10–20 years ago, e.g. using spreadsheets, word processors, enterprise software
and the Internet to automate and transform their business processes – and even invent new ones, made possible by the new technology. So, you might say, if that’s not progress, what is? In short, where is the problem? Why are you reading this book? Well, let’s go and talk to Marina, Steven and Kevin, who work for Acme, your modern, everyday corporation, to try and find out.
A worldwide phenomenon

Marina is a Regional Sales Director in one of Acme’s business units (BUs). Having worked her way up through various positions in sales and marketing, both here at Acme and at other companies, she has been through many IT projects and considers herself a typical user of information technology in the workplace. While she generally recognizes the overall benefits of IT – after all, she wouldn’t be able to do her job properly without it – she and her colleagues wouldn’t need much prodding to launch into a litany of ills about having to deal with an IT department that: doesn’t understand them; is unresponsive; makes them fill out forms to get things done; delivers software solutions that don’t correspond to the way they really work; invariably delivers them late and not properly tested – and sometimes actually wants to cross-charge them for it all! What’s particularly disturbing for Marina is that, based on conversations she’s had with friends and colleagues from other companies – and other countries, since Marina works internationally – her own experience is nothing exceptional. She has no idea how so many different IT departments around the world can all be afflicted with the same fundamental problems.

Steven is a Project Manager in the IT department. As part of a team that provides software solutions for people like Marina, he clearly recognizes the benefits of information technology – after all, that’s what his job is about. Having worked his way up from programmer analyst to consultant to project manager, he has worked on multiple projects in multiple companies and considers himself a good IT professional. However, in the very next breath he would probably tell you just what he thinks of users who: don’t know what they really want; change their minds every week; never come to meetings to sign off on requirements and can’t be bothered to perform user validation testing. The IT department he works in has far more work than it can handle (most of it high priority, naturally), and often ends up selecting projects based on ‘decibel management’, which means catering to those executives who shout the loudest, rather than on any rational decision-making process.

Kevin is the CEO. He doesn’t understand or even care much about IT – it wasn’t part of his generation when he was growing up, and he considers it an achievement to be even using email. What he does understand though are costs, and his IT budget has been steadily increasing over the past 10 years, and is now sitting at 5% of total revenue and accounts for almost
50% of capital spend (and that’s just the visible part owned by the CIO – he shudders to think of the disguised IT budgets sitting in the operational budgets of the various BUs). He’s got no idea whether this is normal or not, but he does know that the company cannot function without these systems. And he can’t understand why it’s so difficult for the CIO to explain his cost base – where does all the money go, and why is it so difficult to calculate an ROI for all these exotic three-letter acronyms? Finally, it doesn’t help that there’s always one or two board members constantly complaining about IT’s inadequacies. Maybe they should just get another CIO – this one doesn’t seem much better than the last three (‘gosh, have there been that many already?’). Or maybe just get rid of the problem and outsource all or part of IT – after all, others are doing it.

Regardless of where you work, from America to New Zealand, and regardless of what your company makes, from cars to cosmetics, you will probably have no trouble identifying with Marina, Steven or Kevin above. Which is pretty scary when you come to think of it: either a goblin has cast a spell on a whole profession – or that profession is doing something fundamentally wrong.

If IT doesn’t work in one company, you could justifiably say that you might have a company problem. If the same symptoms are now visible in most companies in a given sector, then you could say you’ve got an industry-specific problem. If IT generally still doesn’t work in a whole country, then you might get away with saying – at a stretch – that there are some dominant cultural characteristics at play which might explain why the Americans or the French or the Japanese or whoever just don’t get it. But when you end up with identical symptoms in companies of all sizes and all sectors in countries across five continents, then maybe it’s time to step back and re-examine conventional wisdom.

In summary, despite the amazing technological advances in IT and the proliferation of computers in the workplace, the IT department is still perceived as a combination of one or more of the following: a bunch of technical people incapable of communicating in business terms, profligate in its ways, unable to cost-justify its spending, almost always delivering late and over-budget, and finally providing unsatisfactory service to generally dissatisfied users. In the rest of this chapter, we will show how this situation came about, and why it still exists in the twenty-first century.
How the traditional IT model started

In any new field, people naturally turn to similar or analogous activities for guidance on how the new one should work. Then over time, through trial and error, the fundamentals begin to emerge, and previously held assumptions are correspondingly validated, adjusted or rejected.
For example, the very first television shows were essentially modelled on radio, with the main novelty being that you could now see people in addition to hearing them speak or sing. Of course, it didn’t take long for the industry to figure out that TV allowed you to do much more than simply move a camera into the radio studio. Another example is the auto industry, whose first cars were essentially horse-drawn carriages with the horse replaced by an engine (hence the term horsepower). Again, it didn’t take long for the automobile to take on a shape of its own, driven as much by technological progress as by the ability of designers and engineers to move away from the horse-drawn paradigm.

For IT, building systems was initially modelled on the construction industry, with the IT department the equivalent of a contractor who is supposed to deliver systems on schedule, within budget and to spec. Unfortunately, since building software has little to do with building houses, despite appearances, this analogy led us into a trap, which we will now examine.
The construction industry trap

Drawing on the construction industry, the IT department would develop systems for the rest of the business through a standard client–vendor relationship, based on a contractually signed-off requirements specifications document. This would then drive a sequential ‘waterfall’ method, with its strict linear approach from analysis, design, development and testing through to implementation, with each phase performed by different teams of specialists. This proved to be a non-starter, and generally remains so to this day.

You can specify requirements for a house because the desired outcome is relatively easy to conceive and visualize. You can then have it built ‘to spec’ by a vendor because the corresponding specifications (weights, dimensions and forces) cover standard mechanical components (beams, widgets, tiles) and draw on the hard sciences (physics, engineering and mathematics) to produce relatively predictable results. In the construction industry, you can therefore separate the design phase (which constitutes on average less than 20% of the total effort) from the construction phase (which accounts for at least 80%) and have them done by different teams. You are also spared the burden of testing – after all, once you’ve calculated the maximum allowable stress for a beam based on the force of gravity and the strength of the materials you are using, then you can rest assured that it’s not going to collapse.

Human behaviour, however, which is what most business processes are about, is another matter altogether. You cannot come even close to fully and accurately specifying requirements because it is not easy to imagine or visualize the final outcome, since you are usually trying to do something you haven’t done before. Plus, the business is constantly changing, which makes it a moving target anyway. A team of programmers will then try
and convert these imperfect specifications into a workable product using the ‘soft sciences’ of programming logic and software configuration. When building software, it is therefore very difficult to separate the design phase (which can account for up to 80% of the total effort) from the construction phase (which only accounts for 20%). Not only that, but you now also have to do a lot of testing to ensure it all hangs together. For software development therefore, the final product will necessarily be based on interpretation and assumptions, and while it might correspond to documented requirements, it stands very little chance of corresponding to actual requirements.
The free lunch trap

In parallel to the above, organizations fell into another trap concerning the economics of supply and demand, and the question of who would pay for all these new systems. In the beginning, the regulation of supply and demand was not a big issue because the technology was so expensive and clunky, and the concept of computers in the enterprise so new anyway, that its initial use was limited to things like accounting and payroll.

However, as computer technology progressed and became more affordable, corporate IT became more heterogeneous, with mainframes in the 1960s being joined by minicomputers in the 1970s. Departmental IT became possible, with cheaper minicomputers beginning to take root and software vendors appearing, thus increasing the options available to the business. By the 1980s, decentralization was in full swing and the first wave of – often evangelical – microcomputer users started sensing that their time had come. With this evolution from the sixties to the eighties came vastly increased possibilities for the use of information technology, and the initial driver for developing systems moved from mere cost savings to increasing revenue and even competitive differentiation. In short, more and more parts of the business were beginning to ask for more and more things from IT.

The upshot of all this was the inversion of the supply and demand curves, i.e. the demand for IT products and services had by now significantly outstripped the IT department’s ability to supply more than a fraction of it. In any case, the IT department was no longer the sole supplier – there were software package vendors, plus lots of consulting companies more than willing to jump in and build software for departments with money to spend.

The regulation of supply and demand therefore became a key requirement. The economy has interest rates and pricing to help regulate supply and demand, but IT was not to be so
lucky. Most IT budgets are centrally funded as a corporate cost centre. For example, a Forrester survey (The State of IT Governance in Europe, 28th Sept 2005) of 518 European senior IT and business decision-makers found that two-thirds of companies (40% of which had revenues greater than €1b) fund their IT as a cost centre out of the corporate budget. The same is true of similarly sized American companies, though slightly less so, with 60% of companies funding their IT centrally, as opposed to two-thirds for the Europeans. And according to another survey, The State of the CIO 2006 on www.cio.com, 60% of all IT organizations control their spending centrally.

In the absence of an adequate pricing mechanism, IT became essentially ‘free’ for users, who could ask for almost anything (inevitably high priority, and with or without a serious cost/benefit analysis), which IT would then have to pay for and deliver – with expectations of 100% perfect service levels to boot. Even when some sort of chargeback mechanism existed (usually in the form of annual cost allocations), it rarely had the desired effect of regulating demand, for a number of reasons.

Firstly, because allocations are usually buried in annual overhead, they are not adequately communicated to the actual users whose behaviour they are supposed to influence – indeed, those users might not even be aware that they are paying for IT. Secondly, allocations are often calculated based on some ‘voodoo formula’, which means that clients usually don’t know what they are really being billed for. For example, the IT department of a major financial services company which was implementing a time-tracking system actively resisted cost allocations based on actual costs – which it could now readily obtain from the new system – presumably because it feared that such transparency would reveal the shaky foundation on which its voodoo formula was based. Finally, it was very difficult for IT to explain to its clients what business benefits they were getting in return, so the net result was inevitably conflictual. In the worst-case scenario, the law of unintended consequences applied and chargebacks actually had the effect of generating demand, the reasoning being that ‘since we’re going to pay for it anyway, we might as well ask for as much as possible’.

IT therefore ended up as a black hole into which the rest of the business could pour work requests at will, and whose volume would always exceed its capacity to deliver. In The State of the CIO 2006 survey on www.cio.com, CIOs rated ‘the overwhelming backlog of requests and proposals’ as their biggest barrier to job effectiveness. Like government-funded healthcare in most of the Western world, when the users are not the payers, it becomes difficult to regulate supply and demand.
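To make the allocation problem concrete, here is a minimal, purely illustrative sketch in Python. The departments, headcounts and amounts are invented for the example and do not come from the book or any survey; the point is simply that a flat, headcount-based allocation buried in overhead bears no relation to what a department actually asks of IT, so it sends no price signal that could moderate demand.

```python
# Illustrative only: all names and numbers are invented for this example.
it_budget = 10_000_000  # total annual IT cost to be recovered from the business

# department -> (headcount, share of actual IT consumption: projects, support, infrastructure)
departments = {
    "Sales":      (300, 0.50),
    "Finance":    (100, 0.15),
    "Operations": (600, 0.35),
}

total_headcount = sum(headcount for headcount, _ in departments.values())

print(f"{'Department':<12}{'Flat allocation':>18}{'Usage-based charge':>22}")
for name, (headcount, usage_share) in departments.items():
    flat = it_budget * headcount / total_headcount  # buried in annual overhead
    usage = it_budget * usage_share                 # a bill that tracks actual demand
    print(f"{name:<12}{flat:>18,.0f}{usage:>22,.0f}")
```

In this invented example, Sales would pay 3 million under the flat allocation while consuming 5 million worth of IT, and Operations would pay 6 million while consuming 3.5 million: the heaviest consumer gets the weakest incentive to moderate its requests, which is exactly the ‘free lunch’ dynamic described above.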
Houses of ill repute

At this stage let us introduce the notion of a business model, which at its simplest describes how a company builds and sells its products, at what costs and margins, and
how it interacts with its customers. The construction industry trap and the free lunch trap above had far-reaching consequences for the IT business model. The first one ensured that IT would rarely be able to deliver systems that really met business requirements, thereby setting itself up for one or more cycles of ‘corrective maintenance’ (an oxymoron really). The second one ensured that users would always be fundamentally dissatisfied with IT, because if something desirable is essentially free, then by definition they will end up asking for more than what can be physically and economically delivered.

Either of these two factors by itself was a big enough challenge. But the two combined were to prove extremely damaging over time, and gave rise to a number of other ills which, though they seem unrelated at first sight, can ultimately all be traced back to one or both of these factors. One of the most visible results was an adversarial relationship between IT and the rest of the business (Users – ‘not only was it delivered late, it’s not really what we asked for!’. IT – ‘we delivered to spec – they don’t know what they want!’). Needless to say, this provided fertile ground for vendors and consultants to hype new concepts and technologies, with the implicit message that they could probably do it better and quicker than the IT department – thereby further driving a wedge between the two.

Equally visible, especially for the CFO, was the inability to track costs and benefits:

• Instead of properly quantifying and tracking the life-cycle costs for each of the systems it delivers, IT usually gets away with simply measuring aggregate costs for infrastructure (hardware, networking, etc.) and activities (development, maintenance, help desk, etc.) – see the sketch after this list. And this doesn’t even include the non-IT costs on the business side from the people who manage, coordinate and support applications and their underlying data.
• On the business side, users are either unable or unwilling to properly quantify and track the business benefits of the systems they use, e.g. order cycle time, sales cost per order or first call resolution rate. Amazingly, some business users even expect IT to do this – which would be the equivalent of an airline asking the maintenance teams in the hangars to track the business benefits of the aircraft it uses. In practice, business benefits serve mainly to get a project launched – usually on the basis of a subjective business case – after which there is little or no incentive to check that the benefits actually materialize as planned. Which in a way confirms the subjectivity of the business case – nobody bothers verifying the numbers because they know them to be suspect anyway.

Being unable to track costs and benefits means that we cannot calculate ROI (though as we’ll see later, the notion of ROI in the strictly financial sense of the term is not particularly well-suited to IT, and needs to be replaced with other criteria).
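As a concrete illustration of the first bullet above, here is a small, hypothetical sketch in Python (application names and amounts are invented, not taken from the book): the same annual spend is shown first in the aggregate, activity-based view that IT departments typically report, and then in the per-application, life-cycle view that would be needed to set costs against the benefits of each system.

```python
# Hypothetical figures for illustration only.
# Annual costs per application, broken down by activity.
app_costs = {
    "CRM":         {"development": 400_000, "maintenance": 250_000, "infrastructure": 150_000, "help desk": 100_000},
    "Order entry": {"development": 100_000, "maintenance": 300_000, "infrastructure": 200_000, "help desk": 150_000},
    "Payroll":     {"development":  50_000, "maintenance": 100_000, "infrastructure":  80_000, "help desk":  70_000},
}

# The usual aggregate view: one total per activity, with no link to any application.
by_activity = {}
for costs in app_costs.values():
    for activity, amount in costs.items():
        by_activity[activity] = by_activity.get(activity, 0) + amount

# The per-application view: what each system costs to own and run over the year,
# which is the figure a per-system cost-benefit comparison would require.
by_application = {app: sum(costs.values()) for app, costs in app_costs.items()}

print("By activity:   ", by_activity)
print("By application:", by_application)
```

Both views add up to the same total, but only the second can be matched against the business benefits of a given system, which is what the application-level asset management discussed later in the book depends on.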
From an internal operational perspective, IT is characterized by a general inability to properly manage basic processes – when they are managed at all – like capturing and prioritizing demand, allocating resources based on business objectives, monitoring the performance of work delivery, and providing general operational reporting on what’s going in at one end, what’s coming out at the other, and what’s happening in between. Any properly run business has to be able to do this to manage customers, orders, production and delivery, and to generally anticipate events rather than be driven by them. IT generally has very little understanding of its demand and supply chains, and would have a hard time answering fundamental questions like ‘what is currently in the pipe?’ or ‘what do we have to deliver over the next 6 months?’ or ‘what is our current and projected resource utilization rate?’

And finally, there is the deadly combination of all the above: a department whose budget can be anywhere from 2–10% of total revenue (and even more for IT-intensive sectors like banking, insurance and telecommunications) and up to 50% of capital spend is unable to actually demonstrate how it is contributing to business objectives.
A business problem rather than an IT problem

That companies turned to the construction industry as a starting point for the IT business model is quite understandable, given the many similarities on the surface (‘So you want to automate your claims processing? Tell me exactly what you want and I’ll build it for you’). That companies failed to anticipate how decentralized computing would result in demand outstripping supply, and how it would impact the underlying economics, is also understandable – it would be unreasonable to expect a mainly technical organization to have that level of financial and economic foresight.

What will, however, remain one of the great mysteries of the corporate era is why the many years – nay, decades – of painful experience did not lead to a questioning of these two previously held assumptions. As explained in the introduction, if success is the logical result of experience, then given the above list of ills, IT should be one of the most successful sectors out there! But instead, some sort of organizational insanity took over, as companies ended up doing essentially the same things over and over again and expecting different results.

One of the most common explanations put forward to explain IT’s difficulties is the sheer rate of technological change. Whereas other industries with longer innovation cycles have lots of time to get used to new technologies, IT, as the argument goes, has to cope with product cycles of 18–24 months, and by definition will always be behind the quality curve. However, a closer look will reveal this argument to be flawed. Before the arrival of the PC in the eighties, technological change in the IT department was limited to better operating systems running on ever faster and smaller hardware. But ultimately your
average IT department was characterized by technological stability in the form of a particular hardware vendor (IBM, DEC, Data General...) whose computers ran a particular language (COBOL, PL/I, RPG...). And yet this technological stability did not result in more successful IT – the problems were essentially the same. Even today, if some benevolent technological dictator could somehow wave a magic wand and freeze all technological change, it would not change the fundamental problems, which concern IT’s ability to deliver reliable solutions in acceptable time frames, at acceptable costs and with clear business benefits.

In the absence of a valid technology argument, this situation is ultimately attributed to some sort of professional deficiency on the part of the IT department, the usual suspects being lack of mastery of some new tool or technology, poor project management or non-compliance with one of many so-called best-practice methodologies. This professional deficiency view is sometimes taken one step further by business users who wonder why the products and services they get from IT can’t be as sexy and reliable as the ‘consumer IT’ they get from the likes of Google and Microsoft.

The comparison is unfair. Firstly, Microsoft, Google et al. can produce slick, reliable products from a user perspective, and can afford to spend millions (billions?) on R&D and product development, because their market is the whole world. Our humble IT departments have limited budgets and build systems for a market of one. On this basis alone, we shouldn’t be comparing consumer IT and corporate IT. Secondly, the commercial business model used by consumer IT is built on a solid foundation of a few centuries and is working pretty well, while the same cannot be said of the IT business model, which is barely 50 years old.

At the end of the day, therefore, it is none of the above. Rather, it is the logical result of a poor business model, supported by the twin pillars of an unworkable client–vendor relationship, which assumes that building systems is like building houses, and a pricing mechanism (or lack thereof) which can only be described as an economic aberration. This ultimately explains why the monumental technological progress of the past 50 years has had little real impact on user–IT relationships and on the ability of IT to demonstrate its contribution to the business.

Consider the following. If a marketing director spends a million dollars on an unsuccessful marketing campaign, he wouldn’t dream of complaining to the CEO that his people didn’t deliver, or that the outside agency was unable to give him what he wanted – with a bit of luck, the CEO wouldn’t even know about it. If, however, the same marketing director spends a million dollars on a new CRM system that fails to deliver, he would have no qualms about complaining to the CEO about the deficiencies of the IT department, which was unable to give him what he wanted. And the CEO would probably commiserate with him and pick up the phone to summon the CIO, or have it as an
agenda item at the next board meeting. And the CIO, when eventually confronted with the subject, would probably defend his organization’s performance by demonstrating his compliance with the same flawed business model responsible for the mess in the first place.
IT and original sin

When you have a house built, whatever gets delivered will be pretty much as specified in the architectural plans. And once you start living in it, it’s hardly going to collapse, or part of it become unusable. Nor will you end up paying five times the initial cost of your house in maintenance and repair over the first 5–10 years. The technical and business risks associated with having a house built are therefore close to zero, which explains why we all have no problem taking out a mortgage, and don’t lose any sleep while the house is being built.

Unfortunately, the same is not true of an IT application, which carries a high degree of business and technological risk right from the word go, whose usefulness only really becomes apparent once you start using it, and whose costs and benefits continue to evolve significantly over time. Likening IT to the construction industry and building a business model around it therefore got a whole profession off to a wrong start. Christian theology holds that the descendants of Adam and Eve were born into original sin as a result of their eating the forbidden fruit; this unworkable business model can be likened to a sort of ‘original sin’, which condemned all successive generations of IT professionals to committing the same mistakes. (Note that whereas Adam and Eve’s descendants were at least supposedly aware of the original sin they were born into and could therefore hope to do something about it, the descendants of IT don’t seem to be aware of theirs...)

In the final analysis, for IT to work, it’s not about technology. It’s about a sound business model supported by (i) basic economics and (ii) an understanding of the realities of building systems – an activity which bears little relationship to other areas of economic activity, despite the apparent similarities which lead to frequent but erroneous comparisons. If I had a dollar or a euro for every time I heard or read about the analogy between IT and building houses (or building bridges, another common comparison), I’d already be a rich man and wouldn’t have to be writing this book...
No sacred cows

The fundamental error of reasoning in the traditional IT business model is to assume that software can be conceived upfront like a house, and subsequently scoped, spec’d and
signed off for commitment by both client and vendor – and that the documented business benefits will start flowing once the solution has been delivered. If you’re able to accept this fundamental error, then suddenly you can view a whole lot of things which we normally take for granted in an entirely different light. For example, the following statements would no longer appear outlandish (for many readers some of them probably never were outlandish – just difficult to justify under the traditional model):

• You can’t ask users to define requirements and specifications to be contractually signed off and cast in stone.
• You can’t give such requirements and specifications to an IT department (or a vendor) and ask it to define a budget and a project plan to be cast in stone, and expect it to contractually meet them.
• For an IT project, nothing is cast in stone – the business environment, costs, benefits, schedules and risk can and will change, both during the project and after delivery.
• Whatever is initially delivered on day one can never totally correspond to actual requirements, and will have to be continuously reworked and enhanced over many years in the form of new releases and versions. There is therefore no such thing as a ‘project end date’. Rather, the delivery date of an application represents the first major milestone, after which a lot of work still remains to be done.
• Strictly financial ROI is not a good criterion for selecting IT projects and later evaluating their performance. IT funding should not be subjected to the same financial considerations as other business investments like plant, property and equipment.
• While any IT department would stand to gain by emulating certain practices of external service providers (ESPs), running IT like a business from a financial P&L perspective will not address IT’s fundamental problems.
• Outsourcing (a standard client–vendor relationship) has its limits, and is only really applicable to running mature production applications. Additionally, outsourcing applications development will not address IT’s fundamental problems; this can only be successfully achieved with an internal IT department.

In order to understand why the above statements are not as provocative as they might appear at first sight, we first need to understand how the current, ‘wrong’ business model works, and where it falls down. This will then enable us to lay the foundations for the ‘right’ business model and propose ways of implementing it.
2
IT 101 – The Basics for Non-Specialists
Software should be called hardware, and hardware should be called easyware. (Anonymous)
In order for non-specialist readers from the business to understand the downsides of the traditional model, and the subsequent proposals for correcting them, it is essential to understand the basics of how an IT department operates. Anyone already working in IT can either skim very briefly through this chapter or skip it altogether – although it is recommended to read the two essential conclusions entitled ‘The fundamental difference between IT and non-IT activities’ and ‘That’s not my problem! – process ownership and behaviour’.
The process breakdown for traditional IT activities

At the most basic level, an internal ‘client’ (e.g. the marketing department) turns to its internal service provider or ‘vendor’, the IT department, to solve a business problem by delivering a solution based on information technology. The quotes around the terms ‘client’ and ‘vendor’ are deliberate, because it is not a true legal and contractual client–vendor relationship. The traditional user–IT activities would result in the following high-level processes from start to finish (see Figure 2.1):

[Figure 2.1 High-level processes for traditional IT activities: define objectives and concept; do initial cost–benefit analysis; define requirements; design; buy/build; develop; test; deliver/implement; service/support]

• Define Objectives and Concept: The client defines the business objectives she expects to meet (e.g. reduce order rejection rate by x%), and the concept which will allow her to achieve this (e.g. a product configurator).
• Do initial cost–benefit analysis: The client, with input from IT, then carries out a cost–benefit analysis whose outcome will decide approval and funding. However, because IT in most companies is centrally funded and costs do not come out of the client’s budget (or chargeback mechanisms are usually ineffective in regulating demand),
this phase is generally present in a subjective form that tends to magnify the benefits and reduce the costs, with the main objective being to get a green light to launch the project.
• Define Requirements: The client and IT jointly define the functional requirements for the solution. This describes what the system is supposed to do from a business perspective, e.g. to shorten the sales cycle by enabling credit checking within 24 hrs. This step normally ends with a requirements specifications document signed off by the client, which IT is then under a contractual obligation to deliver within a given cost and timeframe.
• Buy/Build: The IT department then either buys an off-the-shelf package or builds an in-house system to meet the specified requirements. This is then presented to the client for acceptance and verification of conformance to the original specifications.
• Deliver/Implement: The IT department then delivers and implements the solution. Implementation means that IT populates it with data, activates it and finally trains the client in its use.
• Service/Support: The IT department then runs the solution for the client from a production and service standpoint, ensuring availability, response times and support (sometimes against agreed service levels). It will also bring out new releases at regular intervals based on bug fixes and evolving requirements.

Note that though this is a fairly basic process breakdown, and some companies might have moved on in terms of adapting it to modern-day reality, it nonetheless describes very well how the average IT department still operates today.
The process breakdown for business (i.e. non-IT) activities
Let's now compare this with general business activities in the non-IT world. Here we end up with real clients and vendors, which are legal business entities with contractual relationships.
Figure 2.2 High-level processes for business (i.e. non-IT) activities (figure boxes: define objectives and concept; do initial cost-benefit analysis; define requirements; buy/build; deliver/implement; service/support; do ongoing cost-benefit analysis)
The activity can be either B-to-C (Business-to-Consumer, e.g. you or me buying a car or a house) or B-to-B (Business-to-Business, e.g. companies buying products or services from each other). In either case a client turns to a vendor to solve a problem by acquiring a particular product or service, with the following underlying processes (see Figure 2.2):
• Define Objectives and Concept: The client defines the objectives she expects to meet by purchasing the product or service. These can be explicit and based on hard numbers (more common for B-to-B), or implicit and based on intangible drivers and advertising pressure (more common for B-to-C).
• Do initial cost–benefit analysis: Since the cost of the product or service is usually coming out of the client's pocket or budget, she compares the price (or better, the annual costs – or better still, the total lifetime costs) with the expected benefits, and arrives at a decision on whether to go ahead or explore alternatives. Whether this is explicit (e.g. ROI or some equivalent) or implicit ('Should I or shouldn't I?'), a conscious decision is ultimately made in answer to the question 'Can I really afford this?' The answer becomes a pre-requisite for deciding whether to move ahead and buy, seek alternative solutions or simply defer the subject to a future date.
• Define Requirements: The client defines the requirements for the product or service. If she is sufficiently familiar with the sector or has the appropriate level of expertise, she will be able to go further and identify a short-list of corresponding products or services to evaluate. If not, she will rely on advice from some third party. This step normally ends with a short-list and a 'requirements specifications' document to give to the corresponding vendors for a quote or proposal.
• Buy/Build: The client then buys the chosen product or service, which either she or the vendor will build or assemble, depending on the client's level of expertise.
• Deliver/Implement: The vendor then delivers and implements the product or service. Implementation means the vendor installs it and/or activates it and trains the client in its use. Alternatively, this can be done by the client herself if she is sufficiently familiar with the product and has the appropriate level of expertise. Where appropriate, this ends with a contractual acceptance and verification of conformance with respect to the original specifications or contract – sometimes backed up by penalties for non-conformance.
• Service/Support: If the client is sufficiently familiar with the product or has the appropriate level of expertise, she will be able to service and support it herself. In most cases, though, service and support will be provided by the vendor or a specialist third party.
• Do ongoing cost–benefit analysis: Once she starts using the product or service, the client will monitor and measure actual costs and benefits in an attempt to verify the initial cost–benefit analysis. The answer will ultimately become a pre-requisite for deciding whether to continue using it, or to stop, cut one's losses and seek alternative solutions – or, depending on the financial impact, to simply absorb the losses and chalk it up to experience.
The fundamental difference between IT and non-IT activities
Comparing the process breakdown for traditional IT activities (Figure 2.1) and general business activities (Figure 2.2), we can see that when the costs are not coming out of the client's pocket or budget (or when chargeback mechanisms are not effective in regulating demand):
• The initial cost–benefit analysis phase is generally present in a subjective form, which tends to magnify the benefits and reduce the costs, with the main objective being to get a green light to proceed.
• The ongoing cost–benefit analysis phase is generally absent – or is present in a subjective form that tends to magnify the benefits and reduce the costs, with the main objective being to justify an investment that is not yielding the expected returns.
We can draw a fundamental conclusion here, namely that the concept of a cost–benefit analysis is really only valid if both the costs and the benefits apply to the client from an accounting perspective. If not, then the product or service is essentially 'free', and there is no real requirement for a cost–benefit analysis 'with teeth', i.e. one which can be used not just to approve a project, but also to withhold or cancel its funding if it is not living up to expectations. It's like your teenager who asks you to buy him or her the latest fashion wear or consumer device, which they absolutely 'gotta have'. As long as you're doing the paying, they can always be counted on to apply the required ingenuity to come up with some sort of justification. But if they have to buy it with their own, limited money, e.g. the earnings from last summer's job, then they will weigh the pros and cons and compare it to all the other things they absolutely 'gotta have'.
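To make the accounting point concrete, here is a minimal sketch of a cost–benefit check 'with teeth' (an illustration only, not a method from this book; the figures and the function name are invented):

    # Illustrative sketch: a cost-benefit analysis only has 'teeth' if the costs
    # actually hit the client's own budget.
    def net_value_to_client(benefits, costs, costs_hit_client_budget):
        """Project value as seen from the client's own P&L."""
        costs_borne = costs if costs_hit_client_budget else 0  # centrally funded = 'free'
        return benefits - costs_borne

    benefits, costs = 400_000, 550_000  # hypothetical project figures

    # Centrally funded IT: the project looks attractive even though it destroys value.
    print(net_value_to_client(benefits, costs, False))  # 400000

    # Costs charged to the client's budget: the same project is now rejected.
    print(net_value_to_client(benefits, costs, True))   # -150000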
Figure 2.3 High-level processes for a telephone pizza parlour (take order: input is the name, address, pizza type and quantity, output is a valid order; make pizza: output is pizzas; deliver pizza: output is pizzas delivered)
'That's not my problem!' – process ownership and behaviour
Economic activities are based on processes, which are the underlying tasks that people do in order to produce goods or services. An understanding of processes is therefore essential to be able to build a business model. Processes have inputs (information in one form or another), activities (people doing something based on that information) and outputs, also known as deliverables (whatever the process produces or generates). Let's take the simple example of a telephone pizza parlour (Figure 2.3). For the process 'Take order', the input would be a customer name, address, pizza types and quantities. Using this information, the subsequent activity would be the receptionist entering the order into an order-entry system, or simply noting it down on an order pad. Finally, the output or deliverable would be a valid order for the kitchen, which then ends up as input for the next process, 'Make pizza'. Processes have 'owners', who are the people ultimately responsible, from an accountability perspective, for the quality of the deliverables and the cost of producing them (Figure 2.4). Note that this might or might not be the same person who actually carries out the activity – for example, for the process 'Take order', the receptionist is both the process owner and the person who carries out the work, whereas the process 'Make pizza' might be owned by a Kitchen Manager, who wouldn't necessarily also make the pizzas. OK, so much for theory – now let's get down to the real world of hungry people wanting to eat and see what type of behaviour this is going to generate in our pizza parlour.
Figure 2.4 Process owners for a telephone pizza parlour (take order – owner: Receptionist; make pizza – owner: Kitchen Manager; deliver pizza – owner: Delivery Manager)
For the process 'Take order', if a customer places an order for five pepperoni and pineapple pizzas and provides a valid delivery address, then whether those pizzas are for the kids or for the dog is of no concern to the receptionist. And as she once again ponders how anyone could want to eat pineapple on a pizza, she ends up making a mistake on the order pad. For the next process, 'Make pizza', when the order ends up in the kitchen as four pizzas instead of five, then not only is that not the kitchen's problem, it might not even be aware of the error. As for the delivery boy... This leads us to two fundamental rules that apply to processes:
• A process owner is generally not concerned about what happens upstream of his process (except if the quality of the inputs affects the cost or quality of his own deliverables).
• A process owner is generally not concerned about what happens downstream of his process once he has produced his deliverables (quality and safety issues notwithstanding).
If each process owner is then incentivized and rewarded based on a narrow definition of his own process, e.g. the kitchen manager on pizza quality and throughput, then the upstream or downstream processes could be performing less than acceptably without affecting his own performance. For example, even if the receptionist often takes wrong orders, as long as the kitchen manager makes those pizzas and they're of good quality, he would still get full marks. After a while, the pizza parlour manager would hopefully become aware of the rate of inaccurate orders and do something about the receptionist or the order process. But until he does so, the kitchen manager would not be concerned about what was happening outside of his narrow activity (which is just as well, because who ever heard of a kitchen manager in a pizza parlour...?). Moving to an enterprise example now, let's imagine a marketing department which asks a copy firm to produce 100 copies of the internal newsletter on glossy paper for next week's seminar. The copy firm will produce the newsletters without inquiring as to the reasons why – or remarking that the format of the newsletter is so unreadable that it stands little chance of being read anyway. The copy firm is in business to produce whatever the customer gives it, and if it's a poorly designed and virtually unreadable newsletter, well, that's really not its problem. In summary, when work gets broken down into discrete processes, with well-defined deliverables and start and end points, then the performance criteria of each process are defined – and rewarded – based on measurements confined to that process. Anything upstream or downstream can usually be summed up as 'that's not my problem', or put more positively, 'I can't be responsible for that as well, since it belongs to another process over which I have no influence – and for which I'm not paid'.
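For readers who think in terms of data structures, the pizza parlour of Figures 2.3 and 2.4 can be sketched as follows. This is purely illustrative (nothing in the book prescribes a model of this kind): each process has an input, a deliverable and an owner, and each owner is measured only on his own deliverable.

    # Illustrative sketch of processes with inputs, deliverables and owners.
    from dataclasses import dataclass

    @dataclass
    class Process:
        name: str
        owner: str          # accountable for deliverable quality and cost
        input: str          # what arrives from the upstream process
        deliverable: str    # what is handed to the downstream process

    pizza_parlour = [
        Process("Take order", "Receptionist", "name, address, pizza type and quantity", "valid order"),
        Process("Make pizza", "Kitchen Manager", "valid order", "pizzas"),
        Process("Deliver pizza", "Delivery Manager", "pizzas", "pizzas delivered"),
    ]

    # 'That's not my problem': each owner is rewarded on his own deliverable only,
    # so an upstream error (four pizzas instead of five) never shows up in the
    # kitchen manager's performance measures.
    for p in pizza_parlour:
        print(f"{p.owner} is measured on '{p.deliverable}' and nothing else")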
3
The Flaws of the Traditional Model
The truth will set you free – but before it does, it will make you miserable. (De Marco’s dictum)
In this chapter we will show why the traditional IT business model, based on the construction industry’s client–vendor relationship, and a poor to non-existent pricing mechanism, is at best problematic, and at worst unworkable. This will lay the groundwork for proposing a new model from Chapter 4 onwards. The pre-requisite for this chapter is a firm understanding of the basics of how an IT department operates (covered in the previous chapter, ‘IT 101 – The Basics for Non-Specialists’).
The unintended consequences of the waterfall method
Under the traditional model, the IT department develops systems for the business through a standard client–vendor relationship based on a contractually signed-off requirements specifications document. This then drives a sequential waterfall method, which is a strict linear approach from analysis, design, development and testing through to implementation, with each phase performed by different teams of specialists (see Figure 2.1 in the previous chapter). Just as for our pizza parlour processes (see Figure 2.3 in the previous chapter), the 'it's not my problem' problem applies. This means that as far as the analyst is concerned, the user knows what he wants; as far as the developer is concerned, the analyst drafted the correct specifications; and so on all the way down the line. In other words, it is possible to produce something to spec which doesn't correspond to what the customer really wanted. Paradoxically, it is also possible to produce something to spec which actually does correspond to what the customer asked for, but which did not yield the desired business results, e.g. because of an unrealistic business case. In either case, IT would defend its own narrow position by saying 'it's not my problem' since that's what the customer asked for. It would not do so out of any disregard for the
customer’s real needs, but simply because it is working to the rules of a particular business model. Just how perverse such rules can be is strikingly illustrated by the following funny – and simultaneously sad – anecdote. At an industry event in Geneva, the speaker was talking about the usual statistics of IT project failure and then brought up the logical question of what actually constitutes failure. She quoted from a survey the following answer provided by a respondent: ‘The project was a success – but the users wouldn’t use the system!’.
In search of a pizza parlour manager
The only way round this 'it's not my problem' problem is to have an overall process owner responsible for ensuring that the sum of the processes adds up to produce a desirable outcome. In our pizza parlour example, this would of course be the store manager, whose job it is to produce the desirable outcome of increased sales. In the traditional IT model of Figure 2.1, with its discrete sequential processes, each performed by different teams of specialists working on input from upstream, we are essentially faced with the equivalent of a pizza parlour with no manager to mind the store – in other words, to coordinate the various processes so as to produce the desirable outcome of delivering business benefits. The reason why no one is minding the store to ensure an overall desirable outcome from a BU or company perspective is that the definition of a desirable outcome rarely exists in practice. It exists in theory, since all IT investments are supposed to generate business value, ROI or process improvement, but because there usually isn't anyone responsible for obtaining and measuring it, it ultimately doesn't exist. And the reason no one is responsible for it is that it is assumed to be enshrined in the business case in the form of the ROI which was used to launch the project. Since the business case is taken at face value and all parties have committed to meeting it, it becomes implicitly cast in stone, hence there is no reason to ensure it is actually being achieved. If a pizza parlour were run on this basis, then you wouldn't need a pizza parlour manager, since the desirable outcome of increased sales would be enshrined in the business plan instead of in the day-to-day reality of running the business.
Who provides process expertise – client or vendor?
Depending on the complexity of a product or service, a certain level of specialist expertise might be required in order to successfully complete the underlying processes. Such expertise can take one of three forms (see Figure 3.1):
Figure 3.1 Specialist expertise required as a function of product complexity. The figure plots the processes of Figure 2.2 (define objectives and concept through to do ongoing cost-benefit analysis) against examples of increasing product complexity and decreasing customer expertise: buying a mobile phone, buying a car, buying insurance, buying a house (housing estate), building a house (with an architect), and IT (buying or building software). For each combination, the legend indicates whether the customer provides most or all expertise, the vendor provides most or all expertise, or both customer and vendor provide joint expertise.
• Customer expertise: The expertise is provided mainly by the customer. If I buy a car, for example, I'm usually capable of deciding what to buy without any specialist assistance outside of car magazines or web sites.
• Vendor expertise: The expertise is provided mainly by the vendor. For example, if I'm an ordinary computer user with little knowledge of PCs, I would probably rely on the vendor to suggest a configuration for me based on my usage requirements.
• Joint expertise: The expertise is provided jointly by both. Neither the vendor nor the customer alone has the combined expertise necessary to successfully complete the process; instead, each provides inter-related expertise, the sum of which will define the final process deliverable. For example, if I want to have a house custom-built, then my own requirements and knowledge of the neighbourhood, combined with the architect's expertise (and a lot of to and fro between the two), will result in a final design that might not necessarily have been evident at the beginning.
Each of these three forms of specialist expertise will have a direct impact on client–vendor relations for a process, as we shall now see.
When standard client–vendor relationships are possible
In general, a standard contractual client–vendor relationship for a process becomes feasible when you, as a customer, can toss a specifications document to a vendor and basically say 'please do this for me', get a quote and a delivery date, and then walk away (management and co-ordination notwithstanding). This would be possible under the following conditions:
• It is relatively easy for you to visualize the outcome because it usually forms part of everyday life.
• The level of ambiguity in the 'specifications' (in the broad sense of the term, i.e. for building a product or for running a service) is low. For example, a specification for a standard height bedroom of 20 sq m, with a minimum length or width of 4 m and south-facing windows, is unambiguous; one for a 'light and airy bedroom capable of holding a double bed and a working space for a desk and PC' is not.
• The vendor can readily translate the specifications into costs and schedules (e.g. so much wood, or so many widgets at a given price, which will take so long to build and install).
• Your reasons for acquiring the product or service (and hence the underlying specifications) are fairly stable and there is little chance that you will change your mind once the vendor starts working.
• The level of joint expertise (see previous section) is low.
Typical examples are to be found in the construction industry, where easily visualizable results (houses, buildings...) and correspondingly unambiguous specifications (weights, dimensions and forces) which cover standard mechanical components (beams, widgets, tiles...) are applied to the hard sciences (physics, engineering and mathematics) to produce relatively predictable results. Qualifier: OK, so we all know that houses and buildings are hardly ever delivered on time, but this is due to a combination of low bidding, labour issues, logistics and delivery issues, environmental constraints, the weather, soil composition discoveries made while excavating, etc., etc., and very little to do with the ability of the contractor to actually build the house to spec. After all, no one ever took delivery of a house and said 'This is not what I asked for! Where's that room I thought would be over there?!'. The materials and components will mostly be standard items available on the market. The specifications themselves are unlikely to change significantly once work starts. Finally, the level of joint expertise is low or non-existent – what materials, metals or fabrics are
ultimately chosen by the builder are of no concern to the customer, except perhaps where they would influence aesthetics, costs or maintenance. In general, a traditional client–vendor relationship can be applied to processes in which the product is not very complex and the client has enough expertise in a particular area. In Figure 3.1, examples of such processes would be:
• Buy/build applied to buying a new house on a housing estate based on standard, catalogue models (the vendor is the property developer).
• Service/support applied to buying a car (the vendor is the garage that services your car).
Finally, implicit in a standard client–vendor relationship is the 'It's not my problem' problem, since the vendor will assume that he has the right information. If not, too bad – that's not his problem.
When standard client–vendor relationships pose problems
In general, a standard contractual client–vendor relationship for a process begins to pose problems when a vendor has difficulty understanding just what you, the customer, really want, which makes it difficult to get a quote and a delivery date. This would occur when one or more of the following conditions are met:
• It is not easy for you to visualize the outcome because it does not necessarily form part of your everyday life.
• The level of ambiguity in the specifications is sufficiently high for them to be open to interpretation.
• The vendor finds it difficult to translate your specifications into costs and schedules, which means that he cannot produce an accurate quote and delivery date. Indeed, without further investment of time and resources on both his part and yours, he might not even be sure of what technology or materials he will be using.
• Your reasons for acquiring the product or service, and hence the underlying specifications, are essentially moving targets. It is therefore likely that you will change your mind once the vendor starts working.
• The level of joint expertise is sufficiently high for neither of you to be able to do the job without ongoing input from the other.
For these reasons, the vendor will require ongoing clarifications concerning ambiguous and changing requirements. The joint expertise provided by each of you will introduce additional requirements and constraints, which will further influence the shape of the final product. In some cases formal specifications, as such, might not even be possible, and the vendor would first have to produce a mock-up or prototype for you to get a better idea of what you will finally commit to. In general, a traditional client–vendor relationship poses problems when the product starts to become complex and the client doesn’t have enough expertise in a particular area. In Figure 3.1, an example of such a process is ‘Define requirements’ applied to having a house custom-built with an architect. The initial requirements of your average client’s dream house will be more ‘vision’ (lots of living and working space, big kitchen, lots of natural light, etc.) than specifications (i.e. dimensions), and will therefore require joint input and multiple back and forths between client and vendor before arriving at a mutually agreed final design, quote and delivery date.
Is a standard client–vendor relationship possible for IT?
IT systems represent a complex mix of business functionality and technical constraints. This is as true for in-house systems as for off-the-shelf packages (see bottom of Figure 3.1). Neither party – IT or the business – has the combined expertise necessary to successfully complete each process alone. This is as valid upstream in the design and build phases as it is downstream for implementation and support. IT in the enterprise falls squarely into the category of activities for which a standard client–vendor relationship is not possible. Note that this doesn't mean that a client–vendor relationship is not possible at all for IT (or indeed for the architect designing your house), only that it is a non-standard one, with a different approach, roles and responsibilities. These differences will be described in the following chapters.
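The conditions discussed in the last three sections can be summed up as a simple checklist. The sketch below is only an illustration of the argument, not a formal test from this book, and treating all five conditions as equally important is an assumption:

    # Illustrative checklist: when does a standard client-vendor relationship work?
    def standard_relationship_feasible(outcome_easy_to_visualize,
                                       specs_unambiguous,
                                       specs_translate_to_cost_and_schedule,
                                       requirements_stable,
                                       joint_expertise_low):
        """True only if every condition from the text holds."""
        return all([outcome_easy_to_visualize,
                    specs_unambiguous,
                    specs_translate_to_cost_and_schedule,
                    requirements_stable,
                    joint_expertise_low])

    # A catalogue house on a housing estate: the standard relationship works.
    print(standard_relationship_feasible(True, True, True, True, True))       # True

    # Enterprise IT: ambiguous specs, moving requirements, high joint expertise.
    print(standard_relationship_feasible(False, False, False, False, False))  # False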
The 'Statement of Requirements' (SoR) trap
An integral part of the standard client–vendor relationship that IT uses, based on the construction industry, is a contractually signed-off requirements specifications document. This phase starts with business analysts (also known as systems analysts) sitting down with business users in an attempt to understand their requirements, ultimately producing a thick document that nobody, even with the best of intentions, can really fully understand. Once the so-called 'statement of requirements' (SoR) has been duly signed off by the business, IT will then try to build (or in conjunction with the business, to buy) a system to meet those requirements. A typical scenario for a non-trivial project would usually play out as follows:
• It takes at least three months for business analysts to produce an SoR of a hundred pages or more.
• It then takes another three months before it becomes politically acceptable to admit that not many people understand it – not the users, and sometimes not even those in IT who have to use it to buy or build a software solution to meet these 'requirements'.
• Finally a system is bought or built to correspond to these requirements – with the results you can by now guess – or you have to start all over again.
So though the final deliverable theoretically corresponds to documented requirements, it stands little chance of corresponding to actual requirements – at best it represents a starting point for subsequent rework; at worst it is unusable. Now it could be argued that this approach is a genuine attempt by IT to get the business to commit to real, as opposed to perceived, needs. However, it also happens to represent a very convenient contractual safeguard for IT (cover your ass, or CYA for those in the know), because once the business has signed off on an SoR, IT is covered. So if the end result does not reflect real requirements, then IT can use the signed-off SoR like a 'get out of jail free' card in Monopoly, and not be penalized ('We built what they asked for...'). The main problem with this approach is that the business is constantly moving. Enormous amounts of energy are expended in defining in detail precisely what is required at the time the requirements are gathered – or, if you are lucky, some time between when they were gathered and when the documentation was actually signed (which does not always happen). Then the business moves on and real life intervenes to ensure that at least some of the requirements shift. But the changes don't always find their way back to IT, which becomes obsessed with interpreting the documented requirements like a judge trying to find the 'will of parliament' in the words of legislation, without any concern for what the business needs today or will need by the time the project actually delivers something. Of course, if the business has any sense, they make vague mutterings about the completeness of the SoR without actually signing anything ('Sorry, I didn't have time to actually read it all in detail, but it seems fine, so you IT chaps just get on with building it and I'll get back to you as soon as I can find some time to read it...'). Applying the traditional SoR approach to IT denies some essential realities. Requirements for many business systems are usually moving targets. Fitting systems to businesses is therefore like fitting shoes to children: you can guarantee that the child will
have changed long before the shoes wear out. So if you take 6 to 12 months to build a pair of shoes, you had better get used to barefooted children. Specifying one’s requirements is not something that comes naturally to users. Just ask any five people to write down the ‘requirements’ for setting the table: you’ll get five different answers, and each person will have left something out. So why do we expect users to be able to correctly ‘specify’ requirements for business subjects hundreds of times more complex? As we’ll see in Chapter 5, specifying requirements is an iterative process, which requires intermediate results. For the example of setting a table, by simply looking at a partially set table (a ‘prototype’), it will become much more obvious if something is missing (e.g. salt and pepper, water, serviettes...). Committees of users who are supposed to define requirements are like marketing focus groups – they usually follow a reductive approach. They can only tell you whether they like or don’t like what you present them with. Have you heard of the focus group that invented the Walkman? Neither have I. It didn’t exist, because before some bright spark in Sony invented it, the general public didn’t know it could be done or how it would change their lives. So how can you expect a group of users to define requirements? People need to be given a chance to know what a system is capable of doing and how they will cope with it before they can define what their real requirements are. Finally, requirements for systems in the twenty-first century will increasingly represent new concepts and enhanced business processes for competitive differentiation, rather than optimizing existing processes for cost savings. This means that you’ve now got an even bigger problem because it will be difficult to visualize the final outcome. For example, most of the Internet-based services we use today weren’t born as the result of some grand design, but rather grew out of experimentation, trial and error. So how can you specify what it is supposed to look like or what it is supposed to do in exhaustive detail? The results of decades of trying to avoid the SoR trap have been vividly captured in the caricatured but alas true-to-form cartoon in Figure 3.2, which is probably as old as the IT profession itself, and that many readers will have no doubt seen on an office wall somewhere.
A poor to non-existent pricing model
Whoever coined the phrase 'there's no such thing as a free lunch' obviously never worked in or around IT. Because IT is usually centrally funded as a corporate cost centre, BUs launch projects based on expected business benefits which will accrue to them financially, yet most, if not all, of the corresponding costs will come out of the IT budget.
Figure 3.2 A well-known view of IT (Adapted from ‘How Projects Really Work’ located on www.projectcartoon.com, 2006 © ProjectCartoon.com. All Rights Reserved.)
Now the financial powers-that-be might be able to advance many reasons why this is so, ranging from the size of the total IT budget, which requires centralization for purchasing efficiencies, to the difficulties of defining who gets what. Whatever the reasons, though, they are ultimately irrelevant, because as we saw in Chapter 2, unless both the costs and the benefits apply to the client from an accounting perspective, there is really no incentive to carry out a cost–benefit analysis 'with teeth', i.e. one which can be used not just to approve a project, but also to withhold or cancel its funding if it is not living up to expectations. Also, as we saw in Chapter 1, even when some sort of chargeback mechanism exists (usually in the form of annual cost allocations), it rarely has the desired effect of regulating demand – and in the extreme actually has the perverse effect of generating demand. Chargebacks, or internal invoicing based on actual consumption of human and technical resources, are supposed to be fairer and more objective, but are not very common because they are difficult to implement and expensive to run. The extreme form of chargeback, which is even less common, would be to run IT as a profit centre to which BUs turn, in much the same way as they would turn to an ESP (external service provider). This is discussed in the next section. So, chargebacks and profit centres notwithstanding, IT as a product or service is essentially 'free', and when something desirable is free, then by definition you will end up asking for more than what can be physically and economically delivered. The downsides of this situation could be tempered somewhat if there were a rational project selection and prioritization process which formed part of an overall governance strategy (governance can be defined as a framework for objective decision-making and for monitoring the outcomes of those decisions). Alas, as we'll see in the next chapter, projects are usually approved based on political considerations and the organizational influence of the business sponsors. Appropriate governance can reduce, but not eliminate, this reality.
The net result is a vicious circle in which IT is buried under an avalanche of requests, projects and backlogs it can never fully meet – thereby ensuring that business users will always be fundamentally dissatisfied with IT.
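As an aside, the internal invoicing mentioned above – charging BUs for their actual consumption of human and technical resources – can be pictured in its very simplest form as follows. The rates and usage figures are invented, and real chargeback schemes also have to allocate shared infrastructure, licences and depreciation, which is a large part of why they are expensive to run:

    # Minimal chargeback sketch: invoice a business unit for actual consumption.
    RATES = {"developer_day": 600, "server_day": 15, "storage_gb_month": 0.50}  # hypothetical rates

    def monthly_chargeback(usage):
        """Sum of quantity x rate for each resource actually consumed."""
        return sum(RATES[item] * quantity for item, quantity in usage.items())

    marketing_usage = {"developer_day": 40, "server_day": 90, "storage_gb_month": 500}
    print(monthly_chargeback(marketing_usage))  # 24000 + 1350 + 250 = 25600.0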
Should IT be run like a business (i.e. an ESP)?
The less than acceptable performance of IT has led to the proposal in some circles for IT to be run 'like a business'. After all, ESPs, unlike IT, have the basics like costs, schedules, utilization rates and full project costings under control, otherwise they wouldn't be in business. This reasoning seems sound on the surface, but in practice things are not that simple. While any IT department would certainly stand to gain by emulating certain practices of ESPs, there are some fundamental differences between internal and external service providers which limit just how far the two can converge. To start with, an ESP, by definition, adheres to the traditional business model and its contractual client–vendor relationship, with all its inherent faults when applied to building software. This book is all about moving away from this type of relationship. Secondly, and probably most importantly, an ESP has a different agenda from that of its clients, namely to maximize revenue and profit by getting clients to use as much of its services as possible (nothing wrong with this – after all, it's a business). Running IT like an ESP would therefore mean defining financial success criteria based on revenues, costs and margins (implicitly with some sort of comparison or benchmarking with respect to real ESPs, which would represent the 'competition'), and introducing internal invoicing based on the actual consumption of human and technical resources (calculating actual infrastructure usage can be complex and very expensive to run). Just like the ESP whose objective is to make as much money out of the client as possible, the ESP-like IT department would be incentivized to improve its own financial performance (what gets rewarded gets done). One could even go as far as saying that it would have an additional, unstated objective, which would be to justify its existence with respect to the external market – thereby reinforcing the incentive to improve its financial performance. As for its presence on 'investment committees' or 'project review boards', this would be a pure conflict of interest. Unlike an internal service provider who could objectively serve corporate interests by voting against a poor project request in the pipeline, the ESP-like IT department might base its decision on how well the project would serve – or not serve – its own interests, e.g. by favouring low-risk and/or high-margin projects, regardless of their usefulness from a business perspective. If you accept the premise that the traditional client–vendor relationship is not the best way to run an IT department, then the logical conclusion is that you cannot run IT like a
business, since that would require a traditional client–vendor relationship. At the end of the day, the role of the IT department is to deliver business benefits in one form or another, and not to make a profit or compete with ESPs. For more views on the pros, cons and challenges of running IT like a business, please refer to the following (see 'Further reading' at the end of Chapter 9):
• Mark Hall's article entitled 'Business is Business'.
• Iain Aitken's book 'Value-driven IT Management' under the index entry 'transfer charging'.
The limits of outsourcing
Because IT does not lend itself well to a standard client–vendor relationship, by extension the same should be true of outsourcing, which after all is a standard client–vendor relationship. Outsourcing can only really be successfully applied to those processes which meet our two conditions of low levels of specifications ambiguity and low levels of joint expertise. The only IT process that meets these conditions is Service/support. Though usually characterized by a high level of joint expertise, it is sufficiently sequential and compartmentalized to be able to build robust processes with handover points between the two parties. And even then, this would only apply to applications which have reached a certain stage of stability and maturity, usually not before two or three years of operation. Not surprisingly, Service/support of so-called legacy applications represents the dominant market for outsourcing in IT. Any attempts to outsource one or more of the other processes would result in the same disadvantages explained in this chapter, namely a final product which stands little chance of corresponding to real requirements, and long project cycle times. This is especially true for Buy/build (though under the right conditions underlying processes like testing and QA could be outsourced). Furthermore, the situation would be exacerbated because it would be based on a legal contractual relationship between client and vendor (with little room for flexibility), as opposed to a loosely contractual relationship between a department and an internal service provider (with room for flexibility). You probably already experience these disadvantages today when IT resource and capacity constraints lead you to contract with software vendors, consulting companies or integrators to deliver new systems. In this form of project-based outsourcing, you will usually find that you don't have the flexibility you would have had with an internal IT department. Note that there is one area in which outsourcing the Buy/build process works relatively well, and that is product development, e.g. developing software packages offshore, or developing software for mobile phones and other consumer electronics. The more 'predictable' and 'commodity' nature of these systems means that the product can be fairly
well spec'd and entrusted to a vendor, often sitting in another country. Though on the surface this might seem to be part of IT, in reality it is about product development (i.e. developing one product for thousands or millions of clients), and has nothing to do with enterprise IT, which is about building diverse operational systems (sales, marketing, order management...) for just one client. There is a saying in IT which goes 'Good, fast, cheap – choose any two!'. The main driver for the huge outsourcing market which took off during the recession of 2001–2003 was clearly 'cheap', which is fine when applied to Service/support, which outsourcers can usually do much more cheaply due to economies of scale and specialization in technologies and skills. Indeed, over 80% of all outsourcing contracts concern running and supporting production applications (one outsourcing specialist at an industry event in Geneva put the figure at over 90%). However, with the recession now officially behind us since 2005, 'good' and 'fast' are going to make a comeback, because in a rebounding economy this is the key to competitive differentiation – after all, no company ever saved its way to prosperity. But 'good' and 'fast' cover the upstream processes of designing and building systems, rather than the downstream processes of servicing and supporting them, and we saw that these do not lend themselves well to outsourcing. So either the outsourcing of the Buy/build process will have to adapt, changing the nature of the client–vendor relationship in a way that speeds up project cycle times and delivers a product more closely mapped to real requirements – or the outsourcing of IT development will remain the exception, or indeed might even be insourced again. After all, it should come as no surprise to learn that people sitting in the same building and sharing the same business knowledge, language and culture are much more likely to deliver systems better and faster than an outsourcer sitting in India, China or the Czech Republic. They might even deliver them cheaper if you consider that shorter cycle times mean quicker time-to-benefits, which can sometimes outweigh cheaper labour costs. In conclusion, though outsourcing is here to stay – the nature of the modern world in terms of costs and skills differentiation between countries makes this inevitable – it is essential to differentiate between those IT processes which can be outsourced and those which can't. When all is said and done, we should always remember that outsourcing is based on a traditional client–vendor relationship. So once it is established that it is this type of relationship which is part of the problem, then outsourcing the wrong processes will only make things worse.
Current IT organizational trends
At a session at Gartner ITExpo in Cape Town in August 2006, the speaker, John Mahoney, was talking about IT organizational trends. What Gartner had already observed over the years was the evolution of the role of the IT department from an
essentially technical focus (‘technology-aligned’) to a more business focus (‘business aligned’). This in itself was nothing new, but Gartner went on to predict further stages of evolution up the scale, to ‘business engaged’, ‘business leadership’ and finally ‘embedded’, i.e. no longer a stand-alone service provider but an integral part of the business. If Gartner’s prediction turns out to be correct, then it should logically be accompanied by a corresponding change in the relations between IT and the rest of the business, with the traditional client–vendor relationship gradually making way for a model based on a shared risk/reward partnership. An IT department cannot purport to be ‘business engaged’ or provide ‘business leadership’ by maintaining a traditional vendor status with its contractual safeguards and sign-offs. This conclusion seems to be in line with sentiments echoed by the deputy CIO of Air France (Air France-KLM is the world’s largest airline), Jean-Christophe Lalanne. When talking about the importance of an organizational role to manage the relationship between IT and the business (what I call later in this book Client Manager or Account Manager or Business Relationship Manager), he says that ‘But for it to succeed, it needs to be viewed as a real partnership, and no longer as a client–vendor relationship’ (quoted in 01 Informatique of 26th Jan 2007, p10). Similarly, in their book ‘The Technology Garden – Cultivating Sustainable IT-Business Alignment’ (see Further Reading at the end of Chapter 9), authors Jon Collins, Neil Macehiter, Dale Vile and Neil Ward-Dutton state that ‘When an IT organization is perceived as a supplier, or worse as a cost-centre, the relationship between business and IT is fundamentally unbalanced’. They go on to quote a senior interviewee who illustrates this point: ‘Lots of companies go wrong by implementing the IT–business relationship as a supplier–customer relationship. In these situations the business doesn’t want or expect to be challenged by IT – they just want it to make stuff happen; but at the same time they complain that it doesn’t add value. For IT to deliver real business value, the relationship between IT and business has to be more one of strategic partnership.’ (Collins et al., 2007)
The ultimate litmus test to determine one's business model
Most of the issues raised in this chapter are nothing new and you might already be successfully addressing some of them in your organization. You might therefore counter by saying that the flaws of the traditional model as explained here don't really apply to your organization. Well, in order to address this objection, here is a simple litmus test to determine whether an IT department works to the traditional model or not:
Would you consider the terms ‘on schedule, within budget and to spec’ (in whatever combination you want) as being the main success criteria for project delivery by an IT department?
There are only two answers allowed: 'yes' or 'no'. If your answer is 'yes', then your dominant view of how an IT department is supposed to operate is based on the traditional business model. Regardless of whether you consider yourself in a partnership or not (the term is often very loosely used), when all is said and done you are ultimately a vendor in a client–vendor relationship, because these are the criteria you would essentially use to measure the performance of your organization. If, however, your answer is 'no', then whatever model your dominant view of IT is based on, it is definitely not the traditional business model and its underlying client–vendor relationship. You would consequently measure the performance of your IT organization by criteria other than the traditional trio of budget, schedule and spec. This is not a trick question, nor is it unfair. After all, if we were to replace the words 'IT Department' by 'ESP' and ask the same question, then the chances are your answer would most definitely be 'yes' (and certainly the ESP would not be in violent disagreement with you...). If you answered 'no' for an IT Department but 'yes' for an ESP, then you are essentially saying that ESPs and IT Departments cannot be held to the same success criteria for the same work. So if the business contracts out a project to an ESP because the IT department does not have the resources or the skills, then it can expect to have different success criteria – thereby supporting the earlier argument about the limits of outsourcing. At the end of the day, the traditional success criteria of 'on schedule, within budget and to spec' imply that you are working to the traditional business model and its underlying client–vendor relationship, regardless of any adaptations you may have made to cater to one or more of the shortcomings of the model.
What model would be appropriate for IT?
So if the current business model – supported by the construction industry's standard client–vendor relationship and by the free lunch pricing mechanism – is not applicable to IT, then what model would work? Let us now try to answer this question in the subsequent chapters by building a business model step-by-step, from demand through to supply and the monitoring of costs and benefits.
Part II Building a New Business Model for IT
4
Managing Demand
There’s no such thing as a free lunch. (Milton Friedman, Nobel prize-winning economist, 1912–2006).
Managing demand – traditional model
In order to manage planning, production and delivery, any properly run business has to be able to balance orders for its products and services (i.e. demand) with its ability to produce them in terms of resource and scheduling constraints (i.e. supply). Otherwise it might produce too little of what is required, too much of what is not required, deliver late, or have problems with product quality or customer satisfaction. Even non-profit organizations, which also have a 'market' to serve and a finite number of employees who don't work for free, are subject to the same constraints. The average IT department, though not a business from a P&L perspective (the exceptional IT profit centre notwithstanding), has a resource base comprising highly paid specialists, produces highly complex products and services, and has an annual budget of anywhere from 2–10% of annual revenue. Yet it does a very poor job of managing – when managing at all – basic supply and demand. It generally has very little understanding of its demand and supply chains, and would have a hard time answering fundamental questions like 'What is currently in the pipe?', 'What do we have to deliver over the next six months?' or 'What is our projected resource utilization for the next quarter?' It can also end up delivering products which don't correspond to what the customer really wants – or, paradoxically, products which do correspond to what the customer wants, but which did not yield the desired results, even though built close enough to spec. In short, when it comes to supply and demand, IT is unduly focused on the supply side of the equation – in other words the how (e.g. project management, software development and managing physical assets like hardware and networks) – to the detriment of the demand side, or the what (i.e. capturing and prioritizing demand, assigning resources based on business objectives and doing projects that deliver business benefits).
At the risk of exaggerating the point, it's almost as if, once IT has a green light to deliver a project, it couldn't care less about whether the project makes sense or will deliver business benefits – its only objective from here on will be to deliver it to spec, on time and within budget, and to manage the underlying physical assets. Put another way, IT is only concerned with building the system right, not with building the right system. The criteria for success are defined as the delivery of solutions on time, within budget and to spec – like a building contractor – instead of the delivery of solutions which deliver business benefits. However regrettable this tunnel vision may be, it is totally understandable, because that's how the traditional IT business model and its client–vendor relationship work. IT focuses on managing supply, because that's its mandate; managing demand is unclear, and in the absence of proper governance, it defaults to a business problem on the customer's side. Projects are therefore usually approved based more on business sponsor influence – or, putting it less charitably, decibel management, i.e. catering to those executives who shout the loudest – than on any rational decision-making process. This might appear to be a rather harsh indictment of the demand chain in the average IT department today, but unfortunately it corresponds to reality. Far from being the rational and structured process we would like it to be, demand management is usually a nebulous combination of decibel management and organizational politics – significantly amplified in international organizations when country politics and cultural differences are thrown into the mix. Once projects have been delivered, the absence of rational demand management becomes even more acute. While you can usually count on business executives to obtain the funding to launch projects, the same is rarely true for sustaining the resulting applications after delivery. This is usually because the executive sponsor has either moved on (often as a result of the project's success – or failure) or is far less motivated to go and bat for operational funding, which doesn't have the same visibility and organizational rewards as launching a new project – especially when, as is usually the case, the magnitude of the ongoing funding was not part of the original business case. Some real-world examples illustrate this:
• The IT department at a major telco was called ITD, for 'IT Delivery', thereby clearly implying that its mandate was to deliver whatever its customers asked for, under a pure contractual client–vendor model. During a subsequent reorganization, aimed in part at shedding its role as a mere internal contractor and getting closer to the rest of the business, the word Delivery was dropped from the name, which became just plain IT.
• During the kick-off meeting of a project to improve IT decision-making processes at a global pharmaceutical company, one of the corporate representatives stated without
batting an eyelid (he was sitting opposite me...) that in this company 'there are projects under way that should never have been launched at all...and we only hear about them when it's too late'.
• Soon after the IT department of an insurance company introduced time entry as part of a governance project, the project review board suddenly 'discovered' the existence of 10 projects that IT staff had been working on which it was not even aware of. In an unusual display of firmness, and as part of the new investment management process, it shut them all down.
• One of the most common sports at the subsidiaries of multinational companies is to ensure that local demand is funded at just below the threshold that requires corporate approval (which sometimes requires the creative splitting of larger projects into smaller, seemingly unrelated ones...). And when a country's IT plan is at odds with the corporate IT plan from HQ, organizational politics will usually result in the local CEO bypassing IT altogether and going directly to the global VP on the business side to explain why the latest corporate IT initiative is not really applicable to his country. The larger the subsidiary in terms of contribution to global sales, the better the chances of obtaining such special exemptions.
• A marketing director at a pharmaceutical company had little problem obtaining significant funding for a sales force automation (SFA) project. A month after the implementation, however, he moved on as part of a company reorganization. In the absence of a business sponsor, the maintenance budget for the following year was next to nothing, which seriously impacted usage because significant further enhancements remained to be done – which, needless to say, was not part of the original business case.
So demand management is clearly the missing link in most IT departments. Yet any successful business model, by definition, has to be built on the effective management of demand as well as supply. Without a reasonably accurate demand pipeline, IT will inevitably end up in fire-fighting mode in terms of resource planning, and will have little chance of being able to start regulating demand (you can't regulate what you can't capture).
Managing demand – new model
Using the fundamental premise that not all demand from the business will be approved, because of business priorities on the one hand, and IT resource and scheduling constraints on the other, the best way of representing demand would be via a funnel, as shown in Figure 4.1. Demand from the business enters at the top, follows one or more decision-making processes, and then either exits at the bottom as approved work to be executed, or remains in the pipeline pending further evaluation.
Figure 4.1 Managing demand (pipeline approach): demand enters the funnel as an idea (capture demand and identify opportunities), moves down to a project request (build business case, seek executive approval), then to a project (perform detailed budgeting and planning), and finally exits as approved demand.
The first stage (or gate, or stage gate) of the funnel represents 'ideas' or opportunities, which comprise only high-level information and estimates for timescales, costs and benefits. Ideas generally represent work intake that IT has not yet had the chance to evaluate in detail, but which needs to be acknowledged as having entered the pipeline. As part of a filtering and screening process, these ideas then move down to the next stage, called 'project requests', during which they are further qualified with quantifiable cost and benefit information, to a level of detail which makes high-level planning and a cost–benefit analysis possible. It is during this stage that the business case is built for executive sponsorship and approval. Finally, once the business case has been approved, the project request moves down to the stage where it becomes a project – though strictly speaking these would not just be new projects, but also work related to production applications, e.g. upgrades. At this stage detailed planning, budgeting and resource allocation (where possible) take place. Note that though the project is approved (as in 'this is a good idea and we should be doing it'), it will only exit the funnel for execution if funding is available. This approach is analogous to a sales funnel or sales pipeline in SFA (Sales Force Automation) and CRM (Customer Relationship Management), in which leads enter at the top, then pass through various 'funnel stages' before finally falling out as a signed deal. This stage gate approach can also be represented horizontally (Figure 4.2), and maps to most project methodologies – e.g. in PRINCE2, project ideas move to project mandates, then to project briefs and finally to a project initiation document.
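The funnel of Figures 4.1 and 4.2 can be pictured as a simple state machine in which a piece of demand moves through successive stage gates. The sketch below is illustrative only: the stage names come from the text, while the class and method names are invented for the example.

    # Illustrative stage-gate sketch for the demand funnel of Figure 4.1.
    STAGES = ["idea", "project request", "project", "approved"]

    class Demand:
        def __init__(self, title):
            self.title = title
            self.stage = "idea"           # all demand enters the funnel as an idea

        def pass_gate(self):
            """Move one stage down the funnel after the corresponding review."""
            next_index = STAGES.index(self.stage) + 1
            if next_index < len(STAGES):
                self.stage = STAGES[next_index]
            return self.stage

    request = Demand("Product configurator")
    request.pass_gate()   # idea -> project request (business case built)
    request.pass_gate()   # project request -> project (detailed planning and budgeting)
    request.pass_gate()   # project -> approved (funding available, work can start)
    print(request.stage)  # approved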
Figure 4.2 Managing demand (sequential approach): an idea (capture demand and identify opportunities) becomes a project request (build business case, seek executive approval) and then a project (perform detailed budgeting and planning).
Whether demand is portrayed horizontally as sequential steps, or vertically as a pipeline, the overall idea is the same, namely a structured approvals process characterized by milestones or stage gates. The funnel approach probably has the advantage of reflecting the reality that not all demand is ultimately approved. Let us now look in detail at how demand flows through the pipeline from idea through to project.
Capturing demand and identifying opportunities

Demand for IT products and services originates from customers in the business in the form of ideas or opportunities, with high-level information on timing, costs and benefits (note that we’ll use the logical word customers for now, regardless of whether this implies a client–vendor relationship or not). At the one extreme – and unfortunately quite common – IT is simply an internal service provider operating under the traditional client–vendor model and focused on satisfying user requirements. In essence it is a passive order taker, disconnected from the business and not involved in understanding what lies behind customer demand. At the other extreme – and unfortunately not very common – IT is a strategic differentiator and is part of a joint IT/business group responsible for process improvement and business innovation. Here we would have account managers responsible for understanding customer demand and full IT participation in the decision-making and approvals process. This will be covered in more detail in Chapter 8 when discussing roles and responsibilities under the new business model, but at this stage let us focus on capturing and managing demand, regardless of how it occurs.
There are two categories of demand – planned and unplanned:
• Planned demand arises as part of the annual planning process, which results in the IT Plan (which is what IT is supposed to deliver) and the corresponding budget for the next financial year. This would be the case for large, departmental or enterprise-wide projects which cannot be funded on-the-fly during the current budget cycle, but also for keeping the lights on, which by definition lends itself to annual planning (the term ‘keeping the lights on’ implies that even if a company were unable to launch new projects because of budgetary constraints, it would still have to fund these fixed IT costs in order to run its day-to-day operations if it wanted to stay in business).
• Unplanned demand corresponds to the huge amount of unpredictable work that IT does which is not contained in well-defined project structures. This includes things like change requests, feature requests and bug fixes which arise from changing business and regulatory environments, changes in strategy, company reorganizations, mergers and acquisitions, insufficiently tested systems, etc. Some of these requests will become input for the next planning cycle, but most will have to be done within the current budget cycle.
For those who believe in the myth of the sacrosanct IT Annual Plan, remember it’s just that – a myth. In the real world demand is coming in every single day, so the challenge is to capture that demand, both planned and unplanned, as early as possible, expose the high-level business justification and set up an ongoing dialogue between IT and its customers. This enables the management of a demand pipeline. An opportunity analysis on this pipeline, based on an appropriate scoring model, will give rise to an initial screening and validation process which will enable an idea to move down to the next stage and become a project request. Typical examples of filtering criteria for a scoring model are expected revenue, expected cost reduction, regulatory compliance or operational improvements.
Finally, it is important that captured demand be appropriately categorized so that each type of idea can be handled as quickly as possible. For example, an idea for a strategic $5m data warehouse system which would take a year to develop might take three months to be approved, whereas a small $10 000 change request which would take only a month to implement could be approved in a week. Demand should therefore be suitably categorized so that the resulting decision-making process corresponds to the nature of the business problem – and plain common sense. This is shown in Figure 4.3, which illustrates demand management based on different idea types, which not only follow different decision-making paths, but can also be fast-tracked directly to the approved state, thus bypassing the sequential idea–project request–project path.
Figure 4.3 Different decision-making processes as a function of idea type: change requests, project ideas, operational requests and other idea types enter the funnel (IDEA – capture demand and identify opportunities; PROJECT REQUEST – build business case, seek executive approval; PROJECT – perform detailed budgeting and planning), with some types fast-tracked directly to approved demand
At one IT department which was trying to streamline its demand management process, it transpired that even the smallest and most insignificant change request had to go through the same, top-heavy approvals process. This required full documentation and business justification, with dotting of ‘i’s and crossing of ‘t’s, to such an extent that they admitted they were spending on average 6–8 weeks approving work requests which would then be developed in less than 10 days! They subsequently created a fast-track approvals process, which removed most of the detailed business justification, with funding from a budget envelope for that application (funding options are discussed at the end of this chapter).
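To make the screening and fast-tracking discussion more concrete, here is a minimal sketch of how a scoring model and idea-type routing might be expressed. The criteria, weights, thresholds and sample ideas are all invented for illustration – in practice they would be agreed jointly by IT and the business:

from dataclasses import dataclass

# Illustrative weights for the filtering criteria mentioned above.
WEIGHTS = {
    "expected_revenue": 0.35,
    "expected_cost_reduction": 0.25,
    "regulatory_compliance": 0.25,
    "operational_improvement": 0.15,
}

FAST_TRACK_COST_LIMIT = 10_000  # e.g. small change requests


@dataclass
class Idea:
    name: str
    idea_type: str        # "change request", "project idea", "operational request"...
    estimated_cost: float
    scores: dict          # criterion -> score from 0 to 10


def weighted_score(idea: Idea) -> float:
    """Roll the per-criterion scores up into a single screening score."""
    return sum(weight * idea.scores.get(criterion, 0)
               for criterion, weight in WEIGHTS.items())


def route(idea: Idea, threshold: float = 5.0) -> str:
    """Decide which decision-making path an idea follows."""
    if idea.idea_type == "change request" and idea.estimated_cost <= FAST_TRACK_COST_LIMIT:
        return "fast-track to approved demand"
    if weighted_score(idea) >= threshold:
        return "promote to project request"
    return "keep in the pipeline pending further evaluation"


if __name__ == "__main__":
    small_fix = Idea("Add field to order screen", "change request", 8_000,
                     {"operational_improvement": 6})
    warehouse = Idea("Strategic data warehouse", "project idea", 5_000_000,
                     {"expected_revenue": 9, "expected_cost_reduction": 5,
                      "operational_improvement": 7})
    for idea in (small_fix, warehouse):
        print(f"{idea.name}: {route(idea)} (score {weighted_score(idea):.1f})")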
Prioritizing and approving demand

Ideas which pass the first level of screening and validation move down to the next stage and are transformed into project requests, for which executive sponsorship and approval are sought. Project requests are further qualified in order to build a business case, which will be based on a combination of business alignment, costs, benefits, technology alignment, risk and IT resource and scheduling constraints. Let us look at each of these in turn.
Business alignment

All firms have medium- to long-term strategies (e.g. become market leader in product category X, reduce costs by Y%) and corresponding business objectives for each year (e.g. launch new product ABC and generate first-year sales of $Xm). Companies should therefore invest in projects which help them to meet such goals. This should result in demand for IT products and services being aligned with company strategies and objectives, known as business alignment. The first and most elementary step in building a business case would be to link a project request directly or indirectly to achieving one or more clearly identified business objectives (appropriately weighted, because not all objectives are of equal importance).
Now you might be forgiven for asking ‘Well, isn’t that rather obvious?’ How can IT initiatives not be aligned with the business – after all, it’s company money being spent, isn’t it? Well, as explained at the start of this chapter, projects in most companies are launched based mainly on business sponsor influence and decibel management rather than on any rational decision-making process. This makes it possible for projects to be launched based on subjective business cases – which may or may not be aligned with business objectives.
Here’s a real-world example. At a pharmaceutical company, the stakeholder of a very successful customer service project, which had just been implemented, wanted to launch a follow-up project for the coming year. Her idea was to enable call-centre agents to identify customers based on automatic number identification – despite the fact that agents were already able to identify callers within seconds of picking up the phone by simply typing in the first few letters of their names and the first two digits of their post code. The business justification for such a project, which would require relatively complex and far from inexpensive (at the time) computer–telephony integration, was absolutely zero. Fortunately the IT Planning Manager had no trouble explaining to her that a project whose business benefits could be summed up as ‘being able to reduce customer identification time from around three seconds today to less than a second’ would never pass the screening and evaluation process that IT and the business had jointly set up. She agreed, but then added as an afterthought that it was really sexy technology and ‘wouldn’t it be great to be able to do that?!’ In the absence of a proper demand management process, it is entirely possible that this type of MBMA initiative (management by magazine article) could have ended up being approved and funded.
Costs

IT costs can be broken down into hardware, software and people, each of them capitalizable in whole or in part depending on accounting rules. Given the frequent cost overruns of IT projects, ideally we would like to be able to define at this stage a detailed budget for hardware, software and people, so that the business case can be based on costs which are as accurate as possible. Unfortunately, this is wishful thinking, for a very fundamental reason: the detailed functional requirements (which are necessary to define what the technical solution will be) are not yet known at this stage. This is only to be expected, since it would take at least a few weeks to define these detailed requirements, and you won’t be able to do this until the project has been approved.
We consequently wouldn’t be able to define an accurate budget anyway. In fact, in some cases we might not even know at this stage what the final solution will be based on in terms of technology or product (buy or build), so any budgeting will necessarily be an estimate.
You will no doubt have noted the conundrum here. In order to define a detailed budget you first need to define the detailed requirements (since that is what will enable you to define the technological solution and how much it will cost). However, in order to define the detailed requirements – as opposed to just high-level requirements – you need to mobilize people from both IT and the business for at least a couple of weeks. But you cannot do this until the project has been approved – based on a business case, which requires the detailed budget and project schedule! We shall call this circularity the ‘commitment conundrum’ – see Figure 4.4.
The commitment conundrum leads us to a fundamental conclusion. During the approvals process in demand management, you can only define an estimated budget based on high-level requirements. You cannot define a budget that will be cast in stone, which you can later hold an IT department or an external vendor responsible for delivering against contractually. The estimated costs will evolve once the project request is validated and moves to the project planning phase – and (yes, you guessed it) will evolve yet again once the project is under way.
Figure 4.4 The commitment conundrum: to approve a project with signed-off commitment you need to define an accurate business case and project schedule, which requires defining the detailed requirements and the corresponding IT solution, which requires mobilizing resources from IT and the business for at least a few weeks – which in turn requires the project to have been approved
For those from the traditional client–vendor school, this inability to properly define costs and commitment upfront might come across as poor, if not downright irresponsible, management – but wait for the explanations later in the chapter!
Finally, any question of cost raises the question of who pays – the business or IT – as this will help to regulate supply and demand. This is a subject in its own right, which we will cover in detail in Chapter 7, ‘Financials’. At this stage let us focus on the management of the demand process regardless of who the payer is.
Benefits

IT projects are supposed to generate business benefits in the form of either increased revenue (make money) or decreased costs (save money). All other types of benefits, e.g. increased customer satisfaction, reduced order cycle time, reduced risk or better regulatory compliance, ultimately end up being a lever for one or both of these categories. When quantifiable, benefits can usually be measured in operational terms like sales cycle performance (e.g. average order size), delivery performance (e.g. order cycle time) or customer service performance (e.g. first-call resolution rate). Ideally you should be able to translate this into increased revenue or decreased costs, but in practice this is extremely difficult – you can never be sure to what extent changes in benefits are linked to the outcome of the project or to other factors.
There will also be intangible benefits whose effects are difficult to quantify, e.g. customer satisfaction, customer referrals or company reputation through better customer service. Where possible, these should also be identified at this stage, so that they can later be tracked from a trend perspective as part of the ongoing cost–benefit analysis.
It would be nice to be able to quantify and nail down expected benefits at this stage. Unfortunately, just as for costs above, benefits can only be estimates, which will be firmed up during project implementation, and once in production, monitored as part of the ongoing cost–benefit analysis.
Some companies take benefits one step further and reason at a higher level in terms of ‘business value’ (usually shareholder value), but also stakeholder value (i.e. for customers, suppliers, employees and the community in which the company operates). The notion of business value can also help when you are trying to compare the business benefits of two very different projects. For example, if one project for the sales force can generate $3m in increased sales, and another for HR can result in increasing employee retention by 15%, then you would need to roll up the various benefit criteria into some sort of business value indicator, otherwise you’d be comparing apples and oranges (though another equally valid view would be that you shouldn’t even be comparing them in the first place).
We have therefore deliberately avoided using the term business value in this book because it is ultimately a subjective, often overused, ‘feel-good’ term that can mean different things to different people.
Technology and architecture alignment

That business initiatives would need to align with technology and architecture might appear to some to be a case of the tail wagging the dog. After all, the technology is there to serve the business, so why should this even come into the picture, you might ask? Well, there are two answers.
Firstly, in the rapidly changing technology landscape that characterizes IT, new technologies come and go every 3–5 years. And since the IT department can only be skilled in so many of them, the most fundamental technology alignment is to try and ensure that the IT department actually has the skills needed to build or configure a new system (NB, when the technology is known, which is not always the case at this stage of the planning phase). When this is not possible, you can, of course, always bring in external resources or even outsource the work, but you’d still have to deal with the skills gap once the systems are in production.
Secondly, there is no such thing as a stand-alone system. Even if it somehow starts off that way, changing business requirements driven from the outside (e.g. regulatory requirements or an acquisition) or the inside (e.g. a reorganization or new product launch) will sooner or later require that system to share information with other systems. Thus all new applications ultimately end up as part of an enterprise whole, linking in one way or another the demand and supply chains, from marketing and sales through to order management and customer service. They therefore have to be able to talk to each other, both logically, e.g. to share customer or order information, and physically, e.g. to be technically compatible at the bits and bytes level. In the absence of technology and architecture alignment, you end up over time with a complex mix of different systems, built at different times, by different people, for different reasons. The resulting islands of automation are costly to manage because of the multiple technologies and skill sets required, and are inherently unreliable because of data duplication and process inconsistencies.
At the end of the day, you can’t just say ‘Let’s roll out a new system!’. Any new project request must be scored against technology and architecture alignment – ideally based on a technology road map or an architecture plan linked to business objectives.
Risk

Risk is usually an absent ingredient in demand management. Managing risk is all about monitoring the likelihood of events negatively impacting a project, and mitigating that risk by taking preventive action. Risk management should cover at least the following areas:
• Technology and architecture: how tried and tested is the technology to be used (when known), what is its level of complexity and how stable is it? Will it integrate well into the enterprise architecture, or could it end up as an island of automation, with the resultant disconnect from the rest of the business?
• Organizational: how mature is the organization in being able to adapt or change its processes? Will it be open to change, or resistant?
• IT resources and skills: does IT have the necessary resources in terms of availability and skills to actually deliver this type of project?
• Benefits realization: are the benefits put forward for the business case actually realistic and achievable?
Finally, risk management is an ongoing activity, and not just a one-off limited to getting a project approved. As we have seen above, nothing is cast in stone – the business environment, costs and benefits can and will change over the life of the project. Risk will therefore also change, and new risks might occur which need to be monitored. A typical example is technology risk, which only becomes clearer once the project has started (and even then sometimes only after a pilot is up and running 3–6 months later).
Approving the business case

Once the project request has been appropriately scored in terms of business alignment, costs, benefits, technology and architecture alignment and risk, these scores can be combined to produce a business case. This can be defined in either financial terms (ROI – return on investment; NPV – net present value; IRR – internal rate of return; payback period or break-even point) or operational terms (sales cycle, delivery or customer service performance) or both. Using one or more of the traditional financial measures mentioned above to define the business case is not particularly well suited to IT projects.
It is more realistic to measure ROI in operational terms, by evaluating how new or significantly improved operational capabilities can help to gain a competitive edge by reducing costs, driving growth or improving customer loyalty. This will be discussed in more detail in Chapter 7 on financials (‘The limits of financial ROI when applied to IT’, p 102).
Whatever method is used, once the business case is approved and executive sponsorship is obtained, the project request moves down to the next stage as a project; otherwise it remains in the pipeline pending further evaluation. Finally, note that even the business case at this stage is an estimate (you must be getting used to this by now...) since both the costs and the benefits can only be validated after the project has started. One of the biggest myths of the traditional business model is the belief that you can define a watertight business case upfront, and then expect the documented business benefits to start flowing once IT has delivered the solution.
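As a purely numerical illustration of the financial measures mentioned above – the figures are invented, and Chapter 7 explains why such measures should be treated with caution when applied to IT – a business case could be sanity-checked along these lines:

def npv(rate: float, cash_flows: list) -> float:
    """Net present value, where cash_flows[0] is the (negative) upfront investment."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))


def payback_period(cash_flows: list):
    """First year in which cumulative cash flow turns positive, or None if it never does."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None


if __name__ == "__main__":
    # Hypothetical project: $500k upfront, $200k of net benefits per year for four years.
    flows = [-500_000, 200_000, 200_000, 200_000, 200_000]
    print(f"NPV at a 10% discount rate: {npv(0.10, flows):,.0f}")
    print(f"Payback period: {payback_period(flows)} years")

At a 10% discount rate this hypothetical project has a positive NPV of roughly $134 000 and pays back in its third year – numbers which, as the commitment conundrum reminds us, are only as good as the estimated costs and benefits behind them.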
Planning approved demand

The end result of this second screening and validation process is to move a project request down to the final stage of a project (NB not just new projects, but also work related to keeping the lights on). Note that though the project is approved (as in ‘this is a good idea and we should be doing it’) it will only exit the funnel for execution if funding is available. It is at this stage that a project manager is usually assigned, and detailed planning, budgeting and resource allocation (where possible) takes place. Not surprisingly, detailed planning can and will result in changes to the original information provided at project request level which led to the approval, mainly in the areas of technology, timings, costs and risks. These changes then become part of the forecasted planning for the project.
Linking demand to resource capability

The processes described above for capturing and approving demand from idea through to project primarily address business aspects like costs and benefits. But what about IT resource and scheduling constraints – after all, demand might be justified from a business perspective, but does IT have the resources and the skills necessary to do it? An integral part of the approvals process should therefore be to match project requests which are in the final stages of approval to IT resource and scheduling constraints. This exercise should normally already be part of the risk profile of the project request, discussed earlier in this chapter.
The outcome of this resource demand analysis and capacity planning would be to secure the necessary resources as far ahead of time as possible. This would be done through a combination of new hiring, contractors and reprioritization of other project requests – or even concluding that the work cannot be staffed this time round and should be put on hold until resources become available.
If you don’t do a resource demand analysis and capacity planning as part of your approvals process, then the business executives who approve projects are essentially saying that IT has elastic resources and that it should go and figure it out. Which, to put it mildly, is highly unreasonable. Can you imagine what would happen if the same business executives put together a business plan which called for building so many units of new products A, B and C, and then told manufacturing to go and figure out the resourcing, materials and production planning – and by the way, without impacting the already planned production of other products?
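A resource demand analysis does not need to be sophisticated to be useful. The sketch below – with invented skill categories, capacities and project figures – simply compares the person-days requested by project requests nearing approval against the capacity available per skill:

# Person-days of capacity available per skill over the planning horizon (illustrative).
capacity = {"developer": 400, "business analyst": 150, "DBA": 60}

# Person-days requested by project requests nearing approval (illustrative).
requests = [
    {"name": "CRM phase 2", "developer": 180, "business analyst": 60, "DBA": 20},
    {"name": "Regulatory reporting", "developer": 150, "business analyst": 70, "DBA": 30},
    {"name": "Sales dashboard", "developer": 120, "business analyst": 40, "DBA": 25},
]


def capacity_gaps(capacity: dict, requests: list) -> dict:
    """Return the shortfall (positive number) per skill if all requests were approved."""
    demand = {skill: 0 for skill in capacity}
    for req in requests:
        for skill in capacity:
            demand[skill] += req.get(skill, 0)
    return {skill: max(0, demand[skill] - capacity[skill]) for skill in capacity}


if __name__ == "__main__":
    for skill, gap in capacity_gaps(capacity, requests).items():
        status = f"short by {gap} person-days" if gap else "within capacity"
        print(f"{skill}: {status}")

Even a back-of-the-envelope comparison like this makes the staffing conversation explicit, rather than leaving IT to ‘go and figure it out’.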
Approving demand based on portfolios

IT spending on new projects and running production systems represents an investment (in the general sense of the term, whether it’s capex or opex). You therefore expect a payback in the form of business benefits, either in financial terms (e.g. increased revenue or decreased costs) or operational terms (e.g. decreased order cycle time or reduced customer service handling time). Demand for IT spending in the average company is very high (2–10% of revenue and up to 50% of capital spend), spans all BUs and always exceeds IT’s ability to deliver in terms of budget, resource and scheduling constraints. With so much competing demand coming in at the top of the pipeline, and only so much being approved at the other end (see Figure 4.1), it would make sense to spread it across a number of well-defined investment categories based on a combination of business objectives, expected return and risk.
An analogy is personal investment, in which we spread our money across various categories based on personal objectives, expected return and risk. The resulting portfolio would comprise, for example, cash, stocks, bonds and a mortgage, each of which represents a different mix of risk and return (see Figure 4.5).
Portfolio planning is as applicable to IT investment as it is to personal investment. An IT portfolio would be well balanced across new projects and running and enhancing production systems, based on business objectives, expected return and risk. It would also be managed over time to ensure it continues to meet business objectives, with the category mix varying based on changing business requirements like acquisitions, competitive threats or regulatory compliance.
Figure 4.5 Portfolio examples: a personal investment portfolio spread across cash, stocks, bonds and a house, compared with an IT investment portfolio spread across keeping the lights on (70%), increasing revenue (10%), decreasing costs (5%), regulatory compliance (10%) and strategic initiatives (5%)
For example, a serious downturn in a company’s market sector might result in it reducing the funding for everything except keeping the lights on, just as an upturn in the market might lead it to increase funding for strategic innovations.
A basic example of investment categories that can be used for IT portfolio management is:
• Keep the lights on
• Generate revenue
• Reduce costs
• Regulatory compliance
• Strategic initiatives
By scoring all demand against an appropriate portfolio mix (e.g. 70% keeping the lights on, 10% generate revenue, 5% reduce costs, 10% regulatory compliance and 5% strategic initiatives), a company is better able to select and manage its IT investments so that limited resources are assigned to achievable, business-aligned goals. This would be done in the project request stage as part of the approvals process, in conjunction with the business case.
Portfolio planning would also allow a company to launch projects and fund applications which would otherwise stand little chance of being approved based solely on a business case. For example, an experimental, high-risk, high-return project would in all probability not be approved if viewed in isolation, nor would it continue to receive funding once it went into production as an application. However, when viewed in the overall scheme of things from a portfolio perspective, it might be approved as, say, part of the ‘strategic initiatives’ category.
Portfolio planning would not only look at demand to see whether it corresponds to business objectives – it would also take the reverse approach and look at the business objectives to see whether there are too few or too many corresponding project requests.
Figure 4.6 Sample investment portfolio breakdown by client: business objectives (increase revenue, decrease costs, regulatory compliance, strategic initiatives) plotted against clients (marketing, sales, order management, customer service, finance), with bubble size representing budget
For example, an analysis of the sample project investment portfolio of Figure 4.6 might reveal that the business objective ‘decrease costs’ has too much project funding, whereas the arguably more important objective of ‘increase revenue’ has too few projects. This might result in a decision to modify the portfolio mix to obtain a more balanced portfolio in terms of investment objectives.
Finally, portfolio planning doesn’t end with prioritization and approval. Once the corresponding projects have been launched and are under way, we then have to monitor the performance of the portfolio over time to ensure it is still meeting our initial objectives. This will be covered in more detail in Chapter 6, ‘Monitoring Costs and Benefits’.
The alternative to a portfolio-based approach is a project-by-project approach – unfortunately the norm in most companies – in which projects are approved as a collection of individual and often unrelated items. This lack of categorization not only makes it difficult to invest rationally, it also makes it difficult to react to a changing business environment: instead of adjusting spend based on investment category criteria, we end up doing so based on individual project criteria (suitably biased by business sponsor influence...). Investment on a project-by-project basis would be the equivalent of financial investment based on individual financial instruments, e.g. company A, bond B and currency C, which taken individually might make sense, but which taken collectively could leave you with a low-risk/low-return portfolio in an economically favourable market, or at the other extreme a high-risk/high-return portfolio in a recessionary market – and an inability to know which items to adjust when the market changes.
As a first step away from a project-by-project-based approvals process, the basic portfolio-based approach described in this section would represent a great leap forward in terms of implementing a rational investment planning process. Organizations which have reached a high level of investment maturity would then move on to portfolio optimization techniques, e.g. the ‘efficient frontier’ approach, whose objective would be to define portfolios which yield the maximum value for the least cost. The efficient frontier is beyond the scope of this book, especially since most organizations have not even reached the basic maturity level described here.
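To illustrate the difference between portfolio-based and project-by-project thinking, the following sketch compares how a set of approved items is actually spread across investment categories against a target mix such as the one given earlier. The items, costs and percentages are invented for the example:

# Target portfolio mix from the example above (fractions of total IT spend).
TARGET_MIX = {
    "keep the lights on": 0.70,
    "generate revenue": 0.10,
    "reduce costs": 0.05,
    "regulatory compliance": 0.10,
    "strategic initiatives": 0.05,
}

# Hypothetical approved demand, tagged by investment category (cost in $k).
approved = [
    ("Production support and upgrades", "keep the lights on", 7_100),
    ("E-commerce checkout revamp", "generate revenue", 600),
    ("Invoice automation", "reduce costs", 1_200),
    ("Data privacy programme", "regulatory compliance", 700),
    ("Predictive analytics pilot", "strategic initiatives", 400),
]


def actual_mix(items):
    """Actual share of spend per investment category."""
    total = sum(cost for _, _, cost in items)
    shares = {category: 0.0 for category in TARGET_MIX}
    for _, category, cost in items:
        shares[category] += cost / total
    return shares


if __name__ == "__main__":
    for category, actual in actual_mix(approved).items():
        target = TARGET_MIX[category]
        print(f"{category:22s} actual {actual:5.1%}  target {target:5.1%}  "
              f"drift {actual - target:+.1%}")

Run against these made-up numbers, ‘reduce costs’ comes out overweight and ‘generate revenue’ underweight – exactly the kind of imbalance that the portfolio view, as opposed to a project-by-project view, is meant to surface.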
The missing component in Project Portfolio Management

There is a fundamental flaw in the concept of Project Portfolio Management (PPM) as generally put forward by some vendors and consultants from the IT industry. As we have already seen, only 20% of the average IT budget is allocated to projects (where projects refer to investments for new systems), while the remaining 80% is needed to run production applications or keep the lights on. Today’s projects are tomorrow’s applications, and the operating costs of these applications over their lifetime consume on average five times the original project investment – which explains the 80:20 ratio.
And since it would be highly exceptional for a project to yield its target business benefits on day one, the resulting applications will require ongoing funding before they are able to yield acceptable business benefits.
In its haste to borrow a concept from the financial industry, the IT industry overlooked the applications component, thereby implying that all you had to do was to fund your projects correctly, and presumably everything else would fall into place once the resulting applications were delivered (no doubt generating all those business benefits enshrined in the business case...). In reality of course, correctly funding projects and delivering the resulting applications (usually within the space of a year) is but the first step of a very long journey (of 5–10 years). This means that an application should be managed as an asset which has to be continuously funded over its useful life – funding which should be an explicit part of the investment planning process described in the previous section, and not something relegated to a category called maintenance or keeping the lights on.
The IT industry subsequently coined the term APM (Application Portfolio Management) to plug this gap – which it only does partially because, when you really think about it, there should not be any distinction between projects and applications. If anything, the application should be the reference, since a project is nothing more than the short development phase of a very long application life cycle. This will be explained in more detail in Chapter 5 (‘Maintenance – letting go of the M-Word’, p 79) and Chapter 7 (‘Ongoing Cost–Benefit Analysis for Applications’, p 96).
Portfolio planning as proposed in this chapter, therefore, is not just about projects (as in traditional PPM), but also about applications. This explains why we prefer the more general term portfolio management, without any distinction between projects and applications.
Business cases are in the eye of the beholder

The demand management process described in this chapter is based on a rational decision-making process, which takes into account business priorities, costs, benefits, risk and IT resource and scheduling constraints. This should logically enable a company to define IT demand objectively, resulting in the ‘right’ projects being approved and then funded.
So much for the theory. In the real world, unfortunately, approving and funding projects usually do not follow such an objective process. What often happens is that projects are approved and funded even if they don’t have a valid business case, simply because the corresponding business sponsor or VP shouts the loudest and has a lot of organizational influence – something which most organizations will of course refute with much vehemence.
This is even more acute when multiple BUs have to compete for IT funding from a shared, corporate budget. Human nature and corporate politics will usually ensure that BU priorities end up taking precedence over the ‘common good’. The business case, like beauty, is therefore in the eye of the beholder. At the end of the day, if you want a project approved, you can always put the right numbers together (as the saying goes, ‘figures lie, and liars figure’). And this doesn’t necessarily have to be a deliberate intention to deceive – on the contrary, most business cases are put together with the best of intentions, only they tend to magnify the benefits and reduce the risks and costs.
There are other organizational factors at play too. From an executive perspective, a successful IT project can be a stepping stone to a promotion, or a means of consolidating one’s position in the board room. So there will always be reasons for senior executives to launch IT projects which on the surface are based on a justifiable business case – especially when surfing off the latest buzzword, e.g. e-business and CRM in the late 1990s – but upon further examination can be shown to be unjustified in terms of costs (underestimated), benefits (overestimated) and risk (unacceptably high).
The rational process introduced in this chapter, with demand in clearly defined stages, and decision-making supported by business cases and portfolio planning, should go a long way in reducing the chances of the ‘wrong’ projects being approved. Realistically, however, such approvals can never be entirely eliminated.
Building the IT plan and budget

A properly managed demand pipeline, comprising both new projects and keeping the lights on, should go a long way towards helping the annual planning process, which results in the IT Plan and the corresponding IT budget for the next financial year. The IT budget would basically be prepared in three stages: wish list, approved work and funded work. The wish list would be the total pipeline at budget preparation time, which would be the sum of all ideas, project requests and projects. A filtering and screening process would then reduce this wish list to an approved list, suitably categorized into portfolios, which would represent things the company must do (like keeping the lights on and regulatory projects) and would like to do (all other project requests and projects). Finally, based on the available IT budget, the approved work is reduced to funded work, which represents keeping the lights on plus projects which will actually be done.
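The three stages can be pictured as successive filters over the pipeline. In this deliberately simplified sketch the selection rules, scores and amounts are invented; in practice the filtering would rely on the scoring and portfolio techniques described earlier in the chapter:

# Hypothetical pipeline at budget preparation time: (name, must_do, score, cost in $k).
wish_list = [
    ("Keep the lights on", True, 10, 7_000),
    ("Regulatory reporting", True, 9, 800),
    ("CRM phase 2", False, 8, 1_500),
    ("Sales dashboard", False, 6, 400),
    ("Office move support", False, 3, 300),
]

IT_BUDGET = 9_500  # available budget in $k (illustrative)

# Stage 1 -> 2: screening reduces the wish list to approved work
# (must-do items plus anything scoring above a threshold).
approved = [item for item in wish_list if item[1] or item[2] >= 5]

# Stage 2 -> 3: approved work is reduced to funded work, in priority order,
# until the budget runs out (must-do items are funded first).
funded, spent = [], 0
for name, must_do, score, cost in sorted(approved, key=lambda i: (not i[1], -i[2])):
    if spent + cost <= IT_BUDGET:
        funded.append(name)
        spent += cost

print("Approved:", [name for name, *_ in approved])
print("Funded:  ", funded, f"({spent} of {IT_BUDGET})")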
Without a demand pipeline, the annual planning process essentially takes the form of a frantic organizational scramble over a 1–2 month period (the greater the sums involved, the less time you are given!) as people rush to put some numbers together so that they can ‘turn in their projects for next year’ (one CIO at a mobile operator uses the term ‘project tsunamis’ to describe this phenomenon). In such an environment, the chances are high that investment planning will follow a subjective project-by-project approach, rather than the more rational portfolio-based approach described earlier. IT budgeting is a vast subject which is covered in most IT business books and websites. Martin Curley’s book ‘Managing Information Technology for Business Value’ provides a good introduction to IT budgeting (see ‘Further reading’ at the end of Chapter 9).
Demand from a customer perspective

The demand management process described in this chapter should result in an up-to-date pipeline of customer demand, in which customers are aware of the status of their ideas and requests (e.g. pending approval or on hold), in much the same way as customers in the business world are aware of the status of their orders (e.g. in progress or shipped). This compares with the current situation in most companies, in which customer demand – when it is actually captured – is characterized by a black hole from which it is very difficult to extract information between the time a request is made and the time someone gets back to you much later on. Proper management of demand in terms of timeliness (status feedback on a weekly basis, decisions on a weekly to monthly basis) and rational scoring (based on costs, benefits and risk) should normally encourage customers to feed their ideas into this type of demand chain, since they know they will be objectively processed.
Shaking off the chains of the construction industry

If you are from the traditional school of the client–vendor contractual relationship, in which costs and deliverables need to be defined in detail up front, then not surprisingly, you might have some difficulty buying into this particular way of managing demand. There don’t seem to be any firm numbers to drive decision-making, and nobody seems to be responsible for anything, since everything can change later down the line! You would probably find it unthinkable outside of IT for a customer to approve or reject a project for a building of ‘approximately 20 storeys’, for which the building contractor would only be able to commit contractually to costs 3–6 months after digging has started. So why should it be any different for IT? Well, there are a number of fundamental reasons why.
The main reason, as we saw in Chapter 3, is that you cannot toss a specifications document to an IT department, basically say ‘please do this for me’, and get a quote and a delivery date. Also, as we have just seen, the commitment conundrum makes it logically impossible to commit to firm numbers anyway during the approvals process. But there are other fundamental reasons too, as we shall now see.
The construction industry works primarily based on standard products and components, and standard categories of labour, which, by definition, have standard costs. This enables a reasonably accurate cost estimate. While there might be standard categories of labour in IT (developer, business analyst...), the actual deliverables are anything but standard, since all projects are different. The tools and technologies used by these categories of labour can also vary significantly. Though long a dream in IT, it is not yet possible – and probably never will be because of human behaviour and the fact that each company is different – to have standard process components with standard costs which can be used to produce a quote and delivery date.
Then there’s the question of volume of demand, which for the construction industry is a fraction of that of the average IT department. Because of the regulation of supply and demand inherent in a market economy, a construction firm is unlikely to bid for hundreds of requests to build houses, apartment blocks and skyscrapers – most of which are categorized as urgent and need to be built within the next 6–12 months! Rather it will select those which it is best positioned to win and invest the time and effort (many months) to prepare the proposal and contractual commitment. An IT department, however, doesn’t benefit from a market-based regulation of supply and demand, as we saw in Chapter 1. It can have thousands of work requests on its plate at any one time – which it doesn’t even have the luxury of declining to bid for. All require attention, evaluation and a decision. Even if it wanted to, a company would not have the resources to assign to managing demand to the level of detail required for accurate business cases and contractual commitments.
Another factor is turnaround time for customer requests. Developing accurate business cases and providing firm numbers take time – weeks or even months, not days. Customers making requests to IT cannot be expected to wait months before getting a reply. Business reactivity and cycle times simply don’t allow for such turnaround times – not to mention such poor customer service.
Finally, as we shall see in the next chapter on supply, once a project has started, there will usually be more surprises. For example, ambiguous or evolving requirements might require clarification, which could influence the shape of the final product – and hence the corresponding costs, benefits and risk. In some cases, formal specifications as such might not even be possible, and a mock-up or prototype would first need to be produced before any detailed budgeting and contractual commitment could be done.
If it hadn’t already become clear by the end of the first three chapters that the construction industry does not map very well to IT, it should hopefully have become so now that we have discussed demand management. Unless this reality is accepted by both IT and the business, and each puts on a new hat (not a hard hat...) when it comes to thinking about managing demand, companies will continue to have a high percentage of the ‘wrong’ projects being approved – with unrealistic business cases to boot – and customers will continue to be frustrated in terms of getting timely answers to their requests for IT products and services.
Funding approved demand

Up to now we have shown that during the demand management process it is not possible to accurately determine costs, benefits and risk. Realistically, these can only be estimates. It therefore follows that the business case will also be an estimate. And finally of course, it is not possible to get contractual commitment from either the customer in terms of signed-off requirements, or from the IT department or vendor in terms of deliverables, costs and schedules. So how do we fund approved demand in such a situation, since the traditional approach is to only give the funding in exchange for commitments on deliverables, costs and schedules? The answer depends on the type of demand, planned or unplanned, as discussed earlier in this chapter:
• For unplanned demand, i.e. ‘minor’ requests like small change requests, feature requests, emergency fixes, etc., which concern production systems, with short approval times (weeks) and short delivery times (1–3 months), funding is best achieved by drawing from a budget envelope, similar to a current account or chequing account. The amount would be fixed as part of the annual planning process, e.g. $100 000 for application A, or $1m for applications A, B and C. The account is then decremented until it runs out. These amounts would form part of the ‘product development’ budget for an application (rather than the ‘maintenance’ budget, a term we will do away with at the end of the next chapter). We will talk about the various cost categories of software in more detail in Chapter 7 on financials.
• For planned demand, i.e. ‘major’ requests like new systems and key enhancements to production systems, with long approval times (months) and long delivery times (6–12 months or more), funding is best done incrementally, based on 3–6 month milestones which recognize uncertainty and the evolving nature of costs, benefits and risk.
Examples of milestones would be proof-of-concept, pilot, first phase with basic deliverables, second phase with additional deliverables, etc. Using this approach, a project that did not deliver acceptable benefits at acceptable costs at a given milestone could see its funding postponed, suspended or even cancelled entirely. This should not be seen in a negative light as ‘punishment’, but simply as part of the reality of delivering IT solutions, which is an organizational learning process out of which you might get some things that work well, and others that work less well and need to be revisited.
This would be similar to the way venture capitalists fund requests from entrepreneurs, via increasing increments depending on the results of milestones in the business plan. In this way they are better able to balance risk and return, rather than literally bet a lump sum on an untried and untested plan. For a further discussion of this venture capitalist funding approach, please refer to the article by Harvard professor Rob Austin entitled ‘No Crystal Ball for IT’ (see ‘Further reading’ at the end of Chapter 9), which also discusses the downsides of equating IT to the construction industry.
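A rough sketch of the two funding mechanisms – a decrementing budget envelope for unplanned demand, and milestone-based releases for planned demand – might look like the following. The amounts, application names and milestone outcomes are invented:

class BudgetEnvelope:
    """Current-account style funding for small, unplanned requests against an application."""

    def __init__(self, application: str, amount: float):
        self.application = application
        self.remaining = amount

    def draw(self, request: str, cost: float) -> bool:
        """Fund a request if the envelope still has enough left; otherwise refuse."""
        if cost > self.remaining:
            print(f"{self.application}: cannot fund '{request}' (only {self.remaining:,.0f} left)")
            return False
        self.remaining -= cost
        print(f"{self.application}: funded '{request}' ({cost:,.0f}); {self.remaining:,.0f} remaining")
        return True


def release_milestone_funding(milestones, results):
    """Release planned-demand funding one milestone at a time, stopping if results disappoint."""
    released = 0.0
    for name, amount in milestones:
        if not results.get(name, False):
            print(f"Milestone '{name}' did not deliver acceptable results - funding suspended")
            break
        released += amount
        print(f"Milestone '{name}' passed - released {amount:,.0f} (total {released:,.0f})")
    return released


if __name__ == "__main__":
    envelope = BudgetEnvelope("Application A", 100_000)
    envelope.draw("Small change request", 15_000)
    envelope.draw("Emergency fix", 90_000)  # refused: the envelope is nearly empty

    milestones = [("proof of concept", 50_000), ("pilot", 150_000), ("first phase", 300_000)]
    results = {"proof of concept": True, "pilot": False}  # the pilot disappointed
    release_milestone_funding(milestones, results)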
Roles and responsibilities

The organization, roles and responsibilities required to manage demand are explained in Chapter 8.
5
Managing Supply
Never commit both to budget and to schedule for an application you have never delivered on budget and on schedule before. (Paul A. Strassmann - from ‘The Twenty Deadly Sins of Project Management’, 1991)
Managing supply – traditional model

Managing supply under the traditional model is based on the waterfall method (see Figure 5.1, derived from Figure 2.1), so called because of the image of water falling over successive phases, each performed by different teams of specialists and conditioned by signed-off acceptance before being able to proceed with the next one. And since water can’t flow upstream, the clear message is that sign-off and commitment for a given phase is a one-way journey, with any return upstream very difficult. Furthermore, each phase or process is subject to the ‘it’s not my problem’ problem explained in Chapter 2.
This rigidly procedural, life-cycle approach starts off with analysts sitting down with business users in an attempt to document requirements. We saw in Chapter 3 how this phase yields a detailed, contractually signed-off SoR, and its inherent disadvantages. Depending on the buy or build decision, this detailed requirements document would then drive either:
• In the case of a decision to build, a sequential waterfall method, i.e. a strict linear approach from analysis, design, development and testing through to implementation, with each phase performed by different teams of specialists.
• In the case of a decision to buy, an often lengthy package evaluation process – which in extreme cases can be likened to the search for the holy grail, with a requirements document weighing in at over a hundred pages. This is then followed by the configuration and customization of the chosen product – often using the waterfall method – to correspond to the exhaustive requirements.
Figure 5.1 Managing supply with the ‘waterfall’ method (traditional model): define requirements, buy/build, design, develop, test, deliver/implement, service/support
In either case, though the final deliverable theoretically corresponds to documented requirements, we saw how it stands little chance of corresponding to actual requirements. At best it represents a starting point for subsequent rework; at worst it is unusable.
This reality is vividly illustrated by the following example. A project manager from a consulting company had a particularly confrontational meeting with one of his clients, for whom they were building a new system. When one of the client’s key users happened to mention how vital it was that the system meet their requirements, he reached into his attaché case and brought out the contractually signed-off SoR, waved it in the air, and looking firmly at all the people round the table, said in no uncertain terms, ‘Folks, I’m not here to meet your requirements – I’m here to produce what has been signed off in this document.’
Finally, the problematic nature of the supply side of the traditional model led IT to borrow a term from industry – maintenance – to refer to corrections and rework which should normally have made it into the first version but didn’t. Maintenance (also called corrective maintenance, an oxymoron) became in essence a dirty word, the existence of which prevented IT from doing the more ‘noble’ work of delivering new projects. These new projects would in turn lead to their own load of maintenance, thus spawning that bane of any CIO’s existence, the maintenance backlog. This explains why there is more status and recognition in working on new projects rather than on maintenance (as we’ll see towards the end of the chapter, maintenance is a term that should not even exist in IT).
Note that enhancements to existing systems, though normally also part of maintenance, are always accounted for separately by IT – since they were not part of the original specifications, they are essentially considered new work, and therefore politically acceptable. Hence that all-too-frequent rejoinder from IT to the business: ‘That’s not a bug – that’s an enhancement request!’ (read ‘That wasn’t part of the original contract...’).
Managing supply – new model

The downsides of the waterfall method are nothing new. Alternative approaches have existed since as long ago as the 1980s and have been successfully implemented in many companies around the world. Unfortunately, they have never become mainstream; we will examine the reasons why further on in this chapter. Here are some of the most well-known approaches:
• Agile (several flavours exist):
  • ASD (Adaptive Software Development);
  • AUP (Agile Unified Process);
  • Crystal Clear;
  • DSDM (Dynamic Systems Development Method);
  • FDD (Feature-Driven Development);
  • Scrum;
  • XP (Extreme Programming);
• JAD (Joint Application Design);
• PD (Participatory Design);
• RAD (Rapid Application Development);
• RUP (Rational Unified Process);
• Spiral method.
We will not get into a discussion about the relative merits of each and will deliberately remain at a generic level called ‘iterative development’ or ‘prototyping’, since the common denominator amongst all of them is the delivery of a prototype after one or more iterations.
There is no one ‘right’ iterative method; you might choose one or another depending on the nature and scope of the work to be done, and whether you are configuring an off-the-shelf package or developing something from scratch. Whatever the various permutations and derivatives of these approaches, they all subscribe to the basic premise that it is very difficult for users to say upfront what they want out of a new system. This can only be done once they have had a chance to see what the system is capable of doing, either visually or by hands-on experimentation.
In practice, this will mean first obtaining high-level requirements in interactive workshop sessions rather than in one-on-one interviews or in meetings. These high-level requirements are not cast in stone, but will need to be adjusted and confirmed by validating an intermediate result, usually in the form of a prototype. As a Japanese proverb goes, ‘When I hear, I forget; when I see, I remember; when I do, I understand’. The first version delivered based on this prototype would not be considered an end in itself, but merely the first step in a series of incremental releases whose content will be validated based more on real-world usage than on previously documented requirements.
Finally, the traditional client–vendor relationship with its contractually formalized upfront requirements and deliverables would not be applicable, for the simple reason that both would be moving targets. This would consequently be replaced by a partnership in which a cross-functional team (IT and the business) works towards a common business objective of achieving workable results over time. The supply phase for the new model is shown in Figure 5.2. Let us now see how this would work in practice.
Figure 5.2 Managing supply (new model): run process workshops, decide which processes to automate, define requirements, buy/build, design, develop/configure, validate over multiple iterations, test, deliver/implement and service/support, with subsequent releases feeding back into the cycle
Iterative development in practice

The iterative or prototyping approach can be broken down into the following phases, which we’ll explain in further detail below:
• Defining detailed requirements during workshops;
• Prioritizing business processes;
• Building a prototype;
• Validating the prototype;
• Implementing a pilot;
• Implementing subsequent releases.
Defining detailed requirements during workshops

As explained in the previous chapter on demand management, once an idea has become an approved and funded project, it falls out of the pipeline into the supply phase (Figure 4.1). However, since the project was approved based on high-level requirements and a high-level business case, the first step in the supply phase is to define the detailed requirements – based not on ‘specifications’, but on process modelling and data modelling.
Process models do two things. Firstly, they explain what people do, e.g. first someone answers the phone, then takes down the customer’s name and address, then enters the order. Secondly, they explain how they do each of these steps, e.g. manually, semi-manually or automatically using a particular tool or system. A process model is the fundamental starting point for any IT project since it is from here onwards that you’re going to decide which parts of your work you’re going to try and improve.
Data models describe the underlying relationships between the everyday things that business users work with. To take the simplest example, a customer can have more than one address, and an order can be shipped to any one of these addresses. Depending on what business you’re in, this can become even more complex if, for example, an order can be split into multiple parts, and each sub-order can be shipped to a different address. Such relationships are part of the data foundation (a construction term which is fully justified in this context) of whatever software application you’re going to buy or build to meet the processes described in the process model.
Process and data modelling will be done during interactive workshop sessions with the core project team, rather than in traditional meetings and interviews. The number of sessions and the mix of participants will depend on the functional areas being addressed and the level of processes being defined. For example, participants in a CRM workshop who must define processes from first contact with a prospect through to order entry would include people from marketing, sales and order management. If, however, the objective of a workshop is to define the detailed processes for lead generation, then the participants would be limited to marketing and sales, with no-one from order management. In general, there should not be more than 7–10 people in a workshop. Finally, IT is always present in such workshops, the participants being the IT project manager, a business analyst and – in a radical departure from the traditional model – the lead software developer who will ultimately be responsible for developing the system (the changing roles of the business analyst and developer under the new model will be discussed in Chapter 8, ‘Roles and Responsibilities’).
Sessions usually last 1–3 days, and are run by two people: one standing in front picking the participants’ brains and sticking post-it notes on the wall (representing definitions, processes and data), and a ‘scribe’ who notes it all down. This then becomes part of the documented deliverables, which usually don’t exceed 20–30 pages. These comprise:
• Formal business definitions (e.g. what is a customer or what is a contract, which can sometimes take a day or more to gain consensus on);
• High-level processes broken down into lower-level processes (usually not more than one level down), plus associated metrics;
• Data entities and relationships;
• Decisions made and open items.
Managing supply 67
Figure 5.3 Detailed processes and metrics for a telephone pizza parlour: ‘take order’ breaks down into identify customer, enter order, validate order, up-sell or cross-sell, calculate customer discount, confirm order, propose payment options, get payment method, get credit card details, process credit card payment and dispatch order, with metrics such as average order amount and order processing time
Figure 5.4 shows a simplified data model foundation on which the above processes would sit. Each box, called an entity, would translate into a physical file or table in a system (using a spreadsheet like Microsoft Excel as an analogy, each box would correspond to a separate worksheet). Translated into plain English, the data model reads as follows: • A customer can have multiple credit cards; • A customer can have multiple addresses;
[Figure 5.4 Data model for a telephone pizza parlour (simplified): entity boxes for CUSTOMER, CREDIT CARD, ADDRESS, ORDER, ORDER ITEM, PRODUCT TYPE and PRODUCT PRICE.]
Translated into plain English, the data model reads as follows:
• A customer can have multiple credit cards;
• A customer can have multiple addresses;
• A customer can have multiple orders;
• An order can be paid with only one credit card;
• An order is delivered to a single address;
• An order can have multiple items;
• An order item corresponds to a product type (e.g. pizza, beverage...);
• A product type can have multiple prices (e.g. based on criteria like quantity).
The apparent simplicity of this model actually hides some essential business fundamentals which, if absent, would result in a system which would not be able to do everything it’s supposed to. Here are two simple examples:
• If the assumption is made that a customer can only have one credit card, then that information could be held in the customer entity and you wouldn’t need a separate credit card entity as shown. In reality, of course, people can have more than one credit card, so if the system design didn’t reflect this, then the customer would be unable to pay with a second credit card, or she’d have to pay cash, or you’d have to note down the credit card number and process the payment manually.
• People don’t only order pizzas for delivery to their own homes; they can also order from someone else’s place and have them delivered there, or they can order at work for office delivery. This is why a separate entity is needed for multiple delivery addresses. Now you might wonder why you’d even want to store addresses; after all, the address could always be noted down at the time of the order. True, but if you wanted to streamline your order process and reduce the average time to take an order over the phone, then by simply selecting an address from the customer’s list of available addresses you can shave off at least 30 seconds from the order capture process.
Getting the data model right before starting to do any coding – even of a prototype – is essential, because this captures the business rules on which your software solution will be built. If you get it wrong, then a whole lot of unpleasant things can start to happen: it can take longer to build certain features; they can be less reliable; reporting can become more complex; some business enhancements would become problematic, and indeed in some cases would no longer be possible. All of this translates into increased costs, increased cycle times, decreased reliability and decreased business benefits – all of which increase exponentially over time because systems once built end up being used for 5–10
years or more. All because ‘you didn’t have the boxes in the right places’! The good news is that by asking the right questions during the interactive workshops, the business fundamentals come to light pretty quickly. Finally, these workshops are usually held off-site to maximize the chances of full participation without disturbances, and to increase ‘bonding’ between participants. Note that such bonding doesn’t just apply to IT and the business, but also to different parts of the business who don’t always work together because of organizational or political reasons. For example, one of the first JAD sessions a project manager hosted brought together two organizational rivals whose mutual feelings were well known in the company. However, after three days of reviewing processes from an enterprise rather than a departmental perspective, they had started to get a better idea of each other’s challenges. While they didn’t exactly start hanging out together, they did reach an agreement to work as part of a team, something which was difficult to imagine just 3 days earlier.
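To make the entity relationships of Figure 5.4 concrete, here is a minimal sketch of the pizza parlour data model expressed as Python dataclasses. The class and field names are illustrative assumptions rather than anything prescribed in this chapter; the point is simply that each box in the figure becomes a type, and each one-to-many relationship becomes a list-valued field.

from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the Figure 5.4 entities; field names are assumptions.

@dataclass
class ProductPrice:
    min_quantity: int        # a price can depend on criteria like quantity
    unit_price: float

@dataclass
class ProductType:
    name: str                                                   # e.g. 'pizza', 'beverage'
    prices: List[ProductPrice] = field(default_factory=list)    # a product type can have multiple prices

@dataclass
class CreditCard:
    number: str

@dataclass
class Address:
    street: str
    city: str

@dataclass
class OrderItem:
    product: ProductType     # an order item corresponds to a product type
    quantity: int

@dataclass
class Order:
    delivery_address: Address                               # an order is delivered to a single address
    paid_with: CreditCard                                   # an order can be paid with only one credit card
    items: List[OrderItem] = field(default_factory=list)    # an order can have multiple items

@dataclass
class Customer:
    name: str
    phone: str
    credit_cards: List[CreditCard] = field(default_factory=list)   # a customer can have multiple credit cards
    addresses: List[Address] = field(default_factory=list)         # a customer can have multiple addresses
    orders: List[Order] = field(default_factory=list)              # a customer can have multiple orders

In a real system each class would become a file or table, as described above; getting these relationships right here is what prevents the ‘second credit card’ and ‘deliver to the office’ problems discussed in the examples.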
Prioritizing business processes
With agreement on the business processes (i.e. how the business works – or should work from here on), the next step is to select and prioritize those processes which need to be improved, and the associated metrics against which success will be measured. For our pizza parlour, it might be decided, for example, that the process ‘Take order’ needs to be improved, with the associated metrics being the average order amount (e.g. $15) and order processing time (e.g. a maximum of 2 min per order).
Dropping one level down to the underlying sub-processes (Figure 5.3), we can see that the first metric, average order amount, could be increased by improving the process ‘Up-sell or cross-sell’. This could be automated. For example, based on a customer’s order, the receptionist’s screen could propose discounted options for larger size pizzas or additional pizzas (up-sell) or additional products (cross-sell). The second metric, order processing time, could be reduced by identifying prior customers (e.g. via name and phone number) and encouraging them via loyalty discounts to leave their credit card details, which would no longer have to be taken down manually for each order. This would be done by creating two new processes called ‘identify customer’ and ‘get payment method’.
Implicit in this stage is the definition of the solution, which is not necessarily based on technology. For example, not all processes lend themselves to automation or business transformation; in fact a lot of business processes are simply common-sense steps, which
are formalized in terms of roles and responsibilities. For example, in Figure 5.3, the process called ‘Confirm order’ simply means that the receptionist reads back the order to the customer on the phone to confirm quantities, pizza types and delivery address. All too often companies rush into project mode, which automatically assumes a technological fix to a business problem. Using this approach however, it becomes possible to identify those processes which are at fault and to explore ways of improving them – which may or may not require a technology solution. Sometimes a slight reorganization of roles and responsibilities or the addition of a new resource – at far less cost than that of a new IT system - can be sufficient to dramatically improve faulty processes.
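As a rough illustration of how the two ‘Take order’ metrics might be tracked against their targets once the improved processes are in place, here is a small Python sketch. The $15 and 2-minute targets come from the example above; the data structure, function name and sample figures are invented purely for illustration.

from statistics import mean

TARGET_AVG_ORDER_AMOUNT = 15.0     # dollars
TARGET_MAX_PROCESSING_TIME = 120   # seconds, i.e. a maximum of 2 min per order

def take_order_metrics(orders):
    """orders: list of dicts with 'amount' (dollars) and 'processing_seconds'."""
    avg_amount = mean(o["amount"] for o in orders)
    avg_time = mean(o["processing_seconds"] for o in orders)
    return {
        "average_order_amount": round(avg_amount, 2),
        "order_processing_time": round(avg_time),
        "amount_target_met": avg_amount >= TARGET_AVG_ORDER_AMOUNT,
        "time_target_met": avg_time <= TARGET_MAX_PROCESSING_TIME,
    }

# Orders taken before and after improving 'Up-sell or cross-sell' and 'Identify customer'
before = [{"amount": 12.50, "processing_seconds": 150}, {"amount": 14.00, "processing_seconds": 140}]
after  = [{"amount": 16.00, "processing_seconds": 110}, {"amount": 17.50, "processing_seconds": 100}]
print(take_order_metrics(before))   # neither target met
print(take_order_metrics(after))    # both targets met

Whether these numbers are produced by a script, a report or the order-entry system itself matters less than the fact that they are defined up front and measured against the agreed targets.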
Building a prototype
Up to now, there has been absolutely no mention of technology. The objective has been to work purely at a business level and define the business problem, the corresponding processes and metrics, and the high-level solutions. For those processes which will require a technology solution, IT now goes away to either evaluate packaged solutions or to build an in-house solution, either as a new project or as part of an enhancement to an existing application. The pros and cons of buy vs build are beyond the scope of this book; suffice it to say that even though there are many functionally rich software packages on the market today, with their correspondingly short delivery cycles, IT departments can still have good reasons for going down the build route. For example, a report in 01 Informatique of 27th April 2007, quoting from the ‘European Software Services Survey, 2006 to 2007’ by Forrester Research, which polled 115 European CIOs, states that 75% of respondents still build applications in-house – even though 58% had already invested in both an ERP and a CRM package. The choice of buy vs build, therefore, is not a clear-cut decision, and many firms continue to do both.
Whatever the option chosen, this step involves IT working in close conjunction with the business users on the project team in order to configure (package) or develop (in-house solution) a prototype to map to the prioritized business processes decided above. This is usually done via two or three iterative passes:
• The first one is to produce a screen mock-up as quickly as possible (2–4 weeks maximum), the objective being for users to be able to walk through their processes and validate the content and positioning of information on the screen. This first pass would be limited to screens and information flow, and would exclude error checking, data validation and interfaces to and from other systems. There is usually lots of feedback from this first pass, but as it only represents a mock-up, correcting the inevitable design errors and assumptions costs far less in terms of time and money than if they
were applied to a finished product (5–10 times less). Finally, the mock-up can be either a throw-away version or reusable. A throw-away version is sometimes justified when it is not yet certain which technology the final product will be based on, or when you can use existing technology to complete the mock-up much more quickly.
• Once the screen mock-up has been validated, the next step is to turn it into a prototype capable of being given to users to test and validate. This usually takes between 1 and 3 months, depending on the scope of the prototype, whether the mock-up was a throw-away version or reusable, and whether there are interfaces to or from other systems. Except where absolutely necessary, interfaces should not form part of a prototype, because the complexity usually associated with building and testing them can significantly extend the overall delivery time.
Depending on the iterative method you end up using and the degree of user involvement, the number of iterations and the interval between them can vary quite significantly. For example, XP iterations typically run over 2-week cycles, Scrum favours monthly ‘sprints’, and the deliverables from a JAD session could result in two iterations over a total period of three months.
Validating the prototype
The project team then validates the prototype as meeting the minimum requirements necessary to be able to start using it and obtaining results. Since the project team already validated a mock-up during the first pass, there will usually not be many surprises, and the validation session will focus on running through the end-to-end processes with representative data, and ensuring there are no serious bugs. Note that the emphasis is on eliminating serious bugs, not all bugs – there will always be design errors and programming errors since software is not based on material components, which obey the laws of physics, but on processes, logic and human behaviour. Except for design changes and critical bugs, the feedback from this validation session will be incorporated into the next release, and not the one to be used for the pilot – see ‘Implementing subsequent releases’ further on.
Implementing a pilot
Once the prototype has been validated by the project team, it would be tempting to roll it out. After all, the users validated it, didn’t they? Yes, up to a point – by a panel of users within the controlled confines of a conference room (hence the term conference-room pilot, or proof-of-concept). The real world, as we all know, is entirely different. The next step is therefore an operational pilot, in which the prototype is actually used in a live
environment by selected users for a duration of at least 2–3 months. Anything less will yield insufficient results, as it usually takes around one month to sort out the inevitable technical glitches and for users to settle down with a new system. The main objectives of an operational pilot are to validate the business objectives and identify the real-world problems that only show up when used in a live environment. Users have to be motivated to use a new system and have to be able to answer the very self-centred question ‘What’s in it for me?’ Only a pilot will enable you to factually see whether they can actually benefit from the new system – or if process or system changes are going to be required to make this happen. Finally, a pilot should be kept small enough in scope, for two reasons. Firstly, to keep the momentum going and deliver results quickly, so people don’t have time to doubt. Remember, most IT projects either fail or fall way short of expectations – so they’ve heard it all before and only rapid results will ensure you win them over. Secondly, if the pilot initially does not meet the expected business objectives, or if real-world operational problems remain unresolved, then it should be financially and organizationally acceptable for the project team to suspend the pilot without heads necessarily rolling. Remember, a pilot by definition means testing the waters before committing oneself, so it should not be considered as an end in itself to be evaluated in terms of success or failure, but rather as a means to define the way forward. If you cannot do this, then whatever you’re running is not a pilot, but a phased implementation, since the clear implication is that whatever happens during the pilot is not going to change the rest of the project schedule. Such projects stand a high chance of ending up in damage-control mode from day one, and then either fail outright, or are suitably descoped in order to meet deadlines, regardless of the usefulness of the deliverables. When this happens, the chances are it will be someone else’s fault (the vendors, the consultants, the users, IT – take your pick), instead of it simply being a healthy learning experience in a new area. At the end of the day, the rationale behind a pilot is that it is better to understand a little than to misunderstand a lot. Finally, ESPs and software vendors tend not to like pilots, because not only can they significantly reduce the size of the upfront deal, they also introduce downstream uncertainty in the form of a go/no-go stage gate. In this respect there is a clear clash of agendas: ESPs and software vendors want to maximize upfront revenues and long-term commitment (fully understandable), whereas customers want to minimize risk with respect to sunk costs (perfectly understandable too). Both sides therefore need to be aware of each other’s agendas and plan accordingly. This clash of agendas once again underscores the essential
difference between a client–vendor relationship (in which each party has different goals) and a shared risk–reward partnership (in which both partners share the same goals).
Implementing subsequent releases
Implicit in the iterative approach is the reality of time boxing, or regular releases at predefined intervals. Initially this would be at 3–6 month intervals while the learning phase is at its peak, and later at 6–12 month intervals as processes stabilize. The objective is to bring out ‘good enough’, practical functionality at close and regular intervals, rather than ‘perfect’, documented functionality at some distant future date. In what is generally a non-contractual model, this is usually the only contractual part – albeit implicit – namely that IT will bring out new releases quickly. Recalcitrant users will be more tolerant of bugs and missing features if they know that things will be corrected 3–6 months down the line. Time boxing also enables a milestone-based approach in which the success of each release influences the future of the project in terms of incremental funding (funding approaches were discussed at the end of Chapter 4). Finally, time boxing is a practical way of dealing with the reality of user requirements being a moving target, for which it is next to impossible to nail down a realistic delivery date. You therefore build in predictability by defining a series of regularly recurring delivery dates instead, and varying the scope of each release accordingly.
In summary
The value of the iterative/prototyping approach lies as much in the cross-functional consensus between the various participants as in the deliverables themselves. This ensures that users are active participants with a personal stake in the final outcome, rather than passive customers with an eye on the contract. The prototyping approach also enables a company to better manage risk, because subsequent funding and system evolution (and not ‘maintenance’, a term we will deal with at the end of the chapter) will be based on actual usage and real-world business benefits, rather than on the documented requirements of some grand design. Finally, the prototyping approach redefines success and failure. The sterile contractual delivery of documented requirements within time and budget is replaced by an outcomes-based approach in which human behaviour and processes are mapped to systems as part of an ongoing organizational learning experience. Prototyping recognizes that software development is essentially a design process, which can account for up to 80% of the total
effort required; construction is only 20% (building software is cheap – all you have to do is compile the code). These ratios are reversed in the construction industry, in which it is the construction effort which consumes the lion’s share of the work.
Why prototyping has never become mainstream
Prototyping in various forms first emerged as long ago as the 1980s and has been successfully implemented in many companies around the world. For examples of real-world successes, please refer to an article by Harvard Professor Rob Austin entitled ‘No Crystal Ball for IT’, and the case studies in ‘The CRM Project Management Handbook’ (see ‘Further reading’ at the end of Chapter 9). However, prototyping success stories have generally been more the result of individual initiatives than part of any official policy at senior IT and business level. My own case is representative of this: every project I’ve ever managed in my IT career, in multiple functional areas, countries and industry sectors, with solutions bought or built, was based on the iterative approach. I have never once asked a business user to sign off a contractual document – and I have delivered enough successful projects to see no reason to change my methods. However, the very same organizations were dominantly waterfall, and I worked alongside other project managers who had only ever worked using the waterfall approach. The CIOs or directors we all reported to were too busy dealing with fire-fighting and trying to square the circle of unlimited demand and limited resources to even care how their senior managers were running their particular application area, as long as they obtained results. Let us now examine the main reasons why prototyping has never become mainstream, which will serve to underline the enormous challenges that need to be overcome in order to change this.
It represents revolution, rather than evolution
Implementing organizational change is always a challenge. Evolutionary change, in which the required behaviour remains well anchored to familiar frames of reference, is usually easier to accomplish than revolutionary change, in which the required behaviour represents a more or less clean break with – sometimes even a rejection of – the old ways of working. The bad news is that there’s nothing evolutionary about prototyping when compared to the traditional waterfall method. It represents a decisive break with sacred cows like client–vendor relationships, signed-off requirements and success defined by delivery to spec, time and budget. Everything is turned on its head – in short, it represents a revolution if ever there was one.
By definition, revolutionary change can only be implemented at a micro level – and even then only when it does not threaten the existing order. By the time it eventually moves to macro level, it is no longer revolution, but evolution, since it will have followed the process outlined in the quote at the start of Chapter 1.
It combines risk-taking and trust as integral parts of the job
Whatever its disadvantages, the waterfall method has at least one very clear advantage: it provides contractual safeguards which people can turn to if required. Since it is based on process (do this, then that, under such and such terms and conditions) by opposing parties (clients and vendors), it allows the players to apportion blame if things go wrong. Success or failure is therefore defined in terms of compliance with activities, processes and procedures – which may or may not produce a positive outcome, regardless of the theory that says it should. The iterative method, however, provides no contractual safeguards, since it is based on outcomes (i.e. workable results) by a single group of people (a joint user–IT team, or partnership). Success or failure is now defined in terms of a positive outcome by the team, with the underlying activities merely a means to that end. Needless to say, risk-taking is an integral part of the job, both for the IT project manager and for the business sponsor. They sink or swim together – it could be said that they are doomed to succeed. Finding two such risk-takers willing to form a partnership and able to motivate their own teams in new ways of working is a challenge. This can only be possible if there is a good personal and business relationship between the two, based on mutual trust, respect and a good track record. Such trust doesn’t fall out of the sky – it has to be built up over a period of time. Unfortunately, because of all the reasons mentioned up to now in this book, the relationship between IT and the business is not generally characterized by mutual trust, respect and a good track record – rather the opposite. So the available pool of candidates for heading up an iterative approach for a project is by definition rather small, both in IT and the business.
It requires people to focus on outcomes rather than due process
A job description for an IT project manager capable of embracing iterative methods might read as follows (take a deep breath): charismatic, good communications skills, facilitation skills, capable of motivating both IT and business users towards achievable goals, capable of handling conflicting goals and agendas, capable of building good relationships between
IT and the business, customer-oriented and focusing less on process and procedures and more on outcomes and results. When all is said and done, the successful iterative project manager is one who achieves a positive outcome in terms of results, with everything else merely a means to that end. If we now look at the job description for a ‘standard’ (i.e. waterfall) project manager, the main requirement would be to be able to successfully manage a traditional client–vendor relationship by ensuring compliance with a set of processes and procedures – preferably based on standards and methodologies. While all of the skills required for the iterative project manager job description above would clearly be useful, they are ultimately nice-to-haves, because at the end of the day, the successful waterfall project manager is one who ensures compliance by both parties with processes and procedures. If he also manages to generate a positive outcome in terms of workable results and customer satisfaction, then great. If not, then his salvation will rely on how well he can show that he complied with the officially approved processes and procedures. Needless to say, not only do few IT departments have managers with iterative skills roaming the corridors, there usually isn’t even a requirement to recruit or train any, since they don’t correspond to the dominant business model. A similar job description exercise could be conducted at the levels of business analyst and software developer, with the same results, namely that the people occupying these posts are, in the vast majority of cases, from the waterfall school. And not out of choice, but out of tradition – most would find the idea of not ‘working to spec’ quite alien. Note that this is also true for IT’s business customers, who have also been brought up to accept that when it comes to systems, their role in life is to get the specs right, sign them off and then go back to their day jobs.
It requires workshop facilitation skills that IT usually doesn’t have
Few IT departments have people trained in the facilitation skills necessary to host the workshops required to define detailed requirements based on process and data modelling as shown in Figures 5.3 and 5.4. Note that this does not necessarily imply that someone in IT actually runs the sessions, only that he/she is capable of organizing and facilitating them, and understanding the deliverables to be able to make use of them. Workshops are usually run either by experienced outside consultants, or by specially trained internal consultants who perform such workshops not just for IT but for other areas of the company as well. The reason why few IT departments have people with facilitation skills is that the waterfall method relies mainly on interviews and meetings (which will generate written material which can later be itemized and signed off), rather than on interactive workshops.
It requires modelling skills that IT usually doesn’t have
For historic reasons which are beyond the scope of this book, most IT staff don’t possess the skills that would enable them to do process and data modelling, which are the main pre-requisites for prototyping. Both process and data models arise out of interactive workshop sessions, as explained earlier in this chapter. Needless to say, your average business users probably wouldn’t recognize a process or data model if it hit them in the face – so expecting them to be able to come to IT with this essential information is wishful thinking. Consequently it is up to IT to obtain it by asking the right questions and drawing the right conclusions – which is where the facilitation skills in the previous section come into the picture. This skills deficit makes it very difficult for prototyping to naturally take hold in an IT department, because process modelling is the foundation of the iterative approach.
In summary
A combination of the above reasons explains why prototyping, despite its potential and track record where successfully implemented, has never become mainstream, and remains to this day the exception that proves the rule. It generally remains a fringe activity associated more with particular individuals than with any official IT policy. Apart from firm believers and practitioners who’ve seen the benefits first hand and wouldn’t dream of returning to the waterfall approach, IT usually only turns to it when the inability of the waterfall approach to deliver is so manifest that there is no alternative but to try out prototyping on an experimental basis. For a good discussion on iterative development, please see the article on www.cio.com entitled ‘How Agile Development Can Lead to Better Results and Technology-Business Alignment’, by Thomas Wailgum (see ‘Further reading’ at the end of Chapter 9). This article highlights a rather disturbing reality. Quoting from the 2006 ‘State of Agile Development’ survey by The Agile Alliance and Version One, it points out that only 29% of traditional waterfall projects were considered ‘somewhat successful’ or ‘very successful’ – as opposed to 81% for agile projects. But despite these benefits, the adoption of agile development in organizations remains low, as evidenced by a Forrester Research survey entitled ‘Enterprise Agile Adoption in 2006’, which showed that only 17% of North American and European enterprises use agile development processes. That organizations seem unable to embrace iterative development despite factual proof of its benefits simply serves to underscore the enormous challenges explained in this section.
Is prototyping the answer to everything?
Inevitably, when faced with two radically different approaches to doing something, each with its proponents, rational discussion on pros and cons usually gives way to evangelism and defensive posturing. Waterfall vs prototyping is no exception. Which invites the natural question, ‘Is prototyping the answer to everything?’.
Prototyping generally works well when performance and reliability are not critical on day one (ultimately performance and reliability are always important over time), and when ‘good enough’ today is better than ‘perfect’ tomorrow. Also, security and safety would not be key requirements, since by definition these cannot be simply ‘good enough’. A typical example would be internal (i.e. non-customer-facing) CRM applications for sales and marketing. Not surprisingly, prototyping generally works less well when performance, reliability, security and safety are critical right from day one. Such systems are necessarily more quality-driven than date-driven. Typical examples would be customer-facing systems, and applications for production and manufacturing. But even then, this needs to be qualified. It doesn’t mean that prototyping is not applicable: it could still be used, for example as a first phase prior to using the traditional waterfall method.
A key factor in favour of prototyping concerns the dominant nature of applications being developed (or packages being bought) today. The first wave of ‘back office’ systems, like accounting, finance and production planning, rested on the familiar grounds of core business processes. They were more or less predictable in terms of what they were supposed to do and the benefits they were supposed to deliver. When the ‘front office’ became the focus of attention towards the end of the last century, we already began to see a shift towards more interaction and collaboration with customers and partners, resulting in more dynamic and ad hoc processes which were harder to define upfront, both in terms of requirements and in terms of benefits. Today and tomorrow, in an era of globalization, aggressive competition, increased customer choice and innovative Internet-based services, more and more systems tend to represent new concepts and different ways of doing things. In short, as the focus shifts from predictable, non-differentiating processes to unpredictable, differentiating processes, it becomes more and more difficult to visualize the desired result or outcome. Such systems won’t result from some grand design, suitably signed off and developed to spec, but rather will grow out of experimentation, trial and error. This environment lends itself naturally to prototyping and the iterative approach – in fact, one can even go as far as saying that the waterfall method is particularly bad in this type of setting. And even
when requirements for performance, reliability and security would tend to favour the traditional waterfall method, much shorter cycle times can be achieved by combining the two. For example, one could use the iterative approach to better understand requirements and get some real-world feedback, and then use those results as input to a more traditional waterfall approach. So, in reply to the question ‘Is prototyping the answer to everything?’, we could say – ‘Yes, a lot of the time’. While no methodology under the sun can address any and all IT projects, prototyping should clearly be the dominant approach, and be applied at least 2/3 of the time. Note that one could also ask the question, ‘Is the waterfall approach the answer to everything?’, to which the answer must surely be a resounding ‘No!’. And yet, it remains the dominant approach in IT departments today (dominant as in 80–100%).
Project critical success factors
The above debate on the waterfall vs the iterative approach should not give the impression that project success is simply a question of using one preferred approach over another – if only things were that simple. As any article, book or study on the subject will show, there are other equally important critical success factors, ranging from active executive sponsorship and a realistic business case to obtaining user buy-in and effectively managing organizational change. Which development methodology you use is but one of a number of things you’ve got to get right in order to deliver workable results. For a more detailed look at across-the-board project critical success factors, including a 40-question project risk analysis covering subjects from project definition and organizational politics to the balance of permanent staff vs contractors, please refer to Chapter 12 in ‘The CRM Project Management Handbook’ (see ‘Further Reading’ at the end of Chapter 9).
Maintenance – letting go of the M-word
Outside of the IT industry, maintenance refers to the upkeep of finished products to ensure they continue to work properly, don’t jeopardize safety and don’t age prematurely. For example, cars are serviced every so many miles or km, houses and buildings are repainted and cleaned at regular intervals and commercial airplanes are stripped to an unrecognizable state every few years for an exhaustive inspection of the complete mechanical structure. The key words are ‘upkeep’ and ‘finished product’. Maintenance is not about changing the product or correcting its shortcomings – these would be called
enhancements, e.g. renovating the attic to turn it into a bedroom, or building a stretched version of a commercial airliner to accommodate more passengers. As explained earlier in this chapter, the traditional IT business model uses the term maintenance in a negative sense to refer to corrections and rework once the first version has been delivered. So the maintenance backlog in any IT organization is the sum of all the projects which made it to production, but were somehow unsuccessful or incomplete, and subsequently have to be revisited. Under the new model, however, it is an accepted fact of life that it is impossible to correctly and fully specify business requirements for software, so there will always be ‘rework’ and ‘corrections’. This will be based on a combination of planned and unplanned demand (the distinction between the two was discussed in the previous chapter). This will result in a regular stream of new releases and versions, whose frequency and content will gradually decrease over time as processes stabilize and become institutionalized. There would be no more ‘maintenance’ as such, and whatever you wanted to call it (ongoing releases or versions would be just fine) it would certainly not be viewed in a negative light.
A CRM centre of excellence at a global telco provides a real-world example of a company that adopted this approach. This team was responsible for providing solutions to three BUs in over 15 locations in Europe and Asia Pacific who were selling voice, data and Internet services to customers ranging from SMEs (small and medium enterprises) to global accounts. The rapidly growing business (a new office opened somewhere in the world every three months) plus regular reorganizations at national and global level meant that this group had to bring out new versions every 3–6 months to keep pace. No CRM solution at the time covered the complete business, so there was a lot of customization, which explained why the company had set up a centre of excellence. They used the iterative approach described in this chapter, and the distinction of project vs maintenance did not even exist. There was no such thing as a maintenance team, only one category of software developers regularly bringing out new versions to meet evolving business requirements.
Letting go of the M-word would also eliminate the sterile – and dangerous – comparison of new projects and maintenance, as represented by the famous 80/20 ratio. Sterile because ultimately all IT effort ends up supporting the business, otherwise they wouldn’t be spending the money in the first place. Dangerous because it gives the impression to the CEO and CFO that only new projects (which represent on average 20% of the IT budget) are noble and have the potential to deliver business benefits, and that maintenance and operations (the remaining 80%, also called base spend) are somehow a necessary evil, which represents money that could be spent elsewhere. As if to confirm this
negative view of things, the comparison is often represented in the form of an iceberg, with new projects as the visible part, and the maintenance hidden in the murky depths below. At a CIO event in London, the ubiquitous iceberg slide inevitably appeared, and one participant said that he would never ever show that vision of new projects vs maintenance to his CEO, because it could give the impression that he’s somehow wasting 80% of the IT budget. Finally, the categorization of IT spend into this 80/20 ‘good’ project investment vs ‘bad’ base spend implicitly results in pressure to reduce the base spend. At times some companies even enforce a zero increase in IT base spend – while at the same time investing in new projects! Since today’s projects are tomorrow’s applications – in other words, tomorrow’s base spend – the only way the CIO can pull off this financial balancing act is to squeeze out cost savings from existing applications – and when there are no more of these to be had, to reduce quality of service or cut corners on infrastructure renewal investment. As we will see later in Chapter 7 on financials, with proper asset management, which focuses on discrete applications from a cost/benefit perspective, budgetary efforts can be more objectively focused on tangibles, instead of on a subjective breakdown of ‘good’ vs ‘bad’ spend.
Delivery and implementation
Once a software solution has been appropriately tested (a subject in its own right and beyond the scope of this book) and validated as meeting business expectations, IT delivers and implements it. This means installing, activating and initialising it with live data. In parallel, users are trained and afterwards ‘go live’ on the system, which is now in production. Later on, once processes have matured and the application has stabilized, jointly agreed service levels can be put in place via an SLA, or service level agreement. Implementing software is a fairly predictable phase with clearly defined ‘no surprises’ tasks whose success is dependent on a combination of user buy-in, a working product that does what it’s supposed to do, appropriate training in terms of duration and content and finally basic project management.
Service and support
Once a software solution has been successfully implemented, it moves to the phase of service/support, during which the IT department (or increasingly, an outsourcer) runs the solution from a production and service standpoint, ensuring availability, response times and support.
Like the implementation phase above, servicing and supporting an application is a fairly predictable process, with clearly defined ‘no surprises’ tasks which have two basic objectives: • Ensure ongoing product/service usage and provide timely response to incidents, enquiries and requests (sometimes against the pre-defined criteria of an SLA); • Not just be content with ‘answering the phone’ and meeting service levels, but also to close the loop with the development teams by analyzing customer feedback in terms of trends and frequently asked questions (FAQs) and channelling the findings to them as input for subsequent releases.
6
Monitoring Costs and Benefits
When you’ve got them by the wallet, their hearts and minds will follow. (Fern Naito, quoted in MacHale, 1997)
Monitoring costs and benefits for traditional IT activities
If a stock you invested in dropped 20% overnight, you or your stockbroker would definitely be aware of it the very next day, and after an analysis of the situation, might take action to sell or monitor closely. If, however, the expected benefits of an IT project dropped by 20% compared to original expectations (over a period of a few months, never mind overnight), the business sponsor probably wouldn’t even be aware of it the next day – and probably never, because benefit monitoring of IT investments is rarely carried out, or because the numbers are fudged to ensure that the original expectations are ‘achieved’, or simply because the bad news never reaches him. The same is not true of the cost side though; any significant cost increases would definitely be noticed, because a budget has ownership and accountability, and delivering to budget is viewed as part of a contractual commitment. It is therefore usually monitored very closely. Finally, putting costs and benefits together and performing a cost–benefit analysis, both during the project and after it has been delivered, is hardly ever carried out. At best some companies do a post-implementation review after the fact, but more for internal IT best practice than for any financial reasons. As for companies halting or suspending a project as part of a rational ongoing cost–benefit analysis – well, that hardly ever happens: projects are usually only stopped after they have spiralled out of control and the damage is too big to ignore. The fact that such a situation can exist in the 21st century, with companies routinely spending 2–10% of annual revenue and up to 50% of capital investment on IT projects and then failing to monitor their investment performance, is probably one of the best indicators of a failed business model.
There are two reasons for this. The first one is rooted in the free lunch aspect of the traditional model, whereby BUs are free to launch projects which will be funded out of a central IT budget, with ineffectual pricing or chargebacks. This means there is no financial incentive to carry out a serious cost–benefit analysis, which ends up in a subjective form that tends to magnify the benefits and reduce the costs, with the main objective being to launch the project. Once the project is under way, the original business case is more or less forgotten. IT then gets on with the job of delivery and the business more or less keeps its fingers crossed that the original business benefits will materialize as promised – even though the underlying business requirements, costs and benefits are changing all the time. So since nobody ultimately owns the benefits side of the equation – as in being accountable and responsible for its delivery – it gets left by the wayside (see ‘In search of a pizza parlour manager’ in Chapter 2, p 22). The second reason, less obvious, is because the business likens IT to the construction industry, in which you wait until actual delivery in order to check for compliance to spec, and only after would you do a cost–benefit analysis. Benefits are unlikely to change while a house is being built, so why would they change while software is being built? (We’ll see why further on). Let us now compare this with investment performance monitoring in the non-IT business world, and then propose an alternative approach as part of the new model.
Monitoring costs and benefits for business (non-IT) activities
As we saw in Chapter 2, under the normal business model for non-IT activity, purchase costs normally come out of the client’s pocket. Once she starts using the product or service, she will therefore monitor and measure actual costs and benefits in an attempt to verify her initial cost–benefit analysis. This could be explicit (e.g. ongoing measurements or a one-time evaluation) or implicit (e.g. ‘experiencing’ the product or service on a day-to-day basis), but as long as the initial investment represents money that could have been used for something else, a conscious effort will be made to answer the question ‘Did I get a good deal out of this?’ The answer will ultimately become a pre-requisite for deciding whether to continue using it, or to stop and cut one’s losses and seek alternative solutions – or, depending on the financial impact, to simply absorb the losses and chalk it up to experience. Sometimes, though, it is not possible to back out of an investment that is not yielding the expected benefits. In the B-to-C world we can usually backtrack and sell the original goods or services (e.g. car, stock or insurance), with the resulting egg on face and hole in pocket not usually visible to others. In the B-to-B world, however, things are more
complex, because of a combination of the sums of money involved, undesirable external visibility, organizational politics and the operational impact of withdrawing the product or service. All this does not mean that no cost–benefit analysis takes place in a B-to-B environment, only that it can sometimes take much longer because of the political and organizational requirements to save face (and make it through the next reorganization or election...). Finally, non-IT products and services lend themselves to fairly obvious performance monitoring. In the B-to-C world, for example, daily usage of the product or service allows you to form a conclusion on whether you spent your money usefully or not. In the B-to-B world, costs can be easily measured against tangible and easy-to-recognize benefits like throughput, number of units produced or office space leased. In summary, performance monitoring of investments in the non-IT world is fairly common, expected and based on readily understandable costs and benefits – which unfortunately is not the case for IT investments, as we shall now see.
Monitoring costs and benefits – new model
A rational approach to monitoring IT costs and benefits will necessarily be based on the fundamental differences between building software and building houses. As we have already seen, the construction industry works primarily based on standard products, components and categories of labour, which enable reasonably accurate cost estimates. Costs are tracked against budget during construction, and a full cost analysis is done after delivery (benefits are unlikely to change during construction). For software, however, we now know that both costs and benefits can change significantly, not just during the project phase, but also after delivery, and in ways that are not always possible to foresee. Here are some real-world examples:
• While building a new customer service system at a telco, it was found that the interface to the order and billing systems had to be delayed for at least a year because of data quality problems in reconciling customer information. This impacted one of the main intended benefits of the new system, which was to enable call centre agents to have access to customer order and billing history while handling a call, and thereby be more responsive and shorten call handling time. The discovery about the poor data was made well into the project, as the initial assumption based on the information available at the time was that this interface would not pose any significant problems. The project nonetheless continued through to completion without the interface, even though the incomplete solution did not deliver much added value compared to the one it was replacing.
• After delivery, an SFA system at another telco was found to be acceptable from both a business and a technical perspective, satisfying both the project team and the executive sponsor. However, the benefits were significantly impacted because sales reps were uncomfortable with management visibility into their opportunities, so they decided to drag their feet during the first year of usage until they were able to acceptably control pipeline visibility. This was a case of insufficient organizational change management to accompany the introduction of new technology and its impact on people and processes. • In the case study in the last chapter, the new customer service system at a pharmaceutical company exceeded all expectations in terms of business objectives and customer satisfaction – but nonetheless did not find favour with the sales force because it feared that this new channel would threaten their livelihood by replacing sales reps as the main source of information for physicians. This factor became apparent early on in the project, thereby impacting benefits even before the go-live date. The project nonetheless continued through to completion, the reasoning being that it would not impact the key objective of the call centre, which was to answer medical enquiries. As for the previous example above, this was a case of insufficient organizational change management to accompany the introduction of new technology and its impact on people and processes. In the real world therefore, IT cost–benefit analysis should start right from the moment the project is launched, and run throughout the life cycle, not just during the project phase but also after delivery. This is shown in Figure 6.1. Let’s see how this would work in practice.
Ownership and accountability for costs and benefits
As mentioned at the start of this chapter, while budget ownership and accountability is usually pretty clear, benefits ownership is virtually non-existent. Budget ownership can be open to discussion; usually it is owned by IT, but it can also be owned by the business, or eventually find its way back to the business in the form of cross-charging.
[Figure 6.1 Monitoring costs and benefits (new model): an initial cost–benefit analysis accompanies ‘Define objectives and concept’, after which an ongoing cost–benefit analysis runs across ‘Define requirements’, ‘Buy/build’, ‘Deliver/implement’ and ‘Service/support’.]
As for benefits ownership, well, this is really a no-brainer – only the business can be responsible for defining and monitoring benefits realization. So if a new call centre, for example, is to achieve a first-call resolution rate of 80%, or increased revenues of 10% by cross- and up-selling, then IT cannot be held responsible for defining these numbers, and then delivering on them. Rather, IT is the ‘contractor’, ‘vendor’ or ‘partner’ that enables the business sponsor to achieve this. We will go into more detail concerning the ownership of costs and benefits in Chapter 7 on financials. At this stage all we need to know is that costs can be owned by either IT or the business, whereas benefits are definitely owned by the business.
Cost–benefit analysis during the life of a project
Most companies track project costs with respect to budget during monthly or quarterly investment committees or project review boards. The main objectives of these sessions are to see if the budget is under control and the project is on schedule. Very far behind, when treated at all, is risk analysis (discussed in Chapter 4). And finally, hardly a blip on the radar screen, is the ongoing monitoring of expected benefits and the associated cost–benefit analysis. Under the new business model, such meetings would go beyond just monitoring costs and schedules and include both risk analysis and benefits analysis. Risk analysis would usually be carried out by the IT project manager, and benefits analysis by the business sponsor.
This is, of course, easier said than done. Under the traditional model, the business sponsor will usually balk at anything that changes the schedule. He’ll therefore usually push back on risk. Business and organizational risk he’ll ‘assume’ – usually by assuming that it is under control. Technical risk he’ll consider an IT problem. Similarly for benefits, which he’ll assume will remain as stated in the original business case – even when, as sometimes happens, the IT project manager tries to show that they might have to be revised. By which time, driven by the adversarial nature of the traditional client–vendor relationship – and perhaps the business sponsor’s own agenda – CYA takes over and each side ensures it has a written record of the various exchanges.
Under the new model, however, both the IT project manager and the business sponsor would be incentivized to correctly monitor risk and benefits because, as we saw in the previous chapter, they are jointly responsible for the final outcome. The sum of all four components – costs, schedules, risks and benefits – would enable an ongoing cost–benefit analysis, which can then be compared to the original expectations. Based on the results, either the original expectations are revised to take into account the new reality, or
corrective action is taken in terms of funding, schedules and expectations. In the worst case, the project is put on hold, or even cancelled. If this happens, it should not always be viewed in a negative light (next section).
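To illustrate what such an ongoing cost–benefit check might look like in practice, here is a small Python sketch. The decision rules, tolerance and figures are assumptions made for the example, not a procedure prescribed by the book; the point is simply that at each review the current cost and benefit estimates are compared with the original business case, and a decision follows from the comparison.

def review_project(original, current, tolerance=0.2):
    """original/current: dicts with 'cost' and 'benefit' estimates in the same currency."""
    original_ratio = original["benefit"] / original["cost"]
    current_ratio = current["benefit"] / current["cost"]
    drift = (current_ratio - original_ratio) / original_ratio

    if current_ratio < 1.0:
        return "suspend or cancel: expected benefits no longer cover costs"
    if drift < -tolerance:
        return "revise expectations or take corrective action on funding and schedules"
    return "continue: still within tolerance of the original business case"

# A project approved at $1m of cost for $1.8m of expected benefits,
# now re-estimated at $1.3m of cost for $1.5m of benefits.
print(review_project({"cost": 1_000_000, "benefit": 1_800_000},
                     {"cost": 1_300_000, "benefit": 1_500_000}))

Because the IT project manager owns the risk picture and the business sponsor owns the benefits, both inputs to such a check come from people with a stake in the outcome rather than from a contract file.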
It is normal for costs and benefits to change!
Under the traditional model, the client’s requirements are literally enshrined in the sacrosanct SoR, and IT is contractually responsible for delivering on them, after which the business benefits are supposed to start rolling in. In addition, the business sponsor’s reputation is often on the line in terms of schedules, costs and benefits. Therefore any changes to these elements of the project equation would most likely be viewed in a negative light and actively resisted, ignored or swept under the carpet. Unfortunately – and this will come as no surprise to anyone who has managed IT projects – it is an organizational fact of life that a business sponsor will almost always prefer to descope a project in order to meet deadlines, regardless of the usefulness of the deliverables (which can always be suitably disguised), rather than to have useful deliverables three or six months later. In the construction industry this type of manoeuvre is of course impossible, since you can’t deliver a partially finished building and have the last few storeys delivered six months later!
Under the new business model, however, it is accepted that building systems bears little resemblance to building houses. The construction industry has hundreds of years of experience behind it: wood and stone might have given way to concrete and steel, but houses and buildings are still essentially composed of load-bearing structures, floors, walls and roofs. So builders can turn to this huge body of knowledge and experience to produce fairly accurate costs and schedules. The IT industry, however, uses new technologies with life cycles of 2–5 years. A developer therefore cannot turn to a manual entitled ‘The ultimate guide to estimating costs and schedules for Version 3 of Whizzbang Technology XXL+ which has been on the market for all of two years and is likely to be superseded by something new next year’. It is thus normal for requirements, schedules, costs, benefits and risk to change during the life of the project; this is the nature of the IT beast. Changes to costs and benefits should be viewed less in a negative light and more as part of a healthy organizational learning experience.
Portfolio performance monitoring
In Chapter 4 we saw how a portfolio-based approach helps to drive a rational decision-making and approvals process, resulting in investments which are well balanced across new projects and running and enhancing production systems. However, portfolio management
doesn’t end with prioritization and approval: once the corresponding projects have been launched and are under way, we also have to monitor their performance over time to ensure they are still meeting the initial portfolio objectives. Using again the personal investment analogy of Chapter 4, we would increase or decrease the weight of certain portfolio categories (e.g. stocks vs bonds) based on how well or poorly the corresponding investments are performing, changes in one’s personal situation (e.g. buying a house) or in the general business environment (e.g. a sharp dip in the stock market). In much the same way, we would monitor the projects in an IT portfolio, and increase or decrease the weight of certain portfolio categories based on their ongoing cost–benefit analysis, risks and changes to the general business environment. For example, new regulatory constraints might require further investments in regulatory projects or upgrades, to the detriment of some other portfolio category. Or a project whose cost–benefit analysis can no longer be justified (e.g. if the original business case is no longer valid) will be suspended, and the funding moved to another part of the portfolio, e.g. kicking off a new project request from the pipeline which has a more promising business case. So projects will be monitored, not just on an individual basis, but also as part of the portfolios in which they were defined during the approvals process.
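As a simple illustration of this kind of rebalancing, here is a Python sketch; the portfolio categories, project names and amounts are invented for the example and are not taken from the book.

# Hypothetical portfolio with three categories and their approved budgets.
portfolio = {
    "regulatory":     {"budget": 2_000_000, "projects": ["reporting upgrade"]},
    "revenue growth": {"budget": 3_000_000, "projects": ["CRM phase 2", "web store"]},
    "cost reduction": {"budget": 1_500_000, "projects": ["invoice automation"]},
}

def move_funding(portfolio, from_cat, to_cat, amount, reason):
    """Shift budget between categories, e.g. after suspending a project."""
    portfolio[from_cat]["budget"] -= amount
    portfolio[to_cat]["budget"] += amount
    print(f"Moved {amount:,} from '{from_cat}' to '{to_cat}': {reason}")

# A revenue-growth project whose business case is no longer valid is suspended,
# and the funding moves to a new regulatory constraint instead.
move_funding(portfolio, "revenue growth", "regulatory", 800_000,
             "original business case no longer valid; new regulation takes priority")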
Cost–benefit analysis after project delivery
We now know that correctly funding projects and delivering the first version of the resulting applications (usually within the space of a year) is but the first step of a very long journey (of 5–10 years). This means that the original project costs can be easily dwarfed by the ongoing lifetime costs – by a factor of five on average, which explains the infamous ‘iceberg ratio’ of 20% new projects to 80% keeping the lights on. This is vitally important to always keep at the back of one’s mind, because it is unlike most other areas of economic activity. For example, if I were to buy a car costing $1m (apparently, they do exist!) and were to keep it for 5 to 10 years, there is no way that I would end up spending up to $5m in running costs – not even if the price of oil hit $100 a barrel. If, however, I were to spend $1m on a new IT project, then the chances are I would end up spending another $5m over its lifetime to run and enhance it. The main conclusion here – besides the fact that a high-end Ferrari probably represents a better investment than the $1m project request pending approval by the investment committee – is that you’re not going to stop writing cheques once a project has been
delivered. If you are led to believe the contrary, then it is either by accident (i.e. a firm belief in the traditional business model that says it is so) or design (i.e. someone has an agenda whose outcome is based on initial project costs rather than on total lifetime costs). Ongoing monitoring of costs and benefits after initial project delivery is therefore a must, not just in order to understand one’s cost base, but also to measure the business benefits delivered over time. This will be better understood in the next chapter on financials when considering examples of managing applications like assets.
7
Financials
Not all things that can be counted count, and not all things that count can be counted. (Albert Einstein)
The main categories of IT costs
Let us review the activities behind IT projects and applications and summarize the various cost components. A project is delivered by a software development team (from an IT department or an ESP) that designs, develops (in-house development), configures (off-the-shelf software package) and implements the first release or version. Once in production, the resulting applications physically reside on infrastructure (comprising hardware, system software and networks) and are run and supported by an operations or production team. Subsequent releases or upgrades are then brought out at regular intervals to meet new or changing business requirements, usually by the same development team which produced the first release, and are then put into production to replace the previous one. This cycle continues up to the end of the useful life of the application, which can be anywhere from 5–10 years. This results in the following three broad cost categories for IT:
• PRODUCT DEVELOPMENT, which covers the costs to produce the first version (project phase) plus ongoing releases once in production (what the traditional business model calls ‘maintenance’);
• INFRASTRUCTURE, which covers the costs of the physical hardware, system software and networking on which the product runs;
• OPERATIONS, which refers to the day-to-day running and support costs of the finished product – or more specifically of the most current release of the product.
Not included in the above would be general IT overhead and indirect costs like staffing, career planning, training, analysis of emerging technologies, finance, budgeting and reporting.
Ownership of IT costs for the regulation of supply and demand
While there are various ways of regulating supply, ranging from improving resource productivity to smoothing out demand over the year, these are ultimately tinkering at the edges – at the end of the day, you regulate supply by increasing your resource capacity, period. So when it comes to supply and demand, the main requirement is to regulate demand, and that is what we will focus on. Given the discussions up to now on the free lunch syndrome and the lack of an adequate pricing model to help regulate demand in IT, it should come as no surprise that the business clients must bear all of the above cost categories associated with producing and running an application. As we saw in Chapter 2, this forces the client to carry out a real cost–benefit analysis ‘with teeth’, i.e. one with the decision-making power not just to approve a project but also to withhold or cancel its funding if it is not living up to expectations.
Who has the final say for IT investments?
It would be logical to assume that if the client ends up owning the IT budget – either directly at the outset, or indirectly through allocations or cross-charging – then he should have the final say in what that budget buys in terms of hardware, software and services. That assumption is correct – but subject to expert advice from a trusted advisor. Let’s look at some analogies outside of IT. Before the marketing department launches a major campaign, it will probably rely on expert advice from some outside consultancy or agency. Another example is in recruiting, in which HR will rely on a head-hunting firm or recruiting agency to do the screening and come up with a short list of candidates. Same thing for IT: just because the client owns the budget doesn’t mean he can decide on which software solutions to purchase without expert advice. This is all the more important for IT, in which non-specialists from the business would logically be more susceptible to influence from sources like vendors, consultants – or even magazine articles. The IT department is therefore the trusted advisor, who not only knows the ins and outs of software selection, but also the critical importance of architecture, and how any products – bought or built – have to be able to integrate into the enterprise architecture. Yesterday’s stand-alone applications have long since given way to integrated,
enterprise-wide applications which need to talk to each other, both logically, e.g. to share customer or order information, and physically, e.g. to be technically compatible at the bits and bytes level. Investment decisions also have to take into account the IT department’s resource base and skill sets, so that you don’t end up with a solution requiring skills that the IT department lacks. So while budget ownership by the client does give him the final say for IT purchases, it does not imply a passport to buy anything and everything without validation from a technology, architecture and resource perspective. The IT department as trusted advisor would normally provide this type of input as part of the qualification process required to build the business case, mainly at the level of cost estimates, technology and architecture alignment and risk management (discussed in Chapter 4, p 43). Ultimately it is the ‘investment committee’ or ‘project review board’ (defined in the next chapter on roles and responsibilities) which is responsible for approving IT purchases, with ‘trusted advisor’ input coming from the IT department.
Allocations vs cross-charging
Once you’ve decided which cost categories are to be borne by the client, you then have to put in place a mechanism for transferring these costs from IT to the business. This can be done through allocations or cross-charging.
Allocations
Allocations take the costs of shared physical resources (usually hardware and infrastructure) and spread them over departments or BUs based on one or more criteria, like headcount, revenue or actual usage. While actual usage is the most objective criterion, it can be taken to extremes and become horrendously expensive and complex in practice, so it needs to be entered into with caution. Allocations are usually rolled into BU overhead at the start of the financial year, with a year-end adjustment to take into account actuals. Besides the challenges of finding transparent and objective criteria – as opposed to a voodoo formula – allocations have the main disadvantage of being relatively invisible to those whose behaviour they are supposed to influence. Because they are buried in annual overhead, they are usually not adequately communicated to the actual application users who make requests for IT products and services – indeed, these users might not even be aware that their department is paying for IT. That is why it makes better sense to do cost allocations at as granular a level as possible (ideally down to the departmental and application level), so that the resulting costs have a chance of influencing user behaviour.
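As a simple illustration of the mechanics – a sketch only, with invented department names and figures – the following shows how the same shared infrastructure cost produces quite different numbers depending on whether the allocation criterion is headcount or metered usage, which is precisely why the criterion needs to be transparent and agreed.

```python
def allocate(shared_cost: float, weights: dict[str, float]) -> dict[str, float]:
    """Spread a shared cost over BUs in proportion to a chosen criterion
    (headcount, revenue, measured usage...)."""
    total = sum(weights.values())
    return {bu: shared_cost * weight / total for bu, weight in weights.items()}

server_farm_cost = 1_200_000  # annual cost of a shared server farm ($), illustrative

by_headcount = allocate(server_farm_cost, {"Sales": 300, "Marketing": 120, "Finance": 80})
by_usage = allocate(server_farm_cost, {"Sales": 55, "Marketing": 30, "Finance": 15})  # % of metered usage

for bu in by_headcount:
    print(f"{bu:10}  by headcount: ${by_headcount[bu]:>9,.0f}   by usage: ${by_usage[bu]:>9,.0f}")
```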
Cross-charging
Cross-charging takes the costs of human resources (internal IT staff or external contractors and ESP staff) working on projects and applications and charges them directly to the customer based on their cost rate. This can optionally be taken one step further and linked to specific activities using activity-based costing (ABC). Cross-charging is usually done on a monthly basis. Note that when an IT department is a separate business entity or subsidiary which provides services to BUs, as opposed to the more common internal department which is part of the same BU, then the term invoicing rather than cross-charging is used. Such BUs would receive monthly invoices from IT in exactly the same way as they receive monthly invoices from vendors or ESPs. Apart from that, there is no difference between cross-charging and invoicing. Cross-charges have the advantage of visibility and regularity – they land on your desk every month and are thus more likely to generate the desired behaviour necessary to regulate demand. This is especially true if – and there are tools which allow you to do this – the granularity goes down to the actual user and the associated work, instead of just rolling up everything to the departmental or BU level (‘Hey Joe, that new report you asked for last month has ended up costing quite a bit – was it really that important...?’).
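The mechanics of a monthly cross-charge are equally simple in principle. Here is a minimal sketch – the people, rates and hours are invented – showing how approved time entries, valued at each person’s cost rate, roll up into a charge per BU and per work item, which is exactly the granularity that makes the ‘Hey Joe’ conversation possible.

```python
from collections import defaultdict

# Hypothetical approved time entries for one month: (person, BU client, work item, hours)
time_entries = [
    ("A. Analyst",   "Marketing", "Project ABC",     60),
    ("B. Developer", "Marketing", "Project ABC",     120),
    ("C. Support",   "Sales",     "Application XYZ", 40),
]

# Hypothetical fully loaded cost rates per hour ($)
cost_rates = {"A. Analyst": 95, "B. Developer": 85, "C. Support": 70}

def monthly_cross_charges(entries, rates):
    """Charge people costs directly to the BU client, itemized by work item."""
    charges = defaultdict(float)
    for person, bu, work_item, hours in entries:
        charges[(bu, work_item)] += hours * rates[person]
    return charges

for (bu, work_item), amount in monthly_cross_charges(time_entries, cost_rates).items():
    print(f"{bu:10} {work_item:16} ${amount:,.0f}")
```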
Capturing costs for allocations and cross-charging
So how would we capture costs for cross-charging and allocations? Across our three cost categories (product development, infrastructure and operations), there are three main cost components: hardware, software and people.
• HARDWARE COSTS: Hardware costs (PCs, servers, network infrastructure...) are easy to monitor based on the initial price tag and annual depreciation. Sometimes expensive hardware like high-end servers or mainframes is used by a single application, which makes things easy. But sometimes it can be shared by multiple applications, which complicates things somewhat because allocation rules then have to be worked out, as explained above. Network infrastructure is almost always used by multiple applications, but fortunately there exist network monitoring tools which analyze network traffic by application and hardware device. The challenge is to strike the right balance between the desired result (reasonably fair and understandable allocations) and the costs and complexity of obtaining actual, ‘metered’ usage.
• SOFTWARE LICENCES: This would cover the costs of packaged application software and the associated annual maintenance. This can be relatively easy to calculate if, for example, a new application is based on 100 licences of some specialty lead management software for direct marketing. If, however, the 100 licences represent an add-on to
the marketing module of a large enterprise-wide application like SAP or Oracle which is also running your company’s order management and financials, then the price could be heavily discounted – or even given away ‘free’. In such cases, calculating the software cost to allocate would require some creativity and would necessarily be an average which took into account all functional areas using the various modules of SAP, Oracle or whatever.
• PEOPLE COSTS: People costs cover analysts, developers, testers, trainers, and operations and support staff. These costs can easily be captured by simple time and expense entry against standard project and application codes, e.g. software development against project ABC, or operations and support against application XYZ. Once such time and expenses are approved, they can then be automatically invoiced at month-end or quarter-end to the required BU client. As for the potential organizational difficulties of getting people to enter their time (‘big brother’ looking over one’s shoulder and unwelcome visibility into one’s work), this should become less of an issue when it becomes clear that the objective of time entry is to cross-charge the client. Note that if the same people were to work for an ESP, they would have no problem filling out time sheets, since they would be operating under a client–vendor business model which requires it for billing! Well, it would be the same in IT under the new business model – if properly communicated, it should be understood and accepted.
Finally, the various cost components need to be presented in an understandable format to the payer. A departmental or BU manager is not interested in the technical details of hardware usage, network traffic or the time spent by various categories of people doing analysis, coding or testing. Costs should either be rolled up into the three understandable categories of product development, infrastructure and operations, or bundled into understandable services which the payer can relate to, e.g. ‘equip a new hire with a laptop and a mobile phone’ or ‘produce a custom report’.
In summary, cost allocations and cross-charging can be relatively complex – and emotionally charged – subjects for BUs and IT, and need to be entered into with caution, with an initial overemphasis on simplicity and buy-in as opposed to bean counting and the potential for rejection. What you’re ultimately looking for is not strict financial accuracy, but workable results in terms of regulating demand through adequate pricing.
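Pulling the above together, here is a sketch – with invented cost lines – of the kind of roll-up that turns raw cost components into the three payer-friendly categories described earlier, rather than presenting the BU manager with technical detail.

```python
from collections import defaultdict

# Hypothetical raw cost lines captured for one BU in a given month: (description, category, amount in $)
cost_lines = [
    ("Developer time - release 2.1", "Product development", 18_000),
    ("Server depreciation share",    "Infrastructure",       4_500),
    ("Network usage share",          "Infrastructure",       1_200),
    ("Support desk time",            "Operations",           6_300),
]

def rollup(lines):
    """Present costs to the payer by understandable category, not by technical detail."""
    summary = defaultdict(float)
    for _, category, amount in lines:
        summary[category] += amount
    return summary

for category, amount in rollup(cost_lines).items():
    print(f"{category:20} ${amount:,.0f}")
```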
Benefits as part of the P&L and annual planning
Benefits should appear in the P&L (when direct financial benefits are possible) and as part of the annual planning process; otherwise they are just subjective numbers that can be inflated to produce a positive cost–benefit analysis to enable a project launch. So if, for
example, part of a business case is to reduce costs by 30%, or to increase revenue by 10% over a number of years, then those benefits – or the incremental parts thereof – should be reflected in the annual budget from the next year on. If not, then we might as well all go home, because the numbers ultimately mean nothing and are merely a means to an organizational/political end, which is getting a project launched. The regulation of demand in a company will receive an exponential boost the day the CFO asks a BU head why there isn’t a line item for the benefits which were supposed to accrue from that too-good-to-be-true business case which she put together last year for that IT project... Apparently, Federal Express does just that. In his book ‘Managing Information Technology for Business Value’ (see ‘Further reading’ in Chapter 9), Martin Curley writes that ‘Federal Express stresses IT accountability in its IT planning process. When a business division signs up for an IT investment, that division states the expected impact explicitly – either in terms of revenue increase or cost savings, and these figures are then integrated into both business and IT operating budgets for the ensuing years’. (Curley, 2004)
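To illustrate the arithmetic – with purely invented figures, and without claiming to reproduce the Federal Express process – a business case committing to a 10% revenue increase phased in over three years would translate into explicit budget lines along these lines:

```python
baseline_revenue = 50_000_000  # current annual revenue of the BU ($), illustrative
committed_uplift = 0.10        # total revenue increase promised in the business case
phasing = [0.3, 0.6, 1.0]      # share of the uplift expected to be realized by each year

for year, share in enumerate(phasing, start=1):
    budget_line = baseline_revenue * committed_uplift * share
    print(f"Year {year}: revenue uplift carried in the operating budget = ${budget_line:,.0f}")
```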
Ongoing cost–benefit analysis for applications
Once a project is delivered, the resulting applications are essentially financial assets, whose performance in terms of costs and benefits will be monitored by the application manager (defined in the next chapter on roles and responsibilities) to realize the business case. It is important at this stage to differentiate between the financial assets just described, and the underlying technology assets like hardware and software, which might sit in a configuration management database (CMDB) or an IT asset management (ITAM) system. The former is the visible end product that the business client sees and uses and from which she will derive business benefit. The latter is a means to that end, and is often invisible to the client. Which makes sense – after all, as important as these physical assets are in terms of what they do, as far as the client is concerned they are ultimately ‘part of the plumbing’. There is still an unfortunate tendency in many IT departments to view assets mainly from their traditional, ‘physical things’ perspective. Yes, you have to be able to manage and track these physical assets in order to manage your cost base and your service levels, but beyond that they have no intrinsic business value. However, order entry application XYZ which is supposed to handle a throughput of so many orders at a given unit cost – now that is an asset with intrinsic business value. An analogy is your car, which can be considered as having ‘business value’ to you. However, even though the wheels and engine components might be considered physical assets (not that you’d want to track them anyway), they wouldn’t figure on your radar screen when considering the ‘business value’ your car provides you.
Let’s now show via two examples how an application manager would track costs and benefits over time from the perspective of a financial asset.
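As a rough sketch of what such tracking might look like in practice – the structure and figures below are invented for illustration and are not taken from the examples that follow – an application manager could hold something as simple as an annual record of cost components and operational benefit metrics per application:

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationAsset:
    name: str
    # annual costs in k$, one entry per year of life, split by cost category
    product_development: list = field(default_factory=list)
    operations: list = field(default_factory=list)
    infrastructure: list = field(default_factory=list)
    # benefit metrics tracked in operational terms, one series per metric
    benefits: dict = field(default_factory=dict)

    def total_cost(self, year: int) -> float:
        return (self.product_development[year] + self.operations[year]
                + self.infrastructure[year])

app = ApplicationAsset(
    name="Order entry XYZ",
    product_development=[1500, 600, 400, 300],
    operations=[200, 500, 450, 400],
    infrastructure=[150, 400, 380, 360],
    benefits={"unit cost per order ($)": [8.0, 7.0, 6.0, 5.5],
              "order cycle time (days)": [6, 5, 4, 3]},
)

for year in range(4):
    unit_cost = app.benefits["unit cost per order ($)"][year]
    print(f"Year {year + 1}: total cost = {app.total_cost(year):,.0f}k$, "
          f"unit cost per order = ${unit_cost}")
```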
Application 1
The first example shows how the cost components of Application 1 (top of Figure 7.1) change over time:
• During the first year product development costs were naturally high, but operations and infrastructure costs were relatively low because the first year was a pilot which focused on a limited number of users.
• In year 2, both infrastructure and support costs increased significantly as the product was rolled out across multiple BUs.
• In years 3 and 4, both product development and support (part of operations) costs decreased as processes stabilized and gradually became institutionalized.
• In year 5, the acquisition of another company significantly increased the number of users of the application, which in addition needed to be enhanced to take into account the functional requirements of the new company. This resulted in a spike in total costs, which almost doubled compared to the previous year.
• In year 6, the acquisition was fully absorbed and the application was once again stable, leading to a decision in year 7 to outsource it, resulting in lower costs over the final years.
[Figure 7.1 Application 1: annual costs in k$ by cost component (product development, operations, infrastructure) and the two benefit metrics, charted over years 1–9.]
On the benefits side (bottom of Figure 7.1), assuming that this application is supposed to deliver two key benefits, e.g. decreased unit costs per order and a shorter order cycle time, we can see constant improvement in each metric over the years. Eight years after this application has been introduced, unit costs have been halved (benefit 1) and order cycle time has decreased by a factor of three (benefit 2). In many ways this represents the ideal application, in which total costs decrease over time, while application benefits increase. Note that this analysis assumes that the benefits are directly related to the performance of the application – in reality of course, you can’t always be sure to what extent benefit performance is directly linked to the IT application or to external factors. In addition, benefits are not always easy to fully quantify and translate into financial terms – e.g. how would you objectively translate decreased order cycle time into increased sales? This is one more argument in favour of operational metrics as opposed to purely financial metrics, simply because they stand a much better chance of objectively measuring application benefits. Finally, depending on the accuracy of the costs and benefits, you could combine the two charts to show a financial ROI, but as we’ll see later in this chapter, there are reasons why you probably wouldn’t want to do this.
Application 2
In our second example, Application 2 (top of Figure 7.2), we see an entirely different picture:
• The project was delivered in December of year 1.
• In year 2, instead of decreasing, or at least remaining stable, total costs actually increased, for a number of reasons. Firstly, poor infrastructure capacity planning and load testing during the project phase resulted in poor response times from day one, requiring a significant hardware upgrade (infrastructure cost component). Secondly, inadequate testing and training, due to budget restrictions during the last phase of the project, resulted in much higher support costs (operations component). Finally, a combination of bugs and missing functionality required a significant new release after only one year of operation (product development component).
• Year 3 was in many ways a replay of the previous year, with costs once again increasing and another round of rework and corrections (product development component) to cope with yet more bugs, and the dawning realization that the original specifications did not really correspond to reality and needed to be revisited.
• By year 4, little had changed, and it became clear that the company had a problem application on its hands, with annual running costs (operations + infrastructure) having more than doubled since the start of the project, instead of decreasing or at least staying stable.
[Figure 7.2 Application 2: annual costs in k$ by cost component (product development, operations, infrastructure) and the two benefit metrics, charted over years 1–5.]
On the benefits side (bottom of Figure 7.2), assuming similar benefits as for Application 1 above, but for a different product line, we can see that apart from a timid drop in year 2 for benefit 2, the performance has actually been negative, with unit costs and order cycle time either stagnating or even increasing. In year 5, the decision was taken to phase out the application, with product development funding limited to urgent bug fixes for the last year of its existence. We are therefore looking at a problem application whose costs steadily increase over time, with no corresponding benefits to show for the money spent – the exact opposite of the previous example. Which prompts the logical question ‘Why would any company spend such money over a period of five years for no obvious business benefits?’. The unfortunate answer is that since very few IT departments are capable of understanding their application cost base in this manner, and hardly any companies have people in their BUs who track application benefits from an asset perspective, this type of analysis is simply not possible. So it’s not that companies wilfully throw good money after bad; it’s just that in the absence of objective cost and benefit monitoring at the application level, they simply don’t have the visibility for proper decision-making. So most of them simply carry on funding non-performing applications and keep their fingers crossed that the original business case still holds. Note that effective benefits monitoring is essential when IT costs are transferred to the business via allocations or cross-charging: people have to know what they’re getting in return for their payments.
Finally, application-based asset management as described here might make a lot of sense, but the average company runs thousands of applications and you can’t have an asset manager for each and every one of them. In practice you would manage a group or a set of applications, rolled up at either a functional level (e.g. marketing, sales, customer service...) or at a business process level (e.g. acquire new customers, manage existing customers, fulfil orders...). This means that the average company should have between five and ten asset managers to manage applications across either the key functional areas or the key business processes.
Reducing application lifetime costs
Since running and enhancing production applications consume up to 80% of the IT budget, reducing application lifetime costs becomes a key financial objective, not just from a pure cost-savings perspective, but also because the resulting savings can be used to fund new projects.
The first and most obvious way to reduce lifetime application costs is to consolidate duplicate or overlapping applications which were originally launched in glorious isolation from each other for organizational or political reasons. This is unfortunately quite common – many of us work or have worked in companies with multiple implementations of an ERP or CRM system from the same vendor.
Another option for cost reduction is to outsource mature applications to an outsourcer which has a more cost-effective infrastructure and skills base. A word of warning though: outsourcing is a highly complex undertaking that is beyond the scope of this book – suffice to say that if you outsource the wrong applications for the wrong reasons and go about it in the wrong way, you will rue the day you ever read somewhere that outsourcing can save you money...
Finally, there is one other way to reduce application costs which is not very well known but is very effective, and that is simply knowing when to retire an application! With proper asset management based on an ongoing cost–benefit analysis (like the examples in the previous section), an application manager would know when the stage had been reached where it would no longer make sense to go through with another costly and lengthy upgrade, and would begin to plan for its retirement. A new application (especially a packaged one) would, by definition, have richer functionality, lower acquisition costs and lower annual running costs than an old one, in much the same way a new car would yield better comfort, better fuel consumption and lower annual running costs than a 10-year-old model. Proper asset management with clear application retirement dates also leads to better long-term investment planning and risk management, avoiding the syndrome of applications on their last legs suddenly having to be replaced within the next 6–12 months, with the potential risk to the business and the frantic scramble for funds that usually ensues.
Many companies have a fair number of ancient applications which can no longer be cost-justified in terms of annual running costs and business benefits – but which cannot be identified without conducting an extensive inventory. This by itself is highly revealing of the dysfunctional nature of the traditional business model: we invest a lot of energy and money to fund new projects, but once the resulting applications have been delivered, we soon forget about them and they fall into a black hole called maintenance or keeping the lights on. Then the following year the cycle continues and we focus once again on new projects – which in turn will fall into the same black hole and end up forgotten! After five years of working like this, we have little idea of where the lion’s share of the IT budget is going and we have to do an inventory of the black hole to see what can be consolidated, outsourced or retired.
Using a transport analogy, it would be as if a company had a fleet of cars comprising a non-negligible number of 10-year-old clunkers which break down often, get terrible
mileage and have annual running costs which far exceed those of a new car. And to complete the analogy, it would not even be able to identify the poorly performing cars without doing an inventory, because costs are tracked in a catch-all cost category called ‘vehicle maintenance’. With proper asset management as explained in the previous section, there would be no need for an explicit inventory, because the application would be managed right from the time it enters production. From a cost-savings perspective, there are big financial payoffs to be had here. Retiring such applications could also free up funds for investment in new projects. For an IT department with an annual IT budget of, say, $100m, as much as 80%, or $80m, would be for running production applications. Assuming that you can identify and retire applications which yield as little as 1% in savings from this budget, that would already net you $800k. And with a little effort you can do a lot better than 1%...
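The arithmetic behind that last claim, spelled out (the figures are the same illustrative ones used in the text):

```python
annual_it_budget = 100_000_000   # $100m, illustrative
share_for_production = 0.80      # the 'keeping the lights on' portion
savings_rate = 0.01              # retire enough applications to save just 1% of it

production_budget = annual_it_budget * share_for_production
savings = production_budget * savings_rate
print(f"Production budget: ${production_budget:,.0f}")   # $80,000,000
print(f"Savings at 1%:     ${savings:,.0f}")             # $800,000
```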
The limits of financial ROI when applied to IT
In the days before the ‘dot bomb’ crash and subsequent recession of 2001–2003, companies were awash in cash and IT projects were approved with highly subjective business cases. After the funds dried up, companies suddenly ‘discovered’ the requirement for objective business cases as an integral part of the approvals process. While this has clearly been a good thing from the perspective of rational investment planning, decision-making and accountability (in other words, governance), it unfortunately tended to institutionalize the notion that the only good business case is one that can be defined in traditional financial terms (e.g. ROA – return on assets; NPV – net present value; IRR – internal rate of return; payback period or break-even point). The reasoning behind this is that IT funding should be subjected to the same financial considerations as any other part of the organization, namely that it should have an acceptable rate of return on the capital invested, usually based on a company’s ‘hurdle rate’ or investment approval threshold. However, in the real world, things are not that simple. Firstly, it should hopefully be clear by now that software investments are not the same as physical investments like plant, property and equipment, which can be analyzed in isolation for ROI. Similarly, the various stocks, bonds and other financial instruments which form part of a financial portfolio usually have little relation to each other, which makes an ROI approach possible for each component. Modern-day IT applications, however, are very rarely stand-alone systems. On the contrary, they are integrated parts of an enterprise whole, linking in one way or another the demand and supply chains, from marketing and sales through to order management and customer service. ROI can therefore be extremely difficult to calculate, since in most cases it will be a diffuse collection of benefits, both tangible (costs, revenue, cycle time...) and intangible
(employee productivity, customer satisfaction, customer loyalty...), which can impact multiple functional areas in both predictable and unpredictable ways. A CRM investment is a typical example: any upfront calculated ROI stands every chance of being wrong. The dominant form and shape of the actual ROI will only show up after use – if it shows up at all. In the rare case where you could compute a reasonably accurate ROI, the corresponding hurdle rate against which a decision would be based would have to be much higher to take into account the risks associated with any IT project, e.g. organizational risk, technological risk, resource and skills risk (risk management was discussed in Chapter 4).
From a purely financial perspective, an IT project is arguably one of the worst investments you could make – you’d probably get a better return from investing in risky financial instruments like warrants and unproven technology stocks than in an IT project. Or as John Spanenberg, an IT Investment Director at ING Bank, was quoted as saying, ‘In terms of gambling, first there is horse racing, then there is poker, and then comes software development’. Setting the hurdle rate for IT projects at the same level as that for investments in plant, property and equipment therefore does not make sense. IT investments cannot and should not have to go head-to-head (in terms of using the same criteria) with other business investments when competing for funds during the budget process.
In other cases, the inter-relations between projects and their corresponding potential returns adjusted for risk would make a portfolio-based approach more appropriate (see Chapter 4), instead of trying to gauge what return each individual project would bring when considered in isolation. Using, once again, personal finance as an analogy, if you wanted to invest in risky emerging markets, you probably wouldn’t even try to compute an ROI – what you would do instead is to say something along the lines of ‘at not more than 5–10% of my total investment portfolio, I’m willing to shoulder the risk’, period. Note that even the portfolio-based approach, as essential and necessary as it is for IT investment planning, would have its limits under the new model because portfolios are, by definition, based on financial benefits. And as we have seen in this book, many if not most project benefits will be measured in operational terms – which may or may not be translatable into financial terms. So the portfolio approach would have to be adjusted to take into account a mix of both financial and operational returns, perhaps along the lines of a ‘balanced scorecard portfolio’ (and doing that is beyond the scope of this book).
Finally, in the case of projects based on untried and untested concepts – of which we’re seeing more and more – instead of asking the traditional ‘What return could I expect for this investment?’, the question should be turned the other way round and read ‘How much risk am I prepared to assume and how much am I willing to incrementally spend
in order to obtain this or that new operational capability?’. Consider the humble example of the telephone: if you were to replace your entire telephone system tomorrow, you’d be hard-pressed to put together an ROI-based business case. Ditto for email. And yet your company simply couldn’t function without telephones and email. So why should it be relevant to know whether the $1m or whatever investment required is going to provide you with a return on capital of 15% or whatever...?
Most people working in IT or with IT would probably agree with this line of reasoning – after all, they deal with the real world every day. However, since funding of new projects is provided by the office of the CFO, who by definition reasons in terms of ROI, project sponsors have to perform a rain dance to try and get a project launched, after which they know there will be little chance that they will have to back up their numbers 1–2 years later. There are some companies, though, that have recognized the limits of financial ROI when it comes to IT. One financial controller at a global foods company calls financial ROI for IT projects ‘bullshit’, and explains that their investment planning is based more on business benefits like enhanced operational capabilities and the creation of competitive opportunities than on purely financial returns.
This apparent contradiction between financial ROI and operational benefits was highlighted by author Harwell Thrasher in an interview in which he points out two fallacies in using ROI to choose projects. Firstly, he says, the highest-ROI project is not necessarily the best for the business because ‘a high-ROI project may make a process more efficient by making it more rigid, when what you really need is for it to be more flexible’. The second fallacy is that you can compare project ROIs. Directly echoing the challenges outlined in this book in Chapter 4, p 54, he says that ‘Those who write proposals tend to overstate return, understate expenses, minimize transition costs and dependencies on other projects, and neglect risk. So the project with the highest ROI on paper tends to be the one with the most creative proposal writer.’ (Quoted in Computerworld of 23rd July 2007, p 29.)
At the end of the day, investing in IT should be less about obtaining a return on investment, and more about improving operational capabilities to help gain a competitive edge by reducing costs, driving growth or improving customer loyalty – even if these are difficult to quantify, or indeed cannot be quantified at all. As Einstein’s quote at the beginning of this chapter states, ‘Not all things that can be counted count, and not all things that count can be counted’.
Part III The New Model in Practice
8
Players, Roles and Responsibilities
The difference between involvement and commitment? When you have bacon and eggs for breakfast, the hen is involved but the pig is committed... (John A. Price, quoted in MacHale, 1997)
Players, roles and responsibilities – the business
The end-to-end processes covering supply, demand and production are shown in Figure 8.1. The roles and responsibilities across these phases are shown in Figures 8.2 and 8.3, which we will now explain. Needless to say, what follows is not an absolute, but simply a framework for the fundamental roles and responsibilities which companies can use as a foundation to implement the new model. In the real world, there can be variations across companies, sectors and countries.
Business units
These are the internal clients who need to solve a business problem using information technology. Roles and responsibilities will vary across the demand, supply and production phases.
• DEMAND PHASE: See Figure 8.2. The capture of demand in the form of ideas into a pipeline will be the responsibility of a departmental manager or key user of an application, called the application manager. Without this role, existing application users or people requesting new projects would be free to ask for anything and everything, from the frivolous to the serious. All of this would subsequently need to be evaluated, resulting in both IT and the business spending a lot more time than is really necessary. The ‘application manager’ will meet on a regular basis with the ‘IT client manager’ (defined further on) to jointly review demand, not just what’s in the pipeline and the status of ongoing work, but also what might be coming down the line in the near future. Once ideas have been screened, filtered and validated as described above, they then move down to the next stage, either as: (i) a change request related to ongoing projects or production applications; or (ii) a new project request.
[Figure 8.1 End-to-end processes for the new model: an idea (high-level opportunity) becomes a project request (estimated costs and benefits) and then a project (detailed budget and resources) as it moves through the demand phase (capture demand and identify opportunities; build business case and seek executive approval; perform detailed budgeting and planning), the supply phase (define requirements; buy/build; deliver/implement – see Figure 5.2 for detail) and the production phase (service/support, with ongoing cost–benefit analysis).]
Planned demand, usually in the form of project requests and important change requests (e.g. a major new upgrade of an existing system), will be managed by an investment committee or project review board or governance committee or PMO. Note that PMO can stand for project management office, programme management office or portfolio management office depending on its role (PMOs are discussed further in this chapter in ‘What role for PMOs?’). Whatever form this group takes, it comprises key players from IT and the rest of the business responsible for evaluating and scoring demand based on both business and IT criteria, checking for potential duplication and funding approved projects and major change requests. Ideally, this group would not just be content with evaluating individual ‘orders’ from a cost–benefit perspective, but would also help IT to become a strategic differentiator by evaluating demand from the perspective of enterprise-wide process improvement and business innovation. If a project is approved and funded, it will be owned by an executive sponsor (usually a VP or departmental director), responsible for realizing the business case – or more specifically, since the payback period is hardly ever reached at project delivery, responsible for ensuring that the right conditions are in place to enable the business case to be realized over time.
[Figure 8.2 Roles and responsibilities across the demand phase: the application manager (or departmental manager where applicable) and the IT client manager work the idea pipeline (high-level opportunities); the investment committee or project review board (PRB) and the executive sponsor handle project requests (estimated costs and benefits); the IT project manager picks up the approved project (detailed budget and resources). The figure distinguishes IT roles from BU roles.]
[Figure 8.3 Roles and responsibilities across the supply and production phases: the IT project manager and the business owner (on behalf of the executive sponsor), together with the investment committee or project review board (PRB), cover the supply phase (define requirements; buy/build; deliver/implement), while the service manager, the application manager and the IT client manager cover the production phase (service/support and ongoing cost–benefit analysis).]
Unplanned demand, usually in the form of small change requests, feature requests and emergency fixes for production systems, with short approval times (weeks) and short delivery times (1–3 months), does not need to go through an investment committee. As
explained at the end of Chapter 4, funding for such ‘minor’ requests is best achieved by drawing from an annual budget envelope, similar to a current account or chequing account. This account would be owned directly by the application manager (or owned by IT on behalf of the application manager), who would therefore be able to fund the fine-tuning of ‘his’ application outside of the budget cycle – based of course on the appropriate technical input provided by IT in order to help him reach a decision.
• SUPPLY PHASE: See Figure 8.3. Since it is unrealistic to expect a senior executive to actually run a project, the ‘executive sponsor’ will delegate the day-to-day aspects to a business owner, who, in practice, usually owns the corresponding functional area or heads up the department which will actually use the application. This person is responsible for co-ordinating users and ensuring the business meets its obligations in terms of the project plan, which would cover everything from ensuring the team has the right people to putting together a programme to manage organizational change. He is also the main point of contact for his counterpart in IT, the ‘IT project manager’ (defined further on). This is an extremely important role, which could mistakenly be considered a luxury since we already have an executive sponsor. However, as we all know, people at executive level don’t have the availability to run a project on a day-to-day basis in terms of making operational decisions about processes and data – and neither is it their role anyway. Without this ‘business owner’ role, the executive sponsor soon becomes a figurehead, whose distance from the day-to-day running of the project usually ensures its eventual demise. Note that this person is sometimes referred to as the ‘business project manager’, but this term should be avoided, firstly because it implies that he has project management skills (hardly ever the case), and secondly because it opens the question as to which of the two project managers, business or IT, is the real project manager. In the final analysis, the business owner is the business client responsible for working with IT during a project.
• PRODUCTION PHASE: See Figure 8.3. Once the project is delivered, the resulting application will continue to be owned by the BU, now responsible for realizing the business case over time. The actual day-to-day management of the application, which is now essentially an asset (in the financial sense of the term, and not in the sense of an entry in a CMDB), will become the responsibility of the ‘application manager’. In addition to his role during the demand phase already discussed above, the ‘application manager’ will ensure that the business uses the application as planned and addresses any potential political/organizational obstacles to its use. He is also responsible for measuring the business benefits and the costs (not just IT costs but also the BU costs associated with his own people), which will enable him to do the ongoing cost–benefit analysis. Though the ‘application manager’ could be the original ‘business owner’ during the supply phase, in practice it would be a dedicated post reporting to him, optionally with
a team to manage data and first level business support (nothing to do with IT or technical support). It could be argued that the post of ‘application manager’ should belong in IT. However, since the ultimate objective of this post is to ensure that an application asset realizes business benefit, the right place is on the business side. Finally, as mentioned when discussing the ‘application manager’s’ role during the demand phase above, he remains the main point of contact for his counterpart in IT, the ‘IT client manager’.
Players, roles and responsibilities – IT
IT department
An internal service provider who will address the client’s need, using its own internal resources, external resources or a combination of the two. Roles and responsibilities vary across the demand, supply and production phases.
• DEMAND PHASE: See Figure 8.2. An IT client manager or ‘account manager’ or ‘business relationship manager’ will be the single point of contact for a department or major BU functional area. His internal client will usually be the ‘application manager’ (see previous page), with whom he will meet on a regular basis to jointly review demand, not just what’s in the pipeline and the status of ongoing work but also what might be coming down the line in the near future. The objective of this key role – which would be new in many organizations – is to manage the business relationship in terms of demand, supply, quality of service, finance and accountability for results. Without such a role, IT is essentially a passive order-taker with little added value in terms of helping the business to formulate their ideas – and where appropriate challenge them and propose alternatives – and whose criterion for success remains sterile conformance to deadlines, budget and spec. One of the main challenges of this new role is for a single person to be able to objectively represent both IT and the business without losing the respect of either group. Like the two-faced Roman god Janus, he has to deal simultaneously with two organizations and present a credible face to each. The ideal ‘client manager’ should be able to talk business with the business, talk IT with IT – and be able to make each side see the other side’s point of view when necessary. Finding and nurturing such talent won’t be easy, but without such people you’re not going to be able to build the required relationship between IT and the business. In large organizations this will probably be a dedicated role covering multiple applications. In smaller organizations, it can be part of the role of a senior business analyst for a particular application, which essentially means that there will be a client manager for each application.
• SUPPLY PHASE: See Figure 8.3. Once a project is launched, it will be managed by the IT project manager, responsible for coordinating IT resources and ensuring IT meets its obligations in terms of the project plan, which would cover everything from development and testing to training and deployment. Note that the ‘IT project manager’ is usually identified towards the end of the demand phase, just before the project drops out of the pipeline for execution (bottom of the funnel in Figure 8.2). The ‘IT project manager’ is also the main point of contact for his counterpart in the business, the ‘business owner’. Though the ‘IT project manager’ would manage the project from a budget and project plan perspective, he would do this in close cooperation with the ‘business owner’, since the two would be jointly responsible for the project outcome. Finally, the ‘IT project manager’ is necessarily at ease with iterative methods, from workshops and process modelling to prototyping and piloting.
• PRODUCTION PHASE: See Figure 8.3. Once the project is delivered, a service manager in IT will be responsible for the resulting application from a day-to-day production and service standpoint. This means ensuring availability, response times and support, optionally against agreed service levels, and closing the loop with the development teams responsible for bringing out new releases (which may be within IT or at an outsourcer). The ‘service manager’ is also responsible for measuring his associated costs in terms of infrastructure and people, and providing this information to the application manager in the BU so that he can monitor the overall costs and benefits. This emphasis on ownership of the production phase would be new to most IT departments – too many organizations just focus on delivering the project, after which the application simply morphs into that catch-all overhead category called production, or keeping the lights on.
The new business–IT relationship
The traditional client–vendor relationship is now replaced by a partnership in which the business and IT work towards the common business objective of a positive outcome over time, where positive outcome means workable results which contribute to realizing business benefit, and over time means in the form of successive, time-boxed releases every 6–12 months. The BU might ultimately own the realization of the business case, but its achievement will be obtained by the partnership between the business owner and the IT project manager, who are thus jointly responsible for the project outcome. Then, after the first few releases, once the application has become more mature and processes have stabilized, this partnership will shift to the application manager and the client manager.
The changing role of the business analyst
As we saw in Chapter 3, the role of the business analyst (sometimes also known as the systems analyst) under the traditional model is to sit down with business users in an attempt to understand their requirements, and ultimately produce a signed-off SoR which will then be ‘tossed over the wall’ for IT to build. The added value during this process can vary significantly depending on the person, from a mere note-taker to an active participant capable of challenging clients and proposing alternative ways of doing things. Both extremes exist. The former is quite dangerous though, because by not challenging unrealistic or technically unfeasible requirements, the note-taker generates false expectations and eventually sets up his IT colleagues for a fall. Under the new model, as we saw in Chapter 5, the traditional SoR obtained during interviews is replaced by process and data models obtained during cross-functional, interactive workshops. And since all the required IT participants are present during these sessions – project manager, business analyst and even software developers – it is clear that the traditional role of the business analyst is no longer applicable. So how should this role evolve under the new model? The main change is that the business analyst is no longer divorced from the technical side of things, i.e. he must be well grounded in producing software solutions, with a background in either software development or the configuration of enterprise software packages (both would be ideal). This technical and/or functional grounding is essential because a key part of his new role is to work with the lead software developer who was in the workshop with him to design a prototype and the subsequent releases and versions. In practice, this means that the role of business analyst will move away from what is essentially a documenter of other people’s requirements to that of an interpreter of business processes who proposes solutions and helps to design them (in conjunction with the relevant subject-matter experts like software developers and the software package consultants). In short, any IT organization – and indeed the business analysts themselves – would have much to gain, both in terms of career development and actual business results, by moving to the more proactive analyst role required for the new model.
The changing role of the developer
Under the traditional model, upon receiving the SoR tossed over the wall to her by the business analyst, the lead developer has the unenviable task of programming useful deliverables out of a document based on interviews in which she did not participate. And as we
already saw in Chapter 3, even if she does manage to produce something ‘to spec’, it stands little chance of corresponding to actual requirements. Under the new model, however, the software developer is no longer a sequential player at a handover point somewhere down the line. On the contrary, he is in the loop right from the project kick-off, and is an active participant during the workshops, as explained in Chapter 5. There are many advantages to this approach. First and foremost is the fact that since the developer participates in the process and data modelling, he has all the information needed first hand to start working on a design as soon as the workshops are over, thereby slashing literally months off the traditional approach in which his only reference was an SoR. Another advantage is that he is directly exposed to real business people with real business problems, which should result in a more practical, real-world design based on real-world feedback, instead of one conceived in splendid isolation based on interpreting documented requirements. Finally, the developer would probably argue that as far as he is concerned, the greatest advantage of this approach is that from a career perspective, it dramatically increases his business knowledge and customer focus, thereby enabling him to be much more effective at his job. It also makes him eligible to progress to the role of analyst in a far shorter time than under the traditional role – and when he does so he would already be business-aware, as opposed to traditional software developers who essentially live in a world of ‘code and specs’ until such time as they ‘discover’ their business customers.
During an international project review meeting at a global telco, the project manager struck up a conversation with a developer from another country during a coffee break. When this person learnt that the developers on the project manager’s team participated in workshop sessions right from project kick-off and were in direct contact with the business, he was amazed. He said that in his organization developers were not allowed to have any contact with business users, since that was the analyst’s role (he also asked if they were hiring...).
So just as for business analysts discussed in the previous section, any IT organization would have much to gain in terms of career development and actual business results by moving their developers to the new model.
Towards the merging of the developer and analyst roles?
In the light of the changing roles of the business analyst and the developer discussed above, you will probably have seen that there is a clear overlap in terms of functional and technical skills. Which raises the question of whether you would actually need both. It certainly is a valid question.
Depending on project scope and company size, you may not need dedicated business analysts. Small organizations have a combined analyst/developer role, or in the case of packaged software, a business consultant who is a subject-matter expert in a particular functional area and is capable of configuring the package. Such people usually work directly with the IT project manager to design and propose a prototype and the subsequent releases and versions. For larger projects and larger companies, a dedicated business analyst becomes necessary to better manage the business relationship and to coordinate development and configuration efforts. For companies wishing to adopt the iterative methods required by the new model, the logical trend would therefore be a significant reduction in the requirement for both traditional business analysts and traditional software developers. There is another factor at play which might accelerate this trend. The software solutions landscape comprising traditional software development and monolithic software packages is evolving. Tomorrow we will see more and more hosted software-as-a-service (SaaS), open source software and services based on SOA (Service-Oriented Architecture), all of which, by definition, require a blend of business analyst and configuration skills. So even without adopting the new model, it is still a fair bet that the changing software solutions landscape just described will result in the merging of the traditional developer and analyst roles.
The changing role of the project manager
As we saw in Chapter 5 when discussing why prototyping has never become mainstream, the role of the project manager under the traditional model is essentially to manage a standard client–vendor relationship by ensuring compliance with a set of processes and procedures, usually based on the waterfall method. It is not to generate a positive outcome in terms of workable results and customer satisfaction – not directly in any case. As unpalatable as this may be to the vast majority of project managers trying to do one of the most difficult jobs in any company, this is a fact. The role of the project manager is to deliver to spec, on schedule and within budget. Whether this actually generates a positive outcome will depend on how realistic the business case was, how well the client understood his requirements, how well they were interpreted by the analyst, how well the developer understood the resulting SoR, and so on all the way down the line. Under the new model, the role of the project manager is to achieve (in conjunction with the business owner) a positive outcome in terms of workable results and customer satisfaction, with everything else merely a means to that end. He thus focuses on outcomes rather than on due process, and primarily uses iterative methods to achieve this.
This results in a radical change in role, with traditional task/deliverables management now enhanced by a combination of creativity, motivation, leadership and subject-matter expertise. These additional skills, far from being nice-to-haves, are essential because the project manager has to help to define and deliver a solution which is based on a moving target in terms of requirements, costs and schedules. It is therefore less a question of managing people’s obligations in terms of compliance with processes and procedures, and more a question of harnessing people’s creativity and then motivating and convincing them to move in a particular direction which only becomes clear over time. Finding such profiles, whether through internal training or external recruiting, will not be easy for IT departments. But without successful iterative project managers, you cannot start to implement the new model.
The changing role of the operations department
Under the traditional model, once a project is delivered, the resulting application(s) are handed over to the operations group to run from a production and service standpoint, ensuring availability, response times and support. Implicit in this model is a fully tested and fully documented solution which works as planned, and conforms to the appropriate technical and operational acceptance criteria. This is only to be expected, since operations runs the company’s core systems (sales, order management...), sometimes on a 24/7 basis, which consequently have to be reliable.
Under the new model however, the first version would, by definition, not represent a finished product so to speak, but merely the first in a series of successive versions and releases. The robustness of the first version (and especially the pilot version before that, where applicable) might therefore not be ‘production grade’, and probably won’t become so till at least the second or third version a year later. So the question is ‘Who runs the initial versions of the application under the new model?’ – operations or the development group that built it? Experience shows that at least during the first year the application should be owned from a production standpoint by the original development group, until such time as business processes mature and reliability increases. This would ensure the responsiveness needed to continue to deliver results in a rapidly changing environment in which considerations of time-to-market can be as important as operational quality. The challenge in this approach is for the operations department to understand why this is so, and to see it as ‘normal’, and not as an attempt to bypass it or undermine its legitimacy.
Here are two real-world examples. The CRM centre of excellence referred to in the example on p. 80 was also responsible for running the resulting applications, because
they ran on different hardware and database environments from those of the rest of the company. This created organizational friction at times, and the CRM operation was generally viewed as an anomaly in the overall IT organizational structure. The company in the case study in Chapter 10 provides another example of this – the hardware and database environments were also different from the company standard, and were run by the applications group responsible for developing the application. Note that in both examples, even if the hardware and database environments had corresponded to company standards, the applications could still not have been turned over to operations on day one because the initial versions would not have been production grade.
There are three ways of resolving this problem of operational ownership. The first is for the operations group to be made an integral part of the change programme required for the new model, so that they are brought on board at the appropriate time and are part of the delivery of the first version, even if it is not production grade. The second is for the development group to run the application for the first year before transitioning a more robust version 2 or 3 over to operations. Note that this would imply that the development group already has the required expertise on its team (mainly a database administrator or DBA) or is allowed to hire one or use a contractor. The third approach would be to have centres of excellence or application groups with full end-to-end responsibility for an application from a customer perspective, which would include running it.
Experience shows that this last solution is by far the best from a customer perspective in terms of responsiveness and business knowledge. However, an operations department can probably run a robust application at a lower cost because of economies of scale and centralization of skill sets. Whatever the final decision by the CIO, it is clear that the operations department would need to adapt to the new model, and this will require the appropriate communication and change management.
What role for PMOs?
The term PMO can stand for a number of things: project management office, programme management office, portfolio management office – even project meddling office, of which more later. The original role of PMOs, which started gaining credence at the turn of the century, was to improve project success rates. This could be achieved by either having a centralized ‘best practice’ group responsible for training and mentoring project managers, or by having a centralized team of ‘super project managers’ to run high priority initiatives. In some organizations this evolved – helped along by regulatory requirements like Sarbanes-Oxley – to include project evaluation and prioritization, with a view to rationalizing the investment process and facilitating enterprise-wide reporting.
Implementing these – fully justifiable – goals from an organizational perspective, however, sometimes proved problematic, for one or more of the following reasons:
• Difficulties in objectively defining just what constitutes project success, e.g. is it hard numbers based on a – more or less subjective – business case, or simply customer satisfaction?
• A culture clash when a PMO – by definition a central authority – is overlaid onto an organization with a decentralized culture. In extreme cases, the resulting administrative and compliance aspects can foster grievances and ultimately rejection, with the PMO viewed as a policing function (project meddling office). An example is a telco which suddenly created a PMO with the objective of enterprise-wide reporting on key international projects from a cost and schedule perspective. In the resulting culture clash, project managers in the various countries carefully filtered the information provided to the PMO, while continuing to run their projects as before (with their own managers simply turning a blind eye to the new setup).
• A two-tier or class-based organization with the ‘noble’ projects run by the PMO, and the mundane stuff run by the other project managers – who in addition have to provide part of ‘their’ resources for the PMO-run projects.
Resolving these very real issues requires an organization to carefully distinguish between two very different roles: helping people to become better project managers from an educational/consultative perspective, and actually managing the project from an ownership and accountability perspective. The former is an internal IT educational issue which the business customer might or might not care much about; the latter however represents the bottom line for the customer in terms of who is responsible for delivering results. Depending on the company and its culture, it might be preferable to separate the two.
For a combination of organizational and cultural reasons that we need not go into here, PMOs remain at the time of writing a primarily Anglo-Saxon phenomenon, found more commonly in the US and the UK than, say, in continental Europe. This does not mean that the roles of the PMO are absent in other countries, only that they are more likely to be found in separate organizational structures, for example a ‘process and methods’ or ‘methods and quality’ department for the project best practices and a more classic project review board or investment committee for project prioritization and investment. As far as the new model is concerned, it is less a question of which group or groups – be they PMOs, investment committees, project review boards or process and methods departments – carry out which roles, and more a question of ensuring that they are carried
out. At the end of the day, an organization has to have in place the people responsible for standardizing on project management practices, facilitating enterprise-wide project reporting and rationalizing the investment process.
The role of External Service Providers (ESPs)
The role of ESPs under the new model would need to be very carefully managed for two fundamental reasons: (i) they adhere by definition to the traditional model and its contractual client–vendor relationship, and would therefore have to have very good reasons to deviate from it; (ii) like any business, they have a fundamentally different agenda from that of their clients, which is to get clients to use as many of their services as possible, and not to improve the performance of their IT organizations (nothing wrong with this, after all, it’s a business). Let’s examine these two constraints in turn.
As a vendor in a client–vendor relationship, an ESP would normally only start work for a client with a signed-off SoR, and the resulting statement of work, or SoW, which stipulates in detail the actual tasks to be performed and the corresponding deliverables. Even for packaged software implementations, which are sometimes based on workshops which drive configuration decisions, there are still clearly defined tasks and deliverables which can ultimately be ticked off as corresponding or not ‘to spec’. Finally, since the norm today is for clients to demand fixed-price contracts as opposed to time and materials (paradoxically not always the best way to do things, but that’s beyond the scope of this book), there has to be a contract of some sort which specifies the criteria for final delivery and final payment. From the business perspective of the vendor, all of this makes perfect sense – any business needs to be able to forecast, sell and plan based on predictable revenue streams. For ESPs this means allocating billable resources across multiple clients in a way which maximizes resource utilization and keeps to a minimum the number of people ‘on the bench’ who are not generating revenue.
For the second point concerning the agenda of ESPs, it should not come as a surprise to anyone to learn that the ‘modus operandi’ of ESPs is to get a foot in the door of any new client and use their privileged insider access to ferret out as many opportunities as possible for additional work, either with the IT department or the rest of the business. The techniques used to do this, especially amongst the former Big X consultancies, are sufficiently well honed that they are usually capable of entering into the first contract at a relatively low margin, safe in the knowledge that they have a good chance of recouping the initial investment with subsequent work. There’s nothing intrinsically wrong with this; all businesses operate this way with varying approaches – some obvious and some less obvious – to drive additional business and increase reliance on the vendor.
The fundamental conclusion from these two points is that it will be difficult for some ESPs to adhere to a new business model in which:
• The traditional client–vendor relationship is replaced by a partnership;
• The output-based approach (pre-defined activities and compliance to spec) is replaced by an outcomes-based approach (positive results) as the main criterion for success;
• The predictability of large, long-term contracts (e.g. five people full time for a year or more) is replaced by smaller, milestone-based deliverables whose results could impact future funding.
It would be a mistake, however, to conclude that ESPs cannot play a part in helping an IT department move to the new model, because ESPs usually have people with the process modelling and workshop facilitation skills that most IT departments lack. It’s simply a question of understanding – and accepting – the realities of their business agenda and managing it accordingly. Note that smaller, more specialist ESPs should have less trouble adapting to the new model. For example, a pharmaceutical company built up a long-term relationship with a team of process modelling consultants from a specialist ESP. Over the years they came to know the main processes between marketing, sales and customer service very well, for the simple reason that they were the ones who had run all the workshops for each functional area over a period of two years. But because it was a small company, there was no possibility for generating additional work outside of their specialist domain.
9
Getting Started
Every organization has to prepare for the abandonment of everything it does. (Peter Drucker)
Accepting that a new business model is required to successfully run IT is one thing; implementing it is quite another. Managing organizational change is always a challenge, and in this case all the more so, as it requires new thinking across the board at both the IT and the business level, as we shall now see.
The business challenge
There are enormous business challenges to be overcome at all levels – CEO, CFO, VPs, directors – for a new model of this nature to work, because it is essentially saying that everything which we originally thought was true for IT and represented fundamental business practice – if not plain common sense – was really not so at all and now needs to be turned on its head. Sacred cows like sign-offs, committed costs and schedules will now fly out of the window, to be replaced by the reality that they are all moving targets. What used to be ‘free’ now needs to be priced in order to regulate demand. Decibel management and executive influence for launching new projects will need to be significantly reduced through governance, in order to manage demand and approvals objectively.
For the CFO in particular, accepting that financing IT is not the same as financing plant, property and equipment is probably going to be the biggest challenge, because it goes to the heart of her role as custodian of the company’s finances, which is to only give out money in return for a contractual ROI. We’re now going to ask her to forget about ROI in the traditional sense of the term and to take on the role of a venture capitalist who manages a risk–reward equation rather than a cost–schedule equation. We will no longer expect her to ask in investment committees or project review boards ‘Are we still on budget and schedule?’ (which implicitly assumes a valid and unchangeable business case), but rather ‘What types of benefits are we seeing for this round of funding, and does the potential return at this stage warrant additional funding?’.
The other main executive challenge, which will be no less daunting, will be at VP and director level. These are the clients of IT who actually commission new projects and use the resulting applications. Here we will first be asking them to cost-justify new projects not just in order to get them approved, but also to bear the associated costs. We will then be asking them to be responsible for the business outcome. During the project phase they will then play a role as active participants with a stake in the final outcome, instead of passive customers with an eye on the contract. And finally, after delivery they will be required to manage the resulting production applications as financial assets, in other words be responsible for ongoing cost–benefit analysis, and know when to retire them.
The good news is that these seemingly near-insurmountable challenges need not really be so in practice, because the practices involved are already used elsewhere in the world of business by the very same executives – what is new is that we will now be asking them to apply the same rules to IT. Two examples:
• Venture-capitalist financing is an established way of funding initiatives which comprise a high degree of risk and uncertainty. So all we’re really asking is for the CFO to recognize IT projects as being more in this category than in the category of plant, property and equipment. Once she does this, then it would logically follow that she would adopt the appropriate means of financing them.
• All companies assign ‘asset managers’ (even if they’re not always called that) to manage assets in terms of costs and benefits over their useful life. For example:
  • If a company buys a fleet of trucks, it will name a person to manage them in terms of costs and benefits – it won’t ask the truck company to do that.
  • Ditto for a factory; a factory manager will ensure that the asset delivers in terms of costs and benefits – he won’t ask the building contractor or machine-tool vendors to do that.
So if a company invests millions in a new IT project which is supposed to deliver business benefits, it should name someone in the BU to manage the resulting applications. It can’t ask or expect IT to fulfil that role – that would be the equivalent of asking the truck company to manage the fleet, or the building contractor or machine-tool vendors to manage the plant. So all we’re really asking is for the business users of IT applications to recognize them as also being assets, and to adopt the same approach they use for other assets like plant, property and equipment.
The IT challenge
It would be tempting to think that IT faces fewer challenges than the rest of the business in adapting to the new model, because they already deal directly or indirectly with most
of the ‘revelations’ in this book. Nothing could be further from the truth – they face more challenges than the business because their whole organization, staffing, skills, compensation and benefits are based on the traditional model. Unlike the business, which would ‘only’ (no mean challenge in itself) need to wrap its mind around some new concepts, play a participative instead of a spectator role in projects and introduce a couple of new job descriptions, the IT department would have to more or less re-invent itself, with risk and outcomes replacing much of the traditional security of specs and sign-offs. This would mean telling a lot of people that their job descriptions are now going to change, and that some of the things they were good at yesterday are suddenly going to be less important tomorrow, and that new skills and attitudes are going to have to be acquired through a combination of training and new hiring.
The dominant profile of the people who will be interacting with the business, from analysts and developers to client managers and project managers, will be resolutely extrovert and people-oriented. This contrasts with the dominant profile of IT staffers today, which is introvert and technology-oriented. This reality was echoed in a joke told by Gartner analyst John Mahoney at Gartner ITExpo in Cape Town in August 2006: ‘How can you tell an extrovert in IT? He looks at your shoes instead of his when he’s talking to you’.
Finally, if the CIO is not careful, he could unwittingly end up running a two-tier IT department, with a new-model school delivering tomorrow’s projects and an old-model school maintaining yesterday’s applications. And oh, as if that were not enough, everybody would now be required to enter their time in order to capture their costs. And finally, the CIO would have to carry out this transition without dropping the ball concerning ongoing projects and operations. Yes, as De Marco’s dictum at the start of Chapter 3 says, the truth might set you free, but before it does it’s going to make your life miserable. For an IT department to adapt to the new model will therefore be a long-term journey, as we shall now see.
Where to start
Given the scope and magnitude of the changes required to move to the new model – or, realistically, to start introducing the basics in parts of the company – it should come as no surprise if we say that we’re looking at a long-term journey: 1–2 years before seeing any first results, and 3–5 years before seeing it institutionalized. Such a change management programme is clearly beyond the scope of this book (see ‘How consulting companies can help’ towards the end of this chapter), except to say that it should be triggered by the CIO and subsequently managed jointly by IT and the business as an enterprise-wide initiative.
In terms of where in the organization to start, there are basically three approaches. You can either focus on your pain points, or focus on those areas which are already working well – or you might have no choice in the matter:
• The first and most common approach is to focus on one or more pain points and address those. These pain points can either be ongoing, e.g. a department or functional area with a poorly performing application and unacceptably high running costs, or point-in-time, e.g. a failed project which had serious organizational repercussions. In general it is easier to institute major change in the second case, simply because the climate is right for change and there will be less opposition. For ongoing pain points however, it’s more of a challenge, because things are still working, however badly, so there will inevitably be some form of resistance to change which will require a combination of skilful selling and assertive leadership.
• The second approach takes the opposite tack to pain points, and actually looks for those areas which are already working well. Any company will usually have at least one mature group with a stable application and a good relationship between IT and the business. Such a group will, by definition, already be doing a lot of things right, and formalizing one or more of the components of the new model would be a logical step in their process maturity. Once this group is able to demonstrate workable results, it could then become a showcase or catalyst for the more challenging parts of the organization.
• Lastly, you might not have any choice in the matter: regulatory compliance, a merger, an acquisition or even a major internal reorganization are all examples of external factors which might require a certain part of the organization to get its house in order.
As all companies are different, with varying levels of maturity, there are no stock answers. Your starting point can be any one of the above, or a combination, depending on circumstances.
How to start – from checklist to action plan
Once you’ve decided where to start – or events have made the decision for you – you have to decide on two or three components of the new model to implement, with the objective of institutionalizing them inside of a year. The overall guiding principle should be to institutionalize every year at least one or two of the model components in part of the organization (realistically, you won’t be able to do it company-wide). Examples could be demand management, iterative development or time entry. At the end of each year, you should be able to answer the question ‘What have I institutionalized this year?’.
Here is a high-level checklist of the end-to-end essentials you would need to implement for the new model. If you were to ‘slam-dunk’ it into your organization, this is more or less the order in which you would do it. In the next section though, we’ll see how you could actually do this in the real world, step by step.
1. CLIENT MANAGEMENT: assign IT client managers to be the single points of contact for your internal clients at the level of department, application or major functional area.
2. APPLICATION MANAGERS: get the business to assign application managers to be the single points of contact for your IT client managers (above).
3. ITERATIVE METHODS: start training or recruiting project managers to be able to run interactive workshops for process modelling, and business analysts and developers to design and develop new systems based on a prototyping approach.
4. TIME ENTRY: get all IT staff to enter time against both new projects and production applications in order to capture your people costs.
5. APPLICATION-LEVEL COST TRACKING: once you’ve got time entry under control (above), you will know your people costs in terms of product development and operations. You can then associate this with the corresponding infrastructure costs to determine total costs at the application level (a small illustrative sketch follows after this checklist). The same would apply to projects, which represent the first phase of an application’s life cycle.
6. APPLICATION-LEVEL BENEFITS TRACKING: get the application managers from the business to start monitoring the ongoing benefits of their applications. The same would apply to projects, for which you would monitor ‘benefits to completion’ to ensure that whatever is delivered for the first version is capable of meeting the business case originally put forward.
7. DEMAND CAPTURE (NB not the same as demand management, covered in the next point): ensure all demand for IT products and services from the business is captured into a demand pipeline in a structured format in terms of description, reasons, timing, benefits, costs and feasibility. Note that this component is inextricably linked to the first one in the list – without credible client managers in place managing the business relationship, it won’t be possible to have a demand pipeline.
8. DEMAND MANAGEMENT: set up an investment committee or project review board to manage the demand pipeline (above) in terms of business priorities and IT resource and scheduling constraints. The resulting prioritization and approvals process (ideally portfolio-based) will cover both new projects and enhancements to production applications.
9. APPLICATION-LEVEL ASSET MANAGEMENT: however useful your application-level cost and benefit tracking (points 5 and 6 respectively), it is only a snapshot of the present; it doesn’t say anything about the future. For this, you need to combine it with the demand pipeline (point 8 above) to obtain visibility into application change requests and feature requests. With this combined view of both the present and the future, you will then be able to start proper asset management and make informed decisions about funding or retiring applications.
10. PRICING AND CHARGEBACKS: once the above fundamentals are in place, you can finally start to introduce pricing mechanisms for IT products and services. Note that pricing has deliberately been left for last, because once you’re going to start asking people to pay for something, you’d better be very sure that you have your shop in order in terms of demand, supply and quality of service. People will only agree to pay for something which was previously free if they are generally satisfied with it. Rushing into pricing and chargebacks too early can thwart your whole change programme and set you back many years – only do so when it has a chance of being accepted (people have long memories when it comes to bad experiences in IT).
A checklist like the above is one thing; translating it into something actionable is quite another. Realistically, such a change programme cannot be launched company-wide without the CIO dropping the ball in terms of ongoing projects and keeping the lights on. So whether you are starting from a particular pain point, a mature application which is working well, or an event outside your control, here is a recommended approach – most of which is based on real-world experience in a multinational pharmaceutical company and a global telco. There are two main phases:
• The first one is to set up a partnership with a key stakeholder in the business and to deliver a new project with workable results in a short time based on iterative methods. Results, rather than good intentions, will provide you with the credibility to take things further. This first phase is outlined in Figure 9.1.
• The second phase capitalizes on these results by formalizing the IT/business partnership around the new project and introducing application-level asset management. This is shown in Figure 9.2.
Note that there is no direct relation between the numbers in these two figures and the numbered checklist above.
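To make points 4 and 5 of the checklist more concrete, here is a minimal sketch, in Python, of the kind of cost roll-up they describe. It is purely illustrative: the people, day rates and infrastructure figures are invented placeholders, and in practice this calculation would live inside the PPM tool discussed later in this chapter rather than in standalone code.

from collections import defaultdict

# Hypothetical time entries captured per person and application, in days
# (checklist point 4). All rates and infrastructure figures are placeholders.
TIME_ENTRIES = [
    {"person": "analyst_1", "application": "CRM", "days": 12, "type": "development"},
    {"person": "developer_1", "application": "CRM", "days": 20, "type": "development"},
    {"person": "developer_1", "application": "CRM", "days": 5, "type": "operations"},
    {"person": "dba_1", "application": "Order Mgmt", "days": 8, "type": "operations"},
]
DAILY_RATE = {"analyst_1": 600, "developer_1": 550, "dba_1": 650}    # cost per person-day
INFRASTRUCTURE_COST = {"CRM": 15_000, "Order Mgmt": 22_000}          # servers, licences...

def application_costs(entries, rates, infra):
    """Roll up people costs (split into development and operations) and add
    infrastructure costs to get total cost per application (checklist point 5)."""
    totals = defaultdict(lambda: {"development": 0, "operations": 0})
    for entry in entries:
        totals[entry["application"]][entry["type"]] += entry["days"] * rates[entry["person"]]
    report = {}
    for app, people in totals.items():
        report[app] = {**people,
                       "infrastructure": infra.get(app, 0),
                       "total": sum(people.values()) + infra.get(app, 0)}
    return report

for app, costs in application_costs(TIME_ENTRIES, DAILY_RATE, INFRASTRUCTURE_COST).items():
    print(app, costs)

The same roll-up, applied to a project rather than a running application, gives the project-level view mentioned in point 5.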
Figure 9.1 Getting started – first results from a new project (steps 1–6: identify application group to drive change; establish change programme by engaging with stakeholders and identifying the key stakeholder for change; establish iterative skills base, covering process and data modelling, interactive workshop facilitation, iterative development and iterative project management; identify new project and define project team; launch project and interactive workshop session; deliver first version)
Figure 9.2 Next steps – capitalizing on first results (steps 7–10: formalize the IT/business partnership around the new application by assigning a client manager, application manager and service manager and bringing out subsequent releases; implement a tool to support the new processes; formalize application-level asset management, covering cost tracking, benefit monitoring, demand management, asset management reporting and pricing and chargebacks; apply lessons learnt to the rest of the organization)
From the status quo to first results
Let us start by looking at the sequence of events for the first phase shown in Figure 9.1:
1. IDENTIFY APPLICATION GROUP TO DRIVE CHANGE
Objectives: To identify the application group in charge of a key functional area (e.g. sales, marketing, customer service...) which will implement the new model on a pilot basis.
Main deliverables: The application group which will pilot-test the new model, plus an action plan for establishing the change programme with the business and ensuring that IT has the necessary skills to deliver a project based on iterative methods.
Owner: The CIO.
Content: As explained earlier in this chapter, the application group which will lead this initiative can be the result of some trigger event like a highly visible project failure, or be chosen because it is a mature group with a stable application and a good relationship with the business, or the result of external events like regulatory compliance or a merger.
2. ESTABLISH CHANGE PROGRAMME
Objectives: To engage with the business and communicate how things are going to be done from here on and why, and to identify the key stakeholder for a new project.
Main deliverables: A clearly identified stakeholder from the business who is in agreement to partner with IT on the change programme.
Owner: The head of the applications group chosen above, who is a senior manager or director actually responsible for delivering solutions. It is not advisable for the owner of the change programme to be someone in a transverse organization like a PMO or a process and methods department (see ‘What role for PMOs?’ in Chapter 8). Such entities may lack the organizational legitimacy to drive change because they are not always accountable for results, only for providing guidance and best practice and for coordinating projects. If such transverse entities exist then by all means they need to be closely involved, but they should preferably not own the change programme.
Content: For IT to establish a change programme, it must engage with the business at a senior level. For the particular application area chosen, this will mean first meeting with the corresponding senior executives and their direct reports. These initial meetings should be on an individual basis rather than a group meeting, because the emphasis is on mutual self-assessment, bouncing around ideas for change and strengthening personal relationships. Based on the outcome of these one-on-ones, the applications group head will organize a presentation for all the stakeholders he met with to summarize his findings and present a programme for change. Soon after this presentation you should be able to identify a candidate stakeholder and project to move forward with. In terms of duration, this step can take anywhere from 3 to 6 weeks.
It should come as no surprise to learn that a newly hired person stands a much better chance of establishing a change programme, by virtue of not being too closely associated with previous initiatives or projects which did not live up to expectations. Indeed, this was the case for the head of an applications group at a new company. Less than a year after his arrival, and in the wake of a previously failed project, it did not prove too difficult for him to meet all stakeholders and to organize a presentation to summarize his findings. To maximize the chances of success for this presentation, he got an outside consultant to come and pitch part of it. This was because the recently failed project was still fresh in people’s minds, and it was essential for the main thrust of the message to come from a neutral source. The very same day, the key stakeholder agreed to a new project launch (see step 4 further on), which kicked off two weeks later.
This step is done in parallel with step 3 below.
3. ESTABLISH ITERATIVE SKILLS BASE
Objectives: To ensure that IT has the essential skills in terms of process modelling, data modelling, workshop facilitation, iterative development and iterative project management to be able to run a project based on the prototyping approach.
Main deliverables: A clearly identified iterative project manager, plus a business analyst and developers or packaged software consultants with iterative experience.
Owner: The head of the applications group.
Content: Because you are trying to build a partnership with the business, this step should as far as possible be done with internal staff rather than outside consultants. This is essential because you are supposed to be building up a team for the long haul, with new versions coming out at least twice a year. Realistically though, you will end up relying on a combination of internal and external staff. This is fully workable, but on the condition that the project manager be part of IT – this should be non-negotiable. You cannot launch the foundation for a business partnership if it is led by an external consultant, whatever his pedigree; he would simply not have the organizational legitimacy and internal business knowledge to make it work.
In terms of duration, this step can take anywhere from 3 to 12 months depending on your current staff profiles, the development tools you already have in place (not all of them lend themselves well to iterative development), the emphasis you’re going to place on hiring vs training, and the ratio of internal to external staff you’re going to use on the project. One of the main challenges will be to ensure that this does not take too long compared with step 2 above. In practice, you should already have at least an iterative project manager in place before embarking on step 2. If not, you will lose whatever momentum you’ve gained while you are busy training or recruiting people for step 3.
Transverse groups like PMOs or process or best-practice departments are well positioned to take on the role of running interactive workshops. One company took this a step further by creating an enterprise-wide team responsible for finding solutions to business problems using interactive workshops based on process modelling,
regardless of whether they were related to IT projects or not. In the absence of such internally trained staff, interactive sessions can always be run by outside consultants, but there will be a loss of business knowledge once these people walk away. You can limit this to some extent by using the same consultants for successive projects. For example, in the case study in Chapter 10, the same two consultants were used for all of the process workshops over a period of three years (see Figure 10.1), which enabled them to bring an enterprise-wide view and lots of added value to each session. This should be seen as the exception though; as far as possible the people running process workshops should be part of your organization, from IT or the rest of the business, in order to retain vital business knowledge in-house.
4. IDENTIFY NEW PROJECT AND DEFINE PROJECT TEAM
Objectives: To identify a project which will be run based on the iterative approach, and define the corresponding project team.
Main deliverables: A clearly identified project within the functional area of the key stakeholder, and the corresponding project team members.
Owners: IT (head of applications group) and the business (key stakeholder).
Content: As befits the new partnership, both IT and the business should be in agreement on the project chosen to launch the new model. The challenge is to identify one that is not too small in terms of scope and business impact, but also not too large in terms of costs and risk. This however implies a choice of projects, which in the real world is not very common. Therefore the chances are that there will already be a key project waiting to be done, and this will be the default choice. The challenge from here on will be to break the project down into manageable phases with realistic milestones, so that it can be handled through an iterative approach. Once again, this is to be a joint decision by IT and the business based on the newly established partnership.
The project team members should be chosen based on the key business users of the corresponding functional area, and the main IT resources in the applications group. If internal resources are not available at the business analyst and developer/software
package configurator levels, then outside consultants should be brought in and be made part of the project team. In terms of duration, this step is clearly company-specific; a relevant project can appear on the radar screen almost immediately, or 3–6 months later depending on the budget or planning cycle.
5. LAUNCH PROJECT AND INTERACTIVE WORKSHOP SESSION
Objectives: To kick off the project and run an interactive workshop.
Main deliverables: Formal business definitions, process and data models, and the prioritized processes to be addressed during the project.
Owners: IT (head of applications group) and the business (key stakeholder).
Content: This session would run as described in Chapter 5 (section ‘Defining detailed requirements during workshops’), with full participation by the project team. At the end of this session, there should be agreement on the prioritized processes that need to be addressed by this project (section ‘Prioritizing business processes’). In the days immediately following the workshop, the IT project manager and the business owner will agree on the content for the first version and the corresponding timescales.
6. DELIVER FIRST VERSION
Objectives: To deliver the first version of a software solution (buy or build – which option will be taken is not necessarily known until after the workshop) to meet the prioritized processes.
Main deliverables: The first version of a software solution that delivers workable business results.
Owner: IT (head of applications group), with the required participation from the business as part of the iterative approach.
Content: The first version will be developed based on the prototyping approach described in Chapter 5 (sections ‘Building a prototype’, ‘Validating the prototype’ and ‘Implementing a pilot’). In terms of duration, delivering a prototype can usually be done within 1–2 months after the workshop kick-off, and it will then take another 1–3 months to refine in order to have a working product for a pilot. The pilot should then run for at least three months. So total end-to-end time for delivering a successful first version will be between 6 and 9 months depending on scope, complexity and the outcome of the pilot. These numbers correspond to real-world examples of iterative projects, whether they were based on packaged software, in-house development or outsourced development.
At the end of step 6 in Figure 9.1, you should have delivered the first version of a new software solution that was built using iterative methods. If you went about it correctly, then not only will you have obtained workable results much faster than the business is normally used to, you will also have done so with full user participation and buy-in. Successfully completing this first major milestone will provide you with the credibility to take things further and move to the next phase outlined below.
From first results to asset management
Let us now look at the sequence of events for the second phase, shown in Figure 9.2:
7. FORMALIZE IT/BUSINESS PARTNERSHIP
Objectives: To formalize around the new application the key roles of client manager, application manager and service manager, while continuing to bring out subsequent releases.
Main deliverables: Clearly identified people to fill the roles of client manager, application manager and service manager, and subsequent releases of the software over time.
Owner: IT (head of applications group) and the business (key stakeholder).
Content: Because the first version of the software will have just been delivered, the candidates for the key posts of client manager and application manager should already be known, since in all probability they were already part of the project team. Formalizing these roles should thus be a formality. The same would not be true however for the post of service manager, which would be new in most IT departments. It’s also a valid question as to whether this post even makes sense until after at least a year, once the second or third version of the software comes out and processes and service levels have stabilized. The timing of when to introduce this post would therefore depend on each organization, especially in terms of how it adapts to the changing role required for the operations department (see ‘The changing role of the operations department’ in Chapter 8). This step can usually be accomplished within a month, as it is more of an organizational step than a task that actually produces something.
8. IMPLEMENT TOOL TO SUPPORT NEW PROCESSES
Objectives: To evaluate, select and implement a packaged software tool to support both the new processes and the upcoming ones (step 9). Note that given the number of tools on the market to choose from, developing one in-house cannot be justified, on grounds of either cost or features.
Main deliverables: A tool implemented and in use by the main business and IT players associated with the new application.
Owner: IT (head of applications group), with the appropriate participation from the business.
Content: The role of tools is discussed further below (see ‘How tools can help’). There are just two main things to point out here. Firstly, you should only implement the tool to support
the processes already in place – or new processes for which organizational agreement has already been reached. Adapting to both a new tool and to newly-defined processes on the fly will unnecessarily increase the risk associated with the ongoing project. Secondly, even though whatever tool you select will probably support much more than your current processes, the recommendation is to resist going overboard with things that are not currently part of your agenda. Focus on getting the essentials in place first and institutionalizing them; the rest can come later.
The evaluation and selection process for such a tool can vary significantly depending on how companies normally go about such initiatives, which can range from a proof-of-concept with a chosen vendor to a full-blown RFI and RFP process. In practice therefore you’d be looking at anything from 2–6 months. Once a tool has been chosen, implementing it to support the processes up to and including step 9, with a user base limited to the IT applications group and its key business counterparts, can be done within a timeframe of 3–6 months. These numbers are based on real-world examples of such tools with project scopes much larger than that required here.
9. FORMALIZE APPLICATION-LEVEL ASSET MANAGEMENT
Objectives: To implement asset management around the newly-implemented application in terms of cost and benefit monitoring, and to use this as a basis for implementing formal demand management and investment planning.
Main deliverables: Asset management, demand management and investment planning around the new application.
Owner: The business (key stakeholder) and IT (head of applications group).
Content: This will be the most challenging task of the whole road map, because in most organizations tracking an application in the manner explained in Chapter 7 (section ‘Ongoing cost–benefit analysis for applications’) will be something radically new. As if that were not enough, the same will probably be true for managing demand and investment planning as explained in Chapter 4. Investment planning will become even more challenging because you can’t create a project review board or an investment committee to oversee just one application – what about the rest of the company?
Ditto for pricing and chargebacks. Will the rest of the organization play by the old rules while this application plays by different rules? Why should one part of the organization have its demand regulated by pricing or chargebacks while the others continue to have a free lunch? Who will arbitrate? There are no easy answers. Because of these factors, it is very difficult to give practical advice on how to get there. The actual implementation of this step will therefore necessarily be company-specific, with some or all of the components of step 9 implemented before moving on to step 10 and applying the lessons learnt to the rest of the organization.
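As a purely illustrative sketch of the combined view described for step 9, that is, application-level costs and benefits joined with the demand pipeline, the following assumes yearly figures and an arbitrary decision rule. None of the field names, numbers or thresholds come from the book, and in practice this view would be produced by the PPM tool chosen in step 8.

from dataclasses import dataclass

@dataclass
class ApplicationAsset:
    name: str
    annual_cost: float            # from application-level cost tracking
    annual_benefit: float         # from application-level benefit monitoring
    pending_demand_days: int      # change and feature requests in the demand pipeline

    def recommendation(self) -> str:
        """Very rough funding signal combining present value and future demand."""
        ratio = self.annual_benefit / self.annual_cost if self.annual_cost else 0
        if ratio < 1 and self.pending_demand_days == 0:
            return "candidate for retirement"
        if ratio < 1:
            return "review business case before funding new requests"
        return "fund pending requests"

# Hypothetical portfolio snapshot for an investment committee review
portfolio = [
    ApplicationAsset("CRM", annual_cost=400_000, annual_benefit=650_000, pending_demand_days=120),
    ApplicationAsset("Legacy reporting", annual_cost=150_000, annual_benefit=90_000, pending_demand_days=0),
]
for asset in portfolio:
    print(f"{asset.name}: {asset.recommendation()}")

The point is not the decision rule itself, which any real investment committee would refine, but the fact that cost, benefit and pending demand are finally visible side by side for each application.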
The role of best-practice methodologies
It would be impossible to talk about a new model for IT without discussing the role of best-practice methodologies like CMMi, CoBIT, ITIL, PMBOK, Prince2 and Six Sigma, to name the most common. The main driver for all of these methodologies is ‘process improvement’ – or in plain language, using various forms of professionalism to do things better. The granddaddy of all IT methodologies is of course CMM (Capability Maturity Model) and its famous process maturity levels defined by Watts Humphrey of the Software Engineering Institute (SEI) in 1989 in his ground-breaking book, ‘Managing the Software Process’. CMM, which has since been ‘upgraded’ to CMMi, focuses essentially on the software development process. The other common methodologies are:
• CoBIT (Control Objectives for Information and Related Technology), with six levels of maturity similar to CMMi, is essentially an audit-oriented set of guidelines for everyday use by both the business and IT, in areas ranging from governance and risk reduction to outsourcing and audits.
• ITIL (IT Infrastructure Library) is a set of best practices for service management and operations, with a focus on IT production and operational quality, supported by a configuration management database (CMDB).
• Prince2 (Projects in Controlled Environments) and PMBOK (Project Management Body of Knowledge) are both process-driven project management methodologies covering the organization, management and control of projects over their life cycle.
• Six Sigma, which has its origins in industry, is a statistical process-improvement approach which focuses on quality from a customer perspective, with the ultimate objective of manufacturing defect-free products.
Not surprisingly, no one model does it all. They can overlap as well as complement each other – indeed, some organizations implement a bit of each depending on what they want to achieve. We will not be describing these methodologies in any further detail here, because there are plenty of articles (see ‘Further reading’ in Chapter 9) and books on the subject. Rather, we will try and look at their relevance to the new model.
On the plus side, all of the methodologies described get you to think about doing things right. Implemented correctly (i.e. practically, without ‘process police’ enforcing unreasonable levels of compliance...), they can bring about definite process improvements in the areas you are trying to address. Probably the greatest benefit from these methodologies is that they help an organization move up the maturity curve – the concept of maturity is common to most of them – as described below (adapted from ‘The Five Ages of Methodology Sophistication’, by Read T. Fleming):
• The age of anarchy – anything goes!
• The age of folklore – wisdom is passed from one generation of engineers to another, over beer and pizza.
• The age of methodology – the way things are to be done is documented, and they are actually done that way.
• The age of metrics – both the products and the processes are measured in standardized ways.
• The age of enlightenment – productivity is achieved through continuous improvement.
It should come as no surprise to anyone to learn that whatever the area these methodologies address, from software development to service management, few IT organizations are near the top of the scale. Most are at the mid-point or just below – which might explain why beer and pizza are still growth industries...
On the minus side, these methodologies have two main drawbacks. Firstly, most of them have their origins in engineering and industry, which means that not only are they strongly associated with the waterfall method, they also implicitly assume that building software can be modelled on the building of physical things (hence the term software engineering). This explains the over-arching emphasis on getting things right the first time. For software development, for example, rigorous requirements specifications condition successive phases, all adequately backed up by the appropriate documentation, to the point where the test phase should be able to be linked all the way back to the original requirements specification. Another example is project management, in which getting the business case right and nailing down costs and schedules are paramount, and any subsequent changes are managed through a rigorous change management process (probably designed as much to discourage
change as to properly manage it). The second major drawback of these methodologies is that they are primarily focused on doing things right, and focus little, if at all, on doing the right thing. With the possible exception of CoBIT, none of them have success criteria based on business benefits and the outcomes of investment decisions. Rather, it is through compliance that workable results are supposed to occur. But as we saw very early on in this book, you can achieve full compliance with any one of the methodologies (e.g. to spec, budget and schedule) and still not deliver a positive business outcome.
Now since the whole thrust of this book is about accepting the reality that building software is not like building houses or physical things, we should logically expect such best-practice methodologies to be poor candidates for helping to move to the new model. However, things are not that simple. Every single one of them has positive things which can help your IT organization move up the maturity scale – as long as you consider maturity from the perspective of workable results rather than strict process compliance. For example, you can achieve consistency and predictability in terms of delivering workable results, together with the relevant metrics, by managing demand and supply as explained in Chapters 4 and 5 respectively. The case study at the end of this book is a perfect example of this – the business/IT partnership and the iterative approach used enabled the consistent delivery of workable results in timeframes of 3–6 months, with minimal documentation and no requirement to link detailed testing to rigorous upstream requirements. And it enabled a company to embark on an enterprise-wide transformation of its sales, marketing and customer service operations over a period of 4 years, as shown in Figure 10.1.
At the end of the day, best-practice methodologies are really nothing more than codified common sense. There is therefore a lot you can learn from adopting one or more of them as frameworks for improving the way you work – as long as you don’t interpret them literally and become obsessed with compliance to the detriment of workable results. Remember, if your business clients are happy with your performance in terms of delivering workable results in reasonable timescales, they couldn’t care less whether you’re compliant with any alphabet-soup methodology. If, however, IT is not performing adequately, no amount of compliance with any methodology is going to help get you off the hook.
How consulting companies can help
You will almost certainly require outside assistance in getting from here to there. Needless to say, there are lots of consulting companies with the expertise to help you put together a change programme. As explained in Chapter 8 (section ‘The role of External Service Providers (ESPs)’), ESPs usually have people with the workshop facilitation and process and data modelling skills that most IT departments lack. But they also have an agenda which is fundamentally different to yours, and that is to place warm bodies
in your organization for as long as possible. This might lead them to sell you on the vision of things which might or might not be attainable in terms of change management. So be selective when working with consulting companies and favour small change programmes over larger ones, e.g. one or two attainable objectives involving not more than two or three people over a maximum period of six months. Remember, a change management programme is not like a software development project in which consultants are actually doing most of the work. In a change programme, consultants can only point you in the right direction in terms of processes and train your people, who ultimately are the ones who are going to make it work. So if a consulting company ends up proposing an unduly large change programme, the chances are it’s not going to be in your best interests.
How tools can help
Starting to implement the new model, even on a basic scale in selected parts of your company, will require the help of a software tool to support the new processes. It would be inconceivable to manage the end-to-end business processes covered by the new model on a mix of Microsoft Excel, Access or in-house developed systems. It would also be difficult for an IT department to justify using its scarce resources on such an exercise.
Amazingly today, in an age in which most companies would simply be unable to function without IT to manage their production, sales, delivery and service, the IT department remains the proverbial cobbler’s child with no shoes. Whereas the rest of the business like marketing, sales, order management, finance, customer service – even HR – all have their systems, from stand-alone applications to integrated ERP and CRM, the IT department usually has to make do with Microsoft Project and Excel! And yet it has to run a business just as complex as the rest of the company, one based on products, services, orders, resources, projects, technology, finance, inquiries and support. This is an aberration, to say the least!
The main reason for this is the lack of appropriate tools and technology, which has come relatively late to IT. Fortunately, there now exists a mature offering of software packages which allows an IT department to ‘do business’ with its customers, both internal and external. They virtually all fall into the category of ‘IT governance’ or ‘Project Portfolio Management’ (PPM), with functional coverage which includes a combination of:
• Demand capture;
• Demand management;
• Investment planning;
• Project portfolio management;
• Application portfolio management;
• Resource demand analysis (i.e. mapping the effort associated with demand to the available supply of people) and resource allocation;
• Budgeting and financials, including fiscal years and fiscal periods, taxation rules, multi-currency and differentiation between capex and opex;
• Project management (optionally with a two-way interface to the ubiquitous Microsoft Project, a de facto tool in most IT departments);
• Management of non-project work (e.g. small change requests which don’t justify the creation of a project with tasks, but still allow the scheduling and tracking of effort);
• General dashboard reporting and performance monitoring across projects, applications, portfolios, non-project work, resources and customers;
• Cross-charging and invoicing;
• Workflow, which together with the above functionality, would enable an organization to implement processes based on the various methodologies like ITIL, CoBIT, Prince2, etc.
In general, most of these tools propose functionality which is light years ahead of what the average IT department needs as a starting point. However, since implementing the new model is a long-term journey of 2–5 years, it is essential for any tool to be able to cope with your long-term plans. So, as for all software packages regardless of the functional area (ERP, CRM, PPM...), you’re going to have to balance functionality against the corresponding simplicity or complexity, and weigh up the ability of a tool to grow with your organization as its maturity increases.
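To illustrate the ‘resource demand analysis’ item in the list above, that is, mapping the effort behind the demand pipeline to the available supply of people, here is a minimal sketch. The roles, requests and capacities are invented for the example only; a real PPM tool would of course do this per period and per skill profile.

# Demand pipeline expressed as estimated effort per role (person-days),
# compared against available capacity for the coming quarter.
demand = [
    {"request": "CRM release 3", "role": "developer", "days": 60},
    {"request": "CRM release 3", "role": "business analyst", "days": 20},
    {"request": "Order portal pilot", "role": "developer", "days": 45},
]
capacity = {"developer": 90, "business analyst": 30}  # person-days available

def demand_vs_supply(demand, capacity):
    """Sum demanded effort per role and show where demand exceeds supply."""
    needed = {}
    for item in demand:
        needed[item["role"]] = needed.get(item["role"], 0) + item["days"]
    return {role: {"demand": days,
                   "capacity": capacity.get(role, 0),
                   "gap": days - capacity.get(role, 0)}
            for role, days in needed.items()}

for role, figures in demand_vs_supply(demand, capacity).items():
    print(role, figures)

A positive gap for a role is exactly the kind of constraint the investment committee needs to see when prioritizing the pipeline.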
The costs of moving to the new model Intimately linked to any requirement to change business models would of course be the costs of change. What would it cost to move to the new model? What would it cost to do nothing and continue with the current model? Let’s answer the second question first, because it’s the easier of the two. The costs of doing nothing would be a continuation of the costs incurred today in terms of not ‘getting value’ out of IT. Examples could be unacceptable costs from a balance sheet
perspective; poor operational processes with bottom-line and customer impact; missed business opportunities because of an inability to combine innovation with a short time-to-market; unwelcome legal scrutiny due to regulatory non-compliance – all the way to the near-certainty that many of your systems will be earmarked for retirement soon after a merger or acquisition because most of your IT sucks. The answer to the first question now suddenly becomes quite easy, namely that the costs of moving to the new model would pale in comparison to the costs of doing nothing. If we tried to quantify these costs at a high level, this is what you could expect: • Additional heads in the business for the new role of application manager (count 1 for each key functional area or key business process). • Additional heads in IT for the new roles of client manager and service manager (maps directly to the previous point, i.e. count 1 for each key functional area or key business process). • IT training costs to bring selected staff up to speed on the new model, mainly for project management, workshop facilitation, process and data modelling and iterative development. Count a 3–5 day training course for each of the four areas, plus another few weeks for consolidation of newly acquired skills. • New hires, since the new skills will necessarily require a mix of training and hiring. Count two or three people per key application area. • Consulting assistance from an ESP for a change management programme (count two people for 3–6 months). Note that this would represent the total costs over 3–5 years (a rough, purely illustrative roll-up of these items is sketched at the end of this section). But since you would start with a pilot as explained in the change programme proposed in this chapter, the initial first-year costs would only apply to a small part of both IT and the business. In the real world therefore, the sums involved should fit comfortably into the annual operational and IT budgets of any company that is serious about wanting to improve its IT effectiveness. Finally, any discussion of costs would not be complete without a discussion of benefits, which can be summed up as IT delivering reliable solutions in acceptable time frames, at acceptable costs and with clear business benefits. And even though it will not always be possible to quantify these benefits in financial terms (see ‘The limits of financial ROI when applied to IT’ at the end of Chapter 7), the operational excellence and business
agility that would be built up over time would make any cost question irrelevant – as the case study in the next chapter will show.
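As promised above, here is a rough roll-up of the cost items listed in this section. It is a purely illustrative Python sketch: the headcounts, salaries and rates are hypothetical placeholders rather than figures from this book, and you would substitute your own.

def new_model_costs(
    key_areas: int = 3,                       # key functional areas or business processes
    loaded_salary: float = 100_000,           # hypothetical annual cost per additional head
    staff_to_train: int = 10,                 # selected IT staff to bring up to speed
    courses: int = 4,                         # project mgmt, facilitation, modelling, iterative dev
    course_cost_per_person: float = 2_500,    # hypothetical cost of one 3-5 day course
    new_hires_per_area: int = 2,
    consultants: int = 2,                     # ESP change management assistance
    consulting_months: int = 6,
    consultant_monthly_rate: float = 20_000,  # hypothetical monthly rate
) -> dict:
    # Recurring heads: one application manager in the business plus one client
    # manager and one service manager in IT per key area, plus new hires.
    recurring_per_year = (1 + 2 + new_hires_per_area) * key_areas * loaded_salary
    # One-off costs: training courses and the consulting engagement.
    one_off = (staff_to_train * courses * course_cost_per_person
               + consultants * consulting_months * consultant_monthly_rate)
    return {"recurring_per_year": recurring_per_year, "one_off": one_off}

print(new_model_costs())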
In closing – addressing the three fundamental questions Running an IT department is ultimately about being able to successfully answer the following questions – in the following order: 1. Are we building the right things? In other words, is IT effort and expenditure in line with justified business objectives, or do we just cater to those business executives who have the most influence? 2. Are we building things right? In other words, are we taking into account the reality that meshing human behaviour and organizational processes in order to create workable systems is an iterative process, whose benefits only really unfold over time? Or do we assume that they are physical things like houses or bridges which can be neatly spec’d, signed off and tossed over the wall to the IT department or to a vendor for delivery – and that any subsequent work is considered an anomaly? 3. Are we managing assets? Once we’ve started delivering on our projects: • Are we able to track the resulting application costs down to the granular level of product development, operations and infrastructure? Or do we simply lump them into a catch-all category called maintenance and keeping the lights on, thereby ensuring that the CFO focuses on the whole instead of on the sum of the parts? • Do we actually have business people to measure the benefits and thus manage the resulting applications as assets to help realize the business case over time? Or do we just assume that the original business case is still valid and that the business benefits have been automatically flowing since delivery? If you are unable to address each of the above fundamental questions, then you should seriously consider reviewing the business model which you’re working to. It is entirely possible that the assumption about IT being likened to a building contractor in the construction industry has become an implicit thread running through your processes, from investment management to service delivery. If this is the case, then you will probably assume that the way you buy or build software (question 2) is not the issue because it’s always been done that way for the past 50 years and you see no reason to think it should be done otherwise. This would result in you focusing unduly on cost control of yesterday’s projects rather than trying to ensure you fund the right projects in the first
place (question 1) and then ensure that you’re actually deriving business benefit from the resulting assets (question 3).
Further reading Once an organization has made a decision to operate under a particular business model, that decision is only the first step of a long journey. All a business model attempts to do is lay the ground rules for how an IT department markets, builds and sells its products, at what costs and margins, and how it interacts with its customers. After that, the devil is in the details, spanning a myriad of subjects like budgeting and cost management, programme and project management, infrastructure and architecture, leadership, risk management... and much, much more. The recommended reading here is not about the devil in the details – there are dozens of books on each of those subjects. Rather, it is designed to tie into the general theme of this book: on the one hand, to get you thinking out-of-the-box and challenging conventional wisdom where appropriate; on the other, to show how some of the subjects in this book can actually be applied in the real world.
Articles • ‘No Crystal Ball for IT’, by Harvard Professor Rob Austin, at http://www.cio.com/article/8101/No_Crystal_Ball_For_I.T. • ‘The Software Construction Analogy is Broken’, by Mishkin Berteig at http://www.kuro5hin.org/story/2003/3/13/211831/159 • ‘The New Methodology’, by Martin Fowler at http://www.martinfowler.com/articles/newMethodology.html • ‘Business is Business’, by Mark Hall at http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=278634&source=rss_news50 • ‘How Agile Development Can Lead to Better Results and Technology-Business Alignment’, by Thomas Wailgum at http://www.cio.com/article/109751/How_Agile_Development_Can_Lead_to_Better_Results_and_Technology_Business_Alignment/1 • ‘Iterative vs waterfall software development: Why don’t companies get it?’, by Bill Walton at http://www.computerworld.com/developmenttopics/development/story/0,10801,90325,00.html?SKC=development-90325
• ‘Quality Model Mania’, by Gary Anthes at http://www.computerworld.com/developmenttopics/development/story/0,10801,90797,00.html
Books • ‘VALUE-DRIVEN IT MANAGEMENT – Commercializing the IT Function’, by Iain Aitken (2003). Computer Weekly Professional Series. • ‘MANAGING INFORMATION TECHNOLOGY FOR BUSINESS VALUE - Practical Strategies for IT and Business Managers’, by Martin Curley (2004). Intel Press. • ‘THE TECHNOLOGY GARDEN – Cultivating Sustainable IT-Business Alignment’, by Jon Collins, Neil Macehiter, Dale Vile and Neil Ward-Dutton (2007). John Wiley & Sons Ltd. • ‘THE NEW CIO LEADER – Setting the Agenda and Delivering Results’, by Marianne Broadbent and Ellen S. Kitzis (2005). Harvard Business School Press. • ‘THE CRM PROJECT MANAGEMENT HANDBOOK – Building Realistic Expectations and Managing Risk’, by Michael Gentle (2002). Kogan Page. • ‘WIT’, by D. MacHale (1997). Prion Books. (Source of some of the quotes used in this book.)
10
Case Study
Good judgement comes from bad experience, and a lot of that comes from bad judgement. (Anonymous, from the quotes archive on www.jokes2go.com)
The following case study illustrates how the new business model applies in the real world, with particular emphasis on the following aspects: • Building the business case; • Defining the budget; • Building an IT–business partnership; • Building a cross-functional project team; • Piloting a prototype based on iterative methods; • Bringing out subsequent releases and versions; • Measuring business benefits in operational terms. Needless to say, the company in question did not ‘apply the new model’ (this case study took place in 1996–2000) – it simply had business practices in place which reflected many of the essentials proposed in this book. At the end of the case study we will summarize the main lessons learnt and relate them back to the new model.
The company A European subsidiary of one of the top ten pharmaceutical companies. Country revenues of over US $500m, and a sales force of over 600 sales reps and sales managers.
The business problem Customer service is not a term one would normally apply to the pharmaceutical industry, but it is nonetheless a very real requirement, since a physician can call up the pharmaceutical company that produces the drugs she prescribes. The main reason would be to ask for medical information about a product, e.g. a patient comes to a physician to be jabbed with a new vaccine, and then realizes he forgot to put it in the fridge when he bought it the day before, and is it still usable? Other reasons could be to request product samples, or to register for a company-sponsored event. Pharmaceutical companies usually put these customer interactions into two distinct categories: ‘medical’ and ‘operations’. For those of you not familiar with the pharmaceutical industry, doctors and physicians are the same thing. One just has a better image than the other – a bit like used cars vs pre-owned cars. So the industry usually prefers the term physicians. Like most pharmaceutical companies, this one handled medical and operational questions separately, which had two major drawbacks. Firstly, it required the customer (which means physicians or pharmacists – not you and me who are the actual patients) to deal with separate departments, usually not staffed to deal with enquiries; customers were therefore either put on hold or transferred – when they didn’t simply hang up. Secondly, the absence of feedback between departments ensured that medical and operations often remained ‘blind’ to issues which might otherwise concern each other. For example, an unusual recurrence of a question on product X could be the result of a medical issue, a promotional issue or even a competitor campaign – which should normally be channelled to marketing so that the appropriate corrective action can be taken. In reality of course, this rarely happened because – to put it mildly – the left hand didn’t know what the right hand was doing. Even though most product-related questions were repetitive (a pharmaceutical product usually generates about 20–30 FAQs), what passed for customer service was characterized by unanswered questions, lost calls and an absence of feedback between medical and operations. There was also no standardization of medical information across functions, e.g. multiple versions of Q&As (questions and answers) existed for each department, each with its own ‘official’ answer (all of which were of course ‘medically’ correct, but nonetheless inconsistent).
The project context On the operations side, the company had made enormous progress in the space of just two years in reorganizing its sales and marketing from separate, product-oriented organizations to a customer-centric organization. Today this would be called CRM; at the time
it was simply called ‘being customer-centric’. At the origin of this successful transformation was a very basic business problem: one of the company’s best-selling drugs, which contributed a significant chunk of annual sales, was being threatened by the arrival of generics (which are legal copy-cat drugs) as it approached the end of its patent protection (drugs are protected by 20-year patents, about half of which is spent developing the drug and bringing it to market). Simply put, the CEO had to find a way to stave off this potential disaster-in-the-making, and one of the answers turned out to be differentiation through superior customer service. As part of this transformation to a customer focus, a customer-centric information system and a data warehouse had by now been running for over a year, capturing all sales and marketing interactions against a single, company-wide physician database (replacing over 100 disparate physician databases scattered across three BUs). Systems to support new processes for sales administration and marketing events were also in place. Finally, a new SFA system had been successfully rolled out, fully integrated with the data warehouse. The business and IT therefore had a solid partnership with mutual credibility, and both had built up a store of knowledge and experience of sound business practices which boded well for the future.
Building an IT–business partnership Since the approach used for the project in this case study was based on the successes described above, it is essential to understand how they came about. Three years earlier, the company launched an SFA project which failed so spectacularly that it probably broke every rule in the book, from an unrealistic business case to massive budget overruns and lack of user buy-in. When it finally went live a year behind schedule, it was so unworkable that the whole project was halted after only three months and written off at great expense. As sometimes happens in such cases, the necessity to learn from past mistakes resulted in an almost cathartic revamping of the whole way IT interacted with the business (on the sales and marketing front only – it was not an enterprise-wide initiative). Using the approach outlined in Figures 9.1 and 9.2 in the previous chapter, fundamentals were put in place across the board, from how the business would interact with IT and launch new projects to how to define requirements and whether to buy or build systems. The main foundation for this new way of working was a business/IT partnership to replace the previously conflictual client–vendor relationship. Probably the most fundamental change proposed by the CIO – and wholeheartedly endorsed by the CEO – was that the company would henceforward only launch a project if it had an active executive sponsor at board level and a business owner for its day-to-day running. This effectively put paid to the situations in which either IT or the business could go ahead and launch projects on their own without true accountability for results. The end of the traditional
client–vendor relationship also logically resulted in the waterfall method being replaced by an iterative approach to software solutions based on a combination of JAD (joint application design) and RAD (rapid application development). In the IT department, the sales and marketing applications group entered their time on a daily basis into an in-house developed system, to the level of granularity of project/application and work code, e.g. analysis, development or testing. The results were used to track costs and to provide input into the annual planning cycle. Finally, each newly implemented application had a single point of contact on the business and the IT side, responsible for jointly evaluating demand (captured into an extension of the time-entry system mentioned above) and managing the calendar of subsequent releases. By applying these fundamentals, the new IT–business partnership was able to reduce project cycle times from over a year to less than six months, eliminate ‘corrective maintenance’ and transform the sales and marketing organization as described at the start of this section and summarized in Figure 10.1. The organization was therefore at a very high level of customer and process maturity, and the logical next step was to address the issue of customer service.
Figure 10.1 Sales, marketing and customer service evolution over four years:
• Year 1 – Unreliable customer info for sales & mktg: the ‘age of folklore’; unreliable data (100+ physician databases); islands of automation; failed SFA project.
• Year 2 – Single view of the customer for sales & mktg at HQ: data ownership issues resolved; BU reorganization; cross-functional workshops; single customer database; data warehouse holding customers, calls and prescriptions.
• Year 3 – Single view of the customer for sales in the field: cross-functional workshops; SFA for sales force; all customer-facing applications plugged into the data warehouse.
• Year 4 – Single view of the customer for customer service: customer service initiative; ‘one-stop-shop’ contact centre with company-wide integration.

Kicking off the project Unlike most projects, which usually start with a more or less well-defined business case, this one did not, for the simple reason that no-one had any idea at this stage what the final solution would look like even from an organizational perspective, never mind a systems
perspective. The only thing that was clear was that there was a key business problem to resolve, and a solution first needed to be found before a business case could be built. The executive sponsor (the operations director) therefore assigned a dedicated project owner from the business to tackle the issue of customer service. This person then set up a cross-functional project team from marketing, sales, medical, clinical safety, HR and of course IT. Besides the representative, cross-functional nature of the team, one of its key strengths was the strong belief in CRM brought to the table by three members of the project team: • The clinical safety director, whose forward-thinking views on CRM were instrumental in getting the medical department to break out of its traditional ‘librarian’ role, and adopt a proactive, customer-facing role with a service culture; • The project owner from marketing, newly arrived in the company from the mail-order business; • The IT project manager, a CRM advocate from a non-pharmaceutical background who’d managed the customer-centric information system, the data warehouse and the SFA projects implemented over the previous two years. In an industry very much characterized by people in the medical and related professions moving from one pharmaceutical company to another, these non-traditional outsider views were critical in encouraging people to think ‘outside the box’.
Feasibility study and defining a solution The team kicked off a feasibility study, which had three main parts: • External input: physicians were invited to a customer feedback meeting and asked to give their views concerning ‘customer service’. Not surprisingly, they wanted to deal with as few people as possible in as short a time as possible. This was particularly important when they had a patient in front of them and they needed a quick answer. They also wanted to be able to use the same channel for other interactions like adverse effects reporting, seminar/event registration, etc., without having to call their sales rep. The key feedback from this session, and a subsequent survey, was the requirement for a one-stop-shop call centre. • Internal input: monthly tracking of calls to the telephone switchboard (via the PABX) revealed a lost call rate of 15%. There was also a one-week, company-wide survey, during which every person potentially in touch with customers filled out a log of who
called and for what purpose. The analysis showed that questions were asked by physicians (39%), pharmacists (15%) and sales reps (13%). These figures confirmed that there was a real need for product information. • Benchmarking: in an attempt to compare themselves with the industry, competitors and non-competitors alike, a number of standard questions were prepared and calls made by doctors on the project team to other pharmaceutical companies. The results were very poor, with just one out of the ten companies called able to provide an acceptable level of service. The key feedback from this was that most other companies were equally bad, and that there was a window of opportunity for differentiation through superior customer service.
Building the business case In an attempt to take medical information out of its ‘librarian’ status, and create competitive advantage through real customer service which closes the loop with operations, the following business objectives were defined: • A one-stop-shop contact centre for all inbound customer contacts, whatever their nature (medical information, documentation, samples...) and whatever the channel (phone, fax, mail), with a unique phone number to be published in the national medical dictionary of prescribable products, i.e. the physicians’ ‘bible’; • Reflecting the horizontal, customer-facing nature of the contact centre, it was to be jointly run by both the medical and operations groups (an organizational revolution, for those who know the pharmaceutical industry); • No lost calls; • FAQs, which constitute 80% of all product-related questions, to be handled by non-specialists at the first point of contact, adequately supported by a knowledge base containing the official, company-validated answers; • A first-call resolution rate of 80%, i.e. the percentage of all calls to be answered at level one, without transfer to a company doctor at level two; • Any level two transfers not resolved while on the line to be closed within three days; • 100% customer satisfaction six months after launch. The business objectives described above were all operational. There was no attempt to quantify the resulting business benefits in terms of increased sales or decreased costs. In
other words, the business case was not based on any form of ROI in the traditional sense, only on improving customer service.
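Because these objectives are operational rather than financial, they can be measured directly from the contact-centre log. The following sketch shows how three of the headline measures – lost-call rate, first-call resolution and closure of level two transfers within three days – might be computed. It is illustrative Python only; the record fields and sample data are invented for the example, not taken from the actual system.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class LoggedCall:
    # One logged customer contact (fields invented for this example).
    answered: bool                     # False means the call was lost
    resolved_at_level_one: bool        # answered by a non-specialist agent
    opened: date
    closed: Optional[date] = None      # None means still open

def contact_centre_kpis(log: List[LoggedCall]) -> dict:
    answered = [c for c in log if c.answered]
    level_two = [c for c in answered if not c.resolved_at_level_one]
    closed_in_three_days = [
        c for c in level_two if c.closed and (c.closed - c.opened).days <= 3
    ]
    return {
        "lost_call_rate": 1 - len(answered) / len(log),
        "first_call_resolution": (
            sum(c.resolved_at_level_one for c in answered) / len(answered) if answered else 0.0
        ),
        "level_two_closed_within_3_days": (
            len(closed_in_three_days) / len(level_two) if level_two else 1.0
        ),
    }

# Targets from the business case: no lost calls, 80% first-call resolution,
# and level two transfers closed within three days.
sample = [
    LoggedCall(True, True, date(1998, 5, 4)),
    LoggedCall(True, False, date(1998, 5, 4), closed=date(1998, 5, 6)),
    LoggedCall(False, False, date(1998, 5, 5)),
]
print(contact_centre_kpis(sample))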
Project approach The project team now kicked into high gear, adopting the tried and tested approach used for the previous projects, i.e. a 2–3 day offsite JAD workshop for process and data modelling, which would be all the more challenging in that for the first time they would be defining new processes, and not just formalizing changes to existing processes – which explains why even HR was part of the project team. One week later, they had a 30-page requirements document which enabled them to evaluate technical solutions.
Product evaluation – buy or build decision The company found itself, from a strategic and systems perspective, clearly ahead of its time – the very concept of ‘customer service’ for physicians (and pharmacists) in the pharmaceutical industry was a novelty in the late 1990s, so it was no surprise that there were no packages on the market. There were of course customer-service packages with call-centre software, but the processes on which these packages were based did not fit well with the required pharmaceutical processes, for the following reasons: • A customer service package usually requires upfront customer identification as a prerequisite for continuing the call. When a physician calls up, however, she’ll usually just mumble her name in passing and then launch straight into her question. Now if you really want to work up a physician’s ire – or get her to hang up and prescribe a competitor product – start by asking her for personal details which have nothing to do with her problem, usually a patient sitting in front of her. It’s like when you call up a cab company and say ‘Hi, I’m currently at location X, and would like a cab to go to location Y’, and they almost cut you off by saying ‘I first need your name’, as if they can’t ask you that afterwards. • Once on the line, a physician can ask more than one question, each of which needs to be uniquely identified and tracked for reporting purposes. Just about all customer service packages at that time were based on enquiries or tickets which represented a single item. • When dealing with a customer, the call-centre agent has to be able to view customer interactions across all channels. Most service packages, especially in the mid-1990s, were not yet on the CRM curve whereby all interactions for a customer were stored, and not just tickets or enquiries.
• Integration with the company’s customer-centric information system meant that any call-centre software had to be consistent with its data model, especially the many-to-many relationship between customers and institutions (also known as affiliations). This requirement is specific to the pharmaceutical industry, and is absent from customer service packages. • The system had to have a knowledge base for FAQs. Adapting a traditional, procedural and process-heavy customer service package, then possibly interfacing it to a knowledge base from a different vendor, then interfacing it all to the company’s own systems was clearly not a cost-effective proposition. Today you might be able to buy something off-the-shelf close enough to customize, but this was not even an option then. So a decision was reached fairly quickly to build the required system.
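The two data-model points that ruled these packages out – a physician or pharmacist affiliated to several institutions, and a single enquiry carrying several questions, with the customer often identified only late in the call – can be sketched as follows. The Python classes, fields and names are invented for illustration and are not the data model the company actually built.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Institution:
    # A hospital, clinic or group practice.
    name: str

@dataclass
class Customer:
    # A physician or pharmacist in the company-wide customer database.
    name: str
    affiliations: List[Institution] = field(default_factory=list)  # many-to-many link

@dataclass
class Question:
    # One tracked item; a single enquiry can carry several of these.
    text: str
    faq_id: Optional[int] = None       # set when matched to an official FAQ

@dataclass
class Enquiry:
    channel: str                         # phone, fax or mail
    questions: List[Question] = field(default_factory=list)
    customer: Optional[Customer] = None  # often identified only later in the call

# Illustrative usage: one call, two questions, customer identified at the end.
enquiry = Enquiry(channel="phone")
enquiry.questions.append(Question("Is the vaccine still usable if left out of the fridge?"))
enquiry.questions.append(Question("Can I get samples of product X?"))
enquiry.customer = Customer("Dr A. Martin", affiliations=[Institution("City Hospital")])
print(len(enquiry.questions), enquiry.customer.name)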
Building a prototype Using the process modelling output from the JAD sessions, the project team subsequently designed a solution to handle the following high-level processes: • To address the FAQs which would constitute over 80% of all calls, the contact-centre agents would rely on a keyword-driven knowledge base containing a list of official, company-validated questions and answers. Any question not part of the official FAQ list would be logged against the enquiry, and the call transferred to a company physician who would be able to view the same enquiry through basic workflow. • After successfully answering the enquiry, the agent would then ask the caller whether he wanted a follow-up validation fax or letter on an official company letterhead and signed by an authorized medical authority. This would trigger the printing of the appropriate page, which would then be signed and manually faxed or stuffed into an envelope (automation would come later and depend on actual volumes). • The agent would then ask the caller for his name and address, which it was hoped he would provide to enable a check against the information in the central customer database. This would enable continuous monitoring and improvement of data quality. • An enquiry would be fully owned by the level one agent, even if it went to level two. Any fax or mail follow-up, or any request for literature which required a walk to the nearby cabinet of product literature, would always be handled by the owner of the enquiry, who would carry it out during slack time or any other time during the day. In order to guarantee the highest level of service and job motivation for the agents, excessive workflow and Taylorization were ruled out right from the start.
• Full two-way integration with the customer-centric information system on a nightly basis would enable: (1) call-centre agents to be aware of any other prior customer interactions, e.g. sales calls or marketing interactions from other channels; (2) sales reps to be aware, on their SFA laptops, of any calls made by their customers to the contact centre which occurred the day before. IT defined the data model and sketched out on paper what the screen prototype would look like to address these processes (one week), then outsourced the development of a working prototype (two months), which was then given to two of the future call-centre agents for validation (two weeks). The corresponding feedback resulted in refining the prototype to produce the finished product (four months), which was then system-tested (one month) in time for an on-schedule implementation. The outsourced development was in line with the iterative approach used by the team, in other words it was not based on a signed-off SoR which was then to be delivered on a fixed-fee basis, as is usually the case when an external vendor develops a solution for a client. Even though most of this outsourcer’s business was based on the traditional SoR plus fixed fee approach, it had no issues adapting to the iterative approach required. At the first meeting, the IT project manager and the analyst/developer who had designed the prototype sat down with the developer from the outsourcer who was going to produce it. The only documentation that passed between them was the workshop deliverables (process and data models) and the prototype screens sketched out on paper. The financial approach used was time-and-materials capped at one or two months, and then extendable if required. A similar approach was used for the final version produced from the prototype. At least twice per month there would be face-to-face status and review meetings to monitor progress and agree on the inevitable mid-course design corrections. The outsourcer was, in effect, an extension of the company’s IT department, with the same approach used for internal IT resources. The resulting solution had the following features: • A keyword-driven Q&A knowledge base populated with FAQs; • Online access to customer addresses and contact history; • Facilities for reply by phone, fax or mail; • Workflow routing of non-FAQs from level one non-specialists to level two specialists (i.e. company doctors); • Email routing of non-medical requests to relevant departments within the company (e.g. samples, event registration...);
• Email routing to the clinical safety department of all drug adverse effects logged by the level one agents; • An interface to the sales and marketing data warehouse, enabling enterprise-wide integration. A pilot was not considered necessary to test the new system and processes for the following reasons: • There was no external publicity made for the launch of the new department (this was to follow later), and therefore no customer expectations to meet. • There were no potentially recalcitrant users to placate, or changed processes to monitor. It was a newly-created department, with highly motivated staff eager to start. • The main requirement of the new department was to be able to answer FAQs over the phone. In the worst-case scenario with the complete system down, call-centre agents could still meet this objective using a stand-alone version of the knowledge base on their PCs. Any absence of online customer identification and any inability to interface to the data warehouse would only impact internal reporting, not customer satisfaction.
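Before looking at the results, here is a minimal sketch of the two core mechanisms described in this section: the keyword-driven FAQ knowledge base used by the level one agents, and the routing of anything else to a level two company physician. It is illustrative Python only; the FAQ entries, keywords and function names are invented and are not taken from the actual system.

from typing import Optional

# Hypothetical knowledge base: keyword sets mapped to official, company-validated answers.
KNOWLEDGE_BASE = {
    ("vaccine", "fridge"): "FAQ 12 - official, validated answer on cold-chain storage",
    ("product x", "samples"): "FAQ 15 - official, validated answer on sample requests",
}

def lookup_faq(question: str) -> Optional[str]:
    # Return the official answer whose keywords all appear in the question, if any.
    q = question.lower()
    for keywords, answer in KNOWLEDGE_BASE.items():
        if all(k in q for k in keywords):
            return answer
    return None

def handle_question(question: str) -> str:
    # Level one flow: answer from the FAQ list, or log the question and escalate
    # to level two while the enquiry remains owned by the level one agent.
    answer = lookup_faq(question)
    if answer:
        return answer
    return "Logged against the enquiry and transferred to a company physician (level two)"

print(handle_question("Is the vaccine still usable if it was not kept in the fridge?"))
print(handle_question("What are the long-term interactions with treatment Z?"))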
Results All of the business objectives were either met or exceeded. In the resulting two-tier organization, non-specialist customer service reps at the first point of contact ran at a first-call resolution rate of around 90% (vs the original target of 80%), with the remaining 10% transferred to specialists. FAQs which previously took up to a week or more to be answered (when they were answered at all) were now being handled in less than 30 seconds, with the full enquiry wrapped up in a minute. Customer satisfaction measured by an independent outside company was 99% within the first month. Most pharmaceutical companies would already consider it a remarkable achievement to be able to know how many questions were asked each month, regardless of which ones and for which products or therapeutic class. Here we had a company which was able to operate down to the most detailed level, i.e. able to identify how many times a particular question was asked about a particular product in a particular therapeutic class – and by which physician or pharmacist in the company-wide customer database. There was even a screen called the ‘Top Ten’, which showed, over any period of time (day, week, month), the top ten questions asked.
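The ‘Top Ten’ screen is essentially a frequency count over the logged questions for a chosen period. A sketch of the underlying query, again in illustrative Python over hypothetical log records, could look like this:

from collections import Counter
from datetime import date

# Hypothetical enquiry log: one (date, product, FAQ number) row per question logged.
question_log = [
    (date(1998, 3, 2), "Product X", 15),
    (date(1998, 3, 2), "Product X", 15),
    (date(1998, 3, 3), "Product Y", 7),
]

def top_questions(log, start: date, end: date, n: int = 10):
    # Top n (product, FAQ) pairs asked between start and end - a day, week or month.
    counts = Counter(
        (product, faq) for day, product, faq in log if start <= day <= end
    )
    return counts.most_common(n)

print(top_questions(question_log, date(1998, 3, 1), date(1998, 3, 31)))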
Close-the-loop, weekly cross-functional meetings were held between the contact centre and other departments to review the top questions asked, and identify any trends which would require corrective action from a particular department to eliminate or reduce the occurrence of a particular question. Finally, the fact that the new system was plugged into the data warehouse enabled enterprise-wide integration between the field and the call centre. What this meant in practice was that call-centre agents saw on their screens that the physician on the phone had been visited the previous day by sales rep Jane Smith – who in turn would see on her laptop the next day that one of her doctors had called up with FAQ #15 related to product X and requested some samples. Jane Smith would then immediately call up ‘her’ physician and propose to drop by with the samples and perhaps spend a few minutes talking about the reason for yesterday’s call, thereby strengthening the customer relationship. On the organizational front, IT and the business also formalized the roles of two of the key players on the project team, creating single points of contact on the IT and business side to manage demand and to monitor costs and benefits. There was less emphasis on monitoring costs, because they were relatively low in absolute terms for all three cost categories: product development, infrastructure and operations. Almost all of the emphasis was on benefit monitoring which, given the results outlined above, was not really surprising.
Timescales Total elapsed time was: (1) six months from project launch through feasibility study to formal definition of objectives; (2) nine months from definition of objectives through requirements, evaluation, development and implementation.
Three months later The contact centre soon moved beyond just answering FAQs. It served as a means to monitor and improve data quality: physicians in a hurry and used to poor service will usually hesitate to spend non-productive time providing their name and address details to a call-centre agent. However, with service now characterized by the phone being picked up in three rings or less, and FAQs answered within the space of 30 seconds (compared to days or weeks beforehand), callers were only too happy to show their gratitude by allowing the agent to fully check their name and address against the company’s central database. Probably the most business-sensitive use of the contact centre was the ease and speed with which the company was able to handle official communication about mad cow disease, which broke out in 1997. Whereas other companies had to frantically scramble to set up or outsource a dedicated phone number and call centre to handle queries, all that
was needed was to define a number of FAQs reflecting the company’s official line on the subject, and put them in the knowledge base – all in literally 48 hours.
One year later There was a 50% increase in the number of questions answered, with the first-call resolution rate still above 90%, and customer satisfaction still at 99%. During a major product launch, the number of calls per day increased fivefold during the first few weeks, providing vital feedback which enabled quick corrective action, e.g. for product packaging and documentation. As new questions were also logged (i.e. those not yet in the knowledge base), multiple occurrences could be identified, and new FAQs could be set up literally on a weekly basis following product launch. The only black spot on the horizon was a classic case of resistance to organizational change. This concerned the sales force, who did not initially buy into the project, out of a fear that this channel could, over time, threaten their livelihood by replacing their face-to-face channel as the main source of information for physicians. Despite official communication to the contrary, there was an implicit boycott of this new channel by the sales force, to such an extent that a significant part of the sales force did not even, as requested, publicize the service to their physicians during their visits! It took almost two years before the sales force came to accept that the one-stop-shop contact centre was simply one more component of the channel mix, and in no way threatened the traditional face-to-face sales channel. By then, they themselves had become frequent callers to the contact centre, now convinced of the benefits of having access to the same up-to-date product information provided to their customers.
Two years later • Creation of an official email address, and opening up of a web site as an official channel through which physicians and pharmacists could ask questions; • Recognition of the department by a national institute of quality management; • Quality of service now controlled by audit, in addition to customer satisfaction surveys.
Main lessons learnt (on the plus side) As this was the third in a series of successful CRM-related projects (see Figure 10.1), both the business and IT had already reached a level of experience whereby they were simply applying the business fundamentals learnt over the years:
• Executive sponsorship, and a dedicated project owner from the business; • A business case with measurable, operational objectives; • A business/IT cross-functional team; • Real-world customer input to help drive the solution; • A workshop-based, iterative approach to software solutions.
Main lessons learnt (on the minus side) Organizational resistance to change by the sales force. This was clearly foreseen right from the start, since it became apparent during the feasibility study. Despite all attempts to convince them that their jobs were not at stake, this reality was only accepted almost two years after launch.
Comments with respect to the new model There are two ways of looking at this case study with respect to the new model. The first would be to describe how it differs from the traditional model. The most visible differences were the absence of the following: • A standard client–vendor relationship with its contractual obligations and sign-offs (replaced by a partnership working towards mutually agreed objectives); • A detailed business case as a pre-requisite for starting the project (replaced by a feasibility study to determine what the best solution would be, with the business case only defined afterwards based on the outcome of the study); • IT–user interviews resulting in a detailed SoR (replaced by a three-day workshop session resulting in a 30-page document containing business definitions, process and data models); • A traditional project manager (replaced by an iterative project manager who, in addition to managing tasks and deliverables, also had to play a creative role in proposing a workable solution and ensuring the buy-in of the project team); • Sequential teams of analysts, developers and testers working to the waterfall method (replaced by people with the combined role of analyst/developer, using iterative methods in conjunction with business users);
• ‘Corrective maintenance’ within the first few months to fix serious design and usability issues (replaced by minor ongoing releases to fix bugs and incorporate usability suggestions from the call-centre agents). The second approach would be to check off the items from the checklist of Chapter 9 (see ‘How to start – from checklist to action plan’, p 124): 1. CLIENT MANAGEMENT: Yes; 2. APPLICATION MANAGER: Yes; 3. ITERATIVE METHODS: Yes; 4. TIME ENTRY: Yes; 5. APPLICATION-LEVEL COST TRACKING: Yes; 6. APPLICATION-LEVEL BENEFITS TRACKING: Yes; 7. DEMAND CAPTURE: Yes; 8. DEMAND MANAGEMENT: Partially (though all demand was captured into a pipeline and managed at application level, there was no link to any form of portfolio management); 9. APPLICATION-LEVEL ASSET MANAGEMENT: No (though the costs for all applications were known, and benefits measured for some of them, there was no attempt to explicitly tie them together to produce a cost–benefit analysis); 10. PRICING AND CHARGEBACKS: No.
Reader feedback While things are still fresh in your mind, you might want to share your thoughts and respond to a snap survey (one question only – we’re all busy people!) about how useful you found this book. To do so, please visit my website at www.michaelgentle.com, where you will also find the most recent survey results.
Index
action plan 124 Activity-Based Costing (ABC) see pricing agile see iterative development Aitken, Iain 144 allocations see pricing annual plan see budget Anthes, Gary 144 application asset manager 100 cost-benefit analysis 96–100 financial asset 96 inventory 102 reducing lifetime costs 100 retiring 101 Application Portfolio Management (APM) see portfolio management architecture 47, 92 Austin, Rob 59, 143 benefits as part of P&L 95 business value 46 intangible 46 monitoring 83 operational 46 quantifiable 46 Berteig, Mishkin 143 best-practice methodologies CMM, CMMi 136
CoBIT 136 ITIL 136 Prince2 40, 136 pros and cons 137–138 Six Sigma 136 Broadbent, Marianne 144 budget annual plan or IT plan 42, 55 cost categories 91 ‘good’ vs. ‘bad’ spend 81 ownership 92 project 44 business alignment see demand business case see demand business model see model definition 8, 9 litmus test 33 business value see benefits business, IT run like a see profit centre buy vs. build 70 case study 145 chargebacks see pricing client-vendor relationship 6, 24–28, 33 Collins, Jon 144 commitment conundrum 45, 57 comparison with corp IT 11 construction industry
construction phase 6, 7 design phase 6, 7 examples 24, 57 labour categories 57 materials and components 24, 57 trap 6 cost-benefit analysis application examples 97–100 monitoring 53, 83 ongoing 86–90, 96–100 ownership of 86 process 15, 17 with teeth 18, 92 cross-charging see pricing Curley, Martin 56, 96, 144 data modelling 65–68 decibel management 4, 44 demand business alignment 43 business case 48, 54 categories 42 filtering and screening 40 funding, chequing or current account 58 funding, milestone-based 58 funding, venture-capitalist 58 funnel 40 ideas 40 managing 10, 37 operational 43 planned 41 portfolios see portfolio management prioritizing and approving 43–49 project requests 40 regulation of 7, 57 scoring 42 stages, stage gates 40 supply and demand curve 7 unplanned 42 executive sponsorship 40, 109
feedback see reader feedback financials 91 Fleming, Read T. 137 Fowler, Martin 143 free lunch see pricing further reading 143 Google 11 governance 29, 108 Hall, Mark 143 houses see construction industry human behaviour 6, 28 hurdle rate see ROI ideas see demand implementation 81 IT 101 15 IT plan see budget iterative development agile 63 challenges 74–77 definition 63 in practice 65–74 pros and cons 78 prototype 64, 70 workshops 64 JAD 63, 148 keep the lights on 42, 89 Kitzis, Ellen S. 144 Macehiter, Neil 144 MacHale, D 144 Mahoney, John 32, 123 maintenance 79 maturity 137 Microsoft 11, 139 model, new building 35 costs 140 model, traditional flaws 21
fundamental error 12, 13 how it started 5 process breakdown 15 non-IT activities 16, 18 organizational trends in IT 32 original sin 12 outcomes 75 outsourcing 31–32, 101 pilot 71 pizza parlour 19, 22 portfolio management applications 53 approving demand 50 efficient frontier 53 investment categories 51 investment planning 52 performance monitoring 53, 88 personal investment analogy 50, 89 projects 53 pricing Activity-Based Costing (ABC) 94 allocations 8, 29, 93–95 chargebacks and cross-charging 8, 29, 93–95 free lunch trap 7 lack of 7, 8, 28, 84 process breakdown (IT) 15 breakdown (non-IT) 16 examples 19 expertise 22 incentives and rewards 20 modelling 65–67, 77 ownership and behaviour 19 profit centre 29 project budgeting see budget changing costs and benefits 87–90 critical success factors 79 management 75–76 monitoring costs and benefits 83
risk analysis 79 success criteria 22, 75 Project Portfolio Management (PPM) see portfolio management project requests see demand prototype see iterative development reader feedback xii, 158 releases 73, 80 requirements actual 7, 27 ambiguity 24–26 contractual 21 documented 7, 27 get out of jail free 27 sign-off 26 specifying 26–28 statement of (SoR) 26 risk management 48 ROI calculating 9, 102 evaluating 13, 17, 102 hurdle rate 103 limits of 102–104 personal investment analogy 103 roles and responsibilities account manager 111 application manager 107 BUs 107 business analyst 113, 114 business owner 110 business relationship manager 111 changing roles 113–116 client manager 111 developer 113, 114 executive sponsor 109 External Service Providers (ESPs) 119, 138 governance committee 108 investment committee 108 IT 111 new business-IT relationship 112 operations department 116 PMO 108, 117
project manager 112, 115 project review board 108 sacred cows 12 SLA 81, 82 SoR see requirements Spanenberg, John 103 specifications see requirements supply regulation of 7, 57 supply and demand curve 7 support 81 technology alignment 47 computer progress 3 rate of change 10 Thrasher, Harwell 104 time boxing 73
time entry 95, 139 tools 139 trusted advisor 92 user relationship with IT 9 views of IT 4, 5 versions see releases Vile, Dale 144 voodoo formula 8, 93 Wailgum, Thomas 143 Walton, Bill 143 Ward-Dutton, Neil 144 waterfall method definition 62 unintended consequences 21 workshops see iterative development