The second edition includes new chapters on Direct Observation of Non-Clinical Skills (DONCS), educational supervisor reports, the new online portfolio for trainees, workplace-based assessments in psychotherapy and views from trainees themselves. This book will be essential reading for psychiatric trainers and trainees.
About the editors
Amit Malik is a Clinical Services Director in Hampshire and was integral to the development and implementation of workplace-based assessment tools in the UK. He successfully led the national implementation of electronic systems for WPBAs and portfolios. Dinesh Bhugra is Professor of Mental Health and Cultural Diversity at the Institute of Psychiatry, King’s College London, and an Honorary Consultant at the South London and Maudsley NHS Foundation Trust. He has been influential in psychiatric training and assessment in the UK, particularly as Dean (2003– 2008) and President (2008–2011) of the Royal College of Psychiatrists. Andrew Brittlebank is Deputy Medical Director for Medical Performance at Northumberland, Tyne and Wear NHS Foundation Trust and Associate Dean for Postgraduate Education at the Royal College of Psychiatrists. His areas of College responsibility include curriculum and workplace-based assessments.
PUBLICATIONS
Workplace-Based Assessments in Psychiatry
This book outlines the workplace-based assessments (WPBAs) that are required by the current competency-based psychiatry curriculum. It has been updated, taking into account the experience gained since these assessments began. The authors explore the theory and practice of different assessment methods such as case-based discussion, long-case evaluation and directly observed practice, changes in the MRCPsych examinations and multi-source feedback.
Malik, Bhugra & Brittlebank
It has now been 4 years since significant changes were made to the way psychiatric trainees’ skills are assessed for the MRCPsych examinations. Much teaching, learning and assessment now occurs in the workplace in real clinical situations, with the emphasis being on outcome as reflected by the performance of the doctor.
College Seminars Series
Workplace-Based Assessments in Psychiatry, Second Edition
Edited by Amit Malik, Dinesh Bhugra & Andrew Brittlebank
Workplace-Based Assessments in Psychiatry Second edition
Edited by Amit Malik, Dinesh Bhugra and Andrew Brittlebank
RCPsych Publications
© The Royal College of Psychiatrists 2011

RCPsych Publications is an imprint of the Royal College of Psychiatrists, 17 Belgrave Square, London SW1X 8PG
http://www.rcpsych.ac.uk

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

British Library Cataloguing-in-Publication Data. A catalogue record for this book is available from the British Library.

ISBN 978 1 908020 06 2

Distributed in North America by Publishers Storage and Shipping Company.

The views presented in this book do not necessarily reflect those of the Royal College of Psychiatrists, and the publishers are not responsible for any error of omission or fact. The Royal College of Psychiatrists is a charity registered in England and Wales (228636) and in Scotland (SC038369).
Printed in the UK by Bell & Bain Limited, Glasgow.
Contents

List of tables, boxes and figures vii
List of contributors ix
Preface xi
1 Introduction: changes in training (Amit Malik, Dinesh Bhugra and Andrew Brittlebank) 1
2 Workplace-based assessment methods: literature overview (Amit Malik and Dinesh Bhugra) 14
3 Case-based discussion (Nick Brown, Gareth Holsgrove and Sadira Teeluckdharry) 28
4 The mini-Assessed Clinical Encounter (mini-ACE) (Nick Brown) 45
5 The Assessment of Clinical Expertise (ACE) (Geoff Searle) 56
6 Multi-source feedback (Caroline Brown) 68
7 Direct Observation of Non-Clinical Skills: a new tool to assess higher psychiatric trainees (Andrew Brittlebank) 76
8 Workplace-based assessments in psychotherapy (Chess Denman) 84
9 Educational supervisor's report (Ann Boyle) 99
10 Portfolios (Larissa Ryan and Clare Oakley) 108
11 Annual Review of Competence Progression (ARCP) (Wendy Burn) 122
12 Examinations in the era of competency training (Anthony Bateman) 131
13 Piloting workplace-based assessments in psychiatry (Andrew Brittlebank, Julian Archer, Damien Longson, Amit Malik and Dinesh Bhugra) 142
14 Developing and delivering an online assessment system: Assessments Online (Simon Bettison and Amit Malik) 154
15 A trainee perspective of workplace-based assessments (Clare Oakley and Ollie White) 167
16 Conclusions (Amit Malik, Dinesh Bhugra and Andrew Brittlebank) 181
Appendix 1: Assessment forms 185
Appendix 2: Guide for ARCP panels in core psychiatry training 197
Appendix 3: The MRCPsych examination 215
Index 218
Tables, boxes and figures

Tables
4.1 A blueprint for a mini-ACE assessment (example) 50
8.1 Case-based discussion (CbD) group assessment form 88
8.2 Supervised Assessment of Psychotherapy Expertise (SAPE) 94
13.1 Satisfaction scores and time taken to complete the first 6 months of WPBAs 146

Boxes
1.1 Methods for the assessment of trainees' performance 6
3.1 Structured question guidance 36
3.2 Helpful prompts for planning a case-based discussion 37
3.3 Case-based discussion: clinical record-keeping performance descriptors 38
3.4 Case-based discussion: clinical assessment (including diagnostic skills) performance descriptors 38
3.5 Case-based discussion: medical treatment performance descriptors 39
3.6 Case-based discussion: risk assessment and management performance descriptors 39
3.7 Case-based discussion: investigation and referral performance descriptors 39
3.8 Case-based discussion: follow-up and care planning performance descriptors 40
3.9 Case-based discussion: professionalism performance descriptors 40
3.10 Case-based discussion: clinical reasoning (including decision-making) performance descriptors 40
3.11 Case-based discussion: overall clinical care performance descriptors 41
4.1 mini-ACE: history-taking performance descriptors 51
4.2 mini-ACE: mental state examination performance descriptors 51
4.3 mini-ACE: communication skills performance descriptors 52
4.4 mini-ACE: clinical judgement performance descriptors 52
4.5 mini-ACE: professionalism performance descriptors 52
4.6 mini-ACE: organisational efficiency performance descriptors 53
4.7 mini-ACE: overall clinical care performance descriptors 53
5.1 ACE: history-taking performance descriptors 62
5.2 ACE: mental state examination performance descriptors 63
5.3 ACE: communication skills performance descriptors 63
5.4 ACE: clinical judgement performance descriptors 63
5.5 ACE: professionalism performance descriptors 64
5.6 ACE: organisational efficiency performance descriptors 64
5.7 ACE: overall clinical care performance descriptors 64
6.1 Good Medical Practice and mini-PAT domains 71
9.1 Portfolio evidence 105
9.2 Key areas to be explored with an underperforming trainee 106
10.1 Important factors for portfolio success 112
10.2 Utilising the Royal College of Psychiatrists' Portfolio Online 118
11.1 Example of a remedial plan for a trainee who has failed the CASC examination 127
13.1 General Medical Council's standards for curricula and assessment systems 143
14.1 Nielsen's top ten usability heuristics 162
14.2 Typical support enquiries 164

Figures
3.1 Case-based discussion planning grid 35
7.1 Skills assessed in the DONCS pilot study 81
8.1 A grid for systematically recording a complete psychological formulation of a patient's difficulties 98
10.1 Kolb's learning cycle 109
Contributors
Julian Archer – National Institute for Health Research (NIHR) Academic Clinical Lecturer in Medical Education, Peninsula College of Medicine and Dentistry, Plymouth
Anthony Bateman – Halliwick Unit, St Ann's Hospital, London
Simon Bettison – IT Consultant, Bettison.org Ltd, Sheffield
Dinesh Bhugra – Professor of Mental Health and Cultural Diversity, Section of Cultural Psychiatry, Institute of Psychiatry, King's College London, and Dean, Royal College of Psychiatrists
Ann Boyle – Consultant Psychiatrist, Leicestershire Partnership NHS Trust, Head of Postgraduate School for Psychiatry and Associate Postgraduate Dean, East Midlands Healthcare Workforce Deanery
Andrew Brittlebank – Consultant Psychiatrist and Director of Medical Education, Northumberland, Tyne and Wear NHS Foundation Trust and Associate Dean for Postgraduate Education, Royal College of Psychiatrists
Caroline Brown – Locum Consultant Paediatrician, Nottingham Children's Hospital
Nick Brown – Consultant Psychiatrist, Birmingham and Solihull Mental Health Foundation Trust; Senior Assessment Adviser, National Clinical Assessment Service (NCAS), London
Wendy Burn – Consultant Psychiatrist, Leeds Partnerships NHS Foundation Trust and Head of the Yorkshire and the Humber School of Psychiatry
Chess Denman FRCPsych – Consultant Psychiatrist, Cambridgeshire and Peterborough NHS Foundation Trust, Cambridge
Gareth Holsgrove – Expert in Medical Education, formerly the Royal College of Psychiatrists' adviser on postgraduate medical education
Damien Longson – Consultant Liaison Psychiatrist, Manchester Mental Health and Social Care Trust, Head of School of Psychiatry, North Western Deanery
Amit Malik – Clinical Services Director and Consultant Psychiatrist, Hampshire Partnership NHS Foundation Trust, and Training Policy Advisor, Royal College of Psychiatrists, London
Clare Oakley – Clinical Research Worker, St Andrew's Academic Centre, Institute of Psychiatry, King's College London, and formerly Chair, Psychiatric Trainees' Committee, Royal College of Psychiatrists
Larissa Ryan – ST4 Old Age Psychiatry, Oxford Deanery, Berkshire Healthcare NHS Foundation Trust
Geoff Searle – Consultant Psychiatrist, Programme Director CT1–3, Wessex School of Psychiatry, Crisis Team, Dorset Healthcare University NHS Foundation Trust, Bournemouth
Sadira Teeluckdharry – Specialty Registrar training in the West Midlands
Ollie White – Specialist Registrar in Child and Adolescent Forensic Psychiatry, Oxfordshire and Buckinghamshire Mental Health NHS Foundation Trust, and formerly Chair, Academy Trainee Doctors Group, Academy of Medical Royal Colleges
Preface
Since the first edition of this book was compiled almost 5 years ago, postgraduate psychiatric training has seen a number of structural changes. At the same time there have been advances in the evidence base for competency-based training and assessment, not only within UK psychiatry but also in postgraduate medicine globally. These factors have made a new edition essential.

In addition to changes to the evidence base there are many updates to the broader content. Following implementation of the first diet of workplace-based assessments (WPBAs), there were concerns about the utility of some of these tools for what had been the specialist registrar grade (ST4 and above). This volume has a new section on the Direct Observation of Non-Clinical Skills (DONCS), a new tool with greater relevance for higher trainees in psychiatry.

Regulation- and policy-driven changes now dictate that every trainee should have a formal annual review of their training. We have therefore included experience-driven guidance not only on the Annual Review of Competence Progression but also on the educational supervisor reports that are crucial for carrying out these reviews.

Excitingly, the Royal College of Psychiatrists has invested in and developed an online portfolio for trainees. This is a significant enhancement to the Assessments Online system, which mainly facilitated WPBAs, as it not only allows trainees to record all their significant training experiences and related evidence electronically but also enables them to share this online with their supervisors. A section within the book explores this portal as well as the wider challenges of implementing such systems.

The first diet of WPBAs was regrettably light on psychotherapy; efforts by the College's Faculty of Psychotherapy over the past couple of years have, we hope, addressed this gap. The developments in psychotherapy WPBAs are a valuable and very welcome addition to this volume.

Finally, the views of some of our most important stakeholders, the trainees, have been invaluable in shaping the direction of assessment systems within UK psychiatry, and this edition would not have been complete without a section on the trainee perspective on these assessments.
For editorial reasons, some sections had to be omitted to accommodate others. Most significantly, the chapter on Patient Satisfaction Questionnaires was not included in this volume, as more work needs to be done within UK psychiatric training to overcome many of the challenges, outlined within the first edition of this book, of accessing patient feedback as evidence of trainee performance. Other chapters, such as the one on logbooks, have been superseded by newer overarching portfolio frameworks.

Over the past few years of the editors' work with postgraduate psychiatric training, it has become clear that some of the best innovations within this area are driven by the individual efforts of trainers and trainees who are passionate about quality of training and patient care. This book is a tribute to all of them, especially our chapter contributors, who have worked tirelessly to develop and implement new ways of training and assessing trainees in the UK and have graciously and eloquently shared their experience and expertise within this volume. We are grateful to all of them.

We would like to express our gratitude to Dr Nick Brown, not only for his contributions to this edition, but also for his tremendous work on the first edition of this volume. We would also like to thank our families, who continued to put up with us as we compiled this edition. Andrea Livingstone, at the Institute of Psychiatry, as always ensured that the initial drafts came in on time and then painstakingly supported us in turning them into a complete manuscript – thanks. We are extremely grateful to Professor Peter Tyrer, Vanessa Cameron and Dave Jago for their support. Finally, we would like to thank Kasia Krawczyk and Andrew Morris at RCPsych Publications for their patience, hard work and attention to detail, which have been invaluable in producing this second edition.

Amit Malik
Dinesh Bhugra
Andrew Brittlebank
Chapter 1
Introduction: changes in training Amit Malik, Dinesh Bhugra and Andrew Brittlebank
This chapter outlines the developments in postgraduate medical education in the UK that will influence psychiatric training for many years to come. It especially focuses on the role of the Postgraduate Medical Education and Training Board (PMETB), before its merger with the General Medical Council (GMC), and its development of principles for assessments in postgraduate training. There is a brief description of the Royal College of Psychiatrists' ('the College's') activities for the assessment of future trainees in the context of wider changes in postgraduate training in the UK. The challenges of the assessment of clinical competence and clinical performance are considered. Some of the basic concepts of competency- and performance-based assessments are outlined. Workplace-based assessments (WPBAs), tools for which are discussed in subsequent chapters, are placed in the context of familiar assessments and examinations of clinical competence and performance, including the traditional long case and Objective Structured Clinical Examinations (OSCEs). The concept of a programme of assessments is introduced, and there is mention of how these separate assessments may fit together for both formative and summative purposes. There is a section with some basic pointers that trainees and trainers must consider when undertaking WPBAs. Finally, there is a brief section on supervisor reports, which always have been, and will continue to be, indispensable in the assessment of trainee performance; these are discussed in greater detail in Chapter 9.
Changes to training

Since the introduction of the PMETB and Modernising Medical Careers (MMC) in the early 2000s, many developments have influenced the structure of and principles underpinning postgraduate medical training in the UK.
Context

The Postgraduate Medical Education and Training Board was set up as an independent regulator for postgraduate medical training and came into operation in 2005. It established principles for curricula and assessment systems to guide postgraduate medical training in the UK. The Royal College of Psychiatrists' curricula were underpinned by these principles, thus making the system more workplace- and competency-based. Following the Tooke report (Independent Inquiry into MMC, 2008), PMETB merged with the GMC in April 2010.

Modernising Medical Careers

In 2004, the four UK health departments published a policy statement outlining the planned reforms to the structure of postgraduate medical training, leading to the birth of Modernising Medical Careers. The two main components of these reforms were the foundation programme and the 'run-through' grade.

Foundation programme

Since 2007, after graduating from medical school, doctors undertake an integrated 2-year foundation programme, which focuses on generic competencies and the management of acute illness. The curriculum for the foundation years is competency-based and is assessed by a range of WPBAs. Most psychiatric training posts are in the second foundation year. There are some generic psychiatric competencies that all foundation trainees need to develop irrespective of whether they undertake a psychiatric placement.

'Run-through' grade

The initial MMC proposals led to the development of a single run-through grade, in which specialty training had a single point of entry and there was to be no mid-point selection (from senior house officer to specialist registrar) as had traditionally occurred. Trainees, now called specialty registrars (StRs), were initially appointed annually to the new grade and continued, subject to satisfactory educational progress, until the completion of training and the attainment of the Certificate of Completion of Training (CCT) without any further mid-point selection. After this initial experience, psychiatric training has once again been decoupled, and trainees, after finishing their core training, have to reapply to specialty-specific higher training programmes. All trainees in core and higher training in this StR grade have to train to a GMC-approved curriculum and are assessed according to a GMC-approved assessment framework.

European Working Time Directive

The new European health and safety legislation, the European Working Time Directive (EWTD), has significantly reduced the amount of time doctors spend at work, as they now have to comply with a maximum of 48 hours a week. The need to cover clinical services safely has led to the widespread adoption of shift working, meaning that some of this 48-hour limit falls outside the traditional working day. This restriction on the amount of time doctors in training actually spend at work, being supervised by more senior clinicians, is already affecting the training experience. This is primarily because all medical training has traditionally been based on spending long hours at work and thereby experiencing a wide range of clinical situations. This will not happen with the reduced hours, and a competency-based checklist has been proposed as a safety net, to ensure that trainees have achieved all the essential competencies that will enable them, as consultants, to treat patients safely and competently. Additionally, much of the traditional teaching at the basic specialist (or core training) level is formal and classroom-based, and this has already been affected by the new shift pattern of working. Finally, the reduced working hours, along with the development of functional teams (e.g. crisis resolution and home treatment, self-harm assessment), have further reduced the opportunities for trainees to see new patients first hand and to learn the skills requisite for emergency psychiatry.

The College's commitment to involving patients and carers throughout specialist education in psychiatry reiterates the need for patient and carer involvement in the further development and delivery of the curricula and assessment systems.

The run-through training grade went live in August 2007, supported by a national online selection system, MTAS (the Medical Training Application Service). MTAS was introduced in a cavalier manner, leading to significant disruption to the lives, employment and training of junior doctors in the UK. The system was fatally flawed and its failure resulted in great discontent and fury among the medical profession, which ultimately led to the Secretary of State commissioning an independent inquiry, chaired by Sir John Tooke, to examine the framework and processes underpinning MMC. Among its many recommendations, the inquiry sought to raise the bar for professional ability within postgraduate medical training from competence to excellence. Following the Tooke report, psychiatric training was again decoupled and the run-through grade was broken down into the more familiar core and higher training structures. Additionally, the report recommended the merger of PMETB and the GMC, bringing all medical training, undergraduate and postgraduate, under the purview of a single national regulator.
PMETB, GMC and Royal College of Psychiatrists

From September 2005 to April 2010 PMETB was the sole statutory regulator of postgraduate medical education in the UK. It defined separate standards for assessments and curricula and set up a process of staged compliance with these standards from 2007 to 2010. To this end, the Royal College of Psychiatrists submitted new competency-based curricula to PMETB and, following approval, these were implemented for the trainee cohort commencing their training in August 2007. These curricula were based on the Good Medical Practice domains of the GMC and were among the first sets of specialty curricula to be approved by PMETB. Following extensive feedback from trainees and trainers, these curricula were extensively rewritten and mapped onto the CanMEDS framework (Royal College of Physicians and Surgeons of Canada, 2005), which is extensively used in Canada for medical education and training and is felt to be more suitable for defining outcomes in medical education.

In April 2010, PMETB merged with the GMC. Additionally, principles for assessment and curricula were subsumed into one document (General Medical Council, 2010) and the medical Royal Colleges were expected to submit curricula and assessment systems that complied with all GMC standards. The Royal College of Psychiatrists' revised curricula were approved by the GMC in June 2010 and have now been implemented for the training year commencing August 2010. There are essentially nine separate curricula, for six Certificate of Completion of Training (CCT) specialties and three subspecialties in general psychiatry, which have the core curriculum subsumed within each of them. Additionally, following the pilot feedback from 2006 (Chapter 13), in 2007 the College rolled out an assessment system which included WPBAs and examinations as an integral component of these curricula.

A major change from the conventional assessment systems that existed pre-2007 is that the new assessment strategy relates to the entire training period, unlike in the past, when assessments were undertaken only at discrete points in the form of periodically scheduled high-stakes examinations. Clearly, the focus of postgraduate training is shifting away from simply gaining a certain number of marks or dichotomous pass/fail decisions in examinations, to national examinations being a vital component of a wider assessment system that includes WPBAs, educational supervisor reports and portfolio-based assessments. The details of the national examination undertaken by the College are discussed in Chapter 12.
Why are we interested in the assessment of clinical performance?

Competency-based postgraduate training programmes in medical specialties are now part of many international postgraduate training systems, including those in the UK, the USA, Canada and now also Australia. The principles underlying the new training programmes ensure that there is emphasis on learning in practice, i.e. at the place of work, and that training and assessment revolve around the top two levels of Miller's pyramid for clinical assessment (Miller, 1990). Thus knowledge and its application will not suffice; it is not enough to 'know' or even to 'know how'. To 'show how' may reflect competency, but it is the apex of the pyramid that is of the greatest interest. Competency-based training raises the question of assessment of outcome at the 'does' level. This is the level of performance in daily clinical practice.
What is competence and what is performance?

The fundamental components of clinical competence are knowledge, skills and attitudes. Competence in a clinical situation is the complex ability to apply these three as needed according to the matter in hand. Performance is the enactment of competence. The assessment at the basic level relates to the questions 'Do they know it?' and 'Do they know how?'; at the competence level to 'Can they do it?'; and at the performance level to 'Do they show how?'

Unfortunately, things may not be that simple and most would agree that there is more to performance than an aggregation of competencies. What professionals do is far greater and more complex than the constituent parts that can be described in competency terms (Grant, 1999). Identifying a lack of competence may be easier than confirming attainment of a competency. There have been valid concerns and criticisms of competency-based training as being reductionist (Talbot, 2004) and 'not fit for purpose' (Oyebode, 2009). However, McKimm (2010) quotes Gonczi defining competencies as 'a complex structuring of attributes needed for intelligent performance in specific situations', which more accurately reflects our aspirations for our future competent professionals. This definition, if used as the underpinning principle for competency-based training, should, in fact, enhance the standards of training and competencies towards excellence.

A cautionary note must be struck; four essential matters must be understood. The first is that there is no (current or future) single perfect tool for the assessment of overall clinical competence. Indeed, there are dangers in an endless pursuit of tools that break down competencies into ever smaller assessable components, taking them further and further from the complexity of real clinical life. The second is that the future direction is towards programmes of assessment in which different tools are employed. In this way performance, which includes the ability to apply a range of competencies in a professional setting, can be gauged. The third is the role of supervisor assessment. The supervisor is in a unique position to assess a trainee's day-to-day professional activities. Any programme of assessment of clinical performance must include this critically unique perspective and not just rely on numerical scores obtained from assessment tools. Finally, it is clear that ongoing evaluation and adjustment of the assessment programme will remain an essential component of its quality assurance process.
What should we be trying to achieve?

With contemporary emphasis on competency-based curricula and assessment of performance at the place of service, great attention has been given to the development of a range of tools to meet the challenge of assessing clinical performance, as described above, in a valid, reliable and feasible fashion. Furthermore, there is a need to meet both formative and summative purposes of assessment, that is to provide feedback to trainees in an in-house training and developmental context and potentially to provide data for the purpose of summary, such as informing eligibility for progress in training. Although there are many methods for evaluating trainees' knowledge and some for measuring skills, the ability to reliably measure clinical performance is more limited. This ability is not contained in one instrument or method but in the concept and design of a programme of assessments adjusted in response to the changing nature of the relevant curricula. There is a choice of available instruments and methods that range from assessing what actually happens in the workplace and thus testing performance, through the use of simulations (for example OSCEs, which primarily assess competence), down to traditional written examination formats that assess knowledge and its application. These have been broadly categorised in Box 1.1.
Box 1.1 Methods for the assessment of trainees' performance

1 Assessments of performance (what the doctor actually does in the service base):
• individual patient encounter, e.g. CEX (ACE), mini-CEX (mini-ACE)
• video of patient encounter in the workplace, e.g. as used in general practice for many years
• simultaneous actual patient encounter
• direct observation of a skill, e.g. DOPS or OSATS in obstetrics
• observation of team working
• multisource feedback, e.g. TAB, mini-PAT
• feedback from patients, e.g. patient satisfaction measures
• plus observation of performance in non-clinical areas, e.g. teaching, presentation.

2 Assessments of competence in simulated settings, including OSCE:
• consultation skills, e.g. with a standard patient or other role-player
• discussion of clinical material, e.g. case-based discussion
• simulated practical procedure, e.g. on a mannequin or model
• simulated teamwork exercise
• critical thinking
• reflective practice, e.g. written-up case.

3 Cognitive assessments:
• knowledge, e.g. tests such as MCQ, EMQ
• problem solving/application of knowledge, e.g. CRQ paper
• other written assessments.

ACE, Assessment of Clinical Expertise; CEX, Clinical Evaluation Exercise; CRQ, critical reading question; DOPS, Directly Observed Procedural Skills; EMQ, extended matching question; MCQ, multiple choice question; mini-PAT, Mini Peer-Assessment Tool; OSATS, Objective Structured Assessment of Technical Skills; OSCE, Objective Structured Clinical Examination; TAB, Team Assessment of Behaviour.
Utility of assessments and assessment systems

In his seminal paper in 1996, van der Vleuten defined the concept of utility as a multiplicative product of reliability, validity, educational impact, cost and acceptability. Reliability refers to the reproducibility of results: for example, whether the same trainee, given the same examination repeatedly, would obtain the same score. Validity describes whether the test method is actually capable of measuring what it purports to measure; for example, writing an essay on the mental state examination does not predict an individual's ability to perform such an examination. Another important consideration is feasibility (which relates to cost): although rigorous repetitive testing might give answers closer to the truth in terms of competence, assessment and examination processes must be manageable within the constraints of time and resources available in the majority of clinical settings. Trainers and trainees have their own preconceived notions about various forms of testing, and this and various other factors have an impact on the acceptability of an assessment programme. Without significantly high acceptability by trainees and trainers, assessments cannot have long-term success. Finally, assessments drive learning, and the content, format and programming of an assessment all contribute to its educational impact (van der Vleuten, 1996).
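Van der Vleuten's model is often summarised as a simple multiplicative index, which makes the practical point explicit: if any one component is effectively absent, the overall utility of the assessment collapses, however strong the other components may be. In the usual schematic notation (a common rendering of the model, not the chapter's own formula):

utility = reliability × validity × educational impact × acceptability × cost-efficiency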
National exams or local assessments – what we know already

A detailed overview of the literature on the various WPBA tools is presented in Chapter 2. This section briefly discusses some of the psychometric data around the traditional assessments of competence using examinations and puts this in the context of a few psychometric values obtained from studies on WPBAs. Standard setting in examinations is also discussed in greater detail in Chapter 2.

Examinations of clinical competence have traditionally used the long- and short-case viva approach. This approach has validity, as candidates are assessed on real patients and asked problem-solving questions. However, as candidates are tested on different cases and judged by different examiners, the reliability of the results may be flawed. Nevertheless, reliability for both can be markedly improved by increasing testing time (and thus sampling), for example from 1 to 8 hours. Reliability using the long-case examination has been estimated at 0.60 with 1 h of testing, increasing to 0.75 for 2 h, 0.86 for 4 h and finally 0.90 for 8 h of testing (Wass & Jolly, 2001). This finding has clear implications for the refinement of the Assessment of Clinical Expertise (ACE) tool (see Chapter 5). To overcome the poor reliability of clinical examinations, objective clinical examinations were developed in the 1970s and have gained worldwide use. The Objective Structured Clinical Examination (OSCE) has become a familiar part of postgraduate examinations. However, its reliability is contingent upon careful sampling across clinical content and an appropriate number of stations, which generally means that several hours of testing time are in fact needed. For the OSCE, reliability for testing times rises from 0.54 for 1 h to 0.82 for 4 h and 0.90 for 8 h of testing (van der Vleuten et al, 1988). For the mini-CEX, reliability commences at 0.73 for 1 h of testing and peaks at 0.96 for 8 h (Norcini et al, 2003).

Standardised patients

Another often used method for the assessment of clinical competency is the standardised patient examination. A standardised patient is a person who is trained to depict a patient in a way that is similar and reproducible for each encounter with different trainees. Hence they present in an identical way to each trainee. The standardised patient can be an actor, an asymptomatic patient or a real patient with stable, abnormal signs on examination. The advantages of using standardised patients are that the same clinical scenario is presented to each trainee (better reliability) and that clinical skills can be directly observed (higher face validity). Feedback can be instantaneous and can also be given from the point of view of the patient, although the standardised patient would need to be trained to do this in a constructive manner. Using standardised patients has high face validity. Reliability varies from 0.41 to 0.85 (Holmboe & Hawkins, 1998). It increases with more cases, shorter testing times and less complex cases; it is better when assessing history-taking, examination and communication skills than when assessing clinical reasoning or problem-solving.

Standardised patients have been used in multi-station exams such as OSCEs, where trainees perform focused tasks at a series of stations. They have been used as a means of integrating the teaching and learning of interpersonal skills with technical skills and of giving direct feedback to trainees. By combining this with video-recording of the student–patient encounter, there is a mechanism for the student to review the recording later as an aid to learning (Kneebone et al, 2005). This can be used as part of an assessment process and enables multiple raters to rate the trainee, thereby increasing reliability.

A fundamental point about these assessments is that sampling, rather than the degree of structuring or standardisation, is the key determinant of reliability. This means that methods that are less structured and standardised, such as the Clinical Evaluation Exercise (CEX)/Assessment of Clinical Expertise (ACE) and mini-Clinical Evaluation Exercise (mini-CEX)/mini-Assessed Clinical Encounter (mini-ACE), can be almost as reliable as other more structured and objective methods. This finding also reinforces the need to develop and implement not merely single assessment tools but an overall schedule or programme of assessments.
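The reliability figures quoted above for the long case, the OSCE and the mini-CEX follow the pattern predicted by the Spearman–Brown prophecy formula, in which reliability climbs with the amount of sampling (testing time) but with diminishing returns. A minimal illustrative sketch, taking only the 1-hour coefficients from the studies cited above as inputs (the projections closely track, but do not exactly reproduce, the published multi-hour figures):

```python
def spearman_brown(r_single: float, k: float) -> float:
    """Reliability predicted when the amount of testing (sampling) is multiplied by k."""
    return (k * r_single) / (1 + (k - 1) * r_single)

# Reported reliabilities for 1 hour of testing (long case: Wass & Jolly, 2001;
# OSCE: van der Vleuten et al, 1988; mini-CEX: Norcini et al, 2003).
one_hour = {"long case": 0.60, "OSCE": 0.54, "mini-CEX": 0.73}

for method, r1 in one_hour.items():
    projected = ", ".join(f"{h} h = {spearman_brown(r1, h):.2f}" for h in (2, 4, 8))
    print(f"{method}: 1 h = {r1:.2f}, {projected}")

# long case: 1 h = 0.60, 2 h = 0.75, 4 h = 0.86, 8 h = 0.92
# OSCE: 1 h = 0.54, 2 h = 0.70, 4 h = 0.82, 8 h = 0.90
# mini-CEX: 1 h = 0.73, 2 h = 0.84, 4 h = 0.92, 8 h = 0.96
```

The same arithmetic underlies the point made at the end of the preceding paragraph: breadth of sampling, rather than standardisation, is what drives reliability upwards.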
Undertaking assessments locally

The implementation of WPBAs in the postgraduate setting has been, and will continue to be, an incremental process. Regardless of the assessment
tool that is being used, the following pointers will help trainees and assessors in getting started. Pointers for assessors 1 The assessor must assess the trainee for their particular stage of training. 2 The assessor will need to set out protected time to conduct the assessment. 3 It may be prudent to consider with the trainee in advance the sort of patient who will be at the appropriate level of complexity and challenge for a particular trainee’s level. 4 It should be agreed in advance that an assessment will be formal, rather than the trainee or trainer mentioning this at the last minute before, or worse, during the assessment. 5 The competencies being assessed must be defined in advance and be appropriate to the situation that is to be observed. 6 The assessor should be fully familiar with the assessment form, the competencies being assessed and the associated performance descriptors. 7 The assessor should only assess the competency in question if they are capable of making a judgement about it, and they should only score observed competence. 8 Assessors must be trained in the use of the assessment tools and this should include performance-dimension training, training in observation-based assessment and a calibration exercise (Holmboe et al, 2004). Pointers for trainees 1 2
3
4 5
6
The assessments should be trainee-led. The trainee should have regular discussions with their educational supervisor about the competencies they need to attain during a period of their training and the type and number of assessments they could undertake to demonstrate the attainment of these competencies. These should be clearly included in a learning plan. It might be prudent to undertake the initial assessment with the trainee’s own educational supervisor, in order to fine-tune the learning plan for the next few months. The trainee should also have discussions with their supervisor about the sort of case that would be appropriate for their stage of training. The assessor should be given enough notice for the assessment to be set up, so that they can clear their schedule to facilitate an uninterrupted assessment. The patient must give informed consent to participate in the assessment. This should be obtained by the trainee, recorded in the case notes and then reconfirmed in the presence of the trainer. 9
malik et al
7
8
In early stages of training (core trainee (CT) year 1 and 2), it is entirely appropriate for the assessment to be undertaken by a higher trainee (specialty trainee (ST) year 5 or 6) or an experienced specialty doctor. In the latter stages of training, the assessments should be undertaken by a more experienced clinician, in order to provide feedback on higher-level competencies. Analogously to an assessor, a trainee should also be fully familiar with the assessment form, the competencies being assessed and the associated performance descriptors.
How do these assessments link together? To answer this question, it is necessary to recall the purpose of assessment and then to consider these particular tools or forms of assessment. In overall terms, assessment is used for a number of purposes, such as: making judgements about the trainee’s mastery of the knowledge, skills and attitudes set out in the curriculum; measuring improvement over time; identifying areas of difficulty; providing feedback; and planning future educational and professional needs. Attempts are often made to divide assessments artificially into formative and summative types, although in real life these functions of assessments overlap significantly. It is useful, however, to briefly revise what the two types of assessment mean.
Formative assessment

A formative assessment is used to monitor a trainee's progress through a period of training. It involves using assessment information to feed back into the teaching and learning process. It should, and indeed must, foster learning and understanding. The trainer (supervisor)–trainee relationship is fundamentally important to successful and effective formative assessments. Formative assessments must be built into the curriculum and not be added on as an afterthought. Observed clinical work is an excellent example of an assessment method used in formative assessment. However, its purpose is only realised when there is effective dialogue between the trainer and trainee. Hence the skills of supervising and giving effective feedback are as important for the prospective trainer/supervisor as any technical knowledge of the assessment tools themselves.

For formative assessment to act as a means of improving competencies, both trainee and trainer must have a shared understanding of the assessment's position, power and purpose. Comparisons with a standard can be made and remedial action taken if required. The quality and honesty of the feedback is critical. A trainee cannot be told that they did well and then receive average ratings. Also, trainees who perform poorly should not be given high ratings. Such information will not assist in identifying strengths and weaknesses and thus will not enable the reshaping of educational objectives. It may also lead to an unsafe and
unsustainable clinical and educational relationship between the trainee and their supervisor as the trainee is allowed to work at stages beyond their real competence.
Summative assessment Summative assessments are usually undertaken at the end of a training course, module or programme and determine whether the educational objectives have been achieved. A grade or mark is given, indicating what has been learnt. Good summative assessment involves the analysis of evidence from as many sources as possible. In any form of summative assessment programme it is important that every essential aspect of the curriculum is covered to ensure that the resulting report validly reflects the trainee’s ability. In postgraduate training in psychiatry, summative assessments will provide a statement of achievement, at times serve as a guide to the continuation through the training grade (through the annual review of competence progression (ARCP) process) and will necessarily provide evidence to support the award of a certification of competence (such as the Certificate of Completion of Training from the GMC). No single assessment is adequate to assess a trainee’s overall competence. Experts have recommended a programmatic approach to developing assessments (Dijkstra et al, 2009). Various qualitative and quantitative methods to combine these assessments are described by educationalists, but these are beyond the scope of this book (for the interested readers, we recommend Schuwirth & van der Vleuten, 2006).
Educational supervisor reports

Educational supervisor reports (further discussed in Chapter 9) are an overarching method used to assess overall performance in the working context, and thus they have traditionally been associated with both high content and high context validity. They are discussed here because of their significance in a trainee's portfolio and their contribution to the summative assessment in the form of the ARCP. Ratings from a trainee's supervisor have been used for many years in local schedules of assessment. These have generally shifted from unstructured formats, such as letters, which have low reliability and tend to be subjective, to more structured reports on specific areas of performance.

Supervisor reports are particularly useful for testing areas that are difficult to assess by conventional methods. These include personal attributes, attitudes, generic competencies and professional values (e.g. reliability), ability to work with others and time-keeping. Additionally, supervisors can draw upon more structured evidence from workplace-based assessments and well-structured feedback from colleagues and peers to support the report. Well-designed reports allow for assessment
against agreed standards and can identify underperforming trainees. Some utilise rating scales to assess various domains of a trainee’s performance. Supervisor reports can be improved if supervisors are trained in their use, receive feedback on their reports and if multiple sources of evidence are used such as workplace-based assessment multisource feedback (see Chapter 6). Supervisor reports must be designed with facility of use in mind and with an identification of the competencies to be assessed at a particular stage of training. Finally, a debate has opened on the use of ‘gut feeling’ or trust in assessment (ten Cate, 2006). This moves beyond reliance on just structured and formal evidence of performance to an attempt to capture performance as a global outcome to expert judgement. This would be a more formal expression of a supervisor declaring who they would choose or trust to handle more complex clinical tasks or who they would be comfortable with treating a family member.
Conclusions

Various government initiatives (including MMC) and changes in the legal frameworks (including the PMETB – now GMC – and EWTD) have transformed the delivery of postgraduate medical education in the UK. Notwithstanding this, the assessment of clinical performance has always been a complex task. The work of a doctor, the execution of their day-to-day clinical responsibilities, is more than just a sum of competencies. There is no single test that assesses this overall competence. Instead, what is required is a programme of assessments using different tools involving a range of clinical scenarios and settings and several assessors. The tools described in this book have the potential to do just that, provided they are employed as part of an overall assessment programme, with adequate sampling and triangulation through a range of assessors.

These methods are at their most valuable when seen as educational tools that guide and mould learning, particularly the development of clinical skills. They can focus supervision, highlight progress, identify need and stimulate enquiry and understanding. Their development and implementation is fundamental to the delivery of the College's curriculum and thus to the development of the psychiatrists of the future. Just as the curriculum itself will change in anticipation of and in response to both experience of its use in practice and new workforce needs, so these tools will be adapted and new tools will be developed.

The chapters that follow discuss the use of various assessment tools, the utility of portfolios in the future, the new national Royal College of Psychiatrists' exams and experiences from the WPBA pilot projects. Each chapter on assessment tools is based on the relevant background for each tool, discussions leading to their development, the description of the tools
along with the person descriptors, and the authors’ early experience with the implementation of these tools. It is hoped that these details will help trainees, trainers and training programme organisers in this ever-changing world of postgraduate medical education.
References

Dijkstra, J., van der Vleuten, C. P. M. & Schuwirth, L. W. T. (2009) A new framework for designing programmes of assessment. Advances in Health Sciences Education, doi: 10.1007/s10459-009-9205-z.
General Medical Council (2010) Standards for Curricula and Assessment Systems – Revised. GMC.
Grant, J. (1999) The incapacitating effects of competence: a critique. Advances in Health Sciences Education, 4, 271–277.
Holmboe, E. S. & Hawkins, R. E. (1998) Methods for evaluating the clinical competence of residents in internal medicine: a review. Annals of Internal Medicine, 129, 42–48.
Holmboe, E. S., Hawkins, R. E. & Huot, S. J. (2004) Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Annals of Internal Medicine, 140, 874–881.
Independent Inquiry into Modernising Medical Careers (2008) Aspiring to Excellence: Findings and Final Recommendations of the Independent Inquiry into Modernising Medical Careers Led by Professor Sir John Tooke. MMC Inquiry.
Kneebone, R. L., Kidd, J., Nestel, D., et al (2005) Blurring the boundaries: scenario-based simulation in a clinical setting. Medical Education, 39, 580.
McKimm, J. (2010) Current trends in undergraduate medical education: teaching, learning and assessment. Samoa Medical Journal, 2, 38–44.
Miller, G. E. (1990) The assessment of clinical skills/competence/performance. Academic Medicine, 65, 563–567.
Norcini, J. J., Blank, L. L., Duffy, D., et al (2003) The mini CEX: method for assessing clinical skills. Annals of Internal Medicine, 138, 476–481.
Oyebode, F. (2009) Competence or excellence? Invited commentary on… Workplace-based assessments in Wessex and Wales. The Psychiatrist, 33, 478–479.
Royal College of Physicians and Surgeons of Canada (2005) The CanMEDS Physician Competency Framework. Royal College of Physicians and Surgeons of Canada (http://rcpsc.medical.org/canmeds/index.php).
Schuwirth, L. W. T. & van der Vleuten, C. P. M. (2006) How to Design a Useful Test: The Principle of Assessment. Understanding Medical Education. Association of Medical Education.
Talbot, M. (2004) Monkey see, monkey do: a critique of the competency model in graduate medical education. Medical Education, 38, 587–592.
ten Cate, O. (2006) Trust, competence, and the supervisor's role in postgraduate training. BMJ, 333, 748–751.
van der Vleuten, C. P. M. (1996) The assessment of professional competence: developments, research and practical implications. Advances in Health Sciences Education, 1, 41–67.
van der Vleuten, C. P. M., van Luyk, S. J. & Swanson, D. B. (1988) Reliability (generalizability) of the Maastricht Skills Test. Research in Medical Education, 27, 228–233.
Wass, V. & Jolly, B. (2001) Does observation add to the validity of the long case? Medical Education, 35, 729–734.
Chapter 2
Workplace-based assessment methods: literature overview Amit Malik and Dinesh Bhugra
This chapter provides a short introduction and background to some of the WPBA methods: the long case, Assessment of Clinical Expertise (ACE), multi-source feedback (MSF), mini-Clinical Evaluation Exercise (mini-CEX), Direct Observation of Procedural Skills (DOPS), case-based discussion, and Journal Club Presentation. For each assessment method, what the approach practically involves is first defined, before considering the key messages and research evidence from the literature.
The long case

Across most medical specialties, the traditional long case has historically occupied a central and critical role in the evaluation of clinical skills (Weiss, 2002). In the long case, trainees are given 30–60 min of unobserved time to interview and examine an unstandardised patient, before presenting and discussing the case with one or more examiners. This assessment can take up to an hour. For examination purposes, the underlying belief is that within a single long case, active and usually unstructured questioning by an experienced examiner can determine a trainee's competency. The key assessment strength of this approach is that trainees are required to formulate differential diagnoses and management plans for real patients in an actual clinical setting. However, the method has been criticised for the poor reliability of its assessments, and the lack of direct examiner observation of the trainee–patient encounter (reducing the validity of assessments). Consequently, a new instrument for undertaking long-case assessments with psychiatric trainees has been developed – the ACE.
Reliability of the long case

Concerns have repeatedly been voiced about the reliability of information generated through the traditional long case. This is because it is usually based upon a single patient encounter and unstructured examiner questioning.
This causes three problems (Norcini, 2001, 2002).

1 Inter-case reliability. The long case is typically based on one in-depth patient encounter. However, trainees' performances will vary across cases, reflecting their strengths and weaknesses across different patient problems, and the different responses of patients to them. Good inter-case reliability requires that a larger number, and broader sample, of different cases are included.
2 Interrater reliability. The long case is typically based upon the scores of no more than two examiners. Research shows that examiners differ in their ratings when assessing the same event. Good interrater reliability requires that multiple examiners are used.
3 Aspects of competence. The long case is often organised around case presentation and unstructured trainee–examiner discussion. Research indicates that a standardised list of different features of competence can improve reliability.

Of the challenges posed by the long case, inter-case reliability has been identified as the most significant (Norcini, 2001, 2002). There are surprisingly few published data that reflect these concerns about the long case. However, Norcini reported that in an American Board of Internal Medicine (ABIM) cardiovascular subspecialty study conducted in the 1970s, two long cases (each with two examiners) generated a combined reproducibility coefficient of just 0.39, whereas one case resulted in a coefficient of 0.24 (Norcini, 2002). For the former, this effectively meant, in strict psychometric terms, that 39% of the variance in trainees' scores was attributable to trainees' ability, whereas the remaining 61% was due to measurement error (Norcini, 2002). Kroboth et al (1992), in a study of the Clinical Evaluation Exercise (CEX), report that two long cases (again with two examiners) produced an overall generalisability coefficient of 0.0, and an overall interrater reliability coefficient of 0.40. Weisse (2002) reports that the 1972 decision of the ABIM to stop using the long case was due to an unacceptably low interrater agreement (measured at 43%, just 5% higher than agreement occurring by chance alone).
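Read in generalisability-theory terms (our gloss on the figures above, not the authors' own working), a reproducibility coefficient is simply the share of observed score variance attributable to real differences between trainees, so the ABIM figure decomposes as:

G = σ²(trainee) / [σ²(trainee) + σ²(error)] = 0.39

In other words, 61% of the score variance in that study reflected case, examiner and other error effects rather than trainee ability.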
Validity of the long case

The second concern with the long case relates to its validity. This may appear unusual, given that an argument for retaining the long case is that it accurately replicates the type of situations trainees will encounter in their future professional life. For example, as Wass & van der Vleuten (2004) note, in testing trainees' ability to engage with real patients, collect relevant data and propose an appropriate course of action, the long case represents 'a highly authentic task…[that] comes very close to a candidate's actual daily practice' (p. 1177). However, because the long case does not typically involve the direct observation of trainees during the patient interview and examination, it can mask weaknesses in trainees' basic skills (Wass & Jolly, 2001). Wass & Jolly (2001) undertook a prospective study comparing examiners who observed the history-taking component of a long case with examiners who only observed the case presentation component. They found a lack of correlation between scores given for long-case observation compared with presentation. In essence, those examiners who directly observed trainees during the history-taking component marked their competency differently from those examiners who only observed the case presentation.
Improving the long case

Attempts to improve the reliability of the traditional long case fall into three categories. First, studies have considered how many additional long cases would be required, with Kroboth et al (1992) suggesting that six to ten (lasting 1–1.5 h each) would achieve a generalisability coefficient of 0.8. Second, commentators have attempted to increase the number of long cases, but have done so by employing a format that draws on shorter assessments (20–45 min) and multiple cases (4–6) taken directly one after another in a single session (McKinley et al, 2000; Wass & Jolly, 2001; Norcini, 2002; Hamdy et al, 2003). Third, elements of the discussion and questioning aspects of the long case have been standardised in an attempt to improve reliability and student perceptions of fairness (Olson et al, 2000).

Improving the validity of the long case has been addressed in different ways. To start with, there is the introduction of examiners who directly observe trainee performance throughout the long case. This appears to have been a more recent development in the UK literature (Wass & Jolly, 2001), compared with the US Clinical Evaluation Exercise instrument (Kroboth et al, 1992) and the Australian Direct Clinical Examination (Price & Byrne, 1994). Second, content validity has also been addressed through attempting to sample 'types' of patients for the long case, rather than selecting them randomly (Hamdy et al, 2003). This approach has been criticised on the grounds that trainees should be competent enough to deal with most types of patient problems that they encounter (Dugdale, 1996). Third, there are suggestions in the literature on how the overall utility of the assessment scales used in long-case assessments can be enhanced by developing more clearly defined and validated anchor points and broadening the aspects of the curriculum that can be assessed using the tool (Norcini, 2002). Fourth, providing assessors with training in observation, in the utilisation of assessment tools and in providing feedback will improve the educational and assessment value of the long case (Fitch, 2007, personal communication). Finally, a more recent review has also outlined other methods, such as increasing the number of cases and using multiple assessors (Ponnamperuma et al, 2009).
Assessment of Clinical Expertise (ACE)
The College has made attempts to learn from the above evidence base in developing the ACE instrument. Instead of making competence judgements
based on a trainee’s case presentation skills, ACE relies heavily on the direct observation of trainees through a complete ‘new patient’ assessment. Previous concerns regarding reliability have been mitigated to a great extent by including ACE within a portfolio of WPBAs, thus allowing for greater overall reproducibility of the assessment system. This allows the ACE to focus on its strength – the direct observation of trainee performance across a vast range of cases – rather than trying to achieve exceptionally high levels of standardisation at the expense of losing the richness and variety of psychiatric presentations. Therefore, reliability may be less of an issue within the Royal College of Psychiatrists’ system compared with situations where the long case may have been used as the sole method of assessment (Turnbull et al, 2005). However, direct assessor observation is never a guarantee of accurate observation – assessors will require training and support (Fitch, 2007, personal communication).
Multi-source feedback
Multi-source feedback involves the assessment of aspects of a medical professional’s competence and behaviour from multiple viewpoints. This can include peer review, where peers at the same level in the ‘organisational chart’ and usually within the same medical discipline as the professional concerned are asked to assess the professional. It can include co-worker review, where other co-workers who may operate at a higher/lower level in the ‘organisational chart’ or may work in a different medical discipline are asked to assess the professional. It can also incorporate self-assessment, where the professional undertakes an assessment of their own competence and behaviour for comparison with other sources; as well as patient review, where patients are asked to assess a professional, typically using a different instrument than that used for peer, co-worker or self-assessment.
The increasing use of multi-source feedback is based on two different beliefs: first, that assessments from multiple viewpoints may offer a fairer and more valid description of performance than an evaluation based solely on a single source; second, that multi-source feedback can help in assessing aspects of professional performance (such as humanistic and interpersonal skills) that are not captured by written or clinical examinations. The College MSF for psychiatric trainees incorporates patient review. After piloting two MSF tools, the mini-Peer Assessment Tool (mini-PAT) and the Team Assessment of Behaviour (TAB) for psychiatric trainees, the College has decided to recommend the use of the mini-PAT as part of its assessment programme.
While reviewing this specific approach with psychiatric trainees (and multi-source feedback in general) it is important to remember that multi-source feedback is a term used to describe an approach to assessment, rather than a specific instrument. Unlike studies on the mini-CEX, we need to be far more careful in concluding that what has worked in one multi-source feedback programme will also work in another.
This is because different programmes will use different instruments, with varied sources, and will measure different behaviours and competencies.
Key research messages
The clear and overarching message from the literature is that the development of a multi-source feedback tool should be underpinned by a clearly communicated purpose for its use to all stakeholders (Lockyer, 2003) and its validation in a specialty-specific context (Davies et al, 2008). Additionally, a number of points can be made.
The number of sources targeted by different multi-source feedback approaches appears to range from 8 to 25 peers, 6 to 14 co-workers, and 25 to 30 service users. Research data from evaluations of different instruments indicate that between 8 and 11 peer raters can generate a generalisability coefficient of 0.7–0.81 (Ramsey et al, 1996; Lockyer & Violato, 2004; Morgeson et al, 2005). Furthermore, allowing participants to select their own raters does not necessarily bias assessment, contrary to the belief that trainees would nominate raters who they felt would give them a higher score (Violato et al, 1997; Durning et al, 2002). However, there is evidence that various professional groups rate individuals differently (Bullock et al, 2009). Therefore, assessment guidance should specify the minimum number of professionals from specific occupational groups who must provide feedback within a single multi-source feedback process, without being prescriptive about which individuals complete the tool. In addition to the occupational group of reviewers, there are multiple other reviewer-related factors that must be taken into account to reliably and meaningfully interpret the ratings (Burford et al, 2010).
Another finding that emerged from the literature review is that the acceptance of MSF assessment is typically associated with the source of the data – participants tend to value feedback from peers and supervisors more than that from co-workers (such as nurses), particularly when clinical competence is being assessed (Ramsey et al, 1993; Weinrich et al, 1993; Nichols Applied Management, 2001; Higgins et al, 2004). A final point is that rater groups frequently do not agree about an individual’s performance – self-assessments typically do not correlate with peer or patient ratings, and differences have been found between peers with differing levels of experience (Hall et al, 1999; Thomas et al, 1999). This disagreement can be seen as a technical threat to interrater reliability, or, more practically, as measuring different aspects of performance from the position of the rater (Bozeman, 1997).
Undertaking multi-source feedback with psychiatric trainees
In implementing the approach with psychiatric trainees, there are a number of actions that can be taken to improve the feedback outcome. First, instruments with content that better reflects the fact that psychiatry differs in its daily practice from other medical specialties can be employed, with a far greater emphasis on communication, interpersonal skills, emotional
intelligence and relationship-building skills. Generic multi-source feedback instruments should be revised to reflect these differences. Further, the use of shorter instruments, central administration, and alternatives to pen and paper (such as computer or telephone input) have been highlighted as a possible means of countering the perception that multi-source feedback involves ‘too much paperwork’ (Lockyer et al, 2006). Also, multi-source feedback has an important role in making trainees aware of how their performance is perceived by a range of stakeholders, and addressing weaknesses in competence (Violato et al, 1997; Lipner et al, 2002). This, however, hinges on the quality of feedback provided. Research shows that highly structured feedback (oral and written) is important (Higgins et al, 2004), as is trainee education in appreciating feedback from non-clinical sources. Furthermore, multi-source feedback has been demonstrated to bring about practice changes, and longitudinal studies have shown an increase in multi-source feedback scores over time (Violato et al, 2008). It is important that these are carefully monitored, both for individual trainee development, and also for demonstrating to potential participants/sources that multi-source feedback is a worthwhile activity (Nichols Applied Management, 2001). Finally, an additional difficulty in the UK, with its multi-ethnic population, is to find a way in which non-English speakers may be included, especially for the Patient Satisfaction Questionnaire. One method for achieving this has been conducting interviews with patients using interpreters (Mason et al, 2003), but other approaches will need to be developed to avoid sampling bias.
mini-Clinical Evaluation Exercise (mini-CEX)
The mini-CEX is a focused direct observation of the clinical skills of a trainee by a senior medical professional. It involves a single assessor observing a trainee for approximately 20 min during a clinical encounter. This is followed by 5–10 min of feedback. The mini-CEX was to an extent envisaged as part of an effort to address problems posed by the traditional long case (as discussed earlier). The mini-CEX involves assessors directly observing trainees while they engage with real patients in working clinical contexts. Critically, assessors are required to focus on how well a trainee undertakes specific clinical tasks, rather than attempting to evaluate every aspect of the patient encounter. This means that one mini-CEX may consider a trainee’s skills in history-taking and communication, whereas a later mini-CEX may focus on a trainee’s clinical judgement and care. Consequently, multiple mini-CEX assessments are undertaken with each trainee. The College version of the mini-CEX is known as the mini-Assessment of Clinical Expertise (mini-ACE).
Key research messages
As a workplace-based assessment tool, the mini-CEX has been extensively researched for a number of years. It has been shown to have a strong internal
consistency (Durning et al, 2002; Kogan et al, 2003) and a demonstrated reproducibility, where 12–14 assessments can achieve a generalisability coefficient of 0.8 (Norcini et al, 1995), whereas 8 assessments can result in a generalisability coefficient of 0.77 (Kogan et al, 2003). It has also been argued that the tool has pragmatic reproducibility, where the scores from four mini-CEXs can indicate whether further assessments are required (Norcini et al, 1995). The mini-CEX has also been shown to have reasonable construct validity, being able to distinguish between different levels of trainee performance (Holmboe et al, 2003).
However, the mini-CEX does have limitations. The first of these is that the use of direct observation in the mini-CEX is not a guarantee of accurate observation (Noel et al, 1992) – there is evidence to suggest that assessors do make observational errors, making in-depth training for assessors vital. Another limitation is that the feedback component of the mini-CEX is underdeveloped (Holmboe et al, 2004), whereas assessor feedback to trainees is critical for their development. Research indicates that assessors do not employ basic feedback strategies such as inviting trainees to self-assess or using feedback to develop an action plan, and training of assessors should also emphasise these aspects of feedback. Further, time and resource constraints can make scheduling of these assessments difficult (Morris et al, 2006; Davies et al, 2009). The mini-CEX assessment process should be driven by the trainee and, if assessments are not carried out, this should be taken into consideration when making decisions regarding annual progression of individual trainees (Wilkinson et al, 2008). Finally, developing a standardised tool that reflects clinical practice more closely in terms of the domains, anchors and rating scales may increase the validity and reliability of the mini-CEX (Donato et al, 2008).
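As a purely illustrative aside on the reproducibility figures quoted above, the Spearman–Brown-type relationship sketched earlier for the long case can be inverted to recover the single-encounter coefficient implied by a reported composite value. The back-calculated numbers below are our own projections under that simple model, not figures reported by the original authors.

```python
# Hedged illustration: invert R_n = n*r / (1 + (n - 1)*r) to recover the
# single-encounter coefficient r implied by a reported composite value R_n.

def single_encounter_r(composite_r: float, n: int) -> float:
    return composite_r / (n - (n - 1) * composite_r)

r1 = single_encounter_r(0.77, 8)   # ~0.30, implied by Kogan et al's figure
# Projecting r1 forward gives roughly 0.83-0.85 for 12-14 encounters,
# broadly in line with the 0.8 reported by Norcini et al (1995).
print(round(r1, 2))
```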
Direct Observation of Procedural Skills (DOPS)
The DOPS assessment allows an educational supervisor to directly observe a trainee undertake a practical procedure, make judgements about its specific components, and grade the trainee’s overall performance (Wilkinson et al, 2003). The instrument was originally developed by the Royal College of Physicians, but it is based on a large body of work on the rating of technical and procedural skills, including the Objective Structured Assessment of Technical Skills (OSATS) used by the Royal College of Obstetricians and Gynaecologists (Martin et al, 1997). This has primarily focused on technical and psychomotor surgical skills in operating rooms (Moorthy et al, 2003), laboratories, and more recently virtual environments (Moorthy et al, 2003, 2005).
Proficiency in basic clinical procedures remains central to good patient care in many specialties of medicine. Despite this, there is good evidence that some doctors lack such proficiency (Holmboe, 2004). For this reason, direct observation and evaluation of competence in clinical procedures
should be a core part of the training curriculum. Studies carried out in the USA suggest that this is not currently the case and report that educational supervisors do not routinely make such observations (Holmboe, 2004).
Key research messages
Studies that consider reliability or validity when using DOPS are scarce. However, studies from the use of the OSATS and similar instruments indicate three issues. First, observation checklists are less reliable than global rating scales (Regehr et al, 1998). Second, despite this, lower reliability does not mean checklists should be totally rejected, as reliability calculations include a consideration of the amount of variance in scores. Consequently, if trainees all correctly perform the same set of procedures, this will reduce the level of calculated reliability, owing to less variance in scores. Third, as demonstrated by the OSATS, the DOPS approach has been reported to be resource- and time-intensive (Moorthy et al, 2003) – raters need to be present during procedures, and if multiple raters of the same procedure are needed, this can be difficult to arrange. For this reason, some commentators have suggested that the OSATS may be better conducted using retrospective video evaluation (Datta et al, 2002).
Undertaking DOPS with psychiatric trainees
It has been noted that psychiatric practice has fewer practical procedures than other medical specialties (Brown & Doshi, 2006). In psychiatry, DOPS could be used in its current form with psychiatric trainees for physical procedures such as: administering electroconvulsive therapy (although this may be infrequent); control and restraint techniques; proficiency in cardiopulmonary resuscitation; and physical examinations. However, if these procedures are too infrequent or difficult to schedule, the definition of a ‘practical procedure’ might be stretched to include practices such as a Mini-Mental State Examination (MMSE) or assessing suicide risk. Clearly, this second option raises important questions about the relationship between DOPS and instruments such as the mini-CEX, which also directly observe and assess aspects of these ‘procedures’.
In implementing the approach with psychiatric trainees, there are a number of actions that can also be taken with regard to DOPS. To begin with, observational training programmes can address documented basic errors in assessor observations (Holmboe, 2004) and can therefore prevent critical trainee performance issues being overlooked. One study has shown brief educational interventions in the use of observational instruments to be ineffective; it is argued that in-depth observational training is required for all assessors. Given that direct observation features in three workplace-based assessments (long case, mini-CEX and DOPS), this is a clear issue for action. Second, strategies for observing infrequent events need to be developed (Morris et al, 2006), with the situations and
contexts in which these events occur being identified in advance and made known to assessors and trainees. Third, evidence from the Royal College of Physicians (Wilkinson et al, 2008) suggests that DOPS takes a long time to undertake, and data from the workplace-based assessment pilots within psychiatric training in the UK suggest that a high number of assessments are required to attain adequate reliability (see Chapter 13). Put together, these two factors suggest significant feasibility concerns with wide application of DOPS. Therefore further work needs to be done to explore the utility of DOPS within psychiatric training, especially within the context of the development of the new Direct Observation of Non-Clinical Skills (DONCS) tool (see Chapter 7).
Case-based discussion
Case-based discussion (CbD; or Chart Stimulated Recall (CSR) as it is known in North America) uses a written patient record to both stimulate a trainee’s account of how they clinically managed a case and allow the examiner to evaluate the decisions the trainee took (and those that they ruled out). Through assessing the notes the trainee has contributed to the patient record, case-based discussion can provide useful structured feedback to the trainee.
In practice, it involves the trainee pre-selecting several written case records of patients they have recently worked with. The assessor then chooses one of these cases, with detailed consideration being given to a limited number of aspects of the case (rather than an overall case description). During the discussion, trainees explain the clinical decisions they made in relation to the patients, as well as the medical, ethical, legal and contextual issues that were considered. This is followed by assessor feedback. The entire process typically takes 20–30 min.
Key research messages
Case-based discussion is reported to have reasonable validity. In a comparative study of five assessment methods of physician competency – case-based discussion, standardised patients, structured oral examinations, OSCEs, and multiple-choice questionnaires – Norman et al (1998) established that case-based discussion was among the three methods found to have ‘superior’ reliability and validity. Meanwhile, Maatsch et al (1984), in a study of competence in emergency medicine, report concurrent validity in the relationship between physicians’ CSR scores and results from the American Board of Emergency Medicine examination.
As well as validity, CbD approaches have reasonable reliability (Norman et al, 1993). Solomon et al (1990) compared CSR with a simulated patient encounter, and concluded that it was a reliable form of assessment provided that examiners had received adequate training. Data from postgraduate training in the UK support the assertions on the reliability and validity
of CbD. The same evaluations also suggest that the tool has high user acceptability and feasibility (Booth et al, 2009; Chapter 13). Maatsch et al (1984) report that three to six cases are required to assess physician competence using the CSR, based on a study of the specialty certification examination for emergency medicine. More recent evaluations of assessment programmes have shown that this number varies depending on specialty and context (Booth et al, 2009; see Chapter 13 for a description of WPBA pilots in psychiatry). What also emerged from the literature review is that case-based discussion may be positively related to student knowledge and observational skills, with Goetz et al (1979) reporting that although student performance on chart reviews was affected by time pressures, performance improved with clinical experience. Finally, case-based discussion can be combined with review of the trainee’s record-keeping to build a more comprehensive picture of their performance (Nichols Applied Management, 2001).
However, the method does have important limitations. Jennett & Affleck (1998) note that its reliance on self-report raises questions about the accuracy of trainee recall and rationalisations of a case. Such reliability concerns can be reduced to some extent by timing the assessment soon after the trainee has seen the patient (Rubenstein & Talbot, 2003). Unless due care is exercised, case-based discussion can sometimes be used as a knowledge test rather than a test of performance. It is, therefore, important that the discussion focuses on the trainee’s explanation and decision-making processes in undertaking certain aspects of patient management rather than being turned into a viva voce. This may flag up the potential for linking case-based discussion to other assessments of the same case under consideration (such as mini-CEX or DOPS).
Journal club presentation
A medical journal club is any group of individuals who regularly meet to discuss the strengths, weaknesses and clinical application of selected articles from the medical literature (Lee et al, 2005). Modern medical journal clubs have evolved from being primarily a discursive method for trainees to keep abreast of new literature, into a forum where critical appraisal skills and evidence-based medicine are taught and applied (Ebbert et al, 2001). This has resulted in increasing interest in the role and effectiveness of journal clubs in informing academic and clinical practice, and several systematic and thematic reviews of the literature have been undertaken (Alguire, 1998; Norman & Shannon, 1998; Green, 1999; Ebbert et al, 2001; Lee et al, 2005). These reviews indicate that journal clubs may improve knowledge of clinical epidemiology and biostatistics, reading habits, and the use of medical literature in clinical practice. Interestingly, with the exception of Green (1999), there is no evidence that journal clubs have a proven role in improving critical appraisal skills. Successful journal clubs are organised
around structured review checklists, explicit written learning objectives, and formalised meeting structures and processes. A number of reviews have also recommended that journal clubs could serve as a tool for teaching and assessing practice-based competency. Lee et al (2005), for example, contend that the journal club has a familiar format, requires little additional infrastructure for assessment, and has low start-up and maintenance costs.
Key research messages
The role of the journal club in assessing the new competency blueprints of bodies such as the Accreditation Council for Graduate Medical Education (ACGME) is now taking shape (Lee et al, 2006). To our knowledge, however, no studies have considered oral presentations as a method of assessing competency, with a greater emphasis instead being placed on studies of the wider membership of the journal club. Consequently, to consider this method of assessment we have to turn to the large published literature on the assessment and evaluation of oral presentations. Unsurprisingly, numerous criteria and checklists have been proposed, including: discipline-focused criteria, for example in chemistry (Bulska, 2006), pharmacy (Spinler, 1991) and nursing (Vollman, 2005); methods’ checklists;1 and delivery and oratory guidelines (criteria developed to evaluate ‘non-content’ issues of presentations such as structure, voice audibility or body language).
Conclusions
This chapter describes the literature associated with some of the methods applied in workplace-based assessments in psychiatry. Most of these instruments are discussed in greater detail in subsequent chapters. The initial evaluation of the new workplace-based assessment system is also discussed (see Chapter 13). Overall, these tools and methods seem to provide a valid, reliable, feasible, acceptable and cost-effective means of assessing trainee competence and performance. Additionally, new methods of assessment such as the DONCS and psychotherapy assessment tools have been developed since the first introduction of WPBAs in the UK. As experience with them in postgraduate training grows, so will the literature associated with them.
1 Trainee presentations can cover a range of different research studies, each with different research methodologies. This may require examiners to have access to generic critical appraisal guidelines (Greenhalgh, 1997) and criteria for particular methods to assess the quality of the trainee’s presentation, for example the Critical Appraisal Skills Programme (www.phru.nhs.uk/casp/casp.htm) or Canadian Centre for Health Evidence (www.cche.net) (Greenhalgh, 2006).
References
Alguire, P. C. (1998) A review of journal clubs in postgraduate medical education. Journal of General Internal Medicine, 13, 347–353.
Booth, J., Johnson, G. & Wade, W. (2009) Workplace-Based Assessment Pilot Report of Findings of a Pilot Study. Royal College of Physicians.
Bozeman, D. (1997) Interrater agreement in multi-source performance appraisal: a commentary. Journal of Organizational Behavior, 18, 313–316.
Brown, N. & Doshi, M. (2006) Assessing professional and clinical competence: the way forward. Advances in Psychiatric Treatment, 12, 81–89.
Bullock, A. D., Hassell, A., Markham, W. A., et al (2009) How ratings vary by staff group in multi-source feedback assessment of junior doctors. Medical Education, 43, 516–520.
Bulska, E. (2006) Good oral presentation of scientific work. Analytical and Bioanalytical Chemistry, 385, 403–405.
Burford, B., Illing, J., Kergon, C., et al (2010) User perceptions of multi-source feedback tools for junior doctors. Medical Education, 44, 165–176.
Datta, V., Chang, A., Mackay, S., et al (2002) The relationship between motion analysis and surgical technical assessments. American Journal of Surgery, 184, 70–73.
Davies, H., Archer, J., Bateman, A., et al (2008) Specialty-specific multi-source feedback: assuring validity, informing training. Medical Education, 42, 1014–1020.
Davies, H., Archer, J., Southgate, L., et al (2009) Initial evaluation of the first year of the Foundation Assessment Programme. Medical Education, 43, 74–81.
Donato, A. A., Pangaro, L., Smith, C., et al (2008) Evaluation of a novel assessment form for observing medical residents: a randomised controlled trial. Medical Education, 42, 1234–1242.
Dugdale, A. (1996) Long-case clinical examinations. Lancet, 347, 1335.
Durning, S. J., Cation, L. J., Markert, R. J., et al (2002) Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Academic Medicine, 77, 900–904.
Ebbert, J. O., Montori, V. M. & Schultz, H. J. (2001) The journal club in postgraduate medical education: a systematic review. Medical Teacher, 23, 455–461.
Goetz, A. A., Peters, M. J., Folse, R., et al (1979) Chart review skills: a dimension of clinical competence. Journal of Medical Education, 54, 788–796.
Green, M. L. (1999) Graduate medical education training in clinical epidemiology, critical appraisal, and evidence-based medicine: a critical review of curricula. Academic Medicine, 74, 686–694.
Greenhalgh, T. (1997) How to read a paper: assessing the methodological quality of published papers. BMJ, 315, 305–308.
Greenhalgh, T. (2006) How to Read a Paper: The Basics of Evidence-Based Medicine. Blackwell.
Hall, W., Violato, C., Lewkonia, R., et al (1999) Assessment of physician performance in Alberta: the Physician Achievement Review. Canadian Medical Association Journal, 161, 52–57.
Hamdy, H., Prasad, K., Williams, R., et al (2003) Reliability and validity of the direct observation clinical encounter examination (DOCEE). Medical Education, 37, 205–212.
Higgins, R. S. D., Bridges, J., Burke, J. M., et al (2004) Implementing the ACGME general competencies in a cardiothoracic surgery residency program using 360-degree feedback. Annals of Thoracic Surgery, 77, 12–17.
Holmboe, E. S. (2004) Faculty and the observation of trainees’ clinical skills: problems and opportunities. Academic Medicine, 79, 16–22.
Holmboe, E. S., Huot, S., Chung, J., et al (2003) Construct validity of the Mini-Clinical Evaluation Exercise (MiniCEX). Academic Medicine, 78, 826–830.
Holmboe, E. S., Yepes, M., Williams, F., et al (2004) Feedback and the mini clinical evaluation exercise. Journal of General Internal Medicine, 5, 558–561.
Jennett, P. & Affleck, L. (1998) Chart audit and chart stimulated recall as methods of needs assessment in continuing professional health education. Journal of Continuing Education in the Health Professions, 18, 163–171.
Kogan, J. R., Bellini, L. M. & Shea, J. A. (2003) Feasibility, reliability, and validity of the mini-clinical evaluation exercise (mCEX) in a medicine core clerkship. Academic Medicine, 78, S33–S35.
Kroboth, F. J., Hanusa, B. H., Parker, S., et al (1992) The inter-rater reliability and internal consistency of a clinical evaluation exercise. Journal of General Internal Medicine, 7, 174–179.
Lee, A. G., Boldt, C., Golnik, K. C., et al (2005) Using the journal club to teach and assess competence in practice-based learning and improvement: a literature review and recommendation for implementation. Survey of Ophthalmology, 50, 542–548.
Lee, A. G., Boldt, C., Golnik, K. C., et al (2006) Structured journal club as a tool to teach and assess resident competence in practice-based learning and improvement. Ophthalmology, 113, 497–500.
Lipner, R. S., Blank, L. L., Leas, B. F., et al (2002) The value of patient and peer ratings in recertification. Academic Medicine, 77, S64–S66.
Lockyer, J. (2003) Multisource feedback in the assessment of physician competencies. Journal of Continuing Education in the Health Professions, 23, 4–12.
Lockyer, J. M. & Violato, C. (2004) An examination of the appropriateness of using a common peer assessment instrument to assess physician skills across specialties. Academic Medicine, 79, S5–S8.
Lockyer, J., Blackmore, D., Fidler, H., et al (2006) A study of a multi-source feedback system for international medical graduates holding defined licenses. Medical Education, 40, 340–347.
Maatsch, J. L., Huang, R. R., Downing, S., et al (1984) The predictive validity of test formats and a psychometric theory of clinical competence. Research in Medical Education, 23, 76–82.
Martin, J. A., Regehr, G., Reznick, R., et al (1997) Objective structured assessment of technical skill (OSATS) for surgical residents. British Journal of Surgery, 84, 273–278.
Mason, R., Choudhry, N., Hartley, E., et al (2003) Developing an effective system of 360-degree appraisal for consultants: results of a pilot study. Clinical Governance Bulletin, 4, 11–12.
McKinley, R. K., Fraser, R. C., van der Vleuten, C., et al (2000) Formative assessment of the consultation performance of medical students in the setting of general practice using a modified version of the Leicester Assessment Package. Medical Education, 34, 573–579.
Moorthy, K., Munz, Y., Sarker, S. K., et al (2003) Objective assessment of technical skills in surgery. BMJ, 327, 1032–1037.
Moorthy, K., Vincent, C. & Darzi, A. (2005) Simulation based training. BMJ, 330, 493–495.
Morgeson, F. P., Mumford, T. V. & Campion, M. A. (2005) Coming full circle. Using research and practice to address 27 questions about 360-degree feedback programs. Consulting Psychology Journal: Practice and Research, 57, 196–209.
Morris, A., Hewitt, J. & Roberts, C. M. (2006) Practical experience of using directly observed procedures, mini clinical evaluation examinations, and peer observation in pre-registration house officer (FY1) trainees. Postgraduate Medical Journal, 82, 285–288.
Nichols Applied Management (2001) Alberta’s Physician Achievement Review (PAR) Program: A Review of the First Three Years Report. NAM.
Noel, G. L., Herbers, J. E. Jr., Caplow, M. P., et al (1992) How well do internal medicine faculty members evaluate the clinical skills of residents? Annals of Internal Medicine, 117, 757–765.
Norcini, J. J. (2001) The validity of long cases. Medical Education, 35, 720–721.
Norcini, J. J. (2002) The death of the long case? BMJ, 324, 408–409.
Norcini, J. J., Blank, L. L., Arnold, G. K., et al (1995) The mini-CEX (clinical evaluation exercise): a preliminary investigation. Annals of Internal Medicine, 123, 795–799.
Norman, G. R. & Shannon, S. I. (1998) Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. Canadian Medical Association Journal, 158, 177–181.
Norman, G. R., Davis, D. A., Lamb, S., et al (1993) Competency assessment of primary care physicians as part of a peer review program. JAMA, 270, 1046–1051.
Olson, L. G., Coughlan, J., Rolfe, I., et al (2000) The effect of a Structured Question Grid on the validity and perceived fairness of a medical long case assessment. Medical Education, 34, 46–52.
Ponnamperuma, G. G., Karunathilake, I. M., McAleer, S., et al (2009) The long case and its modifications: a literature review. Medical Education, 43, 936–941.
Price, J. & Byrne, G. J. A. (1994) The direct clinical examination: an alternative method for the assessment of clinical psychiatry skills in undergraduate medical students. Medical Education, 28, 120–125.
Ramsey, P. G., Wenrich, M. D., Carline, J. D., et al (1993) Use of peer ratings to evaluate physician performance. JAMA, 13, 1655–1660.
Ramsey, P. G., Carline, J. D., Blank, L. L., et al (1996) Feasibility of hospital-based use of peer ratings to evaluate the performances of practicing physicians. Academic Medicine, 71, 364–370.
Regehr, G., MacRae, H., Reznick, R. K., et al (1998) Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Academic Medicine, 73, 993–997.
Rubenstein, W. & Talbot, Y. (2003) Medical Teaching in Ambulatory Care. Springer.
Solomon, D. J., Reinhart, M. A., Bridgham, R. G., et al (1990) An assessment of an oral examination format for evaluating clinical competence in emergency medicine. Academic Medicine, 65, S43–S44.
Spinler, S. A. (1991) How to prepare and deliver pharmacy presentations. American Journal of Hospital Pharmacy, 48, 1730–1738.
Thomas, P. A., Gebo, K. A. & Hellmann, D. B. (1999) A pilot study of peer review in residency training. Journal of General Internal Medicine, 14, 551–554.
Turnbull, J., Turnbull, J., Jacob, P., et al (2005) Contextual considerations in Summative Competency Examinations: relevance to the Long Case. Academic Medicine, 80, 1133–1137.
Violato, C., Marini, A., Toews, J., et al (1997) Feasibility and psychometric properties of using peers, consulting physicians, co-workers, and patients to assess physicians. Academic Medicine, 72, S82–S84.
Violato, C., Lockyer, J. M. & Fidler, H. (2008) Changes in performance: a 5-year longitudinal study of participants in a multi-source feedback programme. Medical Education, 42, 1007–1013.
Vollman, K. M. (2005) Enhancing presentation skills for the advanced practice nurse: strategies for success. American Association of Critical-Care Nurses Clinical Issues, 16, 67–77.
Wass, V. & Jolly, B. (2001) Does observation add to the validity of the long case? Medical Education, 35, 729–734.
Wass, V. & van der Vleuten, C. P. M. (2004) The long case. Medical Education, 38, 1176–1180.
Weinrich, M. D., Carline, I. D., Giles, L. M., et al (1993) Ratings of the performances of practicing internists by hospital-based registered nurses. Academic Medicine, 68, 680–687.
Weiss, A. B. (2002) The oral examination. Awful or awesome? Perspectives in Biology and Medicine, 45, 569–578.
Wilkinson, J., Benjamin, A. & Wade, W. (2003) Assessing the performance of doctors in training. BMJ, 327, s91–s92.
Wilkinson, J. R., Crossley, J. G. M., Wragg, A., et al (2008) Implementing workplace-based assessment across the medical specialties in the United Kingdom. Medical Education, 42, 364–373.
Chapter 3
Case-based discussion
Nick Brown, Gareth Holsgrove and Sadira Teeluckdharry
Case-based discussion is in part derived from, and is a close cousin to, case-based assessment, and is used in the performance assessments of the General Medical Council (GMC) and National Clinical Assessment Service (NCAS). It has been a key element of the assessment programme for psychiatrists in training under the guidance of the Royal College of Psychiatrists since 2007. Its incorporation within the systems for recertification, and thus revalidation, for psychiatrists in established practice has now been piloted. This chapter discusses the origins of the instrument in Canada and the USA before describing its use in the UK. The tool is placed within the context of contemporary postgraduate medical education and the Royal College of Psychiatrists’ curricula, offering practical guidance on how best to use this method for the assessment of reasoning and judgement. Finally, some questions are posed with regard to the potential use of case-based discussion in the proposals for revalidation.
Case-based discussion enables a documented, structured interview about a real case with a doctor who has been involved in, or is responsible for, the assessment and treatment of the patient. The starting point is either a case note or patient record for any patient in whose care the doctor has had significant involvement and responsibility, or an observed interview/assessment with a patient. Case-based discussion is a powerful educational tool for the assessment of progress, the attainment of clinical competencies and the setting and resetting of educational objectives.
Aims and application of case-based discussion
The aim of case-based discussion is to enable an assessor to provide systematic assessment and structured feedback to another doctor (usually a trainee). It can enable assessment of clinical decision-making, clinical reasoning and the application of medical knowledge to real patients for whom the doctor under assessment has direct responsibility. It is one element of the assessment programme for doctors in training in psychiatry. In addition, it is proposed to form part of the evidence used in enhanced (or strengthened) appraisal, which in turn is a fundamental part of the procedure
for recertification and revalidation for established psychiatrists. The focus of case-based discussion is the doctor’s clinical decision-making and reasoning. The method can be used for both formative (assessment for learning) and summative (assessment of learning) purposes (Postgraduate Medical Education and Training Board, 2009). Authenticity is achieved by basing the questions on the doctor’s own patients in their own workplace. The assessment is always focused solely on the doctor’s real work with their own patients and at all times is concerned with exploring exactly what was done and why, and how any decision, investigation or intervention was arrived at. A case is chosen with particular curriculum objectives in mind, and then discussed using focused questions designed to elicit responses that will indicate knowledge, skills and behaviours relevant to those domains.
Assessors require training, particularly in the areas of question design and giving feedback. This is because case-based discussion is intended to assess the doctor’s reasoning and judgement and it is important to guard against it becoming an oral test of factual knowledge or an open discussion of the patient’s problems. In postgraduate medical education, case-based discussion, in common with other workplace-based assessments (WPBAs), must always be accompanied by effective feedback to aid performance improvement, so it is essential that assessors have the skills to deliver it.
In sum, case-based discussion can be used for doctors at any level of training or experience; it is suitable for use in community, out-patient or in-patient settings. Each case-based discussion should represent a different clinical problem, sampled against need, which is in turn informed by the curriculum. When it is used in training programmes, different doctors must assess each individual trainee over the duration of their training. The assessor must be a health professional who may or may not have prior knowledge of the trainee or the case, but must be trained and accredited in the use of the instrument. The process works best if the assessor has the opportunity to review the case record in advance of the interview. A trained assessor questions the trainee about the care provided in the predetermined areas: problem definition (i.e. diagnosis), clinical thinking (interpretation of findings), management, and anticipatory care (treatment/care plans) (Southgate et al, 2001).
Given its importance, it is worth describing the origins of case-based discussion, as well as why it is an effective method of assessment and how it should be deployed. Therefore, the genesis of case-based discussion in medicine and in UK psychiatry is described below, followed by guidance on how to apply the instrument successfully and some consideration of its use with established psychiatrists.
Origins of case-based discussion
Case-based discussion is one of a family of workplace-based assessment methods that have developed from a US instrument called Chart Stimulated Recall (CSR). In CSR, an assessor, having reviewed a selection of patients’ charts (clinical records), discusses with the practitioner data-gathering,
diagnosis, problem-solving, problem management, use of resources and record-keeping to validate information gathered from the records. The CSR was shown to have good face and content validity (Jennett et al, 1995). In addition, it has been demonstrated that, with sufficient sampling, good levels of reliability (Norman et al, 1993) and, with assessor training, validity (Solomon et al, 1990) can be achieved. Furthermore, Maatsch et al (1984) demonstrated concurrent validity in the relationship between CSR scores and results from the American Board of Emergency Medicine examinations.
A major literature review on case-based discussion was undertaken by Jennett & Affleck (1998), who discuss examples of chart audit and case (note) review skills as a dimension of clinical competence extending back to the 1970s (e.g. Goetz et al, 1979). Work in the USA using CSR as part of the recertification of practising doctors showed scores that were highly correlated with performance in a clinical examination using standardised patients. It also showed that CSR scores distinguished between doctors who had been referred for fitness-to-practise procedures because of concerns and those about whom there were no concerns (Goulet et al, 2002).
Adoption and development of case-based discussion in the UK
Case-based discussion was adapted from the original work in Canada and the USA by the GMC in its performance assessment. Currently in the UK, the method is used extensively by the GMC and NCAS in their assessments of performance, as well as forming a key component of the assessment framework for foundation and specialty training across medical education.
Foundation programme
Case-based discussion has been one of the four assessment tools in the foundation programme (www.hcat.nhs.uk) since its inception in the UK as part of Modernising Medical Careers (MMC). All foundation trainees are required to undertake case-based discussions, each with a different assessor, during each year of training. The assessor may be a consultant, experienced specialty registrar, or staff or associate specialist grade (SASG) doctor. The assessor is asked to declare both the level of complexity of the case and their own experience in conducting case-based discussion. The trainee must present the trainer with two case records, selected from patients they have seen recently and in whose notes they have made an entry.
There are seven areas of competency to be assessed and the guidance includes descriptors for the rating of a satisfactory trainee. The seven competency areas are: medical record-keeping; clinical assessment; investigation/referrals; treatment; follow-up and future plan; professionalism; and overall clinical care. It should be noted that, although, as discussed above, the review of records is a separate matter, it has been incorporated into the same schedule. Descriptors accompany each domain in order to assist the assessor in rating the trainee. The trainee should be scored in comparison with peers at the
same stage of training (i.e. satisfactory for a year 1 or a year 2 trainee). It is also made clear in the guidance that the system is to reflect the trainee’s incremental development, so that ratings below ‘meets expectations for FY1 or FY2 completion’ (foundation training year 1 or 2) will be in keeping with the trainee’s level of experience early in the year. Early versions of case-based discussion utilised a 9-point Likert scale. These were modified because, although the approach was theoretically the correct one, the impact for trainees of receiving scores of one or two against a satisfactory score of four and a maximum of nine was demoralising and demotivating.
A single case-based discussion has been found to take approximately 15–20 min. Early evidence suggested that three to six case-based discussions selected from a sample representative of the doctor’s range of practice may be enough to provide reliable and valid assessment for a given phase of training (Jennett & Affleck, 1998). Foundation trainees are advised to undertake six to eight assessments every year.
The initial results concerning the use of case-based discussion in the foundation programme have now been reported in a summary of the WPBAs in postgraduate training of physicians in the UK. Data are presented suggesting good levels of reliability for case-based discussion with four, but preferably eight, cases (Davies et al, 2009). In addition, Davies and colleagues estimated the correlations between the WPBA methods to explore hypotheses based on the intended focus of assessment as evidence for construct validity. Their findings supported these hypotheses. They also established that case-based discussion scores increased between the first and second half of the year, indicating validity. Early findings from the Royal College of Physicians support the notion of case-based discussion as a valid and reliable instrument in the assessment of doctors’ performance (Booth et al, 2009).
Development in specialty training
The Royal College of Psychiatrists has taken the WPBA methods already in use in the foundation programme and, where necessary, modified the criteria to suit psychiatric training; case-based discussion is one of these methods. The College curricula for specialty training in psychiatry currently use case-based discussion. In addition, a number of variants on the theme are used in encounters with simulated or standardised patients (including incognito standardised patients, so-called ‘mystery customers’) in clinical settings in many other specialties and in many different countries. Other variants on case-based discussion can be used away from the workplace, in formal examinations such as the Objective Structured Clinical Examination (OSCE) and the Clinical Assessment of Skills and Competencies (CASC).
The College assessment programme comprises the formal MRCPsych examinations and a range of workplace-based assessments. These are described in detail on the College website (www.rcpsych.ac.uk/training.aspx and www.rcpsych.ac.uk/exams.aspx), which is regularly updated. Within this programme, the WPBAs are principally formative in nature, but they also have a summative
function in that they can provide evidence of attaining the required standards for completing a phase of training via the Annual Review of Competence Progression (ARCP) (Modernising Medical Careers, 2007, 2008). In addition to the domains used in the foundation programme, three items were added in case-based discussion for specialty training (risk assessment and management, overall clinical care and assessment based on the trainee’s stage of training).
The use of this method for trainees in psychiatry presents significant practical challenges. Trainees, especially in early years, do not have a significant individual case-load. They work with a varying degree of independence and autonomy. At least two factors come into play here. The first is the close clinical working relationship between the trainee and their supervising consultant. This has been enshrined in the College’s operational guidance for training. The stated conditions for clinical supervision mean that trainees, rightly, do not undertake out-patient appointments or ward rounds as a routine without consultant presence. The second factor is the changing nature of services themselves, particularly with regard to emergency services. The pattern is increasingly one of multiprofessional working in functionally determined teams. Out-of-hours and emergency care is provided by crisis teams and, although senior trainees may be a part of these teams, taking a full role in assessment and clinical decision-making, junior trainees are often in the position of simply processing care plans drawn up by others. Taken together, this would suggest that gaining a full sample of cases that can adequately assess the trainee’s independent ability to form judgements and solve problems will not always be as straightforward as it might be in other specialties, and certainly requires a high degree of planning from trainer and trainee.
Assessment programmes in contemporary postgraduate medical education
Before proceeding with further consideration of the case-based discussion itself, it is useful to place it within the context of contemporary thought on assessment in medical education. Postgraduate medical education is a growing field with a language, set of terms and, some would say, fashions of its own. It is important to note that contemporary best practice in assessment in medical education is moving in a direction that is significantly different to the traditional model. These issues are discussed in detail elsewhere, particularly in the seminal paper by van der Vleuten (1996) and later papers by van der Vleuten and Schuwirth (van der Vleuten & Schuwirth, 2005; Schuwirth & van der Vleuten, 2006a,b). They have also been discussed and developed in a broader context by Holsgrove & Davies (2007, 2008). However, put briefly, there are two main issues. One is the point raised in van der Vleuten’s earlier paper (1996) that assessment is not a measurement issue intended, for example, to reduce clinical competence into its supposed component parts and express the levels of attainment in
numerical terms. Instead, it should be seen as a matter of educational design aimed at assessing clinical competence as a global construct. Furthermore, assessments should have a variety of purposes (Southgate & Grant, 2004; Postgraduate Medical Education and Training Board, 2008). These purposes include feedback – and not just feedback to the person being assessed, but also to teachers, assessors and other interested parties (Holsgrove & Davies, 2008).
This philosophy leads us to the concept of a utility model in which assessment programmes are designed to suit a particular set of requirements and circumstances, with context-dependent compromises made where necessary. This model (introduced here in Chapter 1, pp. 7–10) recognises that assessment characteristics are weighted depending on the nature and purpose of the assessment (van der Vleuten et al, 2005). Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies; thus alignment with the curriculum and its aims is essential. In sum, this model suggests that the fundamentals of a successful assessment programme are adequate sampling across judges, instruments and contexts, which can then ensure both validity and reliability.
The assessment programme designed to support the College curricula includes both formal (MRCPsych) examinations and workplace-based assessments. Case-based discussion is among the WPBAs in that programme and, along with the observed assessment (ACE or mini-ACE), it assesses the very heart of a doctor’s daily clinical performance. It is necessary to consider further the context of modern medical education in order to understand better the importance of the curriculum for sampling and to determine what trainers and trainees need to know about how best to implement this assessment instrument.
It is worthwhile to remember once more some of the context from which case-based discussion has originated. As with many assessments in contemporary medical education, it is important to remember that case-based discussion is attempting to serve both formative (‘How am I doing?’) and summative (‘Have I passed or failed?’) assessment purposes. It is therefore an instrument both for and of learning (Postgraduate Medical Education and Training Board, 2009). The content and application of learning, and the outcomes of assessment, are themselves defined in the curriculum. Therefore, the selection and use of learning and assessment instruments must always be undertaken with the curriculum in mind. This, although apparently very obvious, is in sharp contrast with learning and assessment in the past. Consideration of the place of case-based discussion is therefore an exemplar of the need for trainers (and trainees) to be fully aware of the curriculum in setting learning plans with intended learning outcomes that match curriculum outcomes and are subject to continuing assessment that is carefully planned and not simply a spontaneous occurrence! The curriculum is not simply an examination syllabus or a list of things that a trainee is supposed to learn. Equally, assessment is no longer solely a series of formal, high-stakes examinations, although the latter remain critical in
the setting and monitoring of national and international standards. The Postgraduate Medical Education and Training Board (PMETB), for all its early shortcomings, has undoubtedly raised the quality of postgraduate medical curricula and assessment. For the individual trainer and trainee (together or alone) the fundamental need is to understand what case-based discussion can do, how it fits with other assessments into an individual educational programme (remember, the educational cycle includes assessment!) and, crucially, how and when to use it for best impact.
The use of case-based discussion – practical guidance
Planning: first stage
The fundamental nature of case-based discussion is that the doctor’s own patients are used as the starting point for a discussion that looks into that doctor’s applied knowledge, reasoning and decision-making. This is an assessment instrument that probes into the doctor’s clinical reasoning; it may be described as a structured interview designed to explore professional judgement exercised in clinical cases. Professional judgement may be considered as the ability to make holistic, balanced and justifiable decisions in situations of complexity and uncertainty. Case-based discussion can explore a full range of these issues, for instance the ability to recognise dilemmas, see a range of options, weight these options, decide on a course of action, explain the course of action and assess its results. It draws on practical aspects of a doctor’s routine clinical activity either in the form of documented observed practice (e.g. a mini-ACE or ACE would make very suitable materials) or entries made in the case notes and subsequent discussion of the case. Based on such records of a patient recently seen by the trainee, and to whose care they have made a significant contribution, case-based discussion is conducted as a structured discussion. A few days ahead of the scheduled case-based discussion, either the trainee gives the assessor two sets of suitable notes or they agree on the discussion stemming from an observed interview. The assessor reads the notes and selects one for the case-based discussion.
The potential curriculum domains and specific competencies should be mapped at the outset. In their initial meetings trainers and trainees will find it invaluable to draw up a blueprint (either one prepared themselves or one from their postgraduate school) upon which to plot their learning plan, including specific learning objectives and assessments. This will ensure full and proper curriculum coverage. Case selection should be kept simple but must enable good curriculum coverage. A grid may be constructed which plots cases, questions from the case-based discussion and curriculum objectives (Fig. 3.1).
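Purely as an illustration of this blueprinting idea – the case identifiers and objective labels below are drawn from Fig. 3.1, but the code itself is a hypothetical sketch rather than part of the College’s documentation – a coverage grid can be thought of as a simple mapping from selected cases to the curriculum objectives their questions will probe:

```python
# Hypothetical sketch of a case-based discussion planning grid: each selected
# case record is mapped to the curriculum objectives its questions will probe.
planning_grid = {
    "RF1": {"communication with patients", "record-keeping"},
    "RF2": {"assessment of the patient's condition", "arranging referrals"},
    "RF3": {"working within legal frameworks", "teamwork"},
}

# Objectives the learning plan says must be covered in this placement
# (an illustrative subset of the objectives listed in Fig. 3.1).
required_objectives = {
    "communication with patients", "record-keeping", "teamwork",
    "assessment of the patient's condition", "arranging referrals",
    "working within legal frameworks", "efficient use of resources",
}

covered = set().union(*planning_grid.values())
print("Not yet covered:", required_objectives - covered)
# -> {'efficient use of resources'}: a further case should be selected for it.
```

Whether kept on paper or electronically, the point is the same: cases are chosen so that, across the planned discussions, every intended objective is sampled at least once.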
Fig. 3.1 Case-based discussion planning grid (source: Brown et al, 2011). The grid plots patient identifiers (RF1–RF12) against curriculum/performance objectives: respect for patients, trust and confidentiality; communication with patients; assessment of patients’ condition (history-taking, mental state examination); providing or arranging investigations; providing or arranging treatment and care; treatment in emergencies; working within limits of competence; record-keeping; teamwork; arranging referrals; educational activities; constructive participation in audit, assessment and appraisal; teaching and training; working within legal frameworks; efficient use of resources.
Planning: second stage
In the next stage the assessor writes out the questions that they intend to ask. This may at first sight appear over-prescriptive; however, it is critical
case-based discussion
35
brown et al
that the discussion retains its focus. There are false trails to be avoided, for example into discussing the patient’s problems, reviewing the notes themselves, or turning the case-based discussion into a factual viva. Good practice is to prepare about three questions per case, which will facilitate coverage of more than one competency with the same case. Some thoughts and suggestions on construction of questions are offered in Box 3.1.
Discussion
It is important to ensure that the doctor being assessed has enough time to review the records and refresh their memory before the case-based discussion starts, and that the records are present throughout the discussion for reference. Questioning should commence with a reminder of the case and, ideally, the competency and/or curriculum domain that is being covered, and should begin in an open fashion. The subsequent discussion is then anchored on this particular case, with a clear aim of assessing against a curriculum domain or intended learning outcome. This is facilitated by the assessor, who might use prompts such as those in Box 3.1. There is no set structure for the discussion, but some helpful prompts are listed in Box 3.2.
Box 3.1 Structured question guidance
1 Problem definition: What are the issues that arose in this case? What conflicts were you trying to resolve?
2 Integration of information: What relevant information did you have available? Why was this relevant? How did the information or evidence that you had available help you? What other information could have been useful?
3 Consideration of options: What were your options? Which one did you choose? Why did you choose this one? What are the advantages/disadvantages of your decision? How do you balance them?
4 Consideration of implications: What are the implications of your decision? For whom? How might they feel about your choice? How does this influence your decision?
5 Justification of decision: How do you justify your decision? What evidence do you have to support your choice? Can you give an example? Are you aware of any model, framework or guidance that assists you? Some might argue with your decision, how might you engage them?
6 Ethical practice: What ethical framework did you refer to in this case? How did you apply it? How did you establish the patient’s/service user’s point of view? What are their rights? How did you respond?
7 Team working: Which colleagues did you involve in this case? Why? How did you ensure that you had effective communication with them? Who could you have involved? What might they have offered? What is your role?
8 ‘Duties of a doctor’: What are your duties and responsibilities? How did they apply to this case? How did you observe them?
It can be clearly seen that the major contribution that case-based discussion makes to the overall assessment programme is that it allows the assessor to explore clinical reasoning and professional judgement. However, it is important again to distinguish case-based discussion from a traditional viva because a case-based discussion must not be a viva-type interaction exploring the trainee’s knowledge of the clinical problem. Instead, it must focus on what is in the notes, and the trainee’s thinking in relation to the diagnosis and management of the case.
Box 3.2 Helpful prompts for planning a case-based discussion
1 General
• Please tell me about this meeting/visit/appointment, or
• Please tell me about your approach to the patient’s presenting problem, or
• What were the key points about this meeting/visit/appointment?
2 Assessment/diagnosis
• What specific features led you to this impression/conclusion or diagnosis?, and/or
• What other conditions have you considered/ruled out?
3 Investigation/referrals
• What specifically led you to choose these investigations?, and/or
• Were there any other investigations or referrals that you considered?
4 Therapeutics
• What specific features led you to the management/therapeutics that you chose?, and/or
• Were there any other treatments that you thought about or ruled out?
5 Follow-up/care plan
• What decisions were made about follow-up (to this entry)?, and
• What were the factors that influenced this decision?
6 Monitoring chronic illness
• In your care of X, have you discussed the monitoring of his/her progress?, and/or
• Do you think that there are some monitoring strategies that would be appropriate?, and/or
• Have you discussed any health promotion strategies (alcohol use, diet)?
7 Individual patient factors around care
• Was there anything particular/special about this patient that influenced your management decisions (e.g. demographic characteristics, psychosocial issues, past history, current medications and treatment)?, and/or
• On reflection, is there anything about this patient that you wish you knew more about?
8 Care setting
• Is there anything about the setting in which you saw the patient (e.g. home, ward, accident and emergency department) that influenced your management?, and/or
• In considering this case, what changes would improve your ability to deliver care to this patient?
Judgement
The task of the assessor is to make judgements on the performance of the doctor under assessment. Essentially, a judgement must be made about whether a performance is satisfactory, and for doctors in training this is referenced to their stage of training. For doctors in training there is an emphasis on competence as opposed to performance, and the following breakdown into areas or domains therefore becomes important in making an assessment and guiding feedback. It should be borne in mind, however, that the emphasis on competence is itself the subject of considerable debate in medical education (Grant, 1999). The various domains assessed in the case-based discussion, together with the performance descriptors to be used, are listed in Boxes 3.3–3.11. For each set of descriptors, the fourth one (in bold) describes a satisfactory trainee.
Box 3.3 Case-based discussion: clinical record-keeping performance descriptors
1 Very poor, incomplete records; might be unsystematic, almost illegible, not comprehensible, unsigned, undated and missing important detail
2 Poor records; signed and dated, but poorly structured, not adequately legible or comprehensible, and missing some important details
3 Structured, signed and dated, but incomplete, although without major omissions
4 Structured, signed and dated; legible, clear and comprehensible with no important omissions
5 Very clear, structured records, signed and dated, in which all the relevant information is easy to find
6 Excellent records with no flaws at all
Box 3.4 Case-based discussion: clinical assessment (including diagnostic skills) performance descriptors
1 Fails to obtain or interpret clinical evidence correctly; gross omissions in assessment and differential diagnoses considered
2 Several omissions and/or poor understanding of differential diagnosis. Fails to obtain or interpret clinical evidence adequately
3 A reasonably good clinical assessment, but missing some relevant details, or marginally inadequate differential diagnosis
4 A good clinical assessment showing satisfactory diagnostic skills based on appropriate evidence from, for example, history, examination and investigations. Appropriate diagnosis and spread of suggestions in the differential diagnosis
5 A good clinical assessment and differential diagnosis based on good history-taking, examination, investigations, etc
6 A thorough, accurate, and appropriately focused clinical assessment and diagnosis demonstrating excellent assessment and diagnostic skills
Box 3.5 Case-based discussion: medical treatment performance descriptors
1 Unacceptably inadequate or inappropriate medical treatment
2 Very poor treatment, inadequate or inappropriate
3 Some inadequacies in medical treatment plan, but no major failings
4 Adequate and appropriate medical treatment
5 Well thought-out medical treatment
6 Excellent, carefully considered medical treatment
Box 3.6 Case-based discussion: risk assessment and management performance descriptors
1 Fails to assess risk to the patient or others
2 A poor and inadequate assessment of risk or failure to understand the significance of risk-assessment findings
3 Barely adequate assessment of risk or understanding of the significance of findings
4 An adequate risk assessment leading to an appropriate management plan, including consideration of risks to the patient and others
5 A good assessment of potential risks to themselves, the patient and others leading to a good, safe management strategy that is well communicated to all concerned
6 A very thorough and appropriate risk assessment, excellently documented, with a very good management strategy (if appropriate, including alternative options) properly communicated to all the appropriate individuals
Box 3.7 Case-based discussion: investigation and referral performance descriptors
1 Little or no proper investigation; referral not made or made inappropriately
2 Inadequate or inappropriate investigation; unsatisfactory referral
3 Investigation barely adequate, although it should include gathering some information from relatives, carers or other appropriate third parties; referral might be appropriate or not
4 Adequate investigation and appropriate referral. Investigation includes talking to relatives, carers and any other appropriate third parties
5 Appropriate and timely investigation including information from relatives, carers and other appropriate third parties. The best available referral option chosen and appropriately made
6 Excellent selection and implementation of investigations and interpretation of findings; best available referral option chosen and appropriately made
Box 3.8 Case-based discussion: follow-up and care planning performance descriptors
1 Total lack of care planning and follow-up; unacceptable performance
2 Little thought given to follow-up and care planning; care plans not properly recorded and communicated
3 Barely adequate follow-up and care planning
4 Satisfactory arrangements made, recorded and communicated for follow-up and planned care
5 Thoughtful and appropriate arrangements for follow-up and care plan, correctly recorded and communicated
6 Excellent and highly appropriate care planning and follow-up arrangements, with proper documentation and communication
Box 3.9 Case-based discussion: professionalism performance descriptors
1 Evidence of an unacceptable lack of professional standards in any aspect of the case
2 Not seriously unprofessional, but nevertheless clearly below the required standard
3 Not quite up to the required professional standards, perhaps through an occasional lapse
4 Appropriate professional standards demonstrated in all aspects of the case
5 Evidence of high professional standards in several aspects of the case, and never less than appropriate standards in the others
6 Evidence of the highest professional standards throughout the case – a role model for others to learn from
Box 3.10 Case-based discussion: clinical reasoning (including decision-making) performance descriptors
1 Practically no evidence of appropriate clinical reasoning or adequate decision-making; unsafe
2 Poor reasoning or decision-making, clearly below the required standard
3 Clinical reasoning and/or decision-making below the required standard, but not dangerously so
4 Good, logical clinical reasoning and appropriate decision-making
5 Insightful clinical reasoning and good decision-making
6 Excellent clinical reasoning taking proper account of all the relevant factors leading to decision-making that will result in a very high standard of clinical care
Box 3.11 Case-based discussion: overall clinical care performance descriptors
1 Serious concern over the standard of clinical care demonstrated in this case – unsafe and probably unfit for practice
2 Generally a poor standard of clinical care; perhaps due to one or more major shortcomings. There might be a few adequate aspects, but nevertheless clearly substandard overall
3 Clinical care below the required standard, but with no evidence of major inadequacy or oversight
4 Clinical care of the required high standard, although possibly allowing a few minor shortcomings
5 A high standard of clinical care demonstrated, with practically no shortcomings
6 Evidence of excellent clinical care in all aspects of the case – a role model
Feedback
It is axiomatic that a well-conducted assessment must be accompanied by timely and effective feedback that clearly informs the doctor being assessed not only about their level of performance but also, and more importantly, about their next developmental needs, i.e. what they need to learn or improve (Brown & Cooke, 2009).
Specific assessor skills and questions
Case-based discussion needs preparation from both the trainer and the trainee. It is important to select cases very carefully and to look at the case before the meeting. Assessors must not try to make it up as they go along. The assessment can be quite challenging and a trainer may find an early difficulty in suppressing all those urges to say ‘What if?’. Case-based discussion is an assessment of what the doctor did with a particular patient, not what they might have done. So one can ask what they did and why, what evidence they have for that action, and even what their next step will be, but assessors should not go down the line of hypothetical exploration. Colleagues in all specialties have discovered that case-based discussion needs considerable practice to develop new skills and the discipline not to slip into tutorial or viva modes of questioning.

Case-based discussion tests what the doctor actually did rather than what they think they might do (which is what a viva or an OSCE might test). Thus, case-based discussion assesses at a higher level on Miller’s pyramid than most other assessments because it tests ‘what the doctor or trainee did’ as opposed to what they ‘show or know they can do’ (Miller, 1990). As a general rule, assessors should avoid theoretical ‘What would you do if…’ questions and stick to assessing ‘What did you do and why?’. This is particularly important with regard to case-based discussion.

The discussion and feedback should take around 25–30 min in total, of which about 10 min should be reserved for feedback.
Training of assessors
There is surprisingly little data on the effect of training assessors in undertaking case-based discussion or CSR assessments. Trainers will need to be taught not only to assess the oral component of case-based discussion but also to appraise the case record entries in a standardised manner. As with other assessment tools, some training will be required in the technical aspects of case-based discussion/CSR. Assessors might also need some pointers with regard to the probing skills that will aid in exploring the trainee’s thought process behind certain decisions. This will also help in further shaping effective feedback. The College has commenced some training, but our personal experience and that of the NCAS strongly support the need for detailed initial training, particularly with regard to the construction of questions, followed by continuous updating in the form of skills-based workshops.
Educational programme (including assessment)
Case-based discussion can, and indeed should, be used in conjunction with other assessment instruments, in particular those assessing observed clinical practice (ACE and mini-ACE) and written clinical material such as out-patient letters (Crossley et al, 2001), as well as the case presentation assessment used in the College curriculum (Searle, 2008). Moreover, as well as being a very useful assessment instrument, case-based discussion can also be an effective learning method using, for example, multimedia adaptation of cases (Bridgemohan et al, 2005) or interdisciplinary discussion.
Case-based discussion and revalidation
Revalidation is a set of procedures operated by the GMC to secure the evaluation of a medical practitioner’s fitness to practise as a condition of continuing to hold a licence to practise (adapted from the Medical Act 1983 (Amendment) and Miscellaneous Amendments Order 2006). A strengthened or enhanced appraisal system that is consistently operated and properly quality assured lies at the core of the procedures for revalidation. Although appraisal must retain a formative element for all doctors, it will also take account of a doctor’s performance on an ‘assessment’. The Royal College of Psychiatrists has proposed that case-based discussion be used as one method for the assessment of performance (Mynors-Wallis, 2010).

There is some literature with respect to the use of case-based discussion in informing learning plans for continuing professional development/continuing medical education for psychiatrists in Canada. There it was found to be a promising tool guiding both the setting of educational objectives and course design (Spenser & Parikh, 2000). However, some concerns were expressed about the feasibility of using the tool for large populations of doctors.
Pilots have now been undertaken in the UK and preliminary results are available (see Chapter 13), but clearly questions remain with regard to the full range of considerations. There is obvious validity, as there is in the training context, because the assessment focuses on professional judgement, which is the bread and butter of the established psychiatrist. Nevertheless, there are clear concerns with regard to the choice, training and continuing accreditation of assessors, which in turn poses questions about reliability. There is a need to consider carefully the number of cases and the sampling across the individual’s full scope of practice in the context of feasibility. These issues present a serious challenge to the idea that reasonable and potentially defensible evidence about performance will flow into an appraisal system and thus towards high-stakes decisions on fitness for purpose as a psychiatrist.
Conclusions
Case-based discussion has developed from its first use in Canada and the USA to become an important part of the assessment of clinical competence and overall performance. It is an assessment of reasoning, exploring why a psychiatrist took a particular course of action at a particular time. It is authentic, feasible and useful. Its reliability has not yet been formally tested but the evidence suggests that reliability is enhanced by using as many cases and properly trained assessors as possible. Utility is then improved further by the pre-planning of questions and acceptable answers by assessors who fully understand the nature of both the doctor’s practice and the relevant curriculum. This presents a clear challenge to those with responsibility for postgraduate medical education and medical management in providing the right high-quality training as a priority, including continuing training for assessors so that these conditions may be met.
References
Booth, J., Johnson, G. & Wade, W. (2009) Workplace-Based Assessment Pilot Report of Findings of a Pilot Study. Royal College of Physicians.
Bridgemohan, C. F., Levy, S., Veluz, A. K., et al (2005) Teaching paediatric residents about eating disorders: use of standardised case discussion versus multimedia computer tutorial. Medical Education, 39, 797–806.
Brown, N. & Cooke, L. (2009) Giving effective feedback to psychiatric trainees. Advances in Psychiatric Treatment, 15, 123–128.
Brown, N., Holsgrove, G. & Teeluckdharry, S. (2011) Case-based discussion. Advances in Psychiatric Treatment, 17, 85–90.
Crossley, J., Howe, A., Newble, D., et al (2001) Sheffield Assessment Instrument for Letters (SAIL): performance assessment using out-patient letters. Medical Education, 35, 1115–1124.
Davies, H., Archer, J., Southgate, L., et al (2009) Initial evaluation of the first year of the Foundation Assessment Programme. Medical Education, 43, 74–81.
Goetz, A. A., Peters, M. J., Folse, R., et al (1979) Chart review skills: a dimension of clinical competence. Journal of Medical Education, 54, 788–796.
Goulet, F., Jacques, A., Gagnon, R., et al (2002) Performance assessment: family physicians in Montreal meet the mark! Canadian Family Physician, 48, 1337–1344.
Grant, J. (1999) The incapacitating effects of competence: a critique. Advances in Health Sciences Education, 4, 271–277.
Holsgrove, G. & Davies, H. (2007) Assessment in the foundation programme. In Assessment in Medical Education and Training (eds N. Jackson, A. Jamieson & A. Khan). Radcliffe.
Holsgrove, G. & Davies, H. (2008) Assessment in medical education and training. In A Guide to Medical Education and Training (eds Y. Carter & N. Jackson). Oxford University Press.
Jennett, P. A. & Affleck, L. (1998) Chart audit and chart stimulated recall as methods of needs assessment in continuing professional health education. Journal of Continuing Education in the Health Professions, 18, 163–171.
Jennett, P. A., Scott, S. M., Atkinson, M. A., et al (1995) Patient charts and physician office management decisions: chart audit and chart stimulated recall. Journal of Continuing Education in the Health Professions, 15, 31–39.
Maatsch, J. L., Huang, R. R., Downing, S., et al (1984) The predictive validity of test formats and a psychometric theory of clinical competence. Research in Medical Education, 23, 76–82.
Miller, G. E. (1990) The assessment of clinical skills/competence/performance. Academic Medicine, 65 (suppl. 9), S63–S67.
Modernising Medical Careers (2007) A Reference Guide for Postgraduate Specialty Training in the UK (The Gold Guide). MMC.
Modernising Medical Careers (2008) A Reference Guide for Postgraduate Specialty Training in the UK (The Gold Guide, Second Edition). MMC.
Mynors-Wallis, L. (2010) Revalidation Guidance for Psychiatrists (College Report CR161). Royal College of Psychiatrists.
Norman, G. R., Davis, D. A., Lamb, S., et al (1993) Competency assessment of primary care physicians as part of a peer review program. JAMA, 270, 1046–1051.
Postgraduate Medical Education and Training Board (2008) Standards for Curricula and Assessment Systems. PMETB.
Postgraduate Medical Education and Training Board (2009) Workplace Based Assessment – A Guide for Implementation. PMETB.
Schuwirth, L. W. T. & van der Vleuten, C. P. M. (2006a) A plea for new psychometric models in educational assessment. Medical Education, 40, 296–300.
Schuwirth, L. W. T. & van der Vleuten, C. P. M. (2006b) Challenges for educationalists. BMJ, 333, 544–546.
Searle, G. F. (2008) Is CEX good for psychiatry? An evaluation of workplace-based assessment. Psychiatric Bulletin, 32, 271–273.
Solomon, D. J., Reinhardt, M. A., Bridgham, R. G., et al (1990) An assessment of an oral examination format for evaluating clinical competence in emergency medicine. Academic Medicine, 65, S43–S44.
Southgate, L., Cox, J., David, T., et al (2001) The General Medical Council’s performance procedures: peer review of performance in the workplace. Medical Education, 35, 9–19.
Southgate, L. & Grant, J. (2004) Principles for an Assessment System for Postgraduate Training. PMETB.
Spenser, H. R. & Parikh, S. V. (2000) Continuing medical education [letter]. Canadian Journal of Psychiatry, 45, 297–298.
van der Vleuten, C. P. M. (1996) The assessment of professional competence: theoretical developments, research and practical implications. Advances in Health Sciences Education, 1, 41–67.
van der Vleuten, C. P. M. & Schuwirth, L. W. T. (2005) Assessing professional competence: from methods to programmes. Medical Education, 39, 309–317.
Chapter 4
The mini-Assessed Clinical Encounter (mini-ACE)
Nick Brown
The current shift in emphasis in teaching and learning in medicine is towards outcome-based learning. This means that greater importance is placed upon the daily clinical performance of the doctor. The direct observation of the interaction between the doctor and patient is at the heart of any schedule for the assessment of a doctor in clinical practice. The mini-Assessed Clinical Encounter (mini-ACE) was introduced with a particular eye to the learning and assessment needs of doctors at an early stage in career development, in psychiatry as a whole or within a chosen specialty. The mini-ACE enables a structured observation of an aspect of clinical practice to occur, with accompanying assessment and feedback on defined areas of competence. The tool is of limited value for more advanced trainees or established psychiatrists because of its emphasis on competence rather than performance, and because it lacks content validity for the work of more senior practitioners, focusing as it does on discrete elements of the assessment of a patient rather than on the whole.
Description
The mini-ACE, which has been developed from the mini-Clinical Evaluation Exercise (mini-CEX), is a method both for assessing the clinical skills of the trainee and for offering immediate feedback. It involves a single senior health professional (almost always a doctor) observing a trainee while they conduct a patient assessment in any of a variety of settings. The mini-CEX was itself a modification of the traditional long-case assessment in which, rather than completing a full assessment, the trainee conducts a focused history and/or mental state/physical examination. After asking the trainee for a diagnosis and treatment plan, the assessor rates the trainee using a structured format and then provides educational feedback. Each trainee must be assessed on several different occasions, by different assessors and over a range of conditions and settings. The mini-ACE should be conducted as a routine part of the clinical and educational programme.
The mini-ACE is specifically designed to be relatively short and easy to carry out in normal clinical practice. The whole session should take around 30 min, with the time distributed between the observation of the clinical encounter (approximately 15 min) and the summary, feedback and completion of documentation. The mini-ACE has the capability to assess trainees in a very broad range of clinical situations, certainly greater than the long-case assessment (or the Assessment of Clinical Expertise, ACE; see Chapter 5). However, it may be more difficult to administer because multiple encounters must be scheduled for each trainee. Exclusive use of the mini-ACE format also prevents trainees from being observed while performing the complete history and examination, the hallmark of clinical practice in psychiatry. The Royal College of Psychiatrists therefore encourages the use of the mini-ACE in conjunction with, rather than as an alternative to, the traditional long-case assessment.
Background
The long-case assessment is designed to assess and provide feedback, by observing an actual clinical encounter, on skills essential to the provision of good and safe clinical care. The mini-ACE is a snapshot of a clinical interaction between doctor and patient; not all elements need to be assessed on each occasion.

The initial experience in long-case assessments comes from the Clinical Evaluation Exercise (CEX), which was designed by the American Board of Internal Medicine for assessing trainee doctors at the patient’s bedside. The CEX is an oral examination whereby the trainee is observed by one physician completing a full history and examination to reach a diagnosis and plan for treatment. The CEX has clear strengths (these will be discussed in Chapter 5), which include the high content and face validity of the assessment format, the opportunity for instant feedback from an expert clinician and the comprehensive and realistic nature of the assessment. Trainee performance in a real clinical situation with a real patient is assessed. This is in contrast to the Objective Structured Clinical Examination (OSCE), where the clinical situation is simulated; the OSCE is often used to assess demonstrated clinical skills. However, as trainees approach more specialist and more independent practice, their assessment needs to involve real patients who exhibit the full range of conditions in the equally full variety of clinical settings (with their attendant day-to-day pressures).

But despite these strengths it is increasingly clear that the CEX has limited generalisability (Kroboth et al, 1992; Noel et al, 1992) because it is restricted to one patient and one assessor and hence it is a snapshot view, vulnerable to rater bias. Still, as seen with the data on the traditional long-case examination, which the CEX resembles, greater reliability can be achieved by increasing the sample of assessments performed by a single trainee. The issue raised then is one of
feasibility, i.e. the ability to perform, in day-to-day clinical life, a number of these tests of clinical competence. The mini-CEX was in large part a response to some of these shortcomings of the long-case assessment. Its origins can be seen in the sort of interactions that have long been a part of medical life and training wherein senior doctors observe trainees during ward and teaching rounds. The mini-CEX has been used in a number of countries, settings and clinical specialties as well as at different levels of training. It has been demonstrated to have good reproducibility (Norcini et al, 1995), validity and reliability (Kroboth et al, 1992; Durning et al, 2002; Kogan et al, 2003) in general medicine (see Chapter 2 for more details on the evidence base).
Foundation programme assessment – mini-CEX
The mini-CEX tool can be used to assess a range of competencies, including history-taking, physical examination, mental state examination, professionalism, clinical judgement, communication skills, organisational efficiency and overall clinical care.
Undertaking the assessment
A foundation trainee will undertake six to eight mini-CEX assessments over the course of their training, and these will be based on items in the curriculum. Each one will be rated by a single (and each time different) assessor and will not be an assessment of skills examined previously. The assessor does not need to have any prior knowledge of the trainee. The process is trainee-led: the trainee chooses all aspects, including the assessor. The assessor must be clear in following the guidance for the exercise and give a clear and honest opinion of the trainee’s performance with reference to the case at issue only. The majority of assessors are medical but assessments may be performed by other suitable and trained healthcare professionals who feel confident to assess around a particular case. The patient must be made aware that the mini-CEX is being carried out.

The skills being assessed are predefined. Rather than undertaking a full history and examination, the trainee is asked to conduct a focused interview and examination; for example, they may be asked to assess the suicidal intent of a patient. The assessment occurs in settings in which one would normally see patients (such as in out-patient clinics or on the wards) and enables immediate direct feedback to the trainee. Ratings are made against descriptors provided and assessors are required to describe the complexity of the case and the focus of the clinical encounter, and declare the number of previous mini-CEX assessments they have observed and rated. Feedback is given immediately after completion of the trainee–patient encounter so that strengths, weaknesses and areas for development can be agreed to enable any adjustment of the educational plan that is required.
Training for assessors
Preparatory training for assessors used to be provided in two forms. The first was written only and consisted of guidance notes alongside suggestions for further reading and direction to a DVD/video available online for personal study. The second was (and still is) deanery-based workshops. The aims of the workshops would be familiar to those who have been involved with any form of examiner/assessor training: to reduce common errors (e.g. being too harsh or too lenient), to understand the dimensions of the assessment and the standards of assessment, and to improve the accuracy of ratings.
Evaluation
The mini-CEX continues to be evaluated as part of the schedule for assessment in the MMC foundation programme. Davies et al (2009) report that 3592 trainees completed 19 102 assessments (giving an average of 5.3 each), using 8728 assessors (mainly specialist registrars and consultants), with a mean score of 4.89. The distribution shows the drift to the right of the scale that appears to occur for all workplace-based assessments currently in use in training grades. Trainee performance on the mini-CEX correlated well with performance on case-based discussion.
Specialist training assessment – mini-ACE
The assessment format employed in the foundation programme has been adapted for the purposes of specialty training. This adapted version (mini-ACE) has been included in the assessment programme developed to assess the new specialist curriculum for psychiatry. The mini-ACE can be performed in a variety of settings, including emergencies. It involves several assessments, each up to 20 min long, conducted at intervals over a period of time during the training. Each assessment is followed by 5–10 min of feedback. Each clinical encounter is selected to focus on areas and skills that the trainee will most often need in real-life encounters with patients.
How many assessments are needed?
The number of mini-ACEs required in specialist training has yet to be determined. The reproducibility studies on the mini-CEX suggest that for a given area of performance at least four assessments are needed if the trainee is doing well and more than four if their performance is marginal or borderline (Norcini et al, 1995). As yet it has not been purposely evaluated as an assessment tool for psychiatric trainees, but it seems from the above that four assessments per annum will be the minimum. A greater evidence base specific to psychiatric training in the UK needs to be developed with the help of psychometric data available from the assessment programmes implemented as part of the Royal College of
Psychiatrists’ pilots (see Chapter 13) and also those delivered during the first few years of run-through training.
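A rough sense of why these numbers are suggested can be obtained from the Spearman–Brown prophecy formula, which projects the reliability of a composite of k assessments from the reliability of a single one (the single-encounter value used below is an assumed illustrative figure, not one taken from psychiatric pilot data):

\[ r_k = \frac{k\,r_1}{1 + (k-1)\,r_1} \]

If, purely for illustration, a single observed encounter is assumed to have a reliability of $r_1 = 0.3$, then four encounters give a composite reliability of about 0.63, eight give about 0.77 and ten about 0.81. This is consistent with the general point that a trainee whose performance is marginal or borderline needs considerably more than the minimum number of assessments before a defensible judgement can be made.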
Assessment set-up
The assessments are trainee-led. Thus the trainee should indicate to the educational supervisor that they are ready for a given assessment and arrangements should be made from there. In the course of their training, individual trainees should have the opportunity to be assessed across a range of clinical conditions and scenarios. The mini-ACE may be used for short focused tasks, for example to elicit key elements from the history or in the mental state examination; alternatively, it may be used to assess the performance of a clinical function (e.g. the assessment of risk) or of cognitive function. There are similarities to the ‘observed interview’ portion of the traditional clinical examination component of the MRCPsych. The focus of the mini-ACE is on defined competencies (e.g. history-taking, mental state examination, communication with patients) and the assessment is used to determine successful progress towards their attainment.

The deployment of the mini-ACE in psychiatry is a significant challenge, as the specialty does not naturally lend itself to patient encounters in 10–20 min chunks without too much artificiality. To ensure the validity of the overall assessment framework, the mini-ACE must be regarded as a part of the whole assessment programme, which includes the Assessment of Clinical Expertise (ACE). This is more than mere rhetoric; significant judgements about a doctor’s standard or level of performance must not be made on the basis of any single assessment in whatever modality.

There are some provisos to this. Early experience suggests that the best way is to plan an assessment session in advance. The estimated time required for a mini-ACE (mini-CEX) assessment, including feedback, as stated in the curriculum and based on initial experience in the foundation years (Davies et al, 2009), may be an underestimate when translated to a complex specialty such as psychiatry. This may present a significant challenge, and resources are required in terms of time and patients. The new National Health Service business systems will need to reflect this in terms of the effect on patient flow, as there are some emergent anecdotal data from the UK acute hospital sector that the presence of trainees lowers the number of patients that a unit or team can accommodate by 25–35%. This warrants further examination: data from the USA (Sloan et al, 1983) and Spain (López-Casasnovas & Saez, 1999) clearly suggest an impact on hospital costs, ranging from 9% to more than 20%, when hospitals have teaching responsibility for undergraduates and postgraduates. In addition, there must be flexibility to allow for assessments to occur when an opportunity arises, such as in the case of emergencies.

Finally, it is intended that some assessments will make a more significant contribution to the part-summative assessment process that is the Annual
Review of Competence Progression (ARCP). The potential for using a series of such assessments is considerable and could represent a test with high reliability (if four or even six to eight assessments were performed) and undoubted validity because this is the basis of clinical practice. These properties of the mini-ACE, as used in postgraduate psychiatric training in the UK, need to be evaluated further.
Planning assessments
I have already mentioned a basic possibility of performing the assessment at any time and in any setting. There is therefore a great element of spontaneity. However, in keeping with the concept of a programme of assessment and the qualitative model of needing enough assessments to form a proper judgement with regard to a doctor’s performance, there is a need for planning led by the trainee but supported by the educational supervisor in order to ensure appropriate curriculum coverage. The potential curriculum domains and specific competencies should be mapped at the outset. In their initial meetings trainers and trainees will find it invaluable to draw up a blueprint (either prepare one themselves or take a copy from their postgraduate school) upon which to plot their learning plan, including specific learning objectives and assessments. This will ensure full and proper curriculum coverage. A grid may be constructed which plots cases and clinical competencies (for example, Table 4.1). The framework or blueprint should be reviewed regularly by the trainee and educational supervisor and should be seen periodically by the training programme director.
Table 4.1 A blueprint for a mini-ACE assessment (example). The blueprint plots the domains assessed (rows) against mini-ACE reference numbers 1–8 (columns):
• Communication with patients
• Communication with family, carer
• Communication with team members
• Assessment of patients’ condition/history-taking
• Mental state examination
• Physical examination
• Specific function, e.g. risk assessment
• Assessment in emergencies
• Teaching and training
• Working within legal frameworks
• Arranging referrals
Domains of assessment
The rated elements in the assessment tool being used in the pilot scheme are history-taking, mental and physical examination, communication skills, clinical judgement, professionalism, organisation and efficiency, plus overall clinical care. These domains will be discussed in Chapter 5; the difference is that in the case of the mini-CEX the trainee is rated in the context of a shorter clinical assessment. Some of these elements, such as clinical judgement, are not properly assessed in the mini format. The performance descriptors for each domain are detailed in Boxes 4.1–4.7. The descriptor marked in bold in each case is the one denoting satisfactory performance for each aspect (rating = 4).
Box 4.1 mini-ACE: history-taking performance descriptors
1 Very poor, incomplete and inadequate history-taking
2 Poor history-taking, badly structured and missing some important details
3 Fails to reach the required standard; history-taking is probably structured and fairly methodical, but might be incomplete, although without major oversights
4 Structured, methodical, sensitive and allowing the patient to tell their story; no important omissions
5 A good demonstration of structured, methodical and sensitive history-taking, facilitating the patient in telling their story
6 Excellent history-taking with some aspects demonstrated to a very high level of expertise and no flaws at all
Box 4.2 mini-ACE: mental state examination performance descriptors
1 Fails to carry out more than the most rudimentary mental state examination through lack of skill, knowledge, etc
2 A poor and inadequate mental state examination, covering some of the basics but with significant inadequacies
3 A reasonably satisfactory mental state examination, but missing some relevant details
4 A good mental state examination covering all the essential aspects
5 A good, appropriately thorough and detailed mental state examination with no significant flaws or omissions
6 A thorough, accurate and appropriate mental state examination demonstrating excellent examination and communication skills
Box 4.3 mini-ACE: communication skills performance descriptors
1 Unacceptably poor communication skills
2 Poor and inadequate communication skills; perhaps evidenced in poor listening skills, body language or inappropriately interrupting the patient
3 Barely adequate communication skills, short of the required high standard, with perhaps one or more significant inadequacies
4 A good standard of communication skills demonstrated throughout, with appropriate listening and facilitative skills, and good body language. Clearly reaches the high standard required
5 Exceeds the high standard required, with evidence from one or more aspects of excellent communication skills
6 Excellent communication skills demonstrated throughout the encounter
Box 4.4 mini-ACE: clinical judgement performance descriptors
1 Practically no evidence of good clinical judgement – unsafe
2 Poor clinical judgement, clearly below the required standard
3 Clinical judgement below the required standard, but not dangerously so
4 Good, logical clinical reasoning, judgement, and appropriate decision-making; safe and in the patient’s best interests
5 Insightful clinical judgement and good decision-making centred on good clinical care
6 Excellent clinical judgement, taking proper account of all the relevant factors, leading to decision-making that will result in a very high standard of clinical care
Box 4.5 mini-ACE: professionalism performance descriptors
1 Evidence of an unacceptable lack of professional standards in any aspect of the case
2 Not seriously unprofessional, but nevertheless clearly below the required standard
3 Not quite up to the required professional standard, perhaps through an occasional lapse
4 Appropriate professional standards demonstrated in all aspects of the case
5 Evidence of high professional standards in several aspects of the case, and never less than appropriate standards in the others
6 Evidence of the highest professional standards throughout the case – a role model for others to learn from
Box 4.6 mini-ACE: organisational efficiency performance descriptors
1 Disorganised and inefficient – far below the required standard
2 Inadequate organisation and inefficiency, creating significant difficulties
3 Not particularly well organised and/or efficient – not a major problem but must be improved
4 Well organised and reasonably efficient
5 Very well organised, leading to efficient use of time and resources
6 Excellent organisation, and evidence of efficient yet sensitive professional practice
Box 4.7 mini-ACE: overall clinical care performance descriptors
1 Serious concern over the standard of clinical care demonstrated in this encounter – unsafe and probably unfit for practice
2 Generally a poor standard of clinical care, perhaps due to one or more major shortcomings. There might be a few adequate aspects, but nevertheless clearly substandard overall
3 Clinical care below the required standard, but with no evidence of major inadequacy or oversight
4 Clinical care of the required high standard, although possibly allowing a few minor shortcomings
5 A high standard of clinical care demonstrated, with practically no shortcomings
6 Evidence of excellent clinical care in all aspects of the case – a role model
Feedback
Like any other formative assessment, detailed feedback is crucial to the success of the mini-ACE as an instructional tool. All assessors undertaking feedback should utilise interactive feedback techniques to discuss the trainee’s performance. This means that the feedback should not just be a didactic process of the assessor informing the trainee of their strengths and weaknesses, but should also encourage the trainee to ‘embrace and take ownership of their strength and weaknesses’ (Holmboe et al, 2004: 560). Trainees should encourage assessors to engage in interactive feedback techniques, including assessing learner reaction, promoting self-assessment and developing an action plan. Assessors should receive training in interactive feedback and they should also receive regular ‘reinforcement training’ as a follow-up to the initial training (Holmboe et al, 2004).
Will assessors need to be trained?
Holmboe et al (2003) note that although direct observation is an essential component of performance assessments that allows trainees immediate access to expert feedback, the quality of direct observation is very important. Assessors’ observation skills should be improved through training programmes. As has already been noted, assessor training in feedback techniques is also crucial (Holmboe et al, 2004). Intuitively, it would also seem sensible to train assessors in the technical aspects of the assessment tool and how to give feedback, and to consider the impact of this form of assessment on the trainer–trainee (supervisory) relationship. The Royal College of Psychiatrists launched a programme of training to accompany the inception of workplace-based assessments and, although not perfect, there is evidence from Postgraduate Medical Education and Training Board trainer survey data (Postgraduate Medical Education and Training Board, 2007) to suggest that, compared with other specialties, trainers in psychiatry felt equipped to administer the assessment schedule. However, recent published studies clearly indicate that the level of penetration of this training for workplace-based assessments as a whole may be far from sufficient and certainly not enough for the successful implementation of the programme (Babu et al, 2009; Menon et al, 2009).
Conclusions
The mini-ACE provides a successful compromise between the reliability issues of undertaking a single long-case assessment and the feasibility issues of undertaking multiple long cases with multiple assessors. It is an itemised assessment of specific competencies and not a test of expertise or fuller professional performance. Thus its place is most appropriately at the early stages of training, for the assessment and learning of basic clinical skills. There is significant experience in the medical setting in the USA with the use of the precursor tool (mini-CEX). As with the other tools, the mini-ACE will be continually developed further for the postgraduate psychiatric assessments in the UK.
References
Babu, K. S., Htike, M. M. & Cleak, V. E. (2009) Workplace-based assessments in Wessex: the first 6 months. Psychiatric Bulletin, 33, 474–478.
Davies, H., Archer, J., Southgate, L., et al (2009) Initial evaluation of the first year of the Foundation Assessment Programme. Medical Education, 43, 74–81.
Durning, S. J., Cation, L. J., Markert, R. J., et al (2002) Assessing the reliability and validity of the mini-clinical evaluation exercise for internal medicine residency training. Academic Medicine, 77, 900–904.
Holmboe, E. S., Huot, S., Chung, J., et al (2003) Construct validity for Mini-Clinical Evaluation exercise (Mini-CEX). Academic Medicine, 78, 826–830.
Holmboe, E. S., Yepes, M., Williams, F., et al (2004) Feedback and the mini clinical evaluation exercise. Journal of General Internal Medicine, 5, 558–561.
Kogan, J. R., Bellini, L. M. & Shea, J. A. (2003) Feasibility, reliability and validity of the mini-clinical evaluation exercise (mini-CEX) in a medicine core clerkship. Academic Medicine, 78, S33–S35.
Kroboth, F. J., Hanusa, B. H., Parker, S., et al (1992) The inter-rater reliability and internal consistency of a clinical evaluation exercise. Journal of General Internal Medicine, 7, 174–179.
López-Casasnovas, G. & Saez, M. (1999) The impact of teaching status on average costs in Spanish hospitals. Health Economics, 8, 641–651.
Menon, S., Winston, M. & Sullivan, G. (2009) Workplace-based assessment: survey of psychiatric trainees in Wales. Psychiatric Bulletin, 33, 468–474.
Noel, G. L., Herbers, J. E. Jr., Caplow, M. P., et al (1992) How well do internal medicine faculty members evaluate the clinical skills of residents? Annals of Internal Medicine, 117, 757–765.
Norcini, J. J., Blank, L. L., Arnold, K. A., et al (1995) The mini-CEX: a preliminary investigation. Annals of Internal Medicine, 123, 295–299.
Postgraduate Medical Education and Training Board (2007) National Survey of Trainers 2007: Summary Report. PMETB.
Sloan, F. A., Feldman, R. D. & Steinwald, A. B. (1983) Effects of teaching on hospital costs. Journal of Health Economics, 2, 1–28.
Chapter 5
The Assessment of Clinical Expertise (ACE)
Geoff Searle
Training in psychiatry has traditionally been based on an apprenticeship model. Many years ago, during my first post as a junior trainee in psychiatry, I was very fortunate that my first educational supervisor was a particularly skilled clinician and educator. For the first three out-patient clinics I undertook for her, she sat with me through my fumbling attempts to take a psychiatric history and conduct a mental state examination. She also checked my notes to ensure they were legible and comprehensive, listened to my discussion with the patient about their diagnosis and the treatment we might offer, and finally checked my letter to the general practitioner. Immediately after the patient left we had a brief discussion about my interview, and my diagnostic and therapeutic skills. These practices, although intimidating, proved to be very educational, and I learnt quickly.

This is of course the fundamental approach of the Assessment of Clinical Expertise (ACE), during which an experienced clinician observes an entire clinical encounter between a trainee and a patient in order to assess the trainee’s ability to take a full history, perform a mental state examination and arrive at a diagnosis and management plan. Thus the ACE as an assessment process has very good face validity, as it directly accesses and assesses key competencies and their underlying attitudes, skills and knowledge.
Background
Alongside adopting workplace-based assessment, the Royal College of Psychiatrists has radically revised its national examinations (Chapter 12). The ACE component of workplace-based assessment most closely resembles the superseded long case. There have always been significant technical concerns about the long case, around interrater reliability, case specificity, and intra-observer reliability. Van der Vleuten et al (1994) reported that the generalisability coefficients (a measure of reliability) of the judgements made using different formats of examination show clearly that even after 8 h of testing, oral examinations could only achieve a coefficient
of 0.48, as opposed to the multiple-station or Objective Structured Clinical Examination (OSCE) format, which gave a coefficient of 0.86 after 8 h of testing; multiple choice questionnaires (which are more stable and the most time-efficient of all) gave a coefficient of 0.93 after 4 h of assessment. Any high-stakes examination should have a coefficient of at least 0.8 to be considered fair (for more details, see Chapter 2 in this volume).

The ACE is a simplified version of the Clinical Evaluation Exercise (CEX), introduced in America in the 1970s to replace the postgraduate medical clinical examination. Several physician versions have been published; the closest to the ACE (a 9-item instrument with a 1–9 score) was examined in a study of 135 attending physicians and 1039 ratings, by Thompson et al (1990). This showed marginal agreement between raters (0.64), with high correlations between items (r = 0.72 to 0.92) and a single factor accounting for 86% of the variance. In their paper investigating a complex variant of the CEX, Kroboth et al (1992) suggested that six to ten repetitions would be required to achieve sufficient reliability. UK experience of using a CEX to record the progress of psychiatric trainees was positive, with reassuring similarity between CEX and MRCPsych examination results (Searle, 2008). The problem described by Thompson et al (1990), that almost all ratings were in the top third of the 9-point scale, was confirmed (Searle, 2008), but there was better (lower) correlation between items. The ACE was modified to ameliorate these problems by having shorter rating scales (1–6) and clearer anchor statements.

The change from the long case to the ACE and other workplace-based assessments allows issues around reliability to be addressed by triangulation of information obtained by repeated measurement in a variety of situations, by different assessors using different methods. Concerns about substituting local assessors for external examiners can be met by comparing the computer record of workplace-based assessments (WPBAs) against the results of the MRCPsych CASC examination, and addressing any issues then identified.
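To put the generalisability coefficients quoted above in perspective, the Spearman–Brown prophecy formula can be used to project how much oral examination would have been needed to reach the 0.8 threshold. This is an illustrative calculation based on the figures cited, and it assumes that additional hours of testing behave as parallel measurements:

\[ r_k = \frac{k\,r_1}{1 + (k-1)\,r_1} \]

Treating the 8-hour oral-examination coefficient of 0.48 as $r_8$ implies a per-hour coefficient of roughly $r_1 \approx 0.10$, and reaching 0.8 would then require somewhere in the region of 35 hours of examining a single candidate. The practical conclusion is the one drawn above: acceptable reliability is bought by sampling widely across cases, assessors and occasions, not by lengthening or further structuring a single encounter.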
Using the ACE
The key strength of the ACE is that it is the assessment of a whole clinical encounter: full history, mental state examination, diagnosis and treatment plan (in contrast to mini-ACE which focuses on an isolated agreed task). The ability to perform these tasks is core to the practice of psychiatry, and in an ACE this is observed by an experienced clinician who offers immediate feedback as well as rating the trainee. All workplace-based assessments are both formative and summative, and thus are valuable coaching tools as well as assessments. The developmental intention of the tool is shown in the ACE item asking ‘How would you rate the trainee’s performance at this stage of their training?’ This item was introduced to prevent trainees becoming despondent at receiving low ratings, and to give a measure of
their ongoing progress. Because point 4 of the other scales (case-based discussion and mini-ACE, boxes 3.3–3.11 and 4.1–4.7, see pp. 38–41 and pp. 51–53 respectively) represents satisfactory performance for completion of that phase of training (ST1, ST3, ST5 or ST6), a doctor who has only recently entered that phase would be making perfectly satisfactory progress, yet only obtain ratings of 2 or 3.
How many ACEs are needed?
A minimum of two ACEs are required in CT1 and three in CT2 and CT3, as they are needed to obtain a satisfactory result in the trainee’s Annual Review of Competence Progression (ARCP). Eligibility for the MRCPsych examination is gained via the ARCP for trainees in GMC-approved rotation posts. For those from other posts who wish to attempt sitting the exam, a number of satisfactorily completed ACEs would be an essential part of the portfolio of evidence to demonstrate equivalent competence. In ST4–6 posts the need for an ACE will be very flexible, depending on discussions between the trainee and the supervisor. Further assessments may be required for those who are close to or below the expected standard, or whenever particular concerns are expressed.
When and with whom should the ACE be arranged?
The most effective assessors are consultants, associate specialists, ST6 trainees or senior nurses, psychologists and social workers who feel confident to assess the case. For CT1 and CT/ST2–3 trainees, nurses, psychologists and social workers at band 7 or equivalent can be assessors. For ST4–6 trainees, nurses, psychologists and social workers at band 8 or equivalent can be assessors. Core trainees (CT1 and CT/ST2–3) cannot assess each other, and this is also true for ST4–6 doctors, but ST4–6 doctors can assess CT1 and CT/ST2–3 doctors.

The ACE is particularly effective when performed by the educational/clinical supervisor very early in a new placement, as it can then inform the educational agreement/plan for that post. Similarly, repeating the ACE is a powerful source of information for appraisal. However, it is also best practice for the trainee to have at least three different assessors each year, and it is expected that trainees will perform best later in their placement. Satisfactory ACE assessments in the subspecialties will be important, especially for applicants without an ARCP. Both trainees and assessors must bear in mind that ARCP panels look for evidence of progress in skills/competence, not excellent ratings from the cradle onwards. All these factors should encourage trainees to avoid cramming in assessments at the end of the year.
How does the ACE work?
The ACE usually takes between 30 min and 1 h to complete, including the assessor giving immediate feedback to the trainee and ideally completing
the online record. The trainee selects an appropriate case, in consultation with their assessor. Patients new to the trainee are usually required because they allow the trainee’s history-taking and examination skills to be assessed thoroughly. Junior doctors should not have the routine information available about the patient restricted – the essence of the process is to interfere as little with routine practice as possible. Although a patient known to the trainee could be seen if a specialist assessment is undertaken, it is likely that the learner will have less opportunity to shine.
Set-up
The most common and simplest arrangement will be for the trainee’s clinical supervisor or a senior team member to sit in on the comprehensive assessment (‘clerking’) of a new patient in an out-patient clinic. Junior doctors in training are clinically supervised in out-patient clinics, and this requires that the supervisor has time in their own schedule to complete the task; they should therefore have the flexibility in their timetable to allow the shared hour. It is important to consider in advance what sort of patient may be involved, to ensure that the clinical case has an appropriate level of complexity and challenge to allow the trainee to demonstrate their skills well. As the trainee psychiatrist progresses in training, the cases must become more complex and challenging; at ST3 and beyond, emergency cases may be particularly appropriate as they will be sufficiently complex to challenge the trainees and allow them to demonstrate a wider variety of competencies. Tricky cases tend to produce higher ratings from assessors, as trainees have an opportunity to impress them. Although routine out-patient clinics will be the most common venue for an ACE, it is also perfectly feasible to invite a patient along specifically at the beginning or end of a clinic or at the time of educational supervision, or to see a patient in the community or on the in-patient ward. All that is required is an hour of free time, a room, a patient, an assessor and a trainee, which means that opportunities can be grasped at short notice – such as emergency assessments, liaison cases or even Mental Health Act assessments. Trainees and supervisors should know who is a trained WPBA assessor in their team, to exploit serendipity. There are of course some posts that have either a limited or a negligible amount of out-patient contact, but it should always be feasible to arrange clinical contact with a patient who can be rated, perhaps from another team, or when on call. After the opportunity has been identified, the trainee may want to print off a blank ACE form (from https://training.rcpsych.ac.uk/), along with any other guidance notes that will ease the assessment. Immediately before starting, the trainee must gain the patient’s verbal consent to help with the exercise. After being introduced, the assessor should try to remain quiet and out of the eyeline of both the patient and the trainee, while being able to observe both, most especially the latter. It is helpful to keep notes, but the
physical form should only be an aide-mémoire for GMC trainees, who must have the ACE recorded on the web system. The task of the assessor is to observe the entire patient encounter, and then give verbal and written feedback, before rating performance in each of the six specified domains on the rating form and giving a global rating.
Assessment domains
History-taking
The speed and facility with which a trainee obtains the information for a psychiatric history should develop quickly to begin with and, once CT1 competence has been achieved, the process becomes more one of developing greater fluency and subtlety. Later in their education, the psychiatric trainee must be able to prioritise the information they need to obtain first and to elicit an adequate history in difficult situations or with patients who show reluctance or non-adherence. Performance descriptors for all ACE domains are listed in Boxes 5.1–5.7 at the end of this section (in each case the descriptor for a rating of 4 denotes satisfactory performance).
Mental state examination
The mental state examination is the core specialist skill of a psychiatrist. Demonstrating fluency and competence in this process is the key to being a good psychiatrist. As a trainee progresses through different stages of training, the expectation is for them to develop greater ease and precision in identifying relevant psychopathology in a wide range of cases and clinical contexts. Whereas the more general aspects of the mental state examination would be assessed in the early years of training, the latter years will be devoted to assessing the subtle variations in presentation, depending on the trainee’s chosen area of specialisation. For instance, a trainee in CT1 should be assessed on their ability to identify correctly low mood in a depressed 30-year-old man with no other comorbidity. On the other hand, the same trainee when in an ST4 post for the psychiatry of the elderly should be able to demonstrate their ability competently to detect low mood in a 76-year-old woman with a moderate degree of cognitive impairment. As the training progresses, the emphasis should move from just using the structure of the mental state examination and the understanding of relevant psychopathology to assessing the finer nuances of a complex presentation.
Communication skills
In psychiatry, if history-taking and the mental state examination are the ‘what’, communication skills are the ‘how’. Psychiatrists must have the ability to communicate sensitively and effectively with all their patients, regardless of ethnicity, age or diagnosis. This overarching competence
includes skills both in verbal and non-verbal communication. Empathy and rapport are important aspects of communication that should be considered under this domain. Although these are core skills that should be expected of all doctors, they are key tools for a successful psychiatrist. The structure of the ACE allows the trainee to demonstrate, in a short space of time, their ability to communicate effectively and efficiently in a genuine clinical situation. Conversely, any problems or deficiencies will very rapidly and clearly become apparent, allowing appropriate measures to be taken to overcome them. The trainees should demonstrate an ability to communicate with a wide range of patients, from those who are completely mute to those who are very irritable and loquacious. This skill should also be demonstrated across the diagnostic and age ranges. This breadth of competence must be evidenced in the trainee’s final portfolio. Needless to say, in the higher specialist years specific aspects of communication within the trainee’s chosen area of specialisation will need to be assessed.
Clinical judgement
Having gathered relevant clinical information through various aspects of the three competency domains already discussed, the trainee should then be able to weigh this information in order to make a judgement about the diagnosis and the management plan. This judgement should take into account all aspects of the history, mental state examination, risk assessment and information gathered from other sources. This overall picture should then be utilised to reach various formulations (e.g. psychodynamic, behavioural) and diagnoses (ICD, DSM). These should then be weighed against the current evidence base for good clinical practice (e.g. National Institute for Health and Clinical Excellence guidelines) to arrive at an appropriate management plan. As before, the ability to formulate, diagnose and manage cases with varying levels of complexity should be assessed in the context of the trainee’s current level of training. Partly, this assessment is going to be made on the basis of observed behaviour, but it is appropriate to delay making a rating in this domain until after some discussion and feedback, once the patient has left the room.
Professionalism
Psychiatric practice is the branch of medicine where professionalism can be seriously strained, and the ability to form and maintain an appropriately professional rapport and relationship with the patient, their relatives and others involved in the patient’s care is very important. The issues of capacity and the use of the Mental Health Act raise considerable legal and ethical challenges that make assessment under this domain even more significant. Balancing patient choice against their best interests and wider public safety can sometimes make treatment choices very difficult. One of the key principles assessed in this domain is the trainee’s ability to act reasonably and appropriately where clear guidelines and standards do not
exist or where there are conflicting rights or needs. This rating should similarly be delayed until after there has been some discussion of the case and of the reasons behind particular choices. As training progresses satisfactorily, the finer competencies in this domain might be tested by moving the assessment from a planned out-patient setting to an emergency setting, with more complex considerations including the Mental Health Act and the Mental Capacity Act.
Organisational efficiency
Structure, time-keeping and control of the assessment are the key attributes scored in this domain, but it also includes the organisational coherence of a management plan and the steps the trainee says they are going to take to implement it. Sometimes, note-keeping as a part of the assessment process can also be assessed, to ensure that the notes are comprehensive, comprehensible and legible; however, other assessments (e.g. case-based discussion) provide more comprehensive opportunities to assess note-keeping skills (Chapter 3).
Overall clinical care
This global rating is one of the most important elements of the assessment. Besides giving an overall impression of the assessment, global ratings are also the most reproducible when it comes to reliability testing. All elements of the ACE count, although of course not everything may be rateable on a particular assessment. It is important to score the individual domains before actually scoring the overall global impression.
Feedback
Trainees inevitably find assessments stressful and intimidating. As mentioned earlier, the particular strength of the ACE is its usefulness as a formative tool.
Box 5.1 ACE: history-taking performance descriptors
1 Very poor, incomplete and inadequate history-taking
2 Poor history-taking, badly structured and missing some important details
3 Fails to reach the required standard; history-taking is probably structured and fairly methodical, but might be incomplete although without major oversights
4 Structured, methodical, sensitive and allowing the patient to tell their story; no important omissions
5 A good demonstration of structured, methodical and sensitive history-taking, facilitating the patient in telling their story
6 Excellent history-taking with some aspects demonstrated to a very high level of expertise and no flaws at all
Box 5.2 ACE: mental state examination performance descriptors
1 Fails to carry out more than the most rudimentary of mental state examinations through lack of skill, knowledge, etc.
2 A poor and inadequate mental state examination, covering some of the basics but with significant inadequacies
3 A reasonably satisfactory mental state examination, but missing some relevant details
4 A good mental state examination, covering all the essential aspects
5 A good, appropriately thorough and detailed mental state examination with no significant flaws or omissions
6 A thorough, accurate and appropriate mental state examination, demonstrating excellent examination and communication skills
Box 5.3 ACE: communication skills performance descriptors
1 Unacceptably poor communication skills
2 Poor and inadequate communication skills, perhaps evidenced in poor listening skills, by body language or by inappropriately interrupting the patient
3 Barely adequate communication skills, short of the required high standard, with perhaps one or more significant inadequacies
4 A good standard of communication skills demonstrated throughout, with appropriate listening and facilitative skills and good body language. Clearly reaches the high standard required
5 Exceeds the high standard required, with evidence from one or more aspects of excellent communication skills
6 Excellent communication skills demonstrated throughout the encounter
Box 5.4 ACE: clinical judgement performance descriptors
1 Practically no evidence of good clinical judgement – unsafe
2 Poor clinical judgement, clearly below the required standard
3 Clinical judgement below the required standard, but not dangerously so
4 Good, logical clinical reasoning, judgement and appropriate decision-making; safe and in the patient’s best interests
5 Insightful clinical judgement and good decision-making centred on good clinical care
6 Excellent clinical judgement taking proper account of all the relevant factors, leading to decision-making that will result in a very high standard of clinical care
Box 5.5 ACE: professionalism performance descriptors
1 Evidence of an unacceptable lack of professional standards in any aspect of the case
2 Not seriously unprofessional, but nevertheless clearly below the required standard
3 Not quite up to the required professional standards, perhaps through an occasional lapse
4 Appropriate professional standards demonstrated in all aspects of the case
5 Evidence of high professional standards in several aspects of the case, and never less than appropriate standards in the others
6 Evidence of the highest professional standards throughout the case – a role model from which others can learn
Box 5.6 ACE: organisational efficiency performance descriptors
1 Disorganised and inefficient – far below the required standard
2 Inadequate organisation and inefficiency, creating significant difficulties
3 Not particularly well organised and/or efficient – not a major problem, but must be improved
4 Well-organised and reasonably efficient
5 Very well organised, leading to efficient use of time and resources
6 Excellent organisation, and evidence of efficient yet sensitive professional practice
Box 5.7 ACE: overall clinical care performance descriptors
1 Serious concern over the standard of clinical care demonstrated in this encounter – unsafe and probably unfit for practice
2 Generally a poor standard of clinical care, perhaps due to one or more major shortcomings; there might be a few adequate aspects, but nevertheless clearly substandard overall
3 Clinical care below the required standard, but with no evidence of major inadequacy or oversight
4 Clinical care of the required high standard, although possibly allowing a few minor shortcomings
5 A high standard of clinical care demonstrated, with practically no shortcomings
6 Evidence of excellent clinical care in all aspects of the case – a role model
Once the clinical contact reaches its final stage, it is quite appropriate for the assessor to be drawn into a discussion with the patient and the trainee. It is important to ensure that the trainee states their own opinion of the correct diagnosis and management plan first, as it is highly tempting for an assessor to expound the ‘correct’ answers, but this invalidates the rating of the management plan/overall clinical care items. Discussion should be brief and focused on what to do next, not the reasoning behind the choices, which should be addressed during the immediate feedback to the trainee. Feedback usually works best by leaving the process of going through the rating form item by item almost until the end. A good way to start is to ask the trainee to give their view of the positive aspects of the assessment first, followed by any aspects they recognise as needing improvement. The assessor can then expand from this lead, which usefully illuminates the trainee’s understanding and self-awareness. Feedback should always focus both on the strengths and on the weaknesses. Strengths can easily remain unacknowledged, and weaknesses can be examined and re-examined endlessly. Conversely, it is possible to fall into the trap of concentrating on errors, yet still offer high ratings. It is also important to allow trainees the opportunity for self-assessment, as this is a skill that they will utilise in their career long after their years as a trainee. Starting feedback with strengths and weaknesses leads naturally to filling in the first two text boxes of the form (‘Anything especially good’, ‘Suggestions for development’; see Appendix 1). There should always be at least one and usually no more than three suggestions for development recorded during feedback. The numerical scores can then be discussed and recorded, finishing with agreed actions, which occasionally will lead to modification or reframing of the educational plan or agreement for that particular trainee. Where there are serious problems, repeated assessment by various clinicians using the ACE may be especially helpful, as it would demonstrate burgeoning skills or point up further the need for more powerful action. Where the trainee’s supervisor has performed the ACE it may be appropriate to delay detailed feedback a little or to spread it out into the next educational supervision session. This extended feedback and discussion may be particularly useful in the last stages of training, when the situation that is being managed should be complex. There will be occasions when it is inappropriate to record a judgement on all subscales, but this should not be used as an excuse to avoid challenging skills, attitudes or behaviours that fall below the required standard. Honest, fair and balanced judgement is the clear responsibility of the assessor. To maximise the benefits of arranging and undertaking an ACE, trainees could also complete a related reflection in their online e-portfolio.
Completing the e-form
All assessments should be recorded online. Blank forms, instructions and standards can be printed off from the Royal College of Psychiatrists e-Portfolio website prior to the assessment. The web-forms can be completed later from a working copy, but ratings and comments must not be changed between the discussion with the trainee and transcription into the central database. For those not in training posts a hard copy of the form may be appropriate and should be kept in the psychiatrist’s portfolio.
Assessor training
Training in observation and feedback skills as well as the technical aspects of the ACE is essential for reliable application of this tool. The Royal College of Psychiatrists has created a number of training videos/DVDs and many senior educators are now trained trainers. Training courses for assessors are regularly available from deaneries and nationally from the College. Only on very rare occasions should an untrained assessor complete an assessment, and this should be acknowledged in the trainee’s portfolio.
Some practical issues
As psychiatric services become more functionally specialised, trainees may find that their educational supervisor (who manages the administration and overview of their development) is not their clinical supervisor. In any case trainees must have an hour of supervision weekly and it is in this forum that any new ACE or other assessment should be discussed. If the educational supervisor who formulates the educational plan/agreement has little or no clinical contact with the trainee, an ACE performed early in a new post by the clinical supervisor is even more important, and in this case a discussion between the two supervisors following the assessment would be very beneficial to the trainee. For assessors, the thorniest practical issue is that of offering any rating less than satisfactory at any point. Junior doctors exhibit an impressive array of verbal and non-verbal communications in this case, but it is critically important for assessors to be honest and discerning, always bearing in mind that their assessment is only one of many. This problem can become particularly acute with the weakest trainees, who consistently have a startlingly inflated opinion of themselves (as is in fact normal and not a peculiarity of psychiatric junior doctors). Discussion with the trainee’s educational supervisor would be essential in this case, as an increase in the number of ACEs as well as other actions and assessments may be appropriate to enhance learning and clarify difficulties.
Conclusions
In my own experience of using the ACE, the discipline of sitting down with the trainee and watching them for an hour is always extremely useful. Even the most skilled trainees manage the interview, make decisions or act in ways that surprise me. The discussion that follows the assessment is always illuminating and never predictable or fruitless. As part of a well-triangulated assessment framework, the ACE will continue to provide valuable information about a wide range of competencies that are essential to practise as a consultant psychiatrist.
References
Kroboth, F. J., Hanusa, B. J., Parker, S., et al (1992) The inter-rater reliability and internal consistency of a clinical evaluation exercise. Journal of General Internal Medicine, 7, 174–179.
Searle, G. F. (2008) Is CEX good for psychiatry? An evaluation of workplace-based assessment. Psychiatric Bulletin, 32, 271–273.
Thompson, W. J., Lipkin, M., Gilbert, D. A., et al (1990) Evaluating evaluation: assessment of the American Board of Internal Medicine Resident Evaluation Form. Journal of General Internal Medicine, 5, 214–217.
van der Vleuten, C. P. M., Newble, D., Case, S., et al (1994) Methods of assessment in certification. In The Certification and Recertification of Doctors: Issues in the Assessment of Clinical Competence (eds D. Newble, B. Jolly & R. Wakeford), pp. 105–125. Cambridge University Press.
Chapter 6
Multi-source feedback
Caroline Brown
Multi-source feedback (also known as peer ratings, 360°-feedback or MSF) involves collecting opinions about an individual from a range of co-workers using a structured rating scale. Psychiatry trainees work as part of a multiprofessional team with other people who have complementary skills. Trainees are expected to understand the range of roles and expertise of team members in order to communicate effectively to achieve a high-quality service for patients. As such, obtaining feedback from members of the extended team provides a reliable, valid and, it is hoped, educationally supportive method of assessing performance. The technique was developed during the 20th century in settings external to medicine and has been used successfully in the military, education and industry (Powell & White, 1969; Woehr et al, 2005; Meeks et al, 2007). In the UK, the MSF approaches proposed by medical and commercial organisations have proliferated and become increasingly focused following the publication of Good Doctors, Safer Patients (Donaldson, 2006). Additionally, the subsequent White Paper, Trust, Assurance and Safety: The Regulation of Health Professionals (Department of Health, 2007), suggested that the role of multi-source feedback will become more and more significant for all doctors, not simply trainees, in demonstrating fitness to practise and in supporting relicensure and revalidation.
The history of multi-source feedback
External to medicine
Multi-source feedback was first developed and evaluated in settings other than medicine. As early as the 1920s, psychologist Hermann H. Remmers led the field in education, discussing the importance of and considerations for students evaluating their teachers. After the Second World War the military explored the possible role of professional ratings in order to identify natural leaders (Williams & Leavitt, 1947). An increasing sophistication in the understanding of leadership led to increasingly refined models to assess it. Within industry, a desire to identify and facilitate
good leadership became paramount as companies became increasingly competitive, especially in the 1980s and 1990s. Whereas psychologists tried to understand the dynamics of leadership, personality traits, behavioural styles and, more recently, complex models of leadership within the organisational context, industry tried to find practical solutions to evaluating and motivating employees.
Within medicine
Multi-source feedback in medicine has developed since the 1950s. In the 1960s the importance of humanistic skills became apparent and MSF was proposed as a potential way of assessing these skills. However, it was not until the 1990s, when there were financial demands on the health services in the USA (Ramsey et al, 1993, 1996; Ramsey & Wenrich, 1999) and Canada (Violato et al, 1997; Hall et al, 1999; Violato et al, 2003; Lockyer & Violato, 2004), that MSF really began to be researched in medicine. The use of MSF questionnaires has increased exponentially over the past 25 years (Ramsey et al, 1993, 1996; Ramsey & Wenrich, 1999; Lockyer, 2003; Archer et al, 2005; Lockyer et al, 2006; Violato et al, 2006). They have been studied around the world as a way of assessing multiple components of medical skill and they target the highest level of Miller’s pyramid, ‘performance’ (Norcini, 2003, 2005). They have been shown to be feasible and acceptable to doctors (Lipner et al, 2002; Archer, 2007), as well as being reliable and valid across different settings and at different levels of practice (Hall et al, 1999; Archer & Davies, 2004; Whitehouse et al, 2005; Lockyer et al, 2006; Violato et al, 2006; Crossley et al, 2008; Wilkinson et al, 2008). Reliability has been demonstrated between and within raters, although external ratings are rarely consistent with ratings of the self (Brett, 2001; Archer et al, 2005; Archer, 2007). When considering validity, MSF, irrespective of its purpose, maps to the measurement of two key factors: clinical skills (‘Can a person do their job?’) and humanistic skills (‘Is the person nice while they do their job?’) (Ramsey et al, 1993, 1996; Archer et al, 2005, 2008; Archer, 2007). To date there has only been a small amount of work on providing external criteria with which to validate MSF. Pilot studies of one instrument may demonstrate an association with previously agreed standards (Archer, personal communication). There is some support for validity: data from multi-source feedback conducted by the Royal College of Paediatrics and Child Health have shown that post-core specialist registrars (SpRs) score higher than core SpRs, as one would expect. Longitudinal studies have also shown an increase in MSF scores over time (Violato et al, 2008).
Self-assessment
Multi-source feedback usually requires doctors to make a judgement about themselves using the same structured rating scale as that used by their peers. This is known as self-rating. The self-ratings obtained using MSF
have been strongly debated within the literature as there is frequently a discrepancy between the rating given by the individual and that obtained from peers. This discrepancy is seen across medicine and other disciplines. Within medicine it is found in undergraduates and postgraduates at all levels (Atwater et al, 1998; Fletcher & Baldry, 2000; Davis et al, 2006; Violato & Lockyer, 2006; Lockyer et al, 2007). This discrepancy is likely to be important for several reasons, such as response to feedback, and because the everyday practice of doctors revolves around accurate self-assessment.
Response to receiving multi-source feedback
For multi-source feedback to have a clear use it needs, at the very least, to be received and internally digested by the participating doctor. Doctors who appropriately alter their behaviour on the basis of their feedback would be the ideal. Researchers studying responses to performance assessment have shown that people’s reactions vary and can influence how performance feedback is used (Dauenheimer et al, 1999; Brett, 2001; Sargeant et al, 2005, 2008; Brinkman et al, 2007). The impact of feedback depends not only on the method used but also on individual readiness to change, which can be influenced by the nature of the feedback and acceptance of the need for change. There is some evidence, from settings other than performance assessment, that it is more difficult to change the behaviour of well-established doctors (Hulsman et al, 1999; Kurtz et al, 2005). On receipt of multi-source feedback the first question doctors seem to ask is ‘Is the feedback consistent with how I see myself?’, that is, is it in line with a self-assessment (Sargeant et al, 2008). Additionally, the source of the feedback also influences whether doctors agree with that feedback or not (for a more detailed discussion, see Chapter 2). Whether or not a doctor agrees with the feedback they receive significantly influences their response, and identifying their own needs is imperative if they are to act on them. Sargeant and colleagues showed that those doctors who agreed with their feedback perceived it as positive and found it useful. However, those who did not agree with their feedback had a negative reaction to it and stated that they did not find it useful or meaningful. In this latter group the feelings brought on by the feedback were often long-lasting and permanent (Sargeant et al, 2008). In conclusion, MSF may be an extremely useful method for assessing performance in the workplace, but it must be recognised that in order for change to occur an appropriate level of self-assessment as well as belief in, and acceptance of, the feedback is necessary.
Practicalities
Mini-Peer Assessment Tool (mini-PAT)
The Royal College of Psychiatrists has chosen the mini-PAT as the instrument it requires its trainees to use to undertake multi-source
feedback. Developed from the Sheffield Peer Review Assessment Tool (SPRAT), the mini-PAT provides feedback across all the domains of Good Medical Practice (General Medical Council, 2006). These can be mapped to the core objectives and competencies of the curriculum.
Development of SPRAT
Mini-PAT was developed as a version of the SPRAT instrument, primarily for use in the foundation assessment programme. An extensive literature review informed the development of SPRAT and suggested three main aspects to be considered: the scale, the contents and the susceptibility to bias. Evidence showed that the scale should be an even numerical scale with descriptors, to both support a continuum (the numerical aspect) and provide reliability with a clearly defined scale (the descriptors). The contents were initially suggested by mapping to worldwide professional frameworks but subsequently adapted by using the ‘curriculum’ for UK doctors, Good Medical Practice. Therefore, SPRAT maps directly to the domains of Good Medical Practice (Box 6.1). To investigate bias the authors of SPRAT collected extensive demographic data, which were later analysed and used to ensure the bias was minimal. The tool has been validated and its reliability and feasibility assured. Work has also been undertaken looking into the educational impact of the instrument.
Development of mini-PAT
With the development of the foundation assessment programme it became apparent that it would not be possible simply to use the SPRAT and hence modifications were necessary, leading to the development of the mini-PAT. The authors of the new instrument reviewed the foundation curriculum, making appropriate adaptations and assuring the tool of content validity for foundation trainees. Descriptors were developed to define the scale and assist with standard setting: 1 and 2 ‘below expectation’, 3 ‘borderline’, 4 ‘meets expectations’ and 5 and 6 ‘above expectation’. The Royal College of Psychiatrists has chosen to use mini-PAT to provide continuity for its trainees as they progress from their foundation training through to specialty training.

Box 6.1 Good Medical Practice and mini-PAT domains
1 Good clinical care
2 Maintaining good medical practice
3 Teaching and training, appraising and assessing
4 Relationship with patients
5 Working with colleagues
6 Health and probity
7 Global ratings and concerns*
*This domain is exclusive to the mini-PAT tool.

Mini-PAT assesses trainees against a range of
Good Medical Practice domains (Box 6.1) and asks reviewers to rate them on a 6-point Likert scale similar to the scale used in foundation training (see above). It should be borne in mind that the instrument has not yet been fully validated for this level of trainee, although work is ongoing to complete this. Additionally, it must be remembered that MSF acts only as part of the assessment programme in psychiatry, and a blueprinting process against the competencies for the whole programme ensures they are covered in their entirety.
Process
By virtue of their membership of the Royal College of Psychiatrists, each psychiatry trainee holds an account with the assessment company that undertakes the assessments on behalf of the College (currently Assessments Online). The multi-source feedback process is wholly electronic, although the College strongly recommends that the feedback generated from the assessment be given in a face-to-face meeting between the educational supervisor and the trainee. So that trainees have the maximum opportunity to undertake MSF, there are five rounds held each academic training year. This enables repeat assessments to be completed if necessary and allows trainees to undertake timely assessments in line with their training. For workplace-based assessments to be as beneficial as possible, wide sampling of clinical context and assessors is imperative, and so trainees are required to complete two rounds of mini-PAT per year. These should preferably be chosen with the educational supervisor. Once a trainee has selected a round, they must nominate their assessors. The College recommends that 8–12 assessors be nominated across a range of members of the multidisciplinary team. Individuals from the following groups are required:
• at least two senior medical staff (in CT1–3 one consultant and one SpR or ST4–6 trainee would suffice; in ST4–6 two consultants are required)
• at least two non-medical clinical staff (e.g. nurses, social workers, occupational therapists, psychologists)
• not more than two trainees at the same level (CT1–3 or ST4–6)
• administrative staff (may be included).
Assessors receive notification that they are being invited to complete an assessment. Trainees must also complete a self-assessment. Preliminary investigations suggest that at least six responses are required for reliable and valid feedback to be generated. Within the literature and across expert working groups there has been repeated discussion as to the validity of individuals choosing their own assessors. To date it appears acceptable, for assessments at this level of stakes, for trainees to nominate their own assessors as long as they do so within the parameters defined above (occupation, career stage, etc.). In the future this may alter and it may be that for some assessments nominations have to be agreed by supervisors.
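The composition rules above amount to a simple checklist. The sketch below, in Python, is purely illustrative of how such a nomination list could be checked against the numbers given in this chapter; the function, the category labels and their coding are hypothetical and do not correspond to the actual College or Assessments Online software.

```python
from collections import Counter

# Hypothetical assessor categories (labels are illustrative only)
SENIOR_MEDICAL = "senior_medical"        # consultants, SpR/ST4-6 doctors
NON_MEDICAL_CLINICAL = "non_medical"     # nurses, social workers, OTs, psychologists
PEER_TRAINEE = "peer_trainee"            # trainees at the same level as the nominee
ADMINISTRATIVE = "administrative"        # optional

def check_nominations(categories):
    """Check a list of assessor categories against the guidance in this
    chapter: 8-12 assessors in total, at least two senior medical staff,
    at least two non-medical clinical staff, not more than two peer trainees."""
    counts = Counter(categories)
    problems = []
    if not 8 <= len(categories) <= 12:
        problems.append("total nominations should be 8-12")
    if counts[SENIOR_MEDICAL] < 2:
        problems.append("at least two senior medical staff are required")
    if counts[NON_MEDICAL_CLINICAL] < 2:
        problems.append("at least two non-medical clinical staff are required")
    if counts[PEER_TRAINEE] > 2:
        problems.append("not more than two trainees at the same level")
    return problems  # an empty list means the mix meets the guidance

# Example: a plausible mix of nine assessors passes the checks
example = ([SENIOR_MEDICAL] * 2 + [NON_MEDICAL_CLINICAL] * 4
           + [PEER_TRAINEE] * 2 + [ADMINISTRATIVE])
print(check_nominations(example))  # prints []
```

None of this replaces the judgement of the educational supervisor, who remains responsible for agreeing that the nominated mix is appropriate.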
The assessment is time-limited and the forms must be completed by the trainee and assessor in a timely fashion. There is guidance on the forms to assist in their completion. Assessors should be assured of the confidentiality of their feedback. The standard against which ratings are made is that expected at completion of the trainee’s current level of training: ratings of 1–3 indicate performance below the standard for completion of the level, a rating of 4 meets that standard, and ratings of 5 or 6 exceed it. There is also ample opportunity for free-text comments to be made in each domain of the instrument. These help trainees interpret the feedback obtained in context and are often felt to be the most beneficial part of the process.
Feedback
Once the round is completed, the system generates a report which is sent to the trainee’s educational supervisor. It is the educational supervisor’s responsibility to review the feedback before allowing the trainee access to it; this is intended to support the trainee in receiving the feedback. The College’s strong recommendation is that feedback be given to the trainee face to face. Both the trainee and the educational supervisor should be assured that the report is confidential, as this facilitates valid feedback and promotes its uptake with a view to changing behaviour where necessary.
How long does it take?
For individual assessors the average time spent completing the rating form is 8–10 min; this is the same for the self-assessment. Nominating assessors takes about 15–20 min. Ideally it should not come as a surprise to assessors that they have been nominated, and it is therefore good practice for trainees to request completion of the instrument before the nomination email arrives. Feedback takes longer, especially if done face to face. Practice will vary between educational supervisors: they may choose to allow trainees time to review the feedback in private before the discussion or simply present them with it during one of their meetings. As stated before, there is some evidence that, for feedback to be accepted, individuals must believe in the feedback generated. Should concerns about aspects of a trainee’s behaviour be raised, it may be necessary for the educational supervisor to discuss the feedback with another senior colleague, for example the specialty tutor or training programme director, before meeting the trainee. In all cases, the feedback should ideally generate a fruitful discussion about the trainee’s performance so that an action plan can be developed that contributes to their overall personal development plan. Also, in an ideal world, feedback will be given by individuals skilled in giving supportive and educationally sound feedback.
Conclusions
Multi-source feedback is an extremely useful part of the Royal College of Psychiatrists’ assessment strategy. It is well researched, provides reliable and valid feedback, is educationally sound and evidence suggests that it may promote behavioural change. Of the workplace-based assessments, MSF is probably the instrument that most effectively targets the highest level of Miller’s pyramid, as it allows true performance to be assessed (Miller, 1990).
References
Archer, J. (2007) Multisource Feedback to Assess Doctors’ Performance in the Workplace. Academic Unit of Child Health, University of Sheffield [PhD thesis].
Archer, J. C. & Davies, H. (2004) Clinical management. Where medicine meets management. On reflection. Health Services Journal, 14, 26–27.
Archer, J. C., Norcini, J. & Davies, H. A. (2005) Use of SPRAT for peer review of paediatricians in training. BMJ, 330, 1251–1253.
Archer, J., Norcini, J., Southgate, L., et al (2008) Mini-PAT (Peer Assessment Tool): a valid component of a national assessment programme in the UK? Advances in Health Sciences Education, 13, 181–192.
Atwater, L. E., Ostroff, C., Yammarino, F. J., et al (1998) Self–other agreement: does it really matter? Personnel Psychology, 51, 577–598.
Brett, J. A. L. (2001) 360 feedback: accuracy, reactions and perceptions of usefulness. Journal of Applied Psychology, 86, 930–942.
Brinkman, W. B., Geraghty, S. R., Lanphear, B. P., et al (2007) Effect of multisource feedback on resident communication skills and professionalism: a randomized controlled trial. Archives of Paediatrics and Adolescent Medicine, 161, 44–49.
Crossley, J., McDonnell, J., Cooper, C., et al (2008) Can a district hospital assess its doctors for re-licensure? Medical Education, 42, 359–363.
Dauenheimer, D., Stahlberg, D. & Peterson, L. E. (1999) Self-discrepancy and elaboration of self-conceptions as factors influencing reactions to feedback. European Journal of Social Psychology, 29, 725–739.
Davis, D. A., Mazmanian, P. E., Fordis, M., et al (2006) Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA, 296, 1094–1102.
Department of Health (2007) Trust, Assurance and Safety: The Regulation of Health Professionals. HM Government.
Donaldson, L. (2006) Good Doctors, Safer Patients. Department of Health.
Fletcher, C. & Baldry, C. (2000) A study of individual differences and self awareness in the context of multi-source feedback. Journal of Occupational and Organizational Psychology, 73, 303–319.
General Medical Council (2006) Good Medical Practice. GMC.
Hall, W., Violato, C., Lewkonia, R., et al (1999) Assessment of physician performance in Alberta: the physician achievement review. Canadian Medical Association Journal, 161, 52–57.
Hulsman, R. L., Ros, W. J. G., et al (1999) Teaching clinically experienced physicians communication skills: a review of evaluation studies. Medical Education, 33, 655–668.
Kurtz, S., Silverman, J. & Draper, J. (2005) Teaching and Learning Communication Skills in Medicine. Radcliffe Publishing.
Lipner, R. S., Blank, L. L., Leas, B. F., et al (2002) The value of patient and peer ratings in recertification. Academic Medicine, 77 (suppl. 10), S64–S66.
Lockyer, J. (2003) Multisource feedback in the assessment of physician competencies. Journal of Continuing Education in the Health Professions, 23, 4–12.
Lockyer, J. & Violato, C. (2004) An examination of the appropriateness of using a common peer assessment instrument to assess physician skills across specialties. Academic Medicine, 79 (suppl. 10), S5–S8.
Lockyer, J. M., Violato, C. & Fidler, H. (2006) The assessment of emergency physicians by a regulatory authority. Academic Emergency Medicine, 13, 1296–1303.
Lockyer, J. M., Violato, C. & Fidler, H. (2007) What multisource feedback factors influence physician self-assessments? A five-year longitudinal study. Academic Medicine, 82 (suppl. 10), S77–S80.
Meeks Gardner, J. M., Powell, C. A. & Grantham-McGregor, S. M. (2007) Determinants of aggressive and prosocial behaviour among Jamaican schoolboys. West Indian Medical Journal, 56, 34–41.
Miller, G. E. (1990) The assessment of clinical skills/competence/performance. Academic Medicine, 65 (suppl. 9), S63–S67.
Norcini, J. J. (2003) Work based assessment. BMJ, 326, 753–755.
Norcini, J. (2005) Work based assessment. In ABC of Learning and Teaching in Medicine (eds P. Cantillon, L. Hutchinson & D. Wood), pp. 36–38. BMJ Publishing Group.
Powell, E. R. & White, W. F. (1969) Peer-concept ratings in rural children. Psychological Reports, 24, 461–462.
Ramsey, P. G. & Wenrich, M. D. (1999) Peer ratings: an assessment tool whose time has come. Journal of General Internal Medicine, 14, 581–582.
Ramsey, P. G., Wenrich, M. D., Carline, J. D., et al (1993) Use of peer ratings to evaluate physician performance. JAMA, 269, 1655–1660.
Ramsey, P. G., Carline, J. D., Blank, L. L., et al (1996) Feasibility of hospital-based use of peer ratings to evaluate the performances of practicing physicians. Academic Medicine, 71, 364–370.
Sargeant, J., Mann, K. & Ferrier, S. (2005) Exploring family physicians’ reactions to multisource feedback: perceptions of credibility and usefulness. Medical Education, 39, 497–504.
Sargeant, J., Mann, K., Sinclair, D., et al (2008) Understanding the influence of emotions and reflection upon multi-source feedback acceptance and use. Advances in Health Sciences Education, 13, 275–288.
Violato, C. & Lockyer, J. (2006) Self and peer assessment of pediatricians, psychiatrists and medicine specialists: implications for self directed learning. Advances in Health Sciences Education, 11, 235–244.
Violato, C., Marini, A., Toews, J., et al (1997) Feasibility and psychometric properties of using peers, consulting physicians, co-workers, and patients to assess physicians. Academic Medicine, 72 (suppl. 1), S82–S84.
Violato, C., Lockyer, J. & Fidler, H. (2003) Multisource feedback: a method of assessing surgical practice. BMJ, 326, 546–548.
Violato, C., Lockyer, J. M. & Fidler, H. (2006) Assessment of pediatricians by a regulatory authority. Pediatrics, 117, 796–802.
Violato, C., Lockyer, J. M. & Fidler, H. (2008) Changes in performance: a 5-year longitudinal study of participants in a multi-source feedback programme. Medical Education, 42, 1007–1013.
Whitehouse, A., Hassell, A., Wood, L., et al (2005) Development and reliability testing of TAB: a form for 360 degrees assessment of Senior House Officers’ professional behaviour, as specified by the General Medical Council. Medical Teacher, 27, 252–258.
Wilkinson, J. R., Crossley, J. G., Wragg, A., et al (2008) Implementing workplace-based assessment across the medical specialties in the United Kingdom. Medical Education, 42, 364–373.
Williams, S. B. & Leavitt, H. J. (1947) Group opinion as a predictor of military leadership. Journal of Consulting Psychology, 11, 283–291.
Woehr, D. J., Sheehan, M. K. & Bennett Jr, W. (2005) Assessing measurement equivalence across rating sources: a multitrait-multirater approach. Journal of Applied Psychology, 90, 592–600.
Chapter 7
Direct Observation of Non-Clinical Skills: a new tool to assess higher psychiatric trainees
Andrew Brittlebank
One of the conditions for the validity of a workplace-based assessment (WPBA) system is that it provides for the assessment of all parts of its curriculum (Collett et al, 2009). The WPBA systems in current use in the UK emphasise the assessment of clinical competencies with only limited coverage of other areas of competency. For example, in addition to the tools that assess clinical areas, the psychiatry WPBA system offers assessment tools for teaching (Assessment of Teaching) and presentation skills (Journal Club Presentation and Case Presentation), yet there is no tool that assesses other areas of non-clinical competency. The importance of engaging doctors in the management and leadership of healthcare systems is now widely recognised (Dickinson, 2008) and the domains of management and leadership are therefore important areas of postgraduate medical training. In 2008, the Academy of Medical Royal Colleges (AMRC) in partnership with the NHS Institute for Innovation and Improvement published the Medical Leadership Curriculum, which sought to ensure that all postgraduate medical curricula, including those of psychiatry, included sufficient coverage of management and leadership competencies (AMRC, 2008). Up to now, the only evidence that psychiatric trainees can offer of their competency in the areas of management and leadership comes from multi-source feedback and supervisors’ reports. Although these are important sources of evidence, they do not readily provide opportunities for giving timely and specific feedback, which is recognised as the strength of WPBAs (Carr, 2006). There is therefore a need for a workplace-based assessment tool that provides short-loop feedback on a trainee’s performance of tasks that involve the exercise of competencies in these areas. The Royal College of Psychiatrists has been working to develop a new assessment tool, the Direct Observation of Non-Clinical Skills (DONCS), to meet this need. The piloting of the DONCS allowed an opportunity to take the development of WPBAs further by trialling new ways of marking and a new way of mapping WPBAs to curriculum requirements.
In this chapter, I will say a little about how the DONCS was developed, describe the DONCS and how it is to be used, and outline the early results that have come from the pilot study of the tool.
The development of the DONCS
Shortly after the launch of competency-based curricula in UK postgraduate training it became apparent that advanced trainees had difficulty identifying suitable evidence of their progress in non-clinical domains of practice, particularly in the areas of leadership and management. Advanced trainees routinely performed tasks such as chairing and facilitating meetings, giving evidence at formal hearings and providing supervision, both to meet the needs of their service and as an essential part of their training. It was clear that there was little or no formal training in many of these day-to-day tasks of leadership and management, and the educational value of such tasks was neglected. It is recognised that one of the most important elements in the utility of workplace-based assessment is its potential for impact on learning (van der Vleuten, 1996). The Royal College of Psychiatrists, as the body responsible for developing the specialty training curricula for psychiatry in the UK, recognised the desirability of having a WPBA tool that could give feedback to a trainee performing non-clinical tasks. The College established a task-and-finish group of psychiatric educators and trainees, with lay input, to appraise the options and begin a pilot evaluation. The task-and-finish group considered that an assessment method based on direct observation of trainees in real practice offered both high face validity and the benefit of short-loop feedback, and this was therefore the preferred option. The model of this form of workplace-based assessment is the Direct Observation of Procedural Skills (DOPS). The DOPS was developed by the Royal College of Physicians to provide an assessment of a trainee’s competence in performing practical procedures (Wragg et al, 2003). The assessment provides feedback on generic domains such as communication skills as well as the performance of the specific task being observed. This format seemed to lend itself well to the need that the new instrument was to meet. Furthermore, the choice of a name for the new instrument that was similar to that of the DOPS was deliberate, to exploit user familiarity with an instrument they were already using. At the time the task-and-finish group was conducting its work, there was a growing awareness among medical educators that the practice of using numerical Likert-type scales as the outcome measure for WPBA tools had serious shortcomings. Indeed, it has now become accepted advice that text-based descriptors should be used in WPBA tools (Collett et al, 2009; PMETB, 2009). The group decided to adopt a 3-point descriptor-based scale with ‘ready for consultant practice’ as the outcome.
Another issue emerging in psychiatric education at this time was dissatisfaction with using the General Medical Council’s Good Medical Practice (2006) as an organising framework for curriculum outcomes. Work was under way to rewrite the psychiatry specialty curriculum within an alternative framework, and CanMEDS (Frank, 2005) had been chosen as the framework to be used. CanMEDS organises the knowledge, skills and behaviours necessary for medical practice into seven physician roles or metacompetencies. The role of the doctor as medical expert is the central integrating metacompetency and this is supported by the six roles of the doctor as communicator, collaborator, manager, health advocate, scholar and professional. It is recognised that marking global domains provides a more valid assessment of professional performance than using a checklist (Regehr et al, 1998) and it is now recommended that WPBAs should also be marked using holistic descriptors (Swanwick & Chana, 2009). The task-and-finish group therefore saw the piloting of the DONCS as an opportunity to introduce the CanMEDS framework to UK psychiatrists by using the seven metacompetencies as the marking domains for the DONCS. The task-and-finish group also considered the general layout of the DONCS form, particularly the prominence given to the scoring matrix. It was argued that, since the purpose of all WPBAs is primarily to give formative feedback, it was distracting to place the marking grid at the beginning of the form; instead, the free-text feedback boxes should be placed first on the form. The group experimented with both versions and it was agreed to keep the established WPBA format, because it was found that using the scoring grid first could usefully help the composition of the formative feedback.
How to use the DONCS
The doctor being assessed leads the process and identifies the episode in which they will be assessed. The trainee’s supervisors will guide them, using the curriculum to identify the competencies in which they should be assessed and will help them identify suitable opportunities for assessment. Situations such as chairing and contributing to meetings, providing clinical supervision, assessing and appraising junior colleagues, acting as a consultant to external agencies and giving evidence at formal hearings all provide opportunities to be assessed against curriculum competencies using the DONCS. The assessor should be a senior member of the healthcare team who is competent in the skill that they will assess. They need not be medically qualified, but should be trained in assessing psychiatrists in training, so they are familiar with the expected standard of performance. They should also be trained in methods of giving and receiving feedback. The
College has produced a written guide to the DONCS for assessors, which describes how the assessment should be carried out. The guide explains the use of the ‘readiness for consultant practice’ scale and the CanMEDS metacompetencies. In some situations, such as chairing a meeting, it may be possible for a number of assessors to independently rate the doctor’s performance. This is desirable, as it may improve the reliability of the rating. When this happens, raters should agree with the doctor in advance which of them will give the feedback. The observation of the trainee’s performance should last at least 30 min and may last longer. It should take 10–15 min to give immediate feedback and, to maximise the educational impact of the exercise, this should be given as soon as possible after the assessment.
Using the DONCS in specific situations
Chairing meetings
The purpose of exercising this skill is to seek to maximise the decision-making capability of a group of people. The exercise may involve using competencies from all the CanMEDS roles, but particularly those of medical expert, communicator, collaborator and manager. Positive indicators that assessors should look for in making their assessments will include:
• ensuring that all relevant information is considered
• facilitating the expression of a range of views/options
• managing time effectively
• ensuring that clear decisions are reached and understood by all participants.
Clinical supervision
The purpose of exercising clinical supervision is to contribute to patient safety and the efficient use of healthcare resources. This is achieved by offering psychiatric expertise to other practitioners to enhance the quality of clinical encounters to which the psychiatrist has not directly contributed. The exercise of this skill may involve competencies from all the CanMEDS roles, but particularly those of medical expert, communicator, collaborator, manager and professional. Assessors will look for positive indicators that will include:
• the identification of appropriate goals for the supervision encounter
• sufficient exploration of the clinical situation to ensure awareness of all information necessary for safe decision-making
• the development of a clinically appropriate management plan.
Assessing and appraising
The purpose of exercising this skill is to help provide assurance that learners (students and more junior trainees) are displaying levels of professional knowledge, skills and attitudes appropriate to their stage of training. The use of this skill should also help learners identify opportunities for further development and should help them become more effective reflective practitioners. The exercise of this skill may involve using competencies from all the CanMEDS roles, but particularly those of medical expert, communicator, scholar and professional. Positive indicators include:
• using assessment and appraisal tools appropriately
• applying appropriate standards for performance assessment
• encouraging the active engagement of learners in the appraisal or assessment process
• clearly addressing questions raised by learners
• helping learners identify further learning goals and development opportunities.
Providing oral information

The purpose of this skill is to communicate psychiatric information effectively to others orally, to inform them and, where appropriate, facilitate their decision-making. In some situations this will include giving evidence and undergoing cross-examination. The exercise of this skill may involve competencies from all the CanMEDS roles, but particularly medical expert, communicator, collaborator, scholar and professional. Positive indicators for assessors to be aware of include:
• using appropriate language that will be understood by the listener
• ascertaining the nature of the information that the listener requires
• accurately, sensitively and appropriately communicating the information
• responding appropriately to questions that the listener asks
• where the information shared relies upon clinical data, evidence that sharing it respected patient privacy and complied with contemporary frameworks of confidentiality
• where an opinion is expressed, ensuring that it is reasonable and justified.
How many assessments are needed?

Decisions about the amount of assessment that is conducted inevitably represent a compromise between the reliability, validity, feasibility and educational impact of the assessment (van der Vleuten, 1996). We do not yet have any formal psychometric data on the DONCS; decisions about the number of assessments will therefore be informed by face validity, feasibility and educational impact. From these considerations, it would seem reasonable to recommend that advanced trainees in the ST4–6 years undergo at least four DONCS assessments in each year. The episodes of assessment should be distributed among the four areas outlined above and, whenever possible, the trainee should be assessed by different assessors.
Piloting the tool

Work began on piloting the tool in the autumn of 2008. The pilot was conducted at 10 training sites and involved about 300 advanced trainees in psychiatry. A 2-hour training package on the DONCS was delivered at each site; the training included a description of the tool, an introduction to the CanMEDS framework and standard setting, and group calibration exercises on the skill of giving clinical supervision. Participants in the pilot were invited to use the DONCS in the assessment of advanced trainees in all psychiatry specialties in any situation in which they felt it would be helpful.

As part of the quality assurance of the DONCS, assessors were asked to record the skill that was assessed and the time taken to complete the assessment; assessors and trainees were asked to record their satisfaction with the episode of assessment using a 6-point Likert-type scale, where 1 is 'not satisfied' and 6 is 'very satisfied'. The study is ongoing, and will continue until sufficient numbers of assessments have been returned to allow estimates of its reliability and validity to be calculated. The results of the pilot were analysed using descriptive statistics. Between October 2008 and March 2010, 439 DONCS assessments were performed on 298 advanced psychiatric trainees, giving an average of 1.47 assessments per trainee.
Fig 7.1 Skills assessed in the DONCS pilot study (bar chart of the number of assessments by skill; categories include chairing a meeting, teaching, testifying, written communication, supervising, meeting participation, consulting and other)
The skills that were assessed were recorded for 324 of the 439 assessments performed (Fig. 7.1). Chairing meetings, teaching, giving testimony and providing written communication were the largest categories of skills assessed using the DONCS. There were some interesting skills assessed within the 'other' category, including organising teaching events, giving presentations and performing a Mental Health Act assessment. The average time taken to complete the assessment was 14 min (s.d. = 12.3). The average satisfaction score with the DONCS for trainees, out of a maximum of 6, was 4.7 (s.d. = 0.87), whereas the assessors' average satisfaction score was 4.67 (s.d. = 0.91).

Although there are insufficient data to make judgements regarding the instrument's validity, it is possible to make some early comments. The outcome measure on the instrument is 'ready for consultant practice'. A total of 2357 judgements were made against this outcome in the 439 DONCS assessments that were submitted. Out of these, 992 (42%) indicated that the domain of performance was not yet up to the standard expected for consultant practice.

These early results for the DONCS suggest that it is a welcome additional WPBA tool that can produce valuable assessment information in return for a relatively small amount of assessment time. The satisfaction ratings compare very well with those from other tools, suggesting that the instrument has a welcome degree of face validity. It appears to offer the potential to cover a wide range of curriculum areas, including those that are not specifically assessed in any other way, such as chairing meetings, giving evidence, supervising others, consulting with outside agencies and participating in meetings. It is probably not surprising that the skill most frequently assessed was chairing meetings; this is an important area of senior clinical work and there will be many opportunities for the trainee to practise it in the workplace. There were a number of skills, such as teaching and performing Mental Health Act assessments, that could be assessed using other methods. In this regard, providing written communication is another interesting skill to be assessed by the DONCS. The DONCS was intended to be an instrument that uses direct observation of performance and, as such, it is probably not suited to assessing the quality of written communication. There would appear to be a need for a tool that specifically assesses written communication.

Possibly the most striking finding from this pilot study is the high proportion of judgements (42%) that assessed performance as below the outcome level. This contrasts with the figure of around 2% of WPBAs in the foundation programme that were deemed to be 'unsatisfactory' (Davies et al, 2009). The readiness of the assessors in this study to indicate performance as below the standard expected for consultant practice vindicates the switch away from a Likert-type scoring system and greatly enhances the credibility of the DONCS as a tool for giving formative feedback. This switch should be actively considered for all WPBA tools.
Acknowledgement

I would like to express my thanks to Simon Bettison of Assessments Online, who set up the DONCS pilot and provided the facility for data collection.
References

AMRC (2008) Medical Leadership Curriculum: Enhancing Medical Leadership. Academy of Medical Royal Colleges, NHS Institute for Innovation and Improvement.
Carr, S. (2006) The Foundation Programme assessment tools: an opportunity to enhance feedback to trainees? Postgraduate Medical Journal, 82, 576–579.
Collett, A., Douglas, N., McGowan, A., et al (2009) Improving Assessment. Academy of Medical Royal Colleges.
Davies, H., Archer, J., Southgate, L., et al (2009) Initial evaluation of the first year of the Foundation Assessment Programme. Medical Education, 43, 74–81.
Dickinson, H. & Ham, C. (2008) Engaging Doctors in Leadership: Review of the Literature. NHS Institute for Innovation and Improvement, Health Services Management Centre.
Frank, J. R. (ed.) (2005) The CanMEDS 2005 Physician Competency Framework: Better Standards, Better Physicians, Better Care. Royal College of Physicians and Surgeons of Canada.
General Medical Council (2006) Good Medical Practice. GMC.
PMETB (2009) Workplace Based Assessment (WPBA): A Guide for Implementation. Postgraduate Medical Education and Training Board, Academy of Medical Royal Colleges.
Regehr, G., MacRae, H., Reznick, R. & Szalay, D. (1998) Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Academic Medicine, 73, 993–997.
Swanwick, T. & Chana, N. (2009) Workplace-based assessment. British Journal of Hospital Medicine, 70, 290–293.
van der Vleuten, C. P. M. (1996) The assessment of professional competence: developments, research and practical implications. Advances in Health Sciences Education, 1, 41–67.
Wragg, A., Wade, W., Fuller, G., et al (2003) Assessing the performance of specialist registrars. Clinical Medicine, 3, 131–134.
Chapter 8
Workplace-based assessments in psychotherapy

Chess Denman
To many, psychotherapy represents the least medical end of the continuum of skills that make up psychiatric competence. Training in the area has often been patchy and, although most people accept the value of the psychotherapy experience for trainees, the practical difficulties within the organisation often limit the exposure that trainees can obtain. Recently, the Royal College of Psychiatrists has modernised and updated its curriculum, and this has happened as much for psychotherapy as for other topic areas.

The modernised psychotherapy curriculum, as it applies to basic trainees, focuses on two core areas where psychotherapeutic competencies are important. The easier of these to define is the aim that trainees should have a basic knowledge of the main modalities of psychotherapy, be able to implement basic psychotherapeutic strategies and techniques, and be rational prescribers of psychological treatments. To that end trainees are asked to acquire theoretical knowledge in the field of psychotherapy and also to gain practical experience in the treatment of two patients. These should be in two different modalities and of two different durations. Each of these cases should be formally evaluated and submitted as a workplace-based assessment.

The second area that the psychotherapy curriculum tries to develop is what can be termed 'the psychotherapeutic attitude'. This represents a way of approaching the experience and difficulties of patients that is at the same time empathic, developmental, narrative and psychologically literate. In the psychotherapeutic attitude relatively more weight is given to idiographic (drawing on themes and narratives) than to nomothetic (drawing on statistical or categorical) explanations, and care and effort are expended in developing and maintaining emotional and mental closeness with patients. The capacities that comprise this attitude are core to the practice of all psychiatric specialties. Helping trainees to develop the psychotherapeutic attitude is a broadly based enterprise that will occur in all areas of training, but in the specific area of psychotherapy the mechanism of the case-based discussion is used both as a training and an evaluative tool and is the focus of the second main workplace-based assessment in psychotherapy.
Faced with the task of training young psychiatrists in 'human' skills, people are naturally sceptical, because our cultural perspective on issues of character and personality, which are thought of as central to the deployment of these skills, tends to make us believe that such capacities are innate. There is also a shortage of good evidence about the efficacy of teaching in psychotherapy (Binder & Strupp, 1993). The search for good evidence in the area is also hampered by a wide diversity of practice in training and in the definition of the competencies to be tested. This is an international problem, as surveys in the USA show (Khurshid et al, 2005). However, where teaching programmes have been introduced in a well-organised way, they have had good uptake and have been able to gather evidence of the acquisition of skills in a range of modalities such as cognitive–behavioural therapy (Martinez & Horne, 2007) and psychodynamic psychotherapy (Zoppe et al, 2009). Furthermore, systematic organisation and careful implementation have helped to overcome problems with low attendance and to increase the proportion of trainees getting valuable experience (Carley & Mitchison, 2006). Finally, there have been significant improvements in the specification of the psychotherapy competencies, both generically and in a range of specific treatment modalities. The most relevant and comprehensive of these is the definition of competencies undertaken by the Skills for Health programme (www.skillsforhealth.org.uk), which has developed a suite of competencies in cognitive–behavioural, psychodynamic, systemic and humanistic therapies using an approach that combined the combing of manualised and evidence-based therapeutic approaches with the work of expert groups of practitioners (the competencies can be accessed on the Skills for Health website).
Evaluation of psychotherapeutic competencies: general aspects

The psychotherapeutic competencies tested in the case-based discussions and in the supervisor's evaluations of the trainee's psychotherapy casework are closely linked to emotional and social skills, and to traits of character and personality that are thought socially desirable. This makes giving feedback to students who do not do well in this area a task that can present considerable difficulty. Educators may often feel more able to offer negative evaluations on intellectual or practical skills because these evaluations feel less judgemental. However, trainees who have difficulty in emotional and social domains need feedback, because without it they stand little chance of improving their capacities in this area. The use of structured tools for evaluating competence is an invaluable accompaniment to making this kind of judgement. The mental discipline that the tool offers the evaluating trainer allows them to fathom out whether what may seem at first like a hunch or an unsupported impression represents a realistic judgement of performance or is shaped by prejudice.
Furthermore, evaluative tools that anatomise competence in this area into distinct domains also guide the trainee and their teachers towards specific competencies that are in need of remediation. It is commonplace that accounts of brusque, rude or thoughtless behaviour often accompany both patient complaints regarding their treatment and other accounts of medical mishaps. In psychiatry such behaviour may seriously prejudice the capacity of a psychiatrist to engage and treat patients who may, in any case, be suspicious or frightened. So, in extreme cases where a trainee cannot manage to develop a sufficient level of competence in these skills it may be necessary to prevent them from progressing to subsequent training levels. Sometimes remedial work will be needed and, arguably, for some trainees a change in career direction may be advisable.
Case-based discussion groups

Case-based discussions offer trainees their first introduction to psychotherapeutic concepts in a psychiatric setting. The aim of the group is to foster emotional intelligence and sensitivity in trainees as well as an understanding of emotional processes in psychotherapy. Groups should be convened by a psychiatrist who is also an experienced psychotherapist. Trainees meet on a regular (usually weekly) schedule in a group of no more than seven or eight. Each trainee is encouraged to identify clinical situations or specific cases for presentation to the group. Although the trainee is expected to give an ordered account of the patient or situation, they are specifically not required to present the case as they might to a consultant running through the elements of a psychiatric clerking. Instead, the story of the patient, the emotional states and reactions of the participants in the patient's life, and the current situation are prominent. The group convener tries to help trainees to develop an understanding of the patient's mental life, of the emotional reactions they may evoke in others, including the group members, and of the implications of these phenomena for the treatment of the patient.

Groups of this nature can lead to the establishment of considerable levels of trust between trainees and encourage them to discuss with each other the anxieties and concerns that their working life evokes. Trainees often begin by bringing cases that lend themselves more intuitively to a 'psychotherapeutic discussion', such as cases of patients with personality disorders. However, as much and sometimes even more value can be gained by a discussion of a 'routine' case, such as the readmission of a well-known patient with psychosis who is currently experiencing a relapse.

Throughout the group's meetings the convener is able to evaluate the progress of trainees by using a structured form (Table 8.1, pp. 88–89), which lists the specific competencies to be evaluated and gives anchor points for inadequate, competent and excellent performance. The form should be filled out on two occasions for each trainee – once during the course of the group and once towards the end of the trainee's attendance, which should be for about 1 year. There are inevitable tensions in using an evaluative tool during a process in which trainees are being encouraged to speak more freely about their thoughts and feelings in relation to clinical material, when those feelings may not always seem acceptable or normative. This can be managed to an extent by stressing the formative nature of the first evaluation. The group convener can go through the ratings with the trainee, inviting a degree of self-assessment, and also exploring difficulties and pointing out ways to improve.
Evaluation forms

Competencies 1 and 2

The first and second competencies on the form evaluate time-keeping and attendance. There are often simple organisational reasons for poor attendance by trainees and it is important to pick this up early during their time in the group. Once these are resolved, further difficulties in attending may reflect problems in self-organisation and thus act as an indicator of more generic problems in this area. It is important to help trainees be more organised, but also to help them understand the impact of their absence on the rest of the group and to see that emotional involvement requires a level of commitment. This is an important preparation for trainees' future roles within multidisciplinary teams, where the group trust needed to carry and accurately manage clinical risk depends on emotional sensitivity and commitment.

Case study 8.1
A new consultant psychotherapist took great care to set up a case-based discussion group on a day when all trainees said they could attend. Even so the group seemed to limp from week to week, with a different membership on each occasion and sometimes very few doctors present. The consultant introduced a record of attendance, and this improved matters partially but did not resolve difficulties. The consultant then began to talk to trainees one by one. For some it was clear that holidays, working nights and the other demands of the working time directive were disrupting attendance. For others, although practical difficulties were often adverted to (e.g. busy times on the ward), it seemed that the core problem was a lax attitude to attending teaching of all kinds. In the case of one trainee the conversation developed into an account of that trainee’s deep scepticism about psychotherapeutic approaches. The consultant was able to increase attendance by dealing with the wide range of issues that were impeding it.
Competency 3

The third competency deals with the capacity to get close to the mind of patients and deal with the feelings this evokes. We all have a capacity to understand the motivations and preoccupations of others, and much ordinary conversation and human interaction betrays our deep interest in discovering the nature and quality of other people's experiences and stories. However, some aspects of psychiatric enquiry, such as the taking of a history and the anatomising of symptoms, can alienate both patient and doctor from this normal human dialogue.
Table 8.1 Case-based discussion (CbD) group assessment form (to be completed after 6 and 12 months' attendance at the group). Developed by Mark Evans.

Competence 1: Able to attend regularly and manage future predicted absences
• Unacceptable (score 1 or 2): Poor attendance at CbD group or gives no notice of absences
• Work to be done (score 3): Irregular attendance or sometimes fails to inform group of absences
• Satisfactory (score 4): Regularly attends CbD group. Can think ahead and keeps others in group informed in good time of predicted absences

Competence 2: Demonstrates an understanding of the importance of timekeeping and of having a predictable and regular setting (frame) for therapeutic work
• Unacceptable (score 1 or 2): Consistently late for the CbD group, regularly takes calls or leaves during the group or is otherwise distracted during sessions
• Work to be done (score 3): Lateness and/or distractions interfere with trainee's ability to reflectively work in the CbD group
• Satisfactory (score 4): Is consistently punctual for CbD groups and manages other work to create a space for reflective work (turning off mobile phones, etc.)
• Accomplished (score 5 or 6): As before, and demonstrates an awareness of how unpredictability can affect the therapeutic relationship

Competence 3: Able to listen to and connect with the patient, adequately containing own anxiety
• Unacceptable (score 1 or 2): Unable to reflect in the group on how the patient makes the trainee feel or shows evidence of inability to make any connections with patients discussed
• Work to be done (score 3): Demonstrates difficulty in reflecting in the CbD group about how the patient makes the trainee feel or shows evidence of difficulty connecting with patients' feelings
• Satisfactory (score 4): Can reflect on the personal impact of the patient without reacting too defensively, e.g. by becoming too theoretical at the expense of a connection or by being too quick to act (driven by strong emotion)
• Accomplished (score 5 or 6): Can confidently reflect on the personal impact of the patient and use this information to inform potential management strategies

Competence 4: Able to provide a narrative account of contact with the patient without adopting a purely biological or medical model
• Unacceptable (score 1 or 2): Unable to think about the patient as a person in their own right who has problems. Rather, shows evidence of thinking of patients as 'cases' or medical diagnoses
• Work to be done (score 3): Struggles to provide an account of the patient as a person in their own right
• Satisfactory (score 4): Can demonstrate an interest in the patient as a person with their own story which can be communicated both avoiding jargon and separately from a medical diagnosis
• Accomplished (score 5 or 6): As before, and demonstrates an increased ability to pick out details and nuances of the story, attempting to link symptoms with anxiety and hidden feelings

Competence 5: Able to respond to others in a nonjudgemental way
• Unacceptable (score 1 or 2): Is consistently opinionated, dogmatic or dismissive of other viewpoints within the group or shows evidence of doing so with patients
• Work to be done (score 3): Can at times be opinionated, dogmatic or dismissive of other viewpoints within the group or shows evidence of doing this with patients
• Satisfactory (score 4): Demonstrates in the CbD group an acceptance of others' experiences as different from one's own yet equally valid and informative
• Accomplished (score 5 or 6): As before, and is curious to understand how different reactions from within the group may relate to the patient's internal world

Competence 6: Self-aware enough that he/she does not have to impose personal solutions or self-management strategies
• Unacceptable (score 1 or 2): Consistently either imposes inappropriate personal strategies on the patient or does so to other trainees within the CbD group
• Work to be done (score 3): At times imposes inappropriate personal strategies on the patient or does so to other trainees within the CbD group
• Satisfactory (score 4): Demonstrates an understanding that all people are different and that what works for the therapist may not work (or be appropriate) for the patient
• Accomplished (score 5 or 6): Shows recognition of how the professional can get drawn into offering solutions (both by the patient and through their own wish to cure) and why it might not be appropriate to do so

Competence 7: Able to recognise and manage the different factors (gender, culture, age, disability, etc.) contributing to the emotional responses to the patient
• Unacceptable (score 1 or 2): Is either oblivious to such factors or demonstrates racist, sexist or ageist attitudes
• Work to be done (score 3): Demonstrates some lack of awareness of such factors and their importance to the therapeutic relationship
• Satisfactory (score 4): Demonstrates sufficient awareness of own reaction to such factors that relationships with patients do not appear to be adversely affected
• Accomplished (score 5 or 6): As before but with increased confidence and demonstrates reflective curiosity about how these factors are affecting the therapeutic relationship

Competence 8: Able to recognise the influence of unconscious processes on the interaction with the patient
• Unacceptable (score 1 or 2): Demonstrates a significant lack of awareness of, or is obviously unwilling or unable to think about, unconscious processes
• Work to be done (score 3): Demonstrates some lack of awareness of unconscious processes or struggles to think about them
• Satisfactory (score 4): Demonstrates an awareness that all that occurs in the therapeutic relationship may not be explained by conscious motivation
• Accomplished (score 5 or 6): Has some understanding of projective processes and is willing to think about these and their impact on the therapeutic relationship
Additionally, disorders of emotion, perception, belief or cognition all stretch the capacity for ordinary narrative understandings of experience. Thus the capacity to get close to the minds of patients and to see things from their perspective needs special attention and specific fostering. Difficulty in this area may signal anxiety on the part of the trainee in dealing with emotional states in patients. Anxiety restricts freedom of thought as well as intellectual and emotional creativity. Being able to get close to the mind of patients who are suffering, or whose mental life is very different from the norm, without becoming overwhelmed by anxiety is the hallmark of a good psychiatrist and so is a key skill. Trainees need help to reflect on problems they may have in this area. The case-based discussion group can be extremely important in facilitating this because trainees can not only share their anxieties but also begin to appreciate the diversity of responses that people have to emotional material.

Competency 4

The fourth competency builds on the mental closeness that trainees have been helped to develop with their patients and moves this on from a simple extension of the ordinary interpersonally focused conversations that trainees might have about any individual. By moving back into a more analytical, evaluative and intellectual stance on the emotional and narrative information that mental closeness has generated, the trainee is helped to develop a psychological and social understanding of the patient which moves from checklist to narrative. The events that formed a psychiatric clerking become invested with meaning, and their personal significance for the patient as well as their epidemiological implications are held in mind. The capacity to move from fact to meaning is a critical part of the intelligent practice of psychiatry. The work of Brown & Harris (1978), which shows that the specific context in which life events occur critically determines their causal force in generating psychiatric disorder, is a prime example of this move.

Difficulty in this competency signals a limit in the trainee's capacity to get close to the reality of the patient's life and to sensitively appreciate the meaning of events to the patient, and therefore to predict the varying impact of similar life events on different patients. Support in this area can involve encouraging trainees to talk to patients in less formal settings, sharing perceptions of patients' experiences with each other and, at times, even thinking about parallels from fiction or film.

Case study 8.2
Trainee A started her discussion of a case in the group with a meticulously taken history that included parental divorce, an early experience of physical abuse at the hands of a stepfather and ultimately bullying of others by the patient at school. Although these events were listed, the trainee gave no sense that they might have had an effect on the patient beyond an initial discussion of the statistical association between early adversity and later
psychiatric symptomatology. Discussion in the group was able to deepen this understanding. Group members were able to think about the dynamics that might turn an abused child into a bully and also about the kinds of interpersonal patterns that the patient might now be likely to adopt in adult life.
Competencies 5–7

Competencies five to seven all in their different ways deal with the appreciation and management of difference. At a basic level this involves being able simply to tolerate different forms of life about which one might have a critical, ignorant or judgemental reaction. As this skill deepens, appreciating difference involves developing the capacity to suspend too immediate a judgement about the way events may turn out or the likely outcome of actions that the patient may take. Sharing stories between trainees, and noting the different reactions of different trainees to the stories of patients, are ways in which this appreciation may be developed. These two aspects of the appreciation of difference need to be informed by an appreciation of cultural and social differences and, in particular, an understanding of the way in which oppression and discrimination affect and influence the lives of a range of marginalised groups. Difficulties in the area of respect for difference can seriously compromise the capacity of psychiatric trainees to practise psychiatry ethically and to treat all patients with equal respect, particularly with appropriate respect for autonomy.

Case study 8.3
Trainee B presented the case of a woman who had been admitted to hospital after a serious episode of self-harm. The woman was a sex worker. After some discussion in the group about her condition the group leader pointed out that no one had mentioned the nature of the patient’s work. One group member said that it would be prejudiced to assume that the patient’s work was evidence of her disturbance. Other group members felt able to express their sense of disapproval of what the patient did. It was clear that no member of the group had a solid knowledge base about the social conditions that lead to sex working, the pressures that the patient might have been under to take up that sort of occupation, or circumstances of her daily life. One member of the group agreed to do some research in this area and to present this the following week.
Competency 8

Competency eight is a subtle notion, expressed at first glance in partly psychodynamic terms, but in fact it is equally important in cognitive and systemic therapies. Humans cannot account for all their actions, and many psychiatric symptoms seem to arise from non-rational sources within the mind. There is no fundamental reason to suppose that psychiatrists will not be subject to the same influences to act in non-rational ways as their patients. Remaining alert to non-conscious influences on behaviour (which may range from unintended operant conditioning effects through to psychodynamic interpersonal defensive manoeuvres) is an advanced skill, but not so difficult that trainees cannot begin to develop it.
Psychotherapists make much of coincidences and slips (as did Oscar Wilde's Lady Bracknell), perhaps at times too much. However, learning to recognise when people's actions betray deeper motivations or the operation of deeply held but automatically operating schemas is a core psychotherapeutic skill. Trainees should learn how to recognise such moments, but also to recognise when such inference is not legitimate or is being used to blame the patient.
Psychotherapy casework

Since psychological treatments are an important way to help patients, trainees need to understand them, their mode of action and their strengths and weaknesses. As future psychiatrists, trainees will at the very least need to understand how to prescribe psychological treatments and evaluate their outcome. Furthermore, not all psychotherapeutic interventions are delivered in the context of formal psychotherapy; when meeting with the patient, psychiatrists may well have an opportunity to use some basic psychotherapeutic techniques. These facts constitute one of the chief rationales for requiring workplace-based evaluation of treated patients.

No single psychotherapeutic approach has achieved unquestioned pre-eminence over all others and a range of approaches are probably needed in different circumstances. For this reason trainees are asked to treat two patients using different modalities of treatment over different durations. Psychotherapy is a highly varied field and the practicalities of finding a suitable patient and supervisor mean that trainees are likely to take on a wide range of treatments. Suitable cases are often best drawn from the trainee's routine clinical practice. Where good departments of cognitive–behavioural therapy are active, trainees may be able to take on a patient for this type of treatment and, by the same token, where psychodynamic therapies are practised, trainees can get experience in this modality. Trainees getting experience in child psychiatry may have the opportunity to do play therapy or take on a family for treatment. Supportive therapies or psychoeducational treatment may also be available, and in some places group therapy may be undertaken. The latter has particular advantages in that it offers trainees the chance of co-leading a group with a more experienced therapist.

Although interest and opportunity may determine much of what is done, there are some basic elements that must be adequately set up and which are similar in all cases. Trainees should not undertake treatments that are too advanced for them to perform competently, even under supervision. In some more theory-driven psychotherapeutic modalities an initial theoretical training may be needed. Patients should be assessed by a senior individual as suitable for a trainee and need to give informed consent to their course of treatment. Trainees should be supervised on a regular basis and their supervisor should be appropriately qualified in the therapeutic modality they are teaching the trainee to deliver. Supervision may involve listening to the trainee's report of their activities, checking and reading notes and written materials and, on occasion, listening to an audio recording or watching a video recording of sessions.

Supervised psychotherapy lends itself to presentation as a workplace-based assessment. Trainees present evidence of their work through the use of a structured assessment tool (Supervised Assessment of Psychotherapy Expertise, SAPE; Table 8.2, pp. 94–95), which helps supervisors summarise and track their progress. The SAPE tool can be used both formatively and summatively (formative and summative assessment functions are explained in Chapter 1, pp. 10–11). In a longer treatment it could be completed once during the middle part of the treatment and once at the end. Where briefer treatments are being conducted, two cases could be used, one formative and the other summative in nature.

At a basic level the SAPE can be used to assess therapies conducted in any modality. It achieves this by dividing the competencies into those that are general and must be exercised in the performance of any psychological treatment, and those that contain modality-specific elements. These latter competencies will be judged by the assessor according to the modality of treatment delivered. Thus, for competence 2 ('Understand rationale of treatment') the understandings that would be needed in a cognitive–behavioural treatment would differ from those that would be necessary in a systemic family therapy. The SAPE has been adapted for higher specialist trainees in psychotherapy into a format that uses competencies specific to the modality being used and that are also linked to the Skills for Health standards.
Evaluation forms

Competencies 1 and 4

The first and fourth competencies are an extension of the main competencies developed by case-based discussion – the capacity of the trainee to get close to the patient's mind in a respectful and sensitive way. Because the relationship with the patient that is developed and supervised in psychotherapy is often longer and more intense – even in the briefer treatments – than the relationships with patients that are discussed in the case-based discussion groups, it is possible to help trainees develop their skills in this area and, importantly, show them ways to check the level of closeness that they have developed with the patient and to repair breaks in that closeness when they occur. Trainees differ in their capacity to do this and the levels of skill that are required for success may be different for each patient. However, a trainee who is consistently unable to achieve this in a series of therapies would risk patients dropping out of treatment and, perhaps in later practice, risk patients asking to see a different psychiatrist.

Competencies 5 and 8

Competencies five and eight deal with the work needed to begin, develop and end a professional therapy contract. These skills are partly specific to the social encounter of therapy, and partly support the need for boundaries and rules of appropriate contact in therapy.
Table 8.2 Supervised Assessment of Psychotherapy Expertise (SAPE). Developed by Chris Mace.

Instructions for the supervisor: Consider each aspect in turn. Circle the one option that corresponds most closely to your experience of the trainee's performance. Total the scores for each column and enter the total score opposite. Standards refer to level of performance expected by ST3.

Competence 1: Attitude towards patient
• Unacceptable (score 1): Derogatory, intrusive or disrespectful
• Much work to be done (score 2): Often makes unjustified assumptions
• Borderline (score 3): Some difficulties in appreciating patient's position
• Satisfactory (score 4): Respectful and nonjudgemental
• Accomplished (score 5 or 6): Informed by realistic but positive view of patient's potential

Competence 2: Understand rationale of treatment
• Unacceptable (score 1): Cannot explain rationale of treatment
• Much work to be done (score 2): Confused about key differences between therapeutic approaches
• Borderline (score 3): Still unsure of how therapy would help patient
• Satisfactory (score 4): Correctly explains basic principles of approach
• Accomplished (score 5 or 6): Recognises how recommended actions lead to therapeutic change

Competence 3: Provide working formulation of patient's difficulties
• Unacceptable (score 1): Minimal understanding of what formulation is or no attempt to produce one
• Much work to be done (score 2): Formulation is attempted but significantly incomplete or inaccurate
• Borderline (score 3): Formulation lacks at least one important component
• Satisfactory (score 4): Adequate account of predisposition to, and precipitation and maintenance of, problems
• Accomplished (score 5 or 6): Formulation is cogent, personalised and theoretically sound

Competence 4: Develop empathic and responsive relationship with patient
• Unacceptable (score 1): Little or no sense of patient's feelings or perspective
• Much work to be done (score 2): Working relationship is limited by lack of rapport, interest or understanding
• Borderline (score 3): Relationship is often sound but also lapses through therapist's uneven attunement
• Satisfactory (score 4): Earns patient's trust and confidence from ability to listen and appreciate their feelings
• Accomplished (score 5 or 6): Developed capacity to feel and imagine events from patient's perspective

Competence 5: Establish frame for treatment
• Unacceptable (score 1): Behaves as if in another setting entirely (e.g. talking with a friend, leading an interrogation)
• Much work to be done (score 2): Repeatedly fails to protect setting, keep to time or confuses patient by behaviour towards them
• Borderline (score 3): Occasionally fails to maintain setting appropriately
• Satisfactory (score 4): Manages setting, time and personal boundaries consistently
• Accomplished (score 5 or 6): Optimises working collaboration by adjusting approach to patient

Competence 6: Use of therapeutic techniques
• Unacceptable (score 1): Actions in sessions bear no relation to patient's needs
• Much work to be done (score 2): Attempts at intervention are often clumsy or inappropriate
• Borderline (score 3): Interventions vary considerably in execution and success
• Satisfactory (score 4): Well-chosen interventions are usually carried out thoughtfully and competently
• Accomplished (score 5 or 6): Interventions timed and phrased sensitively, linked to positive change

Competence 7: Monitor impact of therapy
• Unacceptable (score 1): Repeatedly unable to recognise positive or negative effects when these occur
• Much work to be done (score 2): Limited insight into how patient is being affected by the therapeutic sessions and attendant risks
• Borderline (score 3): Evident blind spots in assessments of impact on patient
• Satisfactory (score 4): Describes impact of therapy on patient comprehensively and accurately
• Accomplished (score 5 or 6): Aware of interrelationship between different aspects of change during treatment

Competence 8: End treatment
• Unacceptable (score 1): Abandons patient without warning, or is unable to let them go
• Much work to be done (score 2): Little attention paid to impact of ending, whether planned or patient leaves early
• Borderline (score 3): Ending is considered, but perfunctorily or at unsuitable moments in the treatment
• Satisfactory (score 4): Patient is prepared for ending of treatment and its consequences are anticipated
• Accomplished (score 5 or 6): Patient helped to continue to develop after cessation of treatment

Competence 9: Use of supervision
• Unacceptable (score 1): Misses several sessions without explanation or is very cynical
• Much work to be done (score 2): Guarded and uninvolved or too dominant in discussion. Fails to grasp what is being conveyed
• Borderline (score 3): Shows capacity to use supervision but this remains inconsistent
• Satisfactory (score 4): Attends regularly, participates honestly and openly in discussion, uses advice received
• Accomplished (score 5 or 6): Allies sensitivity with creativity in reflections about the therapy

Competence 10: Documentation
• Unacceptable (score 1): Records (notes and/or letters) are seriously incomplete, inaccurate or misleading
• Much work to be done (score 2): Records omit key events in treatment; summary excessively generalised or uninformative
• Borderline (score 3): Records are often competent but incomplete
• Satisfactory (score 4): Record of treatment sessions is focused and clear; final summary/letter is apt and comprehensive
• Accomplished (score 5 or 6): Records resemble those of a more experienced therapist
Even so, it is obvious that the opportunity to explore and understand aspects of professional behaviour, such as an appreciation of the difficulties that self-disclosure can pose for the patient, has widespread applicability to the general practice of psychiatry. The conduct of a therapy should offer the chance to take these skills beyond simple adherence to professional formalisms and allow trainees to understand the rationale for behaving in certain ways. It is important to be able to recognise and differentiate those times when deviations from the norms of professional behaviour may be tempting but dangerous from those occasions when less standard (but not unprofessional) conduct may be legitimate. Competence in these domains is so fundamental that trainees who fail badly in this area need serious attention. On the other hand, it can be very difficult to achieve this competence, such that even advanced practitioners continue to need supervision and support to ensure appropriately nuanced adherence.

Case study 8.4
The secretary of the psychological treatment department telephoned the consultant to say that the patient had come into the department for their session but that their trainee therapist had not turned up. The patient was angry and upset. The consultant dealt with the situation and then investigated what had happened. It turned out that the trainee had become unwell and not come to work. However, although they had telephoned the ward to say they were not coming, they had not called the therapy department and cancelled the appointment with the patient. In supervision this omission was discussed from a number of perspectives. The trainee was an unusually responsible individual who was mortified by the error and wanted to telephone the patient, to apologise and to offer them a different therapist. The consultant discussed with the trainee whether this would be an intervention that was for the patient's benefit or whether it was primarily one that would reduce the trainee's sense of distress at their error. Ultimately, the trainee was able to take a more moderate view of their error, to think about the patient's experience of what had happened and to prepare a more appropriate apology.
Competencies 2, 3, 6 and 7

Competencies two, three, six and seven look at the specific technical aspects of therapy. These will be different in different modalities, and need to be backed by specific theoretical understanding by the trainee. The basic techniques that underpin the different schools of psychotherapy are in themselves powerful therapeutic tools which may have applicability outside the specific arena of an individual treatment. So, for example, giving a depressed patient simple help with behavioural activation is entirely feasible in the context of an out-patient appointment, as would be the clarification and confrontation stages of drawing out the consequences of an emotional conflict within a patient.

Although a trainee who is unable to deliver any elements of a specific therapy in a technically competent way obviously needs further training and support, some failures in this area need to be judged carefully. Supervisors who are specialists in their field but not as used to training junior doctors as they are to training therapy specialists can hold trainees to a standard much higher than is truly appropriate, or may shirk any evaluative role, passing all students who display even minimal enthusiasm. These problems can best be managed by the careful selection and training of supervisors. For this reason, although supervisors need not be doctors, each training programme should be administered and monitored by an appropriately qualified medical psychotherapist or general psychiatrist with specific psychotherapeutic interest and training.

Another issue is that technical difficulties may not always be a result of the trainee's deficits, as some patients present unexpected technical challenges. Such cases provide a powerful argument for using techniques of oversight and supervision other than the trainee's own account of the session to monitor and improve skills. Audio- or videotaping sessions is one way to achieve this. If tapes are made, appropriate care should be given to their storage, since they form part of the patient's notes. Patients need to understand and consent to the making of a tape. Most importantly, both supervisor and trainee need to give thought to the considerable amount of work involved in listening to and evaluating a recording, to ensure that this powerful but time-consuming tool is put to best use. A less exacting but also helpful technique is to ask therapists to write a blow-by-blow account of the session they have conducted in as much detail as possible. The act of writing in such detail helps the trainee to think over the session and gives the supervisor an insight into the way that the interpersonal dialogue has unfolded.

Competencies 9 and 10

Competency nine covers the trainee's capacity to use the supervision process itself, and competency ten covers the important topic of keeping sensible and informative notes.
The future of WPBAs in psychotherapy

There remains considerable scope for developing future tools to assess trainees' performance, both in relation to the psychotherapeutic stance and in connection with specific treatments. An obvious area is to test the capacity of a trainee to develop a comprehensive formulation of a case that takes appropriate account of psychological mechanisms. One technique that could easily be developed into a more structured assessment tool is to ask trainees to discuss the origin and pathogenesis of a presentation in terms of separately discussed biological, psychological and social factors, describing for each factor the role it plays in the predisposition to, and the precipitation and perpetuation of, the condition. Simply asking trainees to attempt to write something in each section of the nine-cell grid that these two conceptual axes generate (Fig. 8.1) can stretch their conceptual capacities.

Another area where the psychotherapeutic stance is relevant is in relation to work in teams, and to an appreciation of, and capacity to operate in, a multidisciplinary and often conflicted environment.
Fig 8.1 A grid for systematically recording a complete psychological formulation of a patient's difficulties: rows for biological, psychological and social factors; columns for predisposing, precipitating and perpetuating factors (nine blank cells to be completed)
More advanced trainees may take on the supervision or support of junior staff – medical, nursing and in other professions allied to medicine. Efforts in these areas will ultimately need evaluation.

Another area that will need evaluation is the whole methodology of workplace-based assessment. The causal chain from training method to trained individual, from trained individual to reliably and longitudinally competent trainee, and from competent practice to successful outcome of treatment is a long one. Whereas there are studies that independently suggest, first, that training does produce competence, second, that competence produces methodological adherence and, third, that adherent treatment is associated with good outcome, there are no studies that follow the entire process through in one prospective arc. Furthermore, there are plenty of reasons to suppose that the chain of positive causation could be weaker in practice than it might seem from research in specialised, isolated settings. Curiously though, the methodologies that have been developed within psychotherapy research for gathering reliable evidence of human qualities, interpersonal competencies and outcomes may be the very ones that can help researchers investigate the relative benefits of a range of educational interventions.
References

Binder, J. L. & Strupp, H. H. (1993) Recommendations for improving psychotherapy training based on experiences with manual-guided training and research: an introduction. Psychotherapy: Theory, Research, Practice, Training, 30, 571–572.
Brown, G. W. & Harris, T. (1978) Social Origins of Depression: A Study of Psychiatric Disorder in Women. Free Press.
Carley, N. & Mitchison, S. (2006) Psychotherapy training experience in the Northern Region Senior Unified SHO Scheme: present and future. Psychiatric Bulletin, 30, 390–393.
Khurshid, K. A., Bennett, J. I., Vicari, S., et al (2005) Residency programs and psychotherapy competencies: a survey of chief residents. Academic Psychiatry, 29, 452–458.
Martinez, R. & Horne, R. (2007) Setting up and evaluating a cognitive–behavioural therapy training programme for psychiatric trainees. Psychiatric Bulletin, 31, 431–434.
Zoppe, E. H. C. C., Schoueri, P., et al (2009) Teaching psychodynamics to psychiatric residents through psychiatric outpatient interviews. Academic Psychiatry, 33, 51–55.
Chapter 9
Educational supervisor's report

Ann Boyle
Before the implementation of Modernising Medical Careers (Department of Health, 2003; Department of Health et al, 2004), an educational supervisor in psychiatry was the named consultant supervisor for a training placement. This individual provided trainees with 1 hour per week of face-to-face supervision for the development of clinical and personal skills. This time was enshrined in supervisor and trainee timetables and has been a highly valued component of UK postgraduate training in psychiatry.

From August 2007, A Guide to Postgraduate Specialty Training in the UK (The Gold Guide) set out the arrangements for the introduction of competence-based training in the UK (Department of Health et al, 2007). This document outlines the responsibilities of educational supervisors, who oversee training to ensure trainees are making adequate clinical and educational progress for a defined period of training. These individuals need to be prepared for the role, with appropriate training in a number of key areas. It is now a requirement that the responsibilities of clinical and educational supervision be uncoupled within psychiatric training. The Royal College of Psychiatrists has developed a useful description of these different roles in specialty training: the named clinical supervisor works closely and directly with the trainee in the training placement and delivers 1-hour weekly supervision, while the educational supervisor is allocated to work in partnership with the trainee to drive the educational appraisal process (Royal College of Psychiatrists, 2008).

The 2010 specialty curriculum for psychiatry (Royal College of Psychiatrists, 2010) is based on a model of intended learning outcomes, with specific competencies given to illustrate how these outcomes can be demonstrated practically by trainees. Portfolio Online has been available to trainees from August 2010 (https://training.rcpsych.ac.uk/). It is a web-based tool developed by the Royal College of Psychiatrists which provides an electronic repository of trainee activities, development and achievements, including workplace-based assessments. It is the responsibility of the trainee to ensure evidence is developed to support learning objectives. The learning objectives can then be linked to the curriculum by the trainee. The educational supervisor can monitor trainee progression through the electronic portfolio. This assists in the identification of gaps in the trainee's curriculum coverage, which the supervisor can then explore with the trainee during the educational appraisal process. The final portfolio review will form the basis for the annual structured report by the educational supervisor. (The College's e-portfolio is discussed by Larissa Ryan and Clare Oakley in Chapter 10, pp. 117–118.)
Purpose and structure of report

The educational supervisor is responsible for the preparation of an annual structured report for submission to the Annual Review of Competence Progression (ARCP) panel (the ARCP requirements will be discussed in Chapter 11). The report must be discussed and agreed with the trainee in advance of submission and must be an honest and objective reflection of the trainee's development (General Medical Council, 2006). Trainees are usually given at least 6 weeks' notice for submission of the report. In the event of a trainee failing to provide this documentation, the panel will be unable to consider the trainee's progress.

The structured report should reflect the progress made by the trainee towards meeting the learning objectives developed for the training year, be based on the quality of evidence collected by the trainee in his/her learning portfolio, and be evidenced by the appraisal process. The content of the report should assist the panel reviewing the evidence to make a judgement about whether the trainee is progressing satisfactorily through the period of training.

The cornerstone of the educational appraisal process is the development of an individual learning plan (also called a personal development plan, PDP) with clear learning objectives and outcomes. Learning objectives should be SMART (specific, measurable, achievable, realistic and time-bound) and include areas of knowledge, skills and attitudinal development to guide trainee learning. Trainees are adult learners and are expected to take responsibility for their learning activities and clinical practice within this model. The individual learning plan should form the basis of all appraisal discussion throughout the period of learning.

The relationship between educational supervisor and trainee should be a supportive one. The discussions at the meetings are confidential and trainees should feel free to bring up openly any worries or difficulties without fear of retribution. A summary of such meetings needs to be documented and agreed by both parties. The meetings should occur at regular intervals, at a minimum at the beginning, middle and end of a 6-month placement, but may need to take place more frequently to address any problems that arise. Ideally, the report should be typed rather than handwritten to ensure it is legible for scrutiny by the ARCP panel.

The documentation for the structured report is developed by each specialty school and should include a review of performance in the workplace, experiential and other outcomes, and trainee underperformance. These elements are discussed below.
Review of performance in the workplace: workplace-based assessments

There is a defined minimum number of assessments for each year of specialty training. The structured end-of-year report should establish that the assessments have been performed in a timely fashion and at appropriate intervals (not several assessments in one day), with sampling of the curriculum as well as a range of assessors. The educational supervisor should review both the quantitative (scores) and qualitative (free-text) information provided by the assessments. Trainees will need encouragement to acquire evidence of remediation of any development needs identified earlier by an assessor. Where there are concerns, trainees who do not consistently reach the expected level of competence for a period of training will require additional assessments completed by different assessors; the outcomes of these can then be used to describe the nature of the problem(s) more clearly and to demonstrate improvement in the trainee's performance.

The educational supervisor must provide the collated feedback to trainees from a 360° appraisal (mini-Peer Assessment Tool, mini-PAT). This tool is particularly useful in the identification of the underperforming trainee. When taking an overview of the assessment process, the educational supervisor needs to form a judgement about whether the assessment tools are being used in an appropriate fashion. This may highlight important areas of development for clinical supervisors, which should be fed back to the training programme director for action.

Trainees may be reluctant to reflect on assessments that have not gone well. However, they should be encouraged to view such assessments as developmental: they are likely to present the trainee with valuable learning opportunities, should provide a focus for reflection in appraisal meetings with the educational supervisor, and should identify learning gaps that need to be addressed by the trainee.
Experiential and other outcomes
The educational supervisor will work with the trainee to develop evidence of experiential learning in a number of key areas. The quality of the evidence gathered for the final review of the portfolio for the training year should be summarised in the structured report under specific headings ('Logbook', 'Reflective practice', 'Examination progress' and 'Audit, research, teaching, management, psychotherapy and special interest') and include any areas for development in the next year of training. This is an interactive process, but suggestions under each heading include the following.
Logbook
There are a number of approaches that could be taken. A quantitative logbook of new patients seen is useful to provide assurance that a trainee has seen a range of clinical cases throughout specialty training, but it does not provide evidence of the development of clinical competencies. A logbook of out-of-hours work for ST4–6 trainees provides evidence of opportunities to work as an autonomous clinician, of exposure to emergency work arising from Mental Health Act assessments, and of opportunities to supervise more junior medical staff and to undertake inter-agency emergency work outside usual working hours. A logbook of clinical supervision sessions, with the topics covered in weekly supervision, is kept by many trainees.
Reflective practice
The educational challenges posed by psychiatry are particularly complex (Dowrick, 2000). Psychiatrists need to make an objective assessment of the problems presented by patients not only to form a diagnosis but also to take a sensitive, empathic view in order to understand the patient's experience. This perspective has been used to develop descriptive psychopathology. It is recommended that doctors try to unravel the nature of the patient's experience 'to understand it well enough and feel it so poignantly that [they evoke] recognition from the patient' (Sims, 1988, p. 3). Treatment of patients in psychiatry is more complex than in many other medical specialties. Doctors and patients often do not agree about the need for treatment; trainees will therefore need to develop skills to facilitate decisions about treatment and to balance patients' wishes against the risks to patients and others. Decisions about treatment are often confounded by the range and limitations of treatment options in many clinical conditions in psychiatry. Opportunities to create and nurture a sense of critical enquiry and to enhance personal awareness and tolerance of uncertainty should be fundamental educational components of specialty training in psychiatry. These are not clinical skills that can be learnt easily; they may be best acquired through reflective practice.
Reflection is the careful consideration of one's clinical practice by means of systematic critical enquiry. Reflective ability is well recognised as a key component of medical professionalism, building on the original work of Schon (1983, 1987). When gathering portfolio evidence that can demonstrate that learning has taken place, reflective accounts of clinical events and experiences are particularly helpful in identifying what has been learnt, what is still to be learnt and how this new learning can be approached by the doctor. Learning that has an impact on the professional and personal growth of doctors, rather than merely focusing on knowledge acquisition, is probably best facilitated by reflective practice. Doctors are experienced written communicators, both in medical records and in correspondence to other professionals about patients; this, however, is technical and objective communication, unlike the highly subjective quality of personal reflection.
Some trainees and supervisors may find that reflection comes easily to them. Others may find it difficult to begin to contemplate the human dimension of illness; these clinicians may have the greatest need to reflect on their experiences as a doctor. All doctors can be supported to reflect in a structured way from different perspectives – their own experiences, the experiences and perceptions of patients and colleagues, and review of the evidence base for treatment – to ensure that a holistic and integrative approach to patient care is developed.
Trainees will select the events and experiences they deem appropriate as the 'raw material' for reflection to include in their learning portfolio. This should help personalise a trainee's learning. It can be powerful to reflect on events and experiences that are out of the ordinary: events that are 'surprises' for the trainee (whether because they went well or badly). Trainees should be encouraged by the educational supervisor to reflect on any critical incident, serious or adverse event, or complaint, as these are often important learning opportunities. The reflective practice section will include sensitive personal and confidential information, not just about the trainee but also about patients and colleagues. Trainees will need to be encouraged always to ensure that identifiable patient details are anonymised, and all information should be afforded a high degree of confidentiality.
Educational supervisors may need to assist the trainee in reflecting upon experiences and establishing any learning points. There are different models of reflection, including keeping a reflective diary over a period of time or using structured reflective templates; different trainees will have an individual preference for a specific model, and a rigidly prescriptive approach requiring a particular model of reflection is probably not helpful. The educational supervisor will need to work with trainees to ensure that this section of the learning portfolio is developed. There is no minimum number of reflective entries to be completed for successful progression through specialty training; however, this should not mean that reflective practice becomes another tick-box exercise. It has been suggested that a strengthened National Health Service consultant appraisal process will form an important aspect of medical revalidation and will include evidence of structured reflection on clinical practice or 'reflection on action'; it is therefore an important skill for all doctors to develop early in their professional career (Department of Health, 2008). Supervisors can develop their own reflective abilities alongside the trainees under their supervision. The end-of-year report should reflect the quality and range of reflective practice included in the learning portfolio.
Examination progress
Examination progress is an important part of the discussions between the educational supervisor and trainee in core specialty training and should be reflected in the end-of-year report. Educational supervisors should have knowledge of the Royal College of Psychiatrists' examination structure and timetable and of the resources available locally to support a
trainee struggling with aspects of the examination, and should ideally aspire to be College examiners themselves. When a trainee is persistently struggling to pass the relevant written papers or the final clinical examination (CASC) in CT3, it may fall to the educational supervisor to begin the process of exploring career options if the trainee is unable to enter ST4. Trainee examination progress, including the number of unsuccessful attempts and plans to sit each component, should be documented in the annual structured report, as should details of any mitigating circumstances and remediation offered.
Audit, research, teaching, management, psychotherapy and special interest
The annual report needs to reflect trainee progress throughout the relevant year of training in gathering appropriate evidence of engagement in experiential learning from audit, research, teaching, management, psychotherapy and special interest areas. It is not sufficient for the report to provide tick-box evidence of trainee involvement; the extent and nature of the trainee's involvement and achievements in each area need to be unambiguous and supported by evidence in the learning portfolio. Trainees may struggle with developing specific areas, for example identifying and developing an appropriate audit project. They can be supported more effectively by a supervisor who takes an overview of progress in these areas of experiential learning across the whole training year through the appraisal process than by one whose involvement lasts only for the more traditional 6-month placement, during which tracking such progress can be more challenging. Box 9.1 provides suggestions for a range of evidence that trainees could develop to demonstrate learning in the areas specified here. Special interest sessions for ST4–6 trainees should have clear learning objectives for the placement, with a plan as to how they are to be achieved and a report from the consultant supervisor summarising the trainee's progress. Many specialty schools will have developed templates specifically for this purpose.
Box 9.1 Portfolio evidence
1 Audit
• Audit proposal
• Audit report
• Evidence of presentation of results
• Evidence of completed audit cycle
• Reflective writing
2 Research
• Literature search
• Research proposal
• REC approval
• Report from research supervisor
• Publication/abstract
• Reflective writing
3 Teaching
• Evidence of participation in teaching programme
• Student evaluation/feedback results
• Teaching materials developed
• Report from teaching lead
• WPBA – Assessment of Teaching
• Evidence of developing teaching skills
• Attendance at 'Teaching the Teachers' or equivalent course
• Educational qualification
• Reflective writing
4 Psychotherapy
• WPBA – psychotherapy ACE
• Reflective writing
• Report from supervisor
• Attendance record of therapy sessions and supervision
5 Management
• Membership/terms of reference/minutes of meetings attended
• Outline Business Case
• Reflective writing
• Evidence of participation in management training and a higher qualification in management (MBA)
ACE, Assessment of Clinical Expertise; MBA, Master of Business Administration; REC, research ethics committee; WPBA, workplace-based assessment.

Trainee underperformance
The educational appraisal process provides an opportunity to identify concerns about progression and training early. Performance problems that are identified and managed early are more likely to reach a satisfactory conclusion. It is important to remember that serious underperformance is rare, but when present it causes a great deal of anxiety for the supervisors responsible for managing the difficulties that arise. In any case of underperformance, the degree of risk to patients from the individual doctor needs to be established quickly. This issue is beyond the scope of this book, but it should involve the relevant staff within the employing trust (director of medical education) and the training programme director or postgraduate deanery.
Early warning signs that will be brought to the attention of the educational supervisor include: failure to engage in the programme of WPBAs; issues raised in the mini-PAT by colleagues; concerns reported by the named clinical supervisor in the training placement, including significant or unexplained absences; difficulty accepting feedback; and other behavioural signs, including difficulty in managing clinical uncertainty, inappropriate outbursts of anger, or complaints. The management of these issues is often confounded by a lack of insight on the part of the trainee. The educational supervisor will be responsible for bringing these concerns to the attention of the trainee during an appraisal meeting. The management of the underperforming trainee is outside the scope of this chapter; specific guidance has been developed which is helpful for all supervisors and should be included in
any training for educational supervisors (National Association of Clinical Tutors, 2008). Box 9.2 lists important areas to explore with such trainees in order to begin to understand the causes of underperformance.

Box 9.2 Key areas to be explored with an underperforming trainee
• Deficient clinical knowledge or skills
• Ill health (own or family, physical or mental)
• Environmental or organisational issues in placement, directorate or trust
• Bullying
• Personality or behavioural issues

There is a requirement to keep a clear written record of the communication that has occurred between a trainee with performance problems and those responsible for the management of the trainee. If no such records have been kept, the training programme will be unable to provide evidence that the trainee has been given appropriate and timely support to address his or her difficulties. At every stage the educational process should be documented and shared with the trainee at a feedback session. The trainee's response to a report or a complaint should be recorded in writing.
In addition to the annual report, supporting documentation may need to be submitted to the panel by the educational supervisor to reflect the steps taken to support and remediate the trainee during the training year. This documentation should be as structured as possible and include:
• clarification of the areas of concern and the specific competencies that require development
• a summary of the evidence available that specific competencies have not been developed
• the degree of trainee insight into the nature and severity of the problems, and any mitigating circumstances
• modification of the individual learning plan, with clear targets identifying what the trainee needs to do or has done to remedy the problems, how this is to be achieved, and time frames with review date(s) and individual responsibilities for trainee and supervisor for each action.
The documentation should be agreed and signed by the trainee, and both the supervisor and the trainee need to retain copies. If the annual structured report to the ARCP panel is negative, the trainee must have seen the report before the panel receives it and may submit a response to the report or to any other aspect of the documentation. It may also be necessary for the training programme director to provide an additional report to the panel in such an instance.
The guidance provided in Good Medical Practice (General Medical Council, 2006) about the responsibilities of all those involved in the assessment and appraisal of others is invaluable for both clinical and educational supervisors. As ill health is an important cause of underperformance in doctors, educational supervisors should ensure that trainee portfolios have been developed to include a health declaration.
References
Department of Health (2003) Modernising Medical Careers. The Response of the Four UK Ministers to the Consultation on Unfinished Business: Proposals for Reform of the Senior House Officer Grade. Department of Health.
Department of Health (2008) Medical Revalidation – Principles and Next Steps: The Report of the Chief Medical Officer for England's Working Group. Department of Health.
Department of Health, Scottish Executive, Welsh Assembly Government, et al (2004) Modernising Medical Careers – The Next Steps: The Future Shape of Foundation, Specialist and General Practice Training Programmes. Department of Health.
Department of Health, Department of Health, Social Services and Public Safety, NHS Scotland, et al (2007) A Guide to Postgraduate Specialty Training in the UK (The Gold Guide) (1st edn). Department of Health (http://www.mmc.nhs.uk/pdf/Gold%20Guide%202007.pdf).
Dowrick, C. (2000) The educational challenge of mental health. Medical Education, 34, 545–550.
General Medical Council (2006) Good Medical Practice. GMC.
National Association of Clinical Tutors (2008) Managing Trainees in Difficulty: Practical Advice for Educational and Clinical Supervisors. NACT.
Royal College of Psychiatrists (2008) Postgraduate Training in Psychiatry: Essential Information for Trainers and Trainees (Occasional Paper OP65). Royal College of Psychiatrists.
Royal College of Psychiatrists (2010) A Competency Based Curriculum for Specialist Training in Psychiatry. Royal College of Psychiatrists (http://www.rcpsych.ac.uk/training/curriculum2010.aspx).
Schon, D. A. (1983) The Reflective Practitioner. Basic Books.
Schon, D. A. (1987) Educating the Reflective Practitioner. Jossey-Bass.
Sims, A. (1988) Symptoms in the Mind: An Introduction to Descriptive Psychopathology. Baillière Tindall.
Chapter 10
Portfolios Larissa Ryan and Clare Oakley
This chapter discusses the use of portfolio learning generally, and more specifically its applications in psychiatry. It focuses on the use of portfolios in specialist training in psychiatry, but also considers their use in revalidation. The chapter examines what portfolio learning is, and the advantages and disadvantages of this method. The models that have previously been established for a portfolio framework will be discussed and we will evaluate the e-portfolio format. Finally, we will explain the recent process of developing the Royal College of Psychiatrists' Portfolio Online platform, and look ahead to future developments in this area.
What is a portfolio?
Portfolios have been in use for many years in a wide variety of fields, but more recently they have been applied in the training of health professionals. They have become a feature of many undergraduate medical curricula and form part of postgraduate specialist training for all the major medical specialties. A portfolio has been simply defined by Mathers et al (1999) as 'a collection of evidence maintained and presented for a specific purpose'. A more detailed description is that portfolios should include 'a documentation of learning, an articulation of what has been learned and a reflective account of the events documented or personal reflection' (Snadden & Thomas, 1998a). The second definition highlights the difference between a professional portfolio, which is used only to present examples of work or to list achievements, and a learning portfolio, which includes a record of educational experience but also contains reflection on this and planning for future learning.
An important feature of learning portfolios is that they rely on, and encourage, self-directed learning, which is a key feature of adult learning. Kolb's learning cycle (Fig. 10.1) describes a process in which concrete experience leads to reflection, reflection leads to conceptualisation and learning from the experience, and this in turn leads to experimentation with the new knowledge, which then results in further experience (Kolb, 1984).
Fig. 10.1 Kolb's learning cycle: concrete experience → reflective observation → abstract conceptualisation → active experimentation → (back to) concrete experience.
This cycle is ideally suited to a portfolio format, whereby doctors can, for example, record a clinical experience, reflect on their own performance and identify their learning needs, as well as how to meet these, and suggest ways to obtain further relevant experience. From this process, portfolios allow doctors to develop individual, personalised learning plans, thus giving them some ownership of their training. For this to be successful, the portfolio needs to allow and encourage the identification of gaps in knowledge, and record failures as well as triumphs (Seed et al, 2007).
Why use a portfolio?
The increasing use of portfolio learning in medicine is driven by several factors. First, medical education has shifted from being driven by the acquisition of knowledge to considering the achievement of competence. Competence is a broad-reaching concept, defined by Epstein & Hundert (2002) as 'the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individuals and communities being served'. The Royal College of Psychiatrists' competency-based curriculum for specialist training focuses achievement in training on the attainment of specific competencies. This allows the performance of the doctor in the workplace to be a more robust measure of outcome and has led to the introduction of workplace-based assessments (WPBAs). Whereas the ability to retain knowledge for a certain period of time is well suited to assessment by traditional examination formats, the assessment of competence requires a more complex approach. This will include assessments in the workplace, demonstration of professional attitudes and the use of appropriate approaches to lifelong learning, which are better demonstrated by a portfolio of evidence than by examinations alone.
The second factor behind the growing popularity of portfolio learning in medicine is the introduction of the European Working Time Directive, which has resulted in most medical trainees attempting to complete postgraduate training in fewer hours than their predecessors. This means that training obtained within the limited time available must be carefully
focused on achieving relevant competencies and working on areas of weakness. Portfolio learning helps trainees and their supervisors to be clear about how close they are to achieving a stated competency, and to think about what they can do to achieve this. Deficiencies in the skills and knowledge base can thus be identified and targeted.
Another reason why portfolios are becoming widespread is the introduction of revalidation for all UK doctors. Department of Health reports on revalidation (Department of Health, 2007, 2008) note the importance of using a variety of sources of evidence to demonstrate good practice. The General Medical Council (GMC) guidance on revalidation (General Medical Council, 2009, 2010) states that the evidence collected will need to show attainment of standards outlined by the relevant medical Royal College. Doctors in training would be able to use the evidence of their progress through training for the purposes of revalidation. In the future, all UK doctors are likely to need to produce a portfolio of evidence demonstrating their competence in order to retain their licence to practise. Acquiring the skills needed to produce such a portfolio will therefore be of the utmost importance for all doctors, and the process of collecting this evidence could usefully be started at an early stage in a doctor's career.
As well as their use in collecting evidence and self-assessment, portfolios are also increasingly being used for purposes that would traditionally have required a curriculum vitae, such as job interviews. Most interviews for application to specialty training include a 'portfolio station' specifically for this purpose. It is possible that this practice will extend to consultant interviews in the future.
Using portfolios to demonstrate competence and progression through training
As well as being a tool to stimulate reflection and learning, portfolios have been considered a method of assessing students and doctors. Using a portfolio alongside traditional ways of assessing doctors may permit a more complete picture of their day-to-day practice. Sturmberg & Farmer (2009) argue that portfolios offer 'the broadest possible way of conveying the achievement of one's complex capabilities in descriptive ways'. They suggest that basic knowledge is the lowest level of skill to be assessed, and that other activities such as WPBA build on this knowledge and develop it. This idea is compatible with Miller's 'pyramid of competence' (1990), whereby the lowest level is a learner who 'knows', building to a learner who 'knows how', then to one who 'shows how', and finally to one who 'does'. Multiple-choice questions and other written examinations assess the lowest level of the pyramid, whereas a real-life performance assessment (such as a WPBA) is required for the highest level.
Concerns have been raised by Roberts et al (2002) and others regarding the possible problems of having a portfolio that combines the purposes of
reflective and formative learning with a summative assessment process. It is suggested that students and doctors may feel restricted in making honest reflections, which may be negative, if they know they are going to be formally assessed on the same information. This might be particularly relevant for portfolios used for revalidation, which is clearly a 'high-stakes' assessment with implications for the future practice of a doctor. However, there are solutions to this issue, including keeping some reflective notes between only the doctor and their direct supervisor.
If portfolios are to be used for assessment, particularly in revalidation, the reliability and validity of the assessment must be high and should have an evidence base to support them. Several studies have evaluated the reliability and potential problems of using portfolios in assessments. Tochel et al (2009) carried out a recent review of the effectiveness of portfolios in postgraduate assessment and education, examining 56 articles involving seven healthcare professions. Despite noting a lack of high-quality evidence and a lack of objective examination of the effectiveness of portfolios, the review concluded overall that portfolios are effective and useful for formative assessment. Findings regarding summative assessment were described as widely variable in terms of reliability, and the importance of other methods of assessment taking place alongside portfolio assessment was highlighted. This review suggested that summative assessment may be possible for more structured portfolios, for more junior trainees and students; for individuals who are more senior in their career progression, however, qualitative methods are more likely to be suitable.
Driessen et al (2007a) also reviewed the effectiveness of portfolios in assessment and reached more favourable conclusions regarding reliability of assessment than Tochel et al. From six studies on the interrater reliability of portfolios, Driessen et al found that three raters were required to achieve an acceptable level of reliability for high-stakes assessment. The most successful means of achieving high reliability was the use of small groups of specifically trained assessors, with clear guidelines for scoring. This review also considered the issue of combining formative and summative assessment, and concluded not only that this combination did not cause any problems, but that the inclusion of summative assessment was in fact important in maintaining the place of the portfolio among other competing assessment formats for students.
Portfolios are now an integral part of all psychiatry trainees' Annual Review of Competence Progression (ARCP). A review of the trainee's portfolio determines whether they have acquired the necessary competencies, as outlined in the curriculum, in order to progress to the next year of training or to achieve their Certificate of Completion of Training. The portfolio will be reviewed at several stages in the process, usually by the trainee's direct supervisor, their educational supervisor or training programme director, and ultimately by the ARCP panel. The portfolio reviews guide the completion of the supervisors' reports, which inform the summative decision of the ARCP panel. Despite the fact that there has been a focus on the completion
of workplace-based assessments as the primary task for trainees in preparation for their ARCP, the considered development of a portfolio allows a more thorough demonstration of progression than WPBA alone. A failure to encourage the pursuit of excellence is a criticism that has been levelled at WPBA (Oyebode, 2009), but a portfolio allows the doctor to provide evidence of complexity and excellence. It is therefore the portfolio as a whole, not WPBA alone, that provides a more effective measure of a doctor's expertise.
Factors that help to make a portfolio successful
Studies on portfolio use have demonstrated some factors that are important to successful implementation of a portfolio. The review by Driessen et al (2007a) highlights some of these (Box 10.1). The review found that problems arose when portfolios had a poorly defined purpose, and when learners and teachers were insufficiently informed about the portfolio. A similar review of reflective learning portfolios in undergraduate medical education (Driessen et al, 2005) found that successful portfolio use required an appropriate portfolio structure, an appropriate assessment procedure, enough new experiences and materials, and sufficient teacher capacity for coaching and assessment.
For a portfolio to succeed, all parts of the learning cycle need to be carried out. Doctors have been found to be poor at self-assessment of learning needs (Davis et al, 2006). Defining needs should lead to the formation of learning objectives that are specific, measurable, achievable, realistic and time-bound (SMART). Holloway (2000) highlights that lack of achievement of objectives is likely to arise from goals that are non-specific, re-visit old ground, are not needs-based and are unmonitored. Learners must have the resources and availability of clinical experience to be able to meet their learning needs, and they need to have the capacity to reflect on them afterwards.
Box 10.1 Important factors for portfolio success
1 Clearly communicated goals and procedures
2 Integration with curriculum and assessment
3 Flexible structure
4 Support through mentoring
5 Measures to heighten feasibility and reduce required time
Source: Driessen et al (2007a).

Potential barriers to using portfolios
Several themes are identified from studies surveying opinions on portfolios. Overall, views on the usefulness of portfolios are divided, and one study of
foundation programme doctors found a roughly 50:50 split between those who thought the portfolio was a good idea and those who did not (Hrisos et al, 2008). However, reviews of longitudinal attitudes found that positive perceptions of portfolios increased with familiarity and time spent using them (Seed et al, 2007; Davis et al, 2009).
The first obstacle to consider is that trainees may not wish to use a portfolio at all. Pearson & Heywood (2004) looked at a sample of general practice (GP) registrars and found that 35% of the group had not used their portfolio within the previous month, and 58% had not used it in a reflective way during that time. A more recent study by Seed et al (2007) looking at portfolio use among London psychiatric trainees found that only around a fifth of those surveyed used a portfolio. Half of those who did not have a portfolio said they had never heard of one, and just under half said they would only use a portfolio if it was compulsory. However, the introduction of the formalised ARCP process for all trainees, making a portfolio compulsory, has occurred since this survey was conducted.
A major concern is time, and many students and doctors perceive the portfolio method to be very time-consuming (Mathers et al, 1999; Seed et al, 2007; Hrisos et al, 2008; Davis et al, 2009; O'Brien et al, 2010). Specifically, the amount of paperwork involved in the exercise is cited as burdensome by both undergraduate and postgraduate learners (Challis et al, 1997; Mathers et al, 1999; Hrisos et al, 2008; Davis et al, 2009). Trainees do not want portfolios that are overly time-consuming, or that contain too much paperwork that needs completing. Any successful portfolio design must therefore be mindful of the many competing requirements for trainees to fulfil, and the multiple educational and clinical claims upon their time. A study showed that GP registrars' portfolio entries decreased markedly around the time of their MRCGP (Member of the Royal College of General Practitioners) examinations (Snadden & Thomas, 1998b), a priority shift that will be familiar to all training grade doctors.
Another important issue is knowledge of why and how to use a portfolio (Challis et al, 1997; Davis et al, 2009; Ross et al, 2009). In particular, written guidance, opportunities to ask questions, worked examples of how to create appropriate entries and links to further helpful resources are valued (Ross et al, 2009).
Studies have shown that another factor important to the perceived usefulness of portfolios is the involvement of a trainer, who should have both a familiarity with the format, and a degree of enthusiasm for the whole portfolio process (Snadden & Thomas, 1998b; Ryland et al, 2006; Hrisos et al, 2008; Ross et al, 2009; O'Brien et al, 2010). For this to be a productive process, the relationship the trainee has with their supervisor needs to be supportive and honest, such that negative feedback can be offered in an appropriate way (Snadden & Thomas, 1998b). Creating this relationship may be difficult at times, particularly for trainees in rotations where their supervisor frequently changes, and mentoring groups may be a valuable addition to the supervisor–trainee relationship. For foundation
doctors a particular problem was obtaining assessment and supervision from consultants, whom they perceived as being very busy and whom they felt reluctant to 'bother' (Hrisos et al, 2008). In one study, medical students commented that they would prefer a trainer or mentor to have had personal experience of using the portfolio format (Ross et al, 2009). As consultants begin to use more formalised portfolios, and specifically an e-portfolio for revalidation, this may improve the advice and support they are able to offer trainees with their portfolios.
Reflective learning was seen in a generally negative light among both medical students and doctors in training (Hrisos et al, 2008; Ross et al, 2009). Other studies found that trainees are more likely to practise reflective learning when there is a supportive trainer involved (Snadden & Thomas, 1998b; Pearson & Heywood, 2004), and trainees who do reflect on their experiences rate the usefulness of portfolios more highly (Pearson & Heywood, 2004).
Previous surveys of trainee concerns have not identified patient confidentiality as a relevant issue. However, this does need to be considered carefully, for example in the use of clinical logs or case presentations. As with any instance of patient information use, it is vital that this information is fully anonymised.
Existing portfolio models
Webb et al (2002) identify four general models of how portfolios can be used in the real-life setting. They describe the 'shopping trolley' model, where students include in their portfolios anything that has been used or produced as part of their training. This format is very flexible and allows an inclusive approach to building a portfolio; however, it lacks any direction or linking between sections, and is more difficult to assess. The 'toast rack' portfolio model is one where specific areas require specific forms or activities to be completed. The trainee must complete all these forms, which are then 'slotted in' to the portfolio. This model is criticised because it does not allow for any individuality, reflection or self-directed learning on the part of the student. The 'cake mix' portfolio is a step forward from the previous two, as it requires a 'blending' of learning objectives with evidence that these have been achieved. Last is the 'spinal column' model, where learning needs are compared to vertebrae, with activities supporting those needs linking back to individual needs. This model requires evidence to be directly linked to the identified learning need, with each learning need ('vertebra') having its own unique collection of evidence, and the sum of all the vertebrae coming together to form the skeleton framework of learning. This model has been utilised in the development of the Royal College of Psychiatrists' Portfolio Online.
In 2005, the foundation programme was introduced as the first step of UK postgraduate medicine. An important part of the programme was the use of a portfolio to record learning and progress, and hence all doctors who
have progressed through the foundation years will have some experience of using a portfolio. The foundation programme portfolio is available in paper and web-based formats, and consists of a competency-based curriculum plus WPBA tools.
General practice has a long-standing tradition of using portfolio-based learning. Its current electronic portfolio is built around the curriculum and the reflections that trainees make. All reflections are expected to be reviewed by the trainer, who can give feedback if needed. General practice registrars are expected to make around three reflections per week in their portfolio, which could relate to patients seen, educational activities attended, discussions with trainers or journal articles read. Workplace-based assessments are performed throughout training and are incorporated within the portfolio.
For many years the Royal College of Psychiatrists provided trainees with a logbook that had sheets on which to record training experiences such as audits, psychotherapy and publications. However, the use of any form of logbook or portfolio was not widespread among trainees before the introduction of the ARCP (Seed et al, 2007). More recently the College compiled a portfolio framework to assist trainees in structuring their portfolios (Oakley et al, 2008). This includes a suggested format for an individual learning plan and reflective notes, in addition to a clinical log and tables to record other training experiences.
Electronic portfolios
An electronic portfolio (e-portfolio) is similar in content to a paper one, but has the advantage of making use of information technology to increase usability and access. Usually the data making up the portfolio are stored remotely and accessed via the internet. A web-based portfolio can be accessed at any time and from any computer. The format allows hyperlinks to be made between stored data; for example, clicking on a curriculum outcome could take a user directly to the relevant evidence demonstrating the achievement of that outcome.
The use of an electronic web-based portfolio has been assessed by Driessen et al (2007b). In this study undergraduate students completed either a web-based or a paper-based portfolio, using the same format in either medium. Portfolio structure, quality of reflection and quality of evidence showed no significant difference between the two formats. Web-based portfolios had a significant positive effect on student motivation, and mentors involved with the students found them more user-friendly. The web-based portfolios took more time to complete (15.4 v. 12.2 h), and it was suggested that this may have been related to the increased motivation found among students to complete them, so that they chose to spend longer working on them.
The literature review by Tochel et al (2009) found there was good evidence that the flexibility of an electronic format was beneficial, encouraged more
time to be spent voluntarily updating the portfolio, and was better at facilitating reflection. Other reported benefits from single studies included a better overview of training and a better focus on learning objectives (Kjaer et al, 2006), and assistance in recalling previous experiences, better organisation and better presentation (Murray & Sandars, 2009). Two studies found that the e-portfolio was helpful in stimulating reflective learning (Dornan et al, 2002; Kjaer et al, 2006). In common with studies on paper-based portfolios, usage of the e-portfolios was poor throughout the year, peaking shortly before the students or trainees were due to be assessed on their portfolios (Murray & Sandars, 2009). Factors found to support usage were making completion compulsory and the provision of feedback; trainee doctors who received feedback on their e-portfolios were twice as likely to use them regularly (Murray & Sandars, 2009). Involvement of the trainer, an introduction to the format consisting of an explanation of the use and purpose of a portfolio, and a practical technical demonstration were also found to be beneficial (Kjaer et al, 2006).
There are also disadvantages to the e-portfolio format. As with paper-based portfolios, the issue of the time needed to complete them is uppermost for learners (Dornan et al, 2002; Kjaer et al, 2006). However, the study by Kjaer et al (2006), looking at general practice trainees, estimates the appropriate amount of time to spend on the e-portfolio to be only 10–15 min per day. Problems with lack of computer access and lack of IT expertise were also identified (Dornan et al, 2002). Both trainees and trainers require reasonable IT skills for the format to work. The ideal is that trainees could make frequent brief updates to their portfolios, for example logging all the patients seen in a morning's out-patient clinic. They are less likely to be able to do this if it means an extra change of location to find an available computer. However, with the increasingly widespread use of electronic patient notes systems, this should become less of an issue, as computer access will be needed in all patient areas.
There is clearly potential for an e-portfolio to use technology beyond just helping to organise and present written evidence. Visual media can be stored in a portfolio, for example a video of a presentation or teaching session. An innovative design for critical care portfolios incorporates several e-learning exercises within the portfolio itself, to address specific curriculum competencies such as interpretation of a pulmonary artery catheter waveform (Clay et al, 2007). Trainees are guided through the exercise, and prompted with specific questions to consider and research further if needed. It is noted that these exercises would be very time-consuming to set up – in this case it took 3 months of dedicated time to create a web-based lecture series. However, once the portfolio was created, it took only around 1 h per week to maintain.
There are many other potential applications of technology to expand the e-portfolio format. An introduction could be provided by an interactive online guide, and queries or requests for support could be submitted
electronically. Students could create online case presentations, which other team members could then remotely comment on. This could facilitate assessments such as case and journal club presentations for trainees in smaller hospitals, with fewer medical staff physically present. There would also be the potential for different ‘views’ of the same portfolio from different logins, e.g. the training programme director’s login might give them only a summary view, whereas the trainee’s educational supervisor would have a more detailed view. The trainee login could potentially include access to a completely private area, if this was thought to be desirable for reflective practice or other notes. Communication can be facilitated from the portfolio administrators to trainees. If a message needs to be distributed to trainees, for example reminding them about an outstanding assessment or informing them of an update in the system, this can be included as a message box when the e-portfolio is accessed by the trainee. This system is quicker than communication by post, and less likely to become lost in a long list of emails. Finally, the actual time spent carrying out activities on the e-portfolio could be logged and presented as evidence for continuing professional development for consultants and general practitioners (Dornan et al, 2002).
Development of a psychiatry e-portfolio
Many medical specialties are moving from paper-based to web-based portfolios. Currently the UK colleges of physicians, obstetrics and gynaecology, surgery, paediatrics, public health, pathology, emergency medicine and general practice all operate e-portfolios. The Royal College of Psychiatrists launched its e-portfolio in August 2010 as Portfolio Online. This has initially been aimed at trainees, but the expectation is that in the future it will be used by all members of the College in their preparation for revalidation.
The general principles underlying Portfolio Online are that it should be structured yet flexible enough to meet doctors' personalised training needs, and that it must be intuitive to use. It must also offer tangible benefits over a paper-based portfolio. The main advantage of the e-portfolio over paper is the ability to cross-reference evidence with both an individual learning plan and the curriculum. This allows a piece of evidence, for example a WPBA, to be linked to a competence in the curriculum and also to an individual learning objective for a particular placement. In this way a web of evidence can be constructed over the years of training, allowing the doctor to demonstrate clearly at the end of their training that they have achieved the competences outlined in the curriculum. There is the facility to view this evidence by placement, by year or as a whole. This reflects the 'spinal column' model discussed earlier, where educational activities supporting training link back to individual learning objectives and the curricular competencies.
The portfolio is designed around the individual learning plan, which is a set of learning objectives with a description of how particular competences are to be achieved in a placement and how these achievements will be demonstrated. This learning plan should act as the driver for the trainee's educational activity, which is why it is central to the Portfolio Online system and acts as a personalised homepage for the trainee. All evidence entered into the e-portfolio by the trainee will be linked to these learning objectives and the curriculum. The evidence falls into three broad categories: WPBAs via the existing Assessments Online system (which is subsumed into Portfolio Online), text entered into tables in the system (e.g. clinical logs, publications, audits) and scanned documents (e.g. conference certificates and presentations). The way Portfolio Online may be utilised by trainees throughout the training year is demonstrated in Box 10.2.
The primary benefit of the e-portfolio for trainers is the ability to have an overview of their trainees' progress at any time. In the future, there will be a facility on Portfolio Online for viewing a summary of an individual trainee's or a group of trainees' portfolios. This will allow a paperless ARCP, without the logistical issues of moving multiple folders of paper between different locations for review. The uniformity of portfolio layout and structure via Portfolio Online will allow more meaningful assessment of portfolios by assessors and comparison between trainees.

Box 10.2 Utilising the Royal College of Psychiatrists' Portfolio Online
• At the beginning of the training year the trainee should enter the details of their placement, including their training level
• Details of their educational supervisor should then be added
• In discussion with their supervisor, learning objectives for the placement should be drawn up and entered into the system
• These learning objectives can then be linked to competencies in the curriculum
• WPBAs and other forms of evidence are collected
• Each WPBA undertaken can be linked to a learning objective and/or the curriculum
• Other evidence can be added by creating log entries, incorporating reflective notes and/or scanning documents
• This other evidence can also be linked to learning objectives and/or the curriculum
• The trainee can then share the evidence in their portfolio with their supervisor
• There is a function to review the curriculum coverage for the trainee, which will assist in reviewing the learning objectives
• Summary reports of WPBAs undertaken can also be created
• The educational supervisor will be able to complete their annual report electronically and this will be stored in the trainee's portfolio
• The training programme director and head of school will have electronic access to their trainees' portfolios, allowing a paperless ARCP
ARCP, Annual Review of Competence Progression; WPBA, workplace-based assessment
Future directions
The training year commencing August 2010 has seen the first widespread use by trainees of the Royal College of Psychiatrists' Portfolio Online system. It is hoped that this will prove to be a useful tool for trainees to actively plan their own learning and create a body of evidence demonstrating their competence as specialists. Despite the work that has gone into its development and testing, there will inevitably be some problems with Portfolio Online. Trainees will be able to make enquiries and register difficulties with the IT support team, who already perform this role for the College's Assessments Online system. Feedback from trainees will help the system become more dynamic, continually adapting to trainees' needs, and result in a more useful tool over time.
In addition, the trainee e-portfolio is likely to be adapted and developed to meet the needs of trained psychiatrists for revalidation. Similarly to the oversight of trainees by heads of school, this will allow the responsible officers involved in revalidation to oversee their doctors' portfolios electronically and streamline the process. As with the e-portfolio for specialty training, this e-portfolio will undergo a period of piloting and user feedback if there is agreement to launch it nationally.
Conclusions
Portfolio-based learning, and e-portfolios specifically, are now well-established components of both undergraduate and postgraduate medical education in the UK. Use of portfolios relies on reflection and self-directed learning, making them particularly appropriate for adult learners. With the approach of revalidation, the concept of a collection of evidence to demonstrate a doctor's skills and achievements will become very important, and portfolios provide a format for doing this. Portfolios have a growing evidence base for use as both formative and summative assessments, but this is not yet conclusive. In general, trainees find them helpful for their own learning, although there are concerns about the time taken to create them, and the knowledgeable support of a trainer or supervisor is critical.
References
Challis, M., Mathers, N. J., Howe, A. C., et al (1997) Portfolio-based learning: continuing medical education for general practitioners – a mid-point evaluation. Medical Education, 31, 22–26.
Clay, A. S., Petrusa, E., Harker, M., et al (2007) Development of a web-based, specialty-specific portfolio. Medical Teacher, 29, 311–316.
Davis, D. A., Mazmanian, P. E., Fordis, M., et al (2006) Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. Journal of the American Medical Association, 296, 1094–1102.
Davis, M. H., Ponnamperuma, G. G. & Ker, J. S. (2009) Student perceptions of a portfolio assessment process. Medical Education, 43, 89–98.
Department of Health (2007) Trust, Assurance and Safety – The Regulation of Health Professionals in the 21st Century. TSO (The Stationery Office).
Department of Health (2008) Medical Revalidation – Principles and Next Steps. TSO (The Stationery Office).
Dornan, T., Carroll, C. & Parboosingh, J. (2002) An electronic learning portfolio for reflective continuing professional development. Medical Education, 36, 767–769.
Driessen, E. W., van Tartwijk, J., Overeem, K., et al (2005) Conditions for successful reflective use of portfolios in undergraduate medical education. Medical Education, 39, 1230–1235.
Driessen, E., van Tartwijk, J., van der Vleuten, C., et al (2007a) Portfolios in medical education: why do they meet with mixed success? A systematic review. Medical Education, 41, 1224–1233.
Driessen, E. W., Muijtjens, A. M. M., van Tartwijk, J., et al (2007b) Web- or paper-based portfolios: is there a difference? Medical Education, 41, 1067–1073.
Epstein, R. M. & Hundert, E. M. (2002) Defining and assessing professional competence. Journal of the American Medical Association, 287, 226–235.
General Medical Council (2009) Revalidation: Information for Doctors. GMC (http://www.gmc-uk.org/publications/licences_to_practise.asp).
General Medical Council (2010) Revalidation. GMC (http://www.gmc-uk.org/revalidation).
Holloway, J. (2000) CPD portfolios and personal development plans: why and how? Advances in Psychiatric Treatment, 6, 467–475.
Hrisos, S., Illing, J. C. & Burford, B. C. (2008) Portfolio learning for foundation doctors: early feedback on its use in the clinical workplace. Medical Education, 42, 214–223.
Kjaer, N. K., Maagaard, R. & Wied, S. (2006) Using an online portfolio in postgraduate training. Medical Teacher, 28, 708–712.
Kolb, D. (1984) Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall.
Mathers, N. J., Challis, M. C., Howe, A. C., et al (1999) Portfolios in continuing medical education – effective and efficient? Medical Education, 33, 521–530.
Miller, G. E. (1990) The assessment of clinical skills/competence/performance. Academic Medicine, 65, 563–567.
Murray, C. & Sandars, J. (2009) e-Learning in medical education: Guide supplement 32.2 – Practical application. Medical Teacher, 31, 364–365.
Oakley, C., Brown, N. & White, O. (2008) A Portfolio Framework for Specialty Training in Psychiatry. Royal College of Psychiatrists.
O'Brien, M., Brown, J., Ryland, I., et al (2010) Exploring the views of second-year Foundation Programme doctors and their educational supervisors during a deanery-wide pilot Foundation Programme. Postgraduate Medical Journal, 82, 813–816.
Oyebode, F. (2009) Competence or excellence? Invited commentary on … Workplace-based assessments in Wessex and Wales. Psychiatric Bulletin, 33, 478–479.
Pearson, D. J. & Heywood, P. (2004) Portfolio use in general practice vocational training: a survey of GP registrars. Medical Education, 38, 87–95.
Roberts, C., Newble, D. I. & O'Rourke, A. J. (2002) Portfolio-based assessments in medical education: are they valid and reliable for summative purposes? Medical Education, 36, 899–900.
Ross, S., Maclachlan, A. & Cleland, J. (2009) Students' attitudes towards the introduction of a Personal and Professional Development portfolio: potential barriers and facilitators. BMC Medical Education, 9, 69.
Ryland, I., Brown, J., O'Brien, M., et al (2006) The portfolio: how was it for you? Views of F2 doctors from the Mersey Deanery Foundation Pilot. Clinical Medicine, 6, 378–380.
Seed, K., Davies, L. & McIvor, R. J. (2007) Learning portfolios in psychiatric training. Psychiatric Bulletin, 31, 310–312.
Snadden, D. & Thomas, M. L. (1998a) The use of portfolio learning in medical education. Medical Teacher, 20, 192–199.
Snadden, D. & Thomas, M. L. (1998b) Portfolio learning in general practice vocational training – does it work? Medical Education, 32, 401–406.
Sturmberg, J. P. & Farmer, L. (2009) Educating capable doctors – a portfolio approach. Linking learning and assessment. Medical Teacher, 31, e85–e89 (online paper).
Tochel, C., Haig, A., Hesketh, A., et al (2009) The effectiveness of portfolios for postgraduate assessment and education: BEME Guide No 12. Medical Teacher, 31, 299–318.
Webb, C., Endacott, R., Gray, M., et al (2002) Models of portfolios. Medical Education, 36, 897–898.
Chapter 11
Annual Review of Competence Progression (ARCP) Wendy Burn
The Annual Review of Competence Progression (ARCP) was introduced in 2007 as part of the implementation of Modernising Medical Careers, a complete overhaul of the design and assessment of postgraduate medical training. It is fully described in section 7: Progressing as a Specialty Registrar in A Guide to Postgraduate Specialty Training in the UK (The Gold Guide) (Department of Health et al, 2007). Recommendations are updated on a regular basis, and each new version applies to trainees taking up placements at the start of the training year in August following its release. The ARCP replaced the older system of assessment, the Record of In-Training Assessment (RITA), which was used for higher trainees, but, more importantly, it was extended to every level of specialty training, including core training.
The ARCP provides a seamless yearly record of training achievements and will be used to support the application for a Certificate of Completion of Training (CCT), awarded at the successful completion of specialist training. The certificate also makes the trainee eligible for inclusion on the General Medical Council's (GMC's) Specialist Register.
The ARCP is an important function for the deanery schools, which were set up to deliver postgraduate medical training. They are managed by the deanery with advice from the Royal College of Psychiatrists, and have a responsibility for training, quality control and confirming that a trainee is ready to enter College examinations and to receive their CCT in due course. The ARCP provides evidence to the school of either the trainee's progression or their failure to progress, which can eventually even lead to termination of training.
Progressing as a specialty registrar
To progress as a specialty registrar a trainee must demonstrate that they are acquiring the necessary skills and competencies at the appropriate rate. The ARCP is an annual review of the evidence provided by the trainee to demonstrate that they are progressing as they should be. It also enables identification of trainees who are struggling, so that appropriate remedial measures can be put in place at an early stage.
Identifying the acquisition of competencies and developing ways of evidencing this acquisition is a challenge. Knowledge can be assessed relatively easily by tests such as the MRCPsych examination. However, postgraduate medical education is often characterised as a process of learning by experience. A qualitative study of how doctors learn (Teunissen et al, 2007) used focus groups to examine what happens when doctors learn in the workplace. Work-related activities such as history-taking are the starting point and are followed by interpretation, which is the process of reading a situation and noticing specific aspects while subconsciously overlooking others. Interpretation of work activities gives rise to personal experiences and will be influenced by the views of trainers and textbooks. The trainees then construct an understanding of their personal experiences, defining what they have learnt from the experience. This type of learning can best be evidenced by workplace-based assessments and reflective practice. An example is outlined in case study 11.1.

Case study 11.1
A trainee assesses a patient who is low in mood. The patient says: ‘I take my antidepressant in the morning, by the evening it kicks in and I am much better’. The trainee reflects on this and remembers that diurnal variation is a symptom of depression. A case-based discussion takes place and the trainer agrees that diurnal variation is sometimes misinterpreted by patients as an immediate response to medication.
Trainee preparation for ARCP
One of the key functions of the ARCP is to review the trainee's educational evidence; thus the trainee must prepare for the ARCP throughout the year. It will not be possible to put this evidence together in a few days before the ARCP. The only way to gather what is needed efficiently and successfully is to develop a portfolio. A portfolio is a collection of evidence that learning has taken place; it may be paper-based or electronic (portfolios are described in detail in Chapter 10). It is essential that the trainee knows what is expected in the portfolio. The College website gives up-to-date instructions and schools will also issue their own guidelines. The College also provides additional guidance regarding the evidence that trainees should submit at the end of each training year in support of their attaining the relevant curriculum competencies (see Appendix 2 for ARCP guidance for core psychiatric training and general adult psychiatry as an example; guidance related to other curricula can be obtained on the College website: www.rcpsych.ac.uk/training/specialtytrainingguides.aspx). The College also provides an online portfolio option, which some trainees have taken up from August 2010. The number of workplace-based assessments will be specified. These need to take place at regular intervals throughout the year and not be left until the week before the ARCP. All trainees must register with the College online recording system and reports will then be provided to the schools
for the ARCPs. It is important that programme directors are aware of what is needed and are able to guide and direct trainees. Each year the educational supervisor/s will complete a report that goes to the ARCP panel. It is the trainee's responsibility to ensure that this is done. If the report is not present when the panel meets, the trainee is issued with an outcome 5 of the review ('Incomplete evidence presented – additional training time may be required'; see the list of ARCP outcomes below) and has to provide the evidence within a given timeframe.
Educational supervisor's report
Although the educational supervisor's report has been discussed in detail elsewhere (Chapter 9), it is prudent to revisit its crucial points here in the context of the ARCP. The panel will need the educational supervisor's structured report, which is a summary of the evidence in the trainee's portfolio including results of workplace-based assessments and examinations. The report should also contain evidence of other activities such as audit, teaching, research, psychotherapy and additional activities which contribute to the training process. The College has produced a sample form available on the website (www.rcpsych.ac.uk/training/specialtytrainingguides.aspx) and also included here in Appendix 1. It suggests that the supervisor provide a rating from an array of options – 'insufficient evidence', 'needs further development', 'competent' or 'excellent' – for the following domains: providing a good standard of medical practice and care; decisions about access to care; treatment in emergencies; maintaining good medical practice; maintaining performance; teaching and training, appraising and assessing; relationships with patients; dealing with problems in professional practice; working with colleagues; maintaining probity; and ensuring that professionals' health problems do not put patients at risk. In addition, there should be a record of adverse incidents and complaints, with a statement as to whether the complaint was found to be justified or not. The report needs to demonstrate that the trainee has met the requirements of the curriculum for their year of training. Any areas for development must be highlighted. Higher trainees have a personal development day. There should be a separate short report from their supervisor for that activity. This need not be detailed but should form part of what is assessed by the panel. The Gold Guide envisages the educational supervisor as being separate from the clinical supervisor and remaining the same for the whole year. In psychiatry the educational supervisor may be the same person as the clinical supervisor and can change every 6 months in core training. Where there have been two educational supervisors during the year, both must have input into the final report but one will take overall responsibility for it. The school will advise which one this should be. The report should always be discussed and agreed with the trainee before submission.
Academic trainees
In addition to the usual ARCP form, academic trainees need to have an academic report completed by their academic supervisor. This must include details of academic placements, academic training modules and other relevant academic experience, together with an assessment of the academic competencies achieved.
Review of portfolio
At a point in the ARCP process a detailed review of the trainee's portfolio must be undertaken. This is done differently in different schools. Regardless of the point at which this review is undertaken, it is a time-consuming exercise. Little is known about the time taken to assess a portfolio, but a paper looking at medical students' portfolios found that about 60 minutes was needed to read the portfolio fully (Davis et al, 2001). The main examination of the portfolio can be done by the educational supervisor, who can use the ARCP report to provide a summary of the portfolio to the panel. Panels should then review the report to ensure that minimum criteria are met. If they are not met, or if trainers have noted problems, the portfolio should be available to the panel for further detailed examination.
ARCP panel
The ARCP panel comprises at least three people chosen by the head of school in conjunction with the deanery. These could include the postgraduate dean or their deputy, or the head of school him-/herself. Other suitable panel members are training programme directors, the regional advisor or College tutors. If a trainee is on an academic programme, there should be two academic representatives, neither of whom was involved in the trainee's programme. The panel should also have a representative from an employing authority, to ensure that the trainees they employ are robustly assessed. Each panel should have input from a lay member to ensure that the process is fair, transparent and robust. Lay members are appointed by the deanery and receive training in how to undertake this work. They should review at least a random 10% of the outcomes and supporting evidence. There should also be external input from within psychiatry but from outside the school. This is most easily achieved by exchanges between adjacent deaneries. All panel members must have undergone training in equality and diversity issues. This training should be refreshed every 3 years.
Trainees attending the panel
If a satisfactory outcome is expected the trainee does not have to attend the ARCP panel, but for any other outcome the trainee must be seen by the panel. Some schools see all the trainees, including those who are progressing
well. An invitation to attend the ARCP panel, or an adverse outcome, should not come as a surprise to the trainee. Those with problems should be identified by their educational supervisor and College tutor some months before the ARCP and should have time to prepare for the interview.
Timing of the ARCP
All deaneries hold annual reviews of competence progression towards the end of the training year, prior to the new start in August. Some will hold additional panels for trainees whose training is out of phase (e.g. due to maternity or sick leave). An ARCP does not have to cover an exact period of 12 months; the period of training reviewed can be shorter or, occasionally, longer for various reasons. The exact timing of the summer panel is difficult. If it is held too early, the trainees will not have had time to undergo the requisite assessments. If it is late, it leads to problems in adjusting placements for the next rotation, which may need to take into account trainees with special needs. Most are held in June. It is wise to set the dates well in advance and to inform trainees of these, as they may need to attend. External members also need plenty of notice. There is an additional problem in that all the ARCPs come together, thereby adding to the stress on trainees, assessors and the wider healthcare delivery system. Ways to deal with this need to be explored further, locally and nationally.
Examinations
The College examinations are very important as the external quality assurance system. By the end of the second year of training it is expected that Paper 1 should have been passed. It is still possible to achieve an outcome 1 without this, but a trainee in this position will need extra support and guidance. A successful outcome at the end of year 3 requires the trainee to have passed the Clinical Assessment of Skills and Competencies (CASC).
Outcomes
There are a number of possible outcomes from the ARCP.
Outcome 1 – Satisfactory progress
This outcome will be the one achieved by the majority of trainees. It is defined as achieving the competencies within the specialty curriculum approved by the GMC at the rate required. The College defines this rate in terms of the successful completion of workplace-based assessments and progress in the MRCPsych examination. If an outcome 1 is not achieved due to unsatisfactory or insufficient evidence, the trainee is required to meet with the panel.
Outcome 2 – Development of specific competences required, additional training time not required
The trainee's progress has been acceptable overall but there are some competencies that have not been fully achieved and need to be developed further. It is expected that the trainee will still be able to complete training in the usual time frame. The panel will identify what further development is required and produce a written remedial training plan which they can then share with the educational supervisor (Box 11.1).
Outcome 3 – Inadequate progress by the trainee, additional training time required
The panel has identified that an additional period of training is needed. The panel must make clear recommendations to the deanery about the form this training should take. This must then be negotiated with the employer, who will have to be asked to extend the contract. The extension will normally be for a maximum period of a year. In very exceptional circumstances this may be extended to 2 years with the approval of the postgraduate dean. The failing of an examination would not be considered an exceptional circumstance. The panel will identify what further development is required and produce a written remedial training plan which can then be shared by the trainee with the educational supervisor.
Outcome 4 – Released from training programme with or without specified competences
The panel may recommend release from the training programme if the trainee shows prolonged lack of progress that does not respond to remedial measures. Any competences that have been achieved must be documented. The trainee will then have to give up the National Training Number and leave the training scheme.
Box 11.1 Example of a remedial plan for a trainee who has failed the CASC examination
• Aim for a high level of clinical activity, with a good number of patient assessments
• Undertake mini-ACEs with a number of different observers, particularly local consultants who are CASC examiners
• Attend the local CASC mock exam
• Attend local teaching on communication skills
• Use reflective practice to consider performance and feedback
ACE, Assessment of Clinical Expertise; CASC, Clinical Assessment of Skills and Competencies
Outcome 5 – Incomplete evidence presented, additional training time may be required
This outcome is given when the trainee presents insufficient evidence to the panel for it to be able to decide whether or not competences have been acquired. The trainee must supply a written explanation of their failure to provide the necessary documentation within 5 working days. The panel does not have to accept this and may require the trainee to produce the documentation required within a specified time.
Outcome 6 – Gained all the required competencies, will be recommended as having completed the training programme and for award of a CCT
Once all the competencies have been achieved the panel can recommend to the College that a CCT be issued.
Outcome 7 – Fixed-term specialty training (FTSTA)
Fixed-term specialty trainees are trainees on a 1-year contract (fixed-term specialty training appointment, FTSTA). They undergo the same programme of assessments as other trainees and the outcome of their ARCP is recorded by the deanery.
Outcome 8 – Out of programme for research/approved clinical training/career break (OOPR/OOPT/OOPC)
The trainee must submit documentation on the required form stating what they are doing during their out-of-programme (OOP) time. If the trainee is out of programme on a GMC prospectively approved training placement then an OOPT document is needed together with the usual evidence of progress. If the purpose of the OOP is research, the trainee must produce a research supervisor's report along with the OOPR indicating that appropriate progress is being made. If the doctor is on a career break, a yearly OOPC request must be sent to the panel with the date that the trainee expects to return.
Outcome 9 – For doctors undertaking top-up training in a training post
Some doctors who have applied to the GMC for entry to the Specialist Register through Article 14 (of the General and Specialist Medical Practice (Education, Training and Qualifications) Order 2003) may be advised to undergo top-up training. To do this they can be appointed competitively to approved training programmes for a limited period of time where the programme can accommodate this. The doctor should submit appropriate workplace-based assessments and documentation so that the panel can decide whether the objectives set by the GMC have been met.
Extension of training time
The Gold Guide Core Training Supplement states that core trainees will be able to have additional aggregated training time of up to 6 months within the total duration of the training programme, unless, exceptionally, this is extended at the discretion of the postgraduate dean, but with an absolute maximum of 1 year's additional training time during the total duration of the core training programme (Modernising Medical Careers, 2009: p. 7). If the trainee does not comply with the planned additional training, he or she may be asked to leave the programme before the additional training has been completed. The time limit on extension does not apply to sick or maternity leave. Run-through and higher trainees may have an overall extension of training for a maximum of 1 year, unless, exceptionally, this is extended by the postgraduate dean, but with an absolute maximum of 2 years' additional training during the total duration of the training programme.
Process following the ARCP
Following the ARCP, the outcome form will be completed and signed by the chair of the panel. It is then forwarded to the trainee for signing; the trainee makes a copy for their portfolio and then returns the signed form. The form is retained by the deanery and a copy is sent to the College. Efforts are under way to make this process electronic using the College's Portfolio Online system.
Appeals of the ARCP outcome
A trainee who is dissatisfied with an ARCP outcome 3 or 4 has a right of appeal. Appeals must be made in writing to the postgraduate dean within 10 working days of being informed of the decision. Initially, there should be a discussion between the trainee, regional advisor and programme director in an attempt to come to a mutually agreed decision. If agreement cannot be reached by discussion, a formal appeal hearing will be arranged. Members of the original panel cannot take part in this. Wherever possible, the appeal should occur within 15 working days of the request for it.
Conclusions
There are advantages and disadvantages to the introduction of the ARCP process. The advantages are that trainees at all levels receive thorough and structured assessment which is recorded in a standardised way. Those in difficulty are now more likely to be picked up and given remedial training plans. In the past higher trainees received this type of input through the RITA process but many core trainees did not benefit from robust assessment and feedback. In an uncoupled specialty more problems are
likely to be found in core training and this is the right place to invest energy and resources to support doctors who are failing. The main disadvantage is the amount of time the process consumes, which is hugely increased compared with the local assessment procedures that were in place before 2007. There is the hidden expense of time lost from clinical and teaching activities and the obvious expense of lay input and travel. The system is more inflexible than it was and is perceived by some trainees as bureaucratic and unhelpful. There are also differences in how deaneries apply the guidance. In summary, the ARCP process represents an improvement in the standardisation and quality control of assessments of doctors in training, which can only be beneficial to both trainees and patients. The process is still in its infancy and will continue to develop and evolve over the next few years as expertise is gained by those who deliver it.
References
Davis, M. H., Friedman Ben-David, M., Harden, R. M., et al (2001) Portfolio assessment in medical students' final examinations. Medical Teacher, 23, 357–366.
Department of Health, Department of Health, Social Services and Public Safety, NHS Scotland, et al (2007) A Guide to Postgraduate Specialty Training in the UK (The Gold Guide) (1st edn). Department of Health (http://www.mmc.nhs.uk/pdf/Gold%20Guide%202007.pdf).
Modernising Medical Careers (2009) A Reference Guide for Postgraduate Specialty Training in the UK, 'The Gold Guide': Core Training Supplement (3rd edn). Department of Health (http://www.mmc.nhs.uk/pdf/Core%20training%20supplement%20-%202009.pdf).
Teunissen, P. W., Scheele, F., Scherpbier, A. J. J. A., et al (2007) How residents learn: qualitative evidence for the pivotal role of clinical activities. Medical Education, 41, 763–770.
Chapter 12
Examinations in the era of competency training
Anthony Bateman
Historically, training in psychiatry, in line with all other medical training, followed an apprentice model in which a trainee was attached to senior practitioners and, through observation and supervision, learnt the skills applicable to the treatment of mental illness. Each trainee was offered a range of experience in a variety of contexts and with different trainers in the hope that a diversity of skills would be developed over time to ensure safe and independent practice. The achievement of adequate knowledge and clinical skill was assessed by national examinations. It was only in national examinations that trainees were formally assessed by independent practitioners external to their training. Until then trainees could rely on the benign opinions of their educators and assessments from people with whom they had a personal relationship. It is not surprising that the variability inherent in this style of training led to questions about its reliability in producing competent doctors. Extremes of training experience were obvious, some trainees gaining excellent clinical experience and receiving consummate teaching, whereas others received limited experience and poor tuition. The lack of frequent assessment points meant that problems with progression emerged late in training, when it was too late for failing trainees to consider a different specialty. Although there was limited information about what type of alternative training would result in more competent doctors, training was reorganised by central directive, with greater emphasis on assessment of competencies in the workplace (workplace-based assessments, WPBAs) and frequent local evaluation and certification. In this brief chapter I will argue that, within the context of local assessment, national examinations are more important than ever.
The importance of examinations
Examinations are part of a wide range of methods for evaluating a trainee's ability. There are a number of reasons why centrally organised exams should continue. To start with, they are fair, reliable, informative, defensible and can easily be integrated with a local summative appraisal. They can add to
the reliability and validity of an assessment matrix which includes local assessment of competence and performance, and it is prudent to combine locally based appraisal/assessments with central examinations. Both can be used to assess doctors as they move through training. Further, the use of centrally organised assessment enables a national standard to be set which ensures uniformity of practice and consistency in levels of competence. Although most underperforming trainees who do not reach a satisfactory level of competence will be identified through workplace-based assessments early in their career, exams contribute important additional evidence using a national reference standard as a benchmark. Only the misguided trainer or hospital manager would allow someone to progress who continually fails a central examination even if they appear to be doing well in local appraisal. Finally, if best-evidence medical education is going to match best-evidence clinical practice then it is prudent to combine national examinations with local appraisal systems, simply because the effectiveness of a competency training system remains uncertain and there is scant evidence to support this approach. In contrast, there is considerable knowledge of the reliability of examinations. Despite the lack of evidence of reliability, training based on competency outcomes has flourished over the past few years (Prideaux, 2004), yet the new approach has not addressed one of the most important aspects of medical practice within its framework. Daily medical practice relies on the ability of the doctor to synthesise a range of clinical information into an integrated, meaningful clinical opinion relevant to implementing treatment. How to assess this fundamental but complex skill remains problematic. It was unaddressed to some extent in the previous system of training, but now there is a danger of too much emphasis being placed on specific competencies, or of people taking an oversimplified 'sign off as soon as you can do it and then forget about it' approach. If this happens in workplace-based assessment, the essence of good medical practice is lost. This ability to make an overall clinical assessment is far greater than any of its deconstructed parts and in the previous system it was addressed in psychiatry by a focus on the oral examination. This will be discussed in more detail later in the chapter.
Principles for examination in psychiatry
There are a number of principles that should be applied to medical examinations, some of which have been identified by the Postgraduate Medical Education and Training Board (PMETB), which had responsibility for all aspects of medical training in the UK before its merger with the General Medical Council (GMC) in April 2010. Examination boards organising nationally approved examinations should be able to demonstrate:
• a clear statement of the purpose of the exam
• how the content is determined
• how the test methods have been selected and developed
• that reliably set standards are maintained
• that selection, training and monitoring of examiners are in place
• that feedback is available to candidates
• that an appeals procedure is in place.
Purpose of assessment and examination
It is axiomatic that examinations have a purpose. The problem lies in defining, first, whether an exam is assessing what has been learnt and, second, whether it has an additional predictive aim of assessing suitability for further training. Formerly, membership examinations in psychiatry were divided into two parts: the first part assessed whether a trainee had achieved a certain level of knowledge and acquired basic clinical skills, and the second identified whether a trainee was appropriately prepared for higher-level training. Both parts of the exam had subsidiary aims. The first part was designed to assess the suitability of a trainee to continue in psychiatry and the second part to confirm that a trainee was familiar with the context of mental healthcare issues and that they were able to provide independent patient care with a reduced level of supervision in a professional, caring and knowledgeable manner. This was deemed to be necessary to benefit from higher training. There were problems. There is negligible evidence that exams themselves are able to assess suitability for a chosen career, and the first component of the old exam in fact only ensured that trainees had adequate knowledge of psychiatry and related areas and had attained skills relevant to 12–18 months of psychiatry training. Locally delivered arrangements, currently organised as formative appraisal rather than an assessment of competency, are probably a better way to assess an individual's suitability for psychiatry, even though the reliability and validity of WPBAs in assessing a doctor's suitability for psychiatry are uncertain. A doctor who is competent in a skill might perform a task well during a brief assessment even though his or her attitude to mental illness makes them unsuited to psychiatry. This will be picked up more easily in the workplace, where trainees are monitored over a period of time rather than during a brief encounter in a clinical examination. In contrast, examination is probably a better way to assess preparedness for career progression, although a doctor who performs well under certain circumstances might not be able to show competence at higher levels when more skilful interactions in more difficult contexts are required (Talbot, 2004). Nevertheless, by using an examination, progression in training becomes dependent on assessment against a nationally agreed standard and not against a local benchmark which by its very nature would be highly variable. The standard of psychiatric knowledge, skills and competencies required for higher training is set at a level of competence expected of a trainee after 30 months of training in general adult psychiatry and psychiatric subspecialties. This is judged as the minimum time for
candidates to prepare themselves adequately for progression to higher training where they will receive less supervision. It is during higher training that practitioners will become competent, substantially independent psychiatrists.
Eligibility criteria
Examination should not be decoupled from training in the workplace. Trainees have the right to expect that examinations reflect both the knowledge and competencies they use in everyday practice. Linking examinations and WPBAs implies the development of a seamless assessment and appraisal system. There are a number of ways in which to achieve this. First, accomplishments by trainees identified within a formative appraisal system can be used as eligibility criteria for entrance to an exam. Each trainee needs to show that he or she has undertaken a series of assessments over time and that these assessments demonstrate achievement of a range of skills in a variety of contexts. Second, the form of WPBAs and examinations can be similar; clinical training in the workplace becomes formative preparation for the exam. Third, some WPBAs could themselves be part of the examination and subjected to external assessment. Currently trainees undertake a series of supervised WPBAs. These are primarily designed as assessments for learning rather than assessments of skills. The focus is to identify areas of strength and weakness and to provide feedback to trainees about areas of practice that need more attention. Appraisals or assessments for learning are not configured as rigorous assessment procedures and generally take place in a benign, facilitative atmosphere. This leads to problems for examiners and for trainees. Many trainees 'pass' their WPBAs only to find that they perform badly in the examination itself, despite both being similar in format and content. Not surprisingly, this is at best puzzling and at worst demoralising. There may be a number of reasons for the discrepancy. To start with, trainees are encouraged by trainers with whom they have a personal relationship to believe that they are achieving a high standard, and yet the trainer and trainee are unaware of the national standard. Further, the system may become a 'counting' procedure – completing the exercises but failing to learn. Also, trainers may not wish to appear too critical of their trainees or to score them too low, fearful that this will damage their working relationship during the training post. Finally, the trainer may not be capable of training, assessing clinical skills and teaching. Many of these problems could be overcome by introducing aspects of external assessment into the WPBA system. For example, one appraisal per year could be undertaken under examination conditions and assessed by independent, trained examiners or at the very least by local educational supervisors trained to exam standards. For example, if WPBAs included assessment of five new out-patients in a community clinic, the first four
assessments could be configured as training appraisals and the last could be undertaken in the normal clinic but presented to an external assessor, who marks the presentation according to defined criteria. The mark awarded could be carried forward to the exam as part of the overall marks or be a component of exam entry criteria. Bringing marks into an exam ensures that it becomes less of a 'killer' or high-stakes assessment and more of a component, albeit an important one, of an integrated assessment and appraisal system.
Assessment methods
In any examination a battery of assessment methods is selected to ensure that a range of domains of theory and practice are scrutinised. Assessments must be valid, reliable and feasible, and, if at all possible, cost-effective and organised to provide feedback. In psychiatry assessment methods have included multiple-choice papers, essays, critical appraisal, and structured oral examinations. The current format and content of the MRCPsych examination is outlined in Appendix 3.
Multiple-choice paper
Multiple-choice questions (MCQs) in standard format are proven to be the most reliable instruments for assessing knowledge-related areas of clinical practice. They are an efficient assessment method and therefore cost-effective, simply because a considerable breadth of material can be tested in a relatively short time. Papers are organised using a blueprint specifying categories, content and apportionment of the paper, so that candidates are fully aware of the test specification to be followed at every exam. Each diet of the exam is organised to ensure adequate coverage and to avoid duplication and gaps. The format of the MCQs in psychiatry examinations is a mix of single-statement/best answer and extended matching questions (EMQs). The aim is to test core and emerging knowledge about psychiatry and application of that knowledge. The MCQ and EMQ items are organised into a well-balanced paper with a pass mark that is carefully standardised using one of the well-recognised methods for standard setting.

Standardisation
The standard for MCQ papers is set using a modified Angoff procedure (Zieky, 2001). A group of judges or panel estimate the performance of a notional 'average, just good enough to pass' candidate. The panel is made up of representatives who have a stake in the examination and who are either involved in working for the exam or currently engaged in teaching and training. The panel scrutinise every question of every exam and estimate the percentage of candidates likely to get any one question correct,
based on their judgement of the notional ‘just good enough’ candidate. This gives a measure of the level of difficulty of each and every exam. The pass mark is then set first by combining the level of difficulty of the exam with assumptions about candidates based on previous data and second by the use of ‘anchor questions’ which pass from one exam to another. We know that if a cohort of candidates is stable, then the pass rate should vary little between one exam and another as long as the cohort is large enough. We know that candidates who received a primary medical qualification from a UK medical school have higher pass rates than others and that women do better than men. So it is possible to estimate the effects of such fixed factors on the scores achieved in the exam. If we add to that a linear equating formula, a technique whereby the pass mark from the previous session is carried forward to the present session by an estimating equation, we can estimate the pass mark for a specific exam to ensure that the standard is equivalent to earlier exams. There is therefore no fixed number of people who can pass an exam. Someone taking the exam in the context of a highly intelligent cohort is no more disadvantaged than if he or she were taking it with a group of less able people.
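As a rough illustration of the arithmetic involved (a generic sketch of Angoff standard setting and linear equating, not the College's published specification), the Angoff cut score is essentially the judges' probability estimates averaged over items and judges, and linear equating maps a score on one form of the exam onto the scale of a reference form:

\[ c = \frac{1}{I}\sum_{i=1}^{I}\frac{1}{J}\sum_{j=1}^{J} p_{ij} \qquad\qquad x^{*} = \mu_{Y} + \frac{\sigma_{Y}}{\sigma_{X}}\,(x - \mu_{X}) \]

Here \(p_{ij}\) is judge \(j\)'s estimate of the probability that the notional 'just good enough' candidate answers item \(i\) correctly, and \(x^{*}\) is the score on the reference form \(Y\) equivalent to a score \(x\) on the new form \(X\), given the two forms' means and standard deviations. In this way a cut score set on an earlier paper can be carried forward to a new paper of slightly different difficulty.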
Writing skills
Formerly the essay paper was used to assess a candidate's ability to integrate knowledge, synthesise diverse information, develop a reasoned argument, communicate views coherently, and show knowledge of literature. It has been discontinued in favour of WPBAs. However, there is limited assessment of candidates' ability to synthesise information in the workplace. Further adjustment to the assessment procedures needs to be made. Bringing together diverse sources of information into a coherent argument that is understandable to patients, carers and other professionals is a higher-order skill and an important ability for the practice of good psychiatry. Reliable methods need to be developed to assess these higher-order skills so that the assessment is not abandoned altogether. For example, it is possible to assess ability to make a reasoned argument by scrutinising reports written for mental health tribunals or for courts, to evaluate integration of knowledge through complex extended matching items, or to review a synthesis of literature when applied to clinical practice in letters to general practitioners. These methods need further development in the MRCPsych examination and the WPBA.
Oral assessments
Historically, oral assessment has been prized by medical educators and assessors. An oral assessment can take a number of formats. All involve a face-to-face contact between the candidate and assessors, using real or simulated patients and/or clinical material. The traditional viva voce examination remains in common use; observed clinical encounters with patients are frequently part of medical examinations; discussion of
clinical material with an examiner remains widespread. In psychiatry oral assessments took the form of the individual patient assessment (IPA) and patient management problems. These were considered to be an essential part of determining whether a practitioner had mastered the skills of assessing a patient, presenting findings and discussing management and treatment of common clinical scenarios. The main advantage of the IPA was that it mimicked normal clinical practice and allowed assessment of overall clinical judgement. But the problem with this 'long case' oral assessment lies in the lack of standardisation, the reliability of the assessment process itself, its bias and its vulnerability to challenge. Norcini (2002) questioned the reliability and predictive validity of the long case and developed the mini-Clinical Evaluation Exercise (mini-CEX) as an alternative, suggesting that the results of several mini-CEXs had better reliability than a longer clinical examination lasting an hour or more (Norcini et al, 1995, 1997). When looking at the psychiatric long case, Oyebode and colleagues (2007) found a poor correlation between pairs of observers when using a 10-point scale, but good correlation over pass/fail criteria, suggesting that examiners know what constitutes satisfactory and unsatisfactory practice but have problems over fine gradation. Nevertheless, oral assessments have high face validity and studies suggest that adequate reliability can be attained if the skills being tested are carefully standardised, examiners are carefully trained, and marking schemes are straightforward with clear anchor points. Standardisation is impossible in the traditional long case oral assessments and so a different exam had to be considered. The psychiatry Objective Structured Clinical Examination (OSCE) had been shown to have acceptable validity in assessing trainees at basic level (Sauer et al, 2005) and was acceptable to psychiatry trainees as an exam format (Hodges et al, 1999). But there are limited data about OSCEs as an assessment tool of high levels of skill in complex tasks, for example assessing mental competency, forming a therapeutic alliance, or addressing transference issues. Few countries use the OSCE to assess higher-order skills. Thus, the Clinical Assessment of Skills and Competencies (CASC), based on earlier assessment experience with the OSCE, was developed to test higher-level psychiatric clinical skills. This exam is carefully standardised, has face validity, and has acceptable reliability. The CASC assesses whether trainees have developed a broad range of clinical competencies and whether they are able to perform them at an appropriate level of skill. A pass is used to indicate that trainees are capable of benefiting from the final stages of training. In the CASC, there is less emphasis on testing basic skills such as eliciting components of a mental state, for example paranoid delusions, and greater emphasis on assessing higher-level skills. Basic skills are assessed in the workplace. In order to assess higher-level competencies new stations were devised to test candidates' ability to perform tasks that had hitherto not been tested in OSCEs. Linked stations, for example assessing a specific aspect of a
patient's mental function in one station and then explaining to a relative what was happening or what treatment was needed in the next station, were created to mimic normal clinical practice. This increase in the complexity of the tasks enables an assessment of clinical decision-making to be made, and allows candidates to justify their findings and to explain a treatment plan. There is obvious face validity to this assessment process but factors other than the ability to perform the skill itself could influence the examiners' decision, for example the candidate's language and communication skills. The ability to communicate clearly in English and to understand English is necessary to practise medicine in the UK. However, candidates complain that it is not their ability to perform skills that is being tested but their ability to use language to communicate. Good communication is an essential part of practising medicine. In the context of psychiatry, ability to communicate is a significant component of almost all clinical interactions with patients, and difficulties in language and communication are likely to impair safe, joint decision-making with patients. Since communication is such a crucial component of psychiatry, assessing communication skills is a fundamental part of examination and has an impact on performance scores of all stations. It cannot be viewed as an isolated skill but as a component of eliciting psychopathology, discussing treatment, outlining management, and even dealing with complaints. If a trainee shows difficulty in language skills when engaged in a task, and that task is set in a context commonly found in clinical practice, then it is likely that their ability to undertake safely the specific interaction in the workplace will be compromised. The evidence suggests that examiners are not unduly influenced by communication skills. Lunz & Bashook (2008) found that the communication ability of candidates did not affect examiner ratings; there was no correlation between independently rated communication ability and examiner scores for decision-making skills. The CASC samples a wide range of clinical situations and contexts in order to be generalisable and reliable. Time constraints make this difficult, but not impossible. A series of stations is used based on a published template which samples different areas of psychiatry. Each CASC follows the same blueprint of content and context and this information is freely available to candidates to help with preparation. The use of an adequate number of stations assessing different skills makes it possible to reach reliabilities commensurate with a high-stakes examination. Reliability is improved by adequate testing time and can be enhanced further by increasing the number of assessors for each task, although the evidence for the latter is limited in the context of limited resources (Wass et al, 2003). A compromise has to be drawn between numbers of stations, testing time, availability of skilled examiners, and acceptability of the exam to candidates and examiners. This compromise is currently set at 16 stations assessing different skills at varied levels of complexity, each lasting 8–10 minutes and marked by a single examiner.
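The gain in reliability from adding stations can be illustrated with the Spearman–Brown prophecy formula, a standard psychometric relationship offered here purely as an illustration (the figures below are invented, not CASC data):

\[ \rho_{k} = \frac{k\,\rho_{1}}{1 + (k-1)\,\rho_{1}} \]

where \(\rho_{1}\) is the reliability of a single station and \(k\) is the number of stations. If, hypothetically, one 8–10-minute station had a reliability of 0.2, a 16-station circuit would be expected to reach roughly \(\frac{16 \times 0.2}{1 + 15 \times 0.2} = 0.8\), which is why lengthening testing time and sampling more stations does far more for reliability than refining any single encounter.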
Skilled examiners are essential to any oral examination and measures have to be taken to ensure that assessment is standardised and bias is reduced as much as possible. All examiners undertake training for the CASC and their performance in an exam is monitored. Examiners whose performance is questionable are reviewed carefully and receive further training before being allowed to examine again. Consistency of stations is tested during a rigorous piloting phase. Judgement of key performance indicators for a station is standardised at the beginning of an exam when all examiners calibrate their marking together. An exam that is too long will be too taxing for both candidates and examiners and there is evidence that anxiety plays a significant part in all examinations and in particular clinical examinations (Glass et al, 1999). There is thus an argument for structuring exams so that candidates take them every 6 months over a number of years. This would allow candidates to habituate to the anxiety and incrementally improve their preparation as they became increasingly familiar with the standard of the exam itself. Overall, CASC examination combined with assessment data from the workplace provides a fair and reliable assessment battery of clinical skills to ensure that trainees can move safely towards independent practice.
Evidence-based practice
Evidence-based medicine grew out of the need to improve the effectiveness and efficiency of medical education and offers clinicians, mindful of their limitations, a strategy for recognising and managing clinical uncertainty and information overload. It is impossible for busy clinicians to maintain good medical practice without competence in evidence-based medicine. The centrality of evidence-based practice in clinical learning makes it imperative to assess trainees' competence in its application. This is done through the Critical Review Paper (CRP). Competence to assess evidence is a skill that can only build up over time and so it is inappropriate to make a formal assessment of trainee ability in this area too early. There is consensus that evidence-based practice curricula should be based on five steps (Dawes et al, 2005):
1 translation of uncertainty to an answerable question
2 systematic retrieval of best available evidence
3 critical appraisal of evidence for validity, clinical relevance and applicability
4 application of results in practice
5 evaluation of performance.
These five steps are integral not only to clinical governance but also to the GMC's definition of good clinical care. Although not described in these exact terms, UK medical graduates are required to demonstrate that they have the necessary knowledge, skills and attitudes to practise evidence-based medicine.
To practise the five steps, doctors should be competent in a number of skills.
1 To translate their clinical uncertainty into answerable questions, doctors should be:
• able to assess patients and formulate a management plan
• aware of their own limitations and uncertainties
• motivated to seek guidance from published literature and colleagues
• able to translate these uncertainties into clinical questions.
2 To systematically retrieve the best available evidence, doctors should:
• have knowledge and understanding of the resources available
• have knowledge and understanding of how research is catalogued and of strategies for efficient retrieval
• have knowledge and understanding of the 'hierarchy of evidence' (see step 3)
• be able to effectively and efficiently access appropriate research evidence.
3 To critically appraise the evidence, doctors should:
• have knowledge and understanding of study design, and epidemiological and biostatistical principles
• be able to critically appraise primary research evidence and secondary sources, including guidelines
• be able to determine whether the appraised evidence is applicable to a particular patient.
4 To apply the results in practice, doctors should:
• be able to effectively communicate the strengths and weaknesses of the evidence respectful of the individual's circumstances and preferences so that the patient is able to make an informed decision.
5 To evaluate their own performance, doctors should:
• be committed to monitoring performance
• have knowledge and understanding of the strategies to evaluate performance, including the importance of accurate, legible records, the role of electronic databases and the principles of audit
• be able to evaluate their own performance and that of their team, and be actively engaged in developing strategies for quality improvement.
These principles need to be formalised for psychiatry (Geddes, 1996). Not all of the principles need assessment in a CRP and many aspects can be done in the workplace as long as the appraisal is performed by someone with adequate skills and knowledge. Case-based discussion with framing of clinically relevant questions can be used to assess competency in translating a clinical problem into an answerable question; training in literature searches will equip the trainee to assemble the best available evidence to answer his or her question. However, assessment of ability to appraise critically evidence relevant to clinical practice is best done in an examination set to a specific standard.
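By way of illustration only (the numbers are invented and this calculation is not drawn from the Critical Review Paper itself), the kind of biostatistical reasoning assessed under step 3 often amounts to simple arithmetic applied critically. For a trial reporting a relapse rate of 30% on placebo and 20% on an antidepressant, the absolute risk reduction and number needed to treat are:

\[ \text{ARR} = 0.30 - 0.20 = 0.10 \qquad\qquad \text{NNT} = \frac{1}{\text{ARR}} = 10 \]

so ten patients would need to be treated to prevent one relapse; the appraisal skill lies in judging whether the trial population, follow-up period and outcome definition make that figure applicable to the patient in front of the candidate.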
The future
Questions for the future relate not only to how selection and assessment of doctors and quality assurance can be improved, but also to how national examinations and trained examiners can be combined with locally based assessments. Local assessments in which trained supervisors identify those doctors to whom they can entrust complex clinical tasks (ten Cate, 2006) need to be combined with objective assessments. It is inadequate for quality assurance to allow all trainers to perform all of the appraisal and assessment procedures; they will not be competent to do so. A subgroup of locally based trainers should be extensively trained in appraisal procedures and should then train those who will perform some of the locally based assessments. But in addition, localities or areas should have trained examiners, probably from the current group of examiners, who make an independent assessment of trainee skills in carefully defined domains before they enter a national exam. This would allow a national standard to be developed and maintained as well as enabling trainees to enter the exam with their in-training assessments contributing to the final outcome.
References
Dawes, M., Summerskill, W., Glasziou, P., et al (2005) Sicily statement on evidence-based practice. BMC Medical Education, 5, 1–6.
Geddes, J. R. (1996) On the need for evidence-based psychiatry. Evidence-Based Medicine, 1, 199–200.
Glass, C. R., Arnkoff, D. B. & Wood, H. (1999) Anxiety and performance on a career-related oral examination. Journal of Counseling Psychology, 542, 47–54.
Hodges, B., Hanson, M., McNaughton, N., et al (1999) What do psychiatry residents think of an objective structured clinical examination? Academic Psychiatry, 23, 198–204.
Lunz, M. E. & Bashook, P. G. (2008) Relationship between candidate communication ability and oral certification examination scores. Medical Education, 42, 1227–1233.
Norcini, J. J. (2002) The death of the long case? BMJ, 324, 408–409.
Norcini, J. J., Blank, L. L., Arnold, G. K., et al (1995) The mini-CEX (clinical evaluation exercise): a preliminary investigation. Annals of Internal Medicine, 123, 795–799.
Norcini, J. J., Blank, L. L., Arnold, G. K., et al (1997) Examiner differences in the mini-CEX. Advances in Health Sciences Education, 2, 27–33.
Oyebode, F., George, S., Math, V., et al (2007) Inter-examiner reliability of the clinical parts of MRCPsych part II examinations. Psychiatric Bulletin, 31, 342–344.
Prideaux, D. (2004) Clarity of outcomes in medical education: do we know if it really makes a difference? Medical Education, 38, 580–581.
Sauer, J., Hodges, B., Santhouse, A., et al (2005) The OSCE has landed: one small step for British psychiatry. Academic Psychiatry, 29, 310–315.
Talbot, M. (2004) Monkey see, monkey do: a critique of the competency model in graduate medical education. Medical Education, 38, 587–592.
ten Cate, O. (2006) Trust, competence, and the supervisor's role in postgraduate training. BMJ, 333, 748–751.
Wass, V., Wakeford, R., Neighbour, R., et al (2003) Achieving acceptable reliability in oral examinations: an analysis of the Royal College of General Practitioners' Membership Examination's oral component. Medical Education, 37, 126–131.
Zieky, M. J. (2001) So much has changed: how the setting of cutscores has evolved since the 1980s. In Setting Performance Standards (ed. G. J. Cizek). Lawrence Erlbaum Associates.
Chapter 13
Piloting workplace-based assessments in psychiatry
Andrew Brittlebank, Julian Archer, Damien Longson, Amit Malik and Dinesh Bhugra
There are two main reasons for piloting the use of workplace-based assessment (WPBA) tools in psychiatry. The first relates to meeting the requirements of the competent authority for the regulation of postgraduate medical training (at the time, the Postgraduate Medical Education and Training Board (PMETB); since April 2010, the Postgraduate Board of the General Medical Council), and the second relates to winning the confidence of other stakeholders in the assessment process, particularly that of psychiatric trainers and trainees. In 2004, PMETB (Southgate & Grant, 2004) stipulated nine principles that assessment systems in postgraduate medical education should meet to gain the necessary approval from the Board. In 2008, the Board rewrote the 'principles' as 'standards' and merged the standards for assessments with those for curricula to produce 17 combined standards; they were then updated by the General Medical Council (GMC) in 2010 (GMC, 2010). Box 13.1 shows the standards that apply to assessment systems. In view of the high validity of WPBAs, the PMETB recommended that this form of assessment be included as part of 'an overarching assessment strategy' (PMETB, 2005). The introduction of new assessment systems, including WPBAs, will inevitably necessitate the development of new instruments and the adaptation of established tools to new situations. One of the requirements within PMETB's Standard 8 was that methods of assessment, including WPBAs, be chosen for use on the basis of validity, reliability, feasibility, cost-effectiveness, opportunities for feedback and impact on learning. The second purpose of piloting the new methods of assessment is to win the confidence of the main end-users of the assessment system: psychiatric trainers and trainees. The evidence from studies that have examined how innovations come to spread in human systems indicates that change does not follow apparently logical paths of dissemination and implementation, but rather the social networks, relationships and informal opportunities for information sharing of the actors within the system are vital components of the change process (Dopson et al, 2002).
Box 13.1 General Medical Council's standards for curricula and assessment systems
Standard 2: The overall purpose of the assessment system must be documented and in the public domain
Standard 4: Assessments must systematically sample the entire content, appropriate to the stage of training, with reference to the common and important clinical problems that the trainee will encounter in the workplace and to the wider base of knowledge, skills and attitudes demonstrated through behaviours that doctors require
Standard 8: The choice of assessment method(s) should be appropriate to the content and purpose of that element of the curriculum
Standard 10: Assessors/examiners will be recruited against criteria for performing the tasks they undertake
Standard 11: Assessments must provide relevant feedback to the trainees
Standard 12: The methods used to set standards for classification of trainees' performance/competence must be transparent and in the public domain
Standard 13: Documentation will record the results and consequences of assessments and the trainee's progress through the assessment system
Standard 15: Resources and infrastructure will be available to support trainee learning and assessment at all levels (national, deanery and local education provider)
Standard 16: There will be lay and patient input in the development of assessment
Source: General Medical Council (2010).
A programme for introducing new ways of assessing trainee doctors must therefore attend to the social as well as technical aspects of the proposed change. Furthermore, the change to a competency-based framework of training and assessment entails a major challenge to the established medical educational culture (Swanwick, 2005). The change is not wholly welcomed and continues to be questioned and criticised (Leung, 2002; Talbot, 2004; Fish & de Cossart, 2006; Oyebode, 2009). The main criticisms of competency-based approaches to training and assessment can be divided into five categories:
1 reduction of medical practice to a meaningless atomisation that ignores the complexity of 'real' clinical experience
2 neglect of important aspects of professionalism in favour of measuring what can easily be measured
3 focus on 'competence' at the expense of pursuing excellence
4 demotivation of trainee specialists by the necessity of gathering evidence
5 unnecessary bureaucratisation of medical training.
The first of those categories echoes a concern that participants at WPBA training events frequently make regarding what they see as the potential of new assessment methods to promote a ‘dumbing down’ of professional standards. The second category appears to be a reworking of the McNamara fallacy as described by Alimo-Metcalfe & Alban-Metcalfe (2004) in their critique of methods of assessing leadership behaviours, in which only those variables that can be easily measured are measured and those things that cannot be easily measured are declared either to be of no importance or to be non-existent. Many of these concerns, however, reflect widespread confusion about the multiple purposes of assessment and a lack of understanding of the educational opportunities that workplace-based assessment can afford (Swanwick & Chana, 2009). The process of piloting new assessment methods must, therefore, seek to address the concerns of clinicians and medical educators as well as the requirements of the GMC. Otherwise, no matter how robust their psychometric properties or how well they are embedded in curricula, they will not be used as intended and the process of WPBA will degenerate into a meaningless ‘tick-box’ exercise. Indeed, there is early evidence from the first year of the ‘live’ assessment system that there is still much work that needs to be done to win the hearts and minds of psychiatrists (Babu et al, 2009; Menon et al, 2009). There have been two main evaluation studies of WPBA systems in psychiatry in the UK. The first was a small-scale field trial supported by the Northern Deanery Postgraduate Institute for Medicine and Dentistry and was carried out in the former Newcastle, North Tyneside and Northumberland Mental Health (3Ns) Trust between February and August 2006. The second is a national study involving 16 individual sites; it was conducted between 2006 and 2007, and has been supported by the Royal College of Psychiatrists. I will start by describing the Northern Deanery pilot study and the lessons that were learnt from it, before moving on to describe the national study.
The Northern Deanery field trial
The aim of the project was to determine whether it was feasible to deliver a framework of competency-based assessments. This was introduced at the time that postgraduate medical training in the UK went through major structural changes as part of the implementation of Modernising Medical Careers (MMC) (Department of Health, 2004). As well as supporting the requirements of MMC, the project also sought to enable the trust to meet the risk management standards for clinical governance purposes. The WPBA tools that were used were adapted from those used in the portfolio for foundation training that was developed by the MMC Project Team at the Department of Health (Department of Health, 2005).
The tools that were adapted for the project were the Mini-Clinical Evaluation Exercise (mini-CEX), case-based discussion and the Direct Observation of Procedural Skills (DOPS). The last tool was especially suited to evaluating a trainee’s competency to deliver electroconvulsive therapy. Because the successful implementation of an innovation can often be facilitated by the end-users having the capacity to alter and customise the innovation to suit their purposes (Fitzgerald et al, 2002), the foundation portfolio tools had been adapted before they were used in the project. The adaptation involved a Delphic process in which the assessment tools were shown to senior clinicians within the trust, who tried them out and whose suggestions were incorporated into the versions that were introduced from February 2006. Further suggestions for the use of the tools were obtained following discussion with a focus group of trainees. The next stage of the project was to train the end-users in the WPBA tools. Although large-scale educational events can bring an innovation to the attention of a large number of people, they are known to have little impact in terms of lasting behavioural change. The effectiveness of these approaches can be increased by combining them with other techniques such as sending reminders and feedback to individuals (Grimshaw et al, 2004). The training package therefore consisted of large educational meetings of 3 hours’ duration combined with individualised email feedback to trainers and trainees. One of the educational meetings was recorded and made available as a DVD for those who did not attend. The literature on the transfer of innovation also suggests that there are important roles for ‘opinion leaders’ in the process of gaining acceptance for a new practice. The educational events were therefore supported by recognised authorities in medical education, as well as trainees who had already experienced WPBA as part of their medical training and senior psychiatrists who also had experience of WPBAs as trainers. It was probably helpful that they were completely honest that their experience of WPBA had not been problem-free; the feedback from the educational meetings was that the comments of the peer opinion leaders were the most valuable part of the meetings. In February 2006, all psychiatry senior house officers (SHOs) in the Trust were issued with a portfolio that outlined the clinical competencies that they were to be assessed against. The portfolio also contained copies of the WPBA tools that they were meant to complete. Each WPBA tool included an evaluation that was based on that used in the foundation portfolio. At each episode of assessment both parties were asked to complete an evaluation of the assessment. The evaluation recorded the time taken to complete the assessment and the assessor’s and trainee’s satisfaction with the assessment process. After 6 months trainees’ responses to the portfolio were surveyed by means of a questionnaire that was designed to gather information about the feasibility of this form of assessment and its educational impact. Respondents were encouraged to supply free-text comments about their experience of the portfolio.
Results
The first 6 months’ evaluations for the mini-CEX, case-based discussion and DOPS are shown in Table 13.1. This reflects evaluations received from a total of 76 WPBAs, which is fewer than the actual number of WPBAs conducted; it is clear that many of the evaluation forms were not completed. Satisfaction with the episode of assessment was measured using a 10-point Likert scale where 1 was ‘not satisfied’ and 10 was ‘very satisfied’. A score of 7 or more was interpreted as being satisfied. The results show a high degree of satisfaction with the assessment tools, with trainees tending to be more satisfied than assessors. The time spent on assessment was generally within the guidelines given in the foundation portfolio (mini-CEX: 15 min with 5 min for feedback; case-based discussion: 20 min with 5 min for feedback). There were some outlying values, particularly with the mini-CEX, where a number of assessments had involved observing the trainee for at least 50 min, which would clearly have constituted an entire clinical encounter rather than the ‘snapshot’ of a doctor–patient interaction suggested in the guidance for the mini-CEX. Completed portfolio evaluation questionnaires were received from 22 of the 28 SHOs to whom they had been sent (79%). Results were analysed using descriptive statistics for the quantitative data and a thematic analysis was applied to the free-text comments.
Table 13.1 Satisfaction scores and time taken to complete the first 6 months of WPBAs

                                   Mini-CEX      Case-based discussion   DOPS
Assessor satisfaction, %           75            71                      100
Trainee satisfaction, %            86            81                      85
Time to complete, min (range)      33 (10–75)    25 (7–50)               35 (13–75)

DOPS, Direct Observation of Procedural Skills; Mini-CEX, Mini-Clinical Evaluation Exercise; WPBAs, workplace-based assessments.

Experience of the portfolio
All trainees who returned evaluation questionnaires had completed at least one of the WPBA tools: 14% completed between one and three assessments, 55% completed between four and six, and 31% completed more than six. We know from the records of WPBAs held at the trust medical education centre that three trainees out of the cohort of 28 (11%) did not return any completed WPBA forms during the 6 months of the study. Most of the trainees had therefore completed WPBAs at a rate of at least one every 6 weeks. Respondents were asked to rate the ease of use of the portfolio on a scale of 1 (‘very difficult’) to 5 (‘very easy’). The mean response was above the mid-point at 3.42 (s.d. = 0.87). For the majority of trainees, therefore, the portfolio and its associated WPBA tools constituted a feasible method of being assessed.
Effect on work
We were concerned to look at how well these methods of assessment fitted in with the need to deliver clinical services. The SHOs were asked to rate the ease of fitting assessments into their work using a five-point scale from 1 (‘very difficult’) to 5 (‘very easy’). The mean response to this was below the mid-point at 2.86 (s.d. = 0.77), indicating that the SHOs had encountered problems getting the assessments done; the free-text comments in this section (see below) provide some clues as to why this was the case. The participants were asked to rate the amount of disruption caused to their work using a scale from 1 (‘significant disruption’, i.e. cancellation of other commitments) to 5 (‘none at all’). The mean response to this was above the mid-point at 3.45 (s.d. = 0.91). This would suggest that, although tricky to organise, the WPBAs did not cause major disruption to clinical work.
Impact
The impact of the portfolio was examined by asking questions related to reflection, changes to practice and effect on learning as a result of feedback from the assessments. Respondents were asked to rate the extent to which feedback had made them think about their practice on a scale from 1 (‘not at all’) to 5 (‘considerably’). The mean response was 3.76 (s.d. = 0.83). They were asked to rate the amount of change to their practice of psychiatry on a scale from 1 (‘no change’) to 5 (‘considerable change’). The mean response to this was only a little above the mid-point at 3.18 (s.d. = 1.14), with a wide degree of variation. The effect on their learning was measured by asking how much the feedback had encouraged them to learn more about particular aspects of psychiatry on a scale from 1 (‘not at all’) to 5 (‘a considerable amount’). The mean response was 3.81 (s.d. = 0.73). Taken overall, the responses to this section of the questionnaire indicate that the portfolio of assessment had, on the whole, encouraged reflective practice and learning, but had relatively less impact on actual clinical practice. This could be because the trainees were already practising at a good-enough level and the impact of WPBA was to reassure them; indeed, several respondents made just such a comment in the free-text sections of the questionnaire. Alternatively, and of greater concern, it is possible that the assessment failed to discriminate bad practice from that which is good enough.
Overall satisfaction
Respondents were asked to give their overall satisfaction with the portfolio on a five-point scale from 1 (‘very dissatisfied’) to 5 (‘very satisfied’).
The mean response to this was 3.63 (s.d. = 0.84). Taken together with the satisfaction scores for the individual WPBA tools, this suggests a great deal of satisfaction among trainees with this new form of assessment.
Qualitative element
The free-text responses offer some help in understanding respondents’ perceptions of the portfolio and elucidate the quantitative data. A thematic analysis of the free-text comments was performed. A number of themes emerged in relation to the use of the portfolio. Several respondents raised concerns regarding the time requirements of the new assessment tools, particularly in relation to gaining the attention of their supervisors: ‘It can be difficult to get supervisors to do it [perform a WPBA]. You have to be proactive and seek out opportunities when you can get the forms completed’.
There was also concern about the administrative burden that this posed for trainees: ‘It [filling in forms] does increase the burden of paperwork considerably’.
Yet many trainees commented on the ease of using the tools and of fitting them in with clinical work. They also frequently mentioned positive benefits of assessment on their confidence in their ability to perform clinical work and in clinical examinations: ‘Getting positive feedback definitely improves my confidence, because there are times when we are not sure if what we are doing is right’.
Some trainees were positive about the greater fairness of the assessment process: ‘Assessments are still as subjective as they were, but now there is a wider range of people evaluating you, which gives a more accurate indication of what is good and what needs improvement’.
A number of individual respondents made helpful suggestions for improving the tools: to develop the case-based discussion to assess the management of more complex cases, to develop ways of performing mini-CEX assessments by video-link to reduce the effect of having an assessor in the room during particularly sensitive interviews such as with children or young people, and to introduce tools to capture feedback from patients.
Discussion
The results of the field trial indicated that the application of WPBA tools as part of a competency-based approach to learning and assessment can be a feasible and useful addition to basic psychiatric training.
Despite the novelty of this approach to nearly all the trainers and all the trainees in our group, there was a remarkably high degree of engagement with the process, with almost 90% of trainees completing at least one episode of assessment and most achieving the target figure of four assessments during a 6-month placement. Trainees in particular seemed to welcome the opportunity for more frequent and targeted feedback that WPBA affords and there was some evidence from their self-report that such feedback has a desirable impact on their learning of psychiatry. There was, however, a significant number who did not engage with the process at all. Although the proportion is similar to that seen in a similar-scale study of psychiatric trainees in the USA (O’Sullivan et al, 2004), the reasons behind non-engagement are not known. Some problems with the WPBA tools were identified by the study and they have led to further developments. Although trainees were evidently quite successful in arranging assessments, at times they experienced difficulties in persuading senior colleagues to assess them. This will be a concern if it prevents trainees either achieving the required number of assessments or being assessed by a sufficiently broad sample of assessors. It was apparent that educational supervisors used the supervision hour to conduct assessments, and although this may be an appropriate use of educational supervision, it will not allow trainees to be assessed by a range of assessors. One answer may be for supervisors to exchange trainees for some episodes of assessment, but ultimately a broad group of assessors will need to be established within each clinical service and time for assessment will have to be made available. It also became apparent that the four foundation programme WPBA tools were not sufficient to evaluate the competencies that need to be developed in psychiatric training. The data from the mini-CEX evaluation indicated that there is a need in psychiatric training for an instrument that permits the assessment of an entire clinical encounter as well as one that assesses components of the encounter. The Assessment of Clinical Expertise (ACE) was developed to meet this need. Some of the specific suggestions made in feedback have been taken up, such as the development of the patient satisfaction questionnaire, and some still need further consideration, such as the suggestion to assess clinical encounters through video links. This form of assessment does present logistical problems. Such problems may have more to do with perception than reality, however, and despite concerns about ‘the burden of paperwork’, most trainees were able to complete sufficient assessments and did not bring forward evidence that assessment caused significant disruption to clinical work. These and similar concerns were addressed later by the development of a web-based assessment system. Although this field study went some way towards addressing concerns and questions about WPBA, many issues were still unanswered. We did not know how this approach to assessment would work in other training schemes of different size and composition, nor did we have the data necessary to address the PMETB’s requirement regarding the reliability and validity of the assessment methods. A much larger and longer-lasting study was needed to address these issues. In August 2006, the Royal College of Psychiatrists initiated a national pilot study to examine these questions.
The Royal College of Psychiatrists’ pilot of WPBAs
In August 2006, the Royal College of Psychiatrists began a national pilot study to evaluate the new tools that had been developed for psychiatric training as well as those that had been adapted from the foundation programme. The study also provided the opportunity to evaluate the assessment tools’ performance in assessing trainees’ progress against the College’s approved curriculum (Royal College of Psychiatrists, 2006). The tools evaluated were: the Direct Observation of Procedural Skills (DOPS), the mini-Assessed Clinical Encounter (mini-ACE) (adapted from the mini-CEX), the Assessment of Clinical Expertise (ACE), case-based discussion, multi-source feedback (MSF), the Patient Satisfaction Questionnaire (PSQ), the Clinical Presentation (CP) and the Journal Club Presentation (JCP). The tools were administered singly and in various combinations to trainee psychiatrists across 16 pilot sites taking in approximately 600 psychiatric trainees. The pilot sites encompassed a range of psychiatric training schemes that varied in size from small rotations of 10 specialty trainees to large deanery-wide schemes of up to 100 specialty trainees. The training schemes were drawn from England, Scotland and Wales and represented both urban and rural localities as well as teaching hospital and non-teaching hospital and community-based clinical services. Training in WPBA was targeted on the pilot sites, so that each site was offered a 3-hour training package that was delivered by a medical educator and a psychiatrist. The training had three main learning outcomes, so that by the end of the programme participants would be able to:
• outline the roles of the Modernising Medical Careers (MMC) and the PMETB, and the resultant changes to training
• use the Competency-Based Curriculum for Specialist Training in Psychiatry (Royal College of Psychiatrists, 2006)
• recognise and appropriately use the WPBA tools that were being piloted.
The training events also aimed to deliver the components of assessor training that have been demonstrated as necessary to bring about sustained change in assessor rating behaviours (Holmboe et al, 2004). The training session included observational skill development, training in developing performance dimensions and a group-rater calibration exercise. The feedback from workshop participants indicated that the desired outcomes were largely achieved. The assessment forms that were distributed to each pilot site were printed on multipart carbonless paper, which produced two copies of each assessment. The trainee retained the bottom copy for inclusion in their portfolio and the top copy was sent away to be read by document recognition software. The software then produced summary reports for each trainee and site that participated in the pilot and pooled data regarding the characteristics of each assessment tool.
The WPBA tools were evaluated using similar questions to those used in the evaluation of WPBA in earlier studies. Assessors and trainees were asked to rate their satisfaction with the episode of assessment and the time taken to complete the assessment. Frequencies, means and standard deviations were calculated to describe the participants, the ratings of the participants, the satisfaction of participants (for all instruments except the mini-PAT and PSQ) and, for case-based discussion, the ACE, the mini-ACE and the CP, the diagnostic category of the patient involved in the assessment. The reliability of the tools was assessed by using generalisability theory. Because of the complexity of professional performance and the variance introduced by being assessed in multiple situations and by multiple assessors, the classical approach to measuring reliability has limited application in evaluating WPBA tools (Crossley et al, 2002). Generalisability is now thought to provide a more useful measure of reliability. It gives a result in the form of the number of individual contributions that are needed to generate confidence that the assessment has captured all aspects of the candidate’s performance. Correlations between scores on each of the tools were made where possible in order to support or refute evidence for validity and to guide the further development of the assessment system. The full results of this study are described elsewhere (details available from the authors on request) and only the main findings will be presented here. All the instruments studied achieved high satisfaction ratings from assessors and trainees. The case-focused assessments (ACE, mini-ACE, DOPS, case-based discussion, CP and JCP) appeared to achieve wide curriculum coverage in that they involved assessments around a wide range of psychiatric diagnoses. The reliability study of the ACE, the mini-ACE, case-based discussion and the mini-PAT was large and able to generate robust conclusions. These instruments each achieved the level of reliability that is acceptable for a high-stakes assessment within a relatively modest number of assessments. A reliability coefficient of 0.8 was achieved after five episodes of assessment for the ACE, after eight episodes for the mini-ACE and after four episodes for case-based discussion. The mini-PAT achieved an acceptable level of reliability with six assessors. The reliability data for the DOPS, PSQ, CP and JCP were less convincing. Acceptable levels of reliability required 12 assessors for the DOPS, 15 for the PSQ, more than 15 for the CP and 19 for the JCP. These amounts of assessment are not feasible within current training arrangements, therefore these tools in their present forms, despite retaining their educational value, cannot be recommended as high-stakes assessments. Correlations between the aggregate scores and the global ratings items within the instruments gave strong support to their construct validity, whereas the high levels of approval from assessors and trainees supported their face validity.
The validity of all the instruments, except the PSQ, was further supported by the high intercorrelations between them. The PSQ did not correlate with any other instrument, raising an important concern about this instrument. The mini-ACE and ACE, and the CP and JCP, were particularly highly correlated with each other, suggesting that these four instruments could be combined into two. In the case of the CP and JCP, combining the instruments would produce an acceptable level of reliability after six episodes of assessment. The main conclusions that can be drawn from this study inform three recommendations: a minimum number of assessments of the mini-ACE, ACE, case-based discussion and mini-PAT that each trainee should have performed; combining the CP and JCP to produce a single instrument; and using scores from the DOPS and PSQ in an overarching assessment system for psychiatry trainees only with great caution, if at all.
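The reliability figures above can be illustrated with the small 'decision-study' calculation that generalisability theory uses to project how many assessments are needed to reach a given coefficient. The sketch below is a minimal illustration only: it assumes a simple one-facet design and invented trainee ('true score') and residual variance components, not the variance components estimated in the College pilot.

import math

def projected_reliability(var_trainee, var_error, n):
    """Decision-study projection: generalisability of the mean of n assessments."""
    return var_trainee / (var_trainee + var_error / n)

def assessments_needed(var_trainee, var_error, target=0.8):
    """Smallest number of assessments whose mean reaches the target coefficient."""
    return math.ceil((target / (1 - target)) * (var_error / var_trainee))

# Illustrative variance components only; not data from the College pilot.
var_trainee, var_error = 0.30, 0.45
for n in (1, 4, 8):
    print(n, round(projected_reliability(var_trainee, var_error, n), 2))
print('assessments needed for 0.8:', assessments_needed(var_trainee, var_error))

With these invented components a single assessment projects to a coefficient of only 0.40, whereas six assessments are enough to reach 0.8, which is the kind of result reported for the better-performing tools above.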
Conclusions
The move towards a competency basis for training and assessing junior psychiatrists represents a fundamental shift for both trainees and trainers. The field trial provided encouraging evidence that the process of change may be achievable and the College’s main pilot study provided further data to address the requirements of the PMETB (now GMC). Furthermore, the latter study demonstrated that many of the WPBA tools that were evaluated were capable of providing acceptably reliable assessments with some validity. It must be recognised that this is very much ‘work in progress’; new ideas and insights are being absorbed as we go along. The trainee surveys from Wessex and Wales, although conducted after only short periods of exposure to WPBAs, indicate that the required culture change in medical education that Swanwick identified has yet to happen. The qualitative data from the Northern Deanery pilot study, however, showed that with careful attention to the social as well as the technical aspects of introducing WPBAs, it might be possible to bring about the necessary culture change. More attention needs to be given therefore to training more assessors and trainees in the delivery and interpretation of WPBAs as components of overarching assessment systems. Assessors and trainees need clearer guidance on standard setting and the level of performance required. Greater prominence must be assigned to the value of giving and receiving feedback as a routine part of medical training. Further steps towards embedding WPBA systems in medical education should be developed; the introduction of the Royal College of Psychiatrists’ Portfolio Online is an essential part of this.
Finally, developing new tools for use in advanced training as well as to support revalidation in established practitioners will serve not only the technical needs for assessment systems to encompass all areas of professional practice, but will also support the necessary culture change of making assessment an essential component of a process of continuing improvement of practice at all stages of professional life.
References
Alimo-Metcalfe, B. & Alban-Metcalfe, J. (2004) The myths and morality of leadership in the NHS. Clinician in Management, 12, 49–53.
Babu, K. S., Htike, M. M. & Cleak, V. E. (2009) Workplace-based assessments in Wessex: the first six months. Psychiatric Bulletin, 33, 474–478.
Crossley, J., Davies, H., Humphries, G., et al (2002) Generalisability: a key to unlock professional assessment. Medical Education, 35, 972–978.
Department of Health (2004) Modernising Medical Careers – The Next Steps. Department of Health.
Department of Health (2005) Foundation Learning Portfolio. Department of Health.
Dopson, S., Fitzgerald, L., Ferlie, E., et al (2002) No magic targets: changing clinical practice to become more evidence based. Health Care Management Review, 37, 35–47.
Fish, D. & de Cossart, L. (2006) Thinking outside the (tick) box: rescuing professionalism and professional judgement. Medical Education, 40, 403–404.
Fitzgerald, L., Ferlie, E., Wood, M., et al (2002) Interlocking interactions, the diffusion of innovations in health care. Human Relations, 55, 1429–1449.
General Medical Council (2010) Standards for Curricula and Assessment Systems – Revised, April 2010. GMC (http://www.gmc-uk.org/Standards_for_Curricula__Assessment_Systems.pdf_31300458.pdf).
Grimshaw, J. M., Thomas, R. E., MacLennan, G., et al (2004) Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technology Assessment, 8, 1–84.
Holmboe, E. S., Hawkins, R. E. & Huot, S. J. (2004) Effects of training in direct observation of medical residents’ clinical competence: a randomized trial. Annals of Internal Medicine, 140, 874–881.
Leung, W.-C. (2002) Competency based medical training: review. BMJ, 325, 693–696.
Menon, S., Winston, M. & Sullivan, G. (2009) Workplace-based assessment: survey of psychiatric trainees in Wales. Psychiatric Bulletin, 33, 468–474.
O’Sullivan, P., Reckase, M., McClain, T., et al (2004) Demonstration of portfolios to assess competency of residents. Advances in Health Sciences Education, 9, 309–323.
Oyebode, F. (2009) Competence or excellence? Invited commentary on … Workplace-based assessments in Wessex and Wales. Psychiatric Bulletin, 33, 478–479.
Postgraduate Medical Education and Training Board (2005) Workplace Based Assessment (A Paper from the PMETB Workplace Based Assessment Subcommittee). PMETB.
Royal College of Psychiatrists (2006) A Competency-Based Curriculum for Specialist Training in Psychiatry. Royal College of Psychiatrists.
Southgate, L. & Grant, J. (2004) Principles for an Assessment System for Postgraduate Medical Training. Postgraduate Medical Education and Training Board.
Swanwick, T. (2005) Informal learning in postgraduate medical education: from cognitivism to ‘culturalism’. Medical Education, 39, 859–865.
Swanwick, T. & Chana, N. (2009) Workplace-based assessment. British Journal of Hospital Medicine, 70, 290–293.
Talbot, M. (2004) Monkey see, monkey do: a critique of the competency model in graduate medical education. Medical Education, 38, 587–592.
Chapter 14
Developing and delivering an online assessment system: Assessments Online
Simon Bettison and Amit Malik
Developing an online system to support postgraduate medical education presents its own unique challenges, given the diversity of stakeholders across geography and occupational groups. This chapter aims to set out some of these challenges by describing the development of Assessments Online, the Royal College of Psychiatrists’ online assessment system. In 2008, the Postgraduate Medical Education and Training Board (PMETB) set out revised standards for curricula and assessment systems. These standards detailed the ways in which assessment and training should take place, and how they should be monitored and evaluated. To meet these standards successfully would require the development of organised processes and systems for monitoring training and assessment at every level. The use of information systems or IT is ideally suited to this kind of scenario, as it allows information to be collected, organised and analysed in ways that are not possible using traditional paper-based methods. The introduction of a large-scale information system not only relies on successful implementation and delivery, but also on being able to engage the end-users. This would prove to be one of the biggest challenges the Royal College of Psychiatrists would face in implementing their own assessment system, as the memory of the Medical Training Application Service (MTAS) project was fresh in people’s minds. Coupled with trainees’ experiences of a rapidly changing landscape of online assessment in the foundation programme, it was not surprising that there was at the time a great deal of concern and potentially also resistance to any IT system. This chapter will discuss not only the rationale but also the process of the development of what is now regarded as a largely successful online system.
Making a case for Assessments Online
There are many benefits at various user levels to having assessments delivered online. From a trainee perspective, it facilitates an efficient way of administering the multisource feedback tool, the mini-Peer Assessment Tool (mini-PAT; Chapter 6).
It also provides them with a permanent, portable record of their assessments. With the development of the new electronic portfolio, the trainees now also have the opportunity to link assessments to the curriculum (Chapter 10) in an efficient and feasible manner. For the assessor, an electronic portal allows them to complete assessments online and obviates the need to post forms (such as the mini-PAT) to a central location for them to be collated. Additionally, assessors can keep track of the assessments they have completed on other trainees and use them at a later date for their own portfolio purposes. The educational supervisor benefits hugely from an online system as they are able to monitor their trainee’s progress electronically, especially with the development of the electronic portfolio. Moreover, the system can create appropriate reports such as the mini-PAT report and the assessment summary reports, thus saving the supervisor a great deal of manual work. This collated information can then be used to provide the trainee with effective feedback and also as evidence for the educational supervisor’s report. From a school/local education provider perspective, the assessment data for each trainee could be collated and assessed without any need to employ extra staff to facilitate this. Summary reports from the assessment data would help schools and local education providers with their quality assurance and quality control processes. Finally, the Royal College of Psychiatrists is able to collate the assessment data centrally with the aid of an electronic system. This supports the College’s role of further developing these assessment tools and fulfilling its quality assurance obligations to the national regulator for postgraduate medical education, the General Medical Council (GMC).
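To make the reporting benefit concrete, the sketch below shows how, once assessments are held as structured records, a per-trainee summary can be produced in a few lines of code. It is a minimal illustration only: the record fields, tool names and rating scale are assumptions for the example and do not describe the College system’s actual data model.

from collections import defaultdict
from statistics import mean

# Hypothetical assessment records; the live system stores far more detail.
assessments = [
    {'trainee': 'T001', 'tool': 'mini-ACE', 'global_rating': 4},
    {'trainee': 'T001', 'tool': 'case-based discussion', 'global_rating': 5},
    {'trainee': 'T001', 'tool': 'mini-ACE', 'global_rating': 3},
]

def summary_report(records):
    """Count and average the global ratings for each tool a trainee has completed."""
    by_tool = defaultdict(list)
    for record in records:
        by_tool[record['tool']].append(record['global_rating'])
    return {tool: {'n': len(scores), 'mean': round(mean(scores), 2)}
            for tool, scores in by_tool.items()}

print(summary_report(assessments))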
The Royal College of Psychiatrists’ Assessments Online – case study
The Royal College of Psychiatrists had, in 2007, initially outsourced the delivery of its online assessments to the Healthcare Assessments and Training (HcAT) service. Having experienced difficulties with the delivery of an adequate service, in 2008 the College commissioned the development of its own online assessment system.
Establishing a project team
A project team was set up to include software developers, trainees, educational supervisors and College staff in order to bridge the gap between the conceptual and design expertise of the programmers and the reality of educational activity in the postgraduate psychiatric milieu. The aim was for the trainees and supervisors to advise on the process as it happened in the real world, the software developers to translate that into an online reality and the College staff to manage the organisational aspects of the project.
Given that the team was operating under challenging timescales, it was agreed that teleconferences and electronic communication would supplement face-to-face meetings and that decision-making would need to be rapid and responsive.
Scoping phase
This is the initial stage, in which the manual process that the system is intended to replace is scoped so that, as a minimum, the new system starts from a baseline of replicating all the existing manual processes. This is crucial for the software to then be effective in remedying some of the shortfalls of the manual process. It is a common misconception that the purpose of introducing an IT system is simply to address the problems that have been identified in the business case. In fact, what typically happens is that the scoping exercise to define system requirements actually draws detailed attention to the process itself and helps clarify any ambiguities that have hitherto gone unnoticed. Humans are good at dealing with complex systems ad hoc, reacting to any given set of circumstances to decipher the optimal solution at that moment in time. Computers, however, need to know the specifics of the implementation, as they are incapable of making a decision unless they have specifically been programmed to do so for the given scenario. This means that, typically, during the requirement capture phase, broad and non-specific guidelines for any process must be examined to decide in what way they can best be made specific for a computer program so that they can be universally accepted. This usually implies looking at all the possible permutations that might occur in the real world and defining a solution that would accommodate them all.
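As a toy illustration of what 'defining a solution for every permutation' means in practice, the sketch below enumerates the combinations of training level and assessment tool and checks that each one has an explicit rule attached. The levels, tools and minimum numbers are invented for the example; they are not the curriculum's actual requirements.

from itertools import product

# Hypothetical, simplified rule set: every combination that can occur in the real
# world must map to an explicit decision before a computer can apply the guideline.
LEVELS = ('CT1', 'CT2', 'CT3')
TOOLS = ('mini-ACE', 'ACE', 'case-based discussion')
MIN_REQUIRED = {  # illustrative minimum number of each assessment per placement
    ('CT1', 'mini-ACE'): 2, ('CT1', 'ACE'): 1, ('CT1', 'case-based discussion'): 2,
    ('CT2', 'mini-ACE'): 2, ('CT2', 'ACE'): 1, ('CT2', 'case-based discussion'): 2,
    ('CT3', 'mini-ACE'): 1, ('CT3', 'ACE'): 2, ('CT3', 'case-based discussion'): 2,
}

# The scoping exercise amounts to checking that no permutation is left undefined.
undefined = [combo for combo in product(LEVELS, TOOLS) if combo not in MIN_REQUIRED]
print('combinations without an explicit rule:', undefined)  # expect an empty list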
Establishing requirements
As already explained, the first stage is to clearly define each and every manual process that is currently deemed essential to ensure that the online system in some way performs all of these functions. Without this, the system would fail to meet one of the primary objectives, which is replicating all existing processes. Rigorously defining each process introduces a number of desirable benefits:
• effects of local variation in process are mitigated
• administrative burden is eased locally and centrally
• defined process allows for clearer instruction and less ‘interpretation’ of guidelines.
Secondary to this stage is the definition of requirements that arise from identifying those areas of the manual process that can be or need to be improved, and then finding a way in which the software solution can facilitate this improvement. These secondary-stage requirements can deliver a range of additional improvements in:
• data quality
• monitoring and ability to audit
• collation and access to data
• security.
Crucially, though, the aim is to deliver these benefits without significant detriment to the existing process and, in most cases, to add value for users at different stages of the process. In addition to process requirements there are other special technical requirements.
Availability
True high availability is the requirement for guaranteed 100% uptime; this is typically the domain of international trading systems or some other situation where even seconds of downtime could potentially result in huge losses. Admittedly, the collateral damage caused by unplanned downtime in the College system is not directly comparable to that caused to a trading system, but it is still a highly undesirable scenario. As a general principle, the higher the level of uptime required, the greater the cost of the technology to support this. In the case of Assessments Online, the most cost-effective solution was to host the service on a reputable third-party data centre, using a set-up that provided at least one layer of redundancy for each potential point of failure.
Speed
Studies have shown that the responsiveness of a website can have a significant effect on a user’s overall experience. This includes not only the actual speed of the website, but also the perceived speed. A twofold approach was used to address the speed issue. First, careful consideration was given to the choice of a hosting provider to ensure that they had the network capacity to deal with the peak loads that are typical of the traffic patterns that we see for assessment systems, backed up with hardware and platform choices that provide the technology that can deal with the concurrency. In this instance, the LAMP (Linux, Apache, MySQL, PHP) stack was an appropriate choice, as this provided a very fast and robust platform, on top of which the application could be developed. Second, the developers undertook to constantly review the areas of the application that seemed to be the bottlenecks, and investigate ways in which either these could be sped up or steps could be taken to reduce the perceived sluggishness. For instance, through the use of messaging or on-screen feedback, the system can occupy the user’s attention and mask the fact that they may have to wait for something to happen. Having said that, typically page-load times are less than 1 s for the Assessments Online system (though in practice this depends on the user’s connection speed).
Security
One of the biggest concerns for users is that the data that are being entered are safe and secure in the system. From an application developer’s perspective, there is an inverse relationship between security and accessibility. Significantly, accessibility has a direct effect on the level of usability.
Therefore, as more measures are put in place to secure data, it often becomes more difficult for legitimate users to access and use that data and the overall system. Typically, security requires a layered approach, which means that one does not rely on just one form of security. Each additional layer makes it more difficult to gain unauthorised access. The first consideration is the physical security of the machines upon which the data are held. The company used by the College for this purpose is ISO 27001 certified. This standard defines the requirements for an information security management system and is designed to ensure the selection of adequate and proportionate security controls. The next consideration is that of encrypting the communications between the user’s local PC and the servers. This is achieved through the use of Secure Sockets Layer (SSL) technology, a cryptographic protocol that provides security for communications over a network. By encrypting communications we are preventing an attacker from obtaining credentials that might allow them to later gain access to the system by impersonating another user. Both of these methods are concerned with preventing direct access to the servers and the data held on them and do not present any real obstacle to users. The most difficult aspect of security is that of requiring user authentication. In Assessments Online the email and password authentication method was chosen, as this was felt to be the minimum level of acceptable authentication that would not encumber users too much. The user and password then worked in conjunction with a role-based system of permissions that ensured even authorised users were only able to access data that their particular role afforded them legitimate access to (a minimal sketch of such a role check is given at the end of this section). For instance, trainees are able to access particular parts of the system that are distinct from those supervisors can access.
Back-up
It is an absolute requirement of the system that under no circumstances should data be lost. Although it may not be possible to ultimately guarantee this, Assessments Online was developed with this target for the data contained within the system. To that end we have taken a multilayered approach to back-up. The service is split between two servers that, in day-to-day operation, share the load. They also replicate their contents between themselves. Although not strictly a back-up, this is the first stage in ensuring that no data are lost in the event of the catastrophic failure of any single server. The second layer involves the daily back-up of data to tape to protect against total and massive failure of both servers and the loss of data. Finally, after 24 hours these back-up tapes are taken securely to an off-site location to create an additional layer of back-up that would protect against the very unlikely scenario of total destruction of the data centre.
Disaster recovery
Data held within the system are of vital importance to the users, and, as discussed in the availability section, it is important that a consistent service is maintained.
A detailed assessment of the risks and impact of various disaster scenarios was undertaken, and relevant courses of action were identified in each case. This involves making use of the relevant back-ups, identifying alternative providers and setting out the roles and responsibilities of various parties.
Audit
With regard to the security and robustness of the system, an independent audit was also undertaken to confirm that adequate measures had been put in place. This gives reassurance to the College that nothing has been overlooked. Internally, systems are also developed to keep detailed records of activity; this enables performance monitoring and error checking and also provides information that can prove invaluable should any questions arise regarding the reliability and robustness of the system or process.
Timescales
For a system of this size and complexity the timescale was incredibly short. The College had previously been using a provider that was part of the National Health Service. Owing to structural reorganisation, this provider intended to withdraw their service within a matter of months. By the time the decision had been made to bring the project in house, there were approximately 3 months remaining before the start of the new training year. Ideally the new system needed to be designed, developed, tested and rolled out live within this time frame, or as close as possible to it. To support this strategy the technology used to manage development and release was chosen to allow any new functionality to be introduced seamlessly and to minimise risk when doing so by ensuring it was easy to roll back in case of unforeseen difficulties.
Migration
All of the existing assessment data that had been gathered by the old provider were to be migrated to the new system. As part of this exercise, it was also intended to migrate the user authentication credentials to try to minimise the disruption caused by migrating systems. This would mean that existing users would not need to change their usernames and passwords.
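The role-based permission layer mentioned under 'Security' can be pictured with a very small sketch. The roles and permission names below are invented for the example; the live system's roles and checks are more granular and are enforced on the server for every request.

# Hypothetical role-to-permission mapping; illustrative names only.
ROLE_PERMISSIONS = {
    'trainee': {'view_own_assessments', 'request_assessment'},
    'assessor': {'complete_assessment'},
    'supervisor': {'view_trainee_assessments', 'create_summary_report'},
}

def is_permitted(role, action):
    """Authorisation check applied after the user has already authenticated."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_permitted('supervisor', 'view_trainee_assessments'))  # True
print(is_permitted('trainee', 'view_trainee_assessments'))     # False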
Design phase
Addressing the specifics of delivering all those requirements is a complex task. Even small-scale software development requires technical skills from a number of different subspecialties of computer programming. These smaller components must then all work together to realise the desired functionalities. The structures and software components of the system are known as its architecture. In its simplest sense, a system can be thought of as a series of inputs, processes and outputs.
This is a useful approach to take when considering the specifics of the implementation during the design process. An exhaustive list of inputs, processes and outputs can prove invaluable to the software developer as this will enable them to make sure that the model they are building is able to incorporate each of the items. It is also a good way of generating discussion, particularly about processes, to make sure that the development team’s understanding matches closely the expectations of the users. One final factor to consider in system design is how external factors might support the smooth running of the system as a whole. The challenges that are created by use of technology are discussed later, but the strategy by which we will support end-users should be considered right from the start. Therefore, part of the overall design process is to define the scope of the end-user support that will be offered. This can take the form of education or assistance, both of which can be direct or passive. The production of supporting literature is a passive method of educating users about the system and the process, or of helping a user self-diagnose and resolve problems they may be having. This kind of support requires little resource other than the upfront costs of generating literature. However, in many cases it does not suit the end-user, as inexperienced computer users are typically unfamiliar with the idea that websites often provide self-help services. Direct support requires that a channel be made available to end-users, by which they can contact those responsible for support and seek resolution of specific issues. This is particularly suited to those users who, often by their own admission, find it difficult to use a computer in general. By utilising email or telephone it is possible for the troubled user to obtain assistance without having to further engage with the computer. It was decided at an early stage that a combination of both direct and passive support would be used to complement Assessments Online.
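One simple way to keep such a list of inputs, processes and outputs explicit during design is to hold it as structured data that the whole team can review and extend. The entries below are a toy inventory for a single process (a mini-PAT round) with assumed item names; a real inventory would be far longer and would cover every process in scope.

# Toy inputs/processes/outputs inventory for one process; item names are assumed.
mini_pat_round = {
    'inputs': ['trainee details', 'nominated assessor email addresses', 'completed rating forms'],
    'processes': ['invite assessors', 'chase non-responders', 'collate ratings', 'generate report'],
    'outputs': ['mini-PAT summary report', 'entry in the trainee portfolio', 'audit trail'],
}

for stage, items in mini_pat_round.items():
    print(stage + ':')
    for item in items:
        print('  - ' + item)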
Is the online assessment system user-friendly?
One of the biggest challenges that any software solution faces is the implementation of a user interface. The user interface is the means by which end-users can interact with the system. User interaction can broadly be divided into two categories:
• a user wants to find out information
• a user wants to do something.
It follows then that the user interface should provide a means for achieving both of these goals. The key difference between these two categories is that although both require information retrieval, only one of these tasks requires the system to update itself. In either case, we must also ensure the user can find the relevant part of the system.
This not only requires a user-friendly navigational aspect, i.e. moving from screen to screen, possibly through hyperlinks, but also necessitates that a person recognise the correct screen when they have arrived at it and then be able to locate the relevant area or item on that screen. Successfully locating information is heavily influenced by the visual appearance of the various system elements. What constitutes the most pragmatic solution in terms of visual appearance can be incredibly visceral to individuals in many different ways. The task of separating visual preference from what constitutes best practice in the field of usability is incredibly difficult. Studies have indicated that there is a high correlation between users’ perceptions of interface aesthetics and usability. This can be beneficial in maximising the perceived usability of the system, yet aesthetics must not be sought at the expense of actual usability. A leading authority on usability, Jakob Nielsen, has assembled a list of ten usability heuristics (Box 14.1), which are an excellent summary of some of the most important considerations. Unfortunately, despite the developers’ best efforts, the system’s success depends on the whims and opinions of individual users, and no matter how well you may have followed guidelines or best practice, the complexity of the human condition is such that it is almost impossible to create something that is user-friendly to all people in all cases. We accept that this is the de facto standard and work from a position of deciding what can be considered a satisfactory level of user-friendliness. A good measure is to use information regarding the number of (known) issues that relate to usability v. the number of users who remain silent. Of course, there are many external factors unrelated to the online system itself that might have an impact on its usability and acceptability. This means that absolute figures, such as the number of support calls received after a change in the system such as the launch of a new feature, can only give a very rough indication; however, the relative growth or decline in this figure can be just as telling.
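Tracking that relative trend is straightforward once enquiries are logged alongside the number of active users. The figures below are entirely invented and simply show the kind of normalised measure that makes a growth or decline visible across releases.

# Invented monthly figures: usability-related enquiries and active users.
months = ['Aug', 'Sep', 'Oct', 'Nov']
usability_enquiries = [120, 95, 60, 55]
active_users = [3000, 4200, 4500, 4700]

rates = [e / u for e, u in zip(usability_enquiries, active_users)]
for month, rate in zip(months, rates):
    print('{}: {:.1f} enquiries per 1000 active users'.format(month, rate * 1000))

# The trend matters more than any single absolute figure.
change = (rates[-1] - rates[0]) / rates[0]
print('change over the period: {:+.0%}'.format(change))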
Implementation phase
During the initial stages of development, it can be difficult to see any concrete progress. This is because there are significant portions of the system that must be developed that do not have any visibility, but without which the dependent – and highly visible – components cannot function or even exist. This is the point at which it is crucial that the planning and requirements were as accurate as possible. With regard to development and implementation, the College favoured a staged approach, prioritising all of the desired functionality and identifying the parts that were critical for going live. This functionality was then specified and implemented so that it would be possible to launch a service within the timescales provided. Other features that were critical to the system but not critical at launch were scheduled for development following launch and with target dates.
Box 14.1 Nielsen’s top ten usability heuristics
1 Visibility of system status. The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
2 Match between system and the real world. The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
3 User control and freedom. Users often choose system functions by mistake and will need a clearly marked ‘emergency exit’ to leave the unwanted state without having to go through an extended dialogue. Support ‘undo’ and ‘redo’.
4 Consistency and standards. Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
5 Error prevention. Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
6 Recognition rather than recall. Minimise the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
7 Flexibility and efficiency of use. Accelerators – unseen by the novice user – may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
8 Aesthetic and minimalist design. Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
9 Help users recognise, diagnose, and recover from errors. Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
10 Help and documentation. Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
Source: Jakob Nielsen, useit.com (www.useit.com/papers/heuristic/heuristic_list.html).
This strategy entailed a greater amount of risk, but it enabled us to minimise the perception of disruption, and, by implementing functionality selectively, gave users the functionality they needed in a timely fashion. In each case these strategies were chosen primarily with the need to minimise disruption to the end-user in mind. This had been identified as a priority by the College, as it was important to try to win over users from the outset by presenting a competent and well-thought-out solution. A second project was set up to handle the migration of data.
When working on the data model, a series of scripts were developed that would convert the raw data that had been made available by the previous system provider into a format that would fit into the new model. Throughout development of the new model these scripts were repeatedly tested and kept up to date. This meant that once the new system had been fully tested it could be emptied and the migration of the data from the previous provider could take place automatically, minimising the disruption to the service for the end-users as they would not lose any of their old assessment data previously available on the HcAT system. Additionally, the development team was kept to a minimum to allow for rapid prototyping and development, and, where possible, existing code libraries were used to reduce development times. The communication among the project team was fluid and rapid to enable quick decision-making instead of relying on concrete details being present in the pre-defined specification that had to be modified by a committee at each stage.
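A migration script of the kind described typically maps each legacy column onto the new model and normalises formats along the way. The sketch below is illustrative only: the legacy column names, the new field names and the date format are assumptions, not the actual HcAT export.

import csv
from datetime import datetime

# Assumed mapping from legacy export columns to the new model's field names.
FIELD_MAP = {'TraineeGMC': 'trainee_gmc', 'ToolName': 'tool', 'DateDone': 'completed_on'}

def migrate_row(old_row):
    """Convert one legacy record into the new model, normalising the date format."""
    new_row = {new: old_row[old] for old, new in FIELD_MAP.items()}
    new_row['completed_on'] = datetime.strptime(
        new_row['completed_on'], '%d/%m/%Y').date().isoformat()
    return new_row

def migrate_file(path):
    """Convert every row of a legacy CSV export; re-run against each fresh export."""
    with open(path, newline='') as source:
        return [migrate_row(row) for row in csv.DictReader(source)]

print(migrate_row({'TraineeGMC': '1234567', 'ToolName': 'mini-ACE', 'DateDone': '03/09/2010'}))

Re-running the same script against each fresh export is what allows the conversion to stay 'repeatedly tested and kept up to date' as the new data model evolves.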
Live phase
In spite of the circumstances, the system was operational and fully live by the second week of August 2010; this was within days of the new training year starting. Within hours, new assessments were being recorded by trainees on Assessments Online. As mentioned earlier, to assist users, support was made available by phone and email. A strict one-business-day response-time target was set. This helped to create a positive overall experience of the service. By monitoring the types of enquiries that were received it was possible to identify those areas that could benefit from improvement (the most frequent enquiries are provided in Box 14.2). In addition to these typical support enquiries, over the course of the first year we identified several very specific issues that could be improved by introducing new functionalities.
1 Multiple email addresses. In many cases users were registered with an email address other than the one known to their colleagues. This meant that an additional account may have been created using this alternative email address. For example, an assessor may be registered with a personal email address, and may then subsequently be nominated by a trainee using their work email address. This would result in creating two accounts for this particular assessor. A facility was created that allowed a user to add alternative email addresses to their account, which solved the problem of creating additional accounts (a sketch of how several addresses can resolve to a single account is given after this list).
2 Mini-PAT nominations. When nominating assessors users would search by name. This led to situations where it was sometimes unclear whether an assessor already had an account. Searching by email had been ruled out as this would escalate the issues in creating duplicate accounts for people. Once the multiple email address feature had been implemented, this restriction no longer applied, and the nomination process was changed to use email address as the means for nomination. This resulted in a much simpler process and a corresponding reduction in the number of enquiries.
Box 14.2 Typical support enquiries

Completing mini-PAT forms
In volume terms, this is the most common activity; in a high percentage of cases the assessor will be new to the system, and in many cases this may be the only interaction they ever have with Assessments Online. The process of completing a mini-PAT must therefore be made as straightforward as possible. This runs counter to the requirement that the process must also be secure. To ensure that the form is being completed by the nominated assessor, one might typically require a user to create an account that can then be tied to an email address. Studies have shown that upfront registration is bad for usability. By re-engineering the process so that new users are taken to the form as quickly as possible, we improve the user's experience.

General login issues
Usability studies have shown that users do not remember usernames and/or passwords. The system was designed to replace arbitrary usernames with the user's email address, and an automated password reset service gives users the ability to quickly and easily reset their own password. In spite of this, a significant number of users still prefer to contact us for help when they are unable to log in because of a password-related issue.

Nomination enquiries
Closely linked to the mini-PAT issues, and likely due to the high percentage of first-time users, enquiries about nominations are also frequent. These range from trainees nominating incorrect assessors, through to assessors who had not been approached before being nominated and who seek information about the process. A common problem we have encountered is assessors who have already completed one mini-PAT form but did not complete the post-form registration process, and do not realise that they already have an account with us.

Stale data
A number of other enquiries revolve around trainee data becoming outdated, for example because of a change in supervisor or training level. The system has been improved to give users greater freedom to update these details.
3 Navigation. Owing to the time constraints imposed at the start of the project, work on the navigation and usability of the system had suffered, and a number of areas for improvement were identified from the feedback. The navigation and layout were overhauled to provide a more consistent experience across the system. The most significant change was to remove all unnecessary information from the user's homepage and instead create a blank area that displays only the task(s) most immediately relevant to the logged-in user. A large number of one-time or occasional users benefited from this directly
as it removed the need for them to engage in the navigation process entirely – whatever they needed to access would always be presented to them on their home page immediately upon logging in.
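As an illustration of the multiple email address facility described in point 1, the following is a minimal sketch in Python of how an account might be resolved from any of its registered addresses; the class and field names are invented for the example and are not taken from the actual system.

class UserAccount:
    # A user with one primary email address and any number of alternatives.
    def __init__(self, name, primary_email):
        self.name = name
        self.emails = {primary_email.lower()}

class AccountDirectory:
    # Looks up accounts by any known email address, so that a nomination sent
    # to an alternative address reaches the existing account rather than
    # creating a duplicate.
    def __init__(self):
        self._by_email = {}

    def find_or_create(self, name, email):
        account = self._by_email.get(email.lower())
        if account is None:
            account = UserAccount(name, email)
            self._by_email[email.lower()] = account
        return account

    def add_alternative_email(self, account, email):
        # Registering an alternative address points it at the same account.
        account.emails.add(email.lower())
        self._by_email[email.lower()] = account

Under this arrangement, a nomination made with either a work or a personal address resolves to the same account.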
Reflection

In the 20 months following the launch of Assessments Online, approximately 8200 emails were received from approximately 45 000 users (trainees, assessors, supervisors, etc.). Of these, only 78 were complaints, whereas 827 were sent to thank the support team for assistance or the development team for improvements to the system. Although the complaints were few, it is these comments that often give the most insight into which areas could be improved still further. Often complaints were due to a lack of clarity regarding either the assessment process or the workings of Assessments Online. These insights helped us to identify the need for better guidance and encouraged us to investigate the methods used to present that guidance to the user. Instead of having all the guidance available as a library of information that users must search through, the system was changed to identify and present more prominently those items most relevant to a person, based on their current status and position within the system. Some of the complaints were levelled at the workplace-based assessment process itself, and a very small minority were quite clearly just the result of irate users venting at a faceless entity. This too proved insightful in reminding us how varied the user base is and of the extremes of ability and temperament that must be considered by the support system.
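By way of illustration, a minimal sketch of this kind of context-sensitive guidance selection is given below; the roles, statuses and guidance topics are invented for the example and do not reflect the actual rules used by the system.

# Hypothetical guidance items, each tagged with the roles and situations
# in which they are most relevant.
GUIDANCE = [
    {'title': 'How to complete a mini-PAT form',
     'roles': {'assessor'}, 'statuses': {'form_pending'}},
    {'title': 'Nominating your mini-PAT assessors',
     'roles': {'trainee'}, 'statuses': {'nomination_open'}},
    {'title': 'Reviewing multisource feedback with your trainee',
     'roles': {'supervisor'}, 'statuses': {'feedback_ready'}},
]

def relevant_guidance(role, status):
    # Return only the guidance items matching the user's role and current status.
    return [item['title'] for item in GUIDANCE
            if role in item['roles'] and status in item['statuses']]

# A first-time assessor with a form waiting would see only the first item.
print(relevant_guidance('assessor', 'form_pending'))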
Conclusions

Our experiences with Assessments Online to date have helped us to validate some of the conceptual frameworks, presented in this chapter, that formed the basis for developing the system. The positive feedback that we have received suggests that, notwithstanding the apprehension felt among users before its launch, the Assessments Online system was able to capture the hearts and minds of its users and assuage many of the fears that it would be another high-profile failure of IT to deliver on its promises. By continuing to innovate in line with users' needs and expectations it should be possible to build on the level of engagement that has already been developed. The success of the system as a mechanism for higher-level management of the process relies directly on the continued participation of users, and this can only be achieved by remaining focused on maintaining user engagement. The road ahead will prove interesting: the expansion of Assessments Online into a full web-based portfolio system has already taken place (Portfolio Online is presented in Chapter 10). Its role in the annual review of competence progression is likely to grow as people start to move away from paper altogether. These developments will create new challenges – perhaps the need to access online data in an offline setting, or the increase
in storage requirements and associated back-up measures. As the system grows, there is also potential for it to become a high-profile target for infiltration by unauthorised users. Only time will tell how these factors will shape future development.
Further reading

Brooks Jr., F. P. (1995) The Mythical Man-Month: Essays on Software Engineering (20th Anniversary Edition). Addison-Wesley.
Chen, J. (2009) The impact of aesthetics on attitudes towards websites. Usability.gov (http://www.usability.gov/articles/062009news.html).
Connolly, T. & Begg, C. (2004) Database Systems: A Practical Approach to Design, Implementation and Management (4th edn). Addison-Wesley.
Crescimanno, B. (2005) Sensible forms: a form usability checklist. A List Apart (http://www.alistapart.com/articles/sensibleforms/).
Forrest, B. (2009) Bing and Google agree: slow pages lose users. O'Reilly Radar (http://radar.oreilly.com/2009/06/bing-and-google-agree-slow-pag.html).
Gamma, E., Helm, R. & Johnson, R. (1994) Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
Green, D. & DiCaterino, A. (1998) A Survey of System Development Process Models (CTG.MFA-003). Models for Action Project: Developing Practical Approaches to Electronic Records Management and Preservation. Center for Technology in Government at the University at Albany/SUNY (http://www.ctg.albany.edu/publications/reports/survey_of_sysdev/survey_of_sysdev.pdf).
Hunt, A. & Thomas, D. (1999) The Pragmatic Programmer. Addison-Wesley.
Kurosu, M. & Kashimura, K. (1995) Apparent Usability vs. Inherent Usability: Experimental Analysis on the Determinants of the Apparent Usability. Design Center, Hitachi (http://www.sigchi.org/chi95/proceedings/shortppr/mk_bdy.htm).
Lynch, P. (2009) Visual decision making. A List Apart (http://www.alistapart.com/articles/visual-decision-making/).
Nielsen, J. (1999) Ten usability heuristics. Useit (http://www.useit.com/papers/heuristic/heuristic_list.html).
Nielsen, J. (1999) Web research: believe the data. Useit (http://www.useit.com/alertbox/990711.html).
Rackspace EMEA Blog (2009) Server hugging and security. Rackspace (http://blog.rackspace.co.uk/?p=116), accessed 20 April 2010.
Raskin, A. (2007) Never use a warning when you mean undo. A List Apart (http://www.alistapart.com/articles/neveruseawarning/).
Ritzenthaler, D. (2009) Taking the guesswork out of design. A List Apart (http://www.alistapart.com/articles/taking-the-guesswork-out-of-design/).
Shah, R. (2011) UI messaging and perceived latency. Google Code (http://code.google.com/speed/articles/usability-latency.html).
Tractinsky, N. (1997) Aesthetics and Apparent Usability: Empirically Assessing Cultural and Methodological Issues. Department of Industrial Engineering and Management, Ben Gurion University of the Negev (http://www.sigchi.org/chi97/proceedings/paper/nt.htm).
Tufte, E. R. (2001) The Visual Display of Quantitative Information (2nd edn). Graphics Press USA.
West, R. (2010) Theory of Addiction. University College London.
Wróblewski, L. (2008) Sign up forms must die. A List Apart (http://www.alistapart.com/articles/signupforms/).
Chapter 15
A trainee perspective of workplace-based assessments
Clare Oakley and Ollie White
The introduction of workplace-based assessments (WPBAs) represented a significant shift in the culture of postgraduate medical education. The assessments became compulsory for specialty trainees in August 2007, at a time of considerable upheaval within the profession. Many trainees were already experiencing significant stress (Whelan et al, 2008) following the disastrous specialty selection process resulting from the Medical Training Application Service (MTAS), and confidence in the ability of the leaders of the profession to make the right decisions for medical training was low. In addition, many trainees were unclear about the role of the Royal College of Psychiatrists in their training since the advent of the Postgraduate Medical Education and Training Board (PMETB). Concurrently, changes were being made to the MRCPsych examinations, with the familiar long case, critical appraisal and essay papers being replaced. Changing guidelines about the role of WPBA in the new examination eligibility criteria compounded the confusion for trainees and trainers. Unfortunately, the combination of these circumstances created a negative preconception for many trainees about the implementation of a new form of assessment. Workplace-based assessments were perceived by some trainees as an unwanted imposition that added further hurdles to their training. In this chapter we will outline the benefits and challenges of WPBAs from the perspective of psychiatric trainees. Although we have focused on psychiatry, many of the issues are also relevant to those in other medical specialties. We will consider the WPBA tools in turn, their role in the overall assessment of competence, their impact on the trainee–trainer relationship, and the practicalities of their implementation and delivery, including the online system. Finally, we will highlight areas that need further development. In writing this chapter we have drawn upon our experiences as two trainees who have fallen either side of the Modernising Medical Careers (MMC) reforms. Between us, we have experience of undertaking WPBA as part of the new curriculum and also of acting as assessors for junior colleagues. As past chairs of the College's Psychiatric Trainees' Committee we have incorporated feedback from trainees throughout the UK about their experience of the implementation of WPBA.
Why do we need workplace-based assessments?

The need for WPBA is a vital issue for trainees, and exploration and clarification of this issue are crucial to ensure that this form of assessment becomes an accepted and valued part of postgraduate training in psychiatry. Trainees have long complained about the 'injustice' of high-stakes, 'all or nothing' examinations, as initially highlighted in the Department of Health's report Unfinished Business – Proposals for the Reform of the Senior House Officer Grade (Donaldson, 2002). This report included specific recommendations that the purpose of all the medical Royal Colleges' examinations should be reviewed and progress through training programmes should be determined by competence-based assessments. The subsequent reform of postgraduate medical education has led to modernisation of the MRCPsych examinations. Details of the changes to the MRCPsych examinations, including their relationship with WPBA, are discussed elsewhere in this book (see Chapter 12). In brief, whereas passing the MRCPsych examinations rightly remains an essential requirement for progression into higher training, WPBAs are now a key component of demonstrating readiness for sitting the examinations. This is with the aim of ensuring that trainees receive feedback on their progress and are therefore better prepared to sit the examinations. Whereas examinations provide a snapshot of a trainee's abilities in an artificial, pressurised situation, it has been argued that observing and assessing what doctors actually do in the workplace is the best measure of the care they can provide to patients (Norcini, 2003). It is possible to assess a wider range of attitudes, skills and behaviours in the workplace than can be marked in an examination. Therefore, it is felt that performance in the workplace provides a robust measure of a doctor's overall competence. Assessment remains key to the educational process and is an underlying principle in postgraduate psychiatric curricula in the UK. However, the benefits depend on robust delivery of workplace-based assessments to ensure a valid measure of competence. Recent evidence suggests that many trainees currently have negative views about the implementation and usefulness of such assessments (Babu et al, 2009; Menon et al, 2009). Successful implementation is affected by many factors, including the reliability and validity of the WPBA tools and the practicalities of conducting WPBAs. These factors are discussed in this chapter.
International experience

In the USA there has been increasing concern about the workplace-based training of doctors over the past 20 years. This was highlighted by the finding that the vast majority of first-year trainees in internal medicine were not observed more than once by a faculty member in a patient encounter where they were taking a history or conducting a physical examination (Day et al, 1990). This prompted the development of various
WPBA tools by American boards of medicine and Canadian universities. Examples include Chart Stimulated Recall (CSR) (Maatsch et al, 1983), a precursor of case-based discussion; the mini-Clinical Evaluation Exercise (mini-CEX) (Norcini et al, 1995), which has been developed by the Royal College of Psychiatrists into the mini-Assessed Clinical Encounter (mini-ACE); Clinical Work Sampling (CWS) (Turnbull et al, 2000); and Clinical Encounter Cards (CEC) (Hatala & Norman, 1999). The need for formative assessment is now well established in postgraduate medical education in North America and the role that training organisations have in ensuring its successful continued implementation is particularly stressed (Norcini & Burch, 2007). Competency-based training is also being introduced in several countries in Western Europe, including Sweden, The Netherlands and Denmark. The Netherlands and the UK have undergone similar processes of developing the workplace-based assessment system, resulting in several similar assessment tools and trainees expressing similar concerns about their implementation (Oakley et al, 2008). Several European countries have introduced variations of the core WPBA tools of multisource feedback, case-based discussion and observation of a patient assessment. Therefore, it is crucial that there is collaboration between countries to ensure that there is a sharing and further development of WPBA knowledge and experience. This collaboration began with a competency-based training working group of the Union Européenne des Médecins Spécialistes (UEMS), which developed a European curriculum and assessment framework for psychiatry. Trainees are represented in this group by the European Federation of Psychiatric Trainees (EFPT), of which the Psychiatric Trainees' Committee of the Royal College of Psychiatrists is an active member.
WPBA tools

Assessment of Clinical Expertise (ACE)

The removal of the long case (Individual Patient Assessment) from the MRCPsych examination was felt by some, including trainees, to represent a significant loss in the comprehensive assessment of the holistic skills of a psychiatrist (Benning & Broadhurst, 2007; Baker-Glenn, 2008). It has been argued that the compartmentalising of skills into short Objective Structured Clinical Examination (OSCE) stations does not capture the important skills of a thorough assessment and formulation of a management plan (Norman, 2002). Therefore, the inclusion of long cases assessed in the workplace (ACE) is a crucial part of the assessment programme for psychiatry. Many higher trainees think that the ACE is less relevant for their use as they have already demonstrated their ability to take an appropriate history and perform a mental state examination by completing core training. It may, however, be appropriate for higher trainees to undertake ACEs in particularly complex clinical situations, with the aim of achieving mastery.
Mini-Assessed Clinical Encounter (mini-ACE)

As the ACE can be compared to the long case of the old examinations, so the mini-ACE can be considered analogous to an OSCE/Clinical Assessment of Skills and Competencies (CASC) station. The mini-ACE can be utilised in a variety of clinical encounters, including reviews of patients in both in-patient and out-patient settings. It is a more concise patient interaction than the ACE and therefore offers more opportunities to undertake the assessment, in terms of both the flexibility of clinical situations and the variety of possible assessors. Owing to the similarities between the mini-ACE and CASC stations, many trainees may find this tool a valuable means of examination preparation. However, the fact that the mini-ACE assessment is conducted without time pressure allows trainees a different opportunity to demonstrate competence.
Case-based discussion

Case-based discussion can be regarded as the formalisation of a process that has always occurred in clinical and educational supervision. Psychiatric trainees and trainers have therefore found it the easiest WPBA to use, as it fits readily within the existing structure and understanding of weekly educational supervision. The structured case-based discussion procedure allows an examination of the trainee's written and oral communication skills and an assessment of their formulation and management abilities. It is often useful to undertake case-based discussion directly after an ACE to allow consideration of the overall assessment, formulation and initial management plan of the patient. Although the observation, discussion and feedback of a case will be a lengthy process lasting up to 2 hours, it is likely to be a very informative exercise for trainee and trainer to undertake at the beginning of a placement to assist with the development of an individual learning plan.
Mini-Peer Assessment Tool (mini-PAT)

Multisource feedback is becoming an increasingly important element of professional development for all doctors, including consultant psychiatrists (Lelliott et al, 2008). Various multisource feedback tools have been considered to enable trainers to gather feedback from colleagues and multidisciplinary team members about the performance and progress of their trainee. The favoured tool for psychiatric trainees is the mini-Peer Assessment Tool (mini-PAT). It can be argued that the mini-PAT formalises and anonymises the informal process of third-party feedback that trainers have traditionally undertaken. The benefit of the anonymity of the mini-PAT multisource feedback process is that it may allow assessors to feel more able to express any concerns. In addition, the online administration and collation of mini-PAT responses reduces the administrative burden on the trainer.
From the trainee's perspective the mini-PAT took some time to adapt to but is now generally functioning well. Initial figures for the training year 2008/2009 indicated that 85% of trainees had six or more assessors complete their mini-PAT assessment. However, there has been criticism centring on difficulties in trainees obtaining the online results, as it is a requirement for the educational supervisor to initially review them and meet face to face with the trainee to provide constructive feedback. Although a source of frustration for trainees, this mechanism is important as it not only allows supervisors to place the feedback within the context of the trainee's and the placement's wider circumstances and the trainee's overall performance, but also supports supervisors in appropriately addressing any health and probity concerns that might be raised through the feedback process. It is also felt by some trainees that the mini-PAT is too limited in the depth and range of feedback for higher trainees. It may be that there is an opportunity for specialty-specific development of the mini-PAT for higher trainees, perhaps also making it more consistent with the ACP 360, the multisource feedback tool developed by the College for consultant psychiatrists.
Journal Club Presentation (JCP), Case Presentation (CP) and Assessment of Teaching (AoT)

The Case Presentation and Journal Club Presentation are easily completed during trainee presentations given at academic programmes. They have the advantage that multiple suitable assessors are likely to be in the audience. This is in contrast to the Assessment of Teaching, as trainees have reported that it can be problematic to find a suitable assessor to complete such an assessment. This is because teaching is rarely delivered to more senior trainees and because of the restrictions preventing junior colleagues (e.g. more junior trainees and medical students) from acting as assessors. Therefore, to undertake an Assessment of Teaching it is usually necessary to ask a consultant to observe the teaching session being given.
Direct Observation of Non-Clinical Skills (DONCS)

To date, the development of workplace-based assessment tools for psychiatric training has focused on core trainees. This was in part due to the time pressures involved in preparing tools for use as part of the examination eligibility criteria for the new MRCPsych examinations. Higher trainees have expressed concern that many of the existing tools have limited suitability to assess post-Membership competencies (i.e. competencies obtained after passing the MRCPsych examination), including the supervision of colleagues and management skills. The existing core WPBA tools place an emphasis on clinical competencies and neglect the greater focus on non-clinical skills that occurs during higher training. More recent workplace-based assessment development work has therefore focused on the assessment of non-clinical skills in higher trainees.
The WPBA tool that has recently been piloted to assess non-clinical skills is the DONCS (described in detail in Chapter 7). This tool allows assessment of skills such as giving evidence in a mental health tribunal, leading a ward round and supervising a more junior trainee. Many higher trainees feel that their competencies would be most appropriately assessed using the DONCS and case-based discussion tools.
Trainee–trainer relationship

Workplace-based assessments have required an evolution in the relationship between a trainee and their trainer. Post-MMC training, and in particular the introduction of WPBA, has introduced more structure and formalised procedures requiring additional work for both trainees and trainers. Trainees' experience is that to date consultant trainers are the most frequent WPBA assessors compared with other members of the multidisciplinary team. Reasons for this are likely to include a sense of obligation on the part of the trainer and practical limitations for other members of the team. This has significant time implications for trainers, as identified by the General Medical Council (GMC) trainers' survey report (GMC, 2010a). This lack of resources has recently been recognised by the Royal College of Psychiatrists, which has now provided guidance about the time that should be set aside in a consultant's job plan in order to deliver specialist training in psychiatry. The College recommends that consultant trainers (or clinical supervisors) should have 0.5 programmed activities per week per trainee, and educational supervisors (also called College tutors or training programme directors) should have 0.5 programmed activities per week for every four core trainees or every eight higher trainees. This guidance takes account of the fact that assessment should be in addition to, not instead of, the traditional educational supervision. The weekly 1-hour educational supervision is highly valued by trainees and considered an essential component of postgraduate psychiatric training. Although trainees understand that consideration of assessments will be a part of this supervision time with a trainer, it should not overshadow the other equally important facets such as mentoring, discussion of teaching, management and research experience, and pastoral care. Compared with pre-MMC training, the trainee is now required to be even more proactive in obtaining the necessary experience to ensure that they are able to demonstrate the required competencies. At the commencement of each placement the trainer and trainee should develop and agree an individual learning plan that outlines the trainee's learning objectives, how these objectives will be achieved (e.g. clinical experiences and learning opportunities), and how the trainee will demonstrate their achievement (e.g. WPBA). This individual learning plan should be reviewed regularly to ensure that the placement is meeting the trainee's specific learning requirements and that they are progressing appropriately. As WPBA forms
one part of the evidence of competency achievement, the individual learning plan should include details of the WPBAs that will be undertaken in specific clinical situations during the placement. Another issue related to WPBA that has an impact on the trainee–trainer relationship concerns poorly performing trainees. Trainers may perceive low scores and negative feedback to trainees as a failing on their part. This is also likely to create additional work for the trainer due to the remedial measures that will be required, including additional assessments and liaison with others (e.g. the training programme director). Another issue is that trainers may feel protective of their trainee. All these factors may ultimately result in unduly lenient assessments. This is unhelpful for the trainee as specific areas requiring improvement will not be highlighted and may lead to subsequent unexpected failures, including in the CASC examination. The early stages of development of WPBA in psychiatry included discussions about introducing independent assessors for a proportion of trainees’ assessments. Although this would have significant logistical implications, it would have the advantage of improving the robustness of the WPBA process.
The importance of structured feedback

As highlighted earlier, to ensure that WPBA is and remains a meaningful learning experience it is essential that neither trainees nor trainers view the assessments as merely tick-box exercises. Structured feedback is an essential part of the WPBA process but many trainees believe that it is currently lacking. This is supported by research, which has shown that assessors do not employ basic feedback techniques such as encouraging trainees to self-rate or using assessments to develop an action plan (Fitch et al, 2008). Workplace-based assessment provides a valuable opportunity to give structured, meaningful and timely feedback to trainees about their performance. It has been suggested that this may help to redress the perceived reduction in feedback and mentorship that has occurred as a result of the move to more shift-based working (Carr, 2006). In the UK, trainees in psychiatry have always viewed feedback as desirable (Day & Brown, 2000). However, there is a need to provide high-quality feedback to ensure trainee development and for this purpose one of the various established feedback methods such as Pendleton's rules or the SET-GO method should be adopted (Brown & Cooke, 2009).
Annual Review of Competence Progression (ARCP)

The ARCP is explained in detail elsewhere in this book (Chapter 11). From a trainee's perspective, the role WPBAs play in the ARCP process is complex, and confusion and conflict remain regarding their formative and summative purposes. The aim of formative assessment is to identify individual trainee
strengths and weaknesses, with a view to improving performance via effective feedback that facilitates the development of specific learning plans. This contrasts with summative assessments, which determine achievement of goals or competencies through pass-or-fail tests. Workplace-based assessment is a formative assessment process, but, unfortunately, the fact that the assessments were initially linked with MRCPsych examination eligibility has resulted in both trainees and trainers perceiving them as pass-or-fail tests. This created significant additional anxiety among trainees that they must 'pass' the required WPBA to be eligible for the examination. The MRCPsych examination eligibility requirements have moved away from trainees completing specific WPBAs to the demonstration of an overall level of competence as evidenced at the ARCP. It is crucial to emphasise that the attainment of competence is demonstrated by a variety of evidence and expert judgements. Workplace-based assessments are only one of these types of evidence and it has been suggested that they are not sufficiently reliable to be used exclusively for making summative decisions (Wilkinson et al, 2008). Other evidence includes attendance at training courses, passing the MRCPsych examinations, a logbook of clinical activity, audit reports and publications. Expert judgements about the progress of trainees are made via educational supervisor reports and ultimately by the ARCP panel. So although, quite rightly, much has been made of the reliability and validity of the new WPBA tools, they must be considered in the overall context of a trainee's educational evidence, not in isolation. The results of the analysis of the pilot WPBA data, including reliability and validity, are discussed elsewhere in this book (Chapter 13). In summary, it is clear that a reasonably high number of WPBAs are needed to be indicative of the performance of the trainee. Current College guidelines specify a minimum of two ACEs in CT1 and three ACEs in CT2 and CT3, two case-based discussions and four mini-ACEs for all core trainees; higher trainees should be undertaking one WPBA per month. These figures should be viewed as a minimum standard for trainees and often more assessments will be required. It is also important for trainees to undertake a range of WPBAs in a variety of clinical situations with a range of assessors from the multidisciplinary team. Trainees hope that improvements to WPBA reliability and validity data will continue over forthcoming years as workplace-based assessment becomes more established. This will increase the robustness of, and confidence in, using WPBA to help determine a trainee's progress.
Practical considerations

As already highlighted, WPBAs should not involve completely new training methods and are probably best considered as the evolution of processes that have long been good practice in postgraduate medical education. For example, the key features of case-based discussion should be undertaken
whenever a trainee presents a case to a senior colleague for advice. In the traditional apprenticeship model of training, the trainee observes their consultant interviewing a patient, followed by the consultant then observing the trainee undertaking a similar task, and providing feedback on their performance. The second part of this process is essentially the same as an ACE assessment. Therefore, undertaking WPBA involves the trainee and trainer formalising many of the training opportunities that have long been in existence. It is important to plan some WPBA in advance, but it is also crucial for trainers and trainees to be able to seize opportunities as they present themselves. For example, a crisis assessment in the community or out of hours may be a suitable chance to be assessed by a senior nurse, social worker or higher trainee. Medics within the multidisciplinary team, particularly the trainee’s consultant, are likely to undertake the majority of a trainee’s assessments. Educational supervisors are viewed highly by trainees in terms of their ability to perform these assessments (Menon et al, 2009). The pilot WPBA data have clearly demonstrated that consultant psychiatrists are the strictest in terms of scoring compared with other assessors. This is a finding consistent with other specialties (Wilkinson et al, 2008). However, it is important to involve other multidisciplinary team members in WPBA to obtain a wider perspective of the trainee’s abilities. The College WPBA guidance has clearly laid out the recommended seniority of multidisciplinary team assessors. Many higher trainees have reported that this results in a very small pool of suitable assessors during each placement. However, the guidance is intended to ensure that the assessors have sufficient experience to provide a meaningful assessment of the trainee. Indeed, concerns expressed by multidisciplinary team assessors include uncertainty regarding the standard and level they should be assessing the trainee at. Although consultants have a clear responsibility for assessing their trainee, other multidisciplinary team assessors do not have such responsibility. This, combined with a lack of specific dedicated time for such activities in their job plan, has resulted in some trainees finding it difficult to ensure the necessary breadth of assessors.
Portfolio Online

Workplace-based assessments were introduced immediately following the difficulties of the Medical Training Application Service (MTAS). Trainee confidence in online systems was minimal and this resulted in a low tolerance for some of the inevitable initial technical difficulties when Healthcare Assessment and Training (HcAT), the initial online portal for psychiatric WPBAs, was launched (Babu et al, 2009; Menon et al, 2009). The psychiatric WPBA online system was subsequently brought in-house to the College under the name Assessments Online. It was designed specifically for psychiatry and a significant increase in satisfaction has been reported by
trainees. More recently, the Assessments Online system has been extended and developed to become Portfolio Online (see Chapter 10). Trainees report that the assessment functions of Portfolio Online are intuitive and user-friendly for both assessors and those being assessed. Importantly, phone and email queries are dealt with efficiently, usually by the next working day. The fact that approximately 10% of the emails received by the Portfolio Online support team expressed thanks demonstrates the improvement in the new system. Fewer than 20% of queries relate to technical issues (e.g. how to register for an account or retrieve a forgotten password). The trainees have commented that there are far fewer technical difficulties with the new system. To date, approximately half of enquiries to Portfolio Online support staff have been related to the mini-PAT. These have primarily concerned trainees providing incorrect assessor email addresses and trainees and assessors missing response deadlines. Another common query surrounds the issue of CT2 trainees being assessed at CT3 level, and ST4 trainees being assessed at ST5 level. This has caused confusion for trainees and, more commonly, for assessors. Trainees gradually became more familiar with the arrangement that CT2/3 and ST4/5 were considered phases of training rather than distinct individual training years, as per the relevant psychiatric curricula. This issue is an example of how changes in the detail of psychiatric postgraduate training can cause significant confusion and stress for both trainee and trainer. Understanding of and familiarisation with both the structure and processes involved will continue to improve with time, and online guidance via Portfolio Online has proved crucial in disseminating key information. A remaining problem with using an online system to store and collate WPBAs is the degree of computer literacy among assessors. It is not uncommon for trainees to report that their consultants are not familiar with information technology and are therefore reluctant to use a computerised rather than paper-based system. Trainees find it difficult to relate to this position, particularly considering the current reliance on emails and the internet within the National Health Service. In some clinical situations which lend themselves to WPBAs, a computer may not be available to enter the data (e.g. when undertaking an ACE during a home visit). However, it is possible to print blank assessment forms from Portfolio Online in advance, which can then be taken to the site of the assessment and completed by hand. These can then be entered on Portfolio Online retrospectively with the prior date of assessment recorded. This may raise specific issues for some members of the multidisciplinary team who are not familiar with Portfolio Online and trainees may need to provide specific instructions on how to access the site. There are many advantages to an online assessment system when compared with a paper-based system for collating WPBA data (see Chapter 14). One of the most obvious is in the administration of the mini-PAT, as email reminders to assessors, collation of responses, and the
graphical display of results clearly exceed what can be achieved on paper alone. However, for the individual trainee the merits of entering the other assessments online are less clear. As discussed above, trainees often find it challenging to engage colleagues in the WPBA process and many feel that this is hampered further by the requirement of assessors to interact with an online system. The main beneficiaries from an online system are educational supervisors, training programme directors and deaneries, as they can readily view summary reports of individual trainees, whole cohorts of trainees, and various subgroups. These data are also very valuable in the monitoring of training schemes.
Trainees as assessors

Although this chapter focuses on the assessment of trainees by WPBA, many trainees are also assessors. Higher trainees frequently assess their junior colleagues, many of whom report difficulties in ensuring their consultant trainers complete the necessary number of assessments. Advantages of fellow trainees undertaking assessments include the widening of clinical situations in which WPBA can be undertaken. Out-of-hours experience is a particular opportunity; for example, the necessary discussion regarding an acute assessment between a core trainee and a higher trainee while on call can fairly easily be adapted into a case-based discussion. Higher trainees working within the same clinical team also provide core trainees with a resource for assessing a wide range of WPBAs, including ACE, mini-ACE and mini-PAT. As with all assessors, higher trainees require training in workplace-based assessment to ensure reliability and validity. The structured nature of MMC training offers an opportunity to embed specific WPBA assessment competencies within the new competency-based curricula. Indeed, to ensure WPBA becomes a fully integrated part of postgraduate medical education, all doctors need to be able to assess others. The current plans to include WPBA assessment competencies in the revised foundation curriculum are crucial in achieving this, although these competencies will need to be developed throughout training before obtaining a Certificate of Completion of Training (CCT) and through continuing professional development post-CCT.
The way forward

At the time of writing, it is only just 3 years since the widespread introduction of WPBAs in UK postgraduate medical education. Trainees, trainers and other members of the multidisciplinary team involved in assessment have been required to adopt various new skills and adapt to another demand on their already stretched time resources. The long-term benefits of WPBA are an improvement in trainee skills owing to structured
feedback and improved demonstration of competencies during training, leading to greater assurance for patients and carers. For these advantages to be optimised, careful planning needs to continue with regard to the continued implementation of WPBA. Perhaps the greatest challenge to the success of WPBA tools surrounds the change in culture regarding their utilisation. The medical profession is traditionalist in nature and any significant changes in postgraduate medical education are likely to be resisted by many. When this is considered alongside the other changes resulting from Modernising Medical Careers, particularly the structure of training, it is perhaps not surprising that WPBA is viewed negatively by some trainees and trainers. However, it appears that WPBA is here to stay and in fact is likely to expand into recertification for consultants. It is therefore important that assessment be accepted by the profession and becomes an integrated part of training and medical practice. As well as the general cultural change around workplace-based assessment that is required, there are two specific areas that call for further development. The first is adequate training for WPBA assessors to ensure that the reliability and validity of the assessments are not compromised. The College ran an extensive national programme of local training sessions as WPBAs were implemented, but inevitably it was impossible to reach every consultant psychiatrist and very few multidisciplinary team assessors were trained. These training sessions involved explaining the WPBA tools and discussing videos of assessments to achieve consensus about scoring. It is felt that the face-to-face discussion component of this training is important to its success. It is a current GMC requirement that 'trainers must understand and demonstrate ability in the use of the approved in-work assessment tools and be clear as to what is deemed acceptable progress' (GMC, 2010b). This standard is strengthened by the 2010 GMC standards for training, which require all WPBA assessors to understand the requirements of the assessments (GMC, 2010b). The logistical difficulties of face-to-face training mean that other methods will need to be employed to reach all assessors. Other medical Royal Colleges are using online training, which includes videos and real examples to maximise learning. However, the literature suggests that brief training interventions may not be sufficient to achieve the required accuracy (Noel et al, 1992). Another potential impact of the GMC requirement is the reduction in the number of multidisciplinary team assessors. Because assessing trainee psychiatrists is not considered a core part of their job, it may not be possible for them to be released to undertake training in WPBA. The second area that requires specific progress is the development of WPBA tools for higher trainees. Although the existing tools are applicable in higher training, they mainly focus on clinical competencies, whereas the higher training curricula include a number of non-clinical competencies such as leadership, management, research, teaching and supervision. Some work has already been carried out to develop a tool that assesses teaching competencies, but further work is required to ensure that trainees gain
skills such as leading a ward round and chairing a meeting. The DONCS tool offers a potential solution to the assessment of a wide range of non-clinical competencies and will require development over the forthcoming years. There is also scope for further development of WPBA tools to meet the needs of higher trainees. The DONCS will be applicable to all higher trainees, but there may be specialty-specific unmet needs for assessing advanced competencies. This may involve the adaptation of existing tools, for example by introducing specialty-specific items in the mini-PAT, or it may involve developing new tools. In addition, we understand that the first steps have been taken to develop specialty-specific guidelines for higher trainees about which WPBAs are best undertaken in which circumstances.
Conclusions

The introduction of workplace-based assessment has been a challenge for trainees. The main frustrations surround the significant increase in bureaucracy as a result of Modernising Medical Careers. Although workplace-based assessments form only part of this increased workload, they have required trainees to become more responsible for demonstrating their competencies by seeking assessors for WPBAs. There is a need for all trainers to collaborate effectively with trainees to achieve meaningful assessments. Despite these difficulties, many trainees do understand the benefits of workplace-based assessment, particularly the opportunity to demonstrate progression and obtain structured feedback from a range of assessors. These benefits will continue to increase as WPBA tools continue to develop and this process of assessment becomes accepted within the culture of postgraduate psychiatric training.
References

Babu, K. S., Htike, M. M. & Cleak, V. E. (2009) Workplace-based assessments in Wessex: the first 6 months. Psychiatric Bulletin, 33, 474–478.
Baker-Glenn, E. (2008) New exam structure – too much too soon? Psychiatric Bulletin, 32, 197.
Benning, T. & Broadhurst, M. (2007) The long case is dead – long live the long case: Loss of the MRCPsych long case and holism in psychiatry. Psychiatric Bulletin, 31, 441–442.
Brown, N. & Cooke, L. (2009) Giving effective feedback to psychiatric trainees. Advances in Psychiatric Treatment, 15, 123–128.
Carr, S. (2006) The Foundation Programme assessment tools: an opportunity to enhance feedback to trainees? Postgraduate Medical Journal, 82, 576–579.
Day, E. & Brown, N. (2000) The role of the educational supervisor: A questionnaire survey. Psychiatric Bulletin, 24, 216–218.
Day, S. C., Grosso, L. G., Norcini, J. J., et al (1990) Residents' perceptions of evaluation procedures used by their training program. Journal of General Internal Medicine, 5, 421–426.
Donaldson, Sir L. (2002) Unfinished Business: Proposals for Reform of the Senior House Officer Grade (A Report by Sir Liam Donaldson, Chief Medical Officer for England. A Paper for Consultation). Department of Health (http://217.154.121.42/doh/Docs/UnfinishedBusiness.pdf).
Fitch, C., Malik, A., Lelliott, P., et al (2008) Assessing psychiatric competencies: what does the literature tell us about methods of workplace-based assessment? Advances in Psychiatric Treatment, 14, 122–130.
General Medical Council (2010a) GMC National Training Surveys 2010: Key Findings. GMC (http://www.gmc-uk.org/National_Training_Surveys_2010_Key_findings.pdf_36304800.pdf).
General Medical Council (2010b) Generic Standards for Training. GMC (http://www.gmc-uk.org/education/postgraduate/generic_standards_for_training.asp).
Hatala, R. & Norman, G. R. (1999) In-training evaluation during an internal medicine clerkship. Academic Medicine, 74, S118–S120.
Lelliott, P., Williams, R., Mears, A., et al (2008) Questionnaires for 360-degree assessment of consultant psychiatrists: development and psychometric properties. British Journal of Psychiatry, 193, 156–160.
Maatsch, J. L., Huang, R., Downing, S., et al (1983) Predictive validity of medical specialist examinations. Final report for Grant HS02038-04. Office of Medical Education Research and Development, Michigan State University.
Menon, S., Winston, M. & Sullivan, G. (2009) Workplace-based assessment: survey of psychiatric trainees in Wales. Psychiatric Bulletin, 33, 468–474.
Noel, G. L., Herbers, J. E., Capow, M. P., et al (1992) How well do internal medicine faculty members evaluate the clinical skills of residents? Annals of Internal Medicine, 117, 757–765.
Norcini, J. (2003) Work-based assessment. BMJ, 326, 753–755.
Norcini, J. & Burch, V. (2007) Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29, 855–871.
Norcini, J. J., Blank, L. L., Arnold, G. K., et al (1995) The mini-CEX (Clinical Evaluation Exercise): A preliminary investigation. Annals of Internal Medicine, 123, 795–799.
Norman, G. (2002) The long case versus objective structured clinical examinations. BMJ, 324, 748–749.
Oakley, C., Malik, A. & Kamphuis, F. (2008) Introducing competency-based training in Europe: an Anglo-Dutch perspective. International Psychiatry, 5, 100–102.
Turnbull, J., MacFayden, J., van Barneveld, C., et al (2000) Clinical work sampling: a new approach to the problem of in-training evaluation. Journal of General Internal Medicine, 15, 556–561.
Whelan, P., Meerten, M., Rao, R., et al (2008) Stress, lies and red tape: the views, success rates and stress levels of the MTAS cohort. Journal of the Royal Society of Medicine, 101, 313–318.
Wilkinson, J., Crossley, J., Wragg, A., et al (2008) Implementing workplace-based assessment across the medical specialties in the United Kingdom. Medical Education, 39, 309–317.
Chapter 16
Conclusions
Amit Malik, Dinesh Bhugra and Andrew Brittlebank
The clinical, scientific and regulatory milieu in which postgraduate training is delivered is constantly evolving. The pace of change in medical education is evenly matched by the pace of change in healthcare delivery and regulatory structures. This pace has been, and will continue to be, more dramatic than it was even a couple of decades ago. First, the clinical context in which training has been delivered has been transformed dramatically in the past 30 years. Significant developments in therapeutics, along with a move from asylum-based to community-based care, have helped bring about this transformation. The National Service Framework for Mental Health (Department of Health, 1999) has led to the creation of specialised teams such as the crisis resolution and home treatment teams, which have required trainees to be trained and assessed in a new set of skills. Many services are now considering amalgamation of these teams into more generic teams to enhance continuity of care while retaining a specialist focus. Other initiatives such as Improving Access to Psychological Therapies will pose new challenges to the skill mix within mental health teams and the demands made on a psychiatrist's competence. Although these changes in service configuration and delivery do not necessarily alter the psychiatrist–patient relationship, specific leadership, service development and management skills will be required to ensure that psychiatrists remain central to mental health delivery. The New Ways of Working initiative (Department of Health, 2005) has significantly affected the role of the consultant psychiatrist in many parts of the UK and the end product of postgraduate training must constantly be reviewed to ensure that psychiatrists are 'fit for purpose' for contemporary roles. Similarly, the European Working Time Directive will continue to have major implications for the training and assessment of trainees, and this legislation should be monitored over the next decade to understand its full impact on the training, competence and confidence of future new psychiatrists. Moreover, new scientific developments such as the new pharmacological and psychological therapies or more advanced neuroradiological techniques will necessitate the acquisition of different clinical competencies. Therefore,
all training programmes and curricula must constantly be updated to keep pace with science and clinical practice. The evidence base in medical education has now unequivocally shifted towards performance- and competency-based learning. In the past decade a few competency frameworks have been widely accepted in Western countries. These define the roles (general categories of competencies) expected of doctors. Notable among these initiatives are the general competencies defined by the Accreditation Council for Graduate Medical Education in the USA, the Royal College of Physicians and Surgeons of Canada's CanMEDS model (Royal College of Physicians and Surgeons of Canada, 2005) and the General Medical Council's Good Medical Practice guidance in the UK (General Medical Council, 2009). All three models define the broad roles within which various specialties have defined competencies that form the basis of their curriculum. Alongside these changes in training, there have also been considerable developments in assessments as an integral and continuous component of postgraduate medical training rather than discrete, high-stakes knowledge and skills assessments. Increasingly, there is greater focus on assessing the higher levels (the 'shows' and 'does' of Miller's four levels of assessment; Miller, 1990). Workplace-based assessments aim to focus on the highest level of performance assessment and have a definite place within a wider assessment system. Finally, the regulatory and organisational milieu is forever changing. The PMETB has now merged into the GMC (General Medical Council, 2010); the Department of Health has established Medical Education for England to oversee the delivery of postgraduate medical training in England, among other functions; deaneries have established postgraduate schools of psychiatry; Irish psychiatrists have formed their own College of Psychiatry of Ireland, separating from the Royal College of Psychiatrists in the UK; training has gone from being uncoupled, to run-through, to being uncoupled again, and we are gradually and safely testing the waters once more with national selection. The central relationship between trainee and trainer will constantly be tested by this ever-changing context and once again the professional experience and expertise of the trainer will guide the professional commitment of the trainee through this bureaucratic confusion and lead to the development of the well-trained professional.
The Royal College of Psychiatrists

The Royal College of Psychiatrists has been at the forefront of adapting to all the changes over the past decade. New competency-based curricula and a new assessment system have been developed, not once but thrice. The preceding chapters discussed in detail the new assessment system, which includes national examinations and workplace-based assessments. This has been set out in the context of the wider changes within the health
service and postgraduate medical education. The assessment tools that are now used across the UK have been described and discussed in detail. Their strengths and weaknesses and the challenges posed by their use in psychiatric training are issues that have occupied the authors of this volume for quite some time. It is hoped that their reflections and experiences will benefit trainees and trainers in utilising these as both instructional and assessment tools. The utility of the portfolio not only as an implement of reflective practice but also as a catalogue of achievements has been detailed. It is hoped that the user-friendly electronic availability of assessment tools and the overall portfolio will encourage trainees to utilise both these educational modalities optimally and appropriately. The descriptions of local and national pilot programmes clearly highlight not just some of the practicalities of implementing assessment systems at local and national levels but also some of the early lessons learnt from these projects. The scientific and evidence base for some of these changes has been dispersed throughout the chapters to encourage those so inclined to explore further the academic prospects in the field of psychiatric education, including its delivery and scientific research.
Future directions

The overwhelming evidence for workplace-based assessment tools and systems points towards their utility as formative or developmental instructional tools, but many questions remain about their use for summative purposes. The Academy of Medical Royal Colleges in its review (2009) has highlighted this as well as other challenges and considerations in implementing workplace-based assessments in the UK. Significant emphasis has been placed on the importance not only of trainee and supervisor acceptability of these tools and assessment systems but also of recognising the resource implications, at a systemic and individual level, of undertaking these assessments. Concerns have also been raised by other experienced educators regarding the reductionist nature of competencies and the dangers of relying on a system that focuses on being 'good enough' rather than 'aspiring for excellence' in every instance. Even though workplace-based assessments have been part of the assessment framework within psychiatric training in the UK for over 3 years, it is still early days in the life of the new assessment programmes, especially the new workplace-based assessment tools. As the evidence from their implementation grows, they are bound to be developed further, both in terms of their structure and their application. It is important to state, however, that this development must be a gradual and considered process and it is crucial that there are periods of stability within the development of the assessment systems, to allow enhanced end-user acceptability as well as ensure that adequate lessons are learnt before changes are tested and implemented. The long-term success of such a vast system will be
malik et al
demonstrated by assessments being truly used to drive learning alongside a range of other educational experiences which will all be recorded and mapped on to the curriculum through a well-designed, user-friendly electronic portfolio, thus providing a basis for feedback and progression to becoming fully trained psychiatrists. All the proposed changes have major resource implications both in terms of the development and ongoing quality assurance of the assessments and their delivery at a trust, deanery and national level. It is hoped that computerised systems will, in the long-term, address some of these implications, especially with regard to recording and reporting of mandatory training information. However, the vast majority of resource implications are here to stay and should be seen by employing organisations and education providers as an investment in creating an improved and enhanced training and assessment system, and therefore an investment in the future of psychiatry. The resource implications must be addressed in a transparent manner. It cannot be assumed that these resources can be sucked out of an already stretched health service, and the ‘assessment time’ issues must be tackled head on if these changes are to succeed. This is a very challenging but at the same time exciting period for the profession and it is vital that we meet the needs of patients and their carers, who deserve and expect the best available service. In local and national discussions about medical education it is often argued that the key role of the health service is to ensure patient care, and everything else, including training and assessments, is secondary. It is important to remember that the health service has to ensure high-quality patient care, not just for the patients of today but also for the patients of tomorrow. It can only do this by training and assessing current medical students and postgraduate trainees to the highest possible standards.
References

Academy of Medical Royal Colleges (2009) Improving Assessment. AMRC.
Department of Health (1999) National Service Framework for Mental Health: Modern Standards and Service Models. Department of Health.
Department of Health (2005) New Ways of Working for Psychiatrists: Enhancing Effective, Person-Centred Services through New Ways of Working in Multidisciplinary and Multiagency Contexts. Department of Health.
General Medical Council (2009) Good Medical Practice. GMC.
General Medical Council (2010) Standards for Curricula and Assessment Systems. GMC (http://www.gmc-uk.org/education/postgraduate/standards_for_curricula_and_assessment_systems.asp).
Miller, G. E. (1990) The assessment of clinical skills/competence/performance. Academic Medicine, 65, S63–S67.
Royal College of Physicians and Surgeons of Canada (2005) The CanMEDS Physician Competency Framework. Royal College of Physicians and Surgeons of Canada (http://rcpsc.medical.org/canmeds/index.php).
Appendix 1: Assessment forms
These forms are for reference only. They are generated separately for each individual through the Royal College of Psychiatrists' Portfolio Online system. Other generic workplace-based assessment forms can be downloaded from the Portfolio Online website (http://www.rcpsych.ac.uk/training/assessmentsonlinesign-up/assessmentsonlineinformation.aspx).
Mini-Assessed Clinical Encounter (mini-ACE), CT1 level

Trainee name: Example Trainee
GMC number: Not specified
Training level: CT1

Assessment details
Date of assessment:
Focus of assessment (tick all that apply)
Assessment of a psychiatric emergency (acute psychosis)
Assessment of change in functioning
Assessment of a common psychiatric condition
Assessment of a complex psychiatric condition
Assessment of response to treatment
Assessment of a severe and enduring mental illness
Assessment of a psychiatric emergency (suicidal feelings and acts)
Management of a psychiatric emergency (acute psychosis)
Management of a common psychiatric condition
Management of a complex psychiatric condition
Management of a severe and enduring mental illness
Management of a psychiatric emergency (suicidal feelings and acts)
Obtaining informed consent

If the focus of the assessment is not listed above then please describe it here:
Clinical setting: General hospital / CMHT / OPD / In-patient / Crisis/emergency / Other (please specify)
Previous contact: 0 / 1-4 / 5-9 / >9
Complexity: Low / Average / High
Diag 1: F    Diag 2: F
Page 1 of 2
Mini-Assessed Clinical Encounter (mini-ACE), CT1 level
Trainee name: Example Trainee
GMC number: Not specified
Training level: CT1
Assessment gradings

Please use the following rating scale for items 1–7:
1–3  Below standard for end of CT1
4    Meets standard for CT1 completion
5–6  Above CT1 standard
U/C  Unable to comment

1. History-taking
2. Mental state examination
3. Communication skills
4. Clinical judgement
5. Professionalism
6. Organisation/efficiency
7. Overall clinical care
(each item is rated 1–6 or U/C)

Please use the following rating scale for item 8:
1–3  Below expectations
4    Satisfactory
5–6  Better than expected
U/C  Unable to comment

8. Based on this assessment, how would you rate the trainee's performance at this stage of training?
(rated 1–6 or U/C)

Assessment comments
Anything especially good
Suggestions for development
Agreed action
Page 2 of 2
Assessment of Clinical Expertise (ACE), CT1 level

Trainee name: Example Trainee
GMC number: Not specified
Training level: CT1

Assessment details
Date of assessment:
Focus of assessment (tick all that apply)
Assessment of a psychiatric emergency (acute psychosis)
Assessment of change in functioning
Assessment of a common psychiatric condition
Assessment of a complex psychiatric condition
Assessment of response to treatment
Assessment of a severe and enduring mental illness
Assessment of a psychiatric emergency (suicidal feelings and acts)
Management of a psychiatric emergency (acute psychosis)
Management of a common psychiatric condition
Management of a complex psychiatric condition
Management of a severe and enduring mental illness
Management of a psychiatric emergency (suicidal feelings and acts)
Obtaining informed consent

If the focus of the assessment is not listed above then please describe it here:
Clinical setting: General hospital / CMHT / OPD / In-patient / Crisis/emergency / Other (please specify)
Previous contact: 0 / 1-4 / 5-9 / >9
Complexity: Low / Average / High
Diag 1: F    Diag 2: F
Page 1 of 2
Assessment of Clinical Expertise (ACE), CT1 level
Trainee name: Example Trainee
GMC number: Not specified
Training level: CT1
Assessment gradings

Please use the following rating scale for items 1–7:
1–3  Below standard for end of CT1
4    Meets standard for CT1 completion
5–6  Above CT1 standard
U/C  Unable to comment

1. History-taking
2. Mental state examination
3. Communication skills
4. Clinical judgement
5. Professionalism
6. Organisation/efficiency
7. Overall clinical care
(each item is rated 1–6 or U/C)

Please use the following rating scale for item 8:
1–3  Below expectations
4    Satisfactory
5–6  Better than expected
U/C  Unable to comment

8. Based on this assessment, how would you rate the trainee's performance at this stage of training?
(rated 1–6 or U/C)

Assessment comments
Anything especially good
Suggestions for development
Agreed action
Page 2 of 2
Mini-Peer Assessment Tool (mini-PAT), CT1 level

Trainee
Name: Example Trainee
GMC number: Not specified
Training level: CT1

Assessor
Name: Example Assessor
Reference number: GMC number - 1234567890
Position: ST4-ST6
Setting
Date of assessment:
Which environment have you primarily observed the practitioner in?
In-patients
Out-patients
Both in- and out-patients
Community specialty
Other
Please use the following rating scale throughout:
1–3  Below standard for end of CT1
4    Meets standard for CT1 completion
5–6  Above CT1 standard
U/C  Unable to comment
(each item is rated 1–6 or U/C)

Good clinical care
1. Ability to diagnose patient problems
2. Ability to formulate appropriate management plans
3. Awareness of their own limitations
4. Ability to respond to psychosocial aspects of illness
5. Appropriate utilisation of resources, e.g. ordering investigations

Maintaining Good Medical Practice
6. Ability to manage time effectively/prioritise
7. Technical skills (appropriate to current practice)

Teaching and training, appraising and assessing
8. Willingness and effectiveness when teaching/training colleagues
Page 1 of 3
Mini-Peer Assessment Tool (mini-PAT), CT1 level

Trainee
Name: Example Trainee
GMC number: Not specified
Training level: CT1

Assessor
Name: Example Assessor
Reference number: GMC number - 1234567890
Position: ST4-ST6
Relationship with patients
(rating scale as above: 1–3 Below standard for end of CT1; 4 Meets standard for CT1 completion; 5–6 Above CT1 standard; U/C Unable to comment)
9. Communication with patients
10. Communication with carers and/or family
11. Respect for patients' dignity and their right to privacy & confidentiality

Working with colleagues
12. Verbal communication with colleagues
13. Written communication with colleagues
14. Ability to recognise and value the contribution of others
15. Accessibility/reliability

Global ratings and concerns
16. Overall, how do you rate this trainee compared with others at the same grade?
(1–3 Below standard for end of CT1; 4 Meets standard for CT1 completion; 5–6 Above CT1 standard; U/C Unable to comment)
17. How would you rate the trainee's performance at this stage of training?
(1–3 Below expectations; 4 Satisfactory; 5–6 Better than expected; U/C Unable to comment)

Health and probity
Do you have any concerns about this practitioner's health in relation to their fitness to practise?   Yes / No
Page 2 of 3
Mini-Peer Assessment Tool (mini-PAT), CT1 level

Trainee
Name: Example Trainee
GMC number: Not specified
Training level: CT1

Assessor
Name: Example Assessor
Reference number: GMC number - 1234567890
Position: ST4-ST6
Health and probity (continued)
If yes, please state your concerns:

Do you have any concerns about this practitioner's probity?   Yes / No
If yes, please state your concerns:

Do you have any additional comments?   Yes / No
Any additional comments:
Page 3 of 3
Direct Observation of Non-Clinical Skills (DONCS), ST4–ST5 level

Trainee name: Example Trainee
GMC number: Not specified
Training level: ST4

Assessment details
Date of assessment:
Skill Observed
Chairing
Teaching
Written communication
Clinical supervision
Testifying
Educational supervision
Consultation with other agencies
Other
Assessment gradings

Please use the following rating scale for items 1–7:
1  Significantly short of readiness for consultant practice
2  Approaching readiness for consultant practice
3  Ready for consultant practice
(each item is rated 1–3 or N/A)

1. Medical expert
2. Communicator
3. Collaborator
4. Manager
5. Health advocate
6. Scholar
7. Professional

Please use the following rating scale for item 8:
1–3  Below expectations
4    Satisfactory
5–6  Exceeds expectations

8. Based on this assessment, how would you rate this doctor's performance at this stage of training?
(rated 1–6 or N/A)

Assessment comments
Anything especially good
Suggestions for development
Page 1 of 2
Direct Observation of Non-Clinical Skills (DONCS), ST4–ST5 level
Trainee name: Example Trainee
GMC number: Not specified
Training level: ST4

Assessment comments (continued)
Agreed action
Satisfaction with assessment process

Please use the following rating scale:
1–2  Not at all satisfied
3–4  Reasonably satisfied
5–6  Very satisfied

9. Trainee's satisfaction
10. Assessor's satisfaction
(each rated 1–6)
Approximately how long did it take to complete the form?
Page 2 of 2
Educational Supervisor's Report (Psychiatry Specialty Training)

The purpose of this report is to inform the regular reviews of a psychiatry specialty registrar's progress through structured training. The report should reflect your experience of the trainee's performance during their clinical placement and should be discussed with the trainee before it is submitted.
The report relates to two main areas:
•	knowledge (relevant to the placement)
•	professional competencies

The trainee
Full name
GMC number
Date of birth
National training number
Address
The Post or Placement
Hospital/institution
Specialty/subspecialty
Address

From: ……day…..month……year
To: ……day…..month……year
Months:
Please delete as appropriate:
The training was full time
The training was part time and the ratio of part time to full time was…………….

1. Knowledge base relevant to the placement
Rating: Insufficient evidence / Needs further development / Competent / Excellent
Anything particularly good?
Areas for development
2. Professional competencies
Rating: Insufficient evidence / Needs further development / Competent / Excellent
1. Providing a good standard of practice and care [1]
2. Decisions about access to care [2]
3. Treatment in emergencies [3]
4. Maintaining good medical practice [4]
5. Maintaining performance [5]
6. Teaching and training, appraising and assessing [6]
7. Relationships with patients [7]
8. Dealing with problems in professional practice [8]
9. Working with colleagues [9]
10. Maintaining probity [10]
11. Ensuring that health problems do not put patients at risk [11]

Anything particularly good?
Areas for development
Endorsement by educational supervisor

I confirm that the above is based on my own observations and the results of workplace-based assessments and has been discussed with the trainee concerned.

Name
Signed
Date
Notes
1. This competency is about the clinical assessment of patients with mental health problems. It includes history-taking, mental state examination, physical examination, patient evaluation, formulation and record-keeping. It also includes the assessment and management of patients with severe and enduring mental health problems. Evidence to consider will include WPBAs, particularly the ACE, mini-ACE, CbD and multi-source feedback.
2. This competency is about the application of scientific knowledge to patient management, including access to appropriate care and treatment. Evidence to consider will include WPBAs, particularly the ACE, mini-ACE, CbD and multi-source feedback.
3. This competency is about the assessment and management of psychiatric emergencies. Evidence to consider will include WPBAs, particularly the ACE, mini-ACE, CbD and multi-source feedback.
4. This competency is about the maintenance and use of systems to update knowledge and its application to professional practice. This will include legislation concerning patient care, the rights of patients and carers, research, and keeping up to date with clinical advances. Evidence to consider will include WPBAs, reflective notes in the trainee's portfolio, the trainee's Individual Learning Plan and any record of educational supervision that they have kept.
5. This competency is about the routine practice of critical self-awareness, working with colleagues to monitor and maintain quality of care, and active participation in a programme of clinical governance. Evidence to consider will include multi-source feedback, records of audit and research projects undertaken and the trainee's reflective notes on these projects.
6. This competency is about the planning, delivery and evaluation of learning and teaching; appraising and evaluating learning and learners; supervising and mentoring learners; and providing references. Evidence to consider will include multi-source feedback, completed Assessment of Teaching forms and any quality data kept by the relevant teaching faculty or programme.
7. This competency is about the conduct of professional patient relationships, including good communication, obtaining consent, respecting confidentiality, maintaining trust and ending professional relationships with patients. Evidence to consider will include WPBAs, particularly the ACE, mini-ACE, CbD and multi-source feedback.
8. This competency is about handling situations where there are concerns regarding the conduct or performance of colleagues, handling complaints and formal inquiries, holding indemnity insurance and providing assistance at inquiries and inquests. Evidence to consider will include CbD, multi-source feedback and reflective notes, including critical incident reports.
9. This competency is about treating colleagues fairly, by working to promote value-based, non-prejudicial practice; working effectively as a member and a leader of multidisciplinary teams; arranging clinical cover; taking up appointments; sharing information with colleagues; and appropriate delegation and referral. Evidence to consider will include CbD and multi-source feedback.
10. This competency is about maintaining appropriate ethical standards of professional conduct, which may include: providing information about your services; writing reports, giving evidence and signing documents; carrying out and supervising research; properly managing financial and commercial dealings; avoiding and managing conflicts of interest and advising others on preventing and dealing with them; and appropriately managing financial interests that may have a relevance to professional work. Evidence to consider will include CbD, multi-source feedback and your review of reports written by the trainee.
11. This competency is about the doctor's awareness of when his/her own performance, conduct or health, or that of others, might put patients at risk, and the action taken to protect patients. Behaviours you may wish to consider: observing the accepted codes of professional practice; allowing scrutiny and justifying professional behaviour to colleagues; achieving a healthy balance between professional and personal demands; seeking advice; and engaging in remedial action where personal performance is an issue.
Appendix 2: Guide for ARCP panels in core psychiatry training
For those in core training, Table A2.1 shows the minimum number of each assessment that needs to be undertaken. The minimum numbers have been arrived at in the light of the reliability of each tool, together with an estimate of the numbers that are likely to be needed to ensure broad coverage of the curriculum. Many trainees will require more than this minimum; none will require fewer.

Table A2.1 Number of workplace-based assessments (WPBAs) required in core training in psychiatry

WPBA                                                        Minimum number required per year
                                                            CT1   CT2   CT3
Assessment of Clinical Expertise (ACE)                       2     3     3
mini-Assessed Clinical Encounter (mini-ACE)                  4     4     4
Case-based discussion (CbD)                                  4     4     4
Direct Observation of Procedural Skills (DOPS)               *     *     *
mini-Peer Assessment Tool (mini-PAT)                         2     2     2
Case-based Discussion Group Assessment (CbDGA)               2     –     –
Supervised Assessment of Psychotherapy Expertise (SAPE)      –     1     1
Case Presentation (CP)                                       1     1     1
Journal Club Presentation (JCP)                              1     1     1
Assessment of Teaching (AoT)                                 *     *     *
Direct Observation of Non-Clinical Skills (DONCS)            *     *     *

* There is no set number to be completed in core psychiatry training; they may be performed as the opportunity arises.
– Not required.
Intended learning outcomes and the WPBA evidence expected in CT1, CT2 and CT3
1 Be able to perform specialist assessment of patients and document relevant history and examination on culturally diverse patients to include: • presenting or main complaint • history of present illness • past medical and psychiatric history • systemic review • family history • sociocultural history • developmental history
1a Clinical history
By the end of CT1, the trainee should demonstrate the ability to take a history and perform an examination on an adult patient who has any of the common psychiatric disorders, including: affective disorders; anxiety disorders; psychotic disorders; and personality disorders
By the end of CT2, the trainee should demonstrate the ability to independently take a competent history and perform an examination on adult patients who present with a full range of psychiatric disorders, including: disorders of cognitive impairment; substance misuse disorders; and organic disorders
By the end of CT3, the trainee should demonstrate the ability to take a history and perform an examination of patients with psychiatric disorders who have a learning disability or are children, and be able to perform a competent assessment of a patient with medically unexplained symptoms or physical illness and psychiatric disorder
ACE conducted with an adult patient not previously known to the trainee
ACE taking a history from a person with cognitive impairment, if not completed in CT1
ACE taking a history from a not previously known patient who is either physically unwell or has medically unexplained symptoms, if not completed in CT2
ACE taking a history from a person with a substance misuse problem, if not completed in CT1
ACE taking a history from a not previously known child or patient with learning disability, including an interview with a parent or carer when appropriate, if not completed in CT2. This assessment must be conducted by an appropriate specialist
1b Patient examination
ACE conducted with an adult patient not previously known to the trainee, to include mental state examination and an appropriate physical examination
Mini-ACE, including an appropriate physical examination, to recognise and identify the effects of psychotropic medication
Mini-ACE to determine mood disturbance in a physically ill patient, if not completed in CT2
CbD of a case presentation of a patient the trainee has fully assessed, including a collateral history
Mini-ACE of assessment of cognition, if not performed in CT1
Mini-ACE of an examination of a child or a patient with learning disability, including an appropriate physical examination, if not completed in CT2. This assessment must be conducted by an appropriate specialist
Mini-ACEs of patients to demonstrate skilful identification of psychopathology
Mini-ACE of assessment of the physical effects of substance misuse, if not completed in CT1
2 Demonstrate the ability to construct formulations of patients' problems that include appropriate differential diagnoses
By the end of CT1, the trainee should demonstrate the ability to construct a formulation on an adult patient who has any of the common psychiatric disorders, including: affective disorders; anxiety disorders; psychotic disorders; and personality disorders
By the end of CT2, the trainee should demonstrate the ability to independently construct a formulation on adult patients who present with a full range of psychiatric disorders, including: disorders of cognitive impairment; substance misuse disorders; and organic disorders
By the end of CT3, the trainee should demonstrate the ability to construct a formulation of patients with psychiatric disorders who have a learning disability or are children
2a Diagnosis
CbD of differential diagnosis in a patient with a common presenting problem
CbD in a person presenting to older adults service, if not completed in CT1
CbD of differential diagnosis in a child or patient with learning disability, if not completed in CT2. This assessment must be conducted by an appropriate specialist
2b Formulation
CbD of an adult patient with a common presenting problem, to describe the factors in the aetiology of the problem
CbD of an adult patient with a more complex problem, to describe the factors in the aetiology of the problem, if not completed in CT1
CbD to discuss the assessment of a child or patient with learning disability, if not completed in CT2. This assessment must be conducted by an appropriate specialist
CbD to discuss the assessment of a child or patient with learning disability, focusing on the possibility of maltreatment, neglect or exploitation, if not completed in CT2. This assessment must be conducted by an appropriate specialist
3 Demonstrate the ability to recommend relevant investigation and treatment in the context of the clinical management plan. This will include the ability to develop and document an investigation plan including appropriate medical, laboratory, radiological and psychological investigations and then to construct a comprehensive treatment plan addressing biological, psychological and sociocultural domains
By the end of CT1, the trainee should demonstrate the ability to describe further investigations and negotiate treatment with an adult patient who has any of the common psychiatric disorders, including: affective disorders; anxiety disorders; psychotic disorders; and personality disorders
By the end of CT2, the trainee should demonstrate the ability to describe further investigations and negotiate treatment to adult patients who present with a full range of psychiatric disorders, including: disorders of cognitive impairment; substance misuse disorders; and organic disorders
By the end of CT3, the trainee should demonstrate the ability to negotiate treatment options in more challenging situations and with patients with psychiatric disorders who have a learning disability or are children
3a Individual consideration
Mini-ACE negotiating a treatment plan or discussing investigations with patient, family and/or carers
3b Investigation
CbD to discuss planning investigations in an adult patient with a common presenting problem
Mini-ACEs discussing treatment options in more challenging situations, such as with a reluctant patient (i.e. someone with limited insight), an acutely physically ill patient or a patient whose first language is not English, if not completed in CT2
CbD to discuss planning investigations in an adult patient with a more complex problem, if not completed in CT1
CbD to discuss referral for specialist psychotherapeutic assessment, if not completed in CT2
CbD of planning investigation of a person with suspected dementia or delirium, if not completed in CT1
3c Treatment planning
Mini-ACE and CbD, repeated several times, focusing on different conditions
CbD to demonstrate awareness of issues in prescribing in common physical disease states, such as liver or cardiac disease, if not completed in CT2
CbD to discuss psychological treatment of a case
CbD of treatment planning for a child or a patient with learning disability, if not completed in CT2. This assessment must be conducted by an appropriate specialist
4 Based on a comprehensive psychiatric assessment, demonstrate the ability to comprehensively assess and document a patient’s potential for self-harm or harm to others. This would include an assessment of risk, knowledge of involuntary treatment standards and procedures, the ability to intervene effectively to minimise risk and the ability to implement prevention methods against self-harm and harm to others. This will be displayed whenever appropriate, including in emergencies
4a All clinical situations
By the end of CT1, the trainee should demonstrate the ability to perform a competent risk assessment and construct a defensible risk management plan for an adult patient with a common psychiatric disorder
By the end of CT2, the trainee should demonstrate the ability to perform a competent risk assessment and construct a defensible risk management plan for an older adult patient and in more challenging situations
Mini-ACE of risk assessment interview
Mini-ACE of risk assessment interview with an older person, if not completed in CT1
By the end of CT3, the trainee should demonstrate the ability to perform a competent risk assessment and construct a defensible risk management plan for patients with a psychiatric disorder who have a learning disability or are children, and be able to perform a competent assessment of a patient who may require intervention using mental health or capacity legislation
CbD of a risk assessment and management plan
4b Psychiatric emergencies
Several mini-ACEs of assessing risk in emergency situations (A&E departments, crisis team, out of hours); at least one must be conducted by a consultant assessor
CbD of the assessment and management of a violent or other serious untoward incident. This may involve management of violence, absconsion or seclusion, if not completed in CT1
Mini-ACE of assessment for rapid tranquillisation, if not completed in CT2
CbD of an emergency in child or adolescent psychiatry or in the psychiatry of learning disability, if not completed in CT2. This assessment must be conducted by an appropriate specialist
4c Mental health legislation
CbD of emergency assessment
CbD or mini-ACE of using mental health legislation in relation to capacity and consent, if not completed in CT2
CbD of mental health legislation as applied to the mentally disordered offender
4d Broader legal framework
Clinical supervisor report
5 Based on the full psychiatric assessment, demonstrate the ability to conduct therapeutic interviews (that is, to collect and use clinically relevant material). The doctor will also demonstrate the ability to conduct a range of individual, group and family therapies using standard accepted models and to integrate these psychotherapies into everyday treatment, including biological and sociocultural interventions
5a Psychological therapies
By the end of CT1, the trainee should demonstrate the ability to think in psychological terms about patients who have mental health problems and to foster therapeutic alliances
By the end of CT2, the trainee should demonstrate the ability to conduct a course of brief or long psychological therapy under supervision
By the end of CT3, the trainee should demonstrate the ability to conduct a second course of psychological therapy of a different duration and in a different modality from that conducted in CT2
CbDGA (two in the year)
SAPE for long or short case (must achieve at least satisfactory in all domains)
SAPE for a different modality and duration from CT2 (must achieve at least satisfactory in all domains)
CbD to discuss psychological therapy in routine psychiatric practice, if not completed in CT2
6 Demonstrate the ability to concisely, accurately and legibly record appropriate aspects of the clinical assessment and management plan
6a Record-keeping
By the end of CT1, the trainee should demonstrate the ability to properly record appropriate aspects of clinical assessments and management plans
During CT2, the trainee should continue to demonstrate the ability to properly record appropriate aspects of clinical assessments and management plans
By the end of CT3, the trainee will be able to describe the structure, function and legal implications of medical records and medico-legal reports
To be assessed every time a CbD is conducted (at least four in the year)
To be assessed every time a CbD is conducted (at least four in the year)
To be assessed every time a CbD is conducted (at least four in the year, one of which should include a medico-legal report that the trainee has written; the latter may be in ‘shadow form’)
7 Develop the ability to carry out specialist assessment and treatment of patients with chronic and severe mental disorders and to demonstrate effective management of these disease states
7a Management of severe and enduring mental illness
By the end of CT1, the trainee should be able to describe long-term severe and enduring mental illnesses and the issues involved in the care and treatment of people with these problems
By the end of CT2, the trainee should demonstrate the ability to assess capacity in a person who has cognitive impairment and be able to construct a medication treatment plan of an older person’s mental illness
By the end of CT3, the trainee should demonstrate the ability to construct a treatment plan for a patient who has a severe and enduring mental illness and for either a child or person with learning disability who has a long-term neurodevelopmental disorder
CbD of a review of the care or treatment of a patient who has a severe and enduring mental illness
Mini-ACE assessing capacity in a person with cognitive impairment, if not completed in CT1
CbD of the care of a person who has a severe and enduring mental illness. The focus is to explore how well the trainee can understand the illness from the patient’s point of view. May be completed in CT2 or CT3
CbD of psychopharmacological management of an older person’s illness, if not completed in CT1
CbD/mini-ACE of the care of a person who has a severe and enduring mental illness. The focus is on the trainee's understanding of quality of life. May be completed in CT2 or CT3
Mini-ACEs assessing several aspects of capacity or changes in capacity in a single patient over time, if not completed in CT2
CbD to discuss understanding of the assessment of capacity and its consequences, if not completed in CT2
ACE of history-taking from a paediatric neuropsychiatry case or a child with ADHD or autism or a person with learning disability who has one of these problems, if not completed in CT2. This assessment must be conducted by an appropriate specialist
CbD to discuss management of a child with a long-term condition or with a person with learning disability, if not completed in CT2. This assessment must be conducted by an appropriate specialist
8 Use effective communication with patients, relatives and colleagues. This includes the ability to conduct interviews in a manner that facilitates information gathering and the formation of therapeutic alliances
8a Within a consultation
By the end of CT1, the trainee should demonstrate the ability to competently conduct clinical interviews with patients
During CT2, the trainee should continue to demonstrate the ability to conduct clinical interviews with patients who have increasingly complex needs
By the end of CT3, the trainee should demonstrate the ability to conduct clinical interviews in increasingly challenging situations, including with children or people who have learning disabilities
Mini-ACEs to demonstrate a skilful approach to communicating, including use of emotional sensitivity
Two rounds of mini-PAT
Mini-ACE or ACE of interviews with a child or patient with a learning disability, if not performed in CT2. This assessment must be conducted by an appropriate specialist
Two rounds of mini-PAT
Mini-ACE/ACE of interview with a patient who has chronic delusions and hallucinations (if not completed in CT2)
Two rounds of mini-PAT
9 Demonstrate the ability to work effectively with colleagues, including team working
9a Clinical teamwork
By the end of CT1, the trainee should demonstrate the ability to work effectively as a member of a mental health team
By the end of CT2, the trainee should demonstrate the ability to work effectively as a member of a mental health team that works with older people
By the end of CT3, the trainee should demonstrate the ability to work effectively as a member of a mental health team that works with children or with people who have learning disabilities
CbD of a patient who is being seen by other members of the MDT
CbD of an older person who is being seen by members of the older persons’ CMHT, if not performed in CT1
CbD of a child or patient with learning disability who is being seen by other health or social care agencies, if not performed in CT2. This assessment must be conducted by an appropriate specialist
Two rounds of mini-PAT
Two rounds of mini-PAT
Two rounds of mini-PAT
Supervisors’ reports
Supervisors’ reports
Supervisors’ reports
10 Develop appropriate leadership skills
10a Effective leadership skills
By the end of CT1, the trainee should demonstrate the ability to take on appropriate leadership responsibility, for example by acting as rota coordinator
By the end of CT2, the trainee should demonstrate the ability to take on appropriate leadership responsibility in increasingly challenging situations, for example by acting as a representative on a working group
By the end of CT3, the trainee should demonstrate the ability to take a lead in an aspect of the work of a mental health team
Two rounds of mini-PAT
Two rounds of mini-PAT
Two rounds of mini-PAT
Supervisors’ reports
Supervisors’ reports
DONCS/CbD focused on the trainee's participation in a multidisciplinary meeting planning the care of patients, if not completed in CT2
Supervisors' reports
11 Demonstrate the knowledge, skills and behaviours to manage time and problems effectively
By the end of CT1, the trainee should demonstrate the ability to organise their work time in the context of a mental health service effectively, flexibly and conscientiously and be able to prioritise clinical problems
By the end of CT2, the trainee should demonstrate the ability to organise their work time more independently
By the end of CT3, the trainee should demonstrate awareness of the importance of continuity of care
11a Time management
Two rounds of mini-PAT
Two rounds of mini-PAT
CbD focused on the trainee’s contribution over a period of several months to the care of a patient with enduring mental health needs. May be completed in CT2 or CT3 Two rounds of mini-PAT
11b Communication with colleagues
Two rounds of mini-PAT
Two rounds of mini-PAT
Two rounds of mini-PAT
Supervisors’ reports
Supervisors’ reports
Supervisors’ reports
11c Decision- Supervisors’ reports making
Supervisors’ reports
Supervisors’ reports
11d Continuity of care
Supervisors’ reports
Supervisors’ reports
Supervisors’ reports
11e Complaints
Supervisors’ reports
Supervisors’ reports
Supervisors’ reports
12 Demonstrate the ability to conduct and complete audit in clinical practice
12a Audit
By the end of CT2, the trainee should demonstrate the ability to perform and present an audit project
By the end of CT3, the trainee should demonstrate the ability to independently perform an audit project and apply its findings to the service as well as their own practice
Evidence of presentation of at least one complete audit project, if not completed in CT1
Evidence of presentation of a second complete audit project demonstrating application to a service, if not completed in CT2
13 To develop an understanding of the implementation of clinical governance
By the end of CT1, the trainee should demonstrate participation in clinical governance work, including an awareness of the importance of incident reporting and knowledge of relevant clinical guidelines
By the end of CT3, the trainee should demonstrate the ability to deviate from clinical guidelines when clinically appropriate to do so
13a Organisational framework for clinical governance and the benefits that patients may expect
Supervisors’ reports
Supervisors’ reports
Supervisors’ reports
14 To ensure that the doctor is able to inform and educate patients effectively
14a Educating patients about illness and its treatment
By the end of CT1, the trainee should demonstrate the ability to advise patients about the nature and treatment of common mental illnesses, so the patient may be more able to participate in their treatment, and the ability to advise patients about environmental and lifestyle factors and the adverse effects of alcohol, tobacco and illicit drugs
By the end of CT3, the trainee should demonstrate the ability to help a patient with a relapsing illness construct a relapse prevention plan
Mini-ACE or CbD of advising a patient about the nature and treatment of their illness
Mini-ACE of negotiating a relapse prevention plan, if not completed in CT2
CbD around a patient with an enduring mental health problem, focused on the trainee's understanding of how services may perpetuate and reinforce stigma. May be completed in CT2 or CT3
14b Environmental and lifestyle factors
Mini-ACE or CbD of advising a patient on environmental and lifestyle changes
14c Substance misuse
Mini-ACE or CbD advising a patient concerning the effects of alcohol, tobacco and illicit drugs on health and well-being
15 To develop the ability to teach, assess and appraise
15a The skills, attitudes, behaviours and practices of a competent teacher
By the end of CT1, the trainee should demonstrate the ability to construct an effective learning plan
By the end of CT2, the trainee should demonstrate the ability to participate in appraisal
By the end of CT3, the trainee should demonstrate the ability to teach in a variety of settings and to conduct assessments
An effective individual learning plan outlining learning needs, methods and evidence of attainment
As CT1
As CT1
Completed AoT forms with evidence of reflection on feedback, if not completed in CT2
15b Assessment
Evidence of assessing Foundation Programme doctors and/or clinical medical students, if not completed in CT2
15c Appraisal
Completed NHS appraisal
Completed NHS appraisal
16 To develop an understanding of research methodology and critical appraisal of the research literature
By the end of CT1, the trainee should demonstrate the ability to base their practice on best evidence
By the end of CT3, the trainee should demonstrate an understanding of basic research methodology and critical appraisal applied to the study of psychiatric illness and its treatment
16a Research techniques
JCP to demonstrate an understanding of basic research methodology, if not completed in CT2
JCP to demonstrate an understanding of the research techniques used in psychological therapies, if not completed in CT2
16b Evaluation and critical appraisal of research
JCP to demonstrate application of evidence to a clinical problem the trainee has encountered
JCP to demonstrate use of critical appraisal techniques, if not completed in CT2
JCP to demonstrate an understanding of the research base in psychological therapies and the particular difficulties in conducting research in this area, if not completed in CT2
17 To ensure that the doctor acts in a professional manner at all times
By the end of CT1, the trainee should demonstrate an understanding of the tensions that can exist in the doctor–patient relationship, issues relating to confidentiality and the sharing of information, professional codes of practice and conduct, and responsibility for personal health
17a Doctor–patient relationship
CbD to demonstrate understanding of the emotional and professional tensions that can exist in the doctor–patient relationship
By the end of CT3, the trainee should demonstrate skills in limiting information sharing appropriately, in obtaining consent, and in performing a risk assessment in children or people with learning disabilities who have a mental health problem
17b Confidentiality
CbD to demonstrate appropriate sharing of information
CbD to demonstrate capacity to limit information sharing appropriately, if not completed in CT2
17c Consent
Mini-ACE of obtaining consent for treatment of a psychiatric disorder
Mini-ACE of obtaining informed consent in a child or patient with learning difficulties, if not completed in CT2. This assessment must be conducted by an appropriate specialist
17d Risk management
CbD of risk assessment and management of an adult patient with a common psychiatric problem
CbD of risk assessment and management in an adult patient with a more complex psychiatric problem, if not completed in CT2
CbD of risk management in a child or patient with learning difficulties, if not completed in CT2. This assessment must be conducted by an appropriate specialist
17e Recognise own limitations
CbD to demonstrate an appreciation of the extent of one’s own limitations
17f Probity
Supervisors’ reports
Supervisors’ reports
Supervisors’ reports
17g Personal health
Supervisors’ reports
Supervisors’ reports
Supervisors’ reports
18 To develop the habits of lifelong learning
By the end of CT1, the trainee should demonstrate the ability to use learning opportunities to the greatest effect
During CT2, the trainee should continue to demonstrate the ability to use learning opportunities to the greatest effect
By the end of CT3, the trainee should demonstrate the ability to use systems to maintain up-to-date practice and demonstrate an understanding of the relevance of professional bodies
Supervisors’ reports
Supervisors’ reports
An effective individual learning plan outlining learning needs, methods and evidence of attainment
An effective individual learning plan outlining learning needs, methods and evidence of attainment
An effective individual learning plan outlining learning needs, methods and evidence of attainment
Evidence of selfreflection
Evidence of self-reflection
Evidence of selfreflection
Evidence of continued GMC registration
Evidence of continued GMC registration
Evidence of continued GMC registration
Evidence of registration with the Royal College of Psychiatrists
Evidence of registration with the Royal College of Psychiatrists
Evidence of registration with the Royal College of Psychiatrists
18a Maintaining good medical practice
18b Lifelong learning
18c Relevance of outside bodies
ADHD, attention-deficit hyperactivity disorder; A&E, accident and emergency; CMHT, community mental health team; GMC, General Medical Council; MDT, multidisciplinary team; NHS, National Health Service
Appendix 3: The MRCPsych examination
The MRCPsych examination consists of three written papers and a Clinical Assessment of Skills and Competencies (CASC) examination. The eligibility criteria and other regulations for each paper can be found on the College website (www.rcpsych.ac.uk/exams.aspx). The different examination components are briefly outlined below.
The MRCPsych written papers

All three MRCPsych written papers are 3 hours long and contain 200 questions. Each paper includes both 'best answer 1 of 5'-style multiple-choice questions (MCQs) and extended matching items (EMIs). The following subject areas are tested in each of the written papers:
MRCPsych Paper 1
•	General adult psychiatry
•	History and mental state examination
•	Cognitive assessment
•	Neurological examination
•	Assessment
•	Aetiology
•	Diagnosis
•	Classification
•	Basic psychopharmacology
•	Basic psychological processes
•	Human psychological development
•	Social psychology
•	Description and measurement
•	Basic psychological treatments
•	Prevention of psychiatric disorder
•	Descriptive psychopathology
•	Dynamic psychopathology
•	History of psychiatry
•	Basic ethics and philosophy of psychiatry
•	Stigma and culture
MRCPsych Paper 2
•	General adult psychiatry
•	General principles of psychopharmacology (pharmacokinetics, pharmacodynamics)
•	Psychotropic drugs
•	Adverse reactions
•	Evaluation of treatments
•	Neuropsychiatry (physiology, endocrinology, chemistry, anatomy, pathology)
•	Genetics
•	Epidemiology
•	Advanced psychological processes and treatments
MRCPsych Paper 3
•	General adult psychiatry
•	Research methods
•	Evidence-based practice
•	Statistics
•	Critical appraisal
•	Clinical topics
•	Liaison psychiatry
•	Forensic psychiatry
•	Addiction psychiatry
•	Child and adolescent psychiatry
•	Psychotherapy
•	Learning disability psychiatry
•	Rehabilitation psychiatry
•	Old age psychiatry
MRCPsych CASC

The CASC assesses a range of skills and competencies across six specialties of psychiatry (general adult, old age, child and adolescent, learning disabilities, psychotherapy and forensic psychiatry). The domains assessed are:
•	history
•	mental state examination
•	risk assessment
•	cognitive examination
•	physical examination
•	case discussion
•	difficult communication.

The examination format consists of two circuits completed on the same day. One circuit consists of eight individual stations of 7 min each. The other circuit consists of four pairs of linked stations; here the stations each last 10 min. Each station is marked individually and overall the CASC is marked over 16 individual stations.
Index

Compiled by Linda English
academic trainees 125 acceptability 7 Accreditation Council for Graduate Medical Education 24, 182 ACE see Assessment of Clinical Expertise ACP 360: 171 Angoff procedure 135 Annual Review of Competence Progression (ARCP) 11, 122–130 academic trainees 125 ACE 58 appeals of ARCP outcome 129 case example 123 deanery schools 122 educational supervisor reports 100, 106, 124–125 examinations 126 extension of training time 129 guide for panels 196–214 (Appendix 2) lay member of panel 125 mini-ACE 49–50 outcomes 124, 126–128, 129 panel 125–126 portfolios 111–112, 118–119, 123–124, 125, 129 process following 129 progress as specialty registrar 122–123 timing of 126 trainee perspective 173–174 trainee preparation 123–124 trainees attending panel 125–126 apprentice training model 56 ARCP see Annual Review of Competence Progression assessment and case-based discussion 32–34 educational impact 7 feasibility 7 formative 10–11, 53, 93, 173–174 forms 185–196 (Appendix 1)
218
literature overview of methods 14–27 local 7–10 national exams 7–8 purpose 133–134 reliability 7 summative 11, 93, 111, 174, 183 utility 7–10 validity 7 Assessment of Clinical Expertise (ACE) 56–67, 149 assessor training 66 background 56–57 domains 60–62 e-form completion 66 feedback 57, 60, 62–65 how it works 58–60 local assessors 57 long case 14, 16–17, 56–57, 169 number needed 58 performance descriptors 60 in pilot study 150, 151, 152 in Portfolio Online 188–189 (Appendix 1) practical issues 66 set-up 59–60 standardised patients 8 trainee perspective 169, 174 use of 57–58 when and with whom 58 Assessments Online 118, 154–166 audit 159 availability 157 back-up 158 benefits to various users 154–155 as case study 155–165 design phase 159–160 disaster recovery 158–159 implementation phase 161–163 LAMP stack 157 live phase 163–165 migration 159
index
mini-PAT nominations 163–164 multiple email addresses 163 multi-source feedback 72 navigation 164–165 project team 155–156 reflection 165 requirements are established 156–159 scoping phase 156 security 157–158 speed 157 timescales 159 user-friendliness 160–161 see also Portfolio Online Assessment of Teaching (AoT) 171 assessors ACE 58, 66 Assessments Online 155 case-based discussion 41–42 computer literacy 176 Direct Observation of Non-Clinical Skills 78–79 mini-ACE 48, 53, 54 mini-PAT 72–73 multidisciplinary team 175, 178 pointers for 9 reinforcement training 53 trainees as 177 training 48, 53, 54, 66, 150, 178
CanMEDS 3–4, 78, 79, 80, 182 CASC see Clinical Assessment of Skills and Competencies case-based discussion 22–23, 28–44 adoption and development in UK 30–31 aims and application 28–29 assessor skills and questions 41 assessor training 42 development in specialty training 31–32 discussion 36–37 domains 38–39 educational programme 42 European use 169 evidence-based practice 140 feedback 41 foundation programme 30–31 groups in psychotherapy 84, 86–92 judgement 38–39 key research messages 22–23 origins 29–34 performance descriptors 38–39 in pilot studies 43, 144–145, 146, 150, 151, 152 planning 34–36 postgraduate medical education 32–34 revalidation 42–43
trainee perspective 170, 174–175, 177 use 34–41 Case Presentation (CP) 171 Certificate of Completion of Training (CCT) 2, 4, 111, 122, 128, 177 CEX see Clinical Evaluation Exercise Chart-Stimulated Recall (CSR) 29–30, 42, 169 Clinical Assessment of Skills and Competencies (CASC) 31, 126, 137–139 170, 217 (Appendix 3) Clinical Encounter Cards (CEC) 169 Clinical Evaluation Exercise (CEX) 8, 15, 46, 57 clinical judgement assessment in ACE 61 in mini-ACE 51 Clinical Presentation (CP) 150, 151, 152 clinical supervisors 99, 104–105, 124, 172 Clinical Work Sampling (CWS) 169 communication skills assessment in ACE 60–61 and CASC 138 in mini-ACE 51 community-based care 181 competence definition 4–5, 109 long case 15 portfolios 109, 110–112 Competency-Based Curriculum for Specialist Training in Psychiatry 150 competency-based training 4–6, 182 criticism of 5, 132, 143–144 examinations 131–141 consultant psychiatrists 171, 172, 175, 181 CP see Clinical Presentation Critical Review Paper (CRP) 139, 140 CSR see Chart-Stimulated Recall
Direct Observation of Non-Clinical Skills (DONCS) 22, 76–83 assessing and appraising 80 chairing meetings 79, 82 clinical supervision 79 development 77–78 how to use 78–79 Mental Health Act assessments 82 number of assessments needed 80–81 piloting 76, 77–78, 81–82 in Portfolio Online 193–194 (Appendix 1) providing oral information 80 ‘ready for consultant practice’ scale 77, 79, 82 trainee perspective 171–172, 179 use in specific situations 79–80 written communication 82
219
INDEX
Direct Observation of Procedural Skills (DOPS) 20–22, 77 key research messages 21 in pilot studies 144–145, 146, 150, 151, 152 undertaking with psychiatric trainees 21–22
feasibility 7, 54 fixed-term specialty training 128 foundation programme 2 case-based discussion 30–31 mini-CEX 47–48 mini-PAT 71 portfolios 114–115
educational supervisor reports 11–12, 99–107 ARCP 100, 106, 124–125 audit 104 examination progress 103–104 experiential and other outcomes 101–104 individual learning plan 100 learning objectives 100 logbooks 102 management 104 in Portfolio Online 195–196 (Appendix 1) psychotherapy 104 purpose and structure 100–107 reflective practice 102–103 research 104 review of performance in workplace 101 special interest 104 teaching 104 trainee underperformance 104–107 educational supervisors Assessments Online 155 multi-source feedback 73 Portfolio Online 177 programmed activities 172 emergency settings 62, 102 European Federation of Psychiatric Trainees (EFPT) 169 European Working Time Directive (EWTD) 2–3, 109–110, 181 evidence-based practice 139–140 examinations 7–8, 131–141 ARCP 126 assessment methods 135–140 competency training 131–141 discrepancy with WPBA results 134 educational supervisor reports 103–104 eligibility criteria 134–135 evidence-based practice 139–140 external assessment in WPBA system 134–135 future 141 importance 131–132 multiple-choice paper 135–136 oral assessments 136–139 principles in psychiatry 132–133 purpose 133–134 writing skills 136 see also MRCPsych examination extended matching questions (EMQs) 135
General Medical Council (GMC) 3–4, 155 case-based discussion 28 Certificate of Completion of Training 122 Postgraduate Medical Education and Training Board 1, 2, 4, 182 revalidation 110 standards 142, 178 trainee–trainer relationship 172 Good Doctors, Safer Patients 68 Good Medical Practice 3, 71–72, 78, 106–107, 182 A Guide to Postgraduate Specialty Training in the UK (The Gold Guide) 99, 122, 124, 129
220
history-taking assessment in ACE 60 in mini-ACE 51 ill health 107 Improving Access to Psychological Therapies 181 individual learning plan 100 individual patient assessment (IPA) 137, 169 Journal Club Presentation (JCP) 23–24 key research messages 24 in pilot study 150, 151, 152 trainee perspective 171 Kolb’s learning cycle 108–109 leadership 68–69, 76, 77, 144 learning portfolios see portfolios linear equating formula 136 logbooks 102 long case 14–17 ACE 14, 16–17, 56–57, 169 improving 16 mini-ACE 46–47 oral assessment 137 reliability 7, 14–15, 17, 54, 56–57, 137 validity 15–16
index
McNamara fallacy 144 management 76, 77, 104 MCQs see multiple-choice questions Medical Education for England 182 Medical Leadership Curriculum 76 Medical Training Application Service (MTAS) 3, 154, 167, 175 mental state examination assessment in ACE 60 in mini-ACE 51 Miller’s pyramid 4, 69, 110, 182 mini-Assessed Clinical Encounter (miniACE) 19, 45–55 assessment set-up 49–50 assessor training 48, 53, 54 background 46–47 description 45–46 domains 51 feedback 47, 48, 53 foundation programme assessment 47–48 number required 48–49 person descriptors 51 in pilot study 150, 151, 152 planning assessments 50 in Portfolio Online 186–187 (Appendix 1) specialist training assessment 48–49 standardised patients 8 trainee perspective 169, 170, 174 mini-Clinical Evaluation Exercise (miniCEX) 19–20, 21, 169 key research messages 19–20 mini-ACE 45 in pilot studies 144–145, 146, 148, 149 reliability 137 standardised patients 8 mini-Peer Assessment Tool (mini-PAT) 17, 70–73 Assessments Online 154, 155, 163–164 descriptors 71 development 71–72 educational supervisor reports 101, 104–105 foundation assessment programme 71 in pilot study 151, 152 Portfolio Online 176–177, 190–192 (Appendix 1) process 72–73 trainee perspective 170–171, 179 Modernising Medical Careers (MMC) 2, 48, 122, 144, 150, 167 MRCPsych examination 49 ARCP 58, 174 CASC 217 (Appendix 3) trainee perspective 168, 171 written papers 216–217 (Appendix 3) multiple-choice questions (MCQs) 135–136 anchor questions 136 standardisation 135–136
multi-source feedback (MSF) 17–19, 68–75
  Assessments Online 154
  clinical skills 69
  educational supervisor 101
  European use 169
  feedback 73
  history 68–69
  humanistic skills 69
  key research messages 18
  length of time taken 73
  in pilot study 150
  practicalities 70–73
  response to receiving MSF 70
  self-assessment 69–70
  undertaking with psychiatric trainees 18–19
  see also mini-Peer Assessment Tool
New Ways of Working initiative 181
Nielsen’s top ten usability heuristics 161
Northern Deanery field trial 144–149, 152
Objective Structured Clinical Examinations (OSCEs) 7–8, 31, 46, 137, 169
online assessment system see Assessments Online; Portfolio Online
opinion leaders 145
oral assessments 136–139
organisational efficiency assessment
  in ACE 62
  in mini-ACE 51
out-of-programme time 128
overall clinical care assessment
  in ACE 62
  in mini-ACE 51
patients
  care of 184
  confidentiality 114
  standardised 8
Patient Satisfaction Questionnaire (PSQ) 19, 150, 151–152
peer ratings see multi-source feedback
performance 4–6
  definition 3–4
  methods for assessment of 6
  multi-source feedback 69
  trainee’s underperformance 104–107, 173
personal development day 124
personal development plan 100
pilot studies 142–153
  case-based discussion 43, 144–145, 146, 150, 151, 152
  Direct Observation of Non-Clinical Skills 76, 77–78, 81–82
  Northern Deanery field trial 144–149, 152
  Royal College of Psychiatrists 150–152
PMETB see Postgraduate Medical Education and Training Board
Portfolio Online 117–119, 152, 183
  ARCP 118–119, 123–124, 129
  assessment forms 185–196 (Appendix 1)
  educational supervisor reports 99–100
  spinal column model 114, 117
  trainee perspective 175–177
portfolios 108–121
  ARCP 111–112, 118–119, 123–124, 125
  ‘cake mix’ model 114
  competence 109, 110–112
  defined 108–109
  educational supervisor reports 99–100, 103, 104
  electronic 99–100, 115–119, 123–124, 155, 183–184
  existing models 114–117
  factors for success 112
  future directions 119
  Kolb’s learning cycle 108–109
  message box 117
  in Northern Deanery field trial 145, 146–148
  portfolio station 110
  potential barriers to use 112–114
  professional v. learning 108
  progression through training 110–112
  reasons for use 109–110
  revalidation 110, 111, 114
  ‘shopping trolley’ model 114
  ‘spinal column’ model 114, 117
  ‘toast rack’ model 114
  see also Portfolio Online
Postgraduate Medical Education and Training Board (PMETB) 1–2, 3–4, 167, 182
  case-based discussion 33–34
  exams 132
  piloting 149, 150
  standards 142, 154
professionalism assessment
  in ACE 61–62
  in mini-ACE 51
  in psychotherapy 93–96
PSQ see Patient Satisfaction Questionnaire
psychotherapy 84–98, 181
  account of session 97
  audio- or videotaping sessions 97
  case-based discussion groups 84, 86–92
  case studies 87, 90–91, 96
  casework 92–97
  educational supervisor reports 104
  evaluation of competencies 85–86, 87–92, 93–97
  future of WPBAs 97–98
  group therapy 92
  knowledge of modalities 84, 92
  nine-cell grid 97
  ‘psychotherapeutic attitude’ 84
  Skills for Health programme 85, 93
  supervision 92–93, 97
  teaching in 85
reflective practice 102–103, 114, 123
reliability 7, 133, 174
  CASC 138
  inter-case 15
  interrater 15, 56, 111
  long case 7, 14–15, 17, 54, 56–57, 137
  mini-CEX 137
  in pilot study 151
  portfolios 111
revalidation
  case-based discussion 42–43
  portfolios 110, 111, 114
Royal College of Psychiatrists 2, 3–4, 109, 182–183
  ACE 17
  ARCP 123–124
  Assessments Online 72, 118, 154–166
  case-based discussion 28
  consultants and training 172
  Direct Observation of Non-Clinical Skills 76, 77, 78
  educational supervisors 99
  mini-ACE 46, 54
  mini-PAT 70–72
  pilot of WPBAs 150–152
  portfolio framework 115
  Portfolio Online 99–100, 114, 117–119, 123–124, 129, 152, 183, 185–196 (Appendix 1)
  Psychiatric Trainees’ Committee 167, 169
  psychotherapy 84
run-through grade 2, 3
self-assessment 69–70
Sheffield Peer Review Assessment Tool (SPRAT) 71
SMART learning objectives 100, 112
standardised patients 8
Supervised Assessment of Psychotherapy Expertise (SAPE) 93
supervisor reports see educational supervisor reports
Team Assessment of Behaviour (TAB) 17
360°-feedback see multi-source feedback
Tooke report (2008) 2, 3
top-up training 128
trainees
  perspective 167–180
    ARCP 173–174
    higher trainees 178–179
    importance of structured feedback 173
    international experience 168–169
    need for WPBAs 168
    Portfolio Online 175–177
    practical considerations 174–175
    trainees as assessors 177
    trainee–trainer relationship 172–173
    way forward 177–179
    WPBA tools 169–172
  pointers for 9–10
  underperformance 104–107, 173
training changes 1–13, 143, 177–178, 181–184
Unfinished Business – Proposals for the Reform of the Senior House Officer Grade 168
utility of assessment/assessment systems 7–10
validity 7
  ACE 56, 152
  long case 15–16
  mini-ACE 47, 50, 152
  multi-source feedback 69
  Patient Satisfaction Questionnaire 151–152
  in pilot study 151–152
writing skills 82, 136