METHODS IN MOLECULAR BIOLOGY™
Series Editor: John M. Walker, School of Life Sciences, University of Hertfordshire, Hatfield, Hertfordshire, AL10 9AB, UK
For other titles published in this series, go to www.springer.com/series/7651
High Throughput Screening Methods and Protocols, Second Edition
Edited by
William P. Janzen University of North Carolina, Chapel Hill, NC, USA and
Paul Bernasconi BASF Corporation, Research Triangle Park, NC, USA
Editors: William P. Janzen, UNC Eshelman School of Pharmacy, Ctr Integrative Chemical Bio & Drug Disc, Division of Medicinal Chem & Natural Products, University of North Carolina, Chapel Hill, NC 27599-763, USA
[email protected]
Paul Bernasconi, BASF Corporation, Research Triangle Park, NC 27709-3528, USA
[email protected]
ISSN 1064-3745 e-ISSN 1940-6029
ISBN 978-1-60327-257-5 e-ISBN 978-1-60327-258-2
DOI 10.1007/978-1-60327-258-2
Springer Dordrecht Heidelberg London New York
Library of Congress Control Number: 2009929642
© Humana Press, a part of Springer Science+Business Media, LLC 2002, 2009
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Humana Press, c/o Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. While the advice and information in this book are believed to be true and accurate at the date of going to press, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
In the 6 years since the first edition of this book, the field of high-throughput screening (HTS) has evolved considerably. In 2004, the Society for Biomolecular Screening (SBS) celebrated its 10th anniversary. The event and its timing were significant because SBS is the world's largest association of scientists, engineers, and technologists associated with HTS. While the creation of SBS did not mark the birth of HTS by any means, its foundation in 1994 helped HTS find a common voice. It provided a discussion forum and a means to define and enforce standards. In 2006, SBS became the Society for Biomolecular Sciences, underlining the expansion of the members' interests beyond screening. Like any new technology, HTS went through growth stages. During the initial hype phase of the 1980s and 1990s, HTS, together with chemistry and genomics, was predicted to solve all of the pharmaceutical industry's pipeline problems. A fundamental change in drug discovery was afoot: the time-consuming physiology or medicinal chemistry experiments would be replaced by a numbers game, made possible by screening large, combinatorially generated compound libraries against numerous genomically identified targets. While this approach did (and continues to) deliver, it fell short of the expected revolution, exposing it to criticism from within and outside the industry (1). Learning from its mistakes, the HTS profession entered a period of change marked by increased integration. The once stand-alone HTS groups matured into an essential, integrated component of the discovery effort. Contrary to the fears of many of our colleagues, HTS did not replace hypothesis-driven research but rather expanded it. In addition, because an HTS campaign is inherently expensive, more effort was expended to ensure the quality of the hypothesis. Finally, compounds discovered by HTS enabled the testing of marginal hypotheses, thereby increasing the role of serendipity in discovery. To reach this maturity level, the HTS field had to learn to "play well with others." Of course, robotics, automation engineering, and data handling remain the hallmarks of HTS. But to be truly useful, HTS had to be integrated with the other discovery disciplines: genomics, molecular biology, cell biology, enzymology, pharmacology, and chemistry. Successful discovery starts long before and continues long after an HTS campaign. It also became clear that a large number of tests is not a replacement for quality components. Long gone are the days in which a marginally active target, or a target in a marginally relevant physiological state, is screened against large collections of compounds of questionable quality, diversity, or purity. Success is measured less by the number of compounds screened or by the hit rate and more by the quality of the chemical series entering the clinical pipeline. As a reward, HTS researchers can now point to several marketed drugs whose birthplace was a well in a microtiter plate (2). For example, in the breast cancer therapeutic indication alone, three drugs have been introduced, which originated from an HTS campaign and are worth mentioning here: (1) Iressa™, an ATP-competitive inhibitor of the epidermal growth factor receptor tyrosine kinase; (2) sorafenib tosylate, or Nexavar™, a specific inhibitor of the kinase Raf-1; and (3) tipifarnib, or Zarnestra™, an inhibitor of protein farnesyl transferases.
With technology comes training. The preface of the first edition described how, at that time, "Nearly every scientist working in HTS had a unique story for how they came to be there," and that "All that is changing. Training programs are beginning to appear and the techniques created in HTS are being used more and more frequently in laboratories outside the field." Six years later, reality surpassed even the most optimistic predictions. Over 55 academic screening centers have been created (3), which provide both HTS services and training. Universities have become major players in this field, educating researchers who, in the past, had to rely on extramural institutions to learn the trade. The National Institutes of Health Roadmap, created in 2002, has completed its first phase and created the Molecular Libraries Screening Center Network (MLSCN) as part of the Molecular Libraries Initiative (4). These 10 HTS centers were established as a pilot program to apply HTS techniques in academic research with the overarching goal to "expand the availability and use of chemical probes to explore the function of genes, cells, and pathways in health and disease and to provide annotated information on the biological activities of compounds contained in the central Molecular Libraries Small Molecule Repository in a public database." Historically, serendipity and keen observation of natural events have been the main sources for these tools. HTS now allows the systematic search for such probes. In addition, HTS allows a better understanding of the specificity of these compounds, an essential characteristic for their usefulness. While much has changed, the core principles of HTS have largely remained unchanged. Each organization is structurally unique, but all retain key elements: an assay must be developed, a chemical library must be assembled and managed, a screen must be performed, and data must be analyzed. Each of these functions is discussed in this volume. While assembling this new edition, we made a few choices. First, we wanted to remain true to the mission of the first edition: to serve as an introduction to HTS for scientists who are just entering the field, as well as to provide enough detail to be useful for scientists in established HTS operations. Second, while the HTS field regularly sees the introduction of new screening technologies, we wanted to give the lion's share of the volume to the well-established methods. They are most likely to be widely used by the intended reader. Third, we wanted to give a detailed treatment of the activities that are immediately related to HTS: compound library management, data handling, and robotics. Finally, we purposely left out ancillary methods: natural compound selection, chemical diversity assessment, orthogonal assays, and ADME-Tox issues. These essential tools would have been underserved in this volume. The reader will encounter terminology that is unique to HTS and has unique connotations in this industry. To assist with this problem, the Society for Biomolecular Sciences has assembled a glossary (5). We encourage both experienced "screeners" and those new to the field to review these definitions. We hope this manual will be of use to you and would like to acknowledge the authors who contributed to it: not only are they experts in their field, they are also great teachers who wanted to share their knowledge and enthusiasm for HTS.
William P. Janzen, Chapel Hill, NC
Paul Bernasconi, Research Triangle Park, NC
References
1. Landers, P. (2004) Testing machines were built to streamline research – but may be stifling it. Wall Street Journal, 24 February 2004.
2. Fox, S., Farr-Jones, S., Sopchak, L., Boggs, A., Nicely, H. W., Khoury, R., and Biros, M. (2006) High-throughput screening: update on practices and success. Journal of Biomolecular Screening 11: 864–869.
3. Society for Biomolecular Sciences website: http://www.sbsonline.org
4. NIH Roadmap website: http://nihroadmap.nih.gov
5. http://www.sbsonline.org/links/terms.php
Contents
Preface
Contributors
1. Design and Implementation of High-Throughput Screening Assays
   Ricardo Macarrón and Robert P. Hertzberg
2. Creation of a Small High-Throughput Screening Facility
   Tod Flak
3. Informatics in Compound Library Management
   Mark Warne and Louise Pemberton
4. Statistics and Decision Making in High-Throughput Screening
   Isabel Coma, Jesus Herranz, and Julio Martin
5. Enzyme Assay Design for High-Throughput Screening
   Kevin P. Williams and John E. Scott
6. Application of Fluorescence Polarization in HTS Assays
   Xinyi Huang and Ann Aulabaugh
7. Screening G Protein-Coupled Receptors: Measurement of Intracellular Calcium Using the Fluorometric Imaging Plate Reader
   Renee Emkey and Nancy B. Rankl
8. High-Throughput Automated Confocal Microscopy Imaging Screen of a Kinase-Focused Library to Identify p38 Mitogen-Activated Protein Kinase Inhibitors Using the GE InCell 3000 Analyzer
   O. Joseph Trask, Debra Nickischer, Audrey Burton, Rhonda Gates Williams, Ramani A. Kandasamy, Patricia A. Johnston, and Paul A. Johnston
9. Recent Advances in Electrophysiology-Based Screening Technology and the Impact upon Ion Channel Discovery Research
   Andrew Southan and Gary Clark
10. Automated Patch Clamping Using the QPatch
   Kenneth A. Jones, Nicoletta Garbati, Hong Zhang, and Charles H. Large
11. High-Throughput Screening of the Cyclic AMP-Dependent Protein Kinase (PKA) Using the Caliper Microfluidic Platform
   Leonard J. Blackwell, Steve Birkos, Rhonda Hallam, Gretchen Van De Carr, Jamie Arroway, Carla M. Suto, and William P. Janzen
12. Use of Primary Human Cells in High-Throughput Screens
   Angela Dunne, Mike Jowett, and Stephen Rees
Index
Contributors
JAMIE ARROWAY • GlaxoSmithKline, Collegeville, PA, USA
ANN AULABAUGH • Chemical and Screening Sciences, Wyeth Research, Collegeville, PA, USA
STEVE BIRKOS • Nanosyn, Durham, NC, USA
LEONARD J. BLACKWELL • Wyeth Pharmaceuticals, Sanford, NC, USA
AUDREY BURTON • Scynexis, Inc., Research Triangle Park, NC, USA
GARY CLARK • BioFocus DPI, Saffron Walden, Essex, UK
ISABEL COMA • Molecular Discovery Research, GlaxoSmithKline, Tres Cantos, Madrid, Spain
ANGELA DUNNE • Screening and Compound Profiling Department, GlaxoSmithKline, Harlow, Essex, UK
RENEE EMKEY • Amgen Inc., Cambridge, MA, USA
TOD FLAK • BioAutomatix Consulting, Alameda, CA, USA
NICOLETTA GARBATI • Department of Biology, Psychiatry CEDD, GlaxoSmithKline SpA, Verona, Italy
RHONDA HALLAM • Independent Consultant
JESUS HERRANZ • Molecular Discovery Research, GlaxoSmithKline, Tres Cantos, Madrid, Spain
ROBERT P. HERTZBERG • Molecular Discovery Research, GlaxoSmithKline, Collegeville, PA, USA
XINYI HUANG • Chemical and Screening Sciences, Wyeth Research, Collegeville, PA, USA
WILLIAM P. JANZEN • Assay Development and Compound Profiling, Division of Medicinal Chemistry and Natural Products, Center for Integrative Chemical Biology and Drug Discovery, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC, USA
PATRICIA A. JOHNSTON • Discovery Programs, Cellumen, Inc., Pittsburgh, PA, USA
PAUL A. JOHNSTON • Department of Pharmacology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
KENNETH A. JONES • Lundbeck Research, Inc., Paramus, NJ, USA
MIKE JOWETT • Screening and Compound Profiling Department, GlaxoSmithKline, Harlow, Essex, UK
RAMANI A. KANDASAMY • BASF Corporation, Research Triangle Park, NC, USA
CHARLES H. LARGE • Department of Biology, Psychiatry CEDD, GlaxoSmithKline SpA, Verona, Italy
RICARDO MACARRÓN • Molecular Discovery Research, GlaxoSmithKline, Collegeville, PA, USA
JULIO MARTIN • Molecular Discovery Research, GlaxoSmithKline, Tres Cantos, Madrid, Spain
DEBRA NICKISCHER • Thermo Fisher Scientific, Pittsburgh, PA, USA
LOUISE PEMBERTON • Exelgen Ltd., Bude, Cornwall, UK
NANCY B. RANKL • BASF Corp., Research Triangle Park, NC, USA
STEPHEN REES • Screening and Compound Profiling Department, GlaxoSmithKline, Harlow, Essex, UK
JOHN E. SCOTT • Department of Pharmaceutical Sciences and BRITE, North Carolina Central University, Durham, NC, USA
ANDREW SOUTHAN • Ion Channel Biology, BioFocus DPI, Saffron Walden, Essex, UK
CARLA M. SUTO • Independent Consultant
O. JOSEPH TRASK, JR. • Cellular Imaging Technologies, Duke University Center for Drug Discovery, Durham, NC, USA
GRETCHEN VAN DE CARR • Nanosyn, Durham, NC, USA
MARK WARNE • Exelgen Ltd., Bude, Cornwall, UK
KEVIN P. WILLIAMS • Department of Pharmaceutical Sciences and BRITE, North Carolina Central University, Durham, NC, USA
RHONDA G. WILLIAMS • BD Diagnostics – Diagnostic Systems, TriPath, Burlington, NC, USA
HONG ZHANG • Lundbeck Research, Inc., Paramus, NJ, USA
Chapter 1
Design and Implementation of High-Throughput Screening Assays
Ricardo Macarrón and Robert P. Hertzberg
Abstract
HTS is at the core of the drug discovery process, and so it is critical to design and implement HTS assays in a comprehensive fashion involving scientists from the disciplines of biology, chemistry, engineering, and informatics. This requires careful analysis of many variables, starting with the choice of assay target and ending with the discovery of lead compounds. At every step in this process, there are decisions to be made that can greatly impact the outcome of the HTS effort, to the point of making it a success or a failure. Although specific guidelines should be established to ensure that the screening assay reaches an acceptable level of quality, many choices require pragmatism and the ability to compromise opposing forces.
Keywords: HTS process, Assay technology, Biochemical assays, Cellular assays, HTS quality, HTS validation.
1. Introduction to the HTS Process
In most pharmaceutical and biotechnology companies, high-throughput screening (HTS) is a central function in the drug discovery process. This has resulted from the fact that there are increasing numbers of validated therapeutic targets being discovered through advances in human genomics, and increasing numbers of chemical compounds being produced through high-throughput chemistry initiatives. Many large companies have over 100 targets in their pipeline at any given time, and lead compounds must be found to progress these targets. In some cases we know enough about the target and can apply knowledge-based approaches to hit discovery such as focused screening and structure-based design. However, in many cases, particularly for more novel targets, there is limited knowledge about the types
of compounds that may interact with the protein. As such, pharmaceutical companies often rely on HTS as the primary engine driving lead discovery. The HTS process is a subset of the drug discovery process and can be described as the phase from Target to Lead. This phase can be broken down into the following steps:
Target Choice
Reagent Procurement
Screening Collections
Assay Development and Validation
HTS Implementation
Data Capture, Storage and Analysis
Leads
It is critically important to align the target choice and assay method to ensure that a biologically relevant and robust screen is configured. The assay must be configured correctly so that compounds with the desired biological effect will be found if they exist in the screening collection. The assay must demonstrate low variability and high signal to background so that false negatives and false positives are minimized. The screen must have sufficient throughput and low cost to enable screening of large compound collections. To meet these requirements, organizations must ensure that communication between therapeutic departments, assay development groups, and screening scientists occurs early – as soon as the target is chosen – and throughout the assay development phase. Reagent procurement is often a major bottleneck in the HTS process. This can delay the early phases of assay development – e.g., when active protein cannot be obtained – and also delay HTS implementation if scale-up of protein or cells fails to produce sufficient reagent to run the full screen. For efficient HTS operation, there must be sufficient reagent available to run the entire screening campaign before HTS can start. Otherwise, the campaign will need to stop halfway through and the screening robots will have to be reconfigured for other work. Careful scheduling between reagent procurement departments and HTS functions is critical to ensure optimum use of robotics and personnel. Modern HTS laboratories have borrowed concepts from the manufacturing industry to smooth the flow of targets through the hit discovery process (e.g., supply chain management, constrained work-in-progress, and statistical quality control) and these ideas have begun to pay off with higher productivity and shorter lead times. Successful HTS implementation is multidisciplinary and requires close alignment of computational chemists directing the synthesis or the acquisition of compound collections, sample
management specialists maintaining and distributing screening decks, technology specialists responsible for setting up and supporting HTS automation, biologists and biochemists with knowledge of assay methodology, IT personnel capable of collecting and analyzing large datasets, and medicinal chemists capable of examining screening hits to look for patterns that define lead series. Through the marriage of these diverse specialties, therapeutic targets can be put through the lead discovery engine called HTS and lead compounds will emerge.
2. Choice of Therapeutic Target There are three major considerations for choosing a therapeutic target destined for HTS: target validity (i.e., disease relevance), chemical tractability, and screenability. Disease relevance is the most important consideration and also the most complex. Since there is an inverse relationship between target novelty and validity, organizations should choose a portfolio of targets, which span the risk spectrum. Some targets will have a high degree of validation but low novelty (fast follower targets) and others will be highly novel but poorly linked to disease. Target validity can be assessed with genetic approaches and/or compound-based experiments. Genetic approaches such as gene knockouts or RNAi can be time-consuming and sometimes lead to false conclusions but can be performed without the need for expensive screening. Compound-based target validation approaches require taking a risk with less-validated targets and spending money to screen for tool compounds, followed by cell-based or in vivo experiments. Both approaches have their advantages and disadvantages, and most organizations use a combination. However, many fail to fully analyze the economics of this equation. Efforts to reduce the cost and increase the success rate of HTS can shift the equation in favor of running screens for targets on the less-validated end of the spectrum. While disease relevance should be the primary consideration when choosing a target, one should also consider technical factors important to the HTS process. Chemical tractability considerations relate to the probability that drug-like compounds capable of producing the therapeutically relevant effect against a specific target are present in the screening collection and can be found through screening. Years of experience in HTS within the industry have suggested that certain target classes are more chemically tractable than others, including G protein-coupled receptors (GPCRs), ion channels, nuclear hormone receptors, and kinases. On the other side of the spectrum, targets that work via protein–protein interactions have a lower probability of being successful in HTS campaigns. One reason for this is the fact that compound libraries often do not contain compounds of sufficient
size and complexity to disrupt the large surface of protein–protein interaction that is encountered in these targets. Natural products are one avenue that may be fruitful against protein–protein targets, since these compounds are often larger and more complex than those in traditional chemical libraries. The challenge for these targets is finding compounds that have the desired inhibitory effect and also contain drug-like properties (e.g., are not too large in molecular weight). Recently, several groups have had success with protein–protein interactions by screening for small fragments that weakly inhibit the interaction and building them up to produce moderate-sized potent inhibitors. Certain subsets of protein–protein interaction targets have been successful from an HTS point of view. For example, chemokine receptors are technically a protein–protein interaction (within the GPCR class) and there are several examples of successful lead compounds for targets in this class (1). Similarly, certain integrin receptors that rely on small epitopes (i.e., RGD sequences) have also been successful at producing lead compounds (2). There may be other classes of tractable protein–protein interactions that remain undiscovered due to limitations in compound libraries. Based on the thinking that chemically tractable targets are easier to inhibit, most pharmaceutical companies have concentrated much of their effort on these targets and diminished work on more difficult targets. While this approach has some merits, one should be careful not to entirely eliminate target classes that would otherwise be extremely attractive from a biological point of view. Otherwise, the prophecy of chemical tractability will be self-fulfilled, since today’s compound collections will not expand into new regions and we will never find leads for more difficult, biologically relevant targets. There is clearly an important need for enhancing collections by filling holes that chemical history has left open. The challenge is filling these holes with drug-like compounds that are different from the traditional pharmacophores of the past. This is critical if we are to increase HTS success rates (proportion of targets which give starting points for medicinal chemistry) from the current 60% (3,4) to 80% or higher. A final factor to consider when choosing targets is screenability – the technical probability of developing a robust and high-quality screening assay. The impact of new assay technologies has made this less important, since there are now many good assay methods available for a wide variety of target types (see Section 3). Nevertheless, some targets are more technically difficult than others. Of the target types mentioned above, GPCRs, kinases, proteases, nuclear hormone receptors, and protein–protein interactions are often relatively easy to establish as screens. Ion channels are more difficult, although new technologies are being developed, which make these more approachable from an HTS point of view (5). Enzymes other than kinases and proteases must be considered on a case-by-case basis depending on the nature of the substrates involved.
The reductionist approach in which a single target is hypothesized to be important for disease carries some risks. Have you chosen an irrelevant or intractable target? What if a combination of targets is required to elicit the desired biological effect? An alternative approach gaining favor is the use of phenotypic and/or pathway assays for hit discovery. Phenotypic assays, sometimes called ‘‘black box’’ assays, measure a cellular property in response to test compound. Examples include secretion of protein factors, chemotaxis, apoptosis, and cell shape change. Pathway assays are more precise in that protein properties within a cellular pathway are measured. Examples include intracellular protein phosphorylation and cellular trafficking. Often a combination of these approaches can be used to turn a ‘‘black box’’ into a ‘‘gray box.’’ An advantage of phenotypic and pathway assays is the fact that multiple targets are screened at once, providing multiple chances for compounds to ‘‘find’’ the most tractable and biologically relevant target(s) in the cell. However, phenotypic assays are more difficult to configure and more expensive to run, and hit deconvolution to define the specific target(s) of your hits is time-consuming and complex. Furthermore, provision of relevant cells is difficult but recent advances in human stem cells are beginning to alleviate this problem. All of these factors must be considered on a case-by-case basis and should be evaluated at the beginning of a Target-to-Lead effort before making a choice to go forward. Working on an expensive and technically difficult assay must be balanced against the degree of validation and biological relevance. While the perfect target is chemically tractable, technically easy, inexpensive, and biologically relevant, such targets are rare. The goal is to work on a portfolio that spreads the risk among these factors and balances the available resources.
3. Choice of Assay Method There are usually several ways of looking for hits of any given target. The first and major choice to make is between a biochemical and a cell-based assay. By biochemical we understand an assay developed to look for compounds that interact with an isolated target in an artificial environment. This was the most popular approach in the early 1990 s, the decade in which HTS became a mature and central area of drug discovery. This bias toward biochemical assays for HTS was partly driven by the fact that cellbased assays were often more difficult to run in high throughput. However, advances in technology and instrumentation for cellbased assays that translated to commercial products around the early 2000 s, together with disappointments in the success rates of molecular-based hit discovery campaigns, changed the tilt toward cell-based HTS. Among these advances are the emergence of
HTS-compatible technology to measure G protein-coupled receptor (GPCR) (6) and ion channel function (5), confocal imaging platforms for rapid cellular and subcellular imaging, and the continued development of reporter gene technology. In a recent survey (3), HTS labs reported a 50/50 split between biochemical and cell-based assays in 2006, with a projection to be 60% cell-based, 40% biochemical in 2008. For most drug discovery programs, both types of assays are required for hit discovery and characterization and subsequent lead optimization. Everything being equal (technical feasibility, cost, and throughput), cell-based assays are often preferred for HTS because compounds tested will be interacting with a more realistic mix of protein target conformations in their physiological milieu, i.e., with the right companions (proteins, metabolites, etc.) at the right concentration. Additionally, cell-based assays tend to avoid some common artifacts in biochemical assays such as aggregators (7). On the other hand, cell-based assays may identify hits that do not act on the target or the pathway of interest and may miss hits of interest that do not penetrate the cell membrane. If a cell-based assay is chosen for primary screening, a biochemical assay will often be used as a secondary screen to characterize hits and guide lead optimization. A wide variety of assay formats is now available at relatively affordable prices to cope with most needs in the HTS labs. The following sections provide a very succinct summary of some of the most popular choices. A recent comprehensive review by Inglese and coworkers is recommended for further reading (8). 3.1. Biochemical Assay Methods
While laborious separation-based assay formats such as radiofiltration and ELISAs were common in the early 1990 s, most biochemical screens today use simple homogeneous ‘‘mix-and-read’’ formats. This is particularly true for HTS assays run in industrial labs that are conducted on high-density microtiter plates (384 or 1536 wells). The most common assay readouts used in biochemical assay methods for HTS are optical, including absorbance, fluorescence, luminescence, and scintillation. Among these, fluorescence-based techniques are amongst the most important detection approaches used for HTS (9). Fluorescence techniques give very high sensitivity, which allows assay miniaturization, and are amenable to homogeneous formats. One factor to consider when developing fluorescence assays for screening compound collections is wavelength; in general, short excitation wavelengths (especially those below 400 nm) should be avoided to minimize interference produced by test compounds. Although fluorescence intensity measurements have been successfully applied in HTS, this format is mostly applied to a narrow range of enzyme targets for which fluorogenic substrates are available. A more widely used fluorescence readout is time-resolved fluorescence resonance energy transfer (TR-FRET) (10). This is a
dual-labeling approach based upon long-range energy transfer between fluorescent Ln3+ complexes and a suitable resonance energy acceptor. These approaches give high sensitivity by reducing background and a large number of HTS assays have now been configured using TR-FRET. This technique is highly suited to measurements of protein–protein interactions and has also been tailored to detect important metabolites such as cAMP. Another versatile fluorescence technique is florescence polarization (FP), which can be used to measure bimolecular association events (10). Immobilized metal-ion affinity-based FP (IMAP, Molecular Devices) (11) is a variation of FP that can be applied to test activity of kinases and other enzymes. Radiometric techniques such as scintillation proximity assay (SPA, GE) (12) used to be very common in the 1990 s. Despite advances in imaging and bead technology that enabled faster readouts and reduced the occurrence of optical interferences, radiometric assays have several disadvantages including safety and limited reagent stability. In recent years, these techniques have been displaced by fluorescence assay technologies; current estimates from various surveys of HTS laboratories indicate that radiometric assays presently constitute around 5% of all screens performed. Other technologies able to circumvent technical hurdles for niche difficult assays are amplified luminescence proximity (AlphaScreen, Perkin Elmer) (13), electrochemiluminescence (ECL, Meso Scale Discovery) (14), fluorescence correlation spectroscopy (FCS), and other confocal techniques (9). Label-free assays are a diverse set of techniques of growing interest and demand. Many of the methods are modern adaptation to the high-throughput environment of well-established technologies such as mass spectroscopy or calorimetry. An overview of the commercial solutions in place and their principles has been recently published by Rich and Myszka (15). 3.2. Cell-Based Assay Methods
As recently as the mid-1990s, most cell-based assay formats were not consistent with HTS requirements. However, as recent technological advances have facilitated higher throughput functional assays, cell-based formats now make up a reasonable proportion of screens performed today. One of the most important advances in cell-based assay methodology is the development of the FLIPR1 (MDS Analytical Technologies), a fluorescence imaging plate reader with integrated liquid handling that facilitates the simultaneous fluorescence imaging of 384 samples to measure intracellular calcium mobilization in real time (6). This format is now commonly used for GPCR and ion channel targets. Based on the success of the FLIPR1, several additional cell-based assays for GPCRs were developed. One useful technology uses the photoprotein aequorin to measure intracellular calcium levels. When aequorin binds to calcium, it oxidizes coelenterazine with the
emission of light, which can be easily measured on a suitable plate reader. Another important cell-based assay method involves the measurement of intracellular cAMP levels, which allows the screening of Gi- and Gs-coupled GPCRs. Technologies for cAMP measurement include the older RIA, ELISA, and SPA methods as well as recent techniques such as TR-FRET, amplified luminescence proximity (AlphaScreen1), and enzyme fragment complementation (EFC, HitHunterTM), which are less expensive and have higher throughput (16). Significant advances in ion channel screening have occurred over the past decade (17). Calcium-sensing dyes read on FLIPR1 are commonly used to measure channels that conduct calcium, while voltage-sensing dyes are used to track changes in membrane potential. An important advance in high-throughput ion channel assays was the development of FRET-based voltage-sensing dyes, where a pair of molecules exhibit FRET, which is disrupted when the membrane is depolarized. Ion flux assays using nonradioactive tracers analyzed by atomic absorbance spectroscopy (AAS) can now be run in HTS format using recently available instrumentation. And the standard for measuring ion channel activity, patch clamp measurements, has been facilitated by the development of automated instrumentation such as Ionworks Quattro, PatchXpress, and QPatch (18). While these technologies are remarkable, further improvements are necessary before patch clamp measurements can be used for primary HTS. Cellular phenotypes and pathways are now routinely measured using a variety of techniques amenable to HTS. The reporter gene assay is the oldest and most well-studied method, which allows the discovery of compounds that modulate a pathway resulting in changes in gene expression (19). This method offers certain advantages relative to other cell-based assays, in that it requires fewer cells, is easier to automate, and can be performed in 1536 well plates. Descriptions of miniaturized reporter gene readouts include luciferase – undoubtedly, the most popular reporter gene (20) – secreted alkaline phosphate, and beta-lactamase. However, reporter gene assays are of relatively low resolution since they measure effects on an entire pathway at once. Recent advances in cellular imaging have allowed HTS of higher resolution phenomena such as intracellular protein redistribution, GPCR internalization, and other cellular pathway events. Methods using protein complementation assays and bioluminescence resonance energy transfer (BRET) combined with cellular imaging can be very useful (8). Another cellular phenotype that is commonly measured in HTS is protein secretion. Classical methods to measure protein secretion such as RIA and ELISA are being replaced by improved techniques such as AlphaLISATM (21) and MultiArray1 or MultiSpot1 electrochemiluminescence-based solutions (14).
3.3. Matching Assay Method to Target Type
Often, one has a choice of assay method for a given target type (Table 1.1). To illustrate the various factors that are important when choosing an assay type, let us consider the important GPCR target class. GPCRs can be screened using cell-based assays such as FLIPR, aequorin, and reporter gene or biochemical formats such as SPA and FP. One overriding factor when choosing between functional or binding assays for GPCRs is whether one seeks to find agonists or antagonists. Functional assays are much more amenable to finding agonists than are binding assays, while antagonists can be found with either format. FLIPR assays are relatively easy to develop, but this screening method is more labor-intensive (particularly with respect to cell culture requirements) and more difficult to automate than reporter gene assays. In contrast, the need for longer term incubation time for reporter gene assays (4–6 h vs. min for FLIPR) means that cytotoxic interference by test compounds may be more problematic. On the plus side, reporter gene readouts for GPCRs can sometimes be more sensitive to agonists than FLIPR. Aequorin offers some advantages of FLIPR while being easier to run and less expensive. Regarding biochemical assays for GPCRs, SPA remains a common format since radiolabeling is often facile and nonperturbing. However, fluorescence assays for GPCRs such as FP and FIDA are becoming more important. Fluorescent labels are more stable, safer, and often more economical than radiolabels. However,
Table 1.1. The most important assay formats for various target types are shown.

| Target type              | Biochemical assay formats                  | Cell-based assay formats                                                |
|--------------------------|--------------------------------------------|-------------------------------------------------------------------------|
| GPCRs                    | SPA, FP, FIDA                              | FLIPR, reporter gene, aequorin, TR-FRET, AlphaScreen, EFC, cell imaging |
| Ion channels             | SPA, FP                                    | FLIPR, FRET, AAS, automated patch clamp                                 |
| Nuclear hormone receptor | FP, TR-FRET, SPA, AlphaScreen              | Reporter gene, cell imaging                                             |
| Kinases                  | FP, TR-FRET, SPA, IMAP                     | Cellular phosphorylation, cell imaging                                  |
| Protease                 | FLINT, FRET, TR-FRET, FP, SPA              | Reporter gene, cell imaging                                             |
| Other enzymes            | FLINT, FRET, TR-FRET, FP, SPA, absorbance  |                                                                         |
| Protein–protein          | TR-FRET, FRET, BRET, SPA, ECL, AlphaScreen | BRET, cell imaging, reporter gene                                       |
while fluorescent labeling is becoming easier and more predictable, these labels are larger and thus can sometimes perturb the biochemical interaction (in either direction). These examples illustrate some of the trade-offs one needs to consider when choosing an assay type. In general, one should choose the assay format that is easiest to develop, most predictable, most relevant, and cheapest to run. These factors, however, are not always known in advance. And even worse, they can be at odds with each other and thus must be balanced to arrive at the best option. Additional important quality considerations include compound interference issues and assay variability. It makes little sense to run a cheap and easy assay that is variable or overly sensitive to inhibition. In some cases it makes sense to parallel track two formats during the assay development phase and choose between them based on which is easiest to develop and most facile. Finally, in addition to these scientific considerations, logistical factors such as the number of specific readers or robot types available in the HTS lab and the queue size for these systems must be taken into account.
4. Assay Development and Validation
4.1. Critical Biochemical Parameters in HTS Assays
The final conditions of an HTS assay are chosen following the optimization of quality without compromising throughput, while keeping costs low. The most critical points that must be considered in the design of a high-quality assay are biochemical data and statistical performance. Assay optimization is often required to achieve acceptable HTS performance while keeping assay conditions within the desired range. This usually significantly improves the stability and/or the activity of the biological system studied and has therefore become a key step in the development of screening assays (22). The success of an HTS campaign in finding hits with the desired profile depends primarily on the presence of such compounds in the collection tested. But it is also largely dependent on the ability of the researcher to engineer the assay in accordance with that profile while reaching an appropriate statistical performance. A classical example that illustrates the importance of the assay design is how substrate concentration determines the sensitivity for different kinds of enzymatic inhibitors. If we set the concentration of one substrate in a screening assay at 10 times Km, competitive inhibitors of that enzyme–substrate interaction with a Ki greater than one-eleventh of the compound concentration used
in HTS will show less than 50% inhibition and will likely be missed – i.e., competitive inhibitors with a Ki of 0.91 μM or higher would be missed when screening at 10 μM. On the other hand, the same problem will take place for uncompetitive inhibitors if substrate concentration is set at one-tenth of its Km. Therefore, it is important to know what kind of hits are sought in order to make the right choices in substrate concentration; often, one chooses a substrate concentration that facilitates discovery of both competitive and uncompetitive inhibitors. In this section, we describe the biochemical parameters of an assay that have a greater influence on the sensitivity of finding different classes of hits and some recommendations about where to set them.
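As a purely illustrative sketch of the arithmetic behind this example (not part of the original text, and using the competitive-inhibitor relationship IC50 = (1 + [S]/Km) × Ki introduced in Section 4.1.1.1), the short Python snippet below computes the weakest competitive Ki that still registers as a hit when the substrate is set at 10 times Km:

```python
# Illustrative sketch: sensitivity of a screen to competitive inhibitors
# when the substrate is set at 10 x Km.

def competitive_ic50(ki_uM, s_over_km):
    """Observed IC50 (in uM) of a competitive inhibitor at a given [S]/Km."""
    return (1.0 + s_over_km) * ki_uM

screening_conc_uM = 10.0   # compound concentration tested in the screen
s_over_km = 10.0           # substrate set at 10 times Km

# The weakest competitive inhibitor that still reaches 50% inhibition is the
# one whose observed IC50 equals the screening concentration:
ki_cutoff = screening_conc_uM / (1.0 + s_over_km)
print(f"Ki cut-off at [S] = 10 Km: {ki_cutoff:.2f} uM")  # ~0.91 uM

# Any competitive inhibitor with a Ki above ~0.91 uM shows less than 50%
# inhibition at 10 uM and is likely to be scored as inactive.
```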
4.1.1. Enzymatic Assays
The sensitivity of an enzymatic assay to different types of inhibitors is a function of the ratio of substrate concentration to Km (S/Km).
4.1.1.1. Substrate Concentration
- Competitive inhibitors: for reversible inhibitors that bind to a binding site that is the same as one substrate, the more of that substrate present in the assay, the less inhibition observed. The relationship between IC50 (compound concentration required to observe 50% inhibition of enzymatic activity with respect to an uninhibited control) and Ki (inhibition constant) is (23): IC50 = (1 + S/Km) × Ki
As shown in Fig. 1.1, at S/Km ratios less than 1 the assay is more sensitive to competitive inhibitors, with an asymptotic limit of IC50 = Ki. At high S/Km ratios, the assay becomes less suitable for finding this type of inhibitor.
- Uncompetitive inhibitors: if the inhibitor binds to the enzyme–substrate complex or any other intermediate complex but not to the free enzyme, the dependence on S/Km is the opposite of that described for competitive binders. The relationship between IC50 and Ki is (23): IC50 = (1 + Km/S) × Ki. High substrate concentrations make the assay more sensitive to uncompetitive inhibitors (Fig. 1.1).
- Noncompetitive (allosteric) inhibitors: if the inhibitor binds with equal affinity to the free enzyme and to the enzyme–substrate complex, the inhibition observed is independent of the substrate concentration. The relationship between IC50 and Ki is (23):
Fig. 1.1. Variation of the IC50/Ki ratio with the S/Km ratio for different types of inhibitors. At [S] = Km, IC50 = 2Ki for competitive and uncompetitive inhibitors. For noncompetitive inhibitors, IC50 = Ki at all substrate concentrations.
IC50 = Ki
- Mixed inhibitors: if the inhibitor binds to the free enzyme and to the enzyme–substrate complex with different affinities (Ki1 and Ki2, respectively), the relationship between IC50 and these constants is (24): IC50 = (S + Km) / (Km/Ki1 + S/Ki2)
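The relationships listed above are easy to tabulate. The following sketch (an illustration added here, not part of the original protocol) reproduces the behaviour plotted in Fig. 1.1 for the three simple inhibitor classes:

```python
# Illustrative sketch: IC50/Ki ratio as a function of [S]/Km for the
# inhibitor classes described above (the relationships shown in Fig. 1.1).

def ic50_over_ki(s_over_km, mode):
    """Return the IC50/Ki ratio for a given [S]/Km and inhibition mode."""
    if mode == "competitive":
        return 1.0 + s_over_km        # IC50 = (1 + S/Km) * Ki
    if mode == "uncompetitive":
        return 1.0 + 1.0 / s_over_km  # IC50 = (1 + Km/S) * Ki
    if mode == "noncompetitive":
        return 1.0                    # IC50 = Ki at any substrate concentration
    raise ValueError(f"unknown mode: {mode}")

for ratio in (0.1, 1.0, 10.0):
    values = ", ".join(
        f"{mode}: {ic50_over_ki(ratio, mode):.1f} x Ki"
        for mode in ("competitive", "uncompetitive", "noncompetitive"))
    print(f"[S]/Km = {ratio:>4}: {values}")

# At [S] = Km, both competitive and uncompetitive inhibitors read out at
# IC50 = 2 Ki, which is why setting the substrate at Km is a common compromise.
```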
In summary, setting the substrate(s) concentration(s) at the Km value is an optimal way of ensuring that all types of inhibitors exhibiting a Ki close to or below the compound concentration in the assay can be found in an HTS campaign. Nevertheless, if there is a specific interest in favoring or avoiding a certain type of inhibitor, then the S/Km ratio should be chosen considering the information provided above. For instance, many ATP-binding enzymes are tested in the presence of saturating concentrations of ATP to minimize inhibition from compounds that bind to the ATP-binding site. Quite often the cost of one substrate or the limitations of the technique used to monitor enzymatic activity (Table 1.2) may preclude setting the substrate concentration at its ideal point. As in many other situations found while implementing an HTS assay, the screening scientist must consider all factors involved and look for the optimal solution. For instance, if the sensitivity of a detection technology requires setting S = 10 × Km to achieve an acceptable signal to background, competitive
Table 1.2. Examples of limitations to substrate concentration imposed by some popular assay technologies. These limitations also apply to the ligand in binding assays or to other components in assays monitoring any kind of binding event.

| Assay technology                                          | Limitations                                                                                                                                                                  |
|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Fluorescence                                              | Inner filter effect at high concentrations of fluorophore (usually >1 μM)                                                                                                     |
| Fluorescence polarization                                 | >30% substrate depletion required                                                                                                                                             |
| Capture techniques (ELISA, SPA, FlashPlate, BET, others)  | Concentrations of the reactant captured must be in alignment with the upper limit of binding capacity                                                                         |
| Capture techniques and any technique monitoring binding   | Nonspecific binding (NSB) of the product or of any reactant to the capture element (bead, plate, membrane, antibody, etc.) may result in misleading activity determinations   |
| All                                                       | Sensitivity limits impose a lower limit to the amount of product detected                                                                                                     |
inhibitors with a Ki greater than one-eleventh of the compound concentration tested will not likely be identified and will limit the campaign to finding more potent inhibitors. In this case, working at a higher compound concentration would help to find some of the weak inhibitors otherwise missed. If this is not feasible, it is better to lose weak inhibitors while running a statistically robust assay, rather than making the assay more sensitive by lowering substrate concentration to a point of unacceptable signal to background. The latter approach is riskier since a bad statistical performance would jeopardize the discovery of more potent hits (see Section 4.3).
4.1.1.2. Enzyme Concentration
The accuracy of inhibition values calculated from enzymatic activity in the presence of inhibitors relies on the linear response of activity to the enzyme concentration. Therefore, an enzyme dilution study must be performed in order to determine the linear range of enzymatic activity with respect to enzyme concentration. As shown in Fig. 1.2 for valyl-tRNA synthetase, at high enzyme concentrations there is typically a loss of linearity due to substrate depletion, protein aggregation, or limitations in the detection system. If the enzyme is not stable at low concentrations, or if the assay method does not respond linearly to product formation or substrate depletion, there could also be a lack of linearity in the lower end. In addition, enzyme concentration marks a lower limit to the accurate determination of inhibitor potency. IC50 values lower than one-half of the enzyme concentration cannot be measured;
Fig. 1.2. Protein dilution curve for valyl-tRNA synthetase. The activity was measured after 20 min incubation following the SPA procedure described (25).
this effect is often referred to as "bottoming out." As the quality of compound collections improves, this could be a real problem since SAR trends cannot be observed among the more potent hits. Obviously, enzyme concentration must be kept far below the concentration of compounds tested in order to find any inhibitor. In general, compounds are tested at micromolar concentrations (1–100 μM) and as a rule of thumb, it is advisable to work at enzyme concentrations below 100 nM. On the other hand, the assay can be made insensitive to certain undesired hits (such as inhibitors of enzymes added in coupled systems) by using higher concentrations of these proteins. In any case, the limiting step of a coupled system must be the one of interest, and thus the auxiliary enzymes should always be in excess.
4.1.1.3. Incubation Time and Degree of Substrate Depletion
As described above for enzyme concentration, it is important to assess the linearity vs. time of the reaction analyzed. HTS assays are often end-point and so it is crucial to select an appropriate incubation time. Although linearity vs. enzyme concentration is not achievable if the end-point selected does not lie in the linear range of the progress curves for all enzyme concentrations involved, exceptions to this rule do happen, and so it is important to check it as well. To determine accurate kinetic constants, it is crucial to measure initial velocities. However, for the determination of acceptable inhibition values, it is sufficient to be close to linearity. Therefore, the classical rule found in Biochemistry textbooks of working at or below 10% substrate depletion [e.g. (26)] does not necessarily apply to HTS assays. Provided that all compounds in a
% S depleted with 50 % enzyme inhibition % I observed 0 0
1000
2000 3000 Time (arbitrary units)
4000
0 5000
Fig. 1.3. Theoretical progress curves at S ¼ Km of an uninhibited enzymatic reaction and a reaction with an inhibitor at its IC50 concentration. The inhibition values determined at different end-points throughout the progress curve are shown as well. Initial velocities are represented by dotted lines.
collection are treated in the same way, if the inhibitions observed are off by a narrow margin, it is not a problem. As shown in Fig. 1.3, at 50% substrate depletion with an initial substrate concentration at its Km, the inhibition observed for a 50% real inhibition is 45%, an acceptable error. For higher inhibitions the errors are lower (e.g., instead of 75% inhibition, 71% would be observed). At lower S/Km ratios the errors are slightly higher (e.g., at S ¼ 1/10 Km, a 50% real inhibition would yield an observed 42% inhibition, again at 50% substrate depletion). This flexibility to work under close-to-linearity but not truly linear reaction rates makes it feasible to use certain assay technologies in HTS – e.g., fluorescence polarization – that require a high proportion of substrate depletion in order to produce a significant change in signal. Secondary assays configured within linear rates should allow a more accurate determination of IC50 s for hits. In reality, the experimental progress curve for a given enzyme may differ from the theoretical one depicted here for various reasons such as non Michaelis–Menten behavior, reagent deterioration, product inhibition, and detection artifacts. In view of the actual progress curve, practical choices should be made to avoid missing interesting hits. 4.1.1.4. Order of Reagent Addition
The order of addition of reactants and putative inhibitors is important to modulate the sensitivity of an assay for slow binding and irreversible inhibitors.
16
´ and Hertzberg Macarron
A preincubation (usually 5–10 min) of enzyme and test compound favors the finding of slow-binding competitive inhibitors. If the substrate is added first, these inhibitors have a lower probability of being found. In some cases, especially for multisubstrate reactions, the order of addition can be engineered to favor certain uncompetitive inhibitors. For instance, a mimetic of an amino acid that could act as an inhibitor of one aminoacyl-tRNA synthetase will exhibit a much higher inhibition if preincubated with enzyme and ATP before addition of the amino acid substrate. 4.1.2. Binding Assays
Although this section is focused on receptor binding, other binding reactions (protein–protein, protein–nucleic acid, etc.) are governed by similar laws, and so assays to monitor these interactions should follow the guidelines hereby suggested.
4.1.2.1. Ligand Concentration
The equation that describes binding of a ligand to a receptor, developed by Langmuir to describe adsorption of gas films to solid surfaces, is virtually identical to the Michaelis–Menten equation for enzyme kinetics: BL ¼ Bmax L=ðKd þ LÞ where BL ¼ bound ligand concentration (equivalent to v0), Bmax ¼ maximum binding capacity (equivalent to Vmax), L ¼ total ligand concentration (equivalent to S), and Kd ¼ equilibrium affinity constant also known as dissociation constant (equivalent to Km). Therefore, all equations disclosed in Section 4.1.1.1 can be directly translated to ligand-binding assays. For example, for competitive binders IC50 ¼ ð1 þ L=KdÞ Ki Uncompetitive binders cannot be detected in binding assays; functional assays must be performed to detect this inhibitor class. Allosteric binders could be found if their binding modifies the receptor in a fashion that prevents ligand binding. Typically, ligand concentration is set at the Kd concentration as an optimal way to attain a good signal (50% of binding sites occupied). This results in a good sensitivity for finding competitive binders.
4.1.2.2. Receptor Concentration
The same principles outlined for enzyme concentration in Section 4.1.1.2 apply to receptor concentration, or concentration of partners in other binding assays. In most cases, especially with membrane-bound receptors, the nominal concentration of receptor is not known but can be determined by measuring the proportion of bound ligand at the Kd. In any case, linearity of response (binding) with respect to receptor (membrane) concentration should be assessed.
Design and Implementation of High-Throughput Screening Assays
17
In traditional radiofiltration assays, it was recommended to set the membrane concentration so as to reach at most 10% of ligand bound at the Kd concentration, i.e., the concentration of receptor present should be below one-fifth of Kd (27). Although this is appropriate to get accurate binding constants, it is not absolutely required to find competitive binders in a screening assay. Some formats (FP, SPA in certain cases) require a higher proportion of ligand bound to achieve acceptable statistics, and receptor concentrations close or above the Kd value have to be used. Another variable to be considered in ligand-binding assays is nonspecific binding (NSB) of the labeled ligand. NSB increases linearly with membrane concentration. High NSB leads to unacceptable assay statistics, but this can often be improved with various buffer additives (see Section 4.2). 4.1.2.3. Preincubation and Equilibrium
As discussed for enzymatic reactions, a preincubation of test compounds with the receptor would favor slow binders. After the preincubation step, the ligand is added and the binding reaction should be allowed to reach equilibrium in order to ensure a proper calculation of displacement by putative inhibitors. Running binding assays at equilibrium is convenient for HTS assays, since one does not have to carefully control the time between addition of ligand and assay readout as long as the equilibrium is stable.
4.1.3. Cell-Based Assays
The focus of the previous sections has been on cell-free systems. Cell-based assays present different challenges in their setup, with many built-in factors that are out of the scientist's control. Nevertheless, some of the points discussed above apply to them, mutatis mutandis. One of the most important considerations is cell type. The most physiologically relevant cells are primary human cells, but these are very difficult and expensive to procure. Recent advances in stem cell science are beginning to facilitate the provision of cells for HTS that are closer to primary human cells. However, recombinant cells remain the most commonly used cell type for HTS. Important considerations when developing cell-based assays include the following (22, 28):
– Cell culture details should be well documented and reproducible. Most problems with cell-based assays can be traced to problems with the cells.
– Consider using cryopreserved cells as an assay source to reduce variability and improve screening scheduling logistics.
– Adherent cells or suspension cells can be used, and the choice is based on the cell type and the assay readout method. In general, try to mimic physiological conditions as much as possible while considering assay logistics.
– Either stable cell lines or transient transfection can be used. Expression levels of the recombinant protein(s) should be confirmed. Extremely high expression levels should generally be avoided.
– Consider using modified baculovirus (BacMam virus) gene delivery technology for transient expression of target proteins in mammalian cells (29).
– When using stable cell lines, use early passages to avoid cells losing their responsiveness.
– Lower numbers of cells are preferred for cost reasons, but at least 1,000 cells per well should generally be used to minimize stochastic single-cell events. The response observed should be linear with respect to the number of cells.
– Pay attention to cell clumps, which can cause variability.
– Preincubation of cells with compounds should be considered when applicable (e.g., assays in which a ligand is added).
– Optimal incubation time should be selected in accordance with the rule of avoiding underestimation of inhibition or activation values (see Section 4.1.1.3). All other factors being equal, shorter incubation times minimize cytotoxic interference problems.
– Cell-based assays tend to be more sensitive to DMSO than biochemical assays. Determine the DMSO sensitivity of the assay and configure the protocol to remain well below this level.
– Use standard inhibitors and/or activators during the screening run to confirm the desired signal is observed.
– Pay attention to edge effects, which occur commonly in cell-based assays due to problems with incubators or uneven distribution of cells in the wells. Incubating seeded plates at room temperature before placing them in the incubator can help reduce this problem (30).
4.2. Assay Optimization
In vitro assays are performed in artificial environments in which the biological system studied may be unstable or exhibit an activity below its potential. The requirements for stability are higher in HTS campaigns than in other areas of research. In HTS runs, diluted solutions of reagents are used throughout long periods of time (typically 4–12 h), and there is a need to keep the variability low and the signal to background high. Additionally, several hundreds of thousands of samples are usually tested, and economics often dictates reducing the amount of reagents required. In this respect, miniaturization of assay volumes has been in continuous evolution, from tubes to 96-well plates to 384-well plates to 1536-well plates and beyond. Many times, converting assays from
low-density to high-density formats is not straightforward. Thus, in order to find the best possible conditions for evaluating an HTS target, optimization of the assay should be accomplished as part of the development phase.
HTS libraries contain synthetic or natural compounds that in most cases are dissolved in DMSO, so the tolerance of the assay to DMSO must be considered. Typically, compounds are stored at concentrations ranging between 1 and 30 mM, and test compound concentrations in primary screening are in the 1–30 µM range. Therefore, DMSO concentrations from 0 to 10% are tested. It is critical to work at DMSO concentrations in a region of minimal variation, as otherwise compound effects can be obscured by variability in the addition of compound stocks (typically the smallest volume in the assay mix and thus the most sensitive liquid-handling step). If a significant decrease in activity/binding is observed at the standard solvent concentration – typically 0.5–1% (v/v) DMSO – lower test compound concentrations may be required. In some cases the detrimental effect of solvent can be circumvented by optimizing assay conditions. In all cases, key biochemical parameters (e.g., Km) should be checked under the final assay conditions (DMSO concentration) before starting the screening campaign.
The stability of reagents should be tested using the same conditions intended for HTS runs, including solvent concentration, stock concentration of reagents, reservoirs, and plates. Quite often signal is lost with time not because of degradation of one biological partner in the reaction but because of its adsorption to the plastics used (reservoir, tips, or plates) (Fig. 1.4). Addition of detergents below their critical micellar concentration (CMC) and/or carrier proteins (e.g., BSA) is a common technique to minimize this undesirable phenomenon. These assay components can also aid in reducing nonspecific enzymatic inhibition caused by the aggregation of test compounds (7).
The number of factors that can be tested in an optimization process is immense. Nevertheless, initial knowledge of the system (optimal pH, metal requirements, sensitivity to oxidation, etc.) can help to select the most appropriate ones. Factors to be considered can be grouped as follows:
– Buffer composition
– pH
– Temperature
– Ionic strength
– Osmolarity
– Monovalent ions (Na+, K+, Cl–)
– Divalent cations (Mn2+, Mg2+, Ca2+, Zn2+, Cu2+, Co2+)
– Rheological modulators (glycerol, polyethylene glycol)
Fig. 1.4. Example of loss of signal in an enzymatic reaction related to adsorption of enzyme (or substrate) to plasticware. The data are from a real assay performed in our lab. Stability of reagents was initially measured using polypropylene tubes and 384-well polystyrene plates, without CHAPS (circles). Once HTS was started, using polypropylene reservoirs and polystyrene 384-well plates (triangles), a clear loss of signal was observed. Addition of 0.01% (w/v) CHAPS not only solved the problem but also improved the enzyme activity (squares). Reactions were initiated at 10, 30, and 50 min after preparation of diluted stocks of reagents that remained at 4°C before addition to the reaction wells.
– Polycations (heparin, dextran)
– Carrier proteins (BSA, casein)
– Chelating agents (EDTA, EGTA)
– Blocking agents (PEI, milk powder)
– Reducing agents (DTT, β-mercaptoethanol)
– Protease inhibitors (PMSF, leupeptin)
– Detergents (Triton, Tween, CHAPS)
Cell-based assays are usually conducted in cell media of complex formulation. Factors to be considered in this case are mainly medium, supplier, selection and concentration of extra protein (human serum albumin, BSA, gelatin, and collagen). One also needs to take into account cell density, plate type, plate coatings, incubation time, temperature, and atmosphere. Since cell-based assays generally have more variables than biochemical assays, extreme care must be taken when documenting and reproducing the cell culture and assay conditions.
Besides analyzing the effect of factors individually, it is important to consider interactions between factors because synergies and antagonisms can commonly occur (31). Full-factorial or partial-factorial designs can be planned using several available statistical
packages (e.g., JMP, Statistica, Design Expert). Experimental designs result in quite complex combinations as soon as more than four factors are tested. This task becomes rather complicated in high-density formats, particularly considering that more reliable data are obtained if tests are performed randomly. Therefore, an automated solution is necessary, because manually running an experiment of this complexity would be extremely difficult. Several commercial packages exist that integrate design of experiments and the liquid-handling steps necessary to conduct the experiments. A good example is AAO (automated assay optimization), developed by Beckman Coulter (Fullerton, CA) in collaboration with scientists from GlaxoSmithKline (32). An example of the outcome of one assay improved in our lab using this methodology is shown in Fig. 1.5. The paper by Taylor et al. (32) describes examples of assay optimization through AAO for several types of targets and assay formats.
A typical optimization process starts with a partial-factorial design including many factors (e.g., 20). The most promising factors are then tested in a full-factorial experiment to analyze not only main effects but also two-factor interactions. These experiments are done with two levels per factor (very often one level is the absence of the ingredient and the other is its presence at a fairly typical concentration). Finally, titrations of the most beneficial factors are conducted in order to find optimal concentrations of every component. Usually the focus of optimization is on activity (signal or signal to background), but statistical performance should also be taken into account when doing assay optimization. Though this is not feasible when many factors and levels are scrutinized without replicates, whenever possible duplicates or triplicates should be run and the resulting variability measured for every condition. Some buffer ingredients (e.g., glycerol) make reproducible dispensing very difficult and so should be used only if they are really beneficial. For some factors (e.g., pH) it is critical to run the HTS assay close to physiological conditions in order to avoid missing interesting leads whose chemical structure or interaction with the target may change as a function of that factor.
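As a rough sketch of the kind of two-level factorial design just described (the factor names and levels are invented for illustration; in practice the design and the liquid-handling worklist would come from a statistics or AAO package such as those mentioned above):

```python
import itertools, random, csv

# Hypothetical two-level factors (absence vs. presence at a typical concentration)
factors = {
    "CHAPS_%":   [0, 0.03],
    "Bicine_mM": [0, 125],
    "TAPS_mM":   [0, 125],
    "DTT_mM":    [0, 1],
}

# Full factorial: every combination of factor levels (2**4 = 16 runs here)
runs = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
runs = runs * 2                 # duplicates, so variability can be estimated per condition
random.shuffle(runs)            # randomized run order gives more reliable data

with open("optimization_worklist.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["run", *factors])
    writer.writeheader()
    for i, run in enumerate(runs, start=1):
        writer.writerow({"run": i, **run})
```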
Fig. 1.5. Example of optimization of a radiofiltration assay using Beckman Coulter's AAO program and a Biomek 2000 to perform the liquid handling. The target was to increase the activity of this enzyme, bacterial biotin ligase, aiming to improve assay quality and reduce costs. The initial partial-factorial test included 20 factors, 8 of which were identified as positive. The test shown in this figure used these eight factors and was designed as a two-level full-factorial experiment with duplicates; 512 samples were generated. (A) The probability plot resulting from the statistical analysis of experimental data showed three factors being positive (H, B, and C), although the interaction of B and C was negative. D showed a significant negative effect, while the other four factors had statistically marginal or no effect. (B) Applying the statistical model, the correlation between observed and predicted values was very good. The presence of H = 0.03% (w/v) CHAPS (+H, –H) is clearly positive. In the absence of B = 125 mM Bicine (+B squares, –B triangles) and C = 125 mM TAPS (+C, –C), the enzyme was less active. The original conditions yielded 5,000 CPM vs. 25,000 CPM with the optimized buffer (backgrounds were 100 CPM in all cases).
4.3. Statistical Evaluation of HTS Assay Quality
The quality of an HTS assay must be determined according to its primary goal, i.e., to distinguish hits from nonhits accurately in a vast collection of samples. In the initial evaluation of assay performance, several plates are filled with positive controls (signal; e.g., uninhibited enzyme reaction) and negative controls or blanks (background; e.g., substrate without enzyme). Choosing the right blank is sometimes not so obvious. In ligand–receptor-binding assays, the blanks, referred to as NSB controls, are traditionally prepared by adding an excess of unlabelled (cold) ligand; the resulting displacement could be unreachable for some specific competitors that would not prevent nonspecific binding of the labeled ligand to membranes or labware. A better blank could be prepared with membranes from the same cell line not expressing the targeted receptor. Though this is not always practical in the HTS context, it should at least be tested during development of the assay and compared with the NSB controls, to which it should ideally be quite close.
A careful analysis of these control plates allows identification of errors in liquid handling or sample processing. For instance, an assay with a long incubation typically produces plates with edge effects due to faster evaporation in the external wells, even if lids are used, unless the plates are placed in a chamber with humidity control. Analysis of patterns (per row, per column, per quadrant) helps to identify systematic liquid-handling errors. Obvious problems must be solved before evaluating the quality of the assay. After troubleshooting, random errors are still expected to happen due to instrument failure or defects in the labware used. They should be included in the subsequent analysis of performance (removing outliers is a misleading temptation, equivalent to hiding the dirt under the rug).
The analysis of performance can be accomplished by several means. Graphical analysis helps to identify systematic errors (e.g., Fig. 1.6). The statistical analysis of raw data involves the calculation of a number of parameters, starting with the means (M) and standard deviations (SD) of signal and background, and combinations of these, as follows:
– Signal to background: S/B = Msignal/Mbackground. S/B provides an indication of the separation of positive and negative controls. It can be useful in early assay development to understand the potential of an assay format or to validate reagents under development. But it is a poor indicator of assay quality, as it is independent of variability (33).
– Coefficient of variation of signal and background: CV = 100 × SD/M (%). This relative measure provides a good indication of variability, which is a function of the assay stability and the precision of the liquid-handling and detection instruments.
Fig. 1.6. Graphical analysis of a 384-well plate of positive controls of an enzymatic reaction monitored by absorbance (continuous readout). The plate was filled using a pipettor equipped with a 96-well head and indexing capability. (A) 3-D plot of the whole plate showing that four wells (I1, I2, J1, and J2) had a dispensing problem. The corresponding tip may have been loose or clogged. Analysis by columns (B), rows (C), and quadrants (D) reveals that the fourth quadrant was receiving less reagent.
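The per-row, per-column, and per-quadrant breakdown shown in Fig. 1.6 is straightforward to reproduce on raw plate data. The sketch below is a generic illustration (not tied to any particular reader's export format) that summarizes a 384-well control plate and flags rows, columns, or quadrants that fall well below the plate mean.

```python
import numpy as np

def plate_patterns(plate, threshold=0.75):
    """plate: 16x24 array of raw activities from a 384-well control plate.
    Prints row/column/quadrant means and flags means below threshold * plate mean."""
    plate = np.asarray(plate, dtype=float)
    overall = plate.mean()
    rows = plate.mean(axis=1)          # rows A..P
    cols = plate.mean(axis=0)          # columns 1..24
    # Quadrants as addressed by a 96-head with indexing: (even/odd row, even/odd column)
    quadrants = [plate[r::2, c::2].mean() for r in (0, 1) for c in (0, 1)]
    for label, means in (("row", rows), ("column", cols), ("quadrant", quadrants)):
        for i, m in enumerate(means):
            if m < threshold * overall:
                print(f"Low {label} {i + 1}: mean {m:.2f} vs plate mean {overall:.2f}")
    return rows, cols, quadrants

# Example usage with a hypothetical export file:
# plate_patterns(np.loadtxt("control_plate.csv", delimiter=","))
```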
– Signal to noise: S/N = (Msignal – Mbackground)/SDbackground. This classic expression of S/N provides an incomplete combination of signal window and variability. Its original purpose was to assess the separation between signal and background in a radio signal (33). It should not be used to evaluate the performance of HTS assays. Another parameter referred to as S/N by some authors is
S/N = (Msignal – Mbackground)/√(SDsignal² + SDbackground²)
This second expression provides a complete picture of the performance of an HTS assay but, as discussed below, the field has converged on using Z′ as the standard measure of HTS assay quality.
– Z′ factor: Z′ = 1 – 3(SDsignal + SDbackground)/|Msignal – Mbackground|. Since its publication in 1999 (33), the Z′ factor has been widely accepted by the HTS community as a very useful way of assessing the statistical performance of an assay (34). Z′ is an elegant combination of signal window and variability, the main parameters used in the evaluation of assay quality. The relationship between the Z′ factor and S/B is not obvious from its definition but can be easily derived as
Z′ = 1 – 0.03(|S/B| × CVsignal + CVbackground)/(|S/B| – 1)
The value of the Z′ factor is a relative indication of the separation of the signal and background populations. It is assumed that these populations are normally distributed, as is the case if the variability is due to random errors. Z′ is a dimensionless parameter that ranges from 1 (infinite separation) to below 0; the signal and background populations start to overlap when Z′ = 0. In our lab, the minimal acceptable value for an assay is Z′ > 0.4, although in practice the majority of our assays demonstrate Z′ > 0.6. A Z′ of 0.4 is equivalent to having an S/B of 3 and a CV of 10%. Low variability allows for a lower S/B, but a minimum of 2 is usually required, given that CVs are rarely below 5%. Figure 1.7 shows Z′ at work in three different scenarios; a full analysis of the corresponding data is collected in Table 1.3. Z′ should be evaluated during assay development and validation, and also throughout HTS campaigns on a per-plate basis to assess the quality of dispensing and reject data from plates with errors. Chapter 5 describes in more detail the different tools used to assess statistical performance in HTS campaigns.
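A minimal sketch of these quality metrics, written directly from the formulas above (the function and its argument names are ours; signal and background would be the raw positive-control and blank values from plates such as those in Table 1.3):

```python
import statistics as st

def assay_stats(signal, background):
    """Compute S/B, CVs, the two S/N variants, and the Z' factor from control wells."""
    ms, mb = st.mean(signal), st.mean(background)
    sds, sdb = st.stdev(signal), st.stdev(background)
    return {
        "S/B": ms / mb,
        "CV_signal_%": 100 * sds / ms,
        "CV_background_%": 100 * sdb / mb,
        "S/N_classic": (ms - mb) / sdb,
        "S/N_combined": (ms - mb) / (sds**2 + sdb**2) ** 0.5,
        "Z_prime": 1 - 3 * (sds + sdb) / abs(ms - mb),
    }

# Control values close to Plate 1 of Table 1.3 give Z' of roughly 0.59:
# assay_stats(signal=[...192 positive-control wells...], background=[...192 blanks...])
```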
4.4. Assay Validation
Once an assay optimized to find compounds of interest passes its quality control with a Z′ greater than 0.6 (or whatever acceptance criterion is applied), a final step must be completed before starting an HTS campaign. The step referred to here as assay validation consists of testing a representative sample of the screening collection in the same way the HTS plates will be treated, i.e., on the same robotic system using protocols identical to the HTS run. The purposes of this study are to
Fig. 1.7. Distribution of activity values (bins of 0.5 mOD/min) for three 384-well plates half-filled with blanks and half-filled with positive controls of an enzymatic reaction monitored by absorbance (continuous readout). Z′ factors were 0.59 for plate 1, 0.42 for plate 2, and 0.10 for plate 3. A complete analysis of performance is shown in Table 1.3.
Table 1.3 Statistical analysis of data from the three plates described in Fig. 1.7

Parameter              Plate 1    Plate 2    Plate 3
Msignal (mOD/min)      10.09      7.77       5.84
SDsignal (mOD/min)     0.84       0.81       0.96
Mbckg (mOD/min)        0.30       0.69       0.74
SDbckg (mOD/min)       0.51       0.57       0.57
S/B                    34         11         8
SW (mOD/min)           9.80       7.08       5.09
S/N (1)                19         12         9
S/N (2)                10.0       7.1        4.6
CVsignal (%)           8%         10%        16%
CVbckg (%)             173%       82%        77%
Z′ factor              0.59       0.42       0.10

(1) S/N = (Msignal – Mbackground)/SDbackground
(2) S/N = (Msignal – Mbackground)/√(SDsignal² + SDbackground²)
– obtain production data on assay performance;
– assess interferences from screening samples;
– evaluate the reproducibility of results obtained in a production environment;
– estimate the hit rate and determine optimal sample concentration.
A dramatic example of how the test of a pilot collection helps to detect interferences is shown in Fig. 1.8. This target, HCV RNA-dependent RNA polymerase, has been found to be
Fig. 1.8. (A) Distribution of inhibition values (10% bins) in the validation of an HTS assay of HCV RNA-dependent RNA polymerase tested with and without 0.05% (w/v) BSA. The samples were 352 representative mixtures of compounds (11 components at 9.1 µM each). (B) The stability and activity of the enzyme were greatly improved in the presence of BSA.
slightly unstable at room temperature (Fig. 1.8B, without BSA). Nevertheless, the effect of 352 mixtures of 11 compounds each was tested, and an extremely high hit rate was observed (45% of the mixtures inhibited the enzyme activity by more than 70%). The problem was solved by stabilization of the system with 0.05% BSA. Similar effects have been observed with several other targets. It is, therefore, advisable to run a few plates with 500–2,000 random samples just to spot any major interference as soon as possible.
The size of the pilot collection can be as small as 1% of the total collection; its usefulness for predicting hit rates and interferences increases with its size. On the other hand, many plates' worth of work and reagents can be lost if a major problem is found in this step, as often happens. Therefore, it is not advisable to go beyond a 5% representation of the collection. With a randomized sample of 1% of a collection of 50,000 compounds, a hit rate of 1% can be estimated with an SD of 0.5%; for a 5% rate, the estimate's SD would be 1% [approximate figures calculated as described in (35)]. Irrespective of the size of the pilot collection, at least 10–20 plates should be run to test the HTS system in real action.
Duplicates of the same samples run in independent experiments provide a way to evaluate the reproducibility of results (Fig. 1.9). In a duplicated experiment without further retest, false negatives and false positives will be indistinguishable and will all appear as discrepant results. A third replica allows an estimation of the rates of false positives and negatives; additionally, the hit rate and confirmation rate after retest can be estimated, providing the level of information required to assess the quality of the assay and achieve the level of performance required prior to initiation of the HTS efforts.
Fig. 1.9. Comparison of duplicates from validation for two HTS assays. (A) This enzymatic assay showed a significant number of mismatched results between duplicate runs of the same 4,000 samples. Two actions should be taken in a case like this: liquid-handling errors have to be avoided and the assay quality must be improved. (B) The data correspond to a ligand-binding assay that showed good reproducibility.
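The sampling errors quoted above for the pilot collection follow from simple binomial statistics; as a rough check (our own arithmetic, ignoring the finite-population correction, which is negligible for a 1–5% sample):

```python
def hit_rate_sd(hit_rate, n_samples):
    """Standard deviation of an estimated hit rate from a random pilot set."""
    p = hit_rate
    return (p * (1 - p) / n_samples) ** 0.5

# 1% pilot of a 50,000-compound collection = 500 samples
print(hit_rate_sd(0.01, 500))   # ~0.0044 -> SD of about 0.5% for a 1% hit rate
print(hit_rate_sd(0.05, 500))   # ~0.0097 -> SD of about 1% for a 5% hit rate
```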
Acknowledgments
The authors are grateful to the many colleagues at GlaxoSmithKline who helped over the years to shape the screening process and to build the collective knowledge succinctly described in this introduction.
Abbreviations
AAO: automated assay optimization
AAS: atomic absorbance spectroscopy
BRET: bioluminescence resonance energy transfer
Bicine: N,N-bis(2-hydroxyethyl)glycine
Bmax: maximum binding capacity
BSA: bovine serum albumin
CHAPS: 3-([3-cholamidopropyl]dimethylammonio)-1-propanesulfonate
CV: coefficient of variation
DMSO: dimethyl sulfoxide
DTT: dithiothreitol
ECL: electrochemiluminescence
EDTA: ethylenediamine-N,N,N′,N′-tetraacetic acid
EFC: enzyme fragment complementation
EGTA: ethylene glycol-bis(2-aminoethyl)-N,N,N′,N′-tetraacetic acid
ELISA: enzyme-linked immunosorbent assay
FCS: fluorescence correlation spectroscopy
FIDA: fluorescence intensity distribution analysis
FLINT: fluorescence intensity
FRET: fluorescence resonance energy transfer
FP: fluorescence polarization
GPCR: G protein-coupled receptor
HTS: high-throughput screening
Kd: dissociation constant
L: ligand
M: mean
NSB: nonspecific binding
OD: optical density unit
PEI: polyethylene imine
PMSF: phenylmethylsulfonyl fluoride
RIA: radioimmunoassay
S/B: signal to background ratio
S/N: signal to noise ratio
SD: standard deviation
SW: signal window
SPA: scintillation proximity assay
TAPS: N-tris(hydroxymethyl)methyl-3-aminopropanesulfonic acid
TR-FRET: time-resolved fluorescence resonance energy transfer
Vmax: maximum velocity
References
1. Cascieri MA, Springer MS. (2000) The chemokine/chemokine-receptor family: potential and progress for therapeutic intervention. Curr Opin Chem Biol; 4:420–427.
2. Miller WH, Alberts DP, Bhatnagar PK et al. (2000) Discovery of orally active nonpeptide vitronectin receptor antagonists based on a 2-benzazepine Gly-Asp mimetic. J Med Chem; 43:22–26.
3. Fox S. (2007) High Throughput Screening 2007: New Strategies, Success Rates, and Use of Enabling Technologies (HighTech Business Decisions market report: http://www.hightechdecisions.com/reports.html).
4. Macarrón R. (2006) Critical review of the role of HTS in drug discovery. Drug Discov Today; 11:277–279.
5. Gonzalez JE, Oades K, Leychkis Y, Harootunian A, Negulescu PA. (1999) Cell-based assays and instrumentation for screening ion-channel targets. Drug Discov Today; 4:431–439.
6. Schroeder KS, Neagle BD. (1996) FLIPR: A new instrument for accurate, high throughput optical screening. J Biomol Screen; 1:75–80.
7. Shoichet BK. (2006) Screening in a spirit haunted world. Drug Discov Today; 11:607–615.
8. Inglese J, Johnson RL, Simeonov A et al. (2007) High-throughput screening assays for the identification of chemical probes. Nat Chem Biol; 3:466–479.
9. Eggeling C, Brand L, Ullmann D, Jäger S. (2003) Highly sensitive fluorescence detection technology currently available for high throughput screening. Drug Discov Today; 8:632–641.
10. Pope AJ, Haupts U, Moore KJ. (1999) Homogeneous fluorescence readouts for miniaturized high-throughput screening: Theory and practice. Drug Discov Today; 4:350–362.
11. Gaudet EA, Huang KS, Zhang Y, Huang W, Mark D, Sportsman JR. (2001) A homogeneous fluorescence polarization assay adaptable for a range of protein serine/threonine and tyrosine kinases. J Biomol Screen; 8:164–175.
12. Wu S, Liu B. (2005) Application of scintillation proximity assay in drug discovery. Biodrugs; 19:383–392.
13. Glickman J, Wu X, Mercuri R et al. (2002) A comparison of ALPHAScreen, TR-FRET, and TRF as assay methods for FXR nuclear receptors. J Biomol Screen; 7:3–10.
14. Debad JD, Glezer EN, Wohlstadter JN, Sigal GB. (2004) Clinical and biological applications of ECL. In: Bard AJ (ed.), Electrogenerated Chemiluminescence. New York: Marcel Dekker, pp. 43–78.
15. Rich RL, Myszka DG. (2007) Higher-throughput, label-free, real-time molecular interaction analysis. Anal Biochem; 361:1–6.
16. Gabriel D, Vernier M, Pfeifer MJ, Dasen B, Tenaillon L, Bouhelal R. (2003) High throughput screening technologies for direct cyclic AMP measurement. Assay Drug Dev Technol; 1:291–303.
17. Zheng W, Spencer R, Kiss L. (2004) High throughput assay technologies for ion channel drug discovery. Assay Drug Dev Technol; 2:543–552.
18. Terstappen GC. (2005) Ion channel screening technologies today. Drug Discov Today: Technol; 2:133–140.
19. Hill SJ, Baker JG, Rees S. (2001) Reporter-gene systems for the study of G-protein-coupled receptors. Curr Opin Pharmacol; 1:526–532.
20. Fan F, Wood KV. (2007) Bioluminescent assays for high-throughput screening. Assay Drug Dev Technol; 5:127–136.
21. Poulsen F, Jensen KB. (2007) A luminescent oxygen channeling immunoassay for the determination of insulin in human plasma. J Biomol Screen; 12:240–247.
22. Assay Guidance Manual Version 4.1. Eli Lilly and Company and NIH Chemical Genomics Center (2005) (Accessed December 14, 2007, at http://www.ncgc.nih.gov/guidance/manual_toc.html).
23. Cheng YC, Prusoff WH. (1973) Relationship between the inhibition constant (Ki) and the concentration of inhibitor which causes 50 per cent inhibition (I50) of an enzymatic reaction. Biochem Pharmacol; 22:3099–3108.
24. Bush K. (1983) Screening and characterization of enzyme inhibitors as drug candidates. Drug Metab Reviews; 14:689–708.
25. Macarron R, Mensah L, Cid C et al. (2000) A homogeneous method to measure aminoacyl-tRNA synthetase aminoacylation activity using scintillation proximity assay technology. Anal Biochem; 284:183–190.
26. Tipton KF. (1980) Kinetics and enzyme inhibition studies. In: Sandler M (ed.), Enzyme Inhibitors as Drugs. Baltimore: University Park Press, pp. 1–23.
27. Burt D. (1986) Receptor binding methodology and analysis. In: O'Brien RA (ed.), Receptor Binding in Drug Research. New York: Decker, pp. 4–29.
28. Gupta S, Indelicato S, Jethwa V et al. (2007) Recommendations for the design, optimization, and qualification of cell-based assays used for the detection of neutralizing antibody responses elicited to biological therapeutics. J Immunol Methods; 321:1–18.
29. Kost TA, Condreay JP, Ames RS, Rees S, Romanos MA. (2007) Implementation of BacMam virus gene delivery technology in a drug discovery setting. Drug Discov Today; 12:396–403.
30. Lundholt B, Scudder K, Pagliaro L. (2003) A simple technique for reducing edge effect in cell-based assays. J Biomol Screen; 8:566–570.
31. Lutz MW, Menius JA, Choi TD et al. (1996) Experimental design for high-throughput screening. Drug Discov Today; 1:277–286.
32. Taylor P, Stewart F, Dunnington DJ et al. (2000) Automated assay optimization with integrated statistics and smart robotics. J Biomol Screen; 5:213–225.
33. Zhang JH, Chung TDY, Oldenburg KR. (1999) A simple statistical parameter for use in evaluation and validation of high throughput screening assays. J Biomol Screen; 4:67–73.
34. Iversen PW, Eastwood BJ, Sittampalam GS, Cox KL. (2006) A comparison of assay performance measures in screening assays: signal window, Z' factor, and assay variability ratio. J Biomol Screen; 11:247–252.
35. Barnett V. (1974) Elements of Sampling Theory. London: The English Universities Press, pp. 42–46.
Chapter 2
Creation of a Small High-Throughput Screening Facility
Tod Flak
Abstract
The creation of a high-throughput screening facility within an organization is a difficult task, requiring a substantial investment of time, money, and organizational effort. Major issues to consider include the selection of equipment, the establishment of data analysis methodologies, and the formation of a group having the necessary competencies. If done properly, it is possible to build a screening system in incremental steps, adding new pieces of equipment and data analysis modules as the need grows. Based upon our experience with the creation of a small screening service, we present some guidelines to consider in planning a screening facility.
Key words: High-throughput screening, HTS, Cell-based assay, Instrumentation, Design, Planning, Robotic system.
1. Introduction
High-throughput screening (HTS) has been used for several decades in the search for new lead compounds. The substantial investment in instrumentation and the development of the necessary expertise are a large commitment and have previously been largely the realm of major pharmaceutical companies. However, in recent years, many smaller organizations have become interested in carrying out HTS campaigns. This is due to the development of many small biotechnology companies with only a few drug targets, as well as academic groups that have become interested in doing their own drug discovery. There are now multiple HTS service providers who will run HTS campaigns for a fee, often also having the option of providing chemical libraries. Nevertheless, some small research organizations would prefer to create an internal HTS facility to screen their own drug targets, or possibly to offer screening services.
In this chapter, we would like to present one particular view of how to build a small high-throughput screening facility. Specifically, we will speak to our experience of evolving from the use of small pipetting workstations to a complete screening system, the creation of data handling systems, and the growth of a functional HTS group. We will discuss the pros and cons of do-it-yourself versus relying upon vendors, for both instrumentation and software. And we will show the evolution of a screening system over time. Please be aware that this is essentially a case study. Vendors that are mentioned are not necessarily the best choice but represent our particular choices. In examination of the evolution of this small screening group, we will point out some of the important choices made along the way. Hopefully this case study can give some insights to guide others in building up an HTS facility.
2. Screening System Design
2.1. Evolution from Workstations
Axxam SpA in Milan, Italy, is a small organization that evolved from a group within Bayer HealthCare, which had been responsible for assay development and configuration to feed the ultrahigh-throughput screening carried out by Bayer. The group was spun off as an independent organization in 2002 and started to provide assay development services for paying customers. Eventually the customers requested that the developed assays be screened on site rather than transferred back to them, creating the need for an HTS setup. Within the organization there was already a strong background in cell-based and biochemical assay configuration, cell culture facilities, and related technical capabilities. However, no one had ever examined more than a few hundred compounds, and there was little automated equipment to support a true HTS campaign. Axxam was faced with the task of building a complete screening facility, including selection of equipment, management and analysis of the data, and development of the necessary skilled personnel. Since there were no pre-existing contracts in place to pay for screening services, the budget was limited, which dictated the decisions in two ways: the need to start small without limiting the ability to expand, and a preference for in-house solutions over reliance upon commercial software and vendors. Starting small meant evolving from individual workstations with built-in scalability to reach a fully automated robotic screening system. It should be emphasized that, with proper organization, a great deal of work can be accomplished using only
workstations and relatively simple data analysis methodologies. For example, it is quite feasible to screen several hundred thousand compounds using only pipetting workstations to prepare plates, placing plates manually into reader instruments, and processing data using Microsoft Excel®. That approach will not perform as fast as an automated HTS system, but if the number of screening campaigns is limited, it will be adequate, with faster implementation and troubleshooting than a fully automated operation.
2.2. Considerations for System Design
If you already understand pretty clearly the types of assays that will be run, the reading instrumentation that is necessary, and the ultimate level of automation desired in a screening system, you may be able to design a system from the outset that will meet your needs for many years. In this case, future expandability may not be a high priority, and you may be able to design a compact system with limited expansion options. However, if your goal is to start with a small instrumentation investment and gradually expand, then obviously you must select a system that is flexible and open. In general, all vendors offer packages that are customizable with your desired components, but some are more amenable to future expansion than others. You should consider how much work will be necessary to add new instruments. Some systems may be difficult to expand simply because of physical limitations of space, such as the case of a central robotic arm that can access only a limited work envelope. Other systems may have limitations that are more in the realm of software. You should ask the vendor about the possibilities of integrating the various types of equipment that you might imagine using in the future. Ideally, it should even be possible to add a simple component into the system with little or no reliance upon the original vendor.
2.2.1. Type of Transport
There are several different robotic transport mechanisms that are prevalent. The reason to consider the differences among them is that the mechanism you choose will have an impact upon the system throughput, the ease of expansion, and the overall size of the final system. For the facility that desires to start small and expand in several directions, ease of expansion will be quite important. The most flexible systems include plate transport mechanisms that can be readily adapted to accessing new instruments. The most typical example of this is the anthropomorphic arm with a plate gripper. With such arms the physical process of adding a new instrument into the system can be as simple as affixing the instrument to a fixed position on the work surface and then teaching the plate loading/unloading movements to the robot. The issue of logically adding the new instrument into the
control software system can be much more complicated (discussed below). Such arms are also capable of reaching into confined spaces, such as shelves in a plate "hotel." Anthropomorphic arm robots are flexible, but they do have some drawbacks. The gripping fingers need to have a great degree of structural strength, as that mechanism of gripping a plate is inherently difficult to maintain without slipping. Some key things to look for include sharp points used to firmly grasp the plastic; pivoting pressure points to ensure even application of force; strong metal fingers; and secure mounting to the robot wrist assembly to ensure minimal play. It is also very important that the robot have some mechanism to sense gripping force, so that it can detect the presence or absence of a plate. Another important force-sensing capability is the ability to stop quickly upon encountering an unexpected obstacle; all axes of the robot should be able to detect collision with an obstacle. This capability is important to avoid the robot breaking or bending itself or some fixed labware when the inevitable mistake is made, for example, when teaching a new position or when something is left inside the work envelope that should not be there. It will happen; the only question is when. If the arm cannot sense that it is impacting something, it may very well have sufficient power to bend or break some components. It is possible to design a system using only a central arm, with devices arrayed radially around the arm. A good commercially available example of such a system is the Velocity11 BioCel® product line, which features a central three-axis robot arm that moves plates in a tightly packed workspace. Having only an anthropomorphic arm at a fixed location obviously limits the instruments to the work envelope that can be reached by that arm. For some applications, this can be quite sufficient. Such systems can be perfect if you know clearly what you need and if you would be satisfied to buy a different system in the future to address new desires instead of expanding the current system. These systems also have the benefit of being relatively compact, taking minimal floor space. One disadvantage is that the equipment integrated in the system may be very difficult to utilize in "manual" mode for some simple operations, since the operational side of the instruments is generally facing into the central robot. Some of these systems can be expanded, basically by handing a plate over from one work cell to another work cell, which may have its own robot arm. A more traditional approach is to have some linear translation ability. This can mean simple carriages that move the plate or more complex tracks that move the anthropomorphic arm. One of the earliest robotic systems used in biological laboratories was the ORCA from Beckman Coulter (Sagian). This is a four-axis arm with a plate gripper, mounted on a linear track of 1–3 meters. They
have designed CORE systems around this transport system, with instruments lined up on both sides of the robot track. Similar systems have been developed by multiple companies, many with more robust robots (for example, Thermo CRS). A common problem with such systems is that the transport system becomes a bottleneck, limiting overall throughput, since it is responsible for all movements of plates. Some thought must be given to this issue, especially if you imagine adding many more instruments in the future. There are also systems that rely upon conveyor belts to move plates from one device to the next in a sequential arrangement, with the plate generally moving under the active position of the device. Examples of this type include the PerkinElmer Minitrak. This type of transport mechanism is simple and robust. However, these devices are limited in their ability to accept arbitrary additional instruments and in the ability to use the devices in a different order. Incubations can become more complicated because the system must remove a plate from the flow into a storage device. These devices would not be recommended for a start-up HTS group, unless the assays that you want to implement are very clear and amenable to such an assembly-line approach. Another similar type of simple linear conveyor relies upon carriages to move plates from one part of the system to another, but with vertical movements to transfer plates to instruments or additional carriages. Some vendors that provide such mechanisms include CyBio AG and Hudson Control Group. At each instrument there must exist a simple mechanism to move the plate from the carriage to the instrument. In the case of CyBio this is accomplished by plate lifters and simple grippers, while Hudson uses their PlateCrane device, which can lift a plate from above and rotate it. Fixed anthropomorphic robot arms can also be added at strategic positions within such systems to move plates from the carriage to instruments.
2.2.2. Other Design Considerations
Typically the robot arm needs to be enclosed in a work cell with a safety interlock. Even robots that can detect collisions can cause significant harm if they hit a person. They have plenty of power, and they are heavy enough that they have a lot of momentum when moving. Another problem is that the robot may be still for long periods during execution of a protocol and then unexpectedly start moving. It is quite easy for a person to enter a room, unaware that the system is even active, and reach for a plate in the robot's work area at the same moment that the robot starts to move. For this reason, it is good to have a signal light on the system that indicates when it is active. However, care must be taken not to make the safety system too difficult to work with. Whenever possible the interlock should be clear plastic; in some cases, even a light curtain is acceptable.
When opened, it should not hinder access to the work cell. It should also be possible to override the safety interlock – many times a technician needs to observe an operation up close. The company safety officer may not be pleased by this idea, but it is important that someone has the ability to override the interlock.
2.2.3. Vendor
Obviously you should consider the reputation of the vendor, but equally important are the quality of, and your relationship with, the local representatives. Before making a final choice of vendor, you should make an effort to meet the technical people, not just the sales representatives. You need to meet people such as the system design engineers, software specialists, application specialists, and even service engineers/technicians, if possible. Having a good working relationship with these people is critical to getting the system that you want and to resolving problems that will occur in the future.
2.2.4. System Software
Another large consideration in the choice of a screening system is the quality of the system control software. This software is the glue that unites all of the disparate methods that must run on each instrument, executing each in a carefully orchestrated sequence to move plates through the process with the maximal amount of parallelism to optimize the overall system throughput. There are many approaches to system control software, and every vendor will want to convince you that their software is wonderful. Some of the buzzwords that are frequently seen are "dynamic rescheduling," "flexible timing," "LIMS connectivity," and many others. Some of these features are important and others are not. For example, the number of times that "dynamic rescheduling" is a valuable feature for you may be rather small, but this is a feature that can make the software much more complicated and expensive. You must decide which features are important for your application, and whether the lack of some feature is a strong enough reason to look to another vendor. The list of features that we would consider important to understand includes the following:
– How easy is it to integrate new instruments? Has the vendor already written device drivers for most of the instruments you might imagine adding in the future? If not, what is the cost and time necessary to create the driver to allow integration? Is it possible for a skilled person on your staff to add a new instrument?
– How good is the error handling? Can the system automatically (or semiautomatically) recover from problems? Is there an easy way of continuing a run after a particular cycle has problems? Are there notification events, such as email, SMS, and flashing lights on the system?
– Is the scheduling dynamic or fixed? Can you add more plates into the run after it has been started? Can you use parts of the system in stand-alone mode (for example, a reader) when they are not being used in a schedule? Can the schedule be interrupted for a short run of another protocol, such as using the reader for a single plate?
– How "open" is the software for development by your staff? Is it well documented? Can users write external programs that can interact with the system? In our opinion, it is very important that it be possible to execute a user program (an executable, a VBScript, or a JavaScript program) and that these external programs be able to exchange information with the system (for instance, plate barcodes, cycle number, database information, error conditions, etc.).
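As a generic illustration of such an external program hook (this is not any vendor's actual interface; the arguments and log format are invented for the example), the scheduler could call a small script after each cycle and pass it the plate barcode, cycle number, and status:

```python
#!/usr/bin/env python3
"""Hypothetical post-cycle hook: log plate barcode, cycle number, and status."""
import csv
import sys
from datetime import datetime

def main(argv):
    # The scheduler is assumed to pass three arguments; adapt to the real interface.
    barcode, cycle, status = argv[1], argv[2], argv[3]
    with open("hts_run_log.csv", "a", newline="") as fh:
        csv.writer(fh).writerow([datetime.now().isoformat(), barcode, cycle, status])
    # A real hook might also update a LIMS or send a notification on an error status.

if __name__ == "__main__":
    main(sys.argv)
```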
2.2.5. Future Expansion Possibilities
When planning a system, try to imagine where you might want to physically expand in the future. If possible, leave extra space for future instruments around an anthropomorphic arm. If you have a system that includes linear transport mechanisms, it may be possible to expand the system at the end of one of these transport sections by including a robot arm that can access the linear carriage. If possible, also leave extra space within the facility on the side of the system that might be a future expansion point. However, if space is at a premium in the room where the system is installed, it is possible to move the system to a new location when more space is needed for expansion.
2.2.6. Flexibility and Throughput Considerations
Many managers are immediately interested in the potential throughput of a system. This is natural, since obviously the goal of high-throughput screening is to achieve high throughput! But it is important not to overemphasize the speed of the system, for several reasons. First, with some assays, it is the length of the reading that is the essential limiting factor. This is especially true for cell-based fluorescent dye assays. For example, a typical antagonist screening experiment using a membrane-potential dye may involve first injection of the test compounds, waiting several minutes for the compound effects to stabilize, then injection of the agonist and observation for another few minutes to record the possible antagonistic effects of the test compounds. Under these conditions, one experiment may require 5–10 minutes. In such a situation, the speed of the plate handling has a relatively minor effect upon the overall throughput of the system. Much more important to the overall output of the screening facility are the reliability of the screening system, the ability of the system to run unattended dependably (including overnight runs), and the capabilities of the support systems such as cell culture.
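To make this concrete, a back-of-the-envelope estimate using the 5–10 min read times mentioned above (our own arithmetic, not data from the facility) shows why read time rather than plate handling dominates throughput in such assays:

```python
def plates_per_day(read_min, handling_min, hours=20):
    """Plates processed per day when each plate occupies the reader serially."""
    return int(hours * 60 / (read_min + handling_min))

for read_min in (5, 10):
    for handling_min in (0.5, 2.0):
        n = plates_per_day(read_min, handling_min)
        print(f"read {read_min} min, handling {handling_min} min: "
              f"{n} plates/day (~{n * 384:,} wells in 384-well format)")
# Halving the handling time changes the daily output far less than the read time does.
```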
There is also the consideration of what plate format you want to use. For most standard screening applications, there are three choices for plate density in common use today: 96-well, 384-well, and 1536-well plates. While there may be the temptation to work with 1536-well plates in order to maximize throughput and conserve reagents, working at this density is typically rather difficult. Furthermore, there are limited options available for simultaneous pipetting of 1536 wells. The typical approach is to use a 384-channel pipettor head four times, indexed to the four quadrants of the plate; in this case, some of the throughput gain of the higher density is sacrificed. Unless there is some overwhelming reason to use such a high-density plate, it is probably not a good choice for a young screening group. The 384-well plate is a good compromise between miniaturization and ease of use. Most assays that work in 96-well plates can be translated to 384-well format with little difficulty. Virtually all vendors offer equipment that works with 384-well plates (pipettors, washers, readers, etc.). Furthermore, some equipment that works with 384-well plates can also be made to work with 1536-well plates, so that you can utilize that format if necessary. A more significant consideration for many groups is whether they want to select equipment that is capable of working with both 384-well and 96-well plates. For example, it may or may not be possible to use only 96 tips on a pipettor that has 384 channels. Some equipment is designed with this as an explicit feature, so you can select these devices if you believe that you will need this capability.
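For reference, quadrant indexing between 384- and 1536-well plates follows a simple interleave; the helper below is our own sketch of the usual convention, in which the quadrant number selects a row/column offset.

```python
def map_384_to_1536(row_384, col_384, quadrant):
    """Map a 384-well position (0-based row 0-15, column 0-23) to a 1536-well
    position (32 x 48). quadrant 0..3 selects the row/column offset used by the
    indexing head; other instruments may number quadrants differently."""
    row_offset, col_offset = divmod(quadrant, 2)
    return 2 * row_384 + row_offset, 2 * col_384 + col_offset

# Example: well A1 of the 384-well source lands in A1, A2, B1, or B2 of the 1536 plate
for q in range(4):
    print(q, map_384_to_1536(0, 0, q))   # (0, 0), (0, 1), (1, 0), (1, 1)
```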
2.3. Case Study: A Scalable System
The example which follows is based on our experience in gradually building an HTS system from individual workstations to a fully automated system. It should be noted that, although the building was done in four phases, each phase yielded a functional system which was used to perform HTS campaigns. Therefore, this example will show that productive work can be done while the system is being built.
2.3.1. Individual Workstations
We decided to initially buy a standardized system from CyBio AG (Jena, Germany) called "CyBi®-Cellight," which featured a luminescent plate reader (Lumax flash HT), along with a pipetting device and two sets of stackers. A diagram of this system is shown in Fig. 2.1. We selected this system because it was a preconfigured design, it was simple to use, and it incorporated the type of reader that we were interested in. The heart of this system was a CyBi-Well 384-well pipettor, which had a four-position carriage to move plates to the pipettor. Stackers provided storage of compound, dilution, and assay plates. The mechanism for transporting plates between the various devices consisted of a rotary arm, plus plate lifters at each transfer position.
Fig. 2.1. The initial standardized ‘‘CyBi-Cellight’’ system from CyBio AG (Jena, Germany), which features a luminescent plate reader (Lumax flash HT) along with a pipetting device and two sets of stackers.
Since this CyBi-Cellight system had no capability for plate incubation, the system was essentially a workstation. For screening applications, plates were run in small batches to minimize the time that they were sitting at room temperature. While the functionality was somewhat limited, it presented good opportunities for future expansion. The rotary arm was an obvious point through which additional instrumentation could be linked. In addition, each of the stacker carriages was on rails that were open at the end, thus allowing the possibility of expansion in both of those directions. This was further supported by the fact that this vendor had multiple options for ways to expand the system at these open integration points.
2.3.2. Integrating a Third-Party Reader
After some time using this system we were ready to move to the next step. We desired to add an automated incubator and a second plate reader, the Molecular Devices FLIPRTETRA for fluorescence plate imaging. This presented the challenge of moving away from a single-vendor solution to a composite system, integrating equipment from multiple vendors. After consultation with CyBio, they added a rail/carriage transport system that took plates from the existing rotary arm device. In this system, shown in Fig. 2.2, the location of the original components remained essentially the same, and the transport system could move plates between that part of the system and the incubator and the FLIPRTETRA. The transfer of the plate from the linear carriage to the FLIPRTETRA presented a small challenge. The FLIPRTETRA has a robotic interaction location, which is a top-loaded plate holder. Thus CyBio added a Z-motion arm with a simple plate gripper, which can grasp a plate that is presented under the gripper on the linear motion carriage. After grasping the plate, the carriage is moved out of the way, and the plate is lowered directly down onto the FLIPRTETRA plate holder. Along with the new devices, we added the CyBio scheduling software. This software was necessary to manage the relatively complex plate movements of our assays. A typical cell-based fluorescence assay involves bringing a cell plate out of the incubator; adding fluorescent dye using the 384-well pipetting device; placing the plate back into the incubator for 1 hour; doing a small-volume transfer of compound from the compound library plate to a compound dilution plate, again using the same 384-well pipetting device; taking into the FLIPRTETRA the cell plate, the compound dilution plate, and an activator plate filled with the agonist; running the experiment in the FLIPRTETRA; and finally putting all of the plates into output stacks. The scheduling software allows us to manage all of these plate movements, optimize the timing for maximal parallelism, and track the information associated with each cycle.
Fig. 2.2. The initial system was extended with a transport system that moved plates to a 37°C incubator and FLIPRTETRA plate reader.
2.3.3. Automation of a Manual Step, Custom Automation
We embarked upon a third phase of expansion after about 1 year. We desired to add into the system a Biotek plate washer in order to speed up the processing of cell plates. We also discovered that some of our assays would benefit from a period of room-temperature incubation. In order to accomplish a room-temperature incubation without blocking the system, there must exist a location to temporarily store plates during the incubation time. Therefore, we also added into the system a plate "hotel," a rack with multiple shelves that could store individual plates. As no "off-the-shelf" solution existed, we needed to work with a vendor to create the desired solution. We went back to CyBio to design these system additions. They proposed the use of a four-axis anthropomorphic robotic arm,
the KiNEDx SCARA robot from Peak Robotics, with a plate gripper. The use of this robot simultaneously resolved the issue of loading the plate washer from above and moving plates in and out of the plate hotel. The new devices were installed above the existing automated incubator, as shown in Fig. 2.3.
Fig. 2.3. The system was expanded by the inclusion of a SCARA robot that could transport plates to a plate washer and a room-temperature incubation hotel.
2.3.4. Final, Fully Automated System
For further expansion, we have designed the inclusion of instruments to better manage compound source plates: a cold incubator for compound storage, a plate piercer and plate sealer, and some warming stations to thaw the DMSO compound plates. Since the system is now rather densely packed, we developed a plan to include these new instruments at the open end of the pipettor carriage, using another anthropomorphic robotic arm to move plates among the devices. This design is shown in Fig. 2.4. However, in the end we decided that we would make the investment in a second screening system that incorporated these features.
Fig. 2.4. This design included compound plate handling by expanding the system on the left side of the pipetting device. A SCARA robot could move compound plates from a 4°C incubator to heat blocks, a plate piercer to open sealed plates, and a plate sealer to reseal plates after processing.
2.3.5. Comments
We have described the gradual buildup of a fully automated HTS system. While the example is based on our experience with CyBio, similar systems can be built with other vendors. It is important to note the following:
• Starting with individual workstations eliminates the need for internal customization.
• A good collaboration with the primary vendor is essential. By now, they all understand that they cannot provide the optimum solution for every detector or liquid handler. A vendor who emphasizes compatibility with other equipment should be selected over one who insists on using only their own equipment.
• Special attention should be paid to the scheduling software, since its quality drives the quality of the setup.
• A combination of off-the-shelf and custom solutions will almost always be necessary. It is important to minimize the latter, as custom work is more expensive.
• Maintenance should not be underestimated. Again, standard equipment will be cheaper and simpler to maintain than custom solutions.
2.4. Additional Equipment
For an HTS group, there are numerous additional instruments necessary to make a functional pipeline. For example, for follow-up after the primary screening, a typical practice is to retest the putative hit compounds. For this confirmation test, it is essential to have some pipetting device that is capable of cherry picking the desired wells. The minimum requirement of such a device is that it should take plates from a stack, read barcodes, and work from a worklist that describes which wells of each plate need to be picked. Additional desirable features include the ability to open and reseal the source plates and possibly to perform dilutions if that is desired for the confirmation experiments. In our case, we chose to use a Hamilton Starlet, with an eight-channel pipetting system and some features that allow it to use plates in stacks. This system is essentially a stand-alone workstation. It is possible to automate some aspects of compound management. At the minimum, there should be investment in good-quality freezers for storage of compound plates and in plate sealing and piercing devices. For more extensive compound management functionality, there are several good systems available for fully automated sample processing and storage. For a small screening group, such systems are typically not necessary, but they may become attractive as the internal screening library grows. Vendors of such systems include REMP, The Automation Partnership, Hamilton Storage Technologies (TekCel), MatriCal, and TTP LabTech.
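As a rough illustration of the worklist-driven cherry picking just described, the sketch below groups pick requests by source plate barcode so that each source plate needs to be retrieved and unsealed only once. The CSV column names and the data are hypothetical assumptions; they are not the worklist format of the Hamilton software.

```python
import csv
from collections import defaultdict

def load_worklist(path):
    """Read a cherry-pick worklist CSV with columns: source_barcode, source_well, dest_barcode, dest_well."""
    with open(path, newline="") as handle:
        return list(csv.DictReader(handle))

def picks_by_source_plate(worklist):
    """Group pick requests by source plate so each plate is unsealed and handled only once."""
    groups = defaultdict(list)
    for row in worklist:
        groups[row["source_barcode"]].append((row["source_well"], row["dest_barcode"], row["dest_well"]))
    return groups

if __name__ == "__main__":
    worklist = [  # in practice this would come from load_worklist("hits.csv")
        {"source_barcode": "PLT0001", "source_well": "B03", "dest_barcode": "CONF01", "dest_well": "A01"},
        {"source_barcode": "PLT0007", "source_well": "H11", "dest_barcode": "CONF01", "dest_well": "A02"},
        {"source_barcode": "PLT0001", "source_well": "D09", "dest_barcode": "CONF01", "dest_well": "A03"},
    ]
    for barcode, picks in picks_by_source_plate(worklist).items():
        print(barcode, picks)
```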
3. Data Handling
3.1. Requirements
Well-managed data handling is a critical, and perhaps underappreciated, issue for a successful HTS group. There are basically two things that must be considered: first, how to manage the data (store them and get them into whatever program will be used for analysis) and second, how to analyze the data. In most cases, there must be a central database to act as a repository for the data coming from the screening system and to hold information regarding plate contents (barcodes, compound identifiers, etc.). With only a small number of compounds, it is possible to work without a database. One can import data directly into the analysis software and work with the entire data set in one analysis session. The limit for that sort of approach depends upon your analysis software. We have analyzed the primary screening results for 100,000 compounds within a single Spotfire session. Using pipeline processing tools such as Pipeline Pilot, even larger data sets can be processed. However, for most practical purposes, a centralized database system is appropriate when working with more than a few hundred plates worth of data. Another requirement is a flexible data analysis engine. While the software included with most reader instruments includes some mechanism to compute the results, this is usually not sufficient for sophisticated screening assays. Some framework must exist that allows the specification of formulas and rules for the computation of the results, the detection of outliers, the determination of average responses, and the determination of hits. This framework should allow easy configuration with various processing options. Finally, there is great benefit to be obtained from tools that allow visualization of your data in many different formats, ranging from the most detailed view of the signal from individual wells up to the broadest view of the performance of one screening campaign versus another. Such tools are useful for understanding trends in the data that may be difficult to see by simply looking at numbers. For example, you will want to be able to see things such as the response across the plate, to see if there is a gradient due to temperature or cell growth; the Z' factor throughout the day, or from one day to another, to look for temporal effects; and the average response from all wells across many plates, to look for instrument problems related to individual wells, rows, or columns. The more visualization options that are present in the analysis software, the better.
3.2. Build or Buy?
There are many commercial products available to help manage and analyze the data. Some of these include Genedata Screener®, IDBS ActivityBase XE, SciTegic Pipeline Pilot, MDL® Assay Explorer, and Spotfire DecisionSite® for Lead Discovery.
But there is also the possibility to build a great deal of the data handling system in-house if you have sufficient time and talent available. You have to make a decision early on whether you want to buy or build the software. In general, we have gravitated toward the do-it-yourself approach. The reasons for this preference include the following:
– If you gradually grow from a small operation to become a large HTS group, then there is time to organically grow the data handling mechanisms. It can be almost a natural growth, in which you build new parts as you have new needs.
– The cost of some commercial systems can be quite high.
– In general, someone (you or the vendor or both) must customize commercial systems, and often the customizations take as much time and effort to manage as it would have taken to just build the system yourself. To be truly valuable, the commercial software must provide a very solid framework that would be prohibitively expensive and complicated for you to create from scratch (for example, complex visualization programs).
– To run an effective screening group, you need to have people on staff who can write computer code, even if only to customize a commercial product. With some good guidance and sufficient time, those people generally also have enough skill to build an in-house designed system.
– If you build it yourself, you have complete control over how it works. Of course, you also have the complete responsibility for doing it correctly and validating the results.
There are certainly disadvantages to building from scratch. There is a greater up-front time investment, and there may be certain sophisticated analysis tools that are too difficult to build. The algorithms for most basic HTS data analysis are not very complicated, really just simple math. But tools that allow visualization and pattern detection may be beyond the scope of what you want to build. If you decide to build some part of your data system on your own, inevitably you will make some mistakes in your implementation. You should accept the fact that you will probably have to re-engineer some parts of your system after working with it for a while and possibly even completely rebuild some parts.
3.3. Database
The database system can range in complexity. At its simplest, it should be able to hold plate identifiers, compound information for each well, and the biological result for each well. At its most complex, the database will also contain the molecular structure of each compound and additional information such as performance in other screenings and results of secondary tests.
Early on, there must be a decision on several points:
– Do you want to store chemical information in the database or simply an identifier of the compound? Because chemically aware databases are typically rather specialized, the most common approach is to have the screening database hold only screening data and simply contain a reference to the chemical compound; the chemistry can be managed in a separate cheminformatics database system.
– Should there be cross-screening knowledge or is each project distinct and unrelated to the others? For a service provider, it may be quite acceptable to have no relation between the compounds in one screening and another. However, if you will be running the same chemical library on several different targets, there can be some advantages in performing meta-analysis across several screening campaigns to allow identification of things such as frequent hitters and toxic compounds.
– Should the database contain only screening data or should it also incorporate other related data such as dose–response data or the results of secondary testing? Incorporating this type of data will typically be done in a fashion that is quite distinct from the plate-based screening data.
Regardless of whether you make the decision to build or buy a database system, you will have to decide how to integrate the database with other existing data within the organization. If you buy a commercial system, it is important that the system be open, properly documented, and employ standard technologies to allow easy federation to other informatics systems.
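Purely as an illustration of the kind of screening repository discussed in this section, the sketch below creates a minimal plate/well/result schema with Python's built-in sqlite3 module. All table and column names are assumptions made for this example, and structures are stored only as an external compound identifier, in line with the recommendation above.

```python
import sqlite3

SCHEMA = """
CREATE TABLE plate (
    plate_id     INTEGER PRIMARY KEY,
    barcode      TEXT UNIQUE NOT NULL,
    project      TEXT NOT NULL,
    layout       TEXT NOT NULL            -- e.g. '384_std', pointing at a plate-layout definition
);
CREATE TABLE well (
    well_id      INTEGER PRIMARY KEY,
    plate_id     INTEGER NOT NULL REFERENCES plate(plate_id),
    position     TEXT NOT NULL,           -- e.g. 'A01'
    compound_id  TEXT,                    -- reference into a separate cheminformatics system
    well_type    TEXT NOT NULL            -- 'sample', 'signal_control' or 'background_control'
);
CREATE TABLE result (
    result_id    INTEGER PRIMARY KEY,
    well_id      INTEGER NOT NULL REFERENCES well(well_id),
    raw_value    REAL,                    -- reduced instrument readout
    response     REAL,                    -- e.g. percent inhibition
    is_hit       INTEGER DEFAULT 0
);
"""

connection = sqlite3.connect(":memory:")   # a file path would be used for a persistent repository
connection.executescript(SCHEMA)
connection.execute("INSERT INTO plate (barcode, project, layout) VALUES (?, ?, ?)",
                   ("PLT0001", "KinaseX", "384_std"))
connection.commit()
print(connection.execute("SELECT barcode, project FROM plate").fetchall())
```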
3.4. Analysis
There are several levels of analysis that must be considered. First, there is the conversion of the raw data collected by the reader into some meaningful number. For example, a biochemical endpoint assay may simply require a single absorbance reading; a biochemical kinetic assay may need the computation of the slope of the absorbance curve over time; and for a cell-based kinetic response, the appropriate value may be the Max–Min or the area under the curve. In most cases, the instrument software is capable of producing some "reduced" value that is appropriate for use. In some cases, however, there can be some advantage to capturing all of the raw data in the data system and performing the data reduction during the analysis process. The second level of analysis is the conversion of the well response into a biologically meaningful value. For example, if you are performing an antagonist screening, you may choose to compute the percent inhibition, based upon the average response of control wells. From this biological response, one can determine putative hit compounds based upon some criteria, such as a simple threshold.
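The two levels of analysis described above can be sketched in a few lines: a kinetic trace is first reduced to a single value (Max–Min or area under the curve), which is then converted into percent inhibition against the plate controls. The numbers and the 50% hit threshold below are invented for illustration.

```python
import numpy as np

def reduce_kinetic(trace, times=None, method="max_min"):
    """Reduce a kinetic readout (one value per time point) to a single well response."""
    trace = np.asarray(trace, dtype=float)
    if method == "max_min":
        return trace.max() - trace.min()
    if method == "auc":
        return np.trapz(trace, x=times)      # area under the curve
    raise ValueError(f"unknown reduction method: {method}")

def percent_inhibition(sample, signal_controls, background_controls):
    """Convert a reduced well value into percent inhibition relative to plate controls."""
    high = np.mean(signal_controls)          # 100% assay activity
    low = np.mean(background_controls)       # 0% assay activity
    return 100.0 * (high - sample) / (high - low)

# Toy example: one antagonist-treated well plus plate controls.
well = reduce_kinetic([100, 180, 210, 220], times=[0, 10, 20, 30])
inhibition = percent_inhibition(well,
                                signal_controls=[400, 390, 410, 405],
                                background_controls=[20, 25, 18, 22])
print(f"reduced response = {well:.0f}, inhibition = {inhibition:.1f}%")
is_hit = inhibition >= 50.0                  # illustrative hit threshold
print("putative hit" if is_hit else "inactive")
```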
Important parameters that should be computed include the plate quality metrics such as Z' and Z factors, the number of outlier control wells, and the number of putative hits per plate. Other analysis capabilities that are good to have include the ability to easily detect systematic effects, such as bad wells, rows, or columns, which can indicate instrumentation problems; repetitive patterns, such as every third plate having a lower response, which can indicate a problem in the overall processing of plates; and gradients across the plate, which can relate to temperature effects, cell seeding problems, etc.
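A rough sketch of these per-plate checks is shown below: Z' from control wells, a hit count, and row/column means used to flag systematic effects. The rule used to flag a row or column (its mean deviating from the plate mean by more than three standard errors) is an arbitrary choice made for illustration, not a published criterion.

```python
import numpy as np

def z_prime(signal_controls, background_controls):
    """Z' = 1 - 3*(SD_signal + SD_background) / |M_signal - M_background|."""
    s, b = np.asarray(signal_controls, float), np.asarray(background_controls, float)
    return 1.0 - 3.0 * (s.std(ddof=1) + b.std(ddof=1)) / abs(s.mean() - b.mean())

def plate_qc(responses, hit_threshold=50.0, n_se=3.0):
    """responses: 2D array of per-well responses (rows x columns) for the test wells."""
    responses = np.asarray(responses, float)
    overall = responses.mean()
    report = {"n_hits": int((responses >= hit_threshold).sum()),
              "suspect_rows": [], "suspect_cols": []}
    # Flag rows/columns whose mean deviates from the plate mean by more than n_se standard errors.
    for axis, key in ((1, "suspect_rows"), (0, "suspect_cols")):
        means = responses.mean(axis=axis)
        se = responses.std(ddof=1) / np.sqrt(responses.shape[axis])
        report[key] = [int(i) for i, m in enumerate(means) if abs(m - overall) > n_se * se]
    return report

rng = np.random.default_rng(0)
plate = rng.normal(5.0, 2.0, size=(16, 22))     # 384-well plate minus two control columns
plate[:, 0] += 4.0                              # simulated systematic error in the first column
print("Z' =", round(z_prime([400, 390, 410], [20, 25, 18]), 3))
print(plate_qc(plate, hit_threshold=11.0))
```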
3.5. Data Handling Case Study
In the case of Axxam, we decided to build almost all of the data management and analysis system in-house. The only commercial product that we rely upon is Spotfire, with some customizations as noted below. The data analysis system that we designed is constructed around a central database, which is implemented in Microsoft SQL Server. The components include the database schema itself, importer code to read the instrument data files, stored procedures to perform the analysis, and some reporting tools. The database schema consists of about 30 tables. We decided early on to store in the database all of the raw data from the instrument. Since most of our experiments are kinetic readings (occurring over the course of 30 seconds to 10 minutes), this means that we store the luminescence or fluorescence value of every time point for every well. Therefore, this table alone will have several million rows of data related to a single screening. Obviously, proper management of indexes and table partitions is important for acceptable performance. There is the possibility of storing only reduced data, such as a single number that represents the kinetic response of each well (such as Max–Min or some similar reduction of the kinetic response). However, storing all of the data is beneficial in case some change needs to be made in the computation of the response and also for the examination of the raw kinetic responses. When trying to understand a problem, it is sometimes advantageous to be able to see all of the data from a certain range of plates; this is made much easier by being able to retrieve the data from the database as opposed to retrieving it from many original instrument files. The first step in analyzing screening data involves importing the data from the file produced by the instrument into the database. For this we have created specific importer code, written in a scripting language. Since the output of each instrument is unique, there must be some specific code written for each file type. These importers are then further automated by an additional application that scans folders on the instrument computers to effect the import of new data files. In this way, data that are generated can be imported into the database immediately after the plate is run and the analysis triggered.
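A skeletal version of such a folder-scanning importer is sketched below, assuming a hypothetical CSV export with one row per well and time point; every real reader writes its own format, so the parser would differ in practice. Files already imported are remembered so each plate is loaded exactly once.

```python
import csv
import time
from pathlib import Path

SEEN = set()   # in a production system this bookkeeping would live in the database

def parse_reader_file(path):
    """Parse a hypothetical CSV export: one row per well and time point (well, seconds, value)."""
    with open(path, newline="") as handle:
        return [(row["well"], float(row["seconds"]), float(row["value"]))
                for row in csv.DictReader(handle)]

def import_new_files(folder, store):
    """Import every not-yet-seen data file in `folder` and hand the parsed rows to `store`."""
    for path in sorted(Path(folder).glob("*.csv")):
        if path.name in SEEN:
            continue
        store(path.name, parse_reader_file(path))
        SEEN.add(path.name)

def watch(folder, store, poll_seconds=30, cycles=None):
    """Poll the instrument folder so plates are imported right after they are read."""
    n = 0
    while cycles is None or n < cycles:
        import_new_files(folder, store)
        time.sleep(poll_seconds)
        n += 1

if __name__ == "__main__":
    watch("instrument_output", store=lambda name, rows: print(name, len(rows), "rows"),
          poll_seconds=1, cycles=1)
```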
For the analysis, we implemented a type of configurable execution. The analysis is broken into many steps, each of which is accomplished by running a stored procedure. The particular procedures to run and the order of execution are dictated by entries in a database table. Thus, different steps can be configured for different projects. When defining a new project, we typically copy the analysis steps from a similar project processed in the past and make any changes necessary. Furthermore, we have another database table containing specific values defined for each project, such as the Hit Threshold, acceptable Z' value, and acceptable number of bad wells. The database also holds the plate layout for each project, which defines the location of control and test wells. After importing the data, the primary analysis is automatically triggered. Primary analysis includes computation of the kinetic response (for example, Max–Min and area under the curve); determination of outliers for the controls; computation of the average and standard deviation of the kinetic response of the controls; computation of the desired biological response value for each test well (such as percent activation or inhibition) with respect to the controls; and computation of plate quality parameters, including Z' and Z factors, number of hits, and number of outlier control wells. A flag is set on the test wells that meet the Hit criterion for the project, and a flag is set at the level of the plate to indicate if it met the established quality criteria. For visualization of the information, we use Spotfire DecisionSite. This software is a general-purpose data visualization and analysis tool. The software is very open to developers, and we have performed a number of customizations. We have defined numerous information links, which allow the retrieval of specific data from the screening database, such as plate quality statistics or the biological response from all wells. Examination of such statistics allows us to detect problems with the screening and to fine-tune our hit thresholds. We also use Spotfire for some data analysis that is not amenable to the automated processing of the database. This includes processing of assay optimization experiments, small screenings and compound profiling that do not conform to the standard organization of samples, and dose–response experiments. To assist in these analyses, we have created numerous custom tools that perform specialized functions and guides that take the user step by step through the analysis process. A great advantage of using Spotfire to accomplish the data analysis is that each step of the process can be checked visually, and unexpected problems can be detected. Sometimes the best way to process the data is not clear, so some experimentation in Spotfire with different approaches can help us to configure the eventual processing rules that are defined in the database for a particular screening project. Spotfire is also very valuable to confirm the results produced by database processing; obtaining the same results from both systems for a few plates is a good assurance that the automated database processing is configured correctly.
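The table-driven analysis described in this case study can be mimicked in a few lines: the steps to run and their order are data rather than code, so a new project is configured by copying and editing a step list and a parameter set. The step names, parameters and values below are invented for illustration and do not reproduce the actual stored procedures.

```python
import numpy as np

# Registry of available analysis steps; each takes and returns a context dict.
def compute_max_min(ctx):
    ctx["response"] = ctx["traces"].max(axis=1) - ctx["traces"].min(axis=1)
    return ctx

def percent_inhibition(ctx):
    high, low = ctx["params"]["control_high"], ctx["params"]["control_low"]
    ctx["inhibition"] = 100.0 * (high - ctx["response"]) / (high - low)
    return ctx

def flag_hits(ctx):
    ctx["hits"] = ctx["inhibition"] >= ctx["params"]["hit_threshold"]
    return ctx

STEPS = {"max_min": compute_max_min, "pct_inhibition": percent_inhibition, "flag_hits": flag_hits}

def run_project(traces, step_names, params):
    """Execute the configured steps in order, as a stored-procedure chain would in the database."""
    ctx = {"traces": np.asarray(traces, float), "params": params}
    for name in step_names:
        ctx = STEPS[name](ctx)
    return ctx

# Project configuration, as it might be copied from a previous project and edited.
config = {"steps": ["max_min", "pct_inhibition", "flag_hits"],
          "params": {"control_high": 200.0, "control_low": 10.0, "hit_threshold": 50.0}}
result = run_project([[10, 60, 80, 90], [12, 160, 190, 200]], config["steps"], config["params"])
print(result["inhibition"].round(1), result["hits"])
```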
4. Operational Issues
4.1. Personnel
The selection of the proper personnel is truly critical for a successful screening group. Obviously there must be qualified biological scientists who can run the screening campaigns, interpret the biological responses being observed, and understand when there are problems. It may be possible to have less sophisticated technicians running the system. But having talented, knowledgeable people in charge of the execution of the screening, who are in daily contact with the process, is critical to achieving good quality results. Depending upon your organization and the type of screening assays being performed, the screening group may also include people responsible for cell culture, enzyme preparation, compound management, and management of consumables. There are several other technical positions that are very important for a successful screening group. These include people involved in automation, instrument programming, data system development, and data analysis. Regarding the automation personnel, there are at least two distinct roles that are important. First, someone must be an expert in the system software. This means understanding how to create automated protocols, optimize schedules for proper parallel execution and maximal throughput, interact with the database, and operate the system properly. In most circumstances, the actual users of the screening system will not be experts in developing new screening protocols, but rather will simply run the protocols that are perfected by the system expert. Another important automation role is more of an automation development position. Depending upon the complexity of your system and the frequency with which you need new functionality, this may or may not be a full-time position. This person should be capable of debugging system errors, lower-level programming to accomplish things such as reconfiguring system communications, adding new error handling, adapting the system to new plate handling concepts, and possibly integrating new instruments. The competencies of such a person would include database programming, script programming, some knowledge of mechanics and electronics, and a great deal of generalized computer knowledge. In some cases, it is not possible or convenient to have a person on your staff with such specialized skills. For many of these functions, it is possible to rely upon the system vendor to perform
such customizations, or to engage automation consultants. However, when using a vendor in such a way, it is still critical that there be someone within the screening group with a relatively high level of technical knowledge who can interact with the vendor. The technical specialists working for the vendor absolutely need a counterpart within your organization with whom they can interact and speak the same technical "language." On the data analysis side, there are also several important roles. Regardless of whether you are using commercial analysis software or an in-house-created data system, there must be someone within the screening group who understands the software in detail. Obviously, if the system is created in-house, there should be an expert within the organization (maybe not within the screening group) who understands all details of the data system and who can make corrections and modifications. If the system is from a commercial source, then the experts will be within that external provider. In either case, there should be a person within the screening group who understands both the software and the biology in sufficient detail to be able to communicate problems and desired improvements. Finally, there must be one or more persons responsible for data analysis. It would be optimal if the people in charge of the execution of the screening campaigns were also responsible for the data analysis, because they are most intimately aware of the special issues for each screening. However, due to the complexities of analysis software, it is often more practical to have some expert users dedicated to overseeing the data analysis.
4.2. Making It Work Reliably
The design and establishment of a screening instrumentation platform is a substantial investment of time and effort, but it is really only the first part of creating a functional screening system. Improving the screening system in terms of throughput, functionality, and reliability is a never-ending process. The company's management needs to understand that the screening team will devote substantial time to small but important improvements, especially during the first year of operation. Furthermore, it must be anticipated that there will be bugs and equipment failures that can result in weeks of downtime. Regarding instrumentation problems, it is typically advisable to maintain service contracts with the equipment vendors to ensure a rapid response. Outright failure of instruments is simple enough to deal with: if the robot stops moving, or the reader does not read, the vendor will normally be able to understand and fix the problem quickly. However, the problems that cause the greatest loss of system efficiency are those that are not so simple to understand, especially problems that are not reproducible. Typical problems of this type include occasional problems in loading a plate from a stacker or an incubator, random tip loading problems, dropping plates or misplacement of instrument locations, and
random, strange software problems. These types of tiny problems can interrupt a screening run and possibly cause the loss of an entire day's work. In such cases, simply calling the vendor may not result in a quick solution. It is imperative that someone watch the system carefully, make some hypothesis regarding the cause of the problem, and test various possibilities to try to reliably reproduce the problem. Only then can the vendor solve the problem easily. Of course, this type of debugging takes a great amount of time. While it is possible to convince a vendor that they must send a service technician for several days until they can find the problem, this is typically not the best solution. It is much better to have someone on staff who can invest the necessary time to debug the problem.
5. Conclusion
Building a high-throughput screening facility within an organization that has no HTS experience can be a daunting proposition. Obviously it is not as simple as just purchasing a screening system and some analysis software. Regardless of whether you decide to rely entirely upon external vendors and commercial software or you decide to build much of the functionality yourselves, there is a steep learning curve and many pieces that must be put in place in order to make a functional screening facility. The most important asset that must be acquired is knowledge – knowledge of how to design a screening system, how to accomplish the required data handling and analysis, and how to run the facility. This knowledge can be acquired by bringing in experienced people or by slowly building up the expertise from within the organization. Instrumentation and software vendors can offer a lot of possibilities, but someone within the screening team must have a clear idea of what is required. We have presented one view of how to build a screening capacity in a step-by-step approach, growing in instrumentation and data handling abilities over the course of several years. While it is not easy, the creation of a well-functioning screening facility can add great value to a research organization.
Chapter 3
Informatics in Compound Library Management
Mark Warne and Louise Pemberton
Abstract
The ability to accurately and efficiently manage inventory is critical to ensure cost-effectiveness and guarantee integrity of samples in drug discovery. While many large companies utilise both customised and off-the-shelf automated systems for compound library management, these systems do not come cheaply. Without doubt, for large pharma the return on investment for one of these systems can be justified; however, the upfront cost is typically prohibitive for smaller businesses looking to stretch their limited cash reserves as far as possible. At Exelgen we have shown that for any business with the combination of fit-for-purpose informatics, relatively inexpensive laboratory hardware and well-constructed SOPs (standard operating procedures), it is possible to undertake cost-effective, large-scale compound library management in a small business environment. The informatics and hardware environment deployed at Exelgen are described in detail.
Key words: Informatics, Drug discovery, Compound management.
1. Introduction
For Exelgen (1), when providing small-molecule compounds to the pharmaceutical and life science industries, the ability to accurately and efficiently manage inventory is critical to ensure cost-effectiveness and guarantee integrity of samples for the customer. Equally, companies receiving compounds, whether established pharmaceutical giants or emerging biotechnology companies, will have a need to manage their ever-growing compound collections that serve as a bedrock for biological screening efforts. To ensure the financial outlay associated with high-throughput screening (HTS) is justifiable, it is critical that compounds for testing be located simply and that one be confident of the material identity, purity and weight without duplicated effort.
While many larger companies utilise both customised and off-the-shelf automated systems for compound library management, these systems do not come cheaply. For example, the fully bespoke system implemented by AstraZeneca (2) costs many millions of dollars (3). Without doubt, for large pharma the return on investment for one of these systems can be justified; however, the upfront cost is typically prohibitive for smaller businesses looking to stretch their limited cash reserves as far as possible. At Exelgen we have shown that for any business with the combination of fit-for-purpose informatics, relatively inexpensive laboratory hardware and well-constructed SOPs (standard operating procedures), it is possible to undertake cost-effective, large-scale compound library management in a small business environment.
2. Key Components in Compound Library Management
Compound library management brings together all the principal components of daily laboratory work processes in a single cohesive entity, as depicted in Fig. 3.1. To be able to rapidly locate stored compounds and be confident of the material identity, purity and weight, an effective compound library management system requires three key components:
Fig. 3.1. Principal components of daily laboratory work processes (reagent availability, products, analytical methods and results, synthetic protocols) are brought together in a single cohesive entity.
• An integrated informatics environment allowing rapid access to all data associated with a compound
• Analytical hardware and infrastructure
• Suitable storage and compound-handling infrastructure
Analytical methodologies and standards are well known and extensively covered in the literature (4). Exelgen has not looked to reinvent the wheel in its implementation of these techniques and methodologies. Using standard analytical equipment combined with some customised assay protocols/conditions, data are interpreted using commercially available software packages (5). The data generated are then simply parsed from the analytical instruments directly into the informatics system and stored in raw format for retrieval as necessary. Discussion of storage solutions was the primary focus of previous editions of this book (6); therefore, in this edition of the chapter we shall focus on the implementation of an informatics environment and the use of appropriate SOPs.
3. Compound Library Management Informatics
When Exelgen was founded back in 1997, throughout the industry the idea of fully integrated compound library management was still in its infancy. Today, off-the-shelf and customised solutions are commonplace; however, at the time Exelgen was looking to implement its own system, it was reliant on its then parent company Tripos (7) to develop a solution. While not an option available to all businesses, this enabled the system to be built entirely fit-for-purpose. Named ChemCore™, Exelgen's informatics system combines chemical registration, inventory and sample management. The system goes beyond the traditional definition of compound library management, specifically logging reagent purchase, product synthesis, analysis, use, movement and transfer of all compounds, essentially following a compound from cradle to grave. Used throughout Exelgen by computational, research, analytical and high-throughput chemists, chemical inventory managers and sample preparation scientists, this informatics system exists in a single environment, ensuring its effectiveness in an integrated workflow. In addition to being fully integrated, the ChemCore system is able to easily adapt to scale, an essential requirement since Exelgen synthesizes and analyses in excess of 300,000 compounds annually. This means the ChemCore system is required to track 20 million containers, 10 million structures, 6 million analyses, 1 million purifications and 5 million reactions.
While Exelgen was fortunate that it could have an entirely tailor-made informatics solution, it is now possible to purchase a number of software tools off-the-shelf, or semi-customized, for this task (8). ChemCore itself was also commercialized to some extent, and in addition to being deployed at Exelgen, it has been implemented in a customized form at Schering AG, now part of Bayer Healthcare (9). The following narrative describes what we believe are the essential features to be considered when implementing an informatics solution for a compound library management system.
4. Informatics System Requirements
4.1. Ensuring an Informatics System Is Fit for Purpose
One must ensure that when implementing an informatics system for compound library management, it is fit for purpose. The Exelgen process for library production is depicted in Fig. 3.2, and the ChemCore system was designed to integrate with each element of the process. Off-the-shelf systems and semi-customizable systems are usually built using multiple components, and selecting the correct components is key. At Exelgen the system consists of three main pieces:
– Registration
– Reagent selection and ordering
– Inventory management
Additionally, a graphical user interface (GUI) front end provides an effective electronic lab notebook where synthetic protocols and the like can be associated with product containers.
Fig. 3.2. Exelgen library production process.
4.2. Registration of Compounds/Structures
Compound registration is central to any compound library management system. At the entry level, either as a single structure or as a series of structures from an SD file, compounds/structures should be registered with a unique compound identifier. In a departure from the standard methodology of compound registration, Exelgen scientists register samples they intend to make rather than, as is the norm for most scientists, final characterized samples. The rationale for this is twofold. First, the fact that the scientist did not get the expected product should not be considered a failure, but rather an opportunity to gather further information about the proposed reaction and potentially gain insight into the root causes of the reaction not working. Second, at Exelgen, synthesis of between 1 and 200 samples at a time would preclude the scientist from registering each product individually. Proposed structures are therefore registered using high-throughput computational methodologies. Subject to the nature and size of your business, it may also be important to consider how you enforce compound security within your management system. At a minimum, one should consider a structure-based system with control over personnel access, ensuring that any requirements one may have with regard to exclusivity and confidentiality are met. The registration of structures at Exelgen is depicted in Fig. 3.3, demonstrating the registration and security dialogues.
Fig. 3.3. Registration and security dialogues.
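A minimal sketch of batch registration from an SD file, in the spirit described above: records are split on the '$$$$' delimiter and each proposed structure receives a sequential identifier before any synthesis takes place. The 'EXG-' identifier format and the lack of duplicate checking are simplifying assumptions; a production registrar would canonicalise structures with a cheminformatics toolkit and enforce uniqueness.

```python
def read_sdf_records(path):
    """Split an SD file into individual molecule records (separated by '$$$$')."""
    with open(path) as handle:
        text = handle.read()
    return [rec.strip() + "\n$$$$\n" for rec in text.split("$$$$") if rec.strip()]

def register(records, registry=None, prefix="EXG"):
    """Assign a sequential unique identifier to every proposed structure."""
    registry = {} if registry is None else registry
    for record in records:
        compound_id = f"{prefix}-{len(registry) + 1:06d}"
        registry[compound_id] = record      # structure block kept verbatim against the new ID
    return registry

if __name__ == "__main__":
    # registry = register(read_sdf_records("proposed_library.sdf"))
    fake_records = ["mol-block-1", "mol-block-2"]          # stand-ins for real molfile blocks
    print(list(register(fake_records).keys()))             # ['EXG-000001', 'EXG-000002']
```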
Beyond the scope and requirements of most compound library management systems is the capability to implement ideas management. However, the contract research services nature of the Exelgen business necessitates the provision of security around not just physical compounds but also ideas. We have therefore also implemented the registration of ideas and reactions. These dialogues are depicted in Figs. 3.4 and 3.5.
Fig. 3.4. Idea registration and reaction registration dialogues.
4.3. Reagent Management
While not essential for all companies, should one be engaged in compound synthesis, a reagent management component is a particularly useful utility within a compound library management system. In the same way that it is useful to be able to track purity, age and stock quantity of a screening compound, when reagents are entered into the system, it becomes possible to determine whether to retrieve reagents from in-house stores or purchase new reagents. By adopting a reagent management process, in addition to a more cost-effective management of internal reagent inventory, one can enable improvements in timing and quality of reagent delivery from suppliers. In considering the wealth of reagent suppliers available, it is not inconceivable that precious scientific resource is deployed in selecting samples from one of these reagent suppliers. However, once implemented, a simple algorithm that scores the supplier based on a variety of metrics including quality of service, reagent pack size and cost per unit of material can equally propose a supplier. When considering single reagent purchase, the time saved is inconsequential; however, for the generation of sparse matrix sample libraries, where selection of several hundred reagents from a variety of suppliers is a requirement, the task becomes much more cumbersome.
Fig. 3.5. (A) Addition of synthetic protocols tied to reaction types. (B) Addition of synthetic protocols ties a unique identifier and text-based synthetic protocol to a series of products.
Having the system select a suitable supplier, based on the criteria already outlined, can save several hours of manpower. This system, called the price-supplier score, is depicted in Fig. 3.6. In addition to using an algorithmic scoring system for suppliers, at Exelgen the reagent management group supports the scientific team with the provision of reagent samples, ultimately allowing the scientists to focus on their areas of expertise, scientific research and development. The price-supplier score is further supplemented with financial actuals reported as part of the acquisition process. With the advent of online ordering and invoicing, the importation of these data back into the system is simple and worthwhile.
Fig. 3.6. Exelgen's price-supplier scoring system.
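The sketch below shows one possible form of such a price-supplier score, combining a service rating, pack-size fit and cost per unit into a single number. The three metrics, their weights and the normalisation are assumptions made for illustration; the actual ChemCore scoring formula is not described in this chapter.

```python
def supplier_score(supplier, needed_amount_g, weights=(0.4, 0.3, 0.3)):
    """Score one supplier offer; higher is better.

    supplier: dict with 'service' (0-1 delivery-reliability rating),
              'pack_size_g' and 'price_per_g'.
    """
    w_service, w_pack, w_price = weights
    # Pack-size fit: fraction of the purchased pack that is actually needed (less waste scores higher).
    packs = -(-needed_amount_g // supplier["pack_size_g"])        # ceiling division
    pack_fit = needed_amount_g / (packs * supplier["pack_size_g"])
    # Cost term: cheaper per gram scores higher (simple reciprocal normalisation).
    price_term = 1.0 / (1.0 + supplier["price_per_g"])
    return w_service * supplier["service"] + w_pack * pack_fit + w_price * price_term

offers = {
    "SupplierA": {"service": 0.95, "pack_size_g": 25, "price_per_g": 4.0},
    "SupplierB": {"service": 0.80, "pack_size_g": 5,  "price_per_g": 6.0},
}
needed = 10  # grams required for the library synthesis
best = max(offers, key=lambda name: supplier_score(offers[name], needed))
print({name: round(supplier_score(offer, needed), 3) for name, offer in offers.items()}, "->", best)
```

In practice the weights would be tuned over time, and the financial actuals fed back from invoicing, as noted in the text.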
The programming logic for reagent management mimics that of screening compound management, but fields associated with the structures are modified to be fit for purpose.
4.4. Retrieving Compound Structures and Associated Information
With structures recorded in an informatics system, a flexible search engine is key to being able to retrieve the information quickly and effectively. One should not have to worry about whether the correct naming convention is being used or on which plate a particular compound may be located. A multi-parameter search algorithm that can search compound identifiers, exact structures, substructures and all associated data fields is essential in any system. At Exelgen our customized system extends this approach to enable searching of the reaction schemes encoded in the system, including fields such as exact product, product substructure, reagent substructure, combined product and reagent substructure, and the specific synthetic protocol. Dialogues depicting these search options are shown in Fig. 3.7.
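A rough sketch of a multi-parameter search over a small in-memory registry is given below, combining an identifier lookup, a substructure query and a purity filter. The substructure matching uses the open-source RDKit toolkit purely as an example of how such a query could be expressed; it is not the implementation used in ChemCore.

```python
from rdkit import Chem

def search(registry, compound_id=None, substructure_smarts=None, min_purity=None):
    """registry: dict of compound_id -> {'smiles': ..., 'purity': ..., 'plate': ...}."""
    pattern = Chem.MolFromSmarts(substructure_smarts) if substructure_smarts else None
    hits = []
    for cid, data in registry.items():
        if compound_id and cid != compound_id:
            continue
        if min_purity is not None and data.get("purity", 0) < min_purity:
            continue
        if pattern is not None:
            mol = Chem.MolFromSmiles(data["smiles"])
            if mol is None or not mol.HasSubstructMatch(pattern):
                continue
        hits.append(cid)
    return hits

registry = {
    "EXG-000001": {"smiles": "c1ccccc1C(=O)O", "purity": 97.5, "plate": "PLT0001"},
    "EXG-000002": {"smiles": "CCN(CC)CC",      "purity": 88.0, "plate": "PLT0002"},
}
print(search(registry, substructure_smarts="c1ccccc1", min_purity=90))   # -> ['EXG-000001']
```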
4.5. Benefits of Associating Newly Generated Physical Data with Structures in the Informatics System
With structures already registered into the informatics system, at the point of synthesis a unique bar code on the reaction vessel is linked to the structure by association with the protocol ID. This bar code serves as the linchpin for all the associated physical transformations during the sample lifetime. Post-synthesis, the relationship between the parent reaction vessel and the derived samples, be they synthetic or analytical ('daughter samples'), is maintained. For example, analysis of samples is performed using Waters and Agilent (10, 11) instruments, with the results being interpreted through the use of the standard MassLynx (5) software. The raw
Fig. 3.7. Compound and reaction search dialogues.
data are stored, with the interpreted purity value being parsed into the compound management system. There can be multiple analytical results associated with each structure, but only one unique analysis per bottle. Consequently, for a final product or reagent, when daughter samples are created directly from the parent bottle, the analytical ID of the parent is automatically transferred. These recorded purity values can then be consolidated and utilized to provide decision support in synthetic operations. When synthesizing hit sets around a particular product of interest, it may be a requirement for the same reagent to be used multiple times. In a set of 20 reactions using the same reagent, the poor recovery of product from one reaction would not necessarily be a cause for alarm. However, should this be the case for 15 reactions, further investigation would be warranted. In reviewing the results for a given synthesis effort, the analytical results can be used to depict the following:
• Impact of reagent
• Impact of reaction conditions
• Investigation of poor success rates
Once armed with this information, ChemCore can be interrogated to determine which reagent batch was used, and further investigation can be initiated. Additionally, when considering the purification of samples, whether it be the selection of samples for purification or the selection of purification method, data can be extracted from our informatics system, allowing us to proactively implement changes to our process to improve sample recovery in terms of purity and sample amount.
4.6. Inventory Management
As with all warehouses, being able to locate the item of interest is vital to the operation of the business. With sample inventory management, the compound products and reagents are likely to be in multiple locations, be that stored at –20°C in solution or stored in various quantities 'neatly' on the shelf. At Exelgen the ChemCore system will accommodate a strategy of returning material to exactly the same location, as required for reagents, which by their very nature have specific storage requirements or segregation conditions. Alternatively, the system can place samples in any empty location and record where they are stored, as in the case of our product storage. Exelgen, as a small business, does not have the luxury of a multi-million dollar storage facility, as is common within major pharma; however, by working to our strengths, we have built a robust inventory management process that, with a small team of dedicated sample technicians, enables us to compete on the same playing field for sample delivery. Each sample is tracked using a unique system-assigned bar code encoded with a code-128 symbology or 2D bar codes; different samples of the same batch or of the same compound are tracked using their own unique bar code. Plates, storage bins, rooms and sites are also hierarchically bar coded in a similar manner to provide a full, unique inventory location for each sample. All movement operations and material-handling operations, such as liquid handling and hand weighing, are tracked for each sample. Once the parent containers are released into circulation, and daughter samples generated from the parent, the history for each sample diverges. Equally, should the same structure be remade, and the material stored, the bar code will record the batch history as being different from the original sample. In a simple search for the structure, both batches of material will be returned; on further investigation, the differences in sample history will be obvious. An additional twist with sample management is one of sample integrity. Samples that have been stored for many years may have a tendency, however good the storage system may be, to degrade. At Exelgen we have a rolling QC process where, with multiple analyses associated with each sample bottle, we can evaluate the potential degradation of samples and cost-effectively determine what the fate of such a sample should be.
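A small sketch of the hierarchical, bar-code-keyed tracking described above: each container has a unique bar code, its location is a site/room/store/plate path, and every movement is appended to an audit trail so diverging batch histories remain visible. The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Tuple

@dataclass
class Container:
    barcode: str                       # unique, system-assigned identifier
    compound_id: str
    batch: str
    location: Tuple[str, ...] = ()     # hierarchical path, e.g. (site, room, freezer, rack, box)
    history: List[tuple] = field(default_factory=list)

    def move_to(self, *path):
        """Record a movement operation and update the current location."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.history.append((stamp, self.location, path))
        self.location = path

vial = Container(barcode="EXG-BC-0001234", compound_id="EXG-000001", batch="B1")
vial.move_to("Bude", "Store2", "Freezer-20C", "Rack07", "Box12")
vial.move_to("Bude", "Lab1", "Bench3")          # pulled out for weighing
print(vial.location)
print(len(vial.history), "movements recorded")
```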
4.7. Sample Formatting and Delivery
With the advent of HTS, sample formats have been developed internally to match each company's internal process. As a service provider to the pharma/life science industry, Exelgen is required to be the ultimate flexible partner, since each company has a subtly different delivery requirement based on these 'in-house' processes. Over the past decade, as methodology in HTS has become more sophisticated, we have seen the requirements move from the provision of a single sample, such as 1 mg in a 1.4-ml tube, to samples being requested at micromolar concentrations and delivered in higher-density plate formats with more complex delivery strategies. Since the process within each company is different, the data requirements are also different. Delivery of samples to compound acquisition teams must be supported with corresponding data in their specified format. Data are typically requested as a single SD file containing all standard information; however, requirements as diverse as sketch files with corresponding data supplied in .csv format, or a Microsoft® Access™ database (12), are not unheard of. While this may not be perceived as arduous, typically pharma provide their specific containers to meet their HTS sample format requirements. In an effort to keep the integrity of the internal bar code numbering system within ChemCore intact, we have had to customize our system to ensure that each sample can be assigned a proxy bar code, usually the customer-supplied vial identifier, which can be in a variety of formats including 1D symbologies and 2D bar codes.
4.8. Avoiding Errors
Having automated the tracking of samples from registration of reagents and products, through analysis, purification and preparation for shipping, one can imagine that should an error in handling occur, the ability to reproduce the same error many hundreds of times is not inconceivable. Therefore, having intervening and final QA checks prior to delivery ensures that sample integrity is absolute. Exelgen follows a series of SOPs designed to provide this assurance, which include random correlation of actual data with the system's anticipated data by intrusive sampling throughout the process.
5. Case Study: Provision of 50,000 Samples in Customer-Specified Format
Exelgen was required to provide 50,000 of its off-the-shelf LeadQuest compounds to a client in the following complex format:
• 625 plates, 96-well format, containing 20 μl leftover LQ library
• 157 plates, 384-well format, containing 15 μl Cherry Picking mother
• 157 plates, 384-well format, containing 45 μl Mother leftover
• 157 plates, 384-well format, containing 5 μl daughter plate
Source plates for the products to be delivered held the 50,000 compounds, at fixed concentration in DMSO, plated as 80 compounds per rack with columns 1 and 12 empty. Based on the customer-specified format, the following series of processes was enabled using the ChemCore informatics system, with a manpower of fewer than four people over 2 weeks.
(a) Source samples were thawed, but at a rate that ensured all thawed samples could be plated within a working day, to minimise freeze-thaw cycles.
(b) For each 96-well source plate, create 2 × 384-well master plates (Fig. 3.8; a quadrant-mapping sketch follows the figure caption).
(c) Freeze remaining samples after plating of the 2 × 384-well master plates.
(d) Freeze the Cherry Picking mother.
(e) Dilute the master plate.
(f) Create six daughter plates.
(g) Freeze and store.
This whole process from library plate to assay plate is depicted in Fig. 3.9.
Fig. 3.8. Transfer of 96- to 384-well plate format.
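The 96- to 384-well transfer of Fig. 3.8 is commonly done as a quadrant interleave, in which four 96-well plates map into one 384-well plate, each occupying every other row and column with a fixed offset. The sketch below computes that generic mapping; it is an assumption for illustration rather than the exact pattern used in this case study.

```python
import string

def map_96_to_384(quadrant, well_96):
    """Map a 96-well position (e.g. 'B07') into a 384-well position for quadrant 0-3.

    Quadrants interleave: quadrant 0 -> (row*2, col*2), 1 -> (row*2, col*2+1),
    2 -> (row*2+1, col*2), 3 -> (row*2+1, col*2+1).
    """
    row96 = string.ascii_uppercase.index(well_96[0])          # A-H -> 0-7
    col96 = int(well_96[1:]) - 1                               # 1-12 -> 0-11
    row384 = row96 * 2 + (quadrant // 2)
    col384 = col96 * 2 + (quadrant % 2)
    return f"{string.ascii_uppercase[row384]}{col384 + 1:02d}"

# The same source well lands in a different interleaved position for each of the four quadrants.
print([map_96_to_384(q, "B07") for q in range(4)])   # ['C13', 'C14', 'D13', 'D14']
```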
Fig. 3.9. Case study flow chart, showing the plate flow from library to assay plate for the 50,000 compounds (panels: Original LQ library, Left-overs, Master plates, Cherry picking plates, Mother plates, Mothers left-overs, Daughter plates, Assay source plates, Assay plates).
6. Summary
As discussed, the combination of fit-for-purpose informatics, relatively inexpensive laboratory hardware and well-constructed SOPs (standard operating procedures) enables a small business, such as Exelgen, to have a flexible compound library management system that competes with the fully automated systems deployed within large pharma. The benefits to the business are twofold. First, there is the benefit of improved inventory, since ChemCore enables us to work to a truly just-in-time philosophy:
• For example, we can manage our reagent inventory, so we know when and how much to order of needed reagents.
• Similarly, by managing our product inventory, we know when and how much to synthesize of needed compounds.
• We reduce our excess inventory and therefore minimize hazardous material on site.
Because the turnover of reagents and products is relatively quick, an improvement in reagent quality is also observed. Second, work process improvements are seen because the centralized database of information enables scientists to
• plan their work effectively, including anticipation of reagents and provision of analytical instrumentation time; and
• view data within the database and learn from past successes and failures, therefore avoiding redundant experiments.
Ultimately, the benefit to Exelgen is one of a more streamlined work flow that supports our scientists in making better decisions faster. To evidence this, we have outlined in the above case study a compound plating exercise that was done with the minimum of resource in the shortest possible timeframe.
References
1. Exelgen Ltd. (formerly Tripos Discovery Research Ltd.), Bude-Stratton Business Park, Bude, Cornwall, EX23 8LY, UK
2. AstraZeneca UK Ltd., AstraZeneca R&D Alderley Park, Alderley Park, Macclesfield, Cheshire, SK10 4TF, UK
3. IQPC Best Practice for Compound Management & Integrity 2005, May 23rd–25th 2005, The Selfridge Hotel, London
4. Isbell, J. Changing Requirements of Purification as Drug Discovery Programs Evolve from Hit Discovery. J. Comb. Chem. 2008; 10(2), pp. 150–157
5. MassLynx, Waters Corporation, 34 Maple Street, Milford, MA 01757, USA
6. Keighley, W. W., Wood, T. P. Compound Library Management: An Overview of an Automated System. In: High Throughput Screening: Methods and Protocols, Methods in Molecular Biology, 2002; Vol. 190, pp. 129–152
7. Tripos, 1699 South Hanley Road, St. Louis, MO 63144-2319, USA
8. For example, Symyx Technologies, Inc., 3100 Central Expressway, Santa Clara, CA 95051, USA and ID Business Solutions Ltd., 2 Occam Court, Surrey Research Park, Guildford, Surrey, GU2 7QB, UK
9. Bayer Healthcare, Leverkusen, Germany
10. Waters Corporation, 34 Maple Street, Milford, MA 01757, USA
11. Agilent Technologies, Inc., 5301 Stevens Creek Blvd, Santa Clara, CA 95051, USA
12. Microsoft Corporation, 1 Microsoft Way, Redmond, WA, USA
Chapter 4
Statistics and Decision Making in High-Throughput Screening
Isabel Coma, Jesus Herranz, and Julio Martin
Abstract
Screening is about making decisions on the modulating activity of one particular compound on a biological system. When a compound testing experiment is repeated under the same conditions or as close to the same conditions as possible, the observed results are never exactly the same, and there is an apparent random and uncontrolled source of variability in the system under study. Nevertheless, randomness is not haphazard. In this context, we can see statistics as the science of decision making under uncertainty. Thus, the usage of statistical tools in the analysis of screening experiments is the right approach to the interpretation of screening data, with the aim of making them meaningful and converting them into valuable information that supports sound decision making. In the HTS workflow, there are at least three key stages where key decisions have to be made based on experimental data: (1) assay development (i.e. how to assess whether our assay is good enough to be put into screening production for the identification of modulators of the target of interest), (2) HTS campaign process (i.e. monitoring that the screening process is performing at the expected quality and assessing possible patterned signs of experimental response that may adversely bias and mislead hit identification) and (3) data analysis of primary HTS data (i.e. flagging which compounds are giving a positive response in the assay, namely hit identification). In this chapter we will focus on how some statistical tools can help to cope with these three aspects. Assessment of assay quality is reviewed in other chapters, so in Section 2 we will briefly make some further considerations. Section 3 will review statistical process control, Section 4 will cover methodologies for detecting and dealing with HTS patterns and Section 5 will describe approaches for statistically guided selection of hits in HTS.
Key words: Automated SDI computational tool, Pattern recognition, Quality assurance, Quality control, Quantitative structure activity relationship, Standard deviation of inactives, Statistical process control, Screening quality control, Ultra-high-throughput screening.
1. Introduction
Experimentation is naturally and inevitably subject to variability. When an experiment is repeated under the same conditions or as close to the same conditions as possible, the observed results are
never exactly the same, and there is an apparent random and uncontrolled source of variability in the system under study. This fluctuation in responses from one experiment to another is known as experimental variation or experimental error. If something is certain for screeners, it is that certainty is false. Nevertheless, "randomness isn't haphazard; it often displays an underlying order that can be quantified, and thus used to advantage" (Charles Annis (1)). In this context, we can see statistics as the science of decision making under uncertainty. Screening is about making decisions on how active or inactive one particular compound is against a biological system. Thus, the usage of statistical tools in the analysis of screening experiments is the rational way of correctly interpreting screening data, qualifying them with meaning and hence converting them into valuable information that supports sound decision making. Repeated observations that differ because of experimental error often vary around a central value in a roughly symmetrical distribution in which small deviations occur more frequently than large ones. Amongst the numerous distributions described in the literature, the Gaussian or normal distribution is the simplest and the one that plays the most important role in statistics for experimental sciences. The normal distribution for the frequency of a measure is a symmetrical curve centred around the mean and tailing off towards zero in both directions, in a shape defined by the experimental error or the spread of data, which is estimated by the standard deviation. Experimental data of one particular compound tested in one biological assay protocol follow a normal distribution. When measures do not fit this model, it is an indication of either extreme errors, patterned heterogeneity or anisotropy of the testing unit in the experiments (e.g. non-random variability owing to different states of the compound, the biology and the equipment) or an asymmetrical scale in the measured variable. Furthermore, according to the central limit theorem, the distribution of an average tends to be normal, even when the distribution from which the average is computed is decidedly non-normal (2). Hence, the distribution of any phenomenon under study does not have to be normal, because its average will be, and it will have the same mean as the parent distribution. In the HTS workflow, there are three primary stages where key decisions have to be made based on experimental data:
1. Assay development: How good is the assay in terms of reliability, reproducibility and sensitivity for detecting the kind of modulating compounds we are pursuing? Which assay protocol is better? If the assay is not good enough, we will have to invest additional resources to adjust it. If it is, we will put the assay in production, with the consequent implications for resources and data generation. Assessment of assay quality is reviewed in other chapters (3). In Section 2 we will briefly make some further considerations.
2. HTS campaign process: Is the screening process performing at the expected quality? Do all quality indicators stay within the range of acceptance? Are we maintaining the same quality and range of variability across all compounds tested, i.e. all wells within the plate, all plates within a daily run and all runs within the campaign? If the HTS experiment is analysed on the fly, and the assessment is that the process is not behaving properly, we may consider pausing it, invalidating the data generated and troubleshooting the process before resuming production activity. If the analysis is run retrospectively, we will try to identify patterned signs of experimental responses that are not randomly distributed in time (temporal patterns) or space (spatial patterns). Depending on the strength of the pattern and the knowledge of the rule quantifying the underlying order, we may consider correcting or rejecting the data. Section 3 will review statistical process control, and Section 4 will cover methodologies for detecting and dealing with HTS patterns.

3. Data analysis of primary HTS data: Which compounds are giving a positive response in the assay? Those deemed positive, or hits, will deserve additional investment and attention in further experiments. On the other hand, those graded as negative will be abandoned, probably forever. Section 5 will describe approaches for statistically guided selection of hits in HTS.

In this chapter we will focus on how some statistical tools can help to cope with these three aspects, which are most closely related to primary HTS. Needless to say, they do not comprehensively cover the plethora of statistical issues that a screening scientist may face, such as design of experiments in assay optimisation (i.e. how to reduce the number of experiments while improving interpretation and decision making) and assessment of the quality and reliability of dose–response screens in QSAR (i.e. how precisely and accurately can one model QSAR from screens run in different experiments over time, with different platforms or laboratories or by different scientists? Do all assays correlate or agree?). See Refs. (4–6) for a review of these subjects.
2. Assessment of Assay Quality

A review on statistical evaluation of assay quality is included in an earlier chapter (3), where the reader can find a description of the most widely used parameters. Some of these and other alternative
ones have been critically reviewed elsewhere (4, 7, 8). As a rule of thumb, parameters that do not contain any information regarding data variation are less appropriate for evaluating assay quality. Since the publication of the Z-prime parameter (9), it has become the cornerstone of assay plate quality control and assay performance adopted by screening scientists:

Z' = 1 - 3 (SD_signal + SD_background) / |M_signal - M_background|

where SD is the standard deviation, M is the mean, signal is the control of response for an active assay (100% assay activity), and background is the control of response for an inactive assay (0% assay activity).

The attractiveness of Z-prime lies in the following: (1) it combines the two key quantitative features of an assay, namely signal amplitude (i.e. distance from the mean of signal controls to the mean of background controls) and variability (i.e. standard deviation of data); (2) it is dimensionless, so it can be used as a direct comparator across assays regardless of assay modality; (3) it is easy to calculate; and (4) it is a relative indicator of the power of the assay to discriminate active and inactive compounds, so it directly correlates with the minimum statistically significant threshold of activity required for a compound to be deemed a hit (i.e. the hit cut-off).

Nevertheless, there are several limitations to the use of Z-prime as a single quality indicator in the assessment of assay quality and performance: (1) Since it is asymptotic, it is scarcely sensitive to changes in assays with high values close to 1. That limits the comparison of assays and protocols as well as the process control of screens in production. In these cases, we would recommend the use of other supplementary indicators, such as signal to background (though this does not account for data variability) and signal to noise (3). (2) It deals only with control samples and not compound samples. Since controls in microtitre plates are usually arrayed in a small number of fixed positions, Z-prime does not account for the variability of other positions of the plate. In other words, by using Z-prime, we are blind to errors or sources of variability beyond the control wells. One possibility is interspersing plates fully filled with "signal" or "background" controls in separate plates or randomly distributed in the same plate. Alternatively, in these cases it is recommended to use the Z parameter (note: Z is calculated like Z-prime, but with the signal term referring to actual samples or compounds instead of controls) or another quality indicator that looks at the samples. In this chapter we will describe tools to identify sequential or spatial patterns in an automated manner.
(3) By itself, Z-prime is not an indication of the robustness of the assay in the sense of reproducibility of the assay performance across days, scientists, equipment, labs, reagent batches, compounds, etc. Actually, poor correlation has been found between Z-prime and confirmation rate for real screening campaigns (10). For this purpose, the evolution of Z-prime and other quality indicators for signal window and variability can be monitored. Even so, high Z-prime values do not guarantee a reproducible percentage of response at single shot or reproducible XC50 values in dose–response experiments. Thus, more adequate statistical tools have been developed for the proper assessment of the reproducibility and repeatability of single-shot data (e.g. precision radius and accompanying statistical parameters (9), or B-score and R-score (2, 11–13)) and of the correlation and agreement of dose–response screens (e.g. MSR, the minimum significant ratio (4, 6)).

a. The precision radius statistical analysis is based on the premise that HTS assays contain two distinct sample populations, i.e. the population of inactive samples and the population of outlier or active samples. The former is modelled by a normal distribution and highlights the assay noise, whereas the latter is modelled by an extreme value distribution function (a sigmoid-like Gumbel distribution) and highlights the detection power of the assay. Both populations overlap and show convergence in distribution. Precision radius statistics can also be employed for the setting of hit cut-off thresholds, as will be described in another section.

b. MSR (minimum significant ratio) is described as the smallest potency ratio between any pair of compounds that is real, i.e. statistically significant. It is calculated as follows:

MSR = 10^(2 sqrt(2) SD)

where SD is the standard deviation of the replicate XC50 results from a test–retest experiment or historical QC data.

A last consideration is that we should bear in mind that the ultimate goal of HTS is to identify novel compounds. The higher the sensitivity of the assay to pick up modulators of weak potency, the better the screen. The quality of the assay must be subordinated to this aim. In this regard, optimising the quality indicators of an assay by configuring it in a way that is insensitive to small fluctuations in the levels of the measured response (e.g. a high conversion rate of substrates in enzymatic assays (14)) might lead to a very robust but useless screen.
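To make the two statistics above concrete, a minimal sketch of their calculation is given below; the function names and example numbers are illustrative, and the MSR standard deviation is assumed, as is usual for MSR, to be computed on the log10(XC50) scale, a detail the text leaves implicit.

```python
import numpy as np

def z_prime(signal_wells, background_wells):
    """Z' from the signal (100% activity) and background (0% activity) control wells."""
    s = np.asarray(signal_wells, dtype=float)
    b = np.asarray(background_wells, dtype=float)
    return 1.0 - 3.0 * (s.std(ddof=1) + b.std(ddof=1)) / abs(s.mean() - b.mean())

def msr(sd_log10_xc50):
    """Minimum significant ratio from the SD of replicate XC50 determinations,
    assumed here to be taken on the log10(XC50) scale."""
    return 10.0 ** (2.0 * np.sqrt(2.0) * sd_log10_xc50)

# Hypothetical control readings from one plate:
signal = [980, 1010, 995, 1005, 990, 1002]
background = [105, 98, 110, 95, 102, 100]
print(round(z_prime(signal, background), 2))   # about 0.95 for this tight assay
print(round(msr(0.1), 2))                      # SD of 0.1 log units gives MSR of about 1.9
```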
3. Statistical Process Control

High-throughput screening (HTS) has undergone critical changes in the past decade. These changes have covered all aspects of the HTS business, from compound management to the production and evaluation of hits (12, 15). The business is immersed in the so-called "ultra-high-throughput" (uHTS) era. Nowadays compound libraries typically surpass 1.5 million compounds available for diversity screening. Biochemical and cellular assays (screens) are carried out in high-density plates containing reactions of a few microlitres down to nanolitres per well. Assay plates are processed at large scale by robots or workstation platforms. The quantity and speed of data production have increased the benchmark values of the 1990s by more than 20-fold. A typical day of HTS operation provides more than 100,000 data points. Such volumes of data need to be properly managed, stored and analysed.

As this in-depth change in "industrial" data production has settled down, an important requirement has emerged more strongly than ever: "cost-efficient" management of the HTS processes is vitally demanded. The new uHTS systems need to minimise waste, rework and cycle time, as well as the likelihood that poor-quality data may be passed on to the customers. Organisations have tried to solve the problem by seeking and adapting traditional quality strategies, including quality control and quality assurance methods. The result is the "screening quality" culture, in which "screening quality control" forms the core.

The culture of statistical process control (SPC) was pioneered by Walter A. Shewhart and W. Edwards Deming. With its emphasis on early detection and prevention of problems, SPC has a distinct advantage over quality methods, such as inspection, that apply resources to detecting and correcting problems in the end product or service. Events or occurrences are the result of an input leading to a certain output, following a process. The fundamental assumption on which the theory of quality control rests is the differentiation of the causes of variation for any system or process. In general, any quality control system will use a variety of tools to detect and minimise the variability assignable to a given process. It includes many procedures, such as preventive maintenance, instrument function checks and validation tests (16). Statistical quality control is a term used to describe the aspects of a control system in which statistics are applied to determine whether observed measurements fall within the range expected due to random variation of the process. Much of the power of SPC lies in the ability to monitor both the process centre and its variation around that centre. By collecting data from samples at various points within the process, variations in the process that may affect the quality of the end product or service can be detected and corrected (16).
3.1. Design and Implementation of Statistical Quality Control Systems for uHTS Operation: The Complexity of Pseudo-Manufacturing Processes
One of the principal difficulties of implementing quality control practices in uHTS processes lies in their complexity. uHTS processes include multiple parameters potentially acting on the quality of the final output (data). For instance, an anomaly during any HTS run may be due to mechanical reasons (e.g. liquid handling, clogged tips, temperature gradients and plate storage conditions) or biological reasons (e.g. wrong enzyme concentration and a faulty batch of cells). The anomaly may be in an isolated plate or may be more pervasive, indicative of a more serious systematic pattern affecting a run. On the other hand, every screen or production line may have unique quality matters related to particular aspects (e.g. molecular mechanism, stability of reagents, additives, cofactors and tool compounds). Besides these factors, the variety of assay formats currently available requires that every line have specialised people who are familiar with its requirements. In this context, the term controller can be used to refer to the scientist or the team of scientists in charge of analysis and problem solving. Screen controllers must have an in-depth knowledge not only of the biochemical aspects of the screen but also of assay technologies, automation, statistics and data handling.

Another point of difficulty lies in the cultural challenges that people working in this field need to absorb. The way to implement typical "manufacturing" values, such as supply chain working or six sigma (6σ) assumptions, has to be adapted for HTS teams that normally belong to discovery research organisations. There are underlying cultural barriers to excessive metrics, steps and measurements, and to the intense focus on reducing variability that is seen to water down the discovery process. Striking the right balance between the application of statistical quality control and unencumbered research is a key issue.

Moreover, there are other factors to consider when adapting statistical quality control to uHTS processes, for instance, the reliability of instrumentation and the design of the HTS production lines. Reliability can be defined as the probability that a device will perform its intended function during a specified period of time under stated conditions (17). In general, company suppliers of uHTS equipment tend to put more emphasis on innovation and enhanced functionality than on device reliability. This situation adds another level of complexity to HTS statistical quality control systems. Production lines comprise a plethora of instruments with different levels of reliability and automation.

Setting appropriate error limits is actually a general principle in quality management (16). The cost–benefit implications of type I errors (the probability of a false rejection) or type II errors (the probability of not rejecting an error) must be optimised when setting quality limits in order to guarantee the overall quality of the
product at the minimum cost. A balanced mixture of absolute and flexible quality limits is required in order to cover the variety of screens, assay formats and production currently available. It is also important to ensure enough room for controllers to make final decisions about passing or failing results. At the same time there must exist a global frame of quality business rules in order to assure standards of data quality to customers. Finally, the ability to monitor lines in real time vs. offline and the ability to monitor quality of production units (plates) vs. production rows are key elements to consider in efficient quality control systems.

3.2. Efficient Quality Control Systems in uHTS
Several HTS quality control systems have been described in recent years. Gunter et al. (12) described a custom-developed software application for HTS quality control called the StatServer HTS System, built from the commercially available S-PLUS and StatServer. The system comprises a series of charts for the analysis of shifts and trends in data. Different analysis tools are described, such as the graphical analysis of plate centre (i.e. median raw data for samples) vs. plate sequence, high- and low-control averages (i.e. raw values) vs. plate order and robust plate coefficient of variation (i.e. using a smoothing statistical algorithm) vs. plate. A second group of QC plots was developed to address positional effects, such as box plots of row/column effects and plate maps with appropriate colour scales. The entire software interaction process includes manual uploading of data and offline analysis of results. Flexibility for plotting and downloading was also provided.

The system described by Padmanabha et al. (15) implies a more holistic approach to HTS quality control. They emphasised the importance of quality in all processes around HTS, including compound management. Their quality system depicts results in real time through a variety of control charts for controls (e.g. raw data values and Z′) and samples (e.g. mean of raw plate values). Several other analyses are carried out, such as the well average of activity across plates, in order to track anomalous trends. Once the screen is completed, the data undergo a process of manual cleanup prior to hit selection. They finally point out the importance of real-time data analysis and processing in order to quickly mine HTS results, build models that rank or select compounds for further testing and add early value to the HTS results.

Gribbon et al. (10) nicely illustrated the value of screen monitoring beyond Z′ and the variability of controls. They proposed to explore errors in whole assay plates and across them as temporal or spatial plate effects. Emphasis was focused on the visualisation of large data sets in two-dimensional plate-based formats (plate maps) in order to recognise trends on plates and within the runs. A deeper
review about pattern recognition and correction in HTS can be found in the next section. Certain assay methodologies, such as fluorescence polarisation and TR-FRET, provide ratiometric readouts that can eventually be used for in-well QC. They also described a general workflow to give global coherence to the elements of QC that they proposed.

Successive quality control systems for HTS have brought increasingly powerful analysis tools and user interface capabilities. Below we describe the quality control system developed at GlaxoSmithKline (Screening Quality Control, SQC software). It comprises the basic elements for quality control and provides new and differentiated key features with respect to existing systems, including quality limits, business rules and a broad spectrum of analytical tools covering all relevant aspects of monitoring uHTS processes. Figure 4.1 describes the SQC framework design and the principal components that comprise the system. Both intra-plate and inter-plate data are the system inputs. Analytical tools and business rule algorithms are applied automatically in order to provide the corresponding intra- and inter-plate qualifiers, or system outputs. The system operates in two temporal sessions, online and offline, each including different algorithms for analysis.
Fig. 4.1. Screening quality control software (SQC) framework design.
Online QC attempts to identify unusual plate behaviour based on the likelihood of unusual occurrences or on expected behaviour of plate parameters. The expected behaviour of parameters is initially captured during the HTS validation stage (3, 18). The system provides online dynamic calculations in order to correct potential drift throughout the life of the screen. The online QC process is fully automated, with no intervention from the
controller unless a problem is identified and the controller is notified by e-mail alert. For that to happen successfully, business rules have been defined accordingly. In some cases the delay between plate preparation and final reading makes online QC impossible, but in any event the same tools, and additional ones made available after the run, should be applied to ensure quality HTS post-run (offline QC).

Analytical tools have been designed to evaluate different aspects of screening experiment quality and are used to define the business rules. Such business rules trigger qualifiers for individual plates (intra-plate: pass, flag or fail) and for groups of plates in a run (inter-plate: ok or process out-of-control). The qualifiers generated during online QC are saved and fed into the post-run analysis. During the post-run analysis, the controller will have the opportunity to review the information generated. He/she will validate the production line every day and will look for any factor affecting data quality, as in the case of process problems not caught in real time. The controller will finally fail or pass plates according to general guidelines, screen-specific limits and the context of the run in question. The original qualifiers themselves cannot be removed from the record. This enables business rule sensitivity to be reviewed across screens. The qualifiers also provide easily accessible QC metrics for process improvement data mining. They serve as a basis for subsequent planning of quality assurance (QA) guidelines focused on increasing productivity by reducing the rate of errors and rework.

In order to avoid manual intervention by controllers to identify and remove random outliers within the replicated controls and standards, algorithms based on robust statistics [see Section 5 and (19)] are employed to calculate the mean and standard deviation for each set of controls and standards.

The graphical user interface (GUI) of the software provides tabular and graphical information. The tabular reports are updated in real time and include information about individual plates. They also provide information on process status, along with a description of the process flags. Graphical information includes a display of all the charts being used, also updated in real time. The data generated in every tool, with the limits used in every rule, are also available for graphical and tabular display on demand. There are secondary reports, accessed by the controller on demand, including plate maps and particular rule details. See Fig. 4.2 for details of the user's interface.
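The robust, outlier-resistant estimation of control statistics mentioned above can be sketched as follows; the median/MAD rule and the cut-off k are assumptions standing in for the unspecified algorithm used by the SQC software.

```python
import numpy as np

def robust_control_stats(values, k=3.0):
    """Outlier-resistant mean and SD for a set of replicate control or standard wells.

    Wells further than k scaled-MAD units from the median are discarded before
    computing the mean and SD; this is a common robust-statistics choice, not
    necessarily the exact rule implemented in the SQC software."""
    x = np.asarray(values, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        keep = np.ones(x.shape, dtype=bool)
    else:
        keep = np.abs(x - med) <= k * 1.4826 * mad
    return x[keep].mean(), x[keep].std(ddof=1)

# Example: the aberrant reading of 35 is discarded before averaging.
print(robust_control_stats([100, 98, 103, 101, 35]))
```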
3.2.1. Intra-Plate QC

Intra-plate QC parameters are calculated for each plate in the assay using raw and normalised (i.e. percent inhibition or activation) data for controls, samples and standard compounds. Several intra-plate QC tools are assessed so that plates are classified accordingly as passed, failed or just flagged.
Fig. 4.2. Screening quality control software (SQC) graphical user’s interface. Screenshot of tabular and graphical information.
Z′ is a measure of the separation between the distribution tails of the low and high controls in an assay (9). An adequate value of Z′ is necessary to ensure that the assay has the ability to distinguish between inactive controls and compounds with a certain degree of biological activity. Our goal in GSK is to run assays with an average Z′ greater than 0.6. If after reasonable optimisation trials an assay does not reach this average, we can accept Z′ < 0.6. There are algorithms to determine various Z′ limits, including an absolute minimum Z′ and running Z′ at the 2 and 3 σ levels, for both online and offline sessions. The location of control outliers, as detected by the robust statistical algorithms, is recorded whenever the bias is greater than 10% from the average.

Keeping assay sensitivity constant along screen rows and across screen runs is a key metric to monitor in uHTS. The standard is defined as a stable compound (available in sufficient quantity) that helps monitor the biological relevance of reagents. When used for QC purposes, it is added at its XC50 concentration. The mean of
normalised response for standards is monitored by a Shewhart control chart with limits (±3 SD) based on the estimates of mean and SD from HTS validation runs. Sample means of normalised data are also monitored to identify factors affecting only the samples. Although any plate with an unusual sample mean will be flagged, a single unusual plate may not be a cause for concern. Therefore, action is recommended only after persistent behaviour is observed. Limits are defined by the mean and SD from HTS validation runs. Plates with a high hit rate are also flagged by the system. Although hits may be clustered in certain plates, most plates should not give exceptional hit rates. After every run, the cut-off is calculated as the sample mean + 3 SD of the samples in the run. The business rule for high hit rate plates is based on hit rate values from screen validation. Plates with unusually high variability of samples (e.g. a tail of negative or positive results) are analysed and flagged offline. The threshold is set at a certain level of significance according to a normal distribution. See Fig. 4.3 for details on the software display for the record of samples in the run distribution tails. When the proportion of samples in one tail of the run distribution consistently appears over the limit, this algorithm is very powerful for addressing systematic errors in plate areas. Please refer to the next section on systematic error detection in online mode and to the next chapter for a deeper analysis of patterns in HTS.
Fig. 4.3. Graphical display for the proportion of samples in run distribution tails (offline). The example shows a real case where there is a bias to the left tail of the run distribution for most of the plates in a process line. Although many plates fall below the limit, the graph warns about a potential pattern in assay plates (negative values). This must be investigated by inspection of the individual plate maps and the incidence of other tools such as the systematic error tool.
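A minimal sketch of some of the intra-plate rules above is given below; the thresholds (Z′ ≥ 0.6, ±3 SD limits for the standard, mean + 3 SD hit cut-off) follow the text, whereas the function names and return conventions are purely illustrative.

```python
import numpy as np

def check_z_prime(z_prime_value, minimum=0.6):
    """PASS/FLAG decision on the plate Z' value."""
    return "PASS" if z_prime_value >= minimum else "FLAG"

def check_standard(standard_responses, validation_mean, validation_sd):
    """Flag the plate if the mean normalised response of the standard drifts
    outside the +/-3 SD Shewhart limits estimated during HTS validation."""
    m = float(np.mean(standard_responses))
    low = validation_mean - 3 * validation_sd
    high = validation_mean + 3 * validation_sd
    return "PASS" if low <= m <= high else "FLAG"

def run_hit_cutoff(sample_activities):
    """Run-wise hit cut-off: mean of samples + 3 SD of samples in the run."""
    x = np.asarray(sample_activities, dtype=float)
    return x.mean() + 3 * x.std(ddof=1)
```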
3.2.2. Inter-Plate QC (Process)
Random data failure is normal in HTS experiments. Such plates are flagged as "failed" during the data acquisition process and may be subsequently retested. Failed plate data are excluded from all subsequent analyses. Repeated, non-random occurrences of plate failure, on the other hand, indicate a systematic problem with the process, which could be the result of a biological, mechanical, software or human error. This component of the system makes use of tools that seek patterns and trends within runs and alert for possible systematic errors in the run, resulting in a process flag. Unlike intra-plate QC, inter-plate QC does not result in decisions to pass or fail a plate. Instead, it generates warning flags to inform the controller that something unusual is happening in the run. Several inter-plate QC tools are defined.

The trend of robust Z′ per plate is monitored by a CuSum chart to detect consistent decreasing trends in the Z′ values. A "process out-of-control" message appears if the drift in Z′ falls below the limit of the CuSum control chart. Upon investigation of the cause of the problem, the controller may (i) identify and correct the problem (the CuSum will be reset as if it were the beginning of the run), (ii) be unable to identify or correct the problem and continue or (iii) be unable to identify or correct the problem and stop. The online software keeps track of flag or fail qualifiers on individual plates and will trigger alarms according to the number and magnitude of errors detected. For instance, four consecutive "fail" or "flag" plates, or five plates within any segment of ten plates, trigger a "process out-of-control" online alarm. Systematic errors detected in real time can result in substantial savings by avoiding waste of time and reagents. As a standard practice, patterns identified as systematic errors will trigger process flags and alert the controller to potential problems with the screen that must be investigated. Three types of systematic errors are detected and flagged:

- Well level: Wells are flagged when they consistently give the highest or lowest responses that are significantly different from the rest of the wells.
- Row level: A row is flagged if it consistently gives the highest or lowest average value that is also significantly different from the other row averages.

- Column level: A column is flagged if it consistently gives the highest or lowest average value that is also significantly different from the other column averages.
See Fig. 4.4 for details on the software display for the systematic error tool. This is a key quality indicator in production lines. Thanks to this algorithm, controllers can explore the magnitude and possible causes of any systematic error in real time.
Fig. 4.4. SQC display of plate maps and systematic error wells. Wells are coloured according to actual activity. Systematic error wells are highlighted.
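The inter-plate alarms described above can be sketched as follows; the four-consecutive and five-in-ten rules come from the text, whereas the CuSum reference value k and decision limit h are illustrative assumptions, since the values used in practice are not given.

```python
def cusum_low_alarm(z_primes, target, k=0.05, h=0.5):
    """One-sided CuSum on per-plate Z': True if cumulative downward drift exceeds h."""
    s = 0.0
    for z in z_primes:
        s = max(0.0, s + (target - z) - k)   # accumulate drift below the target Z'
        if s > h:
            return True
    return False

def failure_alarm(qualifiers):
    """qualifiers: sequence of 'PASS', 'FLAG' or 'FAIL' plate qualifiers for a run."""
    bad = [q != "PASS" for q in qualifiers]
    n = len(bad)
    for i in range(n):
        if i + 4 <= n and all(bad[i:i + 4]):        # four consecutive FAIL/FLAG plates
            return True
        if i + 10 <= n and sum(bad[i:i + 10]) >= 5:  # five plates within any segment of ten
            return True
    return False
```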
False positives and false negatives in screen runs are also estimated from the control populations. Signal and background control wells are defined as false positives or false negatives if they exhibit a normalised effect greater or lower than a threshold. This gives an idea of how many inactive samples appear to be active and how many active samples appear to be inactive in every run. Runs are flagged according to different criteria. Observation of false-positive and false-negative rates in production lines is another key quality indicator of the actual performance of the HTS processes. This tool is particularly powerful when it is related to the run hit rate.

3.3. Summary
HTS quality control systems need special working frameworks. We have described key elements to consider in efficient uHTS quality control systems. The screening quality control system described contains all the requirements that we consider to be of key importance in statistical process control for HTS. It monitors a very wide range of relevant statistical aspects, which have been adapted from classical SPC to HTS, including trends and systematic pipetting errors. Thanks to the development of a core set of business rules, the software automatically audits and grades screening
plates and runs. The system also allows controllers to input particular limits to certain rules according to actual screen performance. By measuring online quality levels, the controller has the opportunity of pausing the run and quickly correcting an obvious problem or stopping the run altogether in an effort to save reagents and/or compounds until the problem is solved. Controllers have the ability to make final decisions on pass/fail criteria according to the current cost–benefit requirements of particular production lines.
4. Detecting and Dealing with Patterns in HTS
4.1. Spatial Patterns
When designing an HTS experiment, we aim to attain identical experimental conditions in every well, with compound identity or its concentration as the only variables. As discussed earlier, we expect experimental errors, but if our quest for uniformity were successful, errors should be exclusively of a random nature. However, we often see spatial and temporal systematic errors, because homogeneous experimental conditions are very difficult to maintain throughout the plate, and also over the course of time. A full HTS campaign usually takes several days or weeks. Often, the activity data of the compounds increase or decrease at the edges of the plates. This is normally referred to as systematic errors or spatial patterns. At other times, we observe temporal systematic errors, when the errors are repeated in the same position in consecutive plates. A visual inspection of plate data can help to detect these patterns if they are present, but we need objective measures to evaluate the importance of these patterns and to deal with them. From a statistical point of view, these two kinds of systematic errors are treated as different problems, and different statistical techniques need to be applied in order to detect each one (10, 12).

HTS experiments use automation, which can contribute to the occurrence of spatial patterns in the HTS and also to the difficulty of maintaining constant experimental conditions over time. The reasons why spatial patterns arise can vary greatly, and although it is not the purpose of this chapter to examine these reasons, we can put forward some of the most frequent ones: evaporation, liquid handling errors, gradual loss of reagent activity, temperature gradients, reader calibrations, physical handling of plates, freezing, centrifugation, etc. (Fig. 4.5). The presence of patterns has consequences for the selection of active compounds. Some active compounds can be misclassified as inactive (the so-called false negatives) and some of the inactive compounds can be misclassified as active (the so-called false positives).
Fig. 4.5. Different examples of spatial patterns found in consecutive plates in a HTS screen. Grey colour scale range from –20% activity (white) to 30% activity (black).
When the HTS campaign is complete, and we add up the total number of hits per row and column, we can assess how the presence of spatial patterns is affecting the hit selection. In these situations, the number of hits in the rows or columns located around the edges may be higher or lower than expected. This usually happens gradually, depending on the distance from the edges (10, 12). The experimental conditions of each HTS are very different, and in consequence, the patterns generated are also different. Moreover, as an HTS campaign progresses, these conditions can change. Therefore, a very flexible and robust statistical method is necessary, one that is able to analyse the great variety of spatial patterns that can be found in the hundreds or thousands of plates run in a whole HTS campaign. Also, in most cases, HTS data analysis methods are run automatically, and we have only a general perspective of the effects of applying a data correction method, because we cannot analyse all the plates generated in an HTS campaign in detail. Although visual inspection has proven to be a simple way of recognising spatial and temporal patterns, there exist software packages that can better cope with pattern recognition and
correction for complex and massive data sets. Some of these are public domain [e.g. HTS Corrector (20, 21)] or commercially available (e.g. Paratek and Genedata), whereas others are developed by companies for their internal use [e.g. StatServer HTS (12, 13)]. Pattern recognition methods are reliable only in plates where the majority of the compounds are inactive, and the few active compounds present are randomly scattered.

4.2. Description of a Spatial Pattern Recognition Method
There are different statistical approaches for spatial pattern recognition and treatment in HTS plates. The most common techniques use the discrete Fourier transform (22), Bayesian statistics, the median polish algorithm (13, 23), etc. We propose an approach based on exploratory data analysis (EDA). The steps followed are:

- Detection of patterns
- Evaluation of the importance of these patterns
- Correction of the HTS data, if necessary

A pattern recognition method has to be very flexible and robust. It has to be capable of analysing a great variety of HTS data. The variety of biological conditions, experimental designs, readers, previous calculations with the data, etc. leads to different kinds of data, and we have to take all of them into consideration. Also, we need a robust method, which is not sensitive to outliers. An outlier is an observation that is a long way from the majority of the values in the data set. In HTS data, the presence of outliers is quite common, due to data errors and especially due to the presence of active compounds, with values that are far from the majority of inactive compounds. Experience shows that the patterns in each plate are different in strength and shape. Likewise, patterns may appear and disappear along the HTS campaign. Hence, we propose an intra-plate methodology where the unit of analysis is the plate and where we do not use information from previous or subsequent plates.
4.2.1. Detection of Patterns
We can consider HTS data as values in a three-dimensional space, where the rows and columns in the plate represent the x- and y-axes and the activity value of each sample is represented along the z-axis. The main aim of the pattern recognition method is to describe the surface that best fits this data set. Figures 4.6 and 4.7 show examples of plates with spatial patterns. We assume that the majority of assayed compounds are inactive. If the values are randomly scattered on the plate and no spatial systematic errors are present, we can expect the data to be randomly situated below or above the plane z = 0, and then, when we fit a surface to this data set, the surface will be precisely on the plane z = 0.
Fig. 4.6. Example of a robust surface fitted by the pattern recognition method. Three graphs are shown in the figure: uncorrected data (left), pattern found (centre) and corrected data (right).
Fig. 4.7. Examples of the correction done by the pattern recognition method in four plates. Three columns are shown in the figure: uncorrected data (left), pattern found (centre) and corrected data (right). Grey colour scale range from –20% activity (white) to 30% activity (black).
We could say that our data set has a spatial pattern when the fitted surface is not a plane. Later we will discuss how to assess whether the fitted surface is relevant for the original data.

The method to detect the pattern is based on a two-dimensional extension of the repeated running median procedure defined by Tukey for smoothing data sequences (24, 25). Therefore, the method is a nonlinear smoothing method. The technique is descriptive and does not imply a theoretical model for the data. It is an exploratory data analysis and can also be considered as a robust local regression model. Its versatility means that it can be used on very different data sets, such as those generated in HTS, and it is sufficiently flexible for the much wider range of situations found in HTS. The robustness of the method comes from the use of medians to estimate the surface. One of the best features of the running median procedure in analysing data series is that it detects the increasing or decreasing parts of the data series extremely well. In the two-dimensional extension of the algorithm, this feature means that it is very easy to describe the areas of the data where they are increasing or decreasing, i.e. the gradients of the patterns we want to detect. Consequently, this method will adjust patterns well if they occur gradually, which is usually the case when they are generated by changes in temperature, centrifugation, reader calibration, etc.

The most basic smoother for data sequences is the "running median of 3", called the "3" smoother (24). For example, for a sequence of data {x_t : t = 1, ..., T}, the "3" smoother replaces x_t with the median of {x_(t-1), x_t, x_(t+1)}. Repeated smoothing or "resmoothing" (R) uses the output of a smoothing operation as input to the same smoothing operation and is repeated until no change occurs. The basic idea of the two-dimensional running median extension for smoothing surfaces is to substitute each data point by the median of the surrounding data. Suppose {x_rc : r = 1, ..., R; c = 1, ..., C} are the activity data of the compounds in an HTS plate with R rows and C columns. Formally, the "9R" smoother is defined as a repetitive algorithm with the following iterations:

- Iteration 1: Each x_rc is replaced by the median of the nine surrounding data:

  x_rc^(1) = median{ x_ij : i = r-1, ..., r+1; j = c-1, ..., c+1 }, for r = 1, ..., R and c = 1, ..., C
- Iteration 2: Each data point is replaced by the median of the nine surrounding values obtained in iteration 1:

  x_rc^(2) = median{ x_ij^(1) : i = r-1, ..., r+1; j = c-1, ..., c+1 }, for r = 1, ..., R and c = 1, ..., C

- The algorithm stops at iteration p when x_rc^(p) = x_rc^(p-1) for all r, c.
The use of the median in this iterative method produces a non-continuous surface, like a "staircase", and to smooth this effect, as a last step, a smoother is applied in the form of a weighted mean of the 25 surrounding data, similar to the Hanning (after von Hann) defined by Tukey (24). Each x_rc^(p) value calculated in the last iteration is replaced by

y_rc = Σ w_ij x_ij^(p), summed over i = r-2, ..., r+2 and j = c-2, ..., c+2, where Σ w_ij = 1
This smoother is called 9RH. The fact that this is an iterative method makes it particularly appropriate to fit gradients that gradually increase or decrease, because the peaks present in the surface disappear when the median is calculated in each step.
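A minimal sketch of the 9R/9RH smoother is given below, assuming the plate is held as a two-dimensional NumPy array; clipping the neighbourhood at the plate edges and using uniform weights in the final smoothing step are simplifications of details the text leaves open.

```python
import numpy as np

def smooth_9R(plate, max_iter=100):
    """Repeated 3x3 running-median ('9R') smoothing of a plate of activities."""
    z = np.asarray(plate, dtype=float).copy()
    R, C = z.shape
    for _ in range(max_iter):
        new = np.empty_like(z)
        for r in range(R):
            for c in range(C):
                block = z[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]  # 3x3 neighbourhood, clipped at edges
                new[r, c] = np.median(block)
        if np.array_equal(new, z):      # stop when nothing changes
            break
        z = new
    return z

def hanning_5x5(z):
    """Final 'H' step: weighted mean over the 5x5 neighbourhood.

    Uniform weights (summing to 1 by construction of the mean) are used here as
    a simplification; the published method uses Hanning-like weights."""
    R, C = z.shape
    out = np.empty_like(z)
    for r in range(R):
        for c in range(C):
            block = z[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]
            out[r, c] = block.mean()
    return out

def smooth_9RH(plate):
    """Pattern surface: 9R repeated running median followed by the smoothing step."""
    return hanning_5x5(smooth_9R(plate))
```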
4.2.2. Variance Explained by Patterns: How to Evaluate Patterns

Visual inspection is the intuitive method for evaluating whether or not HTS data present spatial patterns. In today's HTS environment, with thousands of plates analysed at once, this visual analysis has been aided by programs such as Spotfire. However, evaluating results by means of visual inspection can be subjective, and in fact we could easily get distracted by small patterns, without practical impact, that are present in the majority of HTS campaigns. The method for detecting patterns described earlier provides the grounds for putting forward a more objective way of evaluating the found pattern. We believe that part of the variability in the original data is due to the pattern, because at the edges or in the centre we find data that are greater or smaller than expected, thus increasing the variance. After fitting the surface that describes the pattern, we can use it to evaluate how much of the original variability is due to the data and how much is due to the pattern. We define a measure of the strength of the patterns, called the variance explained by the patterns (VEP), as the ratio between the variability of the found pattern and the variability of the uncorrected data (Fig. 4.8). This is similar to R-square, the coefficient of determination used in linear regression. To estimate variability, we use a robust variance estimator, c MAD², where c is a constant and MAD is the median of absolute deviations. Formally,

VEP = variance(pattern) / variance(uncorrected) = MAD²_pattern / MAD²_uncorrected
A low VEP means that the found pattern does not contribute significantly to the variability of the original data; the pattern is therefore small, and we do not need to correct the data. If there is no pattern, the surface fitted by the method is close to the plane z = 0, which has null variability, and therefore we would obtain VEP = 0.
Fig. 4.8. Variance explained by the patterns (VEP). Part of the original variance (left histogram) is due to the pattern found (centre histogram) and after correcting the data, this variability decreases (right histogram).
A high VEP means the pattern found contributes significantly to the variability of the data, and the data must therefore be corrected. Our experience has led us to define limits for VEP:

- VEP > 0.3: strong patterns
- VEP of 0.2–0.3: medium patterns
- VEP < 0.2: smooth patterns, or lack of pattern

4.2.3. Screening Data Correction
In some cases, after detecting the patterns, we can correct the experimental conditions to make some of these patterns disappear or decrease in intensity. However, this is not always possible, and sometimes we need to correct the original data in order to make the best decisions about which compounds are active. The basic idea of the correction is to take the distance to the surface as the new value of compound activity. In practice, this means adding or subtracting a quantity to the original data, depending on whether the well is positioned in a high or low area of the surface:

Corr_rc = Uncorr_rc + Weight_rc × (median(pattern) − Pattern_rc), for all r, c

where r and c are row and column identifiers, respectively. The weight function takes values near 1 for low activity values and near 0 for high activity. The use of a weight function compensates for the lack of linearity in the response.
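Putting the pieces together, a sketch of the VEP calculation and the weighted correction could look as follows; the constant c (taken here as 1.4826², the usual normal-consistency factor) and the linear form of the weight function are assumptions, since the text does not specify them.

```python
import numpy as np

def robust_variance(x):
    """Robust variance estimate c * MAD^2 (c = 1.4826^2 assumed; the chapter only
    states that c is a constant)."""
    x = np.asarray(x, dtype=float).ravel()
    mad = np.median(np.abs(x - np.median(x)))
    return (1.4826 * mad) ** 2

def vep(uncorrected, pattern):
    """Variance explained by the pattern: MAD^2(pattern) / MAD^2(uncorrected)."""
    return robust_variance(pattern) / robust_variance(uncorrected)

def correct_plate(uncorrected, pattern, low=0.0, high=50.0):
    """Weighted correction: shift each well by (median(pattern) - pattern), with a
    weight near 1 for low activities and near 0 for high activities.
    The linear ramp between `low` and `high` %activity is an illustrative choice."""
    u = np.asarray(uncorrected, dtype=float)
    p = np.asarray(pattern, dtype=float)
    weight = np.clip((high - u) / (high - low), 0.0, 1.0)
    return u + weight * (np.median(p) - p)
```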
4.3. Temporal Patterns
The use of automation in HTS is the main cause of systematic errors across time, associated with a certain position in several consecutive plates. For example, these problems relate to compound dispensing, obstructed pipettes, contaminated wells, etc. The presence of temporal systematic errors principally affects false-positive findings, because inactive compounds are misclassified as active compounds. The statistical techniques that we use to detect this kind of systematic error are different from those used to detect the spatial
patterns, but the mathematical foundation is similar, since we use smoothers based on medians as defined by Tukey (24), which can estimate the trend of data coming from a temporal series. This inter-plate analysis is based on studying the data of each position on the plate, i.e. each well, as an independent temporal series. In other words, in an HTS run of p plates with n samples each, we study n temporal series with p points each. The idea is to fit a very robust smoother to each of these series in order to find strong trends, describing what happens in each position of the plate across time. The systematic error level (SEL) can be defined by applying a robust smoother. Normally, we use estimators such as the 11RH or the 15RH (24), which are stronger than those advised by Tukey (4253H, 3RSSH), since our objective is to find a strong trend that is maintained in several consecutive plates and thus identify it as a systematic error. If a position on a plate does not present a systematic error, and assuming that the majority of the compounds are inactive, we can expect the data to be randomly distributed below or above 0% activity. In this case, when we fit a robust smoother, the trend of the series (i.e. the SEL) should be close to 0%. We imagine that this will be the case in the majority of the wells.

Figure 4.9 shows different examples of the evolution of the activity in a well across time. Figure 4.9A shows a high systematic error, where all values are around 100% of activity in a sequence of 100 consecutive plates. Figure 4.9B shows a position of the plate where the values are around 20% of activity, and this could be classified as a low systematic error. Finally, Fig. 4.9C shows a position of the plate without systematic error during 100 consecutive plates, where the values are randomly below or above 0% activity and the fitted trend is near 0% in all cases.
Fig. 4.9. Evolution of the response values in a well along a set of consecutive plates, and the systematic error level (SEL). (A) A well with a high systematic error (SEL 100%). (B) A well with a medium systematic error (SEL 45%). (C) A well without systematic error (SEL 0%). Dots correspond to experimental activity. Curve line is the calculated activity upon fitting of the temporal sequence to the SEL robust smoother.
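A sketch of the SEL calculation for temporal patterns is given below, assuming the run is stored as an array of shape (number of plates, number of wells); the plain repeated running median used here stands in for Tukey's 11RH/15RH compound smoothers, and summarising each well's trend by its median is an illustrative choice, not a prescription from the text.

```python
import numpy as np

def running_median(series, window=11, n_repeat=5):
    """Repeated running median of odd window length (a stand-in for Tukey's 11R)."""
    x = np.asarray(series, dtype=float).copy()
    half = window // 2
    for _ in range(n_repeat):
        smoothed = np.array([np.median(x[max(i - half, 0):i + half + 1]) for i in range(len(x))])
        if np.array_equal(smoothed, x):
            break
        x = smoothed
    return x

def systematic_error_level(activities_by_plate):
    """SEL per well position: the trend of each well's activity across the run.

    activities_by_plate: array of shape (n_plates, n_wells); returns one SEL value
    per well, here summarised as the median of the smoothed trend."""
    data = np.asarray(activities_by_plate, dtype=float)
    return np.array([np.median(running_median(data[:, w])) for w in range(data.shape[1])])
```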
After calculating the SEL for all the plate positions, the data analysis is focused on detecting errors in the set of hits.

4.4. Summary and Conclusions
The presence of spatial and temporal patterns in HTS data can be a significant problem in some assays. Exploratory techniques based on the repeated running median have been found to be an effective tool for detecting and correcting these systematic errors. The method has proven very flexible and robust in all the HTS campaigns where it has been used. We have validated the method, and it shows a great capacity to detect false positives and recover false negatives.

5. Statistically Guided Selection of Hits in HTS
Besides a sound scientific rationale, an appropriate compound collection and high-quality execution, the process of data mining is key to the success of every screening campaign. At the end of the day, HTS is a numbers game. In order to make the screening valuable, the vast amount of data and information that is gathered in any HTS blitz has to be conveniently processed, interpreted and transformed into real and meaningful information and knowledge. We assume here that in primary HTS every compound is tested at the same concentration and just once. Screening in replicates (11) or at different compound concentrations, such as the quantitative HTS (qHTS) described by Inglese et al. (26), can certainly help reduce the identification of compounds associated with assay errors as hits. It should be noted, though, that some claims of diminishing the burden of false negatives and false positives are based on generic assumptions, such as a fixed activity threshold, or 3 and 6 SD of the mean of actives being commonly used cut-offs. As will be reviewed below, the field has moved away from simple cut-offs, and high-quality assays are routinely run. Therefore, some of these claims should be downplayed.

The decision point relates to which compounds from a single-shot test will be pushed forward as positives in the screen. Although the selection of hits can be guided biologically (e.g. potency and profiling of positives through secondary assays for specificity, selectivity or enablers) and chemically (e.g. clustering into representative chemotypes and deselecting intractable structures), the first question is merely statistical, i.e. where to set the threshold of activity that best segregates true positives from true negatives? On the other hand, the screening scientist struggles with a logistical constraint: the number of samples selected cannot surpass a limit dictated by the maximum number of samples that can reasonably (that is, in a timely and resource-efficient manner) be prepared and tested through the subsequent assays.
It is not uncommon for primary positives in HTS simply to be selected on the basis of potency above a particular cut-off of activity that accommodates logistics. The ultimate consequence is that some putatively weak, but still valuable, true hits may have to be abandoned. However, the assignment of potency from a single-shot experiment is rather risky. First, the reliability of the activity value depends greatly upon the quality of the assay and the distribution of activity of the sample population tested. In other words, the same threshold of activity does not carry the same reliability in all assays. Second, the actual concentration of the compound in the assay might significantly differ from the nominal value (27), so apparently weak compounds may turn out to be potent if they were actually tested as a trace. All in all, a hit selection process that minimises the rates of false positives and negatives, regardless of their level of activity, would eventually optimise the use of limited screening resources. False positives are annoying. False negatives are unacceptable, because they are usually abandoned forever. Highly potent positives that eventually turn out to be false are worthless, disappointing and can negatively bias the selection. On the other hand, true weak positives are valid for SAR and provide a starting point for a hit-to-lead chemical programme. Our preference is to give the highest chance of picking up weak hits, accepting the risk of progressing false positives along with them, which will be unveiled in subsequent stages.

The process of hit identification involves three steps: (1) validation, removal and adjustment of screening data (see Section 4), (2) ranking compounds by activity and (3) setting a meaningful threshold to declare positive compounds. Below, we will describe statistical approaches to address the last two steps.

5.1. What Is a Statistical Cut-Off?
Currently, the most commonly used method for hit selection in HTS experiments is statistical significance (or p-value) for testing no mean difference, and in particular the mean ± k SD method and its variants, where SD is the standard deviation of the negative reference (i.e. background controls or inactive samples) and k is a multiplying scalar. Alternatively, clustering methods have been described, based on finding two statistically significant clusters of samples, namely active and inactive samples (28). The statistical significance approach addresses the question of controlling the rate of false positives, also known as type I error or α. The value of k is chosen so that the false-positive rate (i.e. α) can be kept below a certain level. The higher the value of k, the more stringent the cut-off we set to lower the rate of false positives, but the higher the rate of false negatives (also known as type II error or β), and vice versa. Hence, it is a hard challenge to make both rates low at the same time, and there is always a trade-off between minimising the two types of error.
A simple way of estimating the probability of making true assignments for negative compounds at one particular threshold of compound activity is the calculation of the power of an assay (19) (Fig. 4.10). The power parameter is calculated as the complementary probability of the type II error (i.e. 1 − β) of the assay:
Fig. 4.10. Meaning of power of discrimination for an assay, alpha- and beta-errors.
Power = 1 − β = Φ( |μ_S − μ_C| / sqrt( SD_S²/n_S + SD_C²/n_C ) )

where Φ is the cumulative distribution function of the standard normal distribution (i.e. Φ(x) represents the area under the standard normal distribution between minus infinity and x), n is the number of replicates, SD is the standard deviation, μ is the mean of activity in the assay, S is the population of samples and C is the population of controls or inactive samples.

Zhang et al. (29) and Fogel et al. (30) have developed predictive models of hit confirmation rates based merely on the primary HTS data obtained from compound testing in singlet. Recently, a new statistical parameter (SSMD, strictly standardised mean difference) has been introduced that also contemplates the false-negative rate in an attempt to achieve a balanced control of both (31). The idea of power underlies SSMD, which can also be applied to quality control of HTS assays. We will focus later in this chapter on the description of methodologies that estimate the lowest threshold with statistical significance for the call of true positives. Although there may be labs where HTS is run with simultaneous multiple testing (e.g. qHTS (11, 26)), we will not discuss statistical approaches based on replicate testing (2, 11) because these data are not ordinarily available in routine HTS.
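A direct transcription of the power formula might look as follows; the example figures are hypothetical.

```python
import math
from statistics import NormalDist

def assay_power(mu_s, sd_s, n_s, mu_c, sd_c, n_c):
    """Power (1 - beta) to discriminate a sample population (S) from the
    control/inactive population (C), following the formula above."""
    denom = math.sqrt(sd_s**2 / n_s + sd_c**2 / n_c)
    return NormalDist().cdf(abs(mu_s - mu_c) / denom)

# Example: a compound population centred at 30% activity (SD 10, n = 1) versus
# inactives centred at 0% (SD 8, many wells); hypothetical numbers.
print(round(assay_power(30, 10, 1, 0, 8, 100), 3))   # about 0.999
```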
As a result of an HTS campaign, we obtain the distributions of three populations: signal controls, background controls and samples. The distribution of the sample population in an HTS experiment can be modelled as a composition of one major population of inactive compounds (>95%), with a single mean and experimental variability, and a combination of many other minor populations corresponding to the various active compounds (<5%), whose means of activity are spread throughout a broad range of values. Furthermore, there will be a non-normal distribution of extreme errors. The composed distribution results from the sum of all the individual distributions. The statistical cut-off is defined as the lowest threshold of activity that distinguishes between active and inactive compounds with statistical significance. The statistical significance of this segregation is determined by the distance of separation between the mean of the inactive samples and the weakest active population that can be separated from the noise of the screen. The fundamental question is how to estimate the noise of a screening experiment.
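As a concrete illustration, the sketch below estimates the screen noise robustly from the whole sample population, so that the minority of active compounds barely influences it, and places the cut-off three SD-equivalents above the centre; the use of the median and scaled MAD here is an assumption in the spirit of the robust statistics advocated in this chapter, not a formula prescribed by the text.

```python
import numpy as np

def statistical_cutoff(percent_response, k=3.0):
    """Lowest activity threshold separating hits from the screen noise.

    The centre and spread of the (mostly inactive) sample population are estimated
    robustly with the median and the MAD, so the few active compounds barely
    influence them; the cut-off is centre + k * spread."""
    x = np.asarray(percent_response, dtype=float)
    centre = np.median(x)
    spread = 1.4826 * np.median(np.abs(x - centre))   # MAD scaled to SD units
    return centre + k * spread

# Example with simulated data: 97% inactives around 0%, 3% actives around 40%.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 5, 9700), rng.normal(40, 10, 300)])
print(round(statistical_cutoff(data), 1))   # roughly 15% response
```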
5.2. Permutations in the Methodological Approach

There are a number of alternatives that can be independently permutated for estimating screen noise and hence calculating a statistical cut-off:

- Raw data or normalised values: Although any statistical approach described can be applied to raw data from individual wells in a plate, the normalisation of data as a percentage of the response of the controls in the plate is a simple way of mitigating the systematic plate-to-plate variation derived from the different status of reagents or reading equipment (2). The simplest normalisation formulae are percent of control and percent of response, both of them using the means of the signal and background control wells. Unfortunately, a systematic error in control wells will affect all measurements on the samples of the plate. Gribbon et al. (10) have nicely illustrated the shortfalls of data analysis approaches relying solely on controls and measures derived from them. Although more robust normalisation estimators based on median polish have been defined in order to estimate and remove row and column effects [e.g. B-score (2, 12, 13) and R-score (11)], the simple percent of response (%response) becomes a reliable parameter when preceded by pattern recognition and correction algorithms, such as those described in this chapter:

  %response = 100 × (M_signal − x) / (M_signal − M_background)

  where M is the mean of the raw data for the signal or background controls and x is the raw data for compound or well x.
- Sample or background control populations: Although the variability of the background control population is a fair and indeed the simplest way of calculating an indicator of screen variability, it has some shortcomings:
  - The inactive sample population is not centred at 0% of activity for all screens (Fig. 4.11B).
Fig. 4.11. Comparison of mean variability (A) and mean (B) of the populations of inactive samples (SDI, standard deviation of inactives) and signal controls (SD_Controls, standard deviation of controls with no compound effect) for 14 different screening campaigns.
  - The variability of the inactive samples (compounds) is often higher than that observed for the low controls (only DMSO is present). This is known as the "matrix effect" (e.g. organic load, chemical nature of the sample and the procedure used to dispense samples vs. controls) (Fig. 4.11A).
  - The number of inactive samples in an HTS campaign is much higher than the number of background controls (>95% vs. <5%, respectively), and the inactive samples are randomly distributed in time and space, which makes an estimate of screen variability based on them statistically more robust.
Thus, our preference is to make use of the sample population distribution rather than the controls.
- Size and identity of the batch of analysis: Compounds in an HTS campaign are tested in groups or batches. Each compound is tested in a well within a plate, several plates are tested in an experimental run or independent experiment (e.g. same batch of reagents and a single load on an automated platform),
and several runs are performed along the full course of the HTS campaign. In the pursuit of identifying a homogeneous population of samples and controls with high significance, run-wise analysis may be a good compromise. Ideally, screen quality should remain invariable throughout the campaign, but this does not always happen. On the other hand, plate-based variability may be misleading when some individual plates behave distinctly (e.g. spatial pattern or a high number of active compounds).
- Separation distance from screen noise (statistical significance): As discussed earlier, the variability of experimental measures in screening can be modelled with normal distributions. Since the variability of an experimental population is estimated by its standard deviation (SD), statistical significance can be simply interpreted as distance from the mean in number of standard deviations. In this sense, 1, 2 and 3 SD account for approximately 68, 95 and 99% of the whole distribution. The probability of a value falling within a specified region of a distribution is equal to the area under the curve within the region, as a fraction of the total area under the curve. Based on the well-known 3σ rule, it is commonly accepted that a distance of 3 SD is the lowest statistical significance. The probability for values greater than 3 SD is 0.13% in a normal distribution, or in other words, the probability for a compound falling further than 3 SD from the mean of inactive samples of being a true positive is 99.87%. This estimate is based on the assumption that the whole population of inactive samples or controls is randomly distributed and not subjected to any kind of patterns or extreme accidental errors. It is good practice for the activity of compounds in single-shot screening to be normalised as distance from the mean in number of SDs, as the Z score does (2). By doing so, the comparison of the profile of activity of a compound across different assays or targets will be more meaningful and reliable.

Z score = (x − Msample) / SDsample
where x is the raw data on compound or well x and Msample and SDsample are the mean and standard deviation of the whole sample population, respectively.
- Statistical methodologies for estimating screen noise: The list of statistical approaches described to address the issue of screen noise estimation is so large that it cannot be comprehensively reviewed within the scope of this chapter (32, 33). As discussed above, we recommend estimating screen noise through the analysis of the sample distribution rather than from controls. Nevertheless, a simple approach to estimate screen noise, and
hence screen cut-off, can be done from Z-prime values. As mentioned earlier, Z-prime is a relative indicator of the power of the assay to discriminate active and inactive compounds. A rearrangement of the Z-prime equation can be derived as follows:

%Hco = 100 × 3·SDsignal / |MSignal − MBackground| = 100 × (1 − Z′) / (1 + β)
where %Hco is the hit statistical cut-off as a normalised percentage of activity and β = SDbackground/SDsignal. Hence, a correlation between Z-prime and a minimum statistically significant threshold of activity for a compound to be deemed a hit (i.e. hit cut-off) can be simply estimated (Fig. 4.12). Note that β equals the inverse of the signal-to-background ratio if we assume an identical coefficient of variation (CV = SD/M) for both types of controls. This approach can be easily used to accommodate hit thresholds of activity according to assay quality. A short computational sketch of these normalisation and cut-off calculations follows. Below we will describe some methodologies based on the analysis of real screening compounds or samples.
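The sketch below (ours, not part of the original chapter) collects the %response normalisation, the Z score and the Z′-based hit cut-off in one place; variable names are illustrative assumptions:

```python
# Sketch of the plate normalisation, Z-score and Z'-based hit cut-off
# calculations described in this section.
import numpy as np

def percent_response(x, m_signal, m_background):
    """%response = 100 * (M_signal - x) / (M_signal - M_background)."""
    return 100.0 * (m_signal - x) / (m_signal - m_background)

def z_score(x, samples):
    """Distance from the sample mean in numbers of standard deviations."""
    samples = np.asarray(samples, dtype=float)
    return (x - samples.mean()) / samples.std(ddof=1)

def hit_cutoff_from_zprime(z_prime, sd_background, sd_signal):
    """%Hco = 100 * (1 - Z') / (1 + beta), with beta = SD_background / SD_signal."""
    beta = sd_background / sd_signal
    return 100.0 * (1.0 - z_prime) / (1.0 + beta)

# Example: an assay with Z' = 0.6 and equally variable controls (beta = 1)
print(hit_cutoff_from_zprime(0.6, sd_background=2.0, sd_signal=2.0))  # -> 20.0
```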
5.3. Estimation of Screen Noise Based on Distribution of Sample Populations in the Screening Campaign

As discussed above, the whole population of biological activity from samples in a screening campaign does not follow a strictly normal distribution. Instead, it comprises overlapping observations from inactive samples, hits and uncorrected systematic errors, which all contribute to skew the normality of the distribution. Since screen noise is dictated by the variability of inactive samples, which constitute the majority (>95%) of the whole population, we can conceptually approach the problem through two different routes: (1) the sample population is just one single homogeneous population containing outlier observations and (2) the sample population can be modelled as an overlapping sum of normal distributions from inactive samples (>95%) and hits (<5%). Despite the fact that the population of inactive samples constitutes the vast majority of the distribution, the usual estimates of the standard deviation and the mean are not accurate because they can be substantially inflated by extreme outliers, the number of which will depend on the proportion of true positives and extreme errors. The sample mean minimises the sum of squares (SS), and this is the source of its sensitivity to gross outliers, as these large errors or actives inflate SS significantly. Usually, the sample variance is even more sensitive to outliers than the sample mean. Therefore, more robust and adequate methods are needed in order to properly estimate screen noise from the sample distribution.
5.3.1. Robust Statistics
The concept of robust statistics (sometimes also called resistant statistics) was initially developed to cope with the problem of outliers caused by errors in the measurement of an observation.
Fig. 4.12. Correlation between Z-prime and statistical cut-off based on the screen noise estimated from the variability of controls. The hit statistical cut-off (%Hco) is calculated from the following expression derived from the Z′ definition: %Hco = 100 × 3·SDsignal / |MSignal − MBackground| = 100 × (1 − Z′)/(1 + β), where %Hco is the hit statistical cut-off as a normalised percentage of activity and β = SDbackground/SDsignal for the population of control samples.
Real errors do not fit the normal distribution and mask the true values of the mean and standard deviation of the distribution of results. Although it has been common practice to reject outliers as errors and to delete them from the set of data, the prevailing philosophy of robust statistics has changed the emphasis from rejection to accommodation. Robust methods are as useful in assessing variability, which is even more sensitive to outliers than the mean, as they are in estimating the central tendency of the true value. The two main principles of robust statistics are the following: (1) they have to work well for heavy-tailed distributions close to the normal
and (2) they have to protect against gross errors. Both specifications are relevant to the problem of estimating screen noise from the sample population, so they can also be applied to our advantage. Many different procedures have been described that vary in complexity of calculus and adequacy. Although trimmed means and the IQR (inter-quartile range: the difference between observations one quarter in from each end; note that IQR ≈ 1.35·SD for a normal distribution) are easily calculated by hand and obey both principles of robust statistics, more sophisticated methods have been developed that require only simple computation. For instance, in Ref. (19), a method and its corresponding computational programme are described. It is based on an iterative minimisation of the sum of squares of errors by down-weighting extreme errors. The method is easy to compute and converges rapidly. When applied to screening data, it can be used flexibly for any set of data no matter the size, i.e. from a few controls within a plate to a whole set of millions of samples in a complete HTS campaign.
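A minimal sketch of the hand-calculable robust estimators mentioned above (trimmed mean and IQR-based SD) is given below; it is our own illustration, not the algorithm of Ref. (19), and the trimming fraction is an assumption:

```python
# Sketch: simple robust estimates of the centre and spread of screening data,
# using the median, a trimmed mean and the IQR (IQR ~ 1.35*SD for a normal).
import numpy as np
from scipy.stats import trim_mean

def robust_centre_and_sd(values, trim_fraction=0.1):
    values = np.asarray(values, dtype=float)
    centre_median = np.median(values)
    centre_trimmed = trim_mean(values, trim_fraction)  # drop 10% from each tail
    q1, q3 = np.percentile(values, [25, 75])
    robust_sd = (q3 - q1) / 1.349                      # IQR -> SD for a normal
    return centre_median, centre_trimmed, robust_sd

# Example: inactives around 0 (SD = 15) contaminated with a few strong actives
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 15, 9500), rng.normal(80, 15, 500)])
print(robust_centre_and_sd(data))
```

Both estimates stay close to the inactive population even though the ordinary mean and SD would be pulled upward by the actives.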
5.3.2. Standard Deviation of Inactives (SDI)

Below we will discuss four methodologies described for the estimation of the SDI. Although they conceptually differ from the robust statistics for outliers, these approaches are also based on the calculation of robust statistical descriptors, and thus they all share methodological practices.
5.3.2.1. Normal Probability Plot (NPP)
This well-known graphical tool provides a simple approach for calculating SDI (30, 34). Since the population distribution is not normal, only a central portion of it turns out to follow a straight line. The SDI is calculated from the slope of the tangent at the origin (Fig. 4.13).
Fig. 4.13. Sample population is simulated as overlapping Gaussian distributions of inactive (N = 10,000, mean = 0, SD = 15; in blue), low active (N = 500, mean = 30, SD = 15; in red), mid active (N = 300, mean = 60, SD = 15; in green) and high active (N = 200, mean = 90, SD = 15; in yellow). Panel A shows the distribution of binned %response values; panel B is a normal probability plot for the whole sample population.
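The following sketch (ours) simulates a mixture like the one in Fig. 4.13 and estimates the SDI from the slope of the central portion of a normal probability plot; the choice of the 25th-75th percentile window is an assumption for illustration, not a prescription from the published method:

```python
# Sketch: SDI from the central, linear part of a normal probability plot.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(0, 15, 10000),   # inactives
                         rng.normal(30, 15, 500),    # low actives
                         rng.normal(60, 15, 300),    # mid actives
                         rng.normal(90, 15, 200)])   # high actives

# Normal probability plot coordinates: ordered data vs. normal plotting positions
ordered = np.sort(sample)
n = ordered.size
quantiles = norm.ppf((np.arange(1, n + 1) - 0.5) / n)

# Fit a straight line through the central portion only; its slope estimates the SDI
central = (quantiles > norm.ppf(0.25)) & (quantiles < norm.ppf(0.75))
slope, intercept = np.polyfit(quantiles[central], ordered[central], 1)
print(f"SDI estimate from NPP slope: {slope:.1f}")
```

The slope lands in the neighbourhood of the simulated inactive SD (15), slightly inflated by the active populations in the upper tail.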
5.3.2.2. Empirical Cumulative Function
The SDI can also be estimated numerically through a non-linear algorithm based on the empirical cumulative function of the activities of the whole population of samples [see Ref. (30) for a comprehensive description of the mathematical formulae]. The SDI can be obtained by minimising the sum of squares of the differences between the empirical cumulative function Fn(x) and the standard error function Erf(x/σ):

SDI = arg min over σ of Σ (i = 1 to n) { Fn(xi) − Erf(xi/σ) }²

Based on this, an estimate of the proportion of inactive compounds is derived. Ultimately, an algorithm is set up to reach a quantitative description of the whole distribution of samples by a mixture distribution model for inactives and actives, and hence a calculated probability of a sample being a true active (29, 30). From this kind of model, a predicted confirmation rate can be derived, so the number of primary positives selected for follow-up can be optimised to maximise the number of true positives without picking up too many false positives.
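A simplified sketch of this idea is given below; it is our own illustration and not the algorithm of Ref. (30). The published formulation uses the error function Erf; here we use the equivalent normal CDF and allow a data-centre parameter, both assumptions of the sketch:

```python
# Sketch: estimate the SDI by least-squares matching of the empirical CDF
# to a normal CDF centred on the data centre.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def sdi_from_ecdf(values, centre=None):
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    ecdf = (np.arange(1, n + 1) - 0.5) / n
    if centre is None:
        centre = np.median(x)

    def sum_of_squares(sigma):
        return np.sum((ecdf - norm.cdf(x, loc=centre, scale=sigma)) ** 2)

    result = minimize_scalar(sum_of_squares, bounds=(1e-6, x.std() * 10),
                             method="bounded")
    return result.x

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 15, 9500), rng.normal(60, 15, 500)])
print(round(sdi_from_ecdf(data), 1))
```

With heavily contaminated data this simple full-range fit overestimates the SDI; the published algorithm refines the estimate with the mixture model described above.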
5.3.2.3. Precision Radius
In an earlier section, we have discussed the basis of the precision radius and its applicability in assessing assay reproducibility (35). Based on the hypothesis that any HTS assay contains two distinct populations, namely inactive samples (parent population) and outlier/active samples (child population), the statistics are performed within the IQR (inter-quartile range) of the sample distribution. Therefore, to some extent, it can be considered a derivation of robust statistics. The first step consists of establishing the centre of the assay. The bottom and top 25% of the samples are removed from the statistical analysis in order to clearly eliminate the child population of actives from the analysis of the centre. Then, a 3σ band is calculated from the IQR, within which all the analysis of the centre takes place. The noise of the population, and hence the statistical cut-off, is contained within this band of 3 SD around its mean (namely, the centre). From the analysis of this parent population, and based on ANOVA, other statistical parameters are calculated aimed at describing the quality of the assay:
- Precision radius, an estimate of the expected variability in future measurements of the same sample.
- Repeatability ratio, the percentage of the variation within the 3σ band of the assay that can be explained by variation in the measurements of the same sample. Its complementary parameter is the reproducibility ratio, i.e. the percentage explained by variation between samples.

5.3.2.4. ASDIC (Automated SDI Computational Tool)
A new statistical tool has been developed for the automated computation of SDI from the distribution of screening samples. The method follows three steps (Fig. 4.14):
Fig. 4.14. Steps for the computation of SDI in ASDIC.
1. Establish the data centre.
2. Establish the range within the data that is symmetric around the centre.
3. Fit a normal distribution to this region of data, whose mean is the data centre and whose standard deviation is the SDI estimator.

Step 1: The data centre is the point where the histogram of the distribution reaches its maximum, or in other words the point of highest data point density. As discussed above, the mean and the median are not robust estimators of this centre when asymmetry and outliers are significant. The method makes use of LTSq, i.e. the least trimmed squares quartile. LTSq is equivalent to the mean of the subset containing the quarter of the data in which the data points are most concentrated. Both the mean and LTSq minimise a sum of squared residuals: the mean, for the whole data set, and LTSq, for just the most populated quarter of the data set. LTSq turns out to be a more robust estimator than LTS (least trimmed squares, which uses half of the population) and LMS (least median squares) for cases of high asymmetry (36, 37) (see Appendix 1 for a description of the statistical parameters).

Step 2: The symmetry around the data centre can be analysed by adapting a methodology commonly used in exploratory data analysis for the study of symmetry in the tails of a distribution. The method consists of setting a data interval centred on the LTSq value, with the same number of data points at each side of the centre. Then, the midrange for this interval (i.e. the average of the two corresponding limits of the interval) is calculated. If the midrange is equal to the LTSq estimator, the distribution is deemed
symmetrical in this interval. The difference is evaluated within a level of tolerance defined as a percentage of the value of the inter-quartile range (IQR), a robust estimator of the dispersion of the distribution. Successive iterations are run by broadening the interval to include a larger number of data points. The distribution is declared asymmetric when the difference between the LTSq estimator and the midrange is greater than a certain tolerance parameter (e.g. |MidRange − LTSq| > 0.02 × LTSq).

Step 3: Once the data centre (i.e. LTSq) and the range of symmetry are determined, the SDI is calculated as the standard deviation (SD) of the normal distribution, with mean equal to the LTSq, that best fits the data within the symmetrical interval. The size of the sample population (N) and the SD are iteratively varied, and the goodness of fit is assessed by a chi-square test based on the following statistic:

χ² = Σ (i = 1 to k) (Oi − Ei)² / Ei

where Oi are the observed frequencies, Ei are the expected frequencies for a normal distribution and k is the number of intervals. The test evaluates the disagreement between observed and expected frequencies and concludes that the observed data follow the theoretical normal distribution if the sum of these weighted differences is smaller than a critical point. Figures 4.15 and 4.16 show an illustration of how the ASDIC software works. ASDIC has proved to be more resistant to long tails and asymmetry, usually rendering lower SD values and hence
Fig. 4.15. Estimation of the centre of the data by ASDIC. LTSq (least trimmed squares quartile), median and mean are calculated for the whole population of samples (in the example shown, LTSq = 1.11, median = 1.89 and mean = 3.15). The plot depicts a zoom around the calculated LTSq. The theoretical distribution of the population of inactive samples is displayed in blue.
Fig. 4.16. Setting the symmetrical range by ASDIC. Left panel: symmetry, the midrange for the interval equals the centre of the distribution. Right panel: asymmetry, the midrange for the interval is greater than the centre of the distribution. In the figure, C = centre of the distribution of the whole data set; Lo and Up = lower and upper limits of an interval containing 5,000 data points at each side of the centre; M = midrange of the interval, i.e. the average of Lo and Up.
lower statistical hit cut-offs. ASDIC has proved to be useful for setting statistical cut-offs of promiscuous screens (e.g. hit rates higher than 10%), circumventing spatial patterns (e.g. sample population from patterned positions can be segregated and separately analysed) and assessing screen quality as a QC indicator (e.g. inter-run comparison or comparison with SD from controls for establishing matrix effects).
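To make the LTSq idea concrete, the sketch below (ours, not the ASDIC implementation) computes an LTSq-style data centre and a crude SDI from a symmetric window around it; the window width and the use of the IQR-based scale are our assumptions, standing in for ASDIC's symmetric-range search and chi-square fit:

```python
# Sketch: LTSq data centre (mean of the most densely packed quarter of the
# ordered data) and a simple SDI estimate around that centre.
import numpy as np

def ltsq_centre(values):
    """Mean of the contiguous quarter of the ordered data with the smallest
    sum of squared residuals about its own mean (univariate LTS with q = n/4 + 1)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    q = n // 4 + 1
    csum = np.concatenate(([0.0], np.cumsum(x)))
    csum2 = np.concatenate(([0.0], np.cumsum(x ** 2)))
    win_sum = csum[q:] - csum[:-q]
    win_sum2 = csum2[q:] - csum2[:-q]
    ss = win_sum2 - win_sum ** 2 / q      # SS about each window's own mean
    best = np.argmin(ss)
    return win_sum[best] / q

def sdi_around_centre(values, centre, half_width=3):
    """SD of the data lying within +/- half_width * (IQR/1.349) of the centre."""
    x = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    scale = (q3 - q1) / 1.349
    window = x[np.abs(x - centre) <= half_width * scale]
    return window.std(ddof=1)

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 15, 9000), rng.normal(70, 20, 1000)])
centre = ltsq_centre(data)
print(round(centre, 2), round(sdi_around_centre(data, centre), 2))
```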
Acknowledgements The authors are greatly indebted to Ricardo Macarron, Mike Snowden, Mark Lennon, Gavin Harper, Martin Everett, Liz Clark, Glenn Hofmann, Geoff Mellor, Chris Molloy, Andy Vines, Dave Bolton and Javier Sanchez-Vicente for all the productive discussions about how to best implement statistical methodologies in the HTS process at GlaxoSmithKline. Likewise, we would like to thank many other colleagues in IT and Screening for their ideas and experimental data. SQC software has been the result of a joint collaborative effort with Tessella. We are also grateful to Robert Hertzberg, Stephen Pickett and Emilio Diez for their support in the writing of this manuscript.
Abbreviations

EDA: Exploratory Data Analysis
IQR: Inter-Quartile Range
M: Mean
MSR: Minimum Significant Ratio
PR: Pattern Recognition
QA: Quality Assurance
QC: Quality Control
QSAR: Quantitative Structure-Activity Relationship
SD: Standard Deviation
SDI: Standard Deviation of Inactives
SEL: Systematic Error Level
SPC: Statistical Process Control
SQC: Screening Quality Control
uHTS: ultra-High-Throughput Screening
VEP: Variance Explained by the Patterns
Appendix 1: Estimation of the data centre in ASDIC
If the results of an HTS campaign are as below, we note:

n — size of the sample (number of compounds or pools)
(x1, x2, ..., xn) — activity values
(x1:n, x2:n, ..., xn:n) — ordered activity values
xi:n — ith value in the ordered sample
ŷ — location estimator
ri = xi − ŷ — location estimator residuals
r²i:n — ordered squared residuals
– The mean is the LS (least squares) estimator, because it minimises the expression

min over ŷ of Σ (i = 1 to n) ri²

– The LMS (least median squares) estimator minimises the expression

min over ŷ of median (i = 1, ..., n) ri²

– The LTS (least trimmed squares) estimator minimises the expression

min over ŷ of Σ (i = 1 to h) r²i:n

where h = [n/2] + 1 is the half sample size.
– The LTSq (least trimmed squares quarter) estimator minimises the expression

min over ŷ of Σ (i = 1 to q) r²i:n
where q = [n/4] + 1 is the quarter sample size.

References

1. Charles Annis, Statistical Engineering. Available online at http://www.statisticalengineering.com
2. Malo N, Hanley JA, Cerquozzi S, Pelletier J, Nadon R. (2006) Statistical practice in high-throughput screening data analysis. Nat Biotechnol; 24(2): 167–175.
3. Macarron R, Hertzberg R. Design and Implementation of High Throughput Screening Assays. Chapter 2 of this book.
4. Assay Guidance Manual Version 4.1. (2005) Eli Lilly and Company and NIH Chemical Genomics Center. Available online at http://www.ncgc.nih.gov/manual/toc.html
5. Taylor P, Stewart F, Dunnington DJ et al. (2000) Automated assay optimization with integrated statistics and smart robotics. J Biomol Screen; 5: 213–225.
6. Eastwood BJ, Farmen MW, Iversen PW, Craft TJ, Smallwood JK, Garbison KE, Delapp NW, Smith GF. (2006) The minimum significant ratio: a statistical parameter to characterize the reproducibility of potency estimates from concentration-response assays and estimation by replicate-experiment studies. J Biomol Screen; 11(3): 253–261.
7. Sittampalam GS, Iversen PW, Boadt JA, Kahl SD, Bright S, Zock JM, Janzen WP, Lister MD. (1997) Design of signal windows in high throughput screening assays for drug discovery. J Biomol Screen; 2: 159–169.
8. Iversen PW, Eastwood BJ, Sittampalam GS, Cox KL. (2006) A comparison of assay performance measures in screening assays: signal window, Z' factor, and assay variability ratio. J Biomol Screen; 11: 247–252.
9. Zhang JH, Chung TDY, Oldenburg KR. (1999) A simple statistical parameter for use in evaluation and validation of high throughput screening assays. J Biomol Screen; 4: 67–73.
10. Gribbon P, Lyons R, Laflin P, Bradley J, Chambers C, Williams BS, Keighley W, Sewing A. (2005) Evaluating real-life high-throughput screening data. J Biomol Screen; 10(2): 99–107.
11. Wu Z, Sui Y. (2008) Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings. J Biomol Screen; Online First, published January 23, 2008, doi:10.1177/1087057107312628.
12. Gunter B, Brideau C, Pikounis B, Liaw A. (2003) Statistical and graphical methods for quality control determination of high-throughput screening data. J Biomol Screen; 8(6): 624–633.
13. Brideau C, Gunter B, Pikounis B, Liaw A. (2003) Improved statistical methods for hit selection in high-throughput screening. J Biomol Screen; 8(6): 634–647.
14. Wu G, Yuan Y, Hodge CN. (2003) Determining appropriate substrate conversion for enzymatic assays in high-throughput screening. J Biomol Screen; 8(6): 694–700.
15. Padmanabha R, Cook L, Gill J. (2005) HTS quality control and data analysis: a process to maximize information from a high-throughput screen. Comb Chem High Throughput Screen; 8(6): 521–527.
16. Westgard JO. (2001) Six Sigma Quality Design & Control. Desirable Precision and Requisite QC for Laboratory Measurement Processes. Westgard QC, Inc., Madison.
17. Enrick NL. (1985) Quality, Reliability, and Process Improvement. Industrial Press Inc, New York.
18. Coma I, Clark L, Diez E, Harper G, Herranz J, Hofmann G, Lennon M, Richmond N, Valmaseda M, Macarron R. (2009) Process validation and screen reproducibility in high-throughput screening. J Biomol Screen; 14(1): 66–76.
19. Analytical Methods Committee. (1989) Robust Statistics – How Not to Reject Outliers. Analyst; 114: 1693–1697.
20. Kevorkov D, Makarenkov V. (2005) Statistical analysis of systematic errors in high-throughput screening. J Biomol Screen; 10(6): 557–567.
21. Available online at http://www.info2.uqam.ca/makarenv/HTS/old/hts.html
22. Root DE, Kelley BP, Stockwell BR. (2003) Detecting spatial patterns in biological array experiments. J Biomol Screen; 8(4): 393–398.
23. Makarenkov V, Zentilli P, Kevorkov D, Gagarin A, Malo N, Nadon R. (2007) An efficient method for the detection and elimination of systematic error in high-throughput screening. Bioinformatics; 23(13): 1648–1657.
24. Tukey JW. (1977) Exploratory Data Analysis. Addison-Wesley, Reading, MA.
25. Hoaglin J, Mosteller F, Tukey J. (1983) Understanding Robust and Exploratory Data Analysis. John Wiley, New York.
26. Inglese J, Auld DS, Jadhav A et al. (2006) Quantitative high-throughput screening: a titration-based approach that efficiently identifies biological activities in large chemical libraries. Proc Natl Acad Sci USA; 103(31): 11473–11478.
27. Popa-Burke IG, Issakova O, Arroway JD, Bernasconi P, Chen M, Coudurier L, Galasinski S, Jadhav AP, Janzen WP, Lagasca D, Liu D, Lewis RS, Mohney RP, Sepetov N, Sparkman DA, Hodge CN. (2004) Streamlined system for purifying and quantifying a diverse library of compounds and the effect of compound concentration measurements on the accurate interpretation of biological assay results. Anal Chem; 76(24): 7278–7287.
28. Gagarin A, Makarenkov V, Zentilli P. (2006) Using clustering techniques to improve hit selection in high-throughput screening. J Biomol Screen; 11(8): 903–914.
29. Zhang JH, Chung TD, Oldenburg KR. (2000) Confirmation of primary active substances from high throughput screening of chemical and biological populations: a statistical approach and practical considerations. J Comb Chem; 2(3): 258–265.
30. Fogel P, Collette P, Dupront A, Garyantes T, Guedin D. (2002) The confirmation rate of primary hits: a predictive model. J Biomol Screen; 7(3): 175–190.
31. Zhang XD. (2007) A new method with flexible and balanced control of false negatives and false positives for hit selection in RNA interference high-throughput screening assays. J Biomol Screen; 12(5): 645–655.
32. Wu X, Sills MA, Zhang JH. (2005) Further comparison of primary hit identification by different assay technologies and effects of assay measurement variability. J Biomol Screen; 10(6): 581–589.
33. Sui Y, Wu Z. (2007) Alternative statistical parameter for high-throughput screening assay quality assessment. J Biomol Screen; 12(2): 229–234.
34. Li Z, Mehdi S, Patel I, Kawooya J, Judkins M, Zhang W, Diener K, Lozada A, Dunnington D. (2000) An ultra-high throughput screening approach for an adenine transferase using fluorescence polarization. J Biomol Screen; 5(1): 31–38.
35. Janzen W, Bernasconi P, Cheatham L, Mansky P, Popa-Burke I, Williams K, Worley J, Hodge N. (2004) Optimizing the chemical genomics process. In: Darvas F, Guttman A, Dorman F (eds) Chemical Genomics: Advances in Drug Discovery and Functional Genomics Applications. Marcel Dekker, New York.
36. Rousseeuw PJ, Leroy AM. (1987) Robust Regression and Outlier Detection. John Wiley, New York.
37. Ripley BD, Venables WN. (2000) Modern Applied Statistics with S. Springer.
Chapter 5

Enzyme Assay Design for High-Throughput Screening

Kevin P. Williams and John E. Scott

Abstract

Enzymes continue to be a major drug target class for the pharmaceutical industry, with high-throughput screening the approach of choice for identifying initial active chemical compounds. Fluorescence- or absorbance-based readouts typically remain the formats of choice for enzyme screens, and a wealth of experience from both industry and academia has led to a comprehensive set of standardized assay development and validation guidelines for enzyme assays. In this chapter, we generalize approaches to developing, validating, and troubleshooting assays that should be applicable in both industrial and academic settings. Real-life examples of various enzyme classes including kinases, proteases, transferases, and phosphatases are used to illustrate assay development approaches and solutions. Practical examples are given for how to deal with low-purity enzyme targets, compound interference, and identification of activators. Assay acceptance criteria and a number of assay notes on pitfalls to avoid should provide pointers on how to develop a suitable enzymatic assay applicable for HTS.

Key words: Enzyme, High-Throughput Screening, Fluorescent, Absorbance, Activator, Interference, Kinase, Phosphatase, Methyltransferase.
1. Introduction

Enzyme targets have proven to be one of the most tractable classes for small-molecule drug discovery, with many of the new molecular entities approved by the FDA in 2007 targeting enzymes (1). In particular, a number of small-molecule drugs targeting kinases and proteases have progressed through the clinic and into the market (1). High-throughput screening (HTS) has long been the approach of choice in the pharmaceutical industry for the initial identification of novel chemical series (2). The search for small-molecule inhibitors is also becoming of major interest for academic labs (3). Whether in industry or academia, the development of
successful screens is typically the result of a collaborative partnership between a screening center/drug discovery core and a biology lab with a novel target or assay technology. It is our goal in this chapter to use our experiences in both industry and academia to provide some recommendations and examples regarding development and implementation of enzymatic screening assays and to illustrate some of the pitfalls that may be encountered.
2. Assay Development Guidance
With the recent advent of the Molecular Libraries Screening Center Network (MLSCN; http://mli.nih.gov/mli/) funded by the NIH Roadmap for Medical Research, academic-based screening centers have emerged with the goal of implementing novel and innovative assays submitted by the academic community into an HTS format with subsequent screening to identify molecular probes. A good deal of the assay development and HTS knowledge gathered from industry has found its way into the academic screening centers (3), and in particular much of enzyme assay development has become standardized. A number of excellent reference texts (4, 5) exist describing step-by-step approaches to developing an HTS-ready assay, and HTS assay design and implementation has been covered in great detail elsewhere (5, 6). Furthermore, Eli Lilly and the National Institutes of Health Chemical Genomics Center (NIHCGC) have developed a comprehensive assay guidance manual that includes an excellent section on enzymatic assays; a recent update (2008) is available online (http://www.ncgc.nih.gov/guidance/manual_toc.html). In general, the development of an assay for HTS can be summarized in these steps:

Assay Development → Assay Adaptation → Assay Validation and Acceptance → High-Throughput Screening

Recently, the search for small-molecule inhibitors has expanded beyond kinases and proteases with many new enzyme targets originating from academic groups, including phosphatases (7), deacetylases (8), lipases, esterases (9), transferases, isomerases, reductases, recombinases (10), ligases (11), glycosidases, and convertases. Many of these enzyme classes have proven somewhat less
tractable [e.g., phosphatases (12)] and have brought new challenges in developing relevant and productive HTS-amenable assays.
3. Assay Development

There are a number of basic decisions and considerations that need to be made in the process of enzyme assay development. Many assays have been developed as a conveyor-belt approach: purify enzyme, identify substrates, select assay format, measure kinetic parameters, and optimize the assay for HTS. The following flow chart describes a typical path to developing a robust enzymatic assay for HTS, and each step of the process will be addressed in this section.

Form of Enzyme → Choice of Substrate → Choice of Assay Format → Optimization of pH and Buffer Components → Determine Assay Kinetic Parameters → Identify Control Inhibitor → Kinetic or End Point → Assay Stability and DMSO Tolerance

3.1. Form of Enzyme
Many enzymes consist of multiple domains beyond the catalytic domain and expression of a full-length enzyme may prove difficult. Frequently, a truncated form of the target can be expressed more readily to give active enzyme. This approach is often used for kinases where the target site is the ATP-binding pocket and hence only the catalytic domain is used for the enzyme assay. The downside to this approach is that regulatory control of the full-length enzyme may be lost (e.g., kinases such as Akt (13)) and inhibitors that are potent for the truncated enzyme may be ineffective when tested with the full-length enzyme (13). A further
complication is that for an enzyme to be fully active it may have to be expressed as a multiprotein complex rather than as a purified enzyme (e.g., protein kinase C isoforms, histone deacetylases). Compounds effective on isolated component activities may be ineffective when tested against the functional complex. Indeed, isolation of active complexes may be necessary if activators rather than inhibitors are the goal of the screen (e.g., AMP kinase (14)). In addition, there is the potential to discover novel compounds with a novel binding site that is present in the native full-length enzyme, but not the truncated enzyme (15).

3.1.1. Enzyme Purity
To unambiguously measure only target enzyme activity, it is preferable to use purified enzyme. However, as shown in the following example, it is possible to develop robust assays using an unpurified source of enzyme. This usually requires a selective substrate (or buffer conditions) that allows the measurement of only the enzyme target of interest. This approach should be carefully validated by using controls such as known inhibitors and/or suitable control crude preparations that do not contain target enzyme. The most important requirement for the use of an enzyme preparation for assay development is enzymatic purity (i.e., all the signal in an assay is due to a single enzyme) as opposed to only mass purity (e.g., ‘‘95% pure’’). The use of a control inhibitor in the assay to reproduce known IC50 values and generation of Hill slope close to 1.0 will go a long way to demonstrating enzymatic purity. A Hill slope much different than 1.0 may indicate the presence of contaminating enzyme activity in the enzyme preparation.
3.1.2. Enzyme Assay Example 1: Using Crude Lysate as Protease Enzyme Source
Attempts to develop an HTS-compatible assay for a protease expressed in Escherichia coli were initially unsuccessful using the purified form of the enzyme due to problems with the stability of the purified enzyme preparation. However, a robust HTS assay could be developed using a crude lysate preparation as the source of the enzyme activity. The availability of a specific inhibitor for the enzyme allowed for successful development of this assay. The IC50 value of the known inhibitor was determined as 17.8 ± 1.5 nM (mean ± SD, n = 16) (Fig. 5.1).
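For readers who want to reproduce this kind of analysis, a minimal sketch of a variable-slope sigmoidal dose-response (four-parameter logistic) fit is shown below. It is our own illustration; the data points and parameter names are invented and are not the protease data of Fig. 5.1:

```python
# Sketch: fit a four-parameter logistic curve to inhibition data and report IC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Variable-slope sigmoidal dose-response."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc_nM = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
pct_activity = np.array([98, 92, 68, 37, 15, 6, 3], dtype=float)

params, _ = curve_fit(four_pl, conc_nM, pct_activity,
                      p0=[0.0, 100.0, 20.0, 1.0], maxfev=10000)
print(f"IC50 ~ {params[2]:.1f} nM, Hill slope ~ {params[3]:.2f}")
```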
3.2. Assay Formats
Typically enzyme targets have been developed as biochemical assays. There are numerous formats for currently popular classes of enzyme drug targets (16) such as kinases, phosphatases, proteases, and ATPases, while other enzymes such as reductases and carboxylases have fewer format options. The ability to identify active compounds from a particular HTS assay depends in part on the suitability or the quality of the assay used in the screening. The estimated cost per well and resultant cost for the planned screen (including at least 20% extra for dead volumes, IC50 determinations, etc.) should be determined for the desired assay format
Fig. 5.1. Replicate inhibition curves for protease enzyme activity. The assays were performed in 384-well plates with a volume of 50 μl in the presence of different concentrations of inhibitor as indicated. The 16 experiments gave an average IC50 of 17.8 ± 1.5 nM when fit to a sigmoidal dose–response function with a variable slope.
before any wet experiments are done and/or as soon as a working assay is obtained. If the projected cost for the screen is not in the budget, then an alternative format must be chosen.

3.2.1. Homogeneous Versus Nonhomogeneous Assay Format
Assay formats can be classified as homogeneous or nonhomogeneous. Homogeneous formats are often referred to as "mix-and-measure" assays, a name that reflects the simplicity of the assay protocol. Nonhomogeneous formats usually require a separation of substrate from product for enzyme reactions, which can involve plate washing, plate-to-plate liquid transfers, and/or centrifugation. Although homogeneous assays have some disadvantages, the advantages of homogeneous assay formats (Table 5.1) for HTS
Table 5.1 Comparison of attributes for homogeneous versus nonhomogeneous assay formats

Homogeneous                                                          | Nonhomogeneous
Fewer steps                                                          | Multiple steps
Easy to automate and miniaturize                                     | Difficult to automate
Fast, robust screens                                                 | Time consuming, labor intensive
Compound interference can be an issue                                | Less susceptible to compound interference
Less sensitive, may require higher amounts of enzyme and substrate   | Very sensitive to enzyme activity, can tolerate high concentrations of substrate
far outweigh the disadvantages (17). A limited number of assay formats allow a direct readout of inhibition, including microfluidics (18). One significant pitfall of homogeneous formats is the potential for compound interference with the assay detection method since a diversity of compounds are screened, typically at relatively high concentrations in the 1–100 mM range. In contrast, nonhomogeneous formats typically wash away compound after the assay and are thus less susceptible to compound interference. Compound interference in homogeneous formats can be detected by performing secondary assays less susceptible to interference such as nonhomogeneous assays. Fluorescent and colorimetric readout-based enzyme assays remain a major contributor to HTS labs throughout the field, although there are some advantages to using radioactivity. The use of radioactivity can allow very sensitive detection of enzyme activity and in some cases may be the only means to detect enzyme activity in high-throughput mode. There are two types of formats readily available for radioactive homogeneous enzyme assays, scintillation proximity assay (17) and FlashplateTM technology (Perkin Elmer, USA). In general, the goal is to develop homogeneous nonradioactive assay formats to measure enzyme activity and so this chapter will focus on absorbance- and fluorescence-based assay formats. 3.2.2. Practical Considerations for Absorbance Assays
Absorbance is by far the most traditional detection method for following enzymatic reactions. Homogeneous absorbance assays have the advantage of being relatively inexpensive, usually simple to design, and they use the ever-prevalent spectrophotometers. For example, there are many enzymes that utilize coenzyme nicotinamide adenine dinucleotide (NAD+ or the reduced form NADH) or nicotinamide adenine dinucleotide phosphate (NADP+ or the reduced form NADPH) in redox reactions or as substrates for posttranslational modifications. Enzyme reactions that generate or consume NADH or NADPH can be detected by change in absorbance. NADH and NADPH absorb strongly at 340 nm, while the oxidized forms (NAD+ and NADP+) do not absorb at this wavelength. Hence, enzyme activity that generates NADH (or NADPH) as one of its products can be detected by an increase in absorbance at 340 nm or if an enzyme uses NADH (or NADPH) as a substrate, a decrease in absorbance indicates activity. For enzyme reactions that do not use NAD, it has been possible in some cases to utilize a coupled assay whereby the initial reaction generates a product that can serve as a substrate by a second NAD-utilizing enzyme. Therefore, the activity of the primary enzyme can be determined by following NADH consumption or generation by the second detection enzyme, which is present at a high nonlimiting concentration. Although the
Enzyme Assay Design for High-Throughput Screening
113
coupled enzyme assays function well, the assay is more complex requiring more assay development, is more costly due to the requirement of another enzyme, and potentially more difficult to troubleshoot and deconvolute. In addition, for screening chemical libraries, there is potential for inhibition of the detection enzyme causing false positives. Therefore, hits need to be tested in a direct enzyme assay format to rule out this possibility. These disadvantages apply to all enzyme-coupled assays and therefore coupled formats should be a last resort to obtain a homogeneous assay. Absorbance assays, of which colorimetric assays are a subset, are usually not very sensitive in the detection of enzyme activity compared to the other homogeneous detection methods (primarily fluorescence). This lack of sensitivity requires higher concentrations of enzyme and/or substrate, which can be prohibitive depending on the availability and cost of these reagents. 3.2.3. Practical Considerations for Fluorescence Assays
Fluorescence is another commonly used detection method for homogeneous enzyme assays. Fluorescent formats tend to be very sensitive to the detection of enzyme activity, allowing low concentrations of frequently precious and/or expensive target enzyme. There are a number of fluorescence-based assay formats that have been used for HT enzyme assays (19).
3.2.3.1. Fluorogenic Assays
The simplest type of fluorescence assay is the use of an artificial nonfluorescent substrate, which becomes a fluorescent product with enzymatic activity. An example of this is the use of 6,8difluoro-4-methylumbelliferyl phosphate (DiFMUP from Invitrogen) for detection of phosphatase activity. DiFMUP is nonfluorescent due to the phosphate group on this small molecule. Upon removal of the phosphate group by a phosphatase, the highly fluorescent 6,8-difluoro-4-methylumbelliferyl molecule (Ex = 360 nm, E = 450 nm) is generated and thus fluorescence is proportional to enzyme activity. This is a very sensitive homogeneous method for detecting either acid or alkaline phosphatase activity and many phosphatases, including protein tyrosine phosphatases, can act on nonphysiological small-molecule substrates like DiFMUP. Similarly, there are commercially available substrates for many hydrolases that increase fluorescence upon hydrolysis by enzymes such as proteases, glycosidases, lipases, and esterases. These types of fluorescence assays can be performed in kinetic or end-point mode and are relatively simple assays to develop.
3.2.3.2. Fluorescence Resonance Energy Transfer Assays
Another fluorescent assay format that is extensively used for enzyme assays is fluorescence resonance energy transfer (FRET). In FRET assays, signal is dependent on the proximity of a fluorophore to a quencher or an acceptor fluorophore and is
114
Williams and Scott
intrinsically a homogeneous technique. It is now commonplace to use FRET-based substrates for detection of protease activity. Peptides derived from the natural substrate are synthetically modified to have a donor fluorophore at one end and a quencher or an acceptor dye at the other end. Hence, the fluorescence of the fluorophore is quenched in the intact peptide. When a protease cleaves an internal site within the peptide, the fluorophore is no longer in close proximity to the quencher which results in an increase in donor fluorescence. FRET-based activity assays are also commonly used for nucleic acid-modifying enzymes such as helicases, nucleases, reverse transcriptases, polymerases, and ligases. DNA can be readily and inexpensively modified to have a donor/acceptor pair positioned in close proximity at either end of the same molecule or at the 5’- and 3’-ends of complimentary oligos such that hybridization brings the pair in close proximity. Helicase activity, for example, can be detected by the separation of the labeled strands, resulting in an increase in fluorescence (20). Time-resolved FRET is a subtype of FRET that employs long half-life lanthanides as donors – most commonly europium and terbium chelates. The long excitation half-life allows a separation in time between excitation and detection of emission photons. This allows short half-life background and compound fluorescence to decay allowing detection of only the acceptor molecule. Thus, TRFRET has been extensively used for decreasing compound interference and it is also a very sensitive detection technique due to the high fluorescence of lanthanides. TR-FRET has been used for a variety of common enzyme targets including kinases and proteases. Europium- and terbium-labeled reagents, including antibodies, along with acceptor-labeled reagents are commercially available, including reagents for custom-labeling reagents with chelates. 3.2.3.3. Fluorescence Polarization
Fluorescence polarization (FP) is a well accepted and frequently used technology for high-throughput assays (21). FP takes advantage of the inverse relationship between the rotational speed of fluorescent molecules in solution and the size of the labeled molecule or complex. Since FP is a ratiometric fluorescence technique, it is subject to less variability than are other nonratiometric fluorescent assays. A variety of configurations of FP assays are possible for the detection of enzyme activity. Possibly the most common type is the competitive FP immunoassay (FPIA). This method uses antibodies that bind the product of the reaction and a fluorophore-labeled product analog (tracer). Conditions are set up such that product from the reaction competes with tracer for binding product, resulting in the depolarization of the tracer. FP assays are commonly used for the detection of kinase activity using a peptide substrate, an antiphosphopeptide antibody, and a fluorophore-labeled phosphorylated peptide tracer. For maximal sensitivity to enzyme-generated product and minimal use of
Enzyme Assay Design for High-Throughput Screening
115
antibody, the tracer is usually used at low nanomolar concentration. For assay development, the concentration of tracer is usually set to generate fluorescence that is 10–20-fold above buffer background and then the optimal concentration of antibody determined by titration into this concentration of tracer. The amount of antibody that is slightly less than the amount that binds all the tracer is chosen for maximal sensitivity to product detection. This low concentration of tracer produces an assay that is very sensitive to enzyme activity, but it also makes the assay more sensitive to compound interference. Redshifted dyes can be used in place of the commonly used fluorescein to dramatically reduce compound interference (22). In another version of the FPIA for detection of kinase activity, an antibody is employed to detect the ADP reaction product instead of the phosphorylated product. This format has the advantage of being a universal format for measuring the activity of any kinase (or ATPase). The disadvantage though is that any contaminating kinase (or other ATPase) in the enzyme preparation may also be detected and result in a signal that results from a mixture of (or predominately from) nontarget activity. 3.3. Assay Development Approaches for Compound Interference
For absorbance- or fluorescence-based assays, there is a reasonable chance that a certain percentage of compounds will interfere with the signal.
3.3.1. Compound Interference in Absorbance Assays
Absorbance assays are highly susceptible to compound interference from certain compounds that absorb at the detection wavelength (for instance, colored compounds in a colorimetric assay). The absorbance of a test compound could potentially result in false negatives or positives, depending on whether an increase or a decrease in absorbance is the measure of activity. For absorbance assays, compound interference can be reduced by configuring the assay such that actives will be identified as those compounds that reduce absorbance instead of increase absorbance. Thus, compounds that absorb at the detection wavelength will not be detected as false positives. However, there is potential for false negatives for absorbing compounds that are also inhibitors. Simple secondary assays such as testing hits with the same enzyme using a totally different assay format or simply adding the compound postassay termination will eliminate this type of false positives. Despite these shortcomings, absorbance assays remain a viable option for HT enzyme assays due to their simplicity and low cost of detection.
3.3.2. Compound Interference in Fluorescence Assays
All homogeneous fluorescence assay formats are susceptible to compound interference to at least some extent. As a general rule, the higher the fluorescence signal, the less susceptible an assay will
be to compound-related quenching and fluorescence. Thus, an increase in fluorophore used in the assay will make the assay more resistant to compound interference and also generally more robust. However, for assays that employ expensive substrates, antibodies, or other expensive detection reagents, an increase in fluorophore concentration may be cost prohibitive. In this case, one just needs to perform secondary tests to eliminate false positives. Another proven method to reduce the frequency of compound interference is to use red-shifted fluorophores, since most fluorescent small organic molecules emit light in the green light range (as does fluorescein) (22). Despite the widely held belief that TR-FRET is impervious to compound interference, compound interference can still be observed with TR-FRET (23) and this should be addressed before a hit is declared. 3.4. Special Considerations for Enzyme Activator Assays
Traditionally, only inhibitors have been sought for enzyme targets to shut down their activity in vivo. As more potential targets are revealed by research, it is apparent that it would be clinically advantageous to enhance the activity of some enzymes. In principle, activators could be identified from any screen as long as the dynamic range for the assay will allow it. However, some assays may have limitations for identifying activators due to the amount of antibody used or the competitive nature of the assay, as in the case of the FPIA, unless the assay is specifically designed to detect activators (or both activators and inhibitors). A general problem with finding activators is that certain formats are more susceptible to false positives. For instance, for fluorescence assays, fluorescent compounds can appear to be activators. One assay approach we have taken is to "preread" before initiating the reaction with substrate, run the reaction, and subtract the prescreen fluorescence from the final fluorescence, thereby subtracting compound fluorescence.
3.4.1. Enzyme Assay Example 2: Prereading for a Phosphatase Assay
For one target, we were interested in inhibitors and activators of phosphatase activity. In this screen, the following steps were carried out:
1. 0.5 μl of compound was prespotted on the plate followed by the addition of 25 μl of phosphatase-containing assay buffer.
2. The plate was then read in the fluorescence plate reader to obtain a fluorescence value for the compound alone.
3. Subsequently, the reaction was started by the addition of 25 μl of DiFMUP and stopped at the appropriate time, followed by the final fluorescence read.
The specific fluorescence was obtained by subtracting compound fluorescence from the final fluorescence. This type of background subtraction, though not a perfect method in terms of eliminating false-positive activators, eliminated
most of the fluorescent compounds in a phosphatase screen of the small Prestwick chemical collection (Prestwick Chemical Company, France) (Fig. 5.2). In theory, a weakly fluorescent true activator could still be detected with this method versus just eliminating all fluorescent compounds from follow-up.
Fig. 5.2. Enzyme activation activities before and after compound fluorescence subtraction in a fluorescence-based phosphatase assay. The phosphatase assay was performed in 384-well plates with a final volume of 50 μl. Twenty-five microliters of phosphatase in assay buffer was added to 0.5 μl of the Prestwick Chemical Collection compound prespotted on the assay plate. The plates were read in the fluorescence plate reader to obtain compound fluorescence values. Subsequently, the reaction was started by the addition of substrate (DiFMUP) in 25 μl assay buffer, incubated, and the reaction terminated followed by fluorescence detection. The percent inhibition values versus plate position (column number) for this library screen are shown before (A) and after (B) compound fluorescence was subtracted from the postreaction read. A negative percent inhibition is equivalent to activation.
Another approach to reducing the detection of false activators of enzyme activity is to use a kinetic read format such that the baseline for each well will be established with the first read and only an increase in velocity of the reaction (i.e., change in fluorescence or absorbance with time) will indicate a true activator.
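The pre-read subtraction of Example 2 reduces to simple per-well arithmetic; the sketch below (ours, with invented array names, control handling and example numbers) shows the idea:

```python
# Sketch: pre-read background subtraction and conversion to percent inhibition.
import numpy as np

def specific_signal(final_read, pre_read):
    """Subtract per-well compound fluorescence measured before substrate addition."""
    return np.asarray(final_read, float) - np.asarray(pre_read, float)

def percent_inhibition(specific, neutral_controls, blank_controls):
    """100 * (neutral - specific) / (neutral - blank); negative values indicate activation."""
    neutral = np.mean(neutral_controls)
    blank = np.mean(blank_controls)
    return 100.0 * (neutral - specific) / (neutral - blank)

# Example: well 2 is a fluorescent compound (pre-read 400) that would look like
# an activator without subtraction; well 3 behaves like a genuine activator.
pre = np.array([20, 400, 25])
final = np.array([820, 1210, 1650])
spec = specific_signal(final, pre)
print(percent_inhibition(spec, neutral_controls=[800, 810], blank_controls=[40, 60]))
```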
3.5. Assay Buffer Considerations

3.5.1. Protein and Detergent Additives
Proteins are inherently sticky, and finding nonselective compounds may be more prevalent in assays lacking detergent or with very low protein concentrations (24, 25). In such assays, compounds may form aggregates in solution and inhibit many enzymes nonspecifically. Bovine Serum Albumin (BSA), casein, Tween-80, and Triton X-100 (26) are all examples of additives that help reduce either enzyme or compound "stickiness" or aggregation. For example, the addition of 0.1% BSA has been used in kinase assays to reduce the binding of nonselective compounds (27). However, some legitimate inhibitors bind to BSA and therefore the resulting free compound for enzyme inhibition will be low, resulting in the missing of some hits. Therefore, some groups favor leaving out such protein-based additives to increase the number of hits and use secondary follow-up assays to screen out nonspecific compounds, later modifying structure to enhance compound availability in the presence of albumin (or serum).

3.5.2. Enzyme Assay Example 3: Effect of Detergent on Glycosidase Enzyme Linearity

In the course of developing a glycosidase assay, the time course was displaying nonlinear kinetics with the enzyme velocity decreasing with time. This assay was performed with a picomolar concentration of purified enzyme in the absence of detergents or BSA. Suspecting that the enzyme might be slowly binding to plastic and being inactivated, we examined enzyme activity over time by preincubating diluted enzyme with and without 0.01% TX-100 from 0 to 2 hrs in a microtiter plate and then starting the reaction (Fig. 5.3). A subsequent time course using assay buffer containing detergent resulted in a reaction that was linear with respect to time. This type of plastic-binding problem is more prevalent when using very low concentrations of highly purified enzyme in the absence of protein additives like BSA.
Fig. 5.3. Effect of detergent on the stability of a glycosidase. The stability of enzyme activity was followed over time by preincubating diluted enzyme in the presence (~) or the absence (&) of 0.01% TX-100 from 0 to 2 hrs in a microtiter plate. The reaction was initiated by the addition of substrate and enzyme activity relative to the zero time point (considered 100% activity) was determined.
3.5.3. DMSO Tolerance
Another important issue in enzyme assays is the final concentration of DMSO in the screen reaction mixture; many enzymes are at least somewhat inhibitable by DMSO. Occasionally an increase in detection signal is observed with increasing DMSO concentration; this may be due to an effect on the enzyme or on the detection method. Since most compound libraries are initially solubilized in 100% DMSO and then subsequently diluted in an aqueous buffer, the final reaction mixture will contain a certain concentration of DMSO. Depending on the initial dilution made, final DMSO percentages can range from below 1% up to 5%. Performing assay development experiments using the same concentration of DMSO as anticipated in screening is a good way to avoid these problems at the outset; however, a full range of DMSO concentrations should be tested (i.e., 0.5–10% DMSO). Should the compound library or dilution scheme change, knowing the acceptable range for the DMSO concentration in the assay is crucial.
3.6. Some Important Kinetic Considerations
Defining the parameters (e.g., enzyme assay linearity, substrate concentration, and assay incubation time) that may impact the assay kinetics is important. Irrespective of the enzyme assay format used for HTS, kinetic considerations in assay design are critical for success (16, 18, 28).
3.6.1. Enzyme Linearity
The accuracy of inhibition measurements will be dependent upon the linearity of the enzyme reaction. An enzyme concentration titration and time course should be performed to ensure that the conversion of substrate to product is linear with respect to enzyme concentration and time. These experiments should be done as one of the initial experiments in assay development and also confirmed using final assay conditions after optimization.
3.6.2. Substrate Concentration
The substrate concentration in an assay relative to its Km is an important consideration (18). The substrate concentration must be low enough to yield hits with a range of affinities, yet it must be sufficiently high to design a robust, low variability assay. It is important to know the type of mechanistic hits desired from the assay, for example, competitive or uncompetitive. Competitive hits will be harder to uncover when increasing substrate concentration above Km but the opposite is true for hits that are uncompetitive. For competitive inhibitors, too high of a substrate concentration will interfere with the ability to identify weak competitors; conversely, a low substrate concentration will increase the incidence of finding weak competitors. A general rule for developing assays sensitive to competitive inhibitors is to set the substrate concentration in the assay to equal the Km of the substrate, or if practical considerations preclude this (such as cost), to use a concentration below the Km. According to
the Cheng–Prusoff equation relating IC50 to Ki (29), setting the substrate concentration at or below the Km will result in the IC50 values generated being within twofold of the Ki.
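To make the twofold bound concrete, the short sketch below evaluates the Cheng–Prusoff relationship for a competitive inhibitor, IC50 = Ki(1 + [S]/Km), at several substrate concentrations; the Ki and Km values and the function name are hypothetical illustrations, not numbers taken from this chapter.

```python
def competitive_ic50(ki_nm, s_conc, km):
    """Cheng-Prusoff relationship for a competitive inhibitor:
    IC50 = Ki * (1 + [S]/Km).  s_conc and km must share the same units."""
    return ki_nm * (1.0 + s_conc / km)

ki = 50.0   # nM, hypothetical inhibitor Ki
km = 10.0   # uM, hypothetical substrate Km
for s in (0.5 * km, 1.0 * km, 5.0 * km):
    ic50 = competitive_ic50(ki, s, km)
    print(f"[S] = {s / km:.1f} x Km  ->  IC50 = {ic50:.0f} nM ({ic50 / ki:.1f} x Ki)")
```

At [S] = Km the computed IC50 is exactly 2× the Ki, whereas running the assay well above Km inflates the apparent IC50 of competitive hits.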
3.6.3. End Point Versus Kinetic Read
Many assays are initially developed as kinetic assays, which may be difficult to translate into HTS. If possible, a method for terminating the reaction should be developed using either a known inhibitor or other means. We have found EDTA to be an effective method for terminating kinase reactions, and a low percentage of SDS has proven effective for quenching a number of protease reactions. Once the enzyme and the substrate are added and the reaction is started, there is a minimal amount of time necessary to prepare for the termination of the reaction. A short incubation time must thus be avoided. Typically, a 30–60-min incubation time is chosen for enzyme assays. In addition, the stability of the signal from a stopped reaction should be determined over time to know when the assay can be read.
3.6.4. Control inhibitors
The identification and use of a control inhibitor greatly facilitates assay development and validation. A control inhibitor is extremely useful for verifying the enzymatic identity and purity of the enzyme preparation. To obtain reproducible and transferable dose–response curves, it is suggested to work from a single stock concentration of the inhibitor in DMSO. The dose–response curve should be set up as a serial dilution (usually a 1:2 or a 1:3 dilution scheme with 8–12 tested concentrations) in 100% DMSO, and equal volumes of this dilution series are transferred into the assay. This dilution scheme minimizes compound solubility problems in aqueous solutions.
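As an illustration of such a scheme, the sketch below generates the DMSO-stock concentrations for a 1:3 serial dilution; the top concentration, number of points, and transfer ratio are hypothetical examples rather than values prescribed by the chapter.

```python
def dilution_series(top_conc, fold, n_points):
    """Concentrations of an n-point serial dilution starting at top_conc."""
    return [top_conc / fold ** i for i in range(n_points)]

# Hypothetical 10-point 1:3 series starting at 10,000 uM (10 mM) in 100% DMSO;
# transferring equal volumes and diluting into the assay (e.g., 1:100) would
# give final test concentrations 100-fold lower.
stock_um = dilution_series(10_000.0, 3, 10)
print([round(c, 2) for c in stock_um])
```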
4. Is the Assay Ready for HTS? In many instances, assay methods developed in an academic or a biology lab to monitor a novel enzymatic activity are found to be underdeveloped for HTS. There is typically a need to simplify the assay and move to a more HTS-amenable format. 4.1. Strategies to Adapt and Optimize Assay Protocols for Automated Screening
The goal of assay adaptation is to change an assay configuration for HTS without negatively impacting the validity or the sensitivity of the original assay. Assays should be adapted as follows:
1. Miniaturized as much as possible by moving from cuvette-based assays to 96- or 384-well formats.
2. Converted to operate at room temperature.
3. Assay volumes and order of addition should be adapted, simplified, and optimized for rapid automated assay assembly.
4. Where feasible, reduce reagent addition steps and minimize wash steps.
If possible, a modified protocol should be tested in the laboratory using automation in the same fashion as the assay will be performed in the HTS lab. 4.1.1. Typical Issues for Adapting Assays to HTS
- Reagent availability and stability: When reagents cannot be obtained in relatively pure form and in sufficient amounts, are unstable, or exhibit contaminating activities with the potential to invalidate assay data.
- Too many addition steps: Too many liquid handling steps that cannot be consolidated present timing issues during HTS that usually result in high assay variability.
- Lack of appropriate response to control compounds: When the pharmacology of control compounds deviates significantly from published results.
- Temperature: Not all detection instruments have temperature control; assays developed at temperatures other than ambient can be problematic if they cannot be performed at room temperature.
- Assay variability: Assays that exhibit indeterminate errors where it is not feasible to establish a Z-factor ≥ 0.5 with automation (see Section 4.2).
- Incubation steps too short: Incubation steps should be long enough for handling stacks of plates in an automated, robust fashion. Ideally, end-point enzyme reactions should be 30–60 min long, with a 30 min window of time allowed to stop multiple plates without negative effect on the assay.
4.1.2. Enzyme Assay Example 4: Adapting a Transferase Assay for HTS
In the following example, a methyltransferase assay was originally developed in a radioactive SDS-PAGE format. The goal was to convert this methyltransferase assay to a format compatible with HTS and, as a result, a competitive fluorescence polarization (FP) immunoassay capable of detecting the activity of any S-adenosylmethionine (SAM)-utilizing methyltransferase was developed (30). The competitive FP assay was developed to detect the S-adenosylhomocysteine (SAH) product of the methyltransferase reaction. The substrate of the reaction, SAM, is similar in chemical structure to the SAH product. The response of the assay to SAH was relatively linear up to about 40 nM product, and the limit of detection was 5 nM (0.15 pmol) SAH in the presence of 3 μM SAM. An IC50 value was obtained for an inhibitor using the FP assay (Fig. 5.4). The IC50 value obtained with this FP enzyme assay was consistent with the value published using a mass spectrometry-based technique (31). Unlike many other published enzyme
Fig. 5.4. A typical dose–response curve for the methyltransferase assay. Methyltransferase and the indicated concentrations of inhibitor were preincubated together for 5 minutes and then the reactions were initiated by the addition of SAM. Reactions were terminated with an EDTA/antibody/tracer solution. Fluorescence polarization was determined 1 hour after terminating the reaction. Average IC50 from three independent experiments was 12 nM. Each data point is the average of three replicates and error bars indicate standard deviation.
assays, the FP assay is ‘‘universal’’ because it has the potential to quantify activity of any methyltransferase. It is also 1,000-fold more sensitive in detection of product than previously published homogeneous, enzyme-coupled assays for this enzyme class. This high sensitivity to product was a critical feature of this assay, allowing detection of activity from low amounts of this difficult to express and purify enzyme. This FP assay is miniaturizable, homogeneous, very sensitive, of low cost, and requires only commercially available reagents and equipment (30). 4.2. Determine Assay Variability
Assays adapted to HTS should undergo a series of tests to determine the variability of the assay. Small-scale variability studies provide preliminary data on assay performance. Control compounds or reagents used to validate the assay during assay development should be used to ensure that any changes implemented in the assay have not changed the validity or the sensitivity of the assay. The NIH Chemical Genomics Center (NCGC) assay guidance manual is an excellent source for such tests and criteria. Briefly, small-scale testing (single plates) of an adapted protocol should be performed with whole-plate assays including control wells (typically maximum-signal and minimum-signal wells) and DMSO in the body of the plate. This experiment provides data concerning the modified assay's overall performance, including preliminary Z′ (32), standard deviations (SDs), coefficients of variation (CVs), and intra- and interplate variability. If these preliminary values meet the acceptance criteria, then the assay should be tested with at least one control compound or reagent, e.g., a
known inhibitor with an established IC50 in the original assay. If the acceptance criteria are not met and no equipment malfunction is evident, then the protocol should be modified until the assay meets the criteria. 4.2.1. Assay Acceptance Criteria
1. Plate Uniformity Test
   a. Intraplate criteria
      i. CVmax and CVmid < 10%
      ii. SDmin < average min signal
      iii. Normalized SDmid < 10%
      iv. Z′-factor ≥ 0.5
      v. No edge/patterned effects
   b. Interplate and interday criteria
      i. Within and across any 2 days: ≤ 15% difference in percent inhibition of midpoint plates
2. IC50 Reproducibility Test
   a. Less than threefold difference between values
   b. Less than threefold difference compared to original assay values
3. Compound Test Set Screen
   a. Z′-factor ≥ 0.5 for each plate
   b. Hit rate < 3%
   c. Determine hit reproducibility
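As a rough illustration of how these plate statistics are computed, the sketch below calculates the Z′-factor (as defined by Zhang et al., ref. 32: Z′ = 1 − 3(SDmax + SDmin)/|meanmax − meanmin|) and percent CV from simulated maximum- and minimum-signal control wells; the well counts and signal levels are hypothetical.

```python
import numpy as np

def z_prime(max_wells, min_wells):
    """Z'-factor (Zhang et al., ref. 32):
    Z' = 1 - 3*(SDmax + SDmin) / |mean_max - mean_min|."""
    max_wells, min_wells = np.asarray(max_wells), np.asarray(min_wells)
    return 1.0 - 3.0 * (max_wells.std(ddof=1) + min_wells.std(ddof=1)) / abs(
        max_wells.mean() - min_wells.mean())

def percent_cv(wells):
    wells = np.asarray(wells)
    return 100.0 * wells.std(ddof=1) / wells.mean()

# Hypothetical control wells from a single plate-uniformity run.
rng = np.random.default_rng(0)
max_signal = rng.normal(10_000, 500, 32)   # uninhibited (maximum-signal) wells
min_signal = rng.normal(1_000, 80, 32)     # fully inhibited (minimum-signal) wells
print(f"Z' = {z_prime(max_signal, min_signal):.2f}, "
      f"CVmax = {percent_cv(max_signal):.1f}%")
```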
5. Notes

1. Assays that have multiple steps, including incubation times, numerous reagent additions, or washing steps, are inherently noisier and less precise. The more direct the assembly and readout of the assay, the easier it is to fulfill the requirements of acceptable data quality.

2. Homogeneous formats are generally preferred for HTS assays due to minimal steps, ease of automating the assay, ease of assay volume miniaturization, and typically lower variability.

3. In general, the goal of most HTS assays is to develop homogeneous nonradioactive assay formats to measure enzyme activity.

4. Activity in any homogeneous assay where compound interference with signal can occur should be confirmed in a completely different format (i.e., a nonhomogeneous radioactive assay such as a filtration assay).
5. Regardless of the chosen fluorescent format, the use of a concentration of the fluorophore-labeled substrate that is as high as practically and kinetically acceptable (within the limitations of screen cost and reagent availability) will minimize compound interference. If possible, use of TR-FRET or red-shifted dyes will further minimize compound interference.

6. Many assays are initially developed as kinetic assays and may be difficult to translate into HTS. If possible, a method for terminating the reaction should be developed using either a known inhibitor or other means such as EDTA or pH change.

7. The temperature stability of the enzyme and assay should be checked. Typically, screens are run at room temperature, and HTS assays should be developed to operate at room temperature whenever possible. The stability of diluted enzyme solutions over time at room temperature should also be determined.

8. Many assays use truncated or surrogate substrates, and there should be a requirement to confirm inhibition using the full-length or natural substrate.

9. The stability of potentially labile reagents, such as enzyme, cofactors, and substrates, to freeze–thaw cycles should be tested.

10. "Universal" formats that can detect the activity of any member of a class of enzymes should be used with caution, since the enzymatic purity is especially critical as there is no selectivity derived from the protein substrate.

11. The estimated cost for the planned screen using the desired format should be determined before any wet experiments are done and confirmed as soon as a working assay is developed.

12. In the authors' opinion, it is preferable to use BSA (e.g., 0.1%) and/or a nonionic detergent in an enzyme assay buffer to reduce nonspecific interactions.
Acknowledgments The authors would like to thank Dr. Li-An Yeh, the Director of the BRITE at NCCU, for her continued support. We want to thank Mark Hughes and Ginger Smith from the automation group at BRITE for their excellent technical support, and Dr. Tiffany Graves for kindly providing data. We are grateful to them and all our colleagues in the Biomanufacturing Research Institute &
Technology Enterprise (BRITE) at North Carolina Central University; including Dr. Gordon Ibeanu, Dr. Weifan Zheng, Dr. Al Williams, and Dr. Jonathan Sexton for their intellectual support. Finally, we would like to thank the Golden LEAF Foundation and the BIOIMPACT Initiative of the State of North Carolina through the Biomanufacturing Research Institute & Technology Enterprise (BRITE) Center at North Carolina Central University for financial support.
References
1. Hughes, B. (2008) 2007 FDA drug approvals: a year of flux. Nat Rev Drug Discov 7, 107–9.
2. Pereira, D. A., and Williams, J. A. (2007) Origin and evolution of high throughput screening. Br J Pharmacol 152, 53–61.
3. Inglese, J., Johnson, R. L., Simeonov, A., Xia, M., Zheng, W., Austin, C. P., and Auld, D. S. (2007) High-throughput screening assays for the identification of chemical probes. Nat Chem Biol 3, 466–79.
4. Bronson, D., Hentz, N., Janzen, W., Lister, M., Menke, K., and Wegrzyn, J. (2001) Basic considerations in designing high throughput screening assays, in Handbook of Drug Screening (Seethala, R. and Fernandes, P. B., eds.), Marcel Dekker, NY, pp. 5–30.
5. Janzen, W. (ed.) (2002) High Throughput Screening: Methods and Protocols. Humana Press, New Jersey.
6. Minor, L. (ed.) (2006) Handbook of Assay Development in Drug Discovery. CRC Press, Florida.
7. Tierno, M. B., Johnston, P. A., Foster, C., Skoko, J. J., Shinde, S. N., Shun, T. Y., and Lazo, J. S. (2007) Development and optimization of high-throughput in vitro protein phosphatase screening assays. Nat Protoc 2, 1134–44.
8. Khan, N., Jeffers, M., Kumar, S., Hackett, C., Boldog, F., Khramtsov, N., Qian, X., Mills, E., Berghs, S. C., Carey, N., Finn, P. W., Collins, L. S., Tumber, A., Ritchie, J. W., Jensen, P. B., Lichenstein, H. S., and Sehested, M. (2008) Determination of the class and isoform selectivity of small-molecule histone deacetylase inhibitors. Biochem J 409, 581–9.
9. Schmidt, M., and Bornscheuer, U. T. (2005) High-throughput assays for lipases and esterases. Biomol Eng 22, 51–6.
10. Wigle, T., and Singleton, S. (2007) Directed molecular screening for RecA ATPase inhibitors. Bioorg Med Chem Lett 17, 3249–53.
11. Sun, Y. (2005) Overview of approaches for screening for ubiquitin ligase inhibitors. Methods Enzymol 399, 654–63.
12. Bernasconi, P., Chen, M., Galasinski, S., Popa-Burke, I., Bobasheva, A., Coudurier, L., Birkos, S., Hallam, R., and Janzen, W. P. (2007) A chemogenomic analysis of the human proteome: application to enzyme families. J Biomol Screen 12, 972–82.
13. Barnett, S. F., Defeo-Jones, D., Fu, S., Hancock, P. J., Haskell, K. M., Jones, R. E., Kahana, J. A., Kral, A. M., Leander, K., Lee, L. L., Malinowski, J., McAvoy, E. M., Nahas, D. D., Robinson, R. G., and Huber, H. E. (2005) Identification and characterization of pleckstrin-homology-domain-dependent and isoenzyme-specific Akt inhibitors. Biochem J 385, 399–408.
14. Li, Y., Cummings, R. T., Cunningham, B. R., Chen, Y., and Zhou, G. (2003) Homogeneous assays for adenosine 5'-monophosphate-activated protein kinase. Anal Biochem 321, 151–6.
15. Lindsley, C., Zhao, Z., Leister, W., Robinson, R., Barnett, S., Defeo-Jones, D., Jones, R., Hartman, G., Huff, J., and Huber, H. (2005) Allosteric Akt (PKB) inhibitors: discovery and SAR of isozyme selective inhibitors. Bioorg Med Chem Lett 15, 761–4.
16. Macarron, R., and Hertzberg, R. P. (2002) Design and implementation of high throughput screening assays. Methods Mol Biol 190, 1–29.
17. Wu, J. (2002) Comparison of SPA, FRET, and FP for kinase assays, in High Throughput Screening: Methods and Protocols (Janzen, W. P., ed.), Humana Press, NJ, pp. 65–85.
18. Janzen, W., Bernasconi, P., Cheatham, L., Mansky, P., Popa-Burke, I., Williams, K. P., Worley, J., and Hodge, N. (2004) Optimizing the chemical genomics process, in Chemical Genomics (Darvas, F., Guttman, A., and Darman, G., eds.), Marcel Dekker, NY, pp. 59–100.
19. Gribbon, P., and Sewing, A. (2003) Fluorescence readouts in HTS: no gain without pain? Drug Discov Today 8, 1035–43.
20. Rasnik, I., Myong, S., and Ha, T. (2006) Unraveling helicase mechanisms one molecule at a time. Nucleic Acids Res 34, 4225.
21. Pope, A., Haupts, U., and Moore, K. (1999) Homogeneous fluorescence readouts for miniaturized high-throughput screening: theory and practice. Drug Discov Today 4, 350–62.
22. Turek-Etienne, T., Small, E., Soh, S., Xin, T., Gaitonde, P., Barrabee, E., Hart, R., and Bryant, R. (2003) Evaluation of fluorescent compound interference in 4 fluorescence polarization assays: 2 kinases, 1 protease, and 1 phosphatase. J Biomol Screen 8, 176.
23. Hemmilä, I., and Webb, S. (1997) Time-resolved fluorometry: an overview of the labels and core technologies for drug screening applications. Drug Discov Today 2, 373–81.
24. Ryan, A., Gray, N., Lowe, P., and Chung, C. (2003) Effect of detergent on "promiscuous" inhibitors. J Med Chem 46, 3448–51.
25. Knowles, J., and Gromo, G. (2003) Target selection in drug discovery. Nat Rev Drug Discov 2, 63.
26. McGovern, S., Helfand, B., Feng, B., and Shoichet, B. (2003) A specific mechanism of nonspecific inhibition. J Med Chem 46, 4265–72.
27. Popa-Burke, I., Issakova, O., Arroway, J., Bernasconi, P., Chen, M., Coudurier, L., Galasinski, S., Jadhav, A., Janzen, W., and Lagasca, D. (2001) Streamlined system for purifying and quantifying a diverse library of compounds and the effect of compound concentration measurements on the accurate interpretation of biological assay results. Screening 5, 105–10.
28. Walters, W., and Namchuk, M. (2003) Designing screens: how to make your hits a hit. Nat Rev Drug Discov 2, 259.
29. Cheng, H. (2001) The power issue: determination of KB or Ki from IC50. A closer look at the Cheng–Prusoff equation, the Schild plot and related power equations. J Pharmacol Toxicol Methods 46, 61–71.
30. Graves, T., Zhang, Y., and Scott, J. (2008) A universal competitive fluorescence polarization activity assay for S-adenosylmethionine utilizing methyltransferases. Anal Biochem 373, 296–306.
31. van Duursen, M., Sanderson, J., de Jong, P., Kraaij, M., and van den Berg, M. (2004) Phytochemicals inhibit catechol-O-methyltransferase activity in cytosolic fractions from healthy human mammary tissues: implications for catechol estrogen-induced DNA damage. Toxicol Sci 81, 316–24.
32. Zhang, J., Chung, T., and Oldenburg, K. (1999) A simple statistical parameter for use in evaluation and validation of high throughput screening assays. J Biomol Screen 4, 67.
Chapter 6 Application of Fluorescence Polarization in HTS Assays Xinyi Huang and Ann Aulabaugh Abstract Steady-state measurements of fluorescence polarization have been widely adopted in the field of high-throughput screening for the study of biomolecular interactions. This chapter reviews the basic theory of fluorescence polarization, the underlying principle for using fluorescence polarization to study interactions between small-molecule fluorophores and macromolecular targets, and representative applications of fluorescence polarization in high-throughput screening. Key words: FP, Polarization, Anisotropy, Competition binding, High-throughput screening.
1. Introduction Fluorescence polarization (FP) is a powerful fluorescence-based technique for the study of biomolecular interactions in aqueous solution. For a small-molecule fluorophore or a small molecule labeled with a fluorescent moiety, the interaction with a macromolecule can be monitored through the increase in FP, which occurs with the change in fluorophore mobility upon complex formation. Perrin first described the quantitative relationship of the observed polarization with molecular size and solution viscosity in 1926 (1), and Weber subsequently applied FP to biological systems [for a review see (2)]. FP has since been applied to a wide range of interactions including DNA–DNA interactions, DNA–protein interactions, protein–protein interactions, and small molecule–protein interactions (3–7). The migration of high-throughput screening (HTS) in the biopharmaceutical industry to fluorescence and luminescence formats and the development of commercial microplate FP instruments in the mid-1990s have resulted in the explosive growth of FP applications in HTS over the last decade.
1.1. Fluorescence Polarization Basic Theory
Fluorescence polarization, or anisotropy, is a property of fluorescent molecules that can be measured using an FP instrument. Polarized light is used to excite a fluorescent sample, and the emission light intensities of the channels that are parallel (IS) and perpendicular (IP) to the electric vector of the polarized excitation light are collected. The difference, IS – IP, can be normalized by the total fluorescence intensity of the emission beam, IS + IP (polarization), or by the total fluorescence emission intensity from the sample, IS + 2IP (anisotropy). Polarization (P) is then defined as (IS – IP)/(IS + IP) and anisotropy (A) is defined as (IS – IP)/(IS + 2IP). Both the polarization (P) and anisotropy (A) terms have been widely used. Polarization and anisotropy can be interconverted by the equation A = 2P/(3 – P). Anisotropy is preferred for analyzing complex systems because the equations are considerably simpler when expressed in this term (8). Since polarization (P) is nearly linearly correlated with anisotropy (A) because of a limiting anisotropy value of 0.4 (vide infra; Fig. 6.1), applications expressed in polarization are still valid in practical terms.
Fig. 6.1. The relationship between polarization and anisotropy. The data are generated by the equation A = 2P/(3 – P). The thin, straight line depicts a theoretical perfectly linear correlation.
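For readers who wish to interconvert the two quantities numerically, a minimal sketch of the conversion A = 2P/(3 – P) and its algebraic inverse is given below; the example mP values are arbitrary and the function names are illustrative.

```python
def polarization_to_anisotropy(p):
    """A = 2P / (3 - P), with P and A as fractional values (not m-units)."""
    return 2.0 * p / (3.0 - p)

def anisotropy_to_polarization(a):
    """Inverse relation: P = 3A / (2 + A)."""
    return 3.0 * a / (2.0 + a)

for mP in (50, 100, 200, 300):
    mA = 1000.0 * polarization_to_anisotropy(mP / 1000.0)
    print(f"{mP} mP  ->  {mA:.0f} mA")
```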
In order to more fully comprehend FP, one needs to start with the absorption of excitation light by a fluorophore. When a fluorescent sample is illuminated by polarized light, those molecules with their absorption transition dipole aligned parallel to the electric vector of the polarized excitation have the highest probability of absorption, resulting in polarized emission light that is also parallel to the polarized excitation. Had all molecules in this sample been fully aligned parallel to excitation, the sample would have had anisotropy of 1. In reality, fluorescent molecules in solution are completely randomized relative to excitation. The probability of absorption is proportionally dependent on the angle between the fluorophore absorption dipole and the
polarized excitation. This photoselection process results in polarized emission with a theoretical maximum anisotropy of 0.4 (8) (see Note 1). The observed anisotropy of a given sample falls between 0 and 0.4, depending on many extrinsic factors at play during the fluorescence lifetime of the fluorophore. The primary determinant of fluorescence depolarization in dilute solutions is the rotational diffusion of the fluorophore (1, 9, 10). For ideal spherical rotors, anisotropy measured under steady-state conditions follows the Perrin equation [1], where A0 is the intrinsic anisotropy, τ is the fluorescence lifetime, and φ is the rotational correlation time of the fluorophore (the time the fluorophore takes to rotate through an angle of 1 radian), which in turn is proportional to the viscosity of the solution (η) and the molecular volume of the rotor (V) and inversely proportional to the temperature (T) [2].

A = A0 / (1 + τ/φ)   [1]

φ = ηV / (RT)   [2]

The consequence of this in practical terms is that fluorescence anisotropy can be used to measure changes in the rotational diffusion rate of a fluorophore, as illustrated in Fig. 6.2. As a result, FP measurements can yield information on the size and shape of the
Fig. 6.2. The basics of fluorescence polarization-binding assays. The small-molecule fluorophores free or bound to macromolecules can be excited by vertically polarized light, resulting in polarized emission. The observed steady-state polarization of the sample depends on the extent of fluorophores bound to macromolecules. Free fluorophores have low observed polarization due to fast rotation relative to fluorescence lifetime, while bound fluorophores have high observed polarization due to slow rotation.
fluorophore and the molecule complexed with the fluorophore. In non-viscous aqueous solutions, a typical small-molecule fluorophore rotates on a timescale of 40 ps or less (11), much faster than a typical fluorescence lifetime of 10 ns (11), which results in depolarized emission. Upon fluorophore binding to a macromolecule, the complex will rotate much slower with a rotational correlation time on par with the timescale of typical fluorescence lifetime, resulting in polarized emission. This forms the basis for quantifying the fraction of the fluorophore bound to the macromolecule. Figure 6.3 delineates the relationship between fluorescence anisotropy, carrier molecular weight, and fluorescence lifetime (simulated data using [1] assuming a limiting anisotropy of 0.4 and assuming the fluorophore is rigidly attached to a spherical carrier) (12). It is evident that typical fluorophores such as fluorescein and BODIPY have ideal fluorescence lifetimes that allow FP measurements between a small labeled probe (<1,500 Da) and a macromolecule receptor (>15,000 Da).
Fig. 6.3. Simulated graph illustrating the dependence of fluorescence anisotropy on the fluorescence lifetime of the fluorophore and the molecular size of the carrier macromolecule (curves correspond to lifetimes of 1 ns (cyanine), 4 ns (fluorescein), 6 ns (BODIPY), 20 ns (dansyl), and 200 ns (ruthenium); x-axis, carrier molecular weight; y-axis, anisotropy). Data are generated using [1].
2. Methods Numerous FP applications have been developed for HTS based on the principles described above and shown in Fig. 6.2. Because fluorescence polarization is a ratiometric measurement, in theory the FP signal should have less interference from background fluorescence of the assay plate and buffer. Because of this advantage, FP has become a popular technique for HTS assays. FP can be determined by steady-state measurements or time-resolved measurements (13). In the time-resolved FP measurement, a short
pulse of light excites the sample and the emission is recorded with a high-speed detection system that allows measurements on the nanosecond scale. In a steady-state FP measurement, the sample is illuminated by continuous excitation light and the measurement is an average of the time-resolved phenomenon over the intensity decay of the sample. The steady-state FP averaging over a single exponential decay may thus mask complex exponential decays in some systems and could be one of the reasons for some of the nonideal steady-state FP data observed. The overwhelming majority of FP applications in HTS are steady-state measurements using commercial fluorescence plate readers. The scope of this chapter is limited only to steady-state FP measurements. This chapter will take a look at three steady-state FP applications in HTS: (i) direct FP competition-binding assay; (ii) FP used as a detection method in a functional assay; (iii) determination of binding mechanism from an FP competition-binding assay. Representative examples of the applications are presented where applicable. 2.1. Direct FP Competition Assay
The objective of a direct FP competition assay is to identify compounds that compete with the small-molecule fluorescent probe for binding to the macromolecular target. This approach is a quick and easy method that has been extensively used to identify active-site binders of enzyme targets and small-molecule binders of nuclear receptors (NR). The disadvantage is that the assay may not identify compounds that bind at sites remote from the fluorescent probe (Section 2.3). In addition to active-site binders, fluorescent probes prepared from allosteric ligands can be used to identify compounds that bind to regulatory sites outside of the target enzyme active site. In the following sections, the process for developing a FP competition assay for the exosite of a protease (FVIIa) and the utilization of a quality control parameter instead of an interference assay (counterscreen) in a direct nuclear receptor FP competition assay to identify false-positive hits are described.
2.1.1. Design and Synthesis of an Appropriate Fluorescent Probe
FP assay development begins with designing a fluorescent probe. There are no universal rules for how to design an ideal probe for FP. When a label is attached to a known ligand to prepare a probe, the probe will work in FP assays only when the following conditions are satisfied: (1) the attachment of the label does not significantly alter how the ligand binds to the target; (2) the label cannot have a strong propeller effect, i.e., the rotational diffusion motion of the label needs to be restricted upon the binding of the probe to the target. A routine practice in our lab is to design probes with various linkers (both type and length) and to experimentally determine if the probes have the expected potency in the activity assay relative to the unmodified ligand and if the probes work in the FP assay. We applied the above approach to design a probe to identify compounds that bind at an allosteric site on the protease factor
VIIa (FVIIa) in complex with its cofactor, tissue factor (TF). TF/FVIIa is a well-validated anticoagulant target (14, 15). E-76, Ac-ALCDDPRVDRWYCQFVEG-NH2 (disulfide bond), is a reported partial inhibitor of TF/FVIIa amidolytic activity that binds to an exosite outside of the active site of FVIIa with a reported IC50 of 9.7 nM (16). The mechanism of inhibition was confirmed in our lab, though an IC50 of 2.3 nM was obtained under our assay conditions. Next, the reported crystal structure of the FVIIa/E-76 complex was examined to determine the residues on E-76 that are solvent exposed and potential sites for probe attachment. Based on solvent accessibility in the three-dimensional structure, the Glu residue at the C-terminal end was mutated to a Lys residue for the covalent attachment of HiLyte Fluor 488. The designed probe retained the same interactions with FVIIa as E-76 in a computer model. The probe was then custom synthesized by Anaspec (San Jose, CA). 2.1.2. Determination of Kd Between the Fluorescent Probe and the Target
An appropriate concentration of the probe to use in the Kd determination is dependent upon several factors including the linearity of the fluorescent response with probe concentration, the quantum yield of the probe, the Kd between the probe and the target, and instrument sensitivity. The FVIIa experiments were carried out in an assay buffer containing 50 mM HEPES, pH 7.4, 100 mM NaCl, 5 mM CaCl2, and 0.005% (w/v) Triton X-100. Samples were prepared at a volume of 20 μl in a black 384-well low-volume polypropylene Matrical plate (cat# MP101-1-PP). Fluorescence intensity and anisotropy were measured on an Analyst AD plate reader (Molecular Devices). A probe dose titration was initially performed to determine the linearity of the fluorescent signal with probe concentration, and the probe fluorescence intensity was linear up to 500 nM. The probe concentration for the Kd determination should not be much greater than 2Kd to avoid stoichiometric titration (17). A concentration of the probe corresponding to the IC50 in the functional assay is often chosen as the initial concentration for the binding assay. In this case, an initial probe concentration of 5 nM was selected, which yielded a fluorescence intensity S/B of 70. Because there are intrinsic differences in instrument sensitivity for measuring the S channel and the P channel, all plate readers need to be checked and calibrated for the instrument "G" or grating factor before obtaining any polarization data. The revised equation for polarization incorporating the G factor term is P = (IS – G·IP)/(IS + G·IP). In this example, the G factor for the Analyst AD instrument was calibrated to achieve an mP of 60 for the free probe (see Note 2 for details on G factor calibration). Next, a FVIIa concentration dependence was performed in the presence of a fixed 5 nM probe concentration and 2000 nM sTF (Fig. 6.4a). Fitting the anisotropy data to equation [3a], where L0
Fig. 6.4. The dose titration of FVIIa in the presence of fixed probe. The assay contained 5 nM probe, 2000 nM sTF, and various concentrations of FVIIa and was measured for anisotropy and fluorescence intensity on the Analyst AD after 10 and 30 min incubation at room temperature. (a) Anisotropy measurement; (b) fluorescence measurement; (c) fb data converted from anisotropy data using [4].
is the total probe concentration, R0 is the total enzyme concentration, a is the probe signal in the absence of ligand, and b is the probe signal in the presence of saturating concentrations of ligand, yields A1 (free probe), A2 (bound probe), and an approximate Kd
value of 0.6 nM. Identical results were observed after 10 min and 30 min incubation, indicating that the binding equilibrium was reached quickly. The same dose titration was also measured by fluorescence intensity. Fitting of the fluorescence intensity data to equation [3a] yielded a Kd of 1.3–1.8 nM (Fig. 6.4b). The discrepancy in Kd values obtained from anisotropy and fluorescence intensity measurements is attributed to the change in quantum yield of the probe upon binding to FVIIa. The quantum yield ratio (Q) of the bound probe to the free probe can be calculated from the ratio of fluorescence intensity of the bound probe to the fluorescence intensity of the free probe. A Q value of 1.78 was obtained from the fluorescence data in Fig. 6.4b. The FP data in Fig. 6.4a were converted to the fraction bound probe (fb) by equation [4]. Fitting of the FP fb data to equation [3b] yielded a Kd of 1.5–1.6 nM (Fig. 6.4c), which now agrees with the Kd calculated from the fluorescence intensity measurement. In addition, the selected probe concentration of 5 nM satisfies the nonstoichiometric conditions (i.e., the probe concentration not much greater than 2Kd). Had the probe concentration been much greater than two times the calculated Kd value, a lower probe concentration would need to be selected and the Kd measurement repeated.
y = a + (b – a) · [(Kd + R0 + L0) – √((Kd + R0 + L0)² – 4R0L0)] / (2L0)   [3a]

y = [(Kd + R0 + L0) – √((Kd + R0 + L0)² – 4R0L0)] / (2L0)   [3b]

fb = (A – A1) / [(A – A1) + Q·(A2 – A)]   [4]
In parallel with the determination of Kd between the fluorescent probe and the target, additional HTS assay optimization should be performed. As mentioned above, FP is a ratiometric method and background fluorescence effects are expected to be minimal. However, in practice a number of experimental details, including the chemical and physical properties of the test compounds, can introduce artifacts into the results. Some plate types yield significantly larger assay windows for a given FP assay. Buffer optimization, including detergent and carrier protein screening, should be performed to select assay conditions that minimally affect the aggregation state of the probe and maximally enhance the stability of the target protein. For DMSO tolerance, it is critical that not only the assay signal window is maintained but also the interactions between the probe and the
target protein are not adversely affected. The order of reagent addition and the equilibration time need to be evaluated and optimized for FP competition assays that do exhibit time-dependent binding between the probe and the target. When multiple fluorescent probes are available, the most potent probe with a reasonable fluorescence quantum yield should be selected for the FP competition assay. In general, the most potent probe affords the best chance to identify compounds with the widest potency range (17). 2.1.3. Selection of Appropriate Screen Conditions for Direct FP Competition Assay
An fb of 0.5–0.8 is recommended for direct FP competition assays, which is a compromise between achieving a reasonable anisotropy signal window and adequate sensitivity to detect small-molecule compounds (17). An fb value higher than 0.8 will decrease the assay sensitivity to identify weakly active compounds, while an fb value less than 0.5 often leads to an assay with an inadequate anisotropy signal window. Based on data in Fig. 6.4c, the FVIIa concentration corresponding to an fb of 0.8 was selected for the TF/FVIIa FP competition assay. The optimized competition screen contains 5 nM probe and 12.5 nM FVIIa. The FP measurements are taken after 20 min incubation at room temperature. Upon finalization of the screen conditions, reagent working stock stability is then determined to ensure that the screen is conducted within the time constraints of reagent stability.
2.1.4. Validation of Direct FP Competition Assay
Validation of the FP competition assay can be achieved by the use of a nonfluorescent counterpart of the probe or a known positive control that binds at the same site as the probe. In this case, we performed the dose titrations with E-76 (Fig. 6.5a) as a positive control and compound 1, which binds to the FVIIa active site, as a negative control (Fig. 6.5b). E-76 is fully competitive with the probe, consistent with E-76 and the probe binding to the same exosite on FVIIa. The IC50 of E-76 in the FP competition assay (23 nM) is severalfold higher than the IC50 in the activity assay (2.3–9.7 nM), which is typical for FP competition assays. A lower starting fb will lead to an FP-derived IC50 closer to the functional assay IC50, but with the disadvantage of a smaller FP assay window (17). Compound 1 showed a minor degree of displacement of the probe, consistent with compound 1 and the probe binding at distinct sites on the target (18) and also consistent with the partial inhibition of TF/FVIIa amidolytic activity by E-76. In the single-dose screen, this direct FP competition assay would be unlikely to identify hits that bind to alternate sites, such as compound 1 in this example. In Fig. 6.5a and b, a control reaction was performed in
Fig. 6.5. The dose titration of E-76 and compound 1 in FVIIa FP competition assay. The assay contained 5 nM probe, 12.5 nM FVIIa, 37.5 nM sTF, and various concentrations of compound and was read for anisotropy on Analyst AD after 20 min incubation at RT. (a) E-76; (b) compound 1.
parallel to the competition reaction. The control reaction contains only the probe and the compound and is helpful in identifying compound interference issues. A screen was performed at 300 μM using a set of compounds identified by virtual screening. A limited number of hits were identified and then followed up with dose titrations in parallel with a control reaction as described above. Three hits were confirmed, while the other hits were identified as false positives in the control titration due to compound interference, including compound interactions with the probe, compound solubility/aggregation, and compound fluorescence. The three confirmed hits were followed up for further characterization. 2.1.5. A Quality Control Parameter for a Direct FP Competition Screen
An HTS campaign was run using one of the NR FP competition red assays from Invitrogen. The compound library was tested at 10 μM in singlet in a 20 μl assay in a 384-well black Nunc polypropylene
plate (cat# 267461). The screen was carried out on a Thermo-CRS platform and read on Envision plate readers (Perkin Elmer). A total of 24,256 hits were identified in the primary assay based on a cutoff of 3 interquartile range standard deviations (IQR-SD) above the median. Normally, an interference screen (counter-screen) would next be performed to determine whether the hits are selective or just interfering with the assay. For direct FP competition assays there are no good options for a counter-screen. For this NR FP assay, the total fluorescence intensity (FLINT) was calculated from the S and P channel fluorescence and used for the calculation of a new quality control (QC) parameter, mP/total FLINT. We found that the total fluorescence intensity of the bound probe is about 2× that of the free probe in this assay, resulting in a relatively constant mP/total FLINT regardless of the free and bound states of the probe. When mP/total FLINT is significantly below the median of the screen, the sample well likely contains a fluorescent compound. When mP/total FLINT is significantly above the median, the sample well either has a fluorescent quencher or has under-delivery of the probe. We did not identify many fluorescent compounds, likely because a red probe was used in the assay. By applying a 3 IQR-SD cutoff above the median for mP/total FLINT, we were able to eliminate 22% of the primary hits due to fluorescence quenching or probe under-delivery. The remaining primary hits were tested in a confirmation assay in triplicate and a large majority of the hits (74%) were confirmed as active. Total fluorescence intensity has also been reported as a QC parameter for FP screens (19).
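A possible implementation of this QC calculation is sketched below. The channel reads, the G factor of 1.0, and the use of IQR/1.349 as a robust standard-deviation estimate are illustrative assumptions rather than details taken from the screen described above.

```python
import numpy as np

def mP_and_flint(i_s, i_p, g=1.0):
    """Millipolarization and total fluorescence intensity (FLINT) from the
    parallel (S) and perpendicular (P) channel reads; g is the instrument G factor."""
    i_s, i_p = np.asarray(i_s, float), np.asarray(i_p, float)
    mP = 1000.0 * (i_s - g * i_p) / (i_s + g * i_p)
    return mP, i_s + i_p

def iqr_sd_outliers(values, cutoff=3.0):
    """Flag values deviating from the median by more than `cutoff` robust SDs,
    where the robust SD is estimated as IQR / 1.349."""
    values = np.asarray(values, float)
    robust_sd = (np.percentile(values, 75) - np.percentile(values, 25)) / 1.349
    return np.abs(values - np.median(values)) > cutoff * robust_sd

# Hypothetical wells: the third well mimics a fluorescent compound.
i_s = np.array([5200.0, 5100.0, 9800.0, 5300.0, 5250.0])
i_p = np.array([4300.0, 4250.0, 9500.0, 4350.0, 4350.0])
mP, flint = mP_and_flint(i_s, i_p)
print(iqr_sd_outliers(mP / flint))   # True marks a well failing the mP/FLINT QC check
```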
2.2. FP as a Detection Method in Enzymatic Assays
Another application of FP assays in HTS is their use as a detection method in enzymatic assays. The detection methods can be divided into two types: (1) direct measurement of the product formation and (2) measurement of the product via FP competition. The pros and cons of these two types are discussed. Since these assays still measure FP, the standard practices for selection of the probe concentration, determination of Kd between the probe and the detection macromolecule, selection of the detection macromolecule concentration, optimization of reader protocols, and optimization of plate, buffer, and other assay parameters as described earlier for direct FP competition assays (Section 2.1) still apply.
2.2.1. FP Method That Directly Measures the Product
The IMAP kinase and PDE assays (Molecular Devices) are examples of FP assays that directly measure the product of an enzyme reaction. In an IMAP kinase assay, the fluorescently labeled phosphopeptide product is detected by IMAP beads (IMAP beads serve as the macromolecules). The advantages of assays such as IMAP include the following: (1) measures the product directly and the
signal increases with the product; (2) compound potency (percent inhibition and IC50) does not change significantly with the assay conversion rate when the conversion rate is not greater than 50% (20); and (3) an interference assay is available to remove false-positive hits due to interference with the detection system. The latter is easily accomplished by using the detection reagents plus the substrate and the product mixed at a ratio that mimics the amount of product formed during the assay. The disadvantages of the IMAP assay include the following: (1) the use of a labeled substrate instead of the "native" substrate and (2) the assay requires relatively higher conversion (20–50%) of substrate than conventional functional assays using more traditional detection schemes to achieve a reasonable assay mP window. 2.2.2. FP Method That Measures the Product via Competition
The Transcreener PDE assay (Bellbrooke), PolarScreen kinase FP assay (Invitrogen), and PI3K FP assay (Echelon Biosciences) are examples of assays that measure the product via competition. In a Transcreener PDE assay, the product (GMP or AMP) is detected by competition with the fluorescently labeled probe for binding to the antibody. The advantages of competition detection assays such as Transcreener include the following: (1) use of the nonlabeled substrate and (2) an interference assay is available to remove false-positive hits due to interference with the detection system (using the detection reagents plus the substrate and the product mixed at the assay conversion ratio). The disadvantages of competition detection assays include the following: (1) the product is measured through competition and the signal decreases with increasing product; (2) it is difficult to accurately determine the Km for the substrate because the product standard curve is nonlinear due to nonlinear competition detection of the product; (3) the assay window is typically selected between the EC50 and the EC90 of the enzyme dose titration curve; however, compound potency (percent inhibition and IC50) can change significantly with the assay conversion rate. These disadvantages are universal to all competition detection assays and by no means unique to FP competition detection assays. For example, in a simple rapidly reversible competition model, the observed compound potency (IC50) determined from the competition assay deviates from the true IC50 (50% inhibition of the enzyme activity) (unpublished results, Huang, X., 2007). The observed compound IC50 is 2× the true IC50 when the assay is conducted at the EC50. However, the observed IC50 is 10× the true IC50 if the assay is performed at the EC90. The actual assay is a compromise between an acceptable assay signal window and workable assay sensitivity, knowing the caveat that a larger assay signal window comes at the price of possibly not identifying weakly active compounds.
2.3. Determination of Binding Mechanism from an FP Competition-Binding Assay
The compound-binding mechanism can be derived from a compound dose titration in an FP competition-binding assay (18). Figure 6.6 shows three of the most common mechanisms for compound binding: competitive, uncompetitive, and noncompetitive binding. The α value in Fig. 6.6 describes cooperativity between the probe binding and the inhibitor binding in a noncompetitive mechanism. A plot of the fraction bound versus compound concentration in Fig. 6.7 shows the diagnostic displacement curves of compounds that bind competitively, uncompetitively, or noncompetitively to the macromolecule target (18). A competitive inhibitor fully displaces the probe, resulting in a decrease in the
(a) Competitive: E + L ⇌ EL (Kd); E + I ⇌ EI (Ki). (b) Uncompetitive: E + L ⇌ EL (Kd); EL + I ⇌ ELI (Ki). (c) Noncompetitive: E + L ⇌ EL (Kd); E + I ⇌ EI (Ki); EL + I ⇌ ELI (αKi); EI + L ⇌ ELI (αKd).
Fig. 6.6. Three common mechanisms for compound binding: (a) competitive mechanism; (b) uncompetitive mechanism; and (c) noncompetitive mechanism, where the α value describes cooperativity between the probe binding and the inhibitor binding. In a noncompetitive mechanism, α = 1 represents no cooperativity between the probe binding and the inhibitor binding, α > 1 represents negative cooperativity (the inhibitor binds weaker to the receptor–probe complex than to the free receptor), while α < 1 represents positive cooperativity (the inhibitor binds stronger to the complex than to the free receptor). R is the receptor, L is the probe, and I is the inhibitor.
Fig. 6.7. Theoretical plots of competitive, uncompetitive, and noncompetitive (α = 1, 3, and 0.33) inhibitors. The receptor and probe concentrations are 53.7 and 10 nM, respectively. The Kd for the probe is 20 nM. The Ki for compounds in the cases of the competitive and uncompetitive mechanisms is 100 nM, while the Ki values in the case of the noncompetitive mechanism are 100 and 100α nM.
fraction of ligand bound with increasing inhibitor concentration. In contrast, an uncompetitive compound increases the fraction of bound ligand, resulting in a higher polarization value. The displacement of the probe by a noncompetitive compound is more complex and is dependent upon the degree of cooperativity. When the compound reduces (α > 1), increases (α < 1), or has no effect on probe affinity (α = 1), the resulting fraction bound will decrease, increase, or not change, respectively (Fig. 6.7). 2.4. Limitations of Steady-State FP Measurements
As with any technique, fluorescence polarization has its share of limitations. General FP complications include interactions between the fluorescent probe and the compound, compound aggregation, scattered light, sample turbidity, plate or buffer polarization, compound fluorescence, fluorescence quenching due to various factors, and instrument detector saturation. Probe–target (or compound–target) interactions may also not fully mimic the native interactions, in cases including labeled probes having an altered affinity/binding mode relative to unlabeled counterparts; use of mutant enzymes in place of active enzymes; multiple enzyme conformations; and enzymes with multiple substrates. In addition, nonideal FP data can result from steady-state FP measurements of systems that possess multiple fluorescence lifetimes and/or multiple rotational correlation times and thus complex exponential decays, which may be accurately determined only by time-resolved FP measurements. Finally, because the FP assay signal window does not change significantly with the G factor and the starting polarization value is in practice set arbitrarily, the standard S/B parameter no longer indicates the robustness of the assay. For FP assays, the meaningful statistical parameters are the assay signal window (mP) and Z′.
2.5. Summary
Fluorescence polarization is a powerful technique for the study of biomolecular interactions in solution and has been widely used in biochemical high-throughput screens. This chapter reviewed three representative steady-state FP applications and good practices in the development and execution of FP-based HTS assays. After hits are obtained from FP-based HTS assays, a good practice is to always confirm the hits in a secondary assay. For direct FP competition-binding assays, an orthogonal functional assay may be used. For FP detection assays, a second functional assay that employs a different detection method is recommended.
3. Notes

1. Fluorescence polarization measurements on commercial fluorescence plate readers follow one-photon excitation, which has a maximum anisotropy of 0.4. Anisotropy values higher than 0.4 indicate misalignment in the instrument or the presence of scattered light. Excitation with two photons or multiple photons by picosecond or femtosecond laser sources uses different photoselection processes, which can lead to a maximum anisotropy greater than 0.4 (21).

2. The common practice is to set the instrument to 27 mP for 1 nM fluorescein and to subtract the background fluorescence (buffer only) from the IS and IP values, as the background fluorescence is often polarized. We recommend instead adjusting the G factor of the instrument such that the probe mP is between 50 and 100 mP. Empirical results have shown that an FP assay signal window (probe bound minus probe free) does not vary significantly with the G factor. A higher initial mP for the probe can avoid situations in screens where polarization values are close to zero or turn negative. When the assay fluorescence intensity S/B is low (<20), background fluorescence due to buffer should be subtracted from the IS and IP values during calibration, and buffer-only controls should be added to the assay plate. There are some commercial FP assays that have a FLINT S/B as low as 5. These assays are much more prone to larger CVs when implemented in screens without designated buffer-only wells. If the probe concentration yields a fluorescence intensity S/B > 50, background effects are minimized and background subtraction is not required.
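To illustrate the calibration arithmetic in Note 2, the sketch below solves P = (IS – G·IP)/(IS + G·IP) for G, given the raw channel reads of a reference well and the polarization it should report (e.g., 60 mP for the free probe). The raw intensities are hypothetical and background subtraction is omitted for brevity.

```python
def g_factor_from_reference(i_s, i_p, target_mP):
    """Solve P = (IS - G*IP) / (IS + G*IP) for G, given the raw channel reads of a
    reference well and the polarization (in mP) that it should report."""
    p = target_mP / 1000.0
    return i_s * (1.0 - p) / (i_p * (1.0 + p))

def millipolarization(i_s, i_p, g):
    return 1000.0 * (i_s - g * i_p) / (i_s + g * i_p)

# Hypothetical raw reads for a free-probe reference well, calibrated to 60 mP.
g = g_factor_from_reference(i_s=12000.0, i_p=11000.0, target_mP=60.0)
print(f"G = {g:.3f}; check: {millipolarization(12000.0, 11000.0, g):.1f} mP")
```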
Acknowledgments
The authors would like to thank Ray Unwalla of Wyeth Research for molecular modeling of E-76 and E-76-based probes; Rebecca Shirk and Belew Mekonnen of Wyeth Research for collaboration on the TF/FVIIa project; Shannon Stahler, Nina Kadakia, Gary Kalgaonkar, William Martin, Mariya Gazumyan, Pedro Sobers, and Jim LaRocque of Wyeth Research for contribution to the NR project; and Richard Harrison of Wyeth Research for critical review of the chapter.

References
1. Perrin, F. (1926) Polarisation de la lumière de fluorescence. Vie moyenne des molécules dans l'état excité. J. Phys. Radium V, Ser. 6 7, 390–401.
2. Jameson, D. M. (2001) The seminal contributions of Gregorio Weber in modern fluorescence spectroscopy. In: New Trends in Fluorescence Spectroscopy, Springer-Verlag, Heidelberg, pp. 35–53.
3. Jameson, D. M., and Sawyer, W. H. (1995) Fluorescence anisotropy applied to biomolecular interactions. Methods Enzymol. 246, 283–300.
4. Checovich, W. J., Bolger, R. E., and Burke, T. (1995) Fluorescence polarization – a new tool for cell and molecular biology. Nature 375, 141–144.
5. Terpetschnig, E., Szmacinski, H., and Lakowicz, J. R. (1997) Long-lifetime metal–ligand complexes as probes in biophysics and clinical chemistry. Methods Enzymol. 278, 295–321.
6. Hill, J. J., and Royer, C. A. (1997) Fluorescence approaches to study of protein–nucleic acid complexation. Methods Enzymol. 278, 390–416.
7. Kakehi, K., Oda, Y., and Kinoshita, M. (2001) Fluorescence polarization: analysis of carbohydrate–protein interactions. Anal. Biochem. 297, 111–116.
8. Lakowicz, J. R. (1999) Fluorescence anisotropy. In: Principles of Fluorescence Spectroscopy, second edition (Lakowicz, J. R.), Kluwer Academic/Plenum Publishers, New York, pp. 291–319.
9. Perrin, F. (1929) La fluorescence des solutions. Induction moléculaires. Polarisation et durée d'émission. Photochimie. Ann. Phys. Ser. 10 12, 169–275.
10. Perrin, F. (1931) Fluorescence. Durée élémentaire d'émission lumineuse. Conférences d'Actualités Scientifiques et Industrielles XXII, 2–41.
11. Lakowicz, J. R. (1999) Introduction to fluorescence. In: Principles of Fluorescence Spectroscopy, second edition (Lakowicz, J. R.), Kluwer Academic/Plenum Publishers, New York, pp. 1–23.
12. Cantor, C. R., and Schimmel, P. R. (1980) Biophysical Chemistry Part II: Techniques for the Study of Biological Structure and Function, W. H. Freeman, pp. 454–465.
13. Lakowicz, J. R. (1999) Time-dependent anisotropy decays. In: Principles of Fluorescence Spectroscopy, second edition (Lakowicz, J. R.), Kluwer Academic/Plenum Publishers, New York, pp. 321–345.
14. Mackman, N. (2004) Role of tissue factor in homeostasis, thrombosis and vascular development. Arterioscler. Thromb. Vasc. Biol. 24, 1015–1022.
15. Shirk, R. A., and Vlasuk, G. P. (2007) Inhibitors of Factor VIIa/Tissue Factor. Arterioscler. Thromb. Vasc. Biol. 27, 1895–1900.
16. Dennis, M. S., Eigenbrot, C., Skelton, N. J., Ultsch, M. H., Santell, L., Dwyer, M. A., O'Connell, M. P., and Lazarus, R. A. (2000) Peptide exosite inhibitors of factor VIIa as anticoagulants. Nature 404, 465–470.
17. Huang, X. (2003) Fluorescence polarization competition assay: the range of resolvable inhibitor potency is limited by the affinity of the fluorescent ligand. J. Biomol. Screening 8, 34–38.
18. Huang, X. (2003) Equilibrium competition binding assay: inhibition mechanism from a single dose response. J. Theor. Biol. 225, 369–376.
19. Turconi, S., Shea, K., Ashman, S., Fantom, K., Earnshaw, D. L., Bingham, R. P., Haupts, U. M., Brown, M. J. B., and Pope, A. (2001) Real experiences of uHTS: a prototypic 1536-well fluorescence anisotropy-based uHTS screen and application of well-level quality control procedures. J. Biomol. Screening 6, 275–290.
20. Wu, G., Yuan, Y., and Hodge, C. N. (2003) Determining appropriate substrate conversion for enzymatic assays in high-throughput screening. J. Biomol. Screening 8, 694–700.
21. Lakowicz, J. R., Gryczynski, I., Gryczynski, Z., and Danielsen, E. (1992) Time-resolved fluorescence intensity and anisotropy decays of 2,5-diphenyloxazole by two-photon excitation and frequency-domain fluorometry. J. Phys. Chem. 96, 3000–3006.
Chapter 7 Screening G Protein-Coupled Receptors: Measurement of Intracellular Calcium Using the Fluorometric Imaging Plate Reader Renee Emkey and Nancy B. Rankl Abstract G protein-coupled receptors (GPCRs) are the target of approximately 40% of all approved drugs and continue to represent a significant portion of drug discovery portfolios across the pharmaceutical industry. As a result, GPCRs are the focus of many high-throughput screening (HTS) campaigns. Historically, ligand-binding assays were used to identify compounds that targeted GPCRs. Current GPCR drug discovery efforts have moved toward the utilization of functional cell-based assays for HTS. Many of these assays monitor the accumulation of a second messenger such as cAMP or calcium in response to GPCR activation. Calcium stores are released from the endoplasmic reticulum when Gαq-coupled GPCRs are activated. Although Gαi- and Gαs-coupled receptors do not normally result in this mobilization of intracellular calcium, they can often be engineered to do so by expressing a promiscuous or a chimeric Gα protein, which couples to the calcium pathway. Thus calcium mobilization is a readout that can theoretically be used to assess activation of all GPCRs. The fluorometric imaging plate reader (FLIPR) has facilitated the ability to monitor calcium mobilization in the HTS setting. This assay format allows one to monitor activation and inhibition of a GPCR in a single assay and has been one of the most heavily utilized formats for screening GPCRs. Key words: GPCR, FLIPR, Heterotrimeric G proteins, Calcium mobilization, HTS.
1. Introduction In the early days of GPCR drug discovery, there was limited choice with respect to assay formats and researchers relied primarily on radioligand-binding assays. The advances in understanding GPCR biology coupled with the introduction of new assay technologies have permitted the implementation of functional cell-based assays
in HTS campaigns (1). Perhaps the most widely used functional assay for screening GPCRs has been the measurement of calcium mobilization. This assay is applicable for Gαq-coupled GPCRs because this signaling pathway culminates in the release of calcium stores from the endoplasmic reticulum into the cytosol. Gαi- and Gαs-coupled GPCRs do not normally signal through this pathway, but they can be engineered to do so by expressing chimeric (2, 3) or promiscuous (4) Gα proteins. The basic principle of the calcium mobilization assay is to load cells with a calcium-sensitive dye, which fluoresces when bound to calcium; therefore, an increased fluorescence signal is indicative of activation of the target GPCR. The utilization of this assay in HTS applications was enabled by the development of the FLIPR (Molecular Devices Corporation, Sunnyvale, CA) in the mid-1990s (5). Other instruments with similar capabilities such as the FDSS (Hamamatsu Corporation, Hamamatsu City, Japan) and the CellLux (Perkin Elmer, Waltham, MA) have been released in recent years. The latest FLIPR model, the FLIPR TETRA™, has an integrated 96-, 384-, or 1536-well pipettor and is able to accommodate multiple reagent reservoirs, allowing one to perform multiple additions to the assay plate. The earlier FLIPR models used an argon ion laser, but the FLIPR TETRA™ uses LEDs to excite the microtiter plate. The image of each well is captured simultaneously by a cooled charge-coupled device (CCD) camera, which is capable of updating images once per second. This is critical for measurement of the rapid calcium response that is typically observed for GPCRs. The kinetic data obtained on the FLIPR essentially provide a fingerprint for activation of the target GPCR. They enable one to extract more information regarding the activity of a test compound as compared to a single-read endpoint assay. The kinetic profile of an agonist can vary slightly depending on the GPCR and/or the cell background. The agonist-induced increase in intracellular calcium may be transient and return to baseline levels relatively quickly (Fig. 7.1A). Alternatively, in some instances, the level of intracellular calcium remains elevated for an extended period of time following agonist treatment (Fig. 7.1B). Knowing what the kinetic profile of a true agonist looks like can help the researcher distinguish between compounds that have agonist activity and those that are likely false positives (Fig. 7.1C,D). The most obvious false positive is a fluorescent compound that elicits a very distinct kinetic profile characterized by an extremely rapid increase in fluorescence signal, which remains relatively unchanged over time (Fig. 7.1D). The experimental workflow for a calcium mobilization assay on the FLIPR is shown in Fig. 7.2. Briefly, cells expressing the GPCR of interest are plated the day prior to the assay, the cells are incubated with a cell-permeable calcium-sensitive dye, and then the plate is placed on the FLIPR for the assay. Here we describe the
method for a two-addition calcium mobilization assay on the FLIPR TETRA™ for a Gαs-coupled GPCR that was stably coexpressed with the promiscuous G protein, Gα16, in CHO-K1 cells. Recommendations regarding alternative conditions and parameters that may be applicable to other GPCRs and/or cell lines are also discussed.
Fig. 7.1. Representative kinetic profiles of a calcium mobilization assay. CHO-K1 cells stably expressing a promiscuous Gα protein and a Gαs-coupled GPCR were subjected to a calcium mobilization assay using a standard two-addition format performed on the FLIPR TETRA™. The first addition was either assay buffer (A and B) or test compound (C and D). The fluorescence signal was captured for 3 minutes and then a known agonist was added to the cell plate (A–D) and the fluorescence signal was monitored for an additional 2 minutes. (A) A representative kinetic profile of a GPCR that exhibits a transient increase in intracellular calcium upon treatment with an agonist. (B) An example of a GPCR that has a prolonged response to agonist. (C) An example of a compound that results in an increase in intracellular calcium, but with different kinetics than a known agonist. (D) A representative kinetic profile of a fluorescent compound. The sharp spike in fluorescence signal at the beginning of the kinetic profiles in A–C is an artifact of the clear FLIPR tips entering the well prior to the first addition.
Fig. 7.2. Experimental flow scheme for a calcium mobilization assay on FLIPR: (I) cell line generation, (II) plate cells, (III) load cells with dye, (IV) assay on FLIPR, and (V) data analysis.
2. Materials 2.1. Cell Culture
1. Cell culture medium for CHO-K1 cells: Ham's F-12 (Mediatech Inc., Manassas, VA) supplemented with 10% characterized fetal bovine serum (FBS, Hyclone, Ogden, UT). 2. Solution of trypsin (0.05%) and ethylenediamine tetraacetic acid (EDTA) (0.53 mM) from Mediatech, Inc. 3. Phosphate-buffered saline (PBS) from Mediatech, Inc. 4. T175 tissue culture flasks (Greiner Bio-One, Monroe, NC). 5. 384-well black-walled, clear-bottom tissue culture-treated microtiter plates (Greiner Bio-One). 6. Multidrop (Thermo Scientific, Waltham, MA) or equivalent liquid dispenser for 384-well format. 7. FuGENE™ 6 transfection reagent from Roche (Indianapolis, IN).
2.2. Calcium Mobilization Assay
1. Assay buffer: 20 mM HEPES, 11.1 mM glucose, 1.8 mM CaCl2, 1 mM MgCl2, 2.5 mM NaCl, 5 mM probenecid, adjusted to pH 7.4. Store at 4°C and warm to room temperature before use. 2. 500 mM probenecid (MP Biomedicals, Solon, OH) made in 1 N NaOH and stored at room temperature. 3. Fluo-4/AM cell permeant from Invitrogen (Carlsbad, CA) is solubilized to a 1 mM stock solution with 100% DMSO by sonicating for 10 minutes. This stock solution is stable at −20°C. 4. The 2 μM Fluo-4/AM dye-loading solution is prepared on the day of the assay as follows. Mix the necessary volume of 1 mM Fluo-4/AM with an equal volume of 20% Pluronic F-127 in DMSO (Invitrogen). Add this solution to the appropriate amount of assay buffer. The final dye-loading solution consists of 2 μM Fluo-4/AM and 0.04% Pluronic F-127 in assay buffer; prepare it fresh and do not store it for use the next day. 5. 384-well clear FLIPR tips from Axygen (Union City, CA).
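The dye-loading recipe in item 4 scales linearly with the final volume. As a quick arithmetic check (a minimal sketch, not part of the original protocol), a short Python helper can compute the stock volumes required, assuming the 1 mM Fluo-4/AM and 20% Pluronic F-127 stocks listed above:

def dye_loading_volumes(final_volume_ml, fluo4_stock_mM=1.0, fluo4_final_uM=2.0,
                        pluronic_stock_pct=20.0, pluronic_final_pct=0.04):
    """Stock volumes (in microliters) needed for the Fluo-4/AM dye-loading solution."""
    # C1 x V1 = C2 x V2, converting the 1 mM stock down to the 2 uM target
    fluo4_ul = final_volume_ml * 1000.0 * fluo4_final_uM / (fluo4_stock_mM * 1000.0)
    pluronic_ul = final_volume_ml * 1000.0 * pluronic_final_pct / pluronic_stock_pct
    return fluo4_ul, pluronic_ul

# Example: 10 mL of loading solution (enough for one 384-well plate at 20 uL/well
# plus dead volume) needs 20 uL of Fluo-4/AM stock and an equal 20 uL of Pluronic,
# consistent with the "equal volume" instruction in item 4.
print(dye_loading_volumes(10.0))  # -> (20.0, 20.0)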
3. Methods The method presented here was used to enable a calcium mobilization assay for a family of Gαs-coupled GPCRs using the FLIPR TETRA™. The signaling of this GPCR was redirected to the calcium mobilization pathway by generating a stable cell line
(see Note 4.1, item 1) that coexpresses the GPCR and Gα16 (see Note 4.1, item 2). Chinese hamster ovary (CHO-K1) cells (ATCC, Manassas, VA) were used for these studies. These cells are adherent, but nonadherent cells can also be used for these types of assays (see Note 4.1, item 3). 3.1. Cell Line Generation
Several methods exist to introduce the target GPCR and Gα proteins (if needed) into cells. The most commonly used methods for transfection include calcium phosphate, electroporation, or newer lipid transfection reagents such as FuGENE™ 6. The example presented here utilizes FuGENE™ 6. 1. The optimal DNA:FuGENE™ 6 ratio is determined by performing several small-scale transfections in which the concentration of a plasmid encoding green fluorescent protein (GFP) and FuGENE™ 6 is varied. The GFP signal is monitored and used to determine the DNA:FuGENE™ 6 ratio for optimal transfection efficiency. 2. Seed cells in a 6-well plate at a density of 1 × 10⁴ cells per well in 3 mL of growth medium for 24 hours prior to transfection. Maintain cells in an incubator set at 37°C/5% CO2. Transfect cells with 6 μL of FuGENE™ 6 + 1.5 μg of DNA according to the manufacturer's instructions (see Note 4.1, item 4). 3. Harvest cells 48 hours posttransfection and seed a T175 flask containing growth medium supplemented with the appropriate concentration of the antibiotic whose resistance is encoded by the transfected plasmid. Maintain cells under antibiotic selection and monitor for the formation of drug-resistant colonies. 4. Harvest the cells once colonies have developed. This cell population is called the stable "pool." A clonal line is isolated by single-cell sorting cells into a 96-well plate with the selection media (see Note 4.1, item 5). Other methods, such as seeding cells into 100 mm dishes and isolating colonies with cloning rings, may also be used. Wells are monitored over 2 weeks and wells with single colonies are harvested for testing and expansion. Each colony is tested in the FLIPR for correct response to agonist. Those clones with a positive response are chosen for further characterization with known agonists and antagonists. 5. Once a clonal cell line is chosen, expand the cells and make a liquid nitrogen cell bank for long-term storage. Cells are frozen at a density of 1 × 10⁷ cells/mL in 90% FBS/10% DMSO. 6. Cells are maintained in culture by splitting every 3 days with a seeding density of 3 × 10⁶ cells in a T175 flask (see Note 4.1, item 6).
3.2. Cell Plating
1. Cells for the calcium mobilization assay should be in exponential growth for optimal response. 2. Harvest cells with trypsin–EDTA (see Note 4.2, item 1). 3. Seed cells in a 384-well black-walled, clear-bottom plate (see Note 4.2, item 2) at a density of 10,000 cells per well (see Note 4.2, item 3) in 50 μL of growth medium. 4. Leave plates in a single layer at room temperature for at least 60 minutes (see Note 4.2, item 4). 5. Incubate plates overnight in a 37°C/5% CO2 incubator.
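For planning a screening run, the plating numbers in step 3 translate directly into how much cell suspension is needed. The helper below is an illustrative sketch only; the 20% overage for dead volume is an assumption, not a value from the protocol.

def seeding_requirements(n_plates, cells_per_well=10_000, volume_per_well_ul=50.0,
                         wells_per_plate=384, overage=1.2):
    """Total cells, suspension volume (mL), and required density (cells/mL)
    for seeding n_plates at the density given in step 3 above."""
    wells = n_plates * wells_per_plate
    total_cells = wells * cells_per_well * overage
    total_volume_ml = wells * volume_per_well_ul * overage / 1000.0
    density_per_ml = total_cells / total_volume_ml  # 10,000 cells / 50 uL = 2 x 10^5 cells/mL
    return total_cells, total_volume_ml, density_per_ml

# Example: 10 plates -> ~4.6 x 10^7 cells in ~230 mL of suspension at 2 x 10^5 cells/mL.
print(seeding_requirements(10))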
3.3. Loading Cells with Dye
1. Prepare dye-loading solution (2 μM Fluo-4/AM, 0.04% Pluronic F-127 in assay buffer) (see Note 4.3, items 1–2). 2. Completely remove media from the cell plate (see Note 4.3, item 3). This can be done by manually inverting the plate and flicking the media out of the plate or by using a plate washer to aspirate the media from the wells (see Note 4.3, item 4). 3. Add 20 μL per well of the dye-loading solution and place the plate into a 27°C incubator in a single layer for 60 minutes (see Note 4.3, item 5). 4. After 60 minutes, remove dye and replace with 20 μL per well of assay buffer (see Note 4.3, item 6).
3.4. Assay on FLIPR
These instructions are for an assay performed on the FLIPR TETRA™ instrument in 384-well format using LED excitation at 470–495 nm and emission at 515–575 nm. These instructions can be adapted to other instruments that can read fluorescent signals in real time and have liquid handling capabilities, such as those mentioned in Section 1. Prior to running the assay, a protocol file needs to be written on the FLIPR. The protocol file tells the instrument where the plates are located on the deck, the type of plates being used, the type of pipettor head (96, 384, or 1536), the excitation and emission wavelengths, and the height and speed for the integrated pipettor (see Note 4.4, item 1). The protocol also defines the length of the assay, when to add the reagents, and the frequency at which images are captured during the run. The protocol for the example presented here consists of two additions to the cell plate (see Note 4.4) and has the following settings: a. Ten reads are collected with a 1-second read interval prior to the first addition. b. First addition: add 20 μL of compound. The dispense speed is 20 μL/second and the dispense height is 20 μL. Images are collected at a 1-second interval for 3 minutes.
c. Second addition: add 20 μL of agonist with a dispense speed of 20 μL/second and a dispense height of 40 μL. Images are collected at a 1-second interval for 2 minutes. 1. After dye loading is complete, place the cell plate in the read position on the FLIPR TETRA™. Wait for 5 minutes before running the assay to allow the cells to stabilize. Reading the plate immediately after dispensing buffer will result in a drift in the response. 2. Place the reservoirs containing the reagents to be added to the plate in the appropriate positions on the FLIPR. The reservoir for the first addition consists of compound at twice the desired final concentration. The reservoir for the second addition contains agonist at three times its EC80 concentration. (These factors follow from the addition volumes: the first 20 μL addition doubles the 20 μL of assay buffer already in the well, so a 2× compound solution gives a 1× final concentration; the second 20 μL addition into the resulting 40 μL dilutes the agonist threefold, so a 3× solution delivers the EC80.) 3. Place pipet tips in the tip-loading position on the FLIPR (see Note 4.4, item 2). 4. Perform a signal test to determine the dye-loading efficiency of the cells and the variability across the plate. Set the camera gain to 100 and the LED excitation intensity to 70, with the CCD camera exposure held constant at 0.4 seconds. The CCD camera takes a picture of the entire plate and the images are converted to a numerical readout called relative fluorescence units (RFU). The maximal saturation of the LEDs in the FLIPR TETRA™ is reported to be 9000 RFU; however, the authors' experience is that the saturation limit is 6000 RFU. Above 6000 RFU a saw-tooth pattern is visible in the data, which can result in false positives. Once the signal test is performed, the gain and excitation should be adjusted so that the average RFU across the plate is approximately 1000. In addition to viewing the RFUs, it is possible to see the image of the plate. The image is extremely helpful in identifying smudges or lint that may be causing high or low RFU values in specific wells. It also helps to visualize the cell monolayer and patterns resulting from cell plating, dye, or buffer additions. The overall %CV for a 384-well plate under optimal culturing and assaying conditions should be <10%. 5. Once the signal test is complete and the camera gain and the excitation intensity are set, the run can be initiated. 3.5. Data Analysis
The software provided with the FLIPR instrument is used to reduce and export the data for additional analysis. 1. Apply spatial uniformity correction to the plate (see Note 4.5, item 1). 2. Define the time sequences for the first and second response as 20–180 seconds and 200–300 seconds, respectively. These sequences are termed ‘‘time cuts’’ (see Note 4.5, item 2).
3. Identify the maximal signal over each time cut using the FLIPR software and export the data as a statistics file. 4. These data can then undergo additional analysis, such as log–concentration response curve fitting, using graphing applications such as GraphPad Prism (GraphPad Software, San Diego, CA).
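As an offline illustration of steps 3–4, the sketch below reduces exported kinetic traces to their time-cut maxima and fits a four-parameter logistic (Hill) model. It is not part of the FLIPR or Prism software; the array layout, the one-frame-per-second sampling, and the EC80 derivation are assumptions made for the example.

import numpy as np
from scipy.optimize import curve_fit

def time_cut_max(traces, start_s, end_s, frame_interval_s=1.0):
    """Maximal RFU per well within a time cut.
    traces: array of shape (n_wells, n_frames), assuming one frame per second."""
    i0, i1 = int(start_s / frame_interval_s), int(end_s / frame_interval_s)
    return traces[:, i0:i1 + 1].max(axis=1)

def hill(conc, bottom, top, ec50, slope):
    """Four-parameter logistic (Hill) concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

def fit_concentration_response(conc, response):
    """Fit the Hill model; return the fitted parameters and the derived EC80."""
    conc, response = np.asarray(conc, float), np.asarray(response, float)
    p0 = [response.min(), response.max(), np.median(conc), 1.0]
    params, _ = curve_fit(hill, conc, response, p0=p0, maxfev=10000)
    bottom, top, ec50, slope = params
    ec80 = ec50 * (80.0 / 20.0) ** (1.0 / slope)  # EC80 from EC50 and Hill slope
    return params, ec80

# With the time cuts defined in step 2: first response 20-180 s, second 200-300 s.
# first_max = time_cut_max(traces, 20, 180)
# second_max = time_cut_max(traces, 200, 300)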
4. Notes The method provided here describes the assay for one GPCR and the conditions will not necessarily apply to all GPCRs. The parameters as presented here may require some modifications in order to obtain an optimized assay for other GPCRs. In this section we describe the parameters that are often examined to deliver an optimized assay. 4.1. Cell Line Generation
1. A stable cell line expressing the target GPCR and promiscuous or chimeric Gα protein (where applicable) is not always required. Transiently transfected cells have been successfully used in calcium mobilization assays performed on the FLIPR (6, 7). 2. The redirection of Gαi- or Gαs-coupled GPCRs to the calcium mobilization pathway by chimeric (2, 3) or promiscuous (4) Gα proteins may not be without consequence. Alteration of ligand pharmacology in terms of efficacy and potency has been reported for some GPCRs that have been redirected to the calcium pathway (4, 8, 9). 3. The development of no-wash calcium dyes (see Note 4.3) has significantly simplified the protocol for suspension cells in the calcium mobilization assay (10, 11). 4. When both the target GPCR and a Gα protein need to be transfected into cells, one can either cotransfect both plasmids simultaneously or transfect them in series. The authors have had success with the latter method. Typically the Gα protein is transfected and a stable pool of cells is generated, which is then transfected with the target GPCR and stable clones are isolated and screened for response. In this instance, it is critical that the plasmids used encode different antibiotic resistance. 5. The pool of stably transfected cells may provide an acceptable assay, thus eliminating the need to screen and select a clonal line. However, due to the cell-to-cell variability of plasmid transfections, a better assay is generally obtained by generating a single-cell clonal line. The authors have had success using retroviral gene transduction to create stable cell lines
that perform well in this assay without isolating a clone. This is attributed to less variability in the expression level of the transduced GPCR across the cell population than is observed with plasmid transfections. 6. It may be possible to use cells directly from frozen stocks rather than continuously culturing cells for the assay. This approach has been successful for a number of laboratories (12, 13), but it needs to be tested on a case-by-case basis. If the cells are amenable to use directly from frozen stocks, it can ease the burden of continuously culturing large amounts of cells during an HTS campaign. Moreover, if a large single batch of cells is frozen in aliquots for a screen, it may improve the day-to-day variability in response that is typically observed for cell-based assays. 4.2. Cell Plating
1. Some GPCRs contain trypsin-sensitive sites in their extracellular domain, resulting in a decreased response when the cells are harvested using trypsin. In these cases an enzyme-free dissociation buffer is recommended for harvesting the cells. 2. Nonadherent cells or cells that are not strongly adherent, such as HEK293 cells, may perform better on plates coated with agents such as poly-D-lysine, collagen, or fibronectin. 3. The optimal cell density needs to be determined for each cell line used. This should be done by plating an entire plate at a single density and examining the variability across the plate. Adjustments in the cell density can often overcome variability issues such as edge effects. Cell densities typically fall in the range of 10,000–25,000 cells per well. 4. Incubation of newly seeded plates at room temperature prior to placing them in a 37°C/CO2 incubator has been reported to reduce edge effects (14).
4.3. Loading Cells with Dye
The example presented here used Fluo-4/AM dye; however, additional cell-permeable calcium-sensitive dyes are available, including Calcium Green-1 (Molecular Probes, Leiden, Netherlands) and Fluo-3/AM (Molecular Probes). Kits utilizing no-wash dyes are also available and have the advantage of eliminating the need to replace the dye solution with assay buffer prior to the assay. These no-wash dyes include a quenching agent in the formulation, thereby reducing the background fluorescence caused by the extracellular dye. Such kits include the Screen Quest™ Fluo-8/NW Calcium Assay Kit (ABD Bioquest, Sunnyvale, CA), the BD™ Calcium Assay Kit (BD Biosciences, Rockville, MD), the Fluo-4/NW Calcium Assay Kit (Invitrogen, Carlsbad, CA), and the FLIPR Calcium 3 or Calcium 4 Assay Kits (Molecular Devices, Sunnyvale, CA). The decision of which dye to use should be determined experimentally, as the optimal dye can vary depending on the GPCR and/or the cell type (15).
More recently, the use of photoproteins such as aequorin (16, 17) and Photina® (18) has been widely adopted to monitor intracellular calcium levels. These proteins enzymatically generate a luminescent signal upon elevation of intracellular calcium and may offer a larger signal-to-background ratio than the calcium-sensitive dyes. Plate readers capable of reading flash luminescence include the FLIPR3, FLIPR TETRA™, LumiLux® (Perkin Elmer, Waltham, MA), and CyBi®-Lumax flash HT (CyBio, Jena, Germany). 1. The optimal concentration of dye will vary depending on the cells and the dye used. 2. Some cell lines require the inclusion of the anion transporter inhibitor probenecid in the assay buffer to prevent the efflux of dye from the cells during the experiment. CHO cells are an example of a cell line that requires probenecid during and after the dye-loading step. 3. It is very important to completely remove all media from the wells prior to dye loading the cells. Residual media in the well can result in poor assay performance in terms of both response and variability. 4. Maintenance of the integrity of the cell monolayer is important because the FLIPR is designed to collect fluorescence from the bottom of the well. This will result in improved variability and robustness of the assay. 5. The incubation temperature and time for dye loading are cell line dependent and will need to be determined on a case-by-case basis. The authors' experience has been that most cells are compatible with dye loading at room temperature; however, some cells may perform better when the dye loading is performed at an elevated temperature such as 37°C. Typical time for dye loading is 1–2 hours. 6. When using dyes that are not classified as "no-wash," such as Fluo-3/AM or Fluo-4/AM, it is necessary to remove the dye and replace it with assay buffer prior to performing the assay on the FLIPR. Many protocols include one or more iterations of washing following removal of the dye. This can reduce the performance of the assay and increase variability due to the disruption of the cell monolayer during the washing process. The authors' experience is that additional washing steps are not required and simply exchanging the dye with assay buffer generally suffices. 4.4. Assay on FLIPR
The FLIPR is capable of performing more than one reagent addition to the cell plate. This allows one to design an assay that can detect both agonism and antagonism in the same experiment. The assay consists of two reagent additions to the cell plate (Fig. 7.3A). First, compound is added and then, after a defined period of time, a known agonist is added to the cell plate. A submaximal
concentration of agonist is used, typically an agonist concentration corresponding to the EC80–EC90. The agonist activity of the compound is monitored following the first addition and is indicated by an increased fluorescence signal (Fig. 7.3B). The antagonist activity of the compound is assessed after the second addition. An antagonist reduces the response elicited by the addition of the known agonist (Fig. 7.3C). Note that many GPCRs desensitize, in which case compounds that exhibit agonism will fail to respond to the second addition of the known agonist. Thus, these compounds will appear to be antagonists as well as agonists. Care needs to be taken to exclude these compounds from follow-up as antagonists.
Fig. 7.3. Two-addition FLIPR assay for detection of agonists and antagonists. CHO-K1 cells expressing a promiscuous Gα protein and a Gαs-coupled GPCR were assayed using a standard two-addition calcium mobilization assay in 384-well format on the FLIPR TETRA™. The first addition consisted of test compound and the fluorescence signal was captured for 3 minutes. Then an EC80 concentration of a known agonist was added to the cell plate and the fluorescence signal was monitored for an additional 2 minutes. (A) The kinetic profile of an inactive compound that does not elicit an increase in fluorescence signal and that does not alter the response to the agonist. (B) The kinetic profile of a compound that exhibits agonist activity. This compound induces an increase in fluorescence upon addition to the cells and reduces the subsequent response to the known agonist due to receptor desensitization. (C) The kinetic profile of a compound with antagonist activity. This compound does not result in an increase in fluorescence, but does reduce the subsequent agonist-induced response. The sharp spike in fluorescence signal at the beginning of the kinetic profiles is an artifact of the clear FLIPR tips entering the well prior to the first addition.
1. The FLIPR is designed to collect fluorescence from the bottom of the well, so it is important to maintain the integrity of the monolayer of cells. This will improve the variability and robustness of the assay. The dispense speed and height of the integrated pipettor on the FLIPR should be adjusted to minimize disruption of the cells while maintaining ample mixing of the reagent that is added to the well. 2. The use of clear pipet tips on the FLIPR may result in a spike of fluorescence signal as the tips enter the well during reagent additions to the cell plate. This can be observed in the kinetic profiles shown in Figs. 7.1 and 7.3. This artifact can be eliminated by using black pipet tips.
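To make the decision logic above concrete, the sketch below classifies wells from the two time-cut maxima in the way described in this note. The thresholds, and the idea of flagging an elevated pre-addition baseline as a likely fluorescent compound, are illustrative assumptions rather than values from the original screen.

def classify_well(baseline_rfu, first_max, second_max,
                  neutral_first, ec80_response,
                  agonist_factor=3.0, antagonist_fraction=0.5,
                  fluorescent_baseline=3000.0):
    """Classify one well of a two-addition FLIPR run.

    baseline_rfu  -- mean RFU of the reads collected before the first addition
    first_max     -- maximal RFU in the first time cut (compound addition)
    second_max    -- maximal RFU in the second time cut (EC80 agonist addition)
    neutral_first -- typical first-cut maximum of buffer/DMSO control wells
    ec80_response -- typical second-cut maximum of EC80 agonist control wells
    """
    if baseline_rfu > fluorescent_baseline:
        # Signal already high before any addition: likely a fluorescent compound.
        return "fluorescent artifact"
    if first_max > agonist_factor * neutral_first:
        # Apparent agonism; desensitization usually blunts the second response,
        # so such wells should not also be reported as antagonists.
        return "agonist (exclude from antagonist follow-up)"
    if second_max < antagonist_fraction * ec80_response:
        return "antagonist"
    return "inactive"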
4.5. Data Analysis
The authors have found that the ability to visualize the kinetic profiles at the time of data review is very valuable. This allows one to quickly identify instances of desensitization of the receptor by a compound with agonist activity. In addition, false positives such as fluorescent compounds or compounds that elicit a strange kinetic profile can be quickly identified and eliminated from follow-up (Fig. 7.1C–D). This can expedite the postscreening hit assessment phase. 1. A number of data processing methods exist in the FLIPR software and are used to compensate for variations in data due to variations in cell density, fluorescence intensity, dye loading, or other variability associated with the assay run. The data processing algorithms include, but are not limited to, spatial uniformity correction, subtract bias, response over baseline, negative control correction, and positive control scaling. Generally, spatial uniformity correction is used when the cell type, cell density, and dye-loading conditions are the same for the entire plate. If different cell densities, cell lines, dye types, or dye concentrations are used in the same plate, then response over baseline can be used. 2. The term "time cut" refers to the time sequence over which the data are analyzed. In a dual-addition assay as presented here, two time cuts are defined. The first consists of a segment of time following the initial addition of compound and is used to monitor for agonist activity. The second is after the agonist is added and is used to measure antagonist activity of the compound. When a fluorescence spike is observed due to the use of clear pipet tips (see Note 4.4, item 2), it is important that the time sequences used for analyzing data begin after the spike (Fig. 7.4). Also, for many GPCRs, not only does the magnitude of response decline with decreasing concentrations of agonist, but the peak of the response may also be delayed (Fig. 7.4). In these cases, a time cut that is too narrow may capture the peak of the response to high agonist concentrations, but miss the peak response with lower concentrations of agonist. Therefore, the authors recommend using a time cut that spans the entire time following each reagent addition.
Fig. 7.4. Time-cut determination for data analysis. CHO-K1 cells expressing a promiscuous Gα protein and a Gαs-coupled GPCR were assayed using a standard two-addition calcium mobilization assay in 384-well format on the FLIPR TETRA™. The first addition consisted of varying concentrations of test compound with known agonist activity and the fluorescence signal was captured for 3 minutes. Then an EC80 concentration of agonist was added to the cell plate and the fluorescence signal was monitored for an additional 2 minutes. The time cuts chosen for the first read (20–180 seconds) and the second read (200–300 seconds) are indicated by the hatched boxes. The maximum signal within each time cut was used for subsequent data analysis. The sharp spike in fluorescence signal at the beginning of the kinetic profiles is an artifact of the clear FLIPR tips entering the well.
References
1. Eglen, R. M., Bosse, R., and Reisine, T. (2007) Emerging concepts of guanine nucleotide-binding protein-coupled receptor (GPCR) function and implications for high throughput screening. Assay Drug Dev. Technol. 5, 425–451.
2. Coward, P., Chan, S. D., Wada, H. G., Humphries, G. M., and Conklin, B. R. (1999) Chimeric G proteins allow a high-throughput signaling assay of Gi-coupled receptors. Anal. Biochem. 270, 242–248.
3. Yokoyama, T., Kato, N., and Yamada, N. (2003) Development of a high-throughput bioassay to screen melatonin receptor agonists using human melatonin receptor expressing CHO cells. Neurosci. Lett. 344, 45–48.
4. Kostenis, E., Waelbroeck, M., and Milligan, G. (2005) Techniques: promiscuous Gα proteins in basic research and drug discovery. Trends Pharmacol. Sci. 26, 595–602.
5. Schroeder, K. S. and Neagle, B. D. (1996) FLIPR: a new instrument for accurate, high throughput optical screening. J. Biomol. Screen. 1, 75–80.
6. Zhang, J. Y., Nawoschik, S., Kowal, D., Smith, D., Spangler, T., Ochalski, R., Schechter, L., and Dunlop, J. (2003) Characterization of the 5-HT6 receptor coupled to Ca2+ signaling using an enabling chimeric G-protein. Eur. J. Pharmacol. 472, 33–38.
7. Elshourbagy, N. A., Ames, R. S., Fitzgerald, L. R., Foley, J. J., Chambers, J. K., Szekeres, P. G., Evans, N. A., Schmidt, D. B., Buckley, P. T., Dytko, G. M., Murdock, P. R., Milligan, G., Groarke, D. A., Tan, K. B., Shabon, U., Nuthulaganti, P., Wang, D. Y., Wilson, S., Bergsma, D. J., and Sarau, H. M. (2000) Receptor for the pain modulatory neuropeptides FF and AF is an orphan G protein-coupled receptor. J. Biol. Chem. 275, 25965–25971.
8. Sawyer, N., Cauchon, E., Chateauneuf, A., Cruz, R. P., Nicholson, D. W., Metters, K. M., O'Neill, G. P., and Gervais, F. G. (2002) Molecular pharmacology of the human prostaglandin D2 receptor, CRTH2. Br. J. Pharmacol. 137, 1163–1172.
9. Kowal, D., Zhang, J., Nawoschik, S., Ochalski, R., Vlattas, A., Shan, Q., Schechter, L., and Dunlop, J. (2002) The C-terminus of Gi family G-proteins as a determinant of 5-HT(1A) receptor coupling. Biochem. Biophys. Res. Commun. 294, 655–659.
10. Lubin, M. L., Reitz, T. L., Todd, M. J., Flores, C. M., Qin, N., and Xin, H. (2006) A nonadherent cell-based HTS assay for N-type calcium channel using calcium 3 dye. Assay Drug Dev. Technol. 4, 689–694.
11. Ott, T. R., Pahuja, A., Lio, F. M., Mistry, M. S., Gross, M., Hudson, S. C., Wade, W. S., Simpson, P. B., Struthers, R. S., and Alleva, D. G. (2005) A high-throughput chemotaxis assay for pharmacological characterization of chemokine receptors: utilization of U937 monocytic cells. J. Pharmacol. Toxicol. Methods 51, 105–114.
12. Kunapuli, P., Zheng, W., Weber, M., Solly, K., Mull, R., Platchek, M., Cong, M., Zhong, Z., and Strulovici, B. (2005) Application of division arrest technology to cell-based HTS: comparison with frozen and fresh cells. Assay Drug Dev. Technol. 3, 17–26.
13. Chen, J., Lake, M. R., Sabet, R. S., Niforatos, W., Pratt, S. D., Cassar, S. C., Xu, J., Gopalakrishnan, S., Pereda-Lopez, A., Gopalakrishnan, M., Holzman, T. F., Moreland, R. B., Walter, K. A., Faltynek, C. R., Warrior, U., and Scott, V. E. (2007) Utility of large-scale transiently transfected cells for cell-based high-throughput screens to identify transient receptor potential channel A1 (TRPA1) antagonists. J. Biomol. Screen. 12, 61–69.
14. Lundholt, B. K., Scudder, K. M., and Pagliaro, L. (2003) A simple technique for reducing edge effect in cell-based assays. J. Biomol. Screen. 8, 566–570.
15. Xin, H., Wang, Y., Todd, M. J., Qi, J., and Minor, L. K. (2007) Evaluation of no-wash calcium assay kits: enabling tools for calcium mobilization. J. Biomol. Screen. 12, 705–714.
16. Dupriez, V., Maes, K., Le Poul, E., Burgeon, E., and Detheux, M. (2002) Aequorin-based functional assays for G-protein-coupled receptors, ion channels, and tyrosine kinase receptors. Receptors Channels 8, 319–330.
17. Le Poul, E., Hisada, S., Miziguchi, Y., Dupriez, V. J., Burgeon, E., and Detheux, M. (2002) Adaptation of aequorin functional assay to high throughput screening. J. Biomol. Screen. 7, 57–65.
18. Bovolenta, S., Foti, M., Lohmer, S., and Corazza, S. (2007) Development of a Ca(2+)-activated photoprotein, Photina, and its application to high-throughput screening. J. Biomol. Screen. 12, 694–704.
Chapter 8 High-Throughput Automated Confocal Microscopy Imaging Screen of a Kinase-Focused Library to Identify p38 Mitogen-Activated Protein Kinase Inhibitors Using the GE InCell 3000 Analyzer O. Joseph Trask, Debra Nickischer, Audrey Burton, Rhonda Gates Williams, Ramani A. Kandasamy, Patricia A. Johnston, and Paul A. Johnston Abstract The integration of fluorescent microscopy imaging technologies and image analysis into high-content screening (HCS) has been applied throughout the drug discovery pipeline to identify, evaluate, and advance compounds from early lead generation through preclinical candidate selection. In this chapter we describe the development, validation, and implementation of an HCS assay to screen compounds from a kinase-focused small-molecule library to identify inhibitors of the p38 pathway using the GE InCell 3000 automated imaging platform. The assay utilized a genetically modified HeLa cell line stably expressing mitogen-activated protein kinase-activated protein kinase-2 fused to enhanced green fluorescent protein (MK2–EGFP) and measured the subcellular distribution of the MK2–EGFP as a direct readout of p38 activation. The MK2–EGFP translocation assay performed in 384-well glass bottom microtiter plates exhibited a robust Z-factor of 0.46 and reproducible EC50 and IC50 determinations for activators and inhibitors, respectively. A total of 32,891 compounds were screened in singlicate at 50 μM and 156 were confirmed as inhibitors of p38-mediated MK2–EGFP translocation in follow-up IC50 concentration response curves. Thirty-one compounds exhibited IC50s less than 1 μM, and at least one novel structural class of p38 inhibitor was identified using this HCA/HCS chemical biology screening approach. Keywords: High-content imaging, High-content analysis, High-content screening, Confocal microscopy, Kinase, p38, MAPKAP-K2, GFP, InCell.
1. Introduction The mitogen-activated protein kinases (MAPK) sit at key nodes of the signaling pathways for extracellular stimuli that regulate the
fundamental processes of cells in both normal and diseased states (1–5). The p38 (also known as reactivating kinase (RK) or p40) kinase module is known to mediate stress responses activated by heat shock, ultraviolet light, bacterial lipopolysaccharide, and proinflammatory cytokines, and p38 MAPK has been a major target for drug discovery by the pharmaceutical industry (3, 6–8). Mitogen-activated protein kinase-activated protein kinase-2 (MK2) is a substrate of the p38 MAPK, and phosphorylation of MK2 by p38 induces a nucleus-to-cytoplasm translocation (9–12). The generation and characterization of a stable MK2–EGFP-expressing cell line that was subsequently utilized as an HCS assay to screen for inhibitors of the p38 MAPK signaling pathway has been described previously (12, 13). Briefly, HeLa cells (ATCC, CCL-2) were infected with MK2–enhanced green fluorescent protein (EGFP) retrovirus, placed under selective antibiotic pressure for 2 days, then sorted by flow cytometry to isolate single-cell clones that were then expanded and characterized. Interestingly, the majority of the single-cell clones with very high fluorescent signal did not show an optimal signal-to-noise ratio in the MK2–EGFP translocation assay (data not shown). Several clones displayed a MK2–EGFP translocation concentration response to anisomycin with EC50 values ranging from 23 to 35 nM (Fig. 8.1). However, a cell line expanded from a single-cell clone designated "A4" (MK2–A4) that exhibited a translocation response that was very sensitive to both anisomycin and TNF-α was used in a screening campaign to identify inhibitors of the p38 pathway.
Fig. 8.1. Comparison of HeLa MK2–EGFP clones induced by anisomycin. Cells selected for clonal expansion were seeded in 96-well plates and treated with the indicated concentrations of anisomycin for 25 minutes to stimulate MK2–EGFP translocation. Cells were fixed with 3.7% formaldehyde and 2 μg/ml of Hoechst 33342 for 10 minutes, images were acquired on the Cellomics ArrayScan II (Cellomics (Thermo Fisher), Pittsburgh, PA), and analyzed for MK2–EGFP translocation using the nuclear translocation bioapplication.
There are several commercial turnkey HCA/HCS automated imaging platforms on the market, but the GE InCell 3000 offers a unique combination of features, such as laser line-scanning confocality, environmental control, on-board pipetting, and fast, unorthodox algorithms to quantify fluorescent cellular objects. The InCell 3000 is a confocal line-scanning imager that projects a line of illumination into the specimen using three independent water-cooled lasers with excitation wavelengths of 351–364, 488, and 635 nm and images the fluorescence emission simultaneously on three independent water-cooled CCD line cameras (blue, green, and red). The InCell 3000 is equipped with a fixed Nikon 40×, 0.6 NA ELWD objective, which allows a large field of view (0.75 × 0.75 mm) that at 0.6 μm pixelation provides 1280 × 1280 pixels and 100–900 cells/frame depending upon the seeding density utilized, and is capable of 10 μm resolution. It has a near-IR fiber-coupled laser tracking autofocus that is very fast, with a time to focus of between 100 and 150 msec for up to 40 μm and with z-position focus errors of <0.2 μm in glass plates and around 0.5 μm in plastic plates. The InCell 3000 has two peristaltic pumps for pipetting and an environmental chamber to control temperature, CO2, and relative humidity, which provides kinetic live-well imaging capability. The InCell 3000 system is capable of high resolution in X, Y, and Z. The InCell 3000 acquires and saves images that can be analyzed on the fly, or postacquisition, using the proprietary "Raven" software from GE Healthcare and the appropriate image analysis modules to produce feature sets appropriate to the assay being run. The InCell 3000 imaging platform has been designed to provide high-throughput image acquisition and analysis capability (14–16). The multiple excitation lasers and CCD cameras of the InCell 3000, when combined with the appropriate selection of fluorescent probes and sample cell density, enable the user to utilize short exposure times together with simultaneous parallel acquisition of multiple fluorescent channels from a single field of view to achieve fast scanning times of 10 minutes for a 384-well plate. 1.1. Image Analysis and Algorithm
The nuclear trafficking analysis module is dependent upon cells labeled with nuclear dyes such as Hoechst 33342, DAPI, DRAQ5, or any other dye that fluorescently stains cellular DNA. The nuclear dye fluorescent signal is used to identify the nuclear region and to define a nuclear mask overlay based on thresholding of object size and intensity of fluorescent probe inside the object (Fig. 8.2A). The nuclear mask overlay is eroded by a user-defined pixel threshold to reduce cytoplasmic contamination within the nuclear area (Fig. 8.2B), and the final reduced mask overlay can be used to quantify the amount of fluorescence inside the nuclear area in any selected channel. The nuclear mask overlay is then dilated by
a user-defined pixel threshold to cover as much of the cytoplasmic region as possible without going outside the boundary of the cell. When the original nuclear region is removed from this dilated overlay mask, this creates a ring mask that covers the cytoplasmic region outside the previously defined nuclear area (Fig. 8.2C). The nuclear trafficking analysis module calculates the intensity ratio between the nucleus and cytoplasmic regions in the target channel. For the MK2–EGFP translocation assay, the intensity of the EGFP fluorescence is measured in the eroded nuclear mask area and divided by the EGFP intensity in the cytoplasmic ring area (Fig. 8.2D) to give a Nuc:Cyt ratio that is calculated on a per cell basis, which may also be reported as a well-averaged value. In the MK2–EGFP translocation assay, a "no translocation" readout is a
result of a high Nuc:Cyt ratio, while a "positive translocation" readout is a result of a low Nuc:Cyt ratio. There are several data output features generated by the Raven software, which are summarized in Table 8.1.
Fig. 8.2. Nuclear trafficking analysis module. (A) Upper left: In images acquired on the InCell 3000, cells labeled with Hoechst 33342 nuclear dye were used to identify the nuclear region and to define a nuclear mask in white, based on thresholding, size filtering, and fluorescence intensity criteria. (B) Upper right: The mask is eroded to reduce cytoplasmic contamination within the nuclear area and the final reduced mask is used to quantify the amount of target channel fluorescence within the nucleus. (C) Lower left: The nuclear mask is dilated to cover as much of the cytoplasmic region as possible without going outside the cell boundary. Removal of the original nuclear region from this dilated mask creates a ring mask that covers the cytoplasmic region outside the nuclear envelope. (D) Lower right: The nuclear trafficking analysis module calculates the ratio between the nucleus and cytoplasmic intensities in the target channel. For the MK2–EGFP translocation assay, the intensity of the EGFP fluorescence is measured in the eroded nuclear mask area and divided by the EGFP intensity in the cytoplasmic ring area to give a Nuc:Cyt ratio that is calculated on a per cell basis, which may also be reported as a well-averaged value.
Table 8.1 Description of the data analysis output parameters (features) reported by the plate image algorithm module
Plate – Plate name.
Cycle – Number of passes at this plate; always zero for this MAPKAP project.
Well – Well number, varies from A1 to P24.
Msg – Errors such as dry well or focus error; 0 if no problems.
NPasNc – Number of nuclei found passing both intensity and size filters.
NPasSg – Number of cells found which pass the above filters and the cytoplasm intensity filter (see signal sampling threshold below), set at 100 counts for the original screen data.
NPasAq – Number of cells passing both of the above filters and for which the fraction of pixels is above the threshold shown in the secondary analysis parameters; a significant difference between NPasSg and NPasAq is evidence of a toxic response.
Nuc/Cyt – Ratio of the intensity in the eroded nuclear mask to the dilated cytoplasm ring, both measured in the signal channel (green).
Std Dev – Cell-by-cell standard deviation of the above.
Nuc Intsty – Average intensity over the well in the eroded nuclear mask, measured in the signal channel (green).
Cyt Intsty – The well-averaged intensity of the cytoplasm sample rings.
Aqlity – A secondary analysis parameter: a threshold for cytoplasm intensity is chosen (see below) and a cell-by-cell calculation is made of the fraction of pixels above this threshold; the fraction of cells with their ring area above the quality threshold, expressed as a percentage, is Aqlity.
Mrk Mode – The most probable intensity in the nuclear marker channel; a convenient measure of the background intensity in the marker channel (blue for the MAPKAP screen).
Sig Mode – The most probable intensity in the signal channel; a convenient measure of the background intensity in the signal channel (green for the MAPKAP screen).
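For readers who want to reproduce the masking logic offline, the sketch below mimics the erode/dilate/ring procedure of the nuclear trafficking module described in Section 1.1 using SciPy. It is an illustrative approximation under simple assumptions (a single-nucleus binary mask and NumPy image arrays), not the Raven implementation.

import numpy as np
from scipy import ndimage

def nuc_cyt_ratio(signal_img, nuclear_mask, erode_px=2, dilate_px=6):
    """Approximate the Nuc:Cyt ratio for one labeled nucleus.

    signal_img   -- 2D array of the signal channel (e.g., MK2-EGFP, green)
    nuclear_mask -- boolean 2D array marking one nucleus (from the marker channel)
    """
    nuclear_mask = np.asarray(nuclear_mask, dtype=bool)
    # Erode the nuclear mask to avoid cytoplasmic contamination at the edge.
    eroded = ndimage.binary_erosion(nuclear_mask, iterations=erode_px)
    # Dilate the original mask, then remove the nuclear region to build the
    # cytoplasmic ring just outside the nuclear envelope.
    dilated = ndimage.binary_dilation(nuclear_mask, iterations=dilate_px)
    ring = dilated & ~nuclear_mask
    nuc_intensity = signal_img[eroded].mean()
    cyt_intensity = signal_img[ring].mean()
    return nuc_intensity / cyt_intensity

# High ratios indicate nuclear MK2-EGFP (no translocation); low ratios indicate
# redistribution to the cytoplasm (positive translocation), as described above.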
1.2. MK2–EGFP Translocation Assay Development
Microscopic observation shows that the vast majority of MK2–EGFP fluorescence in the A4 HeLa cell clone appears localized within the nuclear boundaries, with relatively little signal apparent in the cytoplasm of the cell (Fig. 8.3A). However, when the cells are treated with an activator of the p38–MK2 pathway such as anisomycin or TNF-α, MK2–EGFP redistributes from the nucleus into the cell's cytoplasm (Fig. 8.3B). In cells treated with a known inhibitor of the p38 pathway such as SB203580 in conjunction with anisomycin, the stimulation fails to induce redistribution of MK2–EGFP from the nucleus to the cytoplasm (Fig. 8.3C). Inhibition of MK2–EGFP translocation by SB203580 is equally effective whether cells are pretreated with the inhibitor or the inhibitor is added simultaneously with an activator of the p38 MAPK pathway. By using the InCell 3000 Raven software nuclear trafficking analysis module, an anisomycin concentration response for translocation was derived from the calculated ratio of MK2–EGFP fluorescence signal in the nucleus to that in the cytoplasm. Similarly, when the HeLa A4 cells were treated with the p38 inhibitor SB203580 and subsequently stimulated with anisomycin, an inhibition curve of MK2–EGFP translocation from the nucleus to the cytoplasm was derived using the InCell 3000 Raven software nuclear trafficking analysis module (Fig. 8.4).
Fig. 8.3. Images of MK2–EGFP translocation. (A) Left: Unstimulated control cells displaying predominantly MK2–EGFP nuclear localization. (B) Middle: Cells stimulated with 100 nM anisomycin for 25 minutes display translocation of MK2–EGFP from the nucleus to the cytoplasm. (C) Right: Cells simultaneously treated with 10 μM SB203580 and 200 nM anisomycin for 40 minutes; the p38 inhibitor blocks MK2–EGFP translocation to retain MK2–EGFP predominantly in the nucleus, similar to images from untreated media controls.
The assay was originally developed and validated in 96-well plastic plate format, then transferred to 384-well glass plate format to accommodate the high-end optics, confocality, and robustness of the InCell 3000 analyzer (12). We followed HTS guidelines for the development and validation of this assay, including end points such as cell seeding density, time course of activation and inhibition responses, 3-day EC50 of the activator and 3-day IC50 of the inhibitor, DMSO tolerance, and whole-plate Z-factors, all of which were optimized before progressing to screening compounds.
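Because whole-plate Z-factors are one of the validation end points listed above, a brief sketch of the standard Z′-factor calculation from maximum- and minimum-signal control wells may be useful; it is a generic illustration rather than part of the original validation package.

import numpy as np

def z_prime(max_controls, min_controls):
    """Standard Z'-factor from positive (max) and negative (min) control wells:
    Z' = 1 - 3*(SD_max + SD_min) / |mean_max - mean_min|."""
    max_controls = np.asarray(max_controls, dtype=float)
    min_controls = np.asarray(min_controls, dtype=float)
    spread = 3.0 * (max_controls.std(ddof=1) + min_controls.std(ddof=1))
    window = abs(max_controls.mean() - min_controls.mean())
    return 1.0 - spread / window

# For the MK2-EGFP assay, the "max" wells could be anisomycin-stimulated controls
# and the "min" wells anisomycin plus a saturating p38 inhibitor (or vice versa,
# since translocation lowers the Nuc:Cyt ratio).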
Fig. 8.4. MK2–EGFP activation and inhibition curves in 384-well plates. A total of 2.5 × 10³ HeLa–MK2–EGFP A4 cells were seeded in 384-well Matrical glass plates in EMEM + 10% FBS and incubated overnight at 37°C and 5% CO2. For activation of the response, HeLa A4 cells were treated with the indicated concentrations of anisomycin for 40 minutes, fixed in 3.7% formaldehyde + 2 μg/ml Hoechst dye, and fluorescent images were collected on the InCell 3000. For inhibition of the response, the indicated doses of SB203580 were added simultaneously with the coaddition of 200 nM anisomycin (final) and plates were incubated for 40 minutes. Plates were fixed in 3.7% formaldehyde + 2 μg/ml Hoechst dye and fluorescent images were collected on the InCell 3000. The nuclear trafficking analysis module was used to analyze the images captured on the InCell 3000 and quantify the Nuc:Cyt ratio response for anisomycin stimulation and SB203580 inhibition of anisomycin-induced MK2–EGFP translocation.
The optimal cell seeding density in 384-well Matrical plates was determined by testing cell densities of 2.5 × 10³, 5 × 10³, and 10 × 10³ cells per well. Cells were incubated overnight, then acutely treated with either media alone or 100 nM anisomycin for 40 minutes, or were pretreated with 1 μM of the p38 inhibitor SB203580 for 15 minutes followed by a 40-minute treatment with 100 nM anisomycin. The translocation of MK2–EGFP was then quantified using the InCell 3000 analyzer with the nuclear trafficking analysis module. MK2–EGFP translocation was adequately measured at all three seeding densities (Fig. 8.5). Although all three cell seeding densities showed a "screenable" delta (max to min signal window), a seeding density of 2.5 × 10³ cells/well was chosen for all further assay development for the best segmentation algorithm fit of cell objects and for reduction of the cell culture burden required in screening operations.
Fig. 8.5. Cell seeding density. The indicated numbers of HeLa–MK2–EGFP–A4 cells were seeded into each of the 384 wells of Matrical glass plates in EMEM + 10% FBS and incubated overnight at 37°C and 5% CO2. Cells were treated with ±200 nM anisomycin for 25 minutes, fixed in 3.7% formaldehyde + 2 μg/ml Hoechst 33342 dye, fluorescent images were acquired on the InCell 3000, and the nuclear trafficking analysis module was used to quantify the Nuc:Cyt ratio translocation response.
Kinetic time course experiments to measure activation and inhibition of MK2–EGFP translocation were conducted to select the "ideal time" for an automated screening assay. In the activation time course, in which cells were either left untreated or stimulated with 100 nM anisomycin, cells showed an abrupt MK2–EGFP cytoplasmic translocation within 10 minutes, with a peak response recorded after 20 minutes that remained unchanged for more than 40 minutes. After 60 minutes and up to 120 minutes there appears to be a slight increase in the MK2–EGFP Nuc:Cyt ratio, suggesting a modest redistribution of MK2–EGFP back into the nucleus, although the Nuc:Cyt ratio never reestablishes pretreatment measurements (Fig. 8.6A). For compound screening logistics, a 40-minute incubation time for anisomycin activation was selected for further assay development. In the inhibition time course experiment, cells were preincubated with SB203580 for the indicated times before anisomycin addition, or SB203580 was added simultaneously with anisomycin. The effects on MK2–EGFP translocation were then quantified on the InCell 3000 by measuring the Nuc:Cyt ratio. Simultaneously adding SB203580 and anisomycin was as effective at blocking the MK2–EGFP Nuc:Cyt translocation event as preincubating with SB203580 prior to the anisomycin treatment (Fig. 8.6B). We elected to run the screen by simultaneously adding compounds and anisomycin to save time in screening operations.
Fig. 8.6. Time course experiments. (A) Stimulation time course: 2.5 × 10³ HeLa–MK2–EGFP–A4 cells were seeded into each of the 384 wells of Matrical glass plates in EMEM + 10% FBS and incubated overnight at 37°C and 5% CO2. Cells were treated with ±200 nM anisomycin for the indicated times, fixed in 3.7% formaldehyde + 2 μg/ml Hoechst dye, fluorescent images were collected on the InCell 3000, and the nuclear trafficking analysis module was used to quantify the Nuc:Cyt ratio translocation response. (B) Inhibition time course: 2.5 × 10³ HeLa–MK2–EGFP–A4 cells were seeded into each of the 384 wells of Matrical glass plates in EMEM + 10% FBS and incubated overnight at 37°C and 5% CO2. The p38 inhibitor SB203580 was preincubated with the cells for the indicated times prior to the addition of 200 nM anisomycin. Plates were incubated for 40 minutes, fixed in 3.7% formaldehyde + 2 μg/ml Hoechst dye, fluorescent images were collected on the InCell 3000, and the nuclear trafficking analysis module was used to quantify the Nuc:Cyt ratio translocation response.
Since the compound library was dissolved in dimethyl sulfoxide (DMSO) for screening, we tested the amount of DMSO that cells can tolerate before the assay window is significantly altered. MK2–EGFP–A4 cells were treated with the indicated amounts of DMSO with and without 100 nM anisomycin for 40 minutes at 37°C, 5% CO2, and 95% relative humidity. DMSO concentrations up to 0.625% did not affect the MK2–EGFP Nuc:Cyt translocation ratio (Fig. 8.7A). In contrast, DMSO concentrations greater than 0.625% altered the MK2–EGFP Nuc:Cyt ratio, as evident not only from the numerical data but also from altered cell morphology, including nuclear swelling or shrinkage at very high DMSO concentrations, which resulted in a nuclear mask that is not proportional to the cytoplasmic mask as observed in normal untreated cells (Fig. 8.7B). To further validate the MK2–EGFP translocation assay performance using the InCell 3000 imager and the Raven software nuclear trafficking analysis module, we ran activation and inhibition concentration response experiments to determine the EC50 for anisomycin and the IC50 for SB203580 in the presence of anisomycin. About 2.5 × 10³ cells/well were seeded in 384-well glass-bottom Matrical plates overnight. Cells were treated with the indicated concentrations of anisomycin, or with SB203580 plus a twofold higher concentration of anisomycin (200 nM), for 40 minutes at 37°C, 5% CO2, and 95% relative humidity. Using GraphPad Prism (San
Diego, CA) curve-fitting and analysis software, we obtained an EC50 for anisomycin stimulation of 28.45 nM and an IC50 for SB203580 of 112 nM; both fits had R² values greater than 0.95. It is clearly evident that the InCell 3000 can robustly measure changes in the Nuc:Cyt ratio in the HeLa–MK2–EGFP–A4 cell model under these assay conditions.
Fig. 8.7. DMSO tolerance. (A) A total of 2.5 × 10³ HeLa–MK2–EGFP–A4 cells were seeded in 384-well Matrical glass plates in EMEM + 10% FBS and incubated overnight at 37°C and 5% CO2. Cells were treated with ±200 nM anisomycin containing the indicated concentrations of DMSO for 40 minutes, fixed in 3.7% formaldehyde + 2 μg/ml Hoechst dye, and fluorescent images were collected on the InCell 3000. The nuclear trafficking analysis module was used to analyze the images captured on the InCell 3000 and quantify the translocation response. (B) Images from untreated media control cells and 100 nM anisomycin-treated cells in the presence or the absence of 5% DMSO.
2. Materials

1. HeLa–MK2–EGFP–A4 stable clone cell line (see Note 1).
2. Cell culture maintenance medium: EMEM (Invitrogen/Gibco, Carlsbad, CA) supplemented with 10% fetal bovine serum (FBS) (Hyclone, Logan, UT), 2 mM L-glutamine (Sigma-Aldrich, St. Louis, MO), 10 mM HEPES (Biowhittaker, Walkersville, MD), and 800 µg/ml G418 (Sigma-Aldrich, St. Louis, MO).
3. Trypsin-versene (Biowhittaker, Walkersville, MD).
4. Dulbecco's phosphate-buffered saline (DPBS) (Biowhittaker, Walkersville, MD).
5. Dimethyl sulfoxide (DMSO) (JT Baker, Phillipsburg, NJ).
6. 384-well black, clear glass-bottom plates (Matrical, Spokane, WA).
7. 37% formaldehyde (Sigma-Aldrich, St. Louis, MO).
8. Hoechst 33342 (Invitrogen/Molecular Probes, Eugene, OR).
9. Microplate seals (Perkin-Elmer, Boston, MA).
10. MAPK stimulus – anisomycin (Sigma-Aldrich, St. Louis, MO).
11. p38 inhibitor SB203580 (Calbiochem, San Diego, CA).
12. p38 inhibitor RWJ68354 (synthesized in-house).
13. JNK inhibitor SP600125 (Calbiochem, San Diego, CA).
14. LOPAC collection (Sigma-Aldrich, St. Louis, MO).
15. Alexa-carboxylic acid (Invitrogen/Molecular Probes, Eugene, OR).
16. Oregon Green 488 (Invitrogen/Molecular Probes, Eugene, OR).
2.1. Working Solution
1. Cell plating medium: EMEM, 10 mM HEPES, 0.5% FBS, 800 µg/ml G418.
2. Fixation solution: 3.7% formaldehyde in PBS containing 2 µg/ml Hoechst 33342.
3. Flat-field dye solution: combine Alexa-carboxylic acid or Alexa-succinimidyl ester with Oregon Green 488 in one or more wells of a microwell plate. Seal the top of the plate and protect from light.
3. Methods

3.1. Cell Plating Procedure
1. HeLa–MK2–EGFP–A4 cells (below passage 20) in EMEM supplemented with 10% FBS, 2 mM L-glutamine, 10 mM HEPES, and 800 µg/ml G418 are grown to 70–80% confluence in tissue culture flasks at 37°C, 5% CO2, and 95% relative humidity.
2. Detach the cell monolayer from the tissue culture flasks using a trypsin-versene rinse. Resuspend the cells in complete media, count the cells, and adjust the cell density to 6.25 × 10⁴ cells/ml. If cells are clumpy, which can depend on the method of detachment, filter the cells through a 70 µm strainer that fits on top of a 50-ml conical tube (BD Falcon).
3. Plate 2500 cells/well (40 µl) using the Multidrop (Thermo Electron, Boston, MA) or another liquid dispenser into Matrical 384-well glass-bottom plates (see Note 2). Allow the plates to sit at room temperature on a flat surface for approximately 20 minutes before placing them in the incubator, to minimize edge effects created by temperature fluctuations (see Note 3). Incubate overnight at 37°C, 5% CO2, and 95% relative humidity. To
reduce edge effects and promote a uniform cell monolayer, prewet the bottoms of the Matrical 384-well glass plates with cell plating media prior to plating cells (see Note 4).

3.2. Assay Development for Stimulation Dose–Response
1. Prepare anisomycin at 4× the final concentration (100 nM final) in cell plating media. Warm to 37°C in the incubator. Add 20 µl of media, DMSO, or compound. Incubate for 15 minutes at 37°C. At the desired time intervals, add 20 µl of prewarmed 4× anisomycin to the wells (see Note 5).
2. Remove the media and fix the cells by adding 20 µl of prewarmed formaldehyde (3.7% final) in PBS containing 2 µg/ml Hoechst 33342. Incubate at room temperature for 10–15 minutes. Remove the formaldehyde and replace with 100 µl of PBS (see Note 6). It is important to follow safety procedures when working with formaldehyde fixation: always prepare stock solutions in a fume hood and follow your institution's guidelines for use and proper disposal.
3. Cover the tops of the wells with plate seals (see Note 7).
4. Analyze on the InCell 3000 (see Section 3.4).
3.3. Automated Screening Protocol for MK2–EGFP Translocation
1. Prepare compounds and plate controls in 384-well plastic plates. Transfer 20 µl of prewarmed (37°C) plate controls and/or test compounds using the Beckman-Coulter MultiMek device or equivalent. Incubate plates at 37°C, 5% CO2, and 95% relative humidity for approximately 40 minutes (see Note 8).
2. In a fume hood, immediately add 20 µl/well of prewarmed formaldehyde and Hoechst 33342 fix solution using the Multidrop device. Incubate for 10–15 minutes at room temperature (see Note 9).
3. Remove the fixation solution and wash the cells twice, leaving the last wash of 100 µl of PBS in the wells, using the MultiMek device or equivalent.
4. Cover the tops of the plates with plate seals.
5. Analyze the plates on the InCell 3000 platform (see Section 3.4).
3.4. Instrument Setup
Although we used the GE InCell 3000 confocal instrument in this study, any high-content imaging platform capable of two-color acquisition with adequate imaging resolution could be used, such as the BD Pathway, Cellomics ArrayScan, Evotec Opera, InCell 1000, MDC ImageXpress, or Yokogawa instruments, as well as PMT/laser-based units such as the Acumen Explorer and Blueshift IsoCyte.
1. Turn on the InCell 3000 instrument and allow it to warm up. Launch the Raven software and choose the nuclear trafficking algorithms in the online mode. Adjust the Enterprise-II
488-nm argon laser (Coherent, Santa Clara, CA) to 50% of full power (90 mW), collect the EGFP emission through the 535/45-nm bandpass filter set, and capture it on the independent green CCD camera (see Note 11).
2. Adjust the multiline UV (MLUV) argon Enterprise laser, producing 351–364 nm light, to approximately 10% of full power (5 mW). Hoechst 33342 emission is collected through the 450/65-nm emission filter and detected on the independent blue CCD camera.
3. Scan images sequentially, the 488-nm line first and then the MLUV line, to reduce photobleaching of the fluors. Bin the image capture 2× to 280 pixels per line and adjust the laser lines (488-nm and MLUV) to an appropriate exposure time (usually less than 1–2 s) to capture an optimal image without saturating the CCD camera chip with bright "hot-spot" pixels. At 640 × 640 pixels the resolution is about 1.2 µm per pixel. Binning at 2× will increase the throughput speed of plates on the InCell 3000.
4. In the online mode, set up the collection stops at a minimum of 100 cells/well or a maximum of 2 frames/well, whichever comes first (see Note 12). Double-check the instrument parameters and make any additional adjustments.
5. Prepare the flat-field solution for calibrating the unevenness of fluorescence in wells. Place the calibration solution in the same type of plates used for cell plating. In the plate setup window in Raven, be sure that the correct flat-field correction wells are selected. The flat-field solution is used as a calibration procedure to correct non-uniform fluorescence intensities measured across the captured image of the well.
6. Begin acquiring images on the InCell 3000 controlled by the Raven software, following the procedures outlined in the user's manual.
7. Quantify the captured images using the nuclear trafficking analysis module and determine whether the data in the population table are acceptable when compared and correlated with the observed images (see Note 10). Make any necessary algorithm adjustments and reanalyze if appropriate.

3.5. Assay Validation
To determine whether the assay can be used reliably in screening operations over a period of days or weeks, it is necessary to perform day-to-day experiments demonstrating the reproducibility and validity of the assay. We have outlined steps to run a DMSO tolerance test, a 3-day validation of the EC50 for stimulation, a 3-day validation of the IC50 for inhibition in the presence of stimulus, and a Z-factor determination to assess the signal window relative to the variation in the measured signal.
3.6. Guidelines for DMSO Tolerance Procedure
Use the maximum tolerable amount of DMSO possible without irreversibly affecting assay performance (see Note 13). Test DMSO starting from a maximum of 8% final in a twofold, 8-point dilution series to determine whether and when assay performance is altered with increasing DMSO concentrations (see Fig. 8.7).
1. Plate cells (40 µl/well) in multiwell plates as previously described.
2. Add 20 µl (4× final concentration) of DMSO to wells containing 40 µl of cells and media.
3. Add either 20 µl of media or 20 µl of anisomycin stimulus to induce MK2 signaling.
4. Stop the reaction by fixation as previously described.
5. Analyze on the InCell 3000 using the Nuc:Cyt translocation module algorithm to determine the redistribution of the MK2–EGFP protein between the cytoplasm and the nucleus.
3.7. MK2–EGFP Translocation Assay Reproducibility and Signal Window Evaluation
Run three independent experimental EC50 dose–response assays on different days to assess the reproducibility of the MK2–EGFP translocation response in HeLa–A4 cells after 40 minutes of stimulation with the indicated concentrations of anisomycin (Fig. 8.8A). Run three independent IC50 dose–response assays using known inhibitors of the p38 pathway and at least one selective off-target compound to validate the assay model. The p38 inhibitor compounds SB203580 and RWJ68354 produced average IC50 values of 101 and 84 nM, respectively, across three different runs (Fig. 8.8B,C). A selective inhibitor of the JNK pathway, SP600125, showed no evidence of inhibiting MK2–EGFP translocation in the presence of anisomycin (Fig. 8.8D).
3.8. Procedure for 3-Day Reproducibility of Anisomycin EC50 Dose–Response
1. Plate cells as described in the cell plating procedure. Allow cells to attach overnight at 37°C.
2. Add 20 µl of media to the wells. This mimics the compound addition and corrects the volume for the addition of stimulus. Use an automated liquid handling device such as a MultiMek or equivalent.
3. Prepare anisomycin at 4× the final concentration, starting at 1 µM final (i.e., a 4 µM working solution). Dilute twofold in media for at least 10 points (a calculation sketch follows this list).
4. Add 20 µl of anisomycin or other stimuli (see Note 14) to the wells using an automated liquid handling device such as a MultiMek or equivalent.
5. Incubate at 37°C, 5% CO2 for 40 minutes.
6. Fix cells with formaldehyde as previously described.
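The same serial dilution arithmetic applies to the DMSO tolerance series (Section 3.6) and the threefold inhibitor series (Section 3.9). A minimal sketch, assuming the 4× working-stock scheme and the 40 + 20 + 20 µl addition volumes described in these procedures (function and variable names are our own, and the example doses are illustrative):

```python
# Hedged sketch: compute working (4x) and final (in-well) concentrations for a
# serial dilution series, matching the 4x-stock addition scheme used throughout
# Sections 3.6-3.9. Values are examples, not prescribed plate maps.
def dilution_series(top_final, fold, n_points, stock_multiplier=4):
    """Return (working_conc, final_conc) pairs from the top dose downward."""
    series = []
    final = float(top_final)
    for _ in range(n_points):
        series.append((final * stock_multiplier, final))
        final /= fold
    return series

# Anisomycin EC50 series: 1 uM (1000 nM) final top dose, twofold, 10 points
for working, final in dilution_series(top_final=1000, fold=2, n_points=10):  # nM
    print(f"working {working:8.1f} nM -> final {final:7.2f} nM")

# Inhibitor IC50 series: 10 uM final top dose, threefold, 10 points (Section 3.9)
inhibitor_series = dilution_series(top_final=10, fold=3, n_points=10)  # uM
```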
Fig. 8.8. Three-day activation and inhibition curves. Three-day EC50 curves: (A) A total of 2.5 × 10³ HeLa–MK2–EGFP–A4 cells were seeded in 384-well Matrical glass plates in EMEM + 10% FBS and incubated overnight at 37°C and 5% CO2. For activation of the response, cells were treated with the indicated doses of anisomycin for 40 minutes, fixed in 3.7% formaldehyde + 2 µg/ml Hoechst dye, and fluorescent images were collected on the InCell 3000. Data are presented from three independent experiments, each performed in triplicate wells and run on separate days. Three-day IC50 curves: A total of 2.5 × 10³ HeLa–MK2–EGFP–A4 cells were seeded in 384-well Matrical glass plates in EMEM + 10% FBS and incubated overnight at 37°C and 5% CO2. For inhibition of the response, the indicated doses of (B) SB203580, (C) RWJ68354, or (D) SP600125 were added simultaneously with 200 nM anisomycin (final) and plates were incubated for 40 minutes. Plates were fixed in 3.7% formaldehyde + 2 µg/ml Hoechst dye, and fluorescent images were collected on the InCell 3000. Data are presented from three independent experiments, each performed in triplicate wells and run on separate days. The nuclear trafficking analysis module was used to analyze the images captured on the InCell 3000 and quantify the Nuc:Cyt ratios of anisomycin-stimulated MK2–EGFP translocation and the compound-mediated inhibition of anisomycin-induced MK2–EGFP translocation.
3.9. Procedure for 3-Day Reproducibility of Inhibitors of the p38 Pathway
1. Plate cells as described in the cell plating procedure.
2. Prepare inhibitor compounds at 4× the final concentration, starting at 10 µM final and diluting threefold in media for a 10-point IC50 dilution series. Prewarm to 37°C.
   a. SB203580.
   b. RWJ68354, another inhibitor of the p38 pathway.
   c. SP600125, used as a negative control; it is well documented (17, 18) to inhibit the JNK pathway and should not alter the translocation of MK2.
3. Prepare a 200 nM final concentration of anisomycin. Make a 4× working concentration (800 nM) and prewarm to 37°C.
4. Add 20 µl of inhibitors to the wells using an automated liquid handling device such as a MultiMek or equivalent.
5. Add 20 µl of anisomycin to the wells.
6. Incubate at 37°C, 5% CO2 for 40 minutes.
7. Fix cells with formaldehyde as previously described.
8. Analyze plates on the InCell 3000.
3.10. Z-Factor Determination Procedure
To measure the robustness and variability of the automated assay signal window for the MK2–EGFP translocation assay in the HeLa–A4 cell line, we determined the assay Z-factor (19). HeLa–MK2–EGFP–A4 cells were treated with media to determine the minimum baseline response, with 200 nM anisomycin to determine the maximum of the assay signal window, and with the SB203580 inhibitor in the presence of 200 nM anisomycin to determine a reference range for p38 inhibitor compounds (Fig. 8.9). The assay signal window of MK2–EGFP translocation was approximately 2.16-fold, based on an average Nuc:Cyt ratio of 2.374 for the media control and an average anisomycin-stimulated ratio of 1.1. A Z-factor of 0.46 indicated that the assay was compatible with HTS (a calculation sketch follows this list).
1. Plate cells as described in the cell plating procedure. Allow cells to attach overnight at 37°C.
2. Prepare three separate 384-well plates to contain (1) media, (2) 200 nM anisomycin, and (3) a p38 inhibitor reference compound plus 200 nM anisomycin. Prewarm the plates to 37°C.
3. Add 20 µl of media, 200 nM anisomycin, or reference compound with 200 nM anisomycin to the cell plate.
4. Incubate at 37°C for 40 minutes.
5. Fix the cell plate with formaldehyde as previously described.
6. Analyze on the InCell 3000.
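A minimal sketch of the Z-factor calculation, following Zhang et al. (19); the per-well control values below are simulated placeholders, not the screen data.

```python
# Hedged sketch: Z-factor from per-well Nuc:Cyt ratios of the plate controls,
# following Zhang et al. (19). The control arrays are simulated placeholders.
import numpy as np

def z_factor(ctrl_a, ctrl_b):
    """Z = 1 - 3*(sd_a + sd_b) / |mean_a - mean_b| for the two control populations."""
    ctrl_a = np.asarray(ctrl_a, dtype=float)
    ctrl_b = np.asarray(ctrl_b, dtype=float)
    spread = 3.0 * (ctrl_a.std(ddof=1) + ctrl_b.std(ddof=1))
    window = abs(ctrl_a.mean() - ctrl_b.mean())
    return 1.0 - spread / window

# Placeholder control wells: media-only (~2.37) vs. anisomycin-treated (~1.1)
rng = np.random.default_rng(0)
media_wells = rng.normal(2.37, 0.15, size=384)
anisomycin_wells = rng.normal(1.10, 0.08, size=384)

print(f"Z-factor      = {z_factor(media_wells, anisomycin_wells):.2f}")
print(f"Signal window = {media_wells.mean() / anisomycin_wells.mean():.2f}-fold")
```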
Fig. 8.9. Assay signal window and variability assessment. A total of 2.5 × 10³ HeLa–MK2–EGFP–A4 cells were seeded in 384-well Matrical glass plates in EMEM + 10% FBS and incubated overnight at 37°C and 5% CO2. Two full 384-well plates each were treated for 40 minutes under the following conditions: media alone (blue squares), 200 nM anisomycin (red triangles), and the p38 inhibitor SB203580 + 200 nM anisomycin (green circles). Plates were fixed in 3.7% formaldehyde + 2 µg/ml Hoechst dye and fluorescent images were collected on the InCell 3000. The nuclear trafficking analysis module was used to analyze the images captured on the InCell 3000 and quantify the anisomycin induction and/or SB203580 inhibition of the translocation response. The Z-factor was calculated according to the method of Zhang et al. (19). The Nuc:Cyt ratios from all the wells on the six 384-well plates are presented in (A) a scatter plot or (B) a plate heat map view.
3.11. LOPAC Screening MK2–EGFP Translocation HTS/HCS Assay for p38 Inhibitors

3.11.1. Procedure to Test LOPAC and Compound Library
Prior to initiating the kinase-focused library screening campaign, test the MK2–EGFP assay model by screening the Library of Pharmacologically Active Compounds (LOPAC) cassette (Sigma-RBI, St. Louis, MO) in two separate runs.
1. Plate cells as previously described.
2. Prepare compound plates using automated liquid handling robotics. Make a working compound solution at 4× the final concentration of 50 µM (i.e., 200 µM) by diluting the compound stocks (10 mM in DMSO) into media, giving a final DMSO concentration in the assay of less than 0.5%.
3. Using liquid handling automation, add compounds from the LOPAC library cassette in single-well determinations. For the 4× concentration, transfer 20 µl of compound to the cell plate.
4. Fix the cell plate with formaldehyde as previously described.
5. Analyze on the InCell 3000.
6. Use an activity threshold of 50% inhibition as the active criterion to identify compounds that modified the MK2–EGFP Nuc:Cyt translocation ratio. If available, use data analysis and visualization software to review an overlay of the entire LOPAC screening run and to cluster compounds with similar MK2 Nuc:Cyt translocation ratios.
7. If possible, run the LOPAC compound library on a separate day to confirm reproducibility and correlate the runs. For "hits" or "actives," run an IC50 dose–response curve to confirm activity.
8. Once satisfied with assay performance, begin screening compound libraries following the approach used in the LOPAC screen.
We identified at least nine active compounds in the single-point determination screen with MK2 Nuc:Cyt translocation ratios >1.8, approaching the 2.4 ratio of the assay signal maximum plate controls (Fig. 8.10). There was good correlation between the two independent LOPAC screens, with eight active
Fig. 8.10. MK2–EGFP translocation assay LOPAC activity assessment. Scatter plot overlay of three 384-well plates from the LOPAC screen plotted on a single graph. The x-axis represents the well position on the 384-well plate and the y-axis represents the Nuc:Cyt ratio translocation response. The diamonds indicate active compounds or "hits" above the 50% inhibition threshold in the MK2–EGFP assay; the black squares represent the maximum and minimum plate controls from the three plates; and the black open circles represent inactive compounds below the 50% inhibition threshold.
Fig. 8.11. MK2–EGFP translocation assay – correlation between two independent LOPAC screening runs. Correlation plot of 854 compounds run in two different LOPAC screening runs. The x-axis represents screening run #1 and the y-axis represents screening run #2.
compounds (0.91%) in screening run 1 and six active compounds (0.69%) in screening run 2 (Fig. 8.11). The active compounds were followed up in IC50 concentration–response assays, which confirmed the six active compounds identified in both LOPAC runs. Interestingly, the six confirmed active compounds were from five different pharmacological classes and disease indications, including the cognitive enhancer tacrine hydrochloride, the antipsychotic thioproperazine dimesylate, the vasodilator dilazep dihydrochloride, the antidepressant doxepin hydrochloride, and the antihypertensive compounds protoveratrine A and hexamethonium dibromide (Fig. 8.12). Upon review of the images from the wells of the six confirmed active compounds, it was apparent that five of the compounds significantly affected cell adherence or induced cytotoxicity at 50 µM, and one compound significantly increased the nuclear fluorescent signal, similar to nuclear intercalators such as Hoechst or propidium iodide. Additionally, the Raven software provided tools that allowed us to rapidly visualize wells with active compounds and assess the performance of the plate controls, providing a quick QC review of the screening data prior to uploading into the internal database (Fig. 8.13). In a primary screen of 32,891 compounds, 110 384-well assay plates were processed over a 5-day period. An average of one frame or field of view per well was acquired by the InCell 3000 HCS imager, which required approximately 47.5 hours to scan and analyze all plates, with an average scan time of 25.84 minutes per 384-well plate or 4.02 seconds/well. Using 50% inhibition as a threshold criterion for activity, the majority of the compounds
Fig. 8.12. Active compounds from the LOPAC screening runs. Confirmation of active compounds from two independent LOPAC screening runs. Tacrine hydrochloride showed evidence of nuclear fluorescence localization at 50 µM. Some of the compounds shown were active, but the image data showed evidence that compounds tested at 50 µM were either cytotoxic and/or affected cell adherence.
exhibited no activity in the primary screen, as shown in the results frequency distribution (Fig. 8.14A). However, 474 compounds (1.44% of the library) produced ≥50% inhibition of the anisomycin-stimulated MK2–EGFP translocation in HeLa–A4 cells (Fig. 8.14A). Only 270 compounds were available for follow-up IC50 assays, and these compounds were tested in a five-point, threefold concentration response starting at a maximum of 50 µM. One hundred and sixty-three of the compounds (60.37%) were confirmed active with IC50 < 50 µM in follow-up assays that required only 1 day of screening operations to perform, with 3.57 hours of scanning time on the InCell 3000. One hundred and fifty-six (95.71%) of the compounds confirmed in the 5-point IC50 assays were subsequently confirmed in 10-point, threefold
Fig. 8.13. Reviewing the LOPAC screen using Raven software. The heat map (left) and scatter plot (right) visualizations demonstrate the GUI interfaces in the Raven software for displaying data from a 384-well plate. Columns 1, 2, 23, and 24 represent control wells. With two exceptions, all the compound wells in the heat map (left side) are green, with a Nuc:Cyt ratio of approximately 1.2, indicating that the compounds failed to inhibit translocation. There are two active compounds above the 50% inhibition threshold that are brighter in the heat map and outlined with circles in the scatter plot (right).
Fig. 8.14. MK2–EGFP translocation assay screen performance data from the 32 K biased kinase library. (A) Primary screen data. All the calculated results (percentage inhibition) for the 32,891 compounds in the screen were exported to Spotfire® for visualization in a results frequency distribution plot. The median percentage inhibition was –4, and the mean percentage inhibition was –3.19 ± 16.26. The active criterion was set at 50% inhibition of anisomycin-induced MK2–EGFP translocation. (B) Five-point IC50 data. All the calculated data (Nuc:Cyt ratios) for compound wells from the five-point IC50 run were exported to Spotfire® for visualization in a results frequency distribution plot. (C) Ten-point IC50 data. All the calculated data (Nuc:Cyt ratios) for compound wells from the 10-point IC50 run were exported to Spotfire® for visualization in a results frequency distribution plot.
dilution series IC50 assays, starting at a maximum concentration of 50 µM (Table 8.2). The 10-point IC50 assays were completed in 1 day of screening operations, with an average of 1.03 frames or fields of view per well and a total scan time of 4.01 hours, or 26.8 minutes per 384-well plate (a mean of 4.08 seconds per well). As expected, both the 5-point and the 10-point IC50 data exhibited distinct bimodal histogram distributions of activity, with one population representing wells that were inhibited by compounds at higher concentrations and the other population representing wells receiving compound concentrations that did not inhibit anisomycin-induced MK2–EGFP translocation (Fig. 8.14B and C).
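The primary-screen activity call (percentage inhibition relative to the on-plate controls, with a 50% threshold) can be reproduced from the exported Nuc:Cyt ratios. The sketch below assumes a per-well results table with our own column names; it illustrates the normalization but is not the screening group's actual analysis code.

```python
# Hedged sketch: per-plate normalization of Nuc:Cyt ratios to percentage
# inhibition of the anisomycin response, followed by the 50% hit threshold.
# Column names ('plate', 'well_type', 'nuc_cyt', ...) are assumed, not the
# actual export format of the Raven software.
import pandas as pd

def percent_inhibition(df):
    """0% = full anisomycin response (low ratio), 100% = media-control level."""
    out = []
    for plate, grp in df.groupby("plate"):
        max_ctrl = grp.loc[grp.well_type == "media", "nuc_cyt"].mean()
        min_ctrl = grp.loc[grp.well_type == "anisomycin", "nuc_cyt"].mean()
        cpd = grp[grp.well_type == "compound"].copy()
        cpd["pct_inhibition"] = 100.0 * (cpd.nuc_cyt - min_ctrl) / (max_ctrl - min_ctrl)
        out.append(cpd)
    return pd.concat(out, ignore_index=True)

# Example usage on a hypothetical exported results file
results = percent_inhibition(pd.read_csv("mk2_primary_screen.csv"))
hits = results[results.pct_inhibition >= 50.0]
print(f"{len(hits)} wells met the >=50% inhibition criterion "
      f"({100.0 * len(hits) / len(results):.2f}% of compound wells)")
```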
Table 8.2
Summary of active hits from the primary screen and IC50 follow-ups

                                      Number      % of total
MK2–EGFP InCell 3000 rapid MTS:
  Number screened                     32,891      100
  >50% inhibition                     474         1.44
5-point IC50s:
  Number tested                       270         100
  Number confirmed                    163         60.37
10-point IC50s:
  Number tested                       163         100
  Number confirmed                    156         95.71
The InCell 3000 Raven software, like many HCS imaging software packages, provides a means for post hoc analysis of potential artifacts such as noise, debris, fluorescent compound interference, and cell loss from suspected adherence issues and/or cytotoxicity resulting from morphological alterations in the cells. The Raven software provides a method to assess how the captured images correlate with the numerical data derived from them for the output parameters shown in Table 8.1, such as the Nuc:Cyt ratio, cell number, and the NPasNC parameter, which indicates the number of nuclei that passed the intensity and size filters. Since the InCell 3000 was set up to capture a minimum of 100 cells/well or a maximum of two frames/well, whichever came first, compounds that generated data from wells with fewer than 100 cell objects were flagged as either cytotoxic or affecting cell adherence. We found the NPasNC parameter to be useful in identifying cytotoxicity or cell adherence issues in the IC50 dose–response data sets at higher compound concentrations, such as 50 and 16.7 µM. After reviewing the images, we confirmed that compounds with an abnormal NPasNC parameter showed a reduction in cell number as a result of dose-dependent loss of cell adherence or cytotoxicity (Fig. 8.15A). There are two distinct populations in the scatter plot of the mean cytoplasmic intensity per well versus the average nuclear intensity per well. The plate control well data from anisomycin-treated cells and from untreated or media-only wells are clearly separated into the two populations. Neither control population exhibited cytoplasmic intensities above a threshold of 2000 and/or nuclear intensities above a threshold of 3000. Although the majority of compound-treated wells were also within these defined threshold ranges, there were a considerable number of compounds that
Fig. 8.15. Nuclear trafficking module secondary analysis for cytotoxicity and fluorescence. The nuclear trafficking analysis module provides data on a number of parameters that can be used to identify potential interference due to compound fluorescence or off-target compound effects such as cytotoxicity or disruption of cell adherence (Table 8.1). These data were exported from the Raven software to Spotfire® and visualized in a variety of scatter plots: (A) the NPasNC parameter was plotted for the different plate controls versus compound concentration to assess cytotoxicity or a reduction in cell adherence; (B) a scatter plot of the average cytoplasmic intensity/well versus average nuclear intensity/well in the target channel (EGFP) was used to identify fluorescent compounds; (C) representative images from compound wells exhibiting dose-dependent cytotoxicity or reduction in cell adherence; (D) representative images from compound wells exhibiting dose-dependent fluorescence.
exceeded the cytoplasmic intensity threshold of 2000 and/or the nuclear intensity threshold of 3000. After reviewing the images, we confirmed that at high concentrations (50 or 16.7 µM) the majority of these compounds were either fluorescent and/or affected the MK2–EGFP fluorescence signal (Fig. 8.15B). Only 13 (8.33%) of the 156 active compounds from the 10-point IC50 dose–response curves showed NPasNC data with fewer than 100 cells/well, which may be a result of reduced cell adherence and/or cytotoxicity. However, a higher number (25, 16%) of the 10-point IC50 dose–response compounds were considered fluorescent on the basis of a cytoplasmic intensity output parameter above the 2000 threshold and/or a nuclear intensity above the 3000 threshold (Table 8.3).
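A minimal sketch of this post hoc flagging logic, using the NPasNC cell-count cutoff and the intensity thresholds described above; the column names and file path are assumptions about the exported nuclear trafficking output, not the actual Raven export format.

```python
# Hedged sketch: flag wells for possible cytotoxicity/adherence loss
# (NPasNC < 100 cells) or compound fluorescence (cytoplasmic intensity > 2000
# or nuclear intensity > 3000), mirroring the thresholds described above.
import pandas as pd

MIN_CELLS = 100
CYTO_INTENSITY_MAX = 2000
NUC_INTENSITY_MAX = 3000

def flag_artifacts(wells):
    wells = wells.copy()
    wells["flag_cytotox"] = wells["NPasNC"] < MIN_CELLS
    wells["flag_fluorescent"] = (
        (wells["cyto_intensity"] > CYTO_INTENSITY_MAX)
        | (wells["nuc_intensity"] > NUC_INTENSITY_MAX)
    )
    wells["flag_any"] = wells.flag_cytotox | wells.flag_fluorescent
    return wells

# Example usage on a hypothetical exported follow-up IC50 results file
flagged = flag_artifacts(pd.read_csv("mk2_ic50_followup.csv"))
print(flagged.groupby("compound_id")[["flag_cytotox", "flag_fluorescent"]].any())
```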
Table 8.3
Summary of follow-up active hit compounds, with secondary image analysis parameters measured to determine false positives from high levels of fluorescence in the nucleus and/or cytoplasm, and the number of cytotoxic or cytotoxic-like compounds

Secondary analysis parameter                Number      %
Total number tested                         163
IC50 > 50 µM                                7           4.29
IC50 < 50 µM                                156         95.71
Cytox                                       13          8.33
CYT INT                                     8           5.13
NUC INT                                     14          8.97
CYT + NUC INT                               3           1.92
IC50 < 50 µM (after secondary analysis)     118         75.64
In summary, there were 59 (36%) confirmed inhibitors of the p38 MAPK pathway with IC50 < 5 µM and 31 (19%) with IC50 < 1 µM (Table 8.4). At least one new structural class of p38 MAPK inhibitor identified in the screen was confirmed in additional secondary hit characterization assays.
Table 8.4
Breakdown of confirmed compound IC50 values

                         Number of compounds      %
Number tested            163                      100
Number confirmed         156                      95.71
IC50 < 1.0 µM            31                       19.02
IC50 1–10 µM             73                       44.79
IC50 10–50 µM            52                       31.90
IC50 > 50 µM             7                        4.29
4. Notes

1. HeLa–MK2–EGFP–A4 is a stable cell line derived from wild-type HeLa cells retrovirally infected with MAPKAP-K2 DNA fused with EGFP at the 5′ end. Single-cell clones were deposited by flow cytometry sorting, and clone A4 was selected based on
the expression level and the homogeneity of EGFP fluorescence intensity in the cell clone. It is critical to confirm expression of the stably transfected cell line after several passages in culture. For this cell line we did not observe substantial loss of expression, since the cell line is kept under selective pressure with 800 µg/ml G418 to maintain stability. A detailed description of the generation of the HeLa–MK2–EGFP–A4 stable clone cell line can be found in ref. (12). If you are unable to create a genetically modified cell line, BioImage (Thermo Fisher Scientific) offers the MK2–EGFP U2OS cell line for screening.
2. Please note that at the time this research was conducted, we washed the 384-well Matrical glass-bottom plates with isopropanol to remove a residue that was suspected of causing cytotoxicity. Matrical now offers residue-free glass-bottom plates; however, keep this in mind if cells do not behave the same way as they do on plastic. Coating plates with poly-D-lysine or other extracellular matrix proteins may be beneficial.
3. It is important to use a surface that is as level as possible so that cells do not pile up on one side of the well. This trick of allowing cells to settle at room temperature works well in all microwell plates that we tested. As soon as cells begin to adhere to the plastic or extracellular matrix, the plates can be moved to the incubator.
4. By prewetting plates with small-volume wells, such as 384- and 1536-well plates, we have found that cells are more uniformly distributed throughout the well compared with cells plated directly onto a dry surface. Very low volumes of liquid cell culture medium or buffer are needed to accomplish this, i.e., 10 µl/well for 384-well plates. It is also important to optimize the number of cells plated per well for each assay and cell line used. Although not discussed in this chapter, we previously optimized the cell number to 2500 cells/well using calculated anisomycin EC50 dose–responses after extensive investigation of plating densities of 1250, 2500, 5000, 7500, and 10,000 cells/well.
5. Alternatively, add the stimulus at the same time and fix selected wells at the desired time intervals; however, we found this method to be more cumbersome.
6. Fixing cells expressing fluorescent proteins such as GFP and GFP mutants decreases the fluorescence intensity. Unpublished work suggests that lowering the formaldehyde or paraformaldehyde concentration is beneficial, and alcohol-based fixation tends to be worse than formaldehyde-based fixation.
7. Plate seals can be problematic in some plate handling robotics systems if there are hanging tabs. This can result in plates sticking to one another in plate stackers and causing problems.
8. For screening purposes we found that co-addition of the agonist stimulus and the inhibitor compound was effective. Alternatively, in many experiments we pretreated cells with the inhibitor compound for 15 minutes at 37°C before adding the anisomycin stimulus.
9. For "live cell" experiments, Hoechst dye is recommended to label the nucleus so that the algorithm module can identify objects. Cells may pump Hoechst out of the nucleus, so a higher concentration may be required. For fixed cells, other nuclear dyes such as DAPI work well in the blue channel; DRAQ5 is an alternative for both live and fixed cells in the red channel. See the Molecular Probes web site for other fluorescent nuclear stain choices.
10. Measuring and defining the appropriate mask overlay and segmentation can be complex. For optimal measurement of translocation and best fit, compare known controls with untreated media. Make the necessary adjustments based on the image, not the output values. Once set, recheck the recorded measurements of the cell population. At the time of the study there were three nuclear trafficking algorithms; we used the nuclear trafficking 2 algorithm, which provided additional information allowing us to sort cell populations based on fluorescence intensity in the nucleus, the cytoplasm, or both. Nuc:Cyt is the primary output feature to record and is used to optimize the algorithm. Additional secondary output parameters are also very useful in helping to identify unusual morphology or changes in the fluorescence distribution. "NPasNC" is the number of nuclei passing both the intensity and size filters; "NPasSg" is the number of cells passing the above filters plus the cytoplasm intensity filter (see the signal sampling threshold below), set at 100 counts for the original screen data; and "NPasAq" is the number of cells passing both of the above filters and for which the fraction of pixels is above the threshold shown in the secondary analysis parameters. A significant difference between NPasSg and NPasAq is evidence of a toxic response.
11. There are three independent water-cooled CCD cameras on the InCell 3000 Analyzer: red, green, and blue. The red camera was not used in this study.
12. Be sure to collect enough valid cellular objects to be statistically significant. Use "50/500" as a rule of thumb: 50 objects is the minimum, and 500 is more than enough for statistical significance if the assay window is at least twofold with a CV of less than 8%. During the screening of libraries you will encounter a number of false-positive compounds; many of these are "toxic" to the assay. If the cell number is low, the number of "fields" output parameter is useful in the
decision process. For example, if the count is set to 100 objects or 2 fields, whichever comes first, then for healthy cells one field is typically all it takes; if the number of fields exceeds 1, there is likely an issue with cellular toxicity or cell adherence, or the cells were plated non-uniformly.
13. In most cell-based assays with short incubation times, 0.5% DMSO is commonly used at high compound concentrations. For longer incubation times, 0.1% DMSO may be appropriate. Although uncommon, it may be necessary to use 1% or higher DMSO concentrations in some assays. There may be times when compound solubility is an issue that requires a higher DMSO concentration in the assay; it is therefore critical to know the limitations of the cell model. As long as you include an internal DMSO control at the higher desired level, it is acceptable to compare the compound treatment wells to the appropriate DMSO control. However, keep in mind the original DMSO tolerance curve, which indicates when the assay begins to change, and report this information. Loss of cell adherence is the biggest side effect of high DMSO concentrations and can have a major impact on the assay.
14. Alternative stimuli include proinflammatory cytokines such as TNF-α and IL-1β. It is important to test all stimuli relevant to the target and biology under investigation. Be sure to measure the kinetic time course for each stimulus independently.
Acknowledgments

We want to thank Tim Harris, Jennifer I. Colonell, and William J. Karsh, formerly of GE Healthcare and Amersham Biosciences, for their technical contributions to the work on the InCell 3000.

References
1. Cowan, K. J. and Storey, K. B. (2003). Mitogen-activated protein kinases: new signaling pathways functioning in cellular responses to environmental stress. J. Exp. Biol. 206, 1107–1115.
2. Garrington, T. P. and Johnson, G. L. (1999). Organization and regulation of mitogen-activated protein kinase signaling pathways. Curr. Opin. Cell Biol. 11, 211–218. Also Refs. (14, 15).
3. English, J. M. and Cobb, M. H. (2002). Pharmacological inhibitors of MAPK pathways. Trends Pharmacol. Sci. 23, 40–45.
4. Johnston, P. A. and Johnston, P. A. (2002). Cellular platforms for HTS: three case studies. Drug Discov. Today 7, 353–363.
5. Ono, K. and Han, J. (2000). The p38 signal transduction pathway, activation and function. Cell. Signal. 12, 1–13.
6. Noble, M. E. M., Endicott, J. A., and Johnson, L. N. (2004). Protein kinase inhibitors: insights into drug design and structure. Science 303, 1800–1805.
7. Regan, J., Breitfelder, S., Cirillo, P., Gilmore, T., Graham, A. G., Hickey, E., Klaus, B., Madwed, J., Moriak, M., Moss, N., Pargellis, C., Pav, S., Proto, A., Swinamer, A., Tong, L., and Torcellini, C. (2002). Pyrazole urea-based inhibitors of p38 MAP kinase: from lead compound to clinical candidate. J. Med. Chem. 45, 2994–3008.
8. Fabbro, D., Ruetz, S., Buchdunger, E., Cowan-Jacob, S. W., Fendrich, G., Liebetanz, J., Mestan, J., O'Reilly, T., Traxler, P., Chaudhuri, B., Fretz, H., Zimmermann, J., Meyer, T., Caravatti, G., Furet, P., and Manley, P. W. (2002). Protein kinases as targets for anticancer agents: from inhibitors to useful drugs. Pharmacol. Ther. 93, 79–98.
9. Zu, Y. L., Ai, Y., and Huang, C. K. (1995). Characterization of an autoinhibitory domain in human mitogen-activated protein kinase-activated protein kinase 2. J. Biol. Chem. 270, 202–206.
10. Engel, K., Kotlyarov, A., and Gaestel, M. (1998). Leptomycin B-sensitive nuclear export of MAPKAP kinase 2 is regulated by phosphorylation. EMBO J. 17, 3363–3371.
11. Neininger, A., Thielemann, H., and Gaestel, M. (2001). FRET-based detection of different conformations of MK2. EMBO Rep. 2, 703–708.
12. Williams, R. G., Kandasamy, R., Nickischer, D., Trask, O. J., Jr., Laethem, C., Johnston, P. A., and Johnston, P. A. (2006). Generation and characterization of a stable MK2–EGFP cell line and subsequent development of a high-content imaging assay on the Cellomics ArrayScan platform to screen for p38 mitogen-activated protein kinase inhibitors. Methods Enzymol. 414, 364–388.
13. Trask, O. J., Jr., Baker, A., Williams, R. G., Nickischer, D., Kandasamy, R., Laethem, C., Johnston, P. A., and Johnston, P. A. (2006). Assay development and case history of a 32 K biased library high-content MK2–EGFP translocation screen to identify p38 MAPK inhibitors on the ArrayScan 3.1 imaging platform. Methods Enzymol. 414, 419–439.
14. Almholt, D. L., Loechel, F., Nielsen, S. J., Krog-Jensen, C., Terry, R., Bjorn, S. P., Pedersen, H. C., Praestegaard, M., Moller, S., Heide, M., Pagliaro, L., Mason, A. J., Butcher, S., and Dahl, S. W. (2004). Nuclear export inhibitors and kinase inhibitors identified using a MAPK-activated protein kinase 2 redistribution screen. Assay Drug Dev. Technol. 2, 7–20.
15. Lundholt, B. K., Linde, V., Loechel, F., Pedersen, H. C., Moller, S., Praestegaard, M., Mikkelsen, I., Scudder, K., Bjorn, S. P., Heide, M., Arkhammar, P. O., Terry, R., and Nielsen, S. J. (2005). Identification of Akt pathway inhibitors using redistribution screening on the FLIPR and the InCell 3000 analyzer. J. Biomol. Screen. 10, 20–29.
16. Oakley, R. H., Hudson, C. C., Cruickshank, R. D., Meyers, D. M., Payne, R. E., Jr., Rhem, S. M., and Loomis, C. R. (2002). The cellular distribution of fluorescently labeled arrestins provides a robust, sensitive, and universal assay for screening G protein-coupled receptors. Assay Drug Dev. Technol. 1, 21–30.
17. Bennett, B. L., Sasaki, D. T., Murray, B. W., O'Leary, E. C., Sakata, S. T., Xu, W., Leisten, J. C., Motiwala, A., Pierce, S., Satoh, Y., Bhagwat, S. S., Manning, A. M., and Anderson, D. W. (2001). SP600125, an anthrapyrazolone inhibitor of Jun N-terminal kinase. Proc. Natl. Acad. Sci. USA 98, 13681–13686.
18. Han, Z., Boyle, D. L., Chang, L., Bennett, B., Karin, M., Yang, L., Manning, A. M., and Firestein, G. S. (2001). c-Jun N-terminal kinase is required for metalloproteinase expression and joint destruction in inflammatory arthritis. J. Clin. Invest. 108, 73–81.
19. Zhang, J. H., Chung, T. D., and Oldenburg, K. R. (1999). A simple statistical parameter for use in evaluation and validation of high throughput screening assays. J. Biomol. Screen. 4, 67–73.
Chapter 9

Recent Advances in Electrophysiology-Based Screening Technology and the Impact upon Ion Channel Discovery Research

Andrew Southan and Gary Clark

Abstract

Ion channels are recognised as an increasingly tractable class of targets for the discovery and development of new drugs, with a diverse range of ion channel proteins now implicated across a wide variety of disease states and potential therapeutic applications. Whilst the field now ranks among the most dynamic areas of drug discovery research, ion channels have historically been regarded by many researchers as a class of proteins associated with numerous technical challenges. Recent advances in our understanding of molecular biology and the increasing acceptance of electrophysiology-based screening methodology mean that ion channels are rapidly progressing towards universal acceptance as worthy and approachable targets for drug discovery. This chapter will outline the commercially available electrophysiology-based screening technologies and give an overview of the range of options for progressing pharmaceutical research and development against this important target class.

Keywords: Ion channel drug discovery, High-throughput electrophysiology, Planar patch-clamp.
1. Introduction

Ion channels are an extremely diverse family of proteins. Located in virtually all cell and tissue types, they make essential contributions towards a wide variety of physiological processes and fundamental homeostatic functions. Without ion channels, the heart would fail to contract or maintain pacing, skeletal muscle would not function, gastrointestinal motility would cease, the immune system would be severely compromised and neurones would fall silent. Ion channels are also implicated in a wide variety of disorders such as epilepsy, cystic fibrosis and cardiac arrhythmias
making them attractive as targets for therapeutic intervention (1). At the cellular level, ion channels play an essentially simple role, functioning as selective pores controlling the distribution of ions (e.g. Na+, K+, Ca2+, Cl–) across the impermeable lipid bilayer of the cell membrane (2). Acting together with ion pump enzymes, such as the Na+/K+-ATPase, ion channels separate ionic charges across the cell membrane, creating both an ionic concentration gradient and an electrical potential difference. Passive movement of ions down this electrochemical gradient is facilitated when ion channels open and, according to cell location and tissue type, modification of the electrical potential difference may result in the release of neurotransmitters, firing of neuronal action potentials, muscular contraction, secretion from glandular tissue or activation of an immune response. One particularly important feature of ion channel proteins that must always be taken into consideration when designing screening assays is that they are dynamic structures, able to respond to subtle modifications of the environment surrounding them. Such a modification may induce a conformational change, either allowing ions to pass through a central pore or preventing their movement via pore occlusion. Stimuli that can induce a conformational change include a change in the potential gradient across the cell membrane, binding of an extracellular ligand or an intracellular messenger, mechanical deformation, a change in temperature or a change in proton concentration. Triggering may also be multi-modal, one example being the transient receptor potential (TRP) family (3), where more than one stimulus may be integrated to elicit a functional response. Although they present unique challenges as drug discovery targets, it is now clearly evident that ion channels have considerable potential to be exploited across a wide variety of therapeutic areas (1, 4–7).
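For orientation, the equilibrium potential set up by such a concentration gradient for a single permeant ion is given by the Nernst equation; this is a standard textbook relation added here for reference rather than taken from the chapter:

```latex
% Nernst equilibrium potential for an ion of valence z (standard relation, not from this chapter)
\[
  E_{\mathrm{ion}} \;=\; \frac{RT}{zF}\,\ln\!\left(\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}\right)
\]
% Example: for K+ at 37 degrees C with [K+]_out = 5 mM and [K+]_in = 140 mM,
% E_K is approximately -89 mV, close to a typical resting membrane potential.
```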
2. Ion Channels as Drug Targets

Therapeutic intervention via modification of ion channel activity is not just an academic concept, since there are well-established prescription drugs on the market that act via ion channel modulation (7). These include the local anaesthetic sodium channel blocker lidocaine (introduced 1948), the sedative chloride channel modulator diazepam (introduced 1963) and the antihypertensive calcium channel inhibitor verapamil (introduced 1982). The majority of such drugs were first marketed between the late 1940s and the early 1980s as a result of discovery research in which emphasis was placed upon small arrays of compounds screened for in vivo activity, rather than a focused effort to find specific
modulators of a particular channel family or subtype. In the more recent era of high-throughput screening-based drug discovery, where large chemical libraries are generally screened against individual protein targets stably expressed in cell lines, research directed towards the ion channel class of proteins has been highlighted as an undertaking with significant potential pitfalls. The close homology between channel subtypes can mean that finding specific modulators for a particular subtype implicated in a disease state is very challenging using the target-driven HTS techniques that have become the industry-standard approach to discovery research since the 1980s. A further complication is that ion channels do not function as a simple on–off switch; they may pass through a number of conformations or 'states' during the opening or closing process, changing the availability of potential compound binding sites with each transition. For example, in the case of voltage-gated ion channel blocking compounds, the interaction with the channel may be highly dependent upon the electrical field across the cell membrane (voltage-dependence), it may require the channel to open before the compound is able to bind (open channel block), it may involve an accumulation of block through an imbalance between compound binding and unbinding rate constants (use-dependence), or it may occur when the channel is in an inactivated conformation (inactivated state block) (2). Direct clinical evidence that binding to a specific conformation can be advantageous for therapeutic intervention is available if we examine the literature on the iminodibenzyl antiepileptic carbamazepine (Tegretol®), which has been in clinical use since the 1960s. Although not the product of a rational design process targeting a particular state of the channel, this compound reduces abnormal action potential discharge patterns and the resultant seizure episodes primarily by binding to and stabilising the inactivated configuration of sodium channels (8). The early dihydropyridine calcium channel blockers such as nifedipine, through to the more recent 'blockbuster' antihypertensive amlodipine (Norvasc®), also preferentially bind to inactivated channels (9). The knowledge that targeting particular conformational or functional states can be advantageous means that ion channel drug discovery researchers should consider the pharmacological background, the biophysical profile and the likely functional role of a given channel in the disease state before compiling a discovery research strategy. Bringing this knowledge together should facilitate the design of a screen biased towards uncovering a particular, and hopefully the most relevant, type of compound interaction with the channel. This could also be advantageous where actual subtype selectivity may be particularly difficult to achieve, yet where mechanistic selectivity may display an acceptable therapeutic window; for example, identifying use-dependent sodium channel blockers to target rapidly firing sensory neurones for analgesia.
3. Screening for New Drugs

Whilst targeting specific mechanisms of compound interaction is a rational strategy, designing screening methodology for compounds that interact in a specific manner can prove to be a significant challenge. There are no major obstacles to producing evidence regarding interactions of compounds with ion channels when using conventional electrophysiology, a technique enabling very accurate resolution of ion channel activity from individual cells (see below for an overview of the methodology). The disadvantages associated with this approach are the time taken to produce the data, the dependence upon highly skilled personnel and the fact that generally only one compound can be examined at any one time. Scaling up to high-throughput screening methods does not necessarily provide a solution. Although such assays enable many more compounds to be examined over a given time period, each assay format involves some element of compromise and may not be the most appropriate starting point for an ion channel research programme. Consider radioligand binding, an assay technique developed in the 1960s: an appropriate radioactive tracer probe directed towards a single binding site must be prepared, and the screen provides results regarding binding only to that site, or to one allosterically coupled to it, with no temporal resolution or accurate voltage control. Hit compounds then require further detailed follow-up using conventional electrophysiology to confirm functional activity. The slow membrane potential-sensitive fluorescence assays, generally based on negatively charged oxonol dyes (10), have relatively good voltage sensitivity and are amenable to scale-up to the 1536-well format; however, their suboptimal temporal resolution (>1 s) and the potential for data to be influenced by fluorescence artefacts can reduce confidence in the results generated. They are also unsuitable for fast-inactivating sodium channel assays unless an agent such as veratridine is introduced into the assay buffer to artificially prolong channel open time and extend the depolarisation (11). In this screening format the channel is far removed from its physiological state, and the effect veratridine may have on compound or channel interactions is unknown. Again, there is no accurate control over cell membrane potential. A number of calcium-sensitive dyes have been developed since they were introduced in the 1980s (12) and, although they are very useful tools for HTS assay design, they still lack voltage control, their temporal resolution is limited and the readout is secondary to the actual permeation process. Another dye-based approach to resolving membrane potential change is fluorescence resonance energy transfer (FRET). This methodology approaches the timescale required for accurate assessment of ion channel function, with
resolution of potential changes on a millisecond timescale. Researchers have also coupled this method with electrical stimulation to measure sodium channel activity under more physiological conditions (13); however, precise regulation of cellular membrane potential is still not available. Both non-radiometric and radiometric efflux assays have the advantage of assessing compound activity via quantification of actual ion permeation through the channel pore. Control of cell membrane potential and temporal resolution are both poor, however, and this functional assessment is limited to ion channel targets with appropriate tracer ions, such as Li+ for sodium channels and Rb+ for potassium channels (14). It is the lack of control over cellular membrane potential that most significantly compromises data obtained using the traditional HTS assays described above for ion channel target screening. Additionally, the assay readout generally occurs over a timescale bearing little relevance to the millisecond timescale required to accurately resolve ion channel activity. It would be unreasonable to suggest that the fluorescence- and efflux-based assay platforms are entirely unsuited to the identification of ion channel modulators, though, since they can identify compounds interacting with ion channels, and relatively robust screening assays may be configured to examine large numbers of compounds in a short time period. Fluorescence-based assays combined with electrical stimulation have been shown to facilitate the identification of use-dependent blockers (13) or inactivated state blockers (15, 16). However, the lack of accurate voltage control during these assays and the inherent compromises of the assay format mean they are still some distance away from the ideal situation, and useful compounds may be missed when screening in this type of format. Recent technological advances in electrophysiology-based screening technology have initiated a new era for the field of ion channel drug discovery research. Voltage control, adequate temporal resolution and the ability to configure complex recording protocols are now available in electrophysiology-based screening formats with improved throughputs of up to 384-well mode. Although still in the relatively early stages of development, these technologies have made a significant impact over a short time period and are set to revolutionise approaches to screening against this challenging target class.
4. Electrophysiology-Based Screening
To fully comprehend the importance and impact of recent technological advances applied to ion channel research, it is important to have some understanding of the most accurate method used for
the study of ion channels, conventional patch-clamp electrophysiology. Pioneered in the early 1980s (17), this versatile method enables precise recording of the current flowing through ion channels contained within the cell membrane, accurately resolving current flow through ion channel pores down to the level of a few picoamperes (pA). The method facilitates detailed pharmacological and biophysical characterisation and has been the driving force behind much of our understanding of ion channel function. Whilst immensely powerful, the technique is restricted by the need for complex recording equipment and a highly skilled operator, and it is extremely manually intensive, resulting in throughput typically in the region of 20 data points, or one EC50 or IC50 measurement, per day. The most significant limitation of this technique is the requirement for a glass microelectrode to be manipulated onto the surface of the target cell to enable formation of a high-resistance seal between the electrode glass and the cell membrane (seal resistance of 10⁹ Ω or greater). This tight connection, usually referred to as a 'gigaseal', is required to reduce background noise to a level where, under the right conditions, amplification can produce recordings with sufficient fidelity to resolve the opening of single ion channels. The requirement for significant levels of skilled manipulation and adjustment by the operator during an experiment has limited the scalability and application of this technique. A number of companies recognised this problem and set to work in the late 1990s to design systems aimed at overcoming some of the limitations of the conventional method. Sophion Biosciences launched the Apatchi-1™ automated patch-clamp system in 2001 (18), which used highly sophisticated optics and robotics to mimic the actions performed by a human operator. Originating from a collaborative effort between NeuroSearch and Pfizer, the system was technically inspired but extremely complex and lacked the consistency or throughput required to give a significant advantage over a skilled electrophysiologist using the conventional technique. Around the same time a company called CeNeS Pharmaceuticals in Cambridge, UK, was developing an automated system called the Autopatch™ (or Interface Patch), based on a single inverted glass microelectrode 'blind' patch-clamping cells suspended in a droplet of bathing solution (19). Again, an ingenious approach; but the performance of the equipment was inconsistent, throughput for the first-generation machine offered little if any advantage over the conventional technique and, although some pharmaceutical companies evaluated and purchased the system, it was not universally adopted. The technology has since been adopted and refined by Xention Pharmaceuticals, but it is unlikely that the system will become commercially available in the near future. The early automated electrophysiology platforms served to provide evidence that glass microelectrode-based technologies were unsuitable for scale-up to provide electrophysiology-based
screening with throughput to compete with 96- or 384-well screening formats. Clearly a new approach was required, and this came in the form of planar substrate electrophysiology, a technique that initiated a revolution both in throughput and in the contribution that patch-clamp electrophysiology could make to the drug discovery process. This technique replaced the glass microelectrode used in conventional electrophysiology with a flat substrate containing a small hole of around 1–2 µm in diameter (Fig. 9.1). Although the actual substrate material varies between individual manufacturers, the recording technique remains fundamentally the same. The important advantage is that a suspension of cells is added to the recording well and formation of a high-resistance seal between a cell and the substrate surrounding a hole is performed automatically by the machine controlling the suction force applied behind the planar substrate. Since attraction of cells towards the hole on the substrate is placed under software control, there is no need for complex manipulation equipment and, crucially, user intervention. By removing the once laborious and highly skilled process of seal formation, the planar substrate
Fig. 9.1. Schematic diagram of a PPC planar substrate; 8 of the 64 available holes are visible in the figure. (A) Cells are introduced into the recording well at a defined density (0.2–1 million cells/ml) and suction is applied from below the planar substrate. (B) The suction attracts cells towards the holes and leads to the formation of a seal between the cell and the planar surface. Electrical access is then gained via perfusion of a pore-forming antibiotic from below the PatchPlate®. (C) Control recordings are taken by moving the headstage electrode into the recording well. (D) Following addition of solution containing test compound, a second recording allows evaluation of compound activity.
approach allows all processes following preparation of the cell suspension to be automated. This is the critical advantage and, combined with equipment configuration and automation enabling multiple experiments to take place in parallel, lifts potential daily throughput by a highly significant margin. IonWorks® HT, a planar substrate patch-clamp electrophysiology system developed by Essen Instruments, considerably increased the potential throughput for electrophysiology-based assays (20). Molecular Devices subsequently secured the worldwide rights to this system and the first commercially available systems were adopted in 2002. The machine is based on a planar substrate, called a PatchPlate®, comprising 384 individual wells each containing a small central hole. By placing the PatchPlate® over a specialised plenum chamber, application of suction under the plate enables the formation of a seal between individual cells and the substrate. A pore-forming agent, such as the antibiotic amphotericin-B, perfused beneath the PatchPlate® facilitates access and electrical control of the cell interior (perforated patch-clamp technique). Once electrical continuity is achieved, a 48-channel recording head automatically reads the plate over eight sets of 48 wells recorded in parallel. For most cell lines, seal resistance values observed with IonWorks® recordings typically fall below the gigaohm level attained using conventional electrophysiology recording, and with no inbuilt compensation for series resistance or capacitance, recording fidelity is lower than that obtained with conventional electrophysiology recordings. However, with no requirement for user intervention other than dissociation and loading of cells into the machine, 100–300 successful compound applications are possible in a recording protocol lasting around an hour. When the machine was launched, many researchers recognised that the slight compromise in voltage control and recording fidelity was more than compensated for by the increased throughput afforded by this machine compared to other electrophysiology-based systems. Although the test protocol voltage steps may deviate slightly from the programmed values according to the quality of recordings in individual wells, the machine enables a far greater degree of control over the cell membrane potential and assay conditions than any other screening platform with a similar throughput. With appropriate voltage commands the machine can quickly establish information regarding the mechanism of compound block during evaluation of compounds with a level of accuracy that was not possible with previous screening technology. The main drawback of the first-generation IonWorks® is that the success rate of acceptable recordings often falls somewhere between 50 and 80% across the PatchPlate®. To compensate for this, four replicates of each drug concentration are typically applied to allow for failed wells. Although a great step forward, with compound supplies, consumables costs and speed
being important commodities for the pharmaceutical industry, it became clear that a more efficient means of evaluating compounds would be required.
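As a rough illustration of why quadruplicate additions were adopted, the probability that at least a given number of the four replicate wells yield an acceptable recording follows directly from the per-well success rate quoted above (50–80%). This is a minimal sketch with assumed rates, not data from the chapter:

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """Probability that at least k of n independent wells give an acceptable recording."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.5, 0.65, 0.8):  # per-well success rates spanning the 50-80% range
    print(f"p = {p:.2f}: P(>=1 of 4) = {p_at_least(1, 4, p):.3f}, "
          f"P(>=2 of 4) = {p_at_least(2, 4, p):.3f}")
```

Even at a 50% per-well success rate, four replicates give a better than 90% chance of at least one usable data point per concentration, which is the practical justification for the replicate strategy.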
5. Population Patch-Clamp
Quadruplicate compound additions for IonWorks® experiments became unnecessary in 2005 when Molecular Devices launched IonWorks® Quattro™. Rather than a completely redesigned instrument, the IonWorks® Quattro™ can be considered to be an evolution of the original IonWorks® HT platform, with changes to the amplifiers and software to enable two distinct recording modes. The single-hole PatchPlate® consumable associated with the IonWorks® HT is still compatible, but a new configuration termed 'population patch-clamp' (PPC) greatly increases recording reliability (21). This mode utilises a new planar substrate that contains 64 individual holes per well, and individual well recordings are composed of the average signal across the 64 holes. There are two main requirements for successful PPC mode recording: first, all 64 holes in a well need to be 'sealed' either by intact cells or by cell debris to give an average resistance across the well ideally of around 50 MΩ or greater (below the 30 MΩ level, data quality becomes unacceptably compromised due to the low seal resistance). The second requirement is that, when signals from all holes in the well are averaged, a sufficient number of intact cells covering the holes express current to a level that offsets the lack of signal from 'non-expressing' cells or cell debris. With cell lines having appropriate expression levels, PPC mode recording success rates can approach 100% acceptable wells across each screening plate. The impact of this enhanced success rate has meant that electrophysiology-based primary screens against ion channel targets are now becoming more commonplace and acceptable to the pharmaceutical industry (Fig. 9.2). Whilst more expensive in terms of consumable costs, such screens can provide adequate levels of throughput with substantially higher quality data output than has been possible using fluorescence, binding or efflux methodology. Data are still slightly compromised by the quality of seal between the cells and the planar substrate; but with consistency of recordings, improved throughput and the range of recording metrics available for analysis, the machine significantly outperforms the other more traditional approaches to voltage-gated ion channel screens. IonWorks® Quattro™ can be utilised for assays ranging between examining a few tens of compounds for IC50 potency and rank order profiling, up to larger scale primary screening campaigns
Fig. 9.2. Electrophysiology-based screening is now possible for a wide range of voltage-gated or 'leak' ion channel targets. IonWorks® Quattro™ screening assays configured in the BioFocus DPI research laboratories include (A) cystic fibrosis transmembrane conductance regulator (CFTR); (B) hERG; (C) KCNQ2/3; (D) Kv1.4; (E) TASK3 and (F) Nav1.7.
exceeding 50,000 compounds. In a relatively short period of time, the field of ion channel research has moved on considerably, and the utility of the IonWorks® Quattro™ platform for ion channel screens has been recognised by many investigators. Many large pharmaceutical and smaller biotechnology companies have acknowledged the contribution that IonWorks® Quattro™ can make towards progression of their research programmes. The flexibility of assay plate formats and the greater level of control over assay conditions mean that electrophysiology-based primary screens are now becoming more commonplace. Since multi-parameter voltage protocols capable of probing the particular states of the ion channel target are also possible, the information content from this type of screen far exceeds what was previously possible. In addition, researchers are beginning to routinely configure assays for ion channels that can present significant technical challenges, for example, calcium-activated potassium channels (22), chloride channels, two-pore domain channels (23) and hyperpolarisation-activated cyclic nucleotide-gated channel families (24). Saying that this approach has been universally adopted would be misleading though; the consumable spend, throughput and timeline for this type of screen are considered unacceptable by groups who prefer to use the more traditional fluorescence-based HTS approaches for primary screening and reserve electrophysiology-based methodology for more detailed follow-up studies. The IonWorks® Quattro™ platform still cannot be considered totally satisfactory in the present format. Although the introduction of a 48-channel fluidics
head for compound application has improved throughput compared to the previous 12-channel system, the discontinuous read cycle for the 48-channel electrode head severely limits the throughput of each 384-well plate read. The discontinuous read also means that cells are not voltage-clamped between the control and the drug challenge pulse, which is not an entirely satisfactory situation for an electrophysiology-based assay, although a lengthy pre-read holding period can help to compensate for this. The discontinuous read also means the machine has restricted utility for fast ligand-gated ion channel targets. An improved system with all 384 wells reading simultaneously, combined with concurrent compound application, would be a partial solution to the current recording limitations and it will be interesting to see whether the technical and financial challenges can be overcome to produce such an improved instrument. The recent launch of the Sophion
Fig. 9.3. IonWorks® Quattro™ enables rapid functional assessment of putative clonal cell lines. (A) Twelve putative clones are examined using a special cell boat that enables 32 individual cells from each clone to be examined for functional expression on one PatchPlate™. Filter metrics are set to show expression above a defined threshold. In this instance two clones are highlighted on the PatchPlate view where functional expression exceeded 1 nA for a significant majority of the cells evaluated. (B) Four example current traces from a clone selection exercise; current amplitude, inactivation characteristics and stability over two read cycles are all evident and can be rapidly evaluated.
QPatch™ HTX illustrates the attractiveness of planar substrates containing multiple holes. This instrument is a revision of the 48-channel QPatch™ HT and utilises a QPlate consumable containing multiple holes in each recording well to increase recording success rates. The facility for continuous recording and compound application has clear benefits for study of challenging ligand-gated ion channels and the machine offers significant potential for improving throughput in this area of ion channel research. With IonWorks® Quattro™ PPC mode recording showing such high success rates, the utility of single-hole PatchPlates™ could be questioned. However, the single-hole substrate is particularly well suited to rapid screening for functional current expression during cell line generation programmes. With a typical throughput of up to 120 clones in a day and data generated for 32 individual recordings per clone, this is a powerful technique for rapid progression of stable cell line generation (Fig. 9.3). Construction of high-quality clonal cell lines is a vitally important part of drug discovery and fast decision making for the progression of putative clones towards validation is a real benefit. The ability to select a particular cell line according to the expression level and seal quality can significantly impact the timeline and cost for the subsequent screening campaign. There is also an added benefit that clones identified for functional expression on IonWorks® generally perform well in conventional and other electrophysiology formats. Single-hole PatchPlate® recording is also useful for assay validation and optimisation before transfer to PPC mode. Although not without technical limitations, IonWorks® Quattro™ firmly led the medium-throughput electrophysiology-based screening field for voltage-gated ion channel targets in the early stages of 2008. The diverse experimental applications and flexible recording formats mean it is likely to remain an invaluable tool for ion channel discovery research for some years to come.
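To make the PPC averaging principle described above concrete, the sketch below simulates one 384-well plate in which each well signal is the mean of 64 apertures, some covered by expressing cells and some by debris or non-expressing cells. All numbers (expression fraction, per-cell current) are invented for illustration and are not taken from the text:

```python
import random

random.seed(1)
N_HOLES, N_WELLS = 64, 384

def well_signal(expressing_fraction: float = 0.7, mean_current_nA: float = 1.0) -> float:
    """Average current per hole for one simulated PPC well.

    Holes covered by an expressing cell contribute a current drawn from a normal
    distribution; holes sealed by debris or non-expressing cells contribute nothing
    but still count in the 64-hole average.
    """
    per_hole = [random.gauss(mean_current_nA, 0.3) if random.random() < expressing_fraction else 0.0
                for _ in range(N_HOLES)]
    return sum(per_hole) / N_HOLES

signals = [well_signal() for _ in range(N_WELLS)]
mean = sum(signals) / N_WELLS
sd = (sum((s - mean) ** 2 for s in signals) / (N_WELLS - 1)) ** 0.5
print(f"mean well signal = {mean:.2f} nA, well-to-well SD = {sd:.2f} nA")
```

The point of the simulation is simply that averaging across 64 holes makes the well signal far more reproducible than any single-hole recording, which is why PPC success rates can approach 100% of wells for cell lines with adequate expression.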
6. Giga Seal Quality Automated Electrophysiology
Planar patch-clamp systems that give rise to higher fidelity data recordings include the PatchXpress® 7000A from Molecular Devices (introduced in 2002) (25), the QPatch™ 16 from Sophion Biosciences (introduced in 2003) (18, 26) and the NPC-16 Patchliner from Nanion Technologies (introduced in 2006) (27). All three machines operate via a 16-well planar substrate recording chip and consequently have considerably lower throughput than IonWorks® Quattro™. However, they have the advantage of active monitoring of individual recordings, with software control over decisions regarding actions such as the
application of suction to maintain recording quality or the perfusion of the next compound. Data quality is comparable to conventional electrophysiology, with the advantage that the instruments can potentially record from up to 16 individual cells per recording session, although recording success rates more typically lie between 6 and 10 completed experiments. This boost in throughput over the conventional technique means the instruments are generally used for compound follow-up after a primary screen, safety pharmacology such as hERG profiling and pharmacological validation of clonal cell lines. PatchXpress® and QPatch™ 16 are true 16-channel systems able to record in parallel from all wells simultaneously and independently. Both machines have now been adopted by a wide range of pharmaceutical companies and contract research organisations. QPatch™ has a more complex planar substrate arrangement than PatchXpress®, with each QPlate incorporating a compound reservoir, a laminar flow channel driven by passive capillary action and a waste reservoir. This enables small volumes of compound to be applied to a cell with very fast solution exchange, making the system well suited to the study of both fast ligand-gated and voltage-gated ion channels. Due to the presence of an onboard cell handling station, which is able to maintain cells in good condition for up to 4 hours, extended periods of unattended use are possible with this system. Whilst PatchXpress® employs a less-complex drug application system, via simultaneous compound perfusion and aspiration into a simple well structure, it is also able to achieve rapid solution exchange rates consistent with recording high-quality ligand- and voltage-gated responses. PatchXpress® does not have an integrated cell maintenance station and therefore does require more user input than QPatch™. Potential data throughput is similar for both machines; however, the liquid handling system of the QPatch™ instruments can be configured with four or eight pipettes, as opposed to the single pipettor specified on the PatchXpress®, which can help to speed up QPatch™ experiment times, especially for ligand-gated experiments where the pipettors are in great demand. The most significant difference between PatchXpress® and QPatch™ 16 is scalability. PatchXpress® took advantage of the Axon Instruments MultiClamp amplifiers normally used for conventional electrophysiology and, although these are very high-precision state-of-the-art recording units, their physical size means there is little scope for expansion of the number of recording channels within the PatchXpress® instrument. Rather than taking advantage of existing technology, QPatch™ was designed from the outset to be a scalable system. By using miniaturised printed circuit board amplifiers (designed by Alembic Instruments) that contain 16 separate amplifier circuits per board, significant expansion of recording capability has been made available within the QPatch™ chassis. The QPlate consumable was also designed to offer capacity
upgrade, and with the recent introduction of a new 48-channel QPatch™ HT chip, existing users with a QPatch™ 16 may now upgrade their existing system or new users can purchase the QPatch™ HT recording system. Both the upgrade and the QPatch™ HT have a significantly enhanced throughput compared to the other commercially available giga seal quality systems. Whether the increased capital outlay and cost of consumables will hinder widespread adoption of this system remains to be seen, although the cost of consumables has not proven to be a particularly significant barrier to companies adopting other high-throughput electrophysiology systems in their laboratories. An automated system that is probably less widely recognised is the Patchliner® from Nanion (27). This benchtop machine, based on HEKA EPC-10 amplifiers and a Tecan liquid handling robot, has a significantly smaller footprint than either the QPatch™ or the PatchXpress®. Patchliner® uses a 16-channel planar substrate and is able to produce high-quality giga seal recordings in an automated format. Patchliner® can be configured to run between 2 and 8 channels in parallel with unattended use for periods of up to 4 hours. It also has the advantage of being able to rapidly exchange internal and external recording solutions through a specially designed glass chip, facilitating detailed biophysical and pharmacological analysis of the compound mechanism of action. The flexibility of this system means it has been adopted by both academic and industry laboratories and it will be interesting to see whether the instrument, and the recently launched 96-channel SynchroPatch® 96, gain wider acceptance over the next few years. Another automated system based on HEKA amplifiers and a Tecan liquid handling robot is the Flyscreen® 8500 from Flyion (28). This screening system, with options for three or six recording channels, has been developed around a novel means of recording from cells. Rather than pursuing the planar substrate route, the device records from cells sealed within a 'FlipTip' microelectrode structure. The single-use glass consumables are similar to those used in conventional electrophysiology and high-resistance seals are formed by simply flushing a suspension of cells towards the tip. This imaginative approach to obtaining a giga seal quality recording not only facilitates automated software-controlled recording, but also negates the requirement for expensive micromanipulators or planar chip consumables. Since the high-resistance giga seal is formed within the tip of the glass electrode, recordings are less susceptible to failure due to vibration or other external forces than conventional electrophysiology recordings. The intracellular face of the microelectrode is housed within a plastic insert, which contains recording solution and facilitates the application of suction to gain electrical access to the cell interior. Since compounds are applied inside the microelectrode structure, there have been some concerns regarding the speed of solution exchange within
the tip of the FlipTip consumable and the potential for reduced compound potency due to clumps of cells restricting compound access. These concerns have been addressed via the introduction of a new microforge-based electrode manufacturing system, which significantly improves compound access for this type of recording. Microelectrodes are fabricated using an automated feedback-controlled, pressure-polishing microforge employing CCD-based tip recognition software. This system fabricates electrodes with both a very short tip section and a significantly enlarged shank region for perfusion of solutions. Using these electrodes, solution exchange rates as fast as 50 ms can be achieved and recordings of fast ligand-gated responses have been demonstrated.
7. Single-Channel Planar Patch-Clamp Systems
Lower-throughput giga seal quality semiautomated planar substrate electrophysiology systems are also now available. These single-channel systems enable researchers to perform patch-clamp electrophysiology recordings without the need for extensive periods of training. Since the new single-channel systems are bench mounted, there is a considerable space saving compared to a conventional electrophysiology system, which requires at least 1.5 m² of floor space for an average-sized vibration isolation table and amplifier. The time taken to set up and be ready to record is a matter of minutes with the new systems, in stark contrast to a conventional electrophysiology rig, which requires practical skills and at least half a day of assembly time during initial set-up, followed by an inevitable period of adjustment to both perfusion and electrical connections to achieve stable recordings with low levels of extraneous electrical noise. The Port-a-Patch® system from Nanion miniaturises patch-clamp electrophysiology recording apparatus down to a total benchtop footprint of around 0.18 m² including the amplifier (27). Using essentially the same planar technology as the Patchliner®, but reduced to fit a single-channel format, it enables scientists to make high-quality patch-clamp measurements from a wide range of both cultured and primary cell types. The small size, ease of use and versatility to exchange both intracellular and extracellular solutions mean this instrument is likely to become increasingly popular in both academic and industry electrophysiology laboratories. In addition, the Port-a-Patch® can utilise either the industry-standard Axon or HEKA amplifiers with their respective software and can therefore be incorporated quickly into an existing electrophysiology group. The PatchBox from Flyion also offers a benchtop single-channel patch-clamp recording system and uses the same principle as the Flyscreen® 8500, where the recording occurs inside a
microelectrode structure rather than patching on a planar substrate. With an amplifier and a PatchBox footprint of around 0.25 m², this system uses electrodes fabricated using a conventional microelectrode puller. As with the Nanion Port-a-Patch, the system is compatible with HEKA amplification. Whilst this approach may be more suitable for experienced electrophysiologists, it does provide a level of experimental flexibility and consumable cost-effectiveness that may be lacking in systems with prefabricated substrates.
8. Single-Channel Non-Planar Patch-Clamp Systems
The Cellectricon Dynaflow® system, first launched in 2003, is a giga seal quality recording system that uses a prefabricated chip with inbuilt laminar flow channels to facilitate rapid solution exchange around a single cell suspended at the end of a conventional patch pipette (29). Cell preparation for this technique therefore involves a step to dissociate cells, enabling a cell to be patched, subsequently detached from the substrate and placed in the vicinity of solutions perfused under laminar flow conditions. These steps require the operator to be trained and experienced with conventional electrophysiology techniques. To facilitate accurate switching between different solutions, the Dynaflow® chip is housed on a modified microscope stage, which allows the position of the laminar flow channels to be accurately controlled by a software-driven motorised scan stage. Solution exchange is then enabled by placing the cell in one laminar stream, with rapid movement of the stage switching the cell between laminar streams containing control buffer or test solutions. Whilst the technique is applicable to rapid screening of compounds for both voltage-gated and ligand-gated ion channels, the system is particularly suited to fast desensitising ligand-gated channels due to the very rapid solution exchange possible between laminar flow channels. Three different Dynaflow® chips are now available, accommodating 8, 16 or 48 different compounds and facilitating testing of multiple compounds per recording. Early reports of the system identified issues with certain compounds being absorbed into the structure of the chip, resulting in inaccurate potency reporting; however, the chips have since been modified to minimise compound absorption and the manufacturers claim to have rectified this issue. Cellectricon have recently announced the launch of an automated high-throughput electrophysiological instrument based around their micro-fluidics expertise. The instrument is a 96-channel system with the capacity to make the rapid solution changes necessary for the screening of fast desensitising ligand-gated ion channels and will be available in late 2009.
9. Oocyte-Based Automated Voltage Clamp
Xenopus oocytes have been used as a model system for the expression and study of ion channels and transporter proteins for many years. Their robust nature and ability to rapidly express protein following injection of messenger RNA have meant that they have often been utilised for study of both voltage- and ligand-gated channels. Automated systems such as the eight-channel OpusXpress® 6000A voltage clamp system (30) and the 96-well format Robocyte from Multichannel Systems (31) offer recording in a format requiring minimal user training and/or intervention. The Robocyte has the advantage of unattended overnight injection and recording of responses, whilst the OpusXpress® records simultaneous responses in parallel to boost throughput. Both machines offer the capability of investigating ion channel targets without having to create a stable cell line, although some caution is advised when generating data from oocyte systems due to the non-mammalian cell background and the potential for compound partitioning or absorption into the yolk.

10. Impact of the Technological Advances
The early years of the 21st century have seen a dramatic growth in the number of commercially available electrophysiology-based technologies able to facilitate progression of both pharmaceutical and academic research campaigns. Among the technological advances outlined above, planar substrate recording must rank as the most exciting development to have emerged. Prioritisation of compounds can now occur over a timescale that would not have been possible a decade ago. Planar patch now enables everything from rapid profiling of high-value compounds in secondary assays such as hERG with giga seal quality recording, all the way up to medium-throughput electrophysiology-based screening of libraries of up to 100,000 compounds. Crucially, electrophysiology-based primary screens recording in PPC mode can now quickly provide information regarding both the level of compound activity and the putative mechanism of action in a screening format with unrivalled control over the assay conditions. The technology is still relatively new and there is significant scope for refinement of most of the available systems. IonWorks® Quattro™ in particular is compromised by the discontinuous compound addition and recording phase, which slows down throughput, limits voltage control to defined periods during the assay and makes assay development for
fast ligand-gated ion channels either impossible or particularly challenging. Whilst some researchers have begun to configure IonWorks® Quattro™ assays to address fast ligand-gated channels such as GABAA, continuous voltage clamping, compound addition and reading would be a more satisfactory and versatile solution. All planar systems also suffer from one further disadvantage: the cost of the consumable. This is a particular burden for research budgets and deters some research teams from fully exploiting the potential of the new technology. Whilst the new electrophysiology-based technologies have probably had less impact for academic researchers than for groups from within the pharmaceutical industry, there are signs that this will change in the next few years. The main barriers to entry in this market for grant-funded university groups are the initial cost of the equipment, high consumables costs and the lack of a real need to generate many thousands of data points per week. The lower-throughput giga seal quality 1- to 16-channel automated machines are most likely to be the first systems adopted, facilitating rapid high-quality pharmacological profiling experiments and allowing molecular biologists to evaluate new ion channel clones without a protracted period of training. Some university groups such as the Faculty of Biological Sciences in Leeds (UK) have already taken the first step (32) and it will be interesting to see where the new technologies will begin to impact research in the academic environment over the next few years. The advantages to the pharmaceutical industry are much clearer; with an ever-increasing demand for novel drugs to replace those nearing the end of their patent life, the wide range of ion channel targets now implicated across diverse therapeutic areas and the potential to secure vast amounts of future revenue, significant capital and consumable resource investment becomes much easier to justify. How automated electrophysiology resource is used within each company will be determined by a combination of whether managements are willing to move back towards ion channel targets, whether they are willing to move away from the tried and trusted traditional HTS approaches and their ability to absorb the increased consumables spend that accompanies a move into electrophysiology-based screening. Many large pharmaceutical companies may still be smarting from their first move into ion channels, where significant amounts of money were invested for comparatively little return on the investment. Others will see this opportunity as the way to revisit and move forward earlier campaigns or start afresh with new insight and suitable screening tools. For the smaller specialist biotechnology companies it is likely that electrophysiology-based screening will be universally adopted over a faster timescale, with companies taking advantage of the technology to rapidly develop expertise within their own niche area. Unencumbered by a corporate bias towards particular experimental approaches, with perhaps more freedom to
explore novel approaches and a pressing need for rapid data generation to secure funding, this sector may benefit significantly from the additional data available when adopting an electrophysiology-based screening approach. It is clear that a number of strategic options are now available to pharmaceutical industry researchers wishing to investigate ion channel targets (Fig. 9.4). The most conservative approach is to identify and characterise hits from a large compound library (100,000 compounds) using the traditional fluorescence-based or other HTS methodologies and then progress only a small subset of hits meeting potency and selectivity criteria towards giga seal quality electrophysiology-based validation. This approach is likely to be adopted by larger pharmaceutical companies who still wish to screen large libraries using an assay format consistent with previous screening campaigns. A slightly modified version of this approach is again to screen a large (100,000) library using fluorescence
Fig. 9.4. A number of strategic approaches for progressing ion channel discovery research are now available. Whilst the traditional HTS-biased approach involves emphasis on standard plate-based assays with a relatively small contribution from conventional electrophysiology (Conv Ephys), the new commercially available electrophysiology formats give many options for the point where electrophysiology makes a contribution. This may still involve a standard HTS campaign, but could now involve IonWorks® Quattro™-based primary screening of a relatively large compound collection. Screen follow-up can now include profiling of hundreds to thousands of compounds using IonWorks® Quattro™ or the multi-channel giga seal quality systems such as PatchXpress®, QPatch™ or Patchliner®. Finally, detailed assessment of compounds can occur using either low-throughput planar substrate systems (LT planar) or the conventional technique.
techniques and then move directly to PPC and/or giga seal quality automated electrophysiology to characterise and progress the confirmed hits. Both of these approaches may be compromised by the use of the HTS methodologies in the initial phase, but they do have the advantage of speed for the primary screen and the ability to examine very large compound libraries. Another approach that is starting to gain favour amongst a number of industry researchers is to perform the primary screen using PPC planar substrate electrophysiology. Here, either small focused compound collections of a few thousand compounds or even compound collections (100,000) representing the diversity space from a larger collection are examined for activity in the electrophysiology assay, the rationale behind this approach being that the larger consumable costs are offset by the increased likelihood of finding high-quality hits via the additional control of the assay conditions and the ability to develop relatively complex multi-parameter screening protocols. Significant further value can be added to the electrophysiology-based screening approach via input from experienced chemists and use of in silico techniques in the initial stages to design a focused library directed towards the target of interest. Large corporations may elect to compile such a focused or directed compound collection via selection from within their large diverse compound library, whilst smaller companies may opt to use specialist chemical providers such as Asinex, Chemdiv, Albany Molecular Research or BioFocus DPI to purchase an off-the-shelf ion channel-directed collection or to create a bespoke focused library. It would be unfortunate if the more traditional conventional patch-clamp techniques were written off prematurely though; the quality of data and sheer versatility of recording configurations remain a significant strength that should not be overlooked. Indeed, in the short term the introduction of planar substrate electrophysiology will most likely benefit the conventional technique, with the planar technology platforms able to act as fast filters allowing valuable conventional electrophysiology resource to be reserved for complex biophysical and pharmacological analysis. It would be too easy to forget that the conventional technique has produced a generation of skilled electrophysiologists who have great technical ability and an in-depth understanding of the challenges associated with the investigation of ion channel proteins. This level of understanding is critical for effective prosecution of a discovery research programme directed against ion channels and needs to be maintained, regardless of the recording format being used. It is difficult to imagine that the field of automated electrophysiology will continue to evolve at the rate we have recently experienced. The technology and techniques are clearly robust and will no doubt be refined further to address more challenging
targets and enable greater throughput and recording fidelity. Although most ion channel specialists believe we are now armed with powerful tools to facilitate progression of screening campaigns towards the eventual goal of a therapeutically useful drug, the crucial test will be whether this technology and associated expense actually produce that additional edge over the previous technologies. Clearly this debate will not be resolved for some time, but with the renewed appetite among companies to address ion channel targets and the quality and breadth of assays now being configured by many industry groups, it is difficult to imagine that within the next decade we will not see evidence for automated electrophysiology accelerating the identification of new ion channel modulators.

References

1. Bernard, G., Shevell, M.I. (2008) Channelopathies: a review. Pediatr. Neurol. 38(2), 73–85.
2. Hille, B. (2001) Ion Channels of Excitable Membranes. Sinauer Associates, Inc.
3. Clapham, D. (2006) An introduction to TRP channels. Ann. Rev. Physiol. 68, 619–647.
4. Gosling, M., Poll, C., Li, S. (2005) TRP channels in airway smooth muscle as therapeutic targets. Naunyn-Schmiedebergs Arch. Pharmacol. 371, 277–284.
5. Xie, M., Holmqvist, M.H., Hsia, A.Y. (2004, April) Ion channel drug discovery expands into new disease areas. Curr. Drug Discov., 31–33.
6. Southan, A., James, I.F., Cronk, D. (2005) Ion channels: new opportunities for an established target class. Drug Discov. World 6(3), 17–23.
7. Hogg, D.S., Boden, P., Lawton, G., Kozlowski, R.Z. (2006) Ion channel drug targets: unlocking the potential. Drug Discov. World 7(3), 83–92.
8. Yang, Y.C., Kuo, C.C. (2005) An inactivation stabilizer of the Na+ channel acts as an opportunistic pore blocker modulated by external Na+. J. Gen. Physiol. 125(5), 465–481.
9. Koidl, B., Miyawaki, N., Tritthart, H.A. (1997) A novel benzothiazine Ca2+ channel antagonist, semotiadil, inhibits cardiac L-type Ca2+ currents. Eur. J. Pharmacol. 322(2–3), 243–247.
10. Baxter, D.F., Kirk, M., Garcia, A.F., Raimondi, A., Holmqvist, M.H., Flint, K.K., Bojanic, D., Distefano, P.S., Curtis, R., Xie, Y. (2002) A novel membrane potential-sensitive fluorescent dye improves cell-based assays for ion channels. J. Biomol. Screen. 7(1), 79–85.
11. Vickery, R., Amagasu, S., Chang, R., Mai, N., Kaufman, E., Martin, J., Hembrado, J., O'Keefe, M., Gee, C., Marquess, D., Smith, J. (2004) Comparison of the pharmacological properties of rat NaV1.8 with rat NaV1.2a and human NaV1.5 voltage-gated sodium channel subtypes using a membrane potential sensitive dye and FLIPR. Receptors and Channels 10(1), 11–23.
12. Minta, A., Kao, J.P., Tsien, R.Y. (1989) Fluorescent indicators for cytosolic calcium based on rhodamine and fluorescein chromophores. J. Biol. Chem. 264, 8171–8178.
13. Huang, C.J., Harootunian, A., Maher, M.P., Quan, C., Raj, C.D., McCormack, K., Numann, R., Negulescu, P.A., González, J.E. (2006) Characterization of voltage-gated sodium-channel blockers by electrical stimulation and fluorescence detection of membrane potential. Nat. Biotechnol. 24(4), 415–416.
14. Gill, R., Lee, S.S., Hesketh, J.C., Fedida, D., Rezazadeh, S., Stankovich, L., Liang, D. (2003) Flux assays in high throughput screening of ion channels in drug discovery. Assay Drug Dev. Technol. 1(5), 709–717.
15. Liu, C.J., Priest, B.T., Bugianesi, R.M., Dulski, P.M., Felix, J.P., Dick, I.E., Brochu, R.M., Knaus, H.G., Middleton, R.E., Kaczorowski, G.J., Slaughter, R.S., Garcia, M.L., Köhler, M.G. (2006) A high-capacity membrane potential FRET-based assay for NaV1.8 channels. Assay Drug Dev. Technol. 4, 37–48.
16. Kolok, S., Nagy, J., Szombathelyi, Z., Tarnawa, I. (2006) Functional characterization of sodium channel blockers by membrane potential measurements in cerebellar neurons: prediction of compound preference for the open/inactivated state. Neurochem. Int. 49, 593–604.
17. Hamill, O.P., Marty, A., Neher, E., Sakmann, B., Sigworth, F.J. (1981) Improved patch-clamp techniques for high-resolution current recording from cells and cell-free membrane patches. Pflugers Arch. 391, 85–100.
18. Asmild, M., Oswald, N., Krzywkowski, K.M., Friis, S., Jacobsen, R.B., Reuter, D., Taboryski, R., Kutchinsky, J., Vestergaard, R.K., Schrøder, R.L., Sørensen, C.B., Bech, M., Korsgaard, M.P.G., Willumsen, N. (2003) Upscaling and automation of electrophysiology: toward high throughput screening in ion channel drug discovery. Receptors and Channels 9, 49–58.
19. Owen, D., Silverthorne, A. (2002) Channelling drug discovery: current trends in ion channel discovery research. Drug Discov. World 3(2), 48–61.
20. Schroeder, K., Neagle, B., Trezise, D.J., Worley, J. (2003) IonWorks HT: a new high-throughput electrophysiology measurement platform. J. Biomol. Screen. 8(1), 50–64.
21. Finkel, A., Wittel, A., Yang, N., Handran, S., Hughes, J., Costantin, J. (2006) Population patch clamp improves data consistency and success rates in the measurement of ionic currents. J. Biomol. Screen. 11(5), 488–496.
22. John, V.H., Dale, T.J., Hollands, E.C., Chen, M.X., Partington, L., Downie, D.L., Meadows, H.J., Trezise, D.J. (2007) Novel 384-well population patch clamp electrophysiology assays for Ca2+-activated K+ channels. J. Biomol. Screen. 12(1), 50–60.
23. Clark, G.S., Todd, D., Liness, S., Maidment, S.A., Dowler, S., Southan, A.P. (2005) Expression and characterisation of a two pore potassium channel in HEK293 cells using different assay platforms. Proceedings of the British Pharmacological Society, at http://www.pA2online.org/abstracts/Vol3Issue4abst105P.pdf.
24. Lee, Y.T., Vasilyev, D.V., Shan, Q.J., Dunlop, J., Mayer, S., Bowlby, M.R. (2008) Novel pharmacological activity of loperamide and CP-339,818 on human HCN channels characterized with an automated electrophysiology assay. Eur. J. Pharmacol. 581(1–2), 97–104.
25. Tao, H., Santa Ana, D., Guia, A., Huang, M., Ligutti, J., Walker, G., Sithiphong, K., Chan, F., Guoliang, T., Zozulya, Z., Saya, S., Phimmachack, R., Sie, C., Yuan, J., Wu, L., Xu, J., Ghetti, A. (2004) Automated tight seal electrophysiology for assessing the potential hERG liability of pharmaceutical compounds. Assay Drug Dev. Technol. 2(5), 497–506.
26. Kutchinsky, J., Friis, S., Asmild, M., Taboryski, R., Pedersen, S., Vestergaard, R.K., Jacobsen, R.B., Krzywkowski, K., Schrøder, R.L., Ljungstrøm, T., Hélix, N., Sørensen, C.B., Bech, M., Willumsen, N.J. (2003) Characterization of potassium channel modulators with QPatch automated patch-clamp technology: system characteristics and performance. Assay Drug Dev. Technol. 1(5), 685–693.
27. Farre, C., Stoelzle, S., Haarmann, C., George, M., Brüggemann, A., Fertig, N. (2007) Automated ion channel screening: patch clamping made easy. Expert Opin. Ther. Targets 11(4), 557–565.
28. Lepple-Wienhues, A., Ferlinz, K., Seeger, A., Schäfer, A. (2003) Flip the tip: an automated, high quality, cost-effective patch clamp screen. Receptors Channels 9(1), 13–17.
29. Dunlop, J., Roncarati, R., Jow, B., Bothmann, H., Lock, T., Kowal, D., Bowlby, M., Terstappen, G.C. (2007) In vitro screening strategies for nicotinic receptor ligands. Biochem. Pharmacol. 74(8), 1172–1181.
30. Papke, R.L. (2006) Estimation of both the potency and efficacy of alpha7 nAChR agonists from single-concentration responses. Life Sci. 78(24), 2812–2819.
31. Schnizler, K., Küster, M., Methfessel, C., Fejtl, M. (2003) The Robocyte: automated cDNA/mRNA injection and subsequent TEVC recording on Xenopus oocytes in 96-well microtiter plates. Receptors Channels 9(1), 41–48.
32. Xu, S.-Z., Sukumar, P., Zeng, F., Li, J., Jairaman, A., English, A., Naylor, J., Ciurtin, C., Majeed, Y., Milligan, C.J., Bahnasi, Y.M., Al-Shawaf, E., Porter, K.E., Jiang, L.H., Emery, P., Sivaprasadarao, A., Beech, D.J. (2008) TRPC channel stimulation by extracellular thioredoxin. Nature 451(7174), 69–73.
Chapter 10 Automated Patch Clamping Using the QPatch Kenneth A. Jones, Nicoletta Garbati, Hong Zhang, and Charles H. Large Abstract Whole-cell voltage clamp electrophysiology using glass patch pipettes (1) is regarded as the gold standard for measurement of compound activity on ion channels. Despite the high quality of the data generated by this method, in its traditional format, patch clamping has limited use in drug screening due to very low throughput. Over the years, developments in microfabrication have driven the development of planar, multi-aperture technologies that are suitable for parallel, automated patch recording techniques. Here we present detailed methods for two common applications of the planar patch technology using one of the commercially available instruments. The results demonstrate (a) the high quality of whole-cell recordings obtainable from cell lines expressing human Nav1.2 or hERG ion channels, (b) the advantages of the methodology for increasing throughput, and (c) examples of how these assays support ion channel drug discovery. Keywords: Automated patch clamp, Electrophysiology, QPatch, Nav1.2, hERG, Ion channels.
1. Introduction
Whole-cell voltage clamp using patch electrodes (1) is regarded as the gold standard for measuring the physiological and pharmacological properties of ligand-gated and voltage-gated ion channels. Through an electrode attached to the cell membrane, current generated by ions flowing through ion channels in the cell membrane can be measured while the membrane potential is clamped to defined voltages. Despite the high-quality data generated by this method, throughput is relatively slow and not suitable for testing hundreds of compounds or many compounds at multiple concentrations. While ion channels traditionally have made excellent drug targets, the lack of parallel recording methods has caused a bottleneck in screening for this important class of transmembrane proteins.
In traditional patch clamping, glass electrodes are manually fabricated using a heated electrode puller and used to record from cells one at a time. In the 1990s, investigators began developing planar materials featuring 1–2 μm diameter holes suitable for obtaining high-resistance seals against cells (2). After many refinements, glass- or silicon-based chips containing multiple apertures were designed to be used with sophisticated hardware and electronics such that cell positioning, gigaseal and whole-cell formation are obtained automatically. Several such devices, including the PatchXpress® (3, 4), developed by Axon Instruments; QPatch® (5), developed by Sophion; and IonWorks® (6), distributed by Molecular Devices, have now been successfully introduced into the market and are being used by pharmaceutical, contract and academic research labs. At the heart of the QPatch is a disposable chip that contains an array of 16 micro-fabricated holes enabling voltage clamp measurements to be made in parallel. A carefully prepared cell suspension is added to the chip units and negative pressure is applied to attract a cell. Increasing this pressure allows the formation of a gigaseal, and subsequent pressure ramps rupture the cell membrane, providing a whole-cell configuration. Once a whole-cell configuration has been achieved, voltage protocols are executed, and the system then adds drugs using a single or multiple robotic pipettor. Data are recorded and stored on a dedicated server. Many CNS-active drugs have a tendency to cause unwanted effects on ion channels responsible for electrical conduction in the heart. The potassium channel encoded by the human ether-a-go-go-related gene (hERG) (7, 8), which is critically responsible for the repolarising phase of ventricular action potentials, is associated with adverse events such as QT interval prolongation when blocked by drugs (9). To a great extent, the impetus for the commercial development of automated patch clamping arises from the need of the pharmaceutical industry to develop drugs that do not inhibit hERG at therapeutic concentrations and thus have an improved cardiovascular safety profile. Voltage-gated sodium channels (VGSC) have a fundamental role in most electrically excitable cells since they conduct the inward currents that occur during the rising phase of action potentials. The main pore-forming component of these channels is the α-subunit, of which nine different subtypes have been identified (11). The human subtype Nav1.2 is predominantly expressed on the axons of central neurones (11, 12). The channel may exist in at least three different conformational states, resting (closed), open or inactivated, and the transition between these states is voltage dependent (13). The mechanism of activation of this ion channel is membrane depolarisation, which causes a conformational change in the protein (open state) allowing Na+ ions to permeate the channel. Channel opening is followed by a rapid (<2 ms) transition to an inactivated state. Upon repolarisation of the cell
membrane, channels recover from this inactivated state and transition back to the closed or resting state. Drugs that stabilise the inactivated state of voltage-gated sodium channels have an established role in the treatment of neurological and some psychiatric disorders (14). Given the voltage- and time-dependent nature of drug interactions with the inactivated state of sodium channels, estimation of the affinity of drugs for this state requires a functional assay such as patch-clamp electrophysiology (15) and cannot be achieved using other currently available high-throughput screening techniques, such as voltage-sensitive dyes or radioligand binding. Consequently, the advent of planar patch technology has opened up the possibility for effective screening against this target and the identification of novel sodium channel blockers with improved selectivity. This chapter describes the application of the 16-channel QPatch (Sophion Bioscience A/S) to estimate the affinity of novel compounds for the inactivated state of Nav1.2 and to determine the potency of compounds blocking hERG.
2. Materials
2.1. Cell Culture and Cell Preparation
1. Dulbecco's modified Eagle's Medium (Gibco, Invitrogen) supplemented with 10% heat-inactivated foetal bovine serum (Gibco) and geneticin (Gibco) as the selection antibiotic (400 μg/mL).
2. 293 SFM II medium (Gibco) containing 25 mM HEPES (Gibco) and 0.04 mg/mL soy bean trypsin inhibitor (Sigma) (see Note 2).
3. D-PBS (Dulbecco's, w/o calcium, magnesium and sodium bicarbonate; GibcoBRL, Invitrogen).
4. Versene (Gibco).
5. Trypsin/EDTA (T/E) (10×) (Gibco, Life Technologies).
6. DMEM/F12 + Glutamax™ cell culture medium (Gibco) supplemented with 10% foetal bovine serum, 1% penicillin/streptomycin (Invitrogen) and 500 μg/mL G418.
7. Detachin™ (Genlantis).
2.2. Electrophysiological Solutions to Record Nav1.2 Currents
1. Internal solution: CsF 140 mM, EGTA 1 mM, NaCl 10 mM, HEPES 10 mM, pH 7.3 with CsOH (need 500 mL 2.5 M CsOH in 1000 mL). Note: adjust osmolarity to 320 mOsm with sucrose; CsF will precipitate at 4°C; EGTA must be dissolved in CsOH (380 mg EGTA in 5 mL 0.75 M CsOH). Filter the solution.
2. External solution: NaCl 140 mM, KCl 3 mM, MgCl2 1.2 mM, CaCl2 1 mM, Glucose 11 mM, HEPES 10 mM, CdCl2 0.1 mM, TEA-Cl 20 mM, pH 7.3 with NaOH. Adjust osmolarity to 320 mOsm with sucrose. Filter the solution.
2.3. Electrophysiological Solutions to Record hERG Currents
1. Internal solution: KCl 100 mM, KF 30 mM, EGTA 10 mM, MgCl2 1 mM, CaCl2 1 mM, HEPES 10 mM, K2ATP 5 mM. Adjust pH to 7.2 with KOH; adjust osmolarity to 305 mOsm with glucose (see Note 1).
2. External solution: NaCl 140 mM, KCl 4 mM, MgCl2 1 mM, CaCl2 1.8 mM, HEPES 10 mM, Glucose 10 mM. Adjust pH to 7.4 with NaOH; adjust osmolarity to 295 mOsm with glucose.
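Where an osmolarity adjustment with a sugar is called for, the required amount can be estimated by treating sucrose or glucose as an ideal, fully dissolved solute (1 mM of dissolved sugar contributes roughly 1 mOsm). This is a minimal sketch, with the measured starting osmolarity given as an assumed input:

```python
MOLAR_MASS = {"sucrose": 342.3, "glucose": 180.16}   # g/mol

def grams_to_add(target_mOsm: float, measured_mOsm: float,
                 volume_L: float, solute: str = "sucrose") -> float:
    """Grams of a non-ionic solute needed to raise osmolarity to the target value."""
    delta_mM = target_mOsm - measured_mOsm
    if delta_mM <= 0:
        return 0.0
    return delta_mM / 1000.0 * MOLAR_MASS[solute] * volume_L

# e.g. a 0.5 L batch of the Nav1.2 internal solution measured at 300 mOsm, target 320 mOsm
print(f"add {grams_to_add(320, 300, 0.5, 'sucrose'):.2f} g sucrose")
```

In practice the osmolarity should be re-measured after the addition, since the ideal-solute assumption only gives a first estimate.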
2.4. Drugs
Test compounds are dissolved in DMSO at a concentration of 10–100 mM. After dilution in extracellular solution, the final DMSO concentration is 0.1% (or less) and has no obvious effects on sodium or potassium currents (see Note 3).
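For example, the final test concentration and DMSO content follow directly from the stock concentration and dilution factor; a simple sketch in which the 1:1000 dilution is chosen only for illustration:

```python
def dilute(stock_mM: float, dilution_factor: float):
    """Return (final compound concentration in uM, final % v/v DMSO) for a DMSO
    stock diluted into extracellular solution."""
    final_uM = stock_mM * 1000.0 / dilution_factor
    dmso_percent = 100.0 / dilution_factor
    return final_uM, dmso_percent

for stock in (10, 30, 100):                 # mM stocks, as in Section 2.4
    conc, dmso = dilute(stock, 1000)        # illustrative 1:1000 dilution
    print(f"{stock} mM stock -> {conc:.0f} uM test solution, {dmso:.1f}% DMSO")
```

Keeping the dilution factor at 1:1000 or greater is what keeps the final DMSO at or below the 0.1% level mentioned above.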
3. Methods
A general procedure for planar patch clamping can be applied to all studies of ion channels. Steps include the following: (1) preparation of a high-density suspension of cells in external solution (3–5 × 10⁶ cells per mL, typically one T75 culture flask), (2) preparation of protocols on the instrument, including the protocol for achieving whole-cell mode, the voltage protocol and the drug application protocol, and (3) preparation of drug solutions. A separate drug plate, typically 96-well, contains solutions to be used either in single-point screening mode or in a multi-point, concentration–effect mode (see Note 4). Once the cell suspension, drug plate and patch plate are ready and in position on the instrument platform, the experiment can begin under full robotic control. Analysis of data can be done using either built-in software routines or third-party methods.
3.1. Cell Culture
HEK293 cells stably transfected with cDNA encoding the human brain type IIa sodium channel α-subunit (GSK R&D, UK) were grown in DMEM supplemented with 10% heat-inactivated foetal calf serum and geneticin as the selection antibiotic (400 μg/mL). CHO cells stably expressing human hERG were purchased from AVIVA Bioscience (4) and grown in DMEM/F12 medium supplemented with 10% FBS, 1% pen/strep and 1% L-glutamine. Cells were grown in standard, vented 75-cm² flasks in a 5% CO2 atmosphere at 37°C.
3.2. Sub-culturing Cells
Cells are passaged using standard methods with care taken to split cells when confluency reaches 70% for Nav1.2 cells and 80% for hERG cells. Permitting overgrowth at any time during passaging can result in a dramatic loss of expression. Both cell types are enzymatically dispersed to ensure uniform seeding onto daughter plates.
3.3. Preparing Cells for Experiments
An example of the weekly cycle of cell culture for HEK293 cells is provided (Table 10.1): On Monday, two or more T75 flasks (80% confluent) are split; three T75 flasks are plated at 1 × 10⁶ cells/mL for use on Wednesday and three T75 flasks are plated at 5 × 10⁵ cells/mL for use on Thursday. On Wednesday it is also necessary to repeat the split and plate at 1 × 10⁶ cells/mL to provide sufficient flasks for further experiments on Friday and to get mother flasks for the next cycle.
Table 10.1
Example of a weekly cycle of cell culture to provide cells for experiment on 3 days

Monday:    Two mother flasks
Tuesday:   –
Wednesday: Three T75 flasks (1 × 10⁶); two mother flasks
Thursday:  Three T75 flasks (5 × 10⁵)
Friday:    Three T75 flasks (1 × 10⁶); two mother flasks
It is important that cells in the T75 flasks for use in the assay are 70% confluent. Cell isolation into a single-cell suspension for addition to the QPatch is aided if the cells are thinly spread out rather than in rafts forming a monolayer (see Note 5).
3.4. Harvesting and Preparation of Cells Using the QPatch OnBoard Stirrer
1. Remove growth medium from the culture flask by aspiration when cells are 70% confluent.
2. Wash cells 1× with D-PBS and remove buffer. Add 6–7 mL T/E 1× (ensure an even distribution of T/E), then remove the T/E and leave the culture flask to rest for 1 minute in the incubator. Ensure that cells are rounded up before proceeding. If not, continue incubation.
3. Detach cells by firmly tapping the sides of the flask until the cells loosen from the bottom. Add 10 mL of fresh 293 SFM II medium containing 25 mM HEPES and 0.04 mg/mL soy bean trypsin inhibitor to the first of the three flasks and resuspend the cells by gently working the cell suspension up and down a pipette 5–10 times to break up cell clumps.
4. Immediately add the cell suspension to the storage container on the QPatch and start stirring.
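Because the general procedure in Section 3 assumes a cell density of 3–5 × 10⁶ cells/mL, it can be worth checking the resuspension volume against a quick cell count before loading the storage container. A minimal sketch, with the flask yield invented for illustration:

```python
def resuspension_volume_mL(total_cells: float, target_density_per_mL: float = 4e6) -> float:
    """Medium volume in which to resuspend the harvested cells to hit the target density."""
    return total_cells / target_density_per_mL

# e.g. a T75 flask yielding an assumed 8 million cells, aiming for 4 x 10^6 cells/mL
print(f"resuspend in {resuspension_volume_mL(8e6):.1f} mL")
```

If the counted yield puts the suspension outside the 3–5 × 10⁶ cells/mL window, the volume added in step 3 can simply be scaled accordingly.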
3.5. Recording from Na Channels: Validation of the QPatch hNaV1.2 Assay
3.5.1. Biophysical Characteristics
The basic biophysical characteristics of sodium channels can be studied by the application of a family of voltage steps. Cells are held at a potential of –90 mV. Depolarizing voltage steps of 10 ms duration are applied to a range of potentials (–40 mV to +40 mV, in intervals of 10 mV) (Fig. 10.1a). By measuring the current that is generated at each step potential (Fig. 10.1b) and applying leak subtraction (P4), a current–voltage plot can be generated using the QPatch software (Fig. 10.1c). In studies conducted with the HEK293–hNaV1.2 cell line, the maximum peak inward current was evoked with a test pulse to –10 mV. Furthermore, extrapolation of the curve suggests that the inward current reverses at around +50 to +60 mV, consistent with the reversal potential of sodium ions observed in manual patch clamp experiments under similar conditions.
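The same I–V construction can be reproduced offline from exported sweeps. The sketch below builds the voltage-step family described above and takes the peak inward current at each step; the sweeps are synthetic placeholders, not QPatch data, and any real analysis would read the exported traces instead:

```python
# Voltage-step family and peak-current extraction for an I-V plot (synthetic data only).
steps_mV = list(range(-40, 50, 10))          # -40, -30, ..., +40 mV in 10 mV increments

def peak_inward_pA(sweep):
    """Most negative (inward) current in one sweep."""
    return min(sweep)

# Placeholder sweeps: one list of current samples (pA) per step potential.
fake_sweeps = {v: [0.0, -1800.0 * max(0.0, 1 - abs(v + 10) / 60.0), 0.0] for v in steps_mV}

iv_curve = [(v, peak_inward_pA(fake_sweeps[v])) for v in steps_mV]
for v, i in iv_curve:
    print(f"{v:+4d} mV : {i:8.1f} pA")
```

Plotting the resulting (voltage, peak current) pairs reproduces the I–V relationship shown in Fig. 10.1c, with the largest inward current expected near –10 mV for this cell line.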
Fig. 10.1. (a) Voltage protocol used to construct an I–V plot from HEK293–hNaV1.2 cells. Cells were held at a membrane potential of –90 mV and stepped to a range of potentials (–40 to +40 mV) for 10 ms. (b) Examples of the current responses recorded in response to the family of voltage steps. (c) An I–V plot of these data shows that the peak sodium current could be evoked with a voltage step to –10 mV, consistent with results from similar experiments using manual patch-clamp electrophysiology.
3.5.2. Sensitivity to DMSO
DMSO is typically used to dissolve test compounds. At high concentrations the solvent can disrupt plasma membrane properties and may affect the quality of the recordings and pharmacological
results. Consequently, the sensitivity of the QPatch hNaV1.2 assay to DMSO was determined by applying the voltage protocol used to construct an I–V plot in the presence of increasing concentrations of the solvent (data not shown). Results suggest that a maximum concentration of 0.3% v/v should be used, since higher concentrations adversely affected current responses. In practice, concentrations of 0.01–0.1% v/v DMSO are generally sufficient to ensure the dissolution of most compounds up to concentrations of 10 µM in physiological saline.
3.5.3. Stability of Recordings Over Time
The longevity of whole-cell recordings is important since the protocol used to estimate the affinity of a compound for the inactivated state of the NaV1.2 channels is quite long with respect to the simple I–V determination discussed earlier. As a result, it is important that stable recordings be maintained for 20–30 minutes. Therefore, in a further set of experiments the viability of whole-cell recordings from HEK293–NaV1.2 cells was followed over time. For these experiments, 20 ms test pulses to 0 mV were applied from different holding potentials at an interval of 7 s. Analysis of the peak current evoked by each pulse over time (Fig. 10.2) shows that viable recordings could be maintained for at least 20 minutes when cells were held at relatively hyperpolarised potentials of –90 or –120 mV. At a more depolarised holding potential of –70 mV, there was an evident rundown of the inward current, possibly due to gradual accumulation of the channels in the inactivated state.
Fig. 10.2. The time course of inward sodium currents evoked by voltage steps to 0 mV at intervals of 7 s. Data shown are the mean peak amplitudes of the current responses from cells held at –120, –90 or –70 mV (n = 7).
3.6. Criteria for Excluding Cells from the Analysis
It was possible on occasion to obtain 14–15 stable whole-cell recordings from each 16-well QPlate, although the average over a 10-plate trial run was 8.5 (or 53%) (Table 10.2). Cells were included in the analysis, however, only if they met additional requirements: formation of a seal with a resistance of >0.1 MΩ; maintenance of a series resistance of <10 MΩ in the whole-cell configuration for the duration of the experiment; and a peak current greater than 200 pA evoked following a conditioning pulse to –120 mV.
Table 10.2
Success rates defined by various criteria for HEK293–NaV1.2 cell line

Number of plates tested    Whole-cell rate (%)    Completed experiments (%)    Cells used for analysis (%)
10                         53                     49                           39

Average per plate (out of 16 possible).
3.7. Estimation of the Affinity of Compounds for the Inactivated State of hNaV1.2 Channels
3.7.1. Voltage Protocol to Investigate Steady-State Inactivation
A protocol to study steady-state inactivation of the hNaV1.2 channels was developed based on one described by Bean et al. and Kuo and Bean (15, 16). The HEK293–hNaV1.2 cells were held at –120 mV and stepped to a range of conditioning voltages (–120 to –40 mV) for 9 s to induce steady-state inactivation. At the end of each conditioning period, the cell was stepped to +20 mV for 2 ms to elicit residual sodium currents.
3.7.2. Data Analysis
For each cell, the peak current is plotted against the conditioning voltage (Fig. 10.3). This plot illustrates the voltage dependence of inactivation of the channels and can be fitted to the following Boltzmann equation:

Y = 1 / (1 + exp((V – Vh)/k))

where Vh represents the voltage at which inactivation reaches its midpoint and k is the slope factor. The shift in the Vh value (ΔV = Vh,drug – Vh,control) is determined for each drug concentration tested. The ΔV values thus obtained are then normalised with respect to the slope factor and used to estimate the Ki using the following equation:

exp(ΔV/k) = (1 + X/KR) / (1 + X/Ki)

where X is the drug concentration and KR is the affinity constant for the resting state of the channels [KR is approximately equivalent to the IC50 value obtained from the concentration–response curve at a holding potential of –110 mV, where no inactivation is present (17)].
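The Boltzmann fit and the Ki estimate can also be reproduced outside the QPatch analysis software; the sketch below is a minimal Python/SciPy illustration in which the availability data, drug concentration X and resting-state affinity KR are invented for demonstration and the fitting routine is assumed, not taken from the QPatch package.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, vh, k):
    """Fraction of channels available after a conditioning pulse to voltage v (mV)."""
    return 1.0 / (1.0 + np.exp((v - vh) / k))

# Hypothetical normalised availability data (control and drug); real values would come
# from the steady-state inactivation protocol described above.
v_cond = np.arange(-120, -30, 10)               # conditioning potentials (mV)
avail_ctrl = boltzmann(v_cond, -60.0, 6.0)      # placeholder control curve
avail_drug = boltzmann(v_cond, -72.0, 6.0)      # placeholder curve in the presence of drug

(vh_ctrl, k_ctrl), _ = curve_fit(boltzmann, v_cond, avail_ctrl, p0=(-60, 6))
(vh_drug, k_drug), _ = curve_fit(boltzmann, v_cond, avail_drug, p0=(-60, 6))

dV = vh_drug - vh_ctrl          # hyperpolarising shift (negative, in mV)
X, KR = 10.0, 30.0              # assumed drug concentration and resting-state affinity (µM)

# Rearranging exp(dV/k) = (1 + X/KR)/(1 + X/Ki) for Ki:
Ki = X / ((1.0 + X / KR) * np.exp(-dV / k_ctrl) - 1.0)
print(f"dV = {dV:.1f} mV, Ki = {Ki:.2f} µM")
```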
Fig. 10.3. Current–voltage plots showing the inactivation of sodium currents with more depolarised conditioning potentials. Data shown are the mean ± SEM (n = 9 cells) of the peak currents at each conditioning voltage fitted to a Boltzmann equation. Drug application causes a concentration-dependent, leftward shift of the curve. Dotted line – control; solid line – increasing concentration of drug.
A comparison of Ki values determined using QPatch and manual patch electrophysiology for a series of sodium channel inhibitors is shown in Fig. 10.4; there is good agreement with values obtained using manual patch clamp performed under similar conditions and with a similar voltage protocol.
Fig. 10.4. The estimated affinity (Ki) of a series of sodium channel blockers for the inactivated state of human NaV1.2 channels expressed in HEK293 cells. Data shown were determined from at least five cells using QPatch and conventional manual patch-clamp techniques. Voltage protocols and other experimental conditions were similar between the two techniques.
3.8. Recording Whole-Cell hERG Currents
3.8.1. Preparing the Cell Suspension
Flasks are selected at 50–90% confluency, media are removed, cells are washed once with D-PBS and 4 mL of Detachin™ is placed evenly around the cells. The flask is returned to the incubator for 7 minutes. Subsequently the flask is observed on the microscope (10× objective) for the presence of rounded and freely floating cells. Tapping of the flask is not necessary. Cells are gently suspended by pipetting the solution up and down along the bottom of the tipped flask three to four times. The cell suspension is removed and placed in a 15-mL conical tube and centrifuged at 1000 rpm for 2 minutes. The supernatant is removed; cells are resuspended in 5–7 mL of external solution and centrifuged again at 1000 rpm for 2 minutes. The supernatant is removed and 300–500 µL of external solution is added to achieve a final cell density of about 3 × 10⁶ cells/mL (final volume depends on the number of cells in the sample).
3.8.2. Drug Plate Preparation
Stock solutions of compounds in DMSO at 10 mM are diluted to 10 and 30 µM in EC solution; the 10 µM solutions are further diluted to make 3 and 0.3 µM concentrations. These are placed in 96 deep-well plates (1 mL volumes) with each compound occupying one partial row. In the job set-up, the minimum and maximum compound repetition variables determine the number of replicates for each individual compound. This method obviates the need to repeat rows for each compound, simplifies drug plate preparation and provides more flexibility when repeated jobs are required to provide the required number of replicates.
3.8.3. Voltage, Whole-Cell and Application Protocols
The protocol for obtaining gigaseals and whole-cell configuration is a standard protocol supplied by Sophion Bioscience for CHO cells. After whole-cell mode is obtained, cells are held at –90 mV. hERG currents are activated by stepping to +20 mV for 2 s and then to –50 mV for 4 s (Fig. 10.5). This second repolarizing step generates an outward tail current resulting from the recovery from inactivation (18) (Fig. 10.6). The pulse protocol is repeated every 12 s until the last drug concentration is tested (about 25 minutes). Typically several additions of external solution are made while the baseline is being established. Subsequently, each drug concentration is added two or more times to ensure a complete exchange of the new solution.
3.8.4. Criteria for Selecting Cells for Analysis
Typically 14–15 stable whole-cell recordings are obtained from each Qplate (Table 10.3). Of these, one to two may be rejected based on low current (<100 pA) or current ‘‘rundown’’, defined as a progressive loss of current over time.
Fig. 10.5. hERG outward tail currents elicited from a voltage-clamped cell on the QPlate. Arrows mark the times where resting currents and peak tail currents are measured. Note the progressive loss of current as haloperidol is applied at concentrations ranging from 30 nM to 10 µM.
Fig. 10.6. Plot of peak tail currents (minus resting current) from the example cell shown in Fig. 10.5. Note the modest rundown that occurs during the first 400 s and subsequently stabilises. Drugs are applied (arrows) twice for the first two concentrations and once for the last two concentrations. Note also that the first of the two drug additions does not produce a steady-state inhibition.
Table 10.3
Success rates defined by various criteria for CHO–hERG cell line

No. plates tested    No. cells tested    WC rate*    Completed protocols*    Used for data analysis*    No-current rate*    Run-down rate*
8                    114                 89          74                      53                         2.6                 17

*Expressed as percent of total number of cells tested.
Cells are rejected if rundown exceeds 5% per minute or if more than 25% of current is lost before the first drug addition occurs.
3.8.5. Data Analysis
IC50s are determined using the QPatch analysis software. Peak outward current at –50 mV, minus the holding current measured at a 5-ms pre-pulse to –50 mV, is plotted every 10 s. The stability of the current over time and the effect of drugs are inspected; data are excluded from cells if they exhibit excessive rundown, have currents less than 80 pA or if drug effects do not plateau (Fig. 10.7). Sigmoidal curve fits are calculated using the QPatch analysis software using the last buffer response to define the 100% current value (Fig. 10.7). Table 10.4 shows IC50 values for reference compounds compared to literature values. In general, there is good agreement with manual patch-clamp methods.
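An equivalent sigmoidal fit can be sketched outside the QPatch software; the Python/SciPy example below uses invented, normalised tail-current values and a simple Hill model (the concentrations, responses and parameter names are assumptions for illustration only).

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n_hill):
    """Fraction of the control hERG tail current remaining at a given drug concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** n_hill)

# Hypothetical concentration-response data, normalised to the last buffer response.
conc = np.array([0.03, 0.3, 3.0, 10.0])          # µM
response = np.array([0.92, 0.55, 0.12, 0.04])    # normalised peak tail current

(ic50, n_hill), _ = curve_fit(hill, conc, response, p0=(1.0, 1.0))
print(f"IC50 = {ic50 * 1000:.0f} nM, Hill slope = {n_hill:.2f}")
```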
Fig. 10.7. Example concentration–effect response for haloperidol. Peak current is determined for each concentration of compound, using the second addition for double addition protocols, and the curve fit is generated using the Hill equation.
Table 10.4
Average IC50 values for reference compounds at hERG determined using the QPatch compared to manual patch-clamp methods

Compound        QPatch (nM) (a)    Manual patch clamp (nM)
Haloperidol     36                 15 (b)
Amiodarone      1374               48 (c)–1000 (d)
Quinidine       681                300 (d)
Clozapine       1510               1700 (b)
Cisapride       30                 15 (c)
Aspirin         >30,000            >30,000 (b)
Thioridazine    515                116 (c)

(a) Mean of 3–7 determinations. (b) In-house determination. (c) Guo and Guthrie (10). (d) Redfern et al. (9).
4. Conclusions
Automated patch clamping using planar patch technology combined with sophisticated liquid handling offers the ability to greatly increase throughput with minimal sacrifice in data quality. We routinely complete concentration–response curves for 25–30 cells in a single day, roughly four times the throughput of conventional methods. The voltage-dependent block seen with certain classes of compounds dictates that full voltage control is important for characterising the physiological effects of drug application. This has been found to be very important for measuring drug block of hERG (18). An example was shown for human NaV1.2, where compounds were found to cause a shift in the voltage dependence of channel inactivation. It is possible to apply the vast majority of voltage manipulations in an automated format using the QPatch. The results confirm the utility of automated patch clamping for investigating complex aspects of voltage-gated ion channel function and pharmacology.
5. Notes
1. ATP should be added to the internal solution from dry powder at the start of each day to avoid rundown of hERG current.
2. Based on experiments considering modulation of pH and osmolarity of the SFM II storage medium during the 4 hours on the QPatch, we recommend the use of 25 mM HEPES in the SFM II storage medium. We have found that addition of 25 mM HEPES modulated the pH value to a more physiological value (7.4), and it increased the osmolarity of the SFM II storage medium to a more physiological level (297 mOsm). Finally, application of 25 mM HEPES did not reduce the gigaseal formation rate or the rate of whole-cell establishment.
3. For some series of compounds it is important to use glass-coated vessels to avoid adsorption to plastics.
4. Drug additions to the chip units are made at least twice in order to ensure that complete equilibration occurs to the required drug concentration.
5. If confluency is anticipated to be higher than 80% on the day of the experiment, flasks can be placed in an incubator at 30°C for 24 hours. This slows growth and can improve surface expression for some cell lines.

References
1. Hamill, O. P., Marty, A., Neher, E., Sakmann, B., and Sigworth, F. J. (1981) Improved patch-clamp techniques for high-resolution current recording from cells and cell-free membrane patches. Pflugers Arch. 391, 85–100.
2. Klemic, K. G., Klemic, J. F., Reed, M. A., and Sigworth, F. J. (2002) Micromolded PDMS planar electrode allows patch clamp electrical recordings from cells. Biosens. Bioelectron. 17, 597–604.
3. Dubin, A. E., Nasser, N., Rohrbacher, J., Hermans, A. N., Marrannes, R., Grantham, C., Van, R. K., Cik, M., Chaplan, S. R., Gallacher, D., Xu, J., Guia, A., Byrne, N. G., and Mathes, C. (2005) Identifying modulators of hERG channel activity using the PatchXpress planar patch clamp. J. Biomol. Screen. 10, 168–181.
4. Tao, H., Santa, A. D., Guia, A., Huang, M., Ligutti, J., Walker, G., Sithiphong, K., Chan, F., Guoliang, T., Zozulya, Z., Saya, S., Phimmachack, R., Sie, C., Yuan, J., Wu, L., Xu, J., and Ghetti, A. (2004) Automated tight seal electrophysiology for assessing the potential hERG liability of pharmaceutical compounds. Assay Drug Dev. Technol. 2, 497–506.
5. Mathes, C. (2006) QPatch: the past, present and future of automated patch clamp. Expert Opin. Ther. Targets 10, 319–327.
6. Schroeder, K., Neagle, B., Trezise, D. J., and Worley, J. (2003) IonWorks HT: a new high-throughput electrophysiology measurement platform. J. Biomol. Screen. 8, 50–64.
7. Sanguinetti, M. C., Jiang, C., Curran, M. E., and Keating, M. T. (1995) A mechanistic link between an inherited and an acquired cardiac arrhythmia: HERG encodes the IKr potassium channel. Cell 81, 299–307.
8. Trudeau, M. C., Warmke, J. W., Ganetzky, B., and Robertson, G. A. (1995) HERG, a human inward rectifier in the voltage-gated potassium channel family. Science 269, 92–95.
9. Redfern, W. S., Carlsson, L., Davis, A. S., Lynch, W. G., MacKenzie, I., Palethorpe, S., Siegl, P. K., Strang, I., Sullivan, A. T., Wallis, R., Camm, A. J., and Hammond, T. G. (2003) Relationships between preclinical cardiac electrophysiology, clinical QT interval prolongation and torsade de pointes for a broad range of drugs: evidence for a provisional safety margin in drug development. Cardiovasc. Res. 58, 32–45.
10. Guo, L. and Guthrie, H. (2005) Automated electrophysiology in the preclinical evaluation of drugs for potential QT prolongation. J. Pharmacol. Toxicol. Methods 52, 123–135.
11. Catterall, W. A. (1992) Cellular and molecular biology of voltage-gated sodium channels. Physiol. Rev. 72, S15–S48.
12. Whitaker, W. R., Faull, R. L., Waldvogel, H. J., Plumpton, C. J., Emson, P. C., and Clare, J. J. (2001) Comparative distribution of voltage-gated sodium channel proteins in human brain. Brain Res. Mol. Brain Res. 88, 37–53.
13. Catterall, W. A. (1999) Molecular properties of brain sodium channels: an important target for anticonvulsant drugs. Adv. Neurol. 79, 441–456.
14. Clare, J. J., Tate, S. N., Nobbs, M., and Romanos, M. A. (2000) Voltage-gated sodium channels as therapeutic targets. Drug Discov. Today 5, 506–520.
15. Bean, B. P., Cohen, C. J., and Tsien, R. W. (1983) Lidocaine block of cardiac sodium channels. J. Gen. Physiol. 81, 613–642.
16. Kuo, C. C. and Bean, B. P. (1994) Slow binding of phenytoin to inactivated sodium channels in rat hippocampal neurons. Mol. Pharmacol. 46, 716–725.
17. Kuo, C. C. and Lu, L. (1997) Characterization of lamotrigine inhibition of Na+ channels in rat hippocampal neurones. Br. J. Pharmacol. 121, 1231–1238.
18. Witchel, H. J., Milnes, J. T., Mitcheson, J. S., and Hancox, J. C. (2002) Troubleshooting problems with in vitro screening of drugs for QT interval prolongation using HERG K+ channels expressed in mammalian cell lines and Xenopus oocytes. J. Pharmacol. Toxicol. Methods 48, 65–80.
Chapter 11
High-Throughput Screening of the Cyclic AMP-Dependent Protein Kinase (PKA) Using the Caliper Microfluidic Platform
Leonard J. Blackwell, Steve Birkos, Rhonda Hallam, Gretchen Van De Carr, Jamie Arroway, Carla M. Suto, and William P. Janzen

Abstract
Inhibitors of kinase activities can be mechanistically diverse, genomically selective, and pathway sensitive. This potential has made these biological targets the focus of a number of drug discovery and development programs in the pharmaceutical industry. To this end, the high-throughput screening of kinase targets against diverse chemical libraries or focused compound collections is at the forefront of the drug discovery process. Thus, the platform technology used to screen such libraries must be flexible and produce reliable and comparable data. The Caliper HTS microfluidic platform provides a direct determination of a peptidic substrate and phosphorylated product through the electrophoretic separation of the two species. The resulting data are reliable and comparable among screens and cover a broad range of biological targets, provided there is a definable peptide substrate that permits separation. Here we present a method for the high-throughput screening of the cyclic AMP-dependent protein kinase (PKA) as an example of the simplicity of this microfluidic platform.

Key words: Microfluidics, Kinase, PKA, High-throughput screening, HTS, Caliper.
1. Introduction
Kinases are one of the leading drug targets of the pharmaceutical industry (1–3). This is in part due to the early success of Gleevec, an inhibitor of the kinase activity of the Bcr–Abl fusion protein involved in chronic myeloid leukemia (CML) (4). To this end, there have been significant technological advancements in the method and platform development of kinase assays in the high-throughput screening arena. Many assays rely on
secondary activities to detect phosphorylation by kinases using either phospho-specific antibodies (AlphaScreen, PerkinElmer) (5), metal chelates (IMAP, Molecular Devices) (6), protease sensitivity (Z′-LYTE, Invitrogen), or ADP detection reagents (Kinase-Glo, Promega) (7). The Caliper microfluidic platform has the advantage of directly measuring the status of both a phosphorylated product and a substrate peptide with an accurate determination of the ratio of the two species (8). The technology uses capillary electrophoresis to separate the reaction products into two distinct peaks using a combination of voltage and pressure. The assay relies on the design of a peptide that serves as a substrate for the kinase and is electrophoretically neutral so as to produce clear separation between the negatively charged products relative to the substrate. Samples can be processed in 20 s and the microfluidic chips have 12 channels allowing parallel processing. Kinase reactions are assembled in the presence of an inhibitor on a 384-well plate and terminated with EDTA at a kinetically determined end point. Terminated reactions are processed on the Caliper LabChip 3000. A single plate can be read in 20 min. The output of the assay is the ratio of the product peak height to the summed peak heights of product and substrate, referred to as the PSR (product/sum ratio). Because of this, small variations in the fluorescent output will have very little impact on the PSR value, insuring the reproducibility of the data and the reliability of the assay. Typically the kinetic end point of the assay is validated in the range of 0.3–0.5 PSR and insured to be in the linear phase of the reaction. From an operational standpoint, most kinase reactions are terminated after 3 hr. Here we present an example of the platform detailing the high-throughput screening of the cyclic AMP-dependent protein kinase (PKA). The percent inhibition cutoff for determining active compounds is 13.3% (see Section 3.7, Step 4 for a discussion of assay threshold). The Z′ (11) for the assay is 0.85. Prior to HTS the PKA assay was developed with a validated peptide substrate and the biochemical parameters for the assay determined, including ATP Km, rate, specific activity, and control inhibitor validation. The operating buffer is standardized for serine/threonine kinases and the concentration of ATP used is twice the Km. With this description as a template, other kinases may be substituted after development and validation of the assay. The microfluidic platform additionally offers biological diversity with the ability to assay several peptide-based target classes including phosphatase, protease, and histone acetylase and deacetylase activities.
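The PSR definition above is a simple ratio; the two-line sketch below shows the arithmetic in Python, using invented peak heights purely for illustration.

```python
def product_sum_ratio(product_peak, substrate_peak):
    """PSR: product peak height divided by the sum of the product and substrate peak heights."""
    return product_peak / (product_peak + substrate_peak)

# Example: a product peak of 1200 RFU and a substrate peak of 2800 RFU give PSR = 0.3,
# i.e. 30% conversion, the lower bound of the validated kinetic end point.
print(product_sum_ratio(1200.0, 2800.0))  # 0.3
```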
2. Materials
2.1. Proteins and Peptides
1. Cyclic AMP-dependent protein kinase catalytic subunit (PKA) was purchased from Sigma, catalog number C-8482. PKA is a human, recombinant, His-tagged protein expressed in Escherichia coli. 2. The peptide used for detection of PKA phosphorylation is labeled at the amino-terminal with carboxyfluorescein (FAM). The amino acid sequence is LRRASLG (Kemptide) (9) and was synthesized by CPC Scientific. The peptide is dissolved in 40% DMSO to yield a 1 mM solution.
2.2. Kinase Reaction
1. Master buffer (2×): 200 mM HEPES, pH 7.5, 0.2% BSA (w/v), 0.02% Triton X-100 (v/v).
2. Enzyme buffer (2.5×): 1× master buffer, 2.5 mM DTT, 25 mM MgCl2, 25 mM sodium orthovanadate, 25 mM β-glycerophosphate, 25 µM ATP, 0.08 nM PKA.
3. Substrate buffer (2.5×): 1× master buffer, 2.5 µM FAM-LRRASLG.
4. Termination buffer (1.55×): 1× master buffer, 31 mM ethylenediamine tetraacetic acid (EDTA).
5. Compound (5×) in 1× master buffer.
2.3. Enzyme Validation
1. Enzyme buffer (2×): 1× master buffer, 2 mM DTT, 20 mM MgCl2, 20 mM sodium orthovanadate, 20 mM β-glycerophosphate, 20 µM ATP, 2% DMSO.
2. Substrate buffer (2×): 1× master buffer, 2 µM FAM-LRRASLG.
3. Termination buffer (1.4×): 1× master buffer, 28.8 mM EDTA.
2.4. Compound Preparation
1. Compound buffer (5×): 1× master buffer, 50 µM compound, 5% DMSO (from compound).
2.5. Staurosporine Assay Plate Preparation
1. Staurosporine (Calbiochem), 3 mM, dissolved in 100% DMSO.
2. Inhibitor dilution buffer: 1× master buffer, 5% DMSO.
2.6. Microfluidic Analysis
1. Separation buffer: 1× master buffer, 0.36% DMSO, 20 mM EDTA, 0.1% coating reagent (supplied by Caliper).
2. Row marker dye: 1× master buffer, 20 mM EDTA, 0.75 µM FAM-LRRASLG.
3. Methods
The following procedure assumes access to equipment needed to accomplish a high-throughput screen. In addition to a Caliper LabChip 3000, liquid-handling instrumentation needed to create compound-containing assay plates and distribute enzymatic reaction components (BioMEK, Beckman Coulter; multidrop, Thermo Labsystems) is required. Prior to the high-throughput screening of PKA, the kinase activity was subjected to a rigorous assay development process in an effort to determine the biochemical parameters, performance, and validation of the assay. The first step of the assay development process is optimizing a peptide substrate that is both an efficient and a biologically relevant substrate for the kinase and whose product and substrate populations can be separated into two distinct peaks. Microfluidic separation of the peptide is optimized through the manipulation of voltage and pressure on the Caliper instrument. The ATP Km had been determined to be 4.5 ± 0.7 µM. Inhibition of the PKA activity is measured at twice the Km, or 10 µM. Before initiating the PKA screen, the enzymatic activity is first validated to insure a quantity of PKA that results in a product/sum ratio (PSR) of 0.3–0.5 after a 3 hr incubation period. This is required as the activity of PKA may vary from vendor to vendor and lot to lot. Validation is accomplished with an enzyme titration at a fixed ATP concentration and a kinetically determined end point. Statistical validation of the assay is then determined using experimentally designed assay plates that result in known values of inhibition derived from an IC50 titration of staurosporine. With the assay validated, the high-throughput screen is begun by first preparing assay plates that contain 5 µl of a 5× concentration of the compound (50 µM) in the aqueous buffer. Assay plates are created through the addition of aqueous buffer to daughter plates containing 1 mM compound in neat DMSO. Daughter plates are created directly either from 10-mM master plates or from intermediate 3-mM mother plates. Prior to screening, we typically quantitate the compounds by high-performance liquid chromatography (HPLC) coupled with a chemiluminescent nitrogen detector (CLND) in the aqueous buffer relevant for follow-up and IC50 determination (10). We have determined previously that the aqueous concentrations of the compounds vary dramatically compared to their DMSO stocks (10). Each of the assay plates is designed to contain a statistically significant number of controls: 0% inhibition wells without inhibitor and 100% inhibition controls that contain 20 mM EDTA. With the assay plates ready, a 2.5× enzyme buffer is added to the plate using a
multidrop device and the reaction initiated by the addition of the 2.5× buffer that contains the fluorescently labeled peptide substrate. The addition of the enzyme buffer first allows potential prebinding of the inhibitor prior to the initiation of the reaction. After a 3 hr incubation period the reaction is terminated with 20 mM EDTA. Terminated reactions are stable for more than 24 hr, so there are no time constraints on plate reading. The plates are read on the Caliper instrument and the data collected for analysis.
3.1. Enzyme Validation
1. To insure that the enzyme is performing to the desired activity within the 3-hr time period, PKA is titrated against the fluorescently labeled substrate peptide. The assays are designed such that two solutions are consecutively added: one that contains the kinase and a majority of the reaction components including ATP and Mg²⁺ and one that contains the peptide. All assays are performed on 384-well plates, which have 16 rows and 24 columns. For convenience the enzyme dilutions are completed directly on the assay plate. A 125-µl-capacity multichannel pipette is ideal for this task.
2. Dilute PKA 1:10 in 1× master buffer.
3. For an eight-point serial dilution, add 99 µl of 2× enzyme buffer to well M1 on a 384-well plate, then 50 µl of 2× enzyme buffer to wells M2–M8.
4. Add 1 µl of diluted PKA to well M1 for a 1:1000 dilution of PKA. Lot #092K0330 had an initial concentration of 2.3 µM. The starting concentration of the dilution is therefore 0.0023 µM.
5. Dilute serially, adding 50 µl of M1 to the 50 µl of M2; mix and repeat from M3 to M8.
6. Add 10 µl of 2× substrate buffer to rows A and E, wells 1–24.
7. Initiate the reaction by adding 10 µl of the diluted PKA eight wells at a time to wells A1–8, A9–16, and A17–24 for triplicate determination. To row E, add 10 µl of 1× master buffer as a comparative control for peak mobility (substrate only). Seal plates and incubate for 3 hr at room temperature. The final concentrations of PKA are 1.3, 0.65, 0.325, 0.163, 0.081, 0.041, 0.020, and 0.010 nM.
8. Stop reactions with 50 µl of termination buffer.
9. Analyze the plate on the Caliper instrument.
10. Plot the data as PSR on the y-axis and the concentration of PKA on the x-axis, fitting the data to a hyperbola. Determine the concentration of PKA that results in a PSR of 0.3, or 30% substrate-to-product conversion. This is the amount of enzyme to be used in the assay for HTS (a fitting sketch follows these steps).
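The hyperbolic fit in Step 10 can be done in any curve-fitting package; the following is a minimal Python/SciPy sketch with invented titration values (the PSR numbers and the half-maximal constant are assumptions chosen only to illustrate the inversion to PSR = 0.3).

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(enzyme_nm, psr_max, k_half):
    """PSR as a hyperbolic function of PKA concentration (nM)."""
    return psr_max * enzyme_nm / (k_half + enzyme_nm)

# Hypothetical titration results; real PSR values come from the Caliper readout.
pka_nm = np.array([1.3, 0.65, 0.325, 0.163, 0.081, 0.041, 0.020, 0.010])
psr = np.array([0.72, 0.58, 0.42, 0.27, 0.15, 0.08, 0.04, 0.02])

(psr_max, k_half), _ = curve_fit(hyperbola, pka_nm, psr, p0=(1.0, 0.5))

# Invert the fitted hyperbola to find the PKA concentration that gives PSR = 0.3.
target_psr = 0.3
pka_for_hts = target_psr * k_half / (psr_max - target_psr)
print(f"Use ~{pka_for_hts:.2f} nM PKA in the HTS")
```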
3.2. Staurosporine IC50 Determination
1. The IC50 value of staurosporine will be determined for PKA to use for validation of the assay below. Here the reagents outlined for the kinase reaction (Section 2.2) will be used.
2. Prepare a 50 µM stock of staurosporine from the 3 mM stock in the inhibitor dilution buffer.
3. Serially dilute the 50 µM staurosporine 15 times in the inhibitor dilution buffer. There will be 16 staurosporine assay points in total. Add 5 µl of each concentration to wells A1–A16 and E1–E16 (most to least concentrated) for duplicate IC50 determination. These are 5× concentrations of inhibitor. To wells A17–24 and E17–24, add 5 µl of inhibitor dilution buffer.
4. To wells A1–A24 and E1–E24, add 10 µl of 2.5× enzyme buffer followed by 10 µl of 2.5× substrate buffer. Cover the plate, incubate at room temperature, and after 3 hr stop the reaction by adding 45 µl of termination buffer.
5. Analyze the samples in the Caliper instrument and extract the PSR values relative to the staurosporine concentrations.
6. Based on the average of the zeros (wells A17–24 and E17–24), derive the percent inhibition for the staurosporine-containing wells (see Note 5). Plot the percent inhibition on the y-axis and the staurosporine concentration on the x-axis. If using Excel, fit the data to a dose–response, single-site model to derive the IC50. We previously determined the staurosporine IC50 for PKA to be 4 nM. Also determine from the fit the amount of staurosporine required for 20, 50, and 70% inhibition for the assay validation below (see the sketch following these steps).
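The single-site fit and the back-calculation of the 20, 50 and 70% inhibition concentrations can equally be done in Python rather than Excel; the sketch below uses invented duplicate-averaged data and a Hill slope fixed at 1 as assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc_nm, ic50_nm):
    """Single-site percent-inhibition model with the Hill slope fixed at 1."""
    return 100.0 * conc_nm / (ic50_nm + conc_nm)

# Hypothetical percent-inhibition data for staurosporine (duplicate averages).
conc_nm = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])
pct_inhib = np.array([11, 20, 33, 50, 67, 80, 89, 94])

(ic50_nm,), _ = curve_fit(dose_response, conc_nm, pct_inhib, p0=(4.0,))

# Invert the model to find the concentrations giving 20, 50 and 70% inhibition
# for the validation plates described in Section 3.3.
for target in (20, 50, 70):
    conc = ic50_nm * target / (100.0 - target)
    print(f"{target}% inhibition at ~{conc:.1f} nM staurosporine")
```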
3.3. Assay Validation
1. Create three assay plates according to Fig. 11.1. The plates are designed to contain statistically meaningful numbers of wells with staurosporine at quantities that result in 0, 20, 50, and 70% inhibition at a 5× concentration. The amount of staurosporine used in the validation is based on the IC50 determination of the inhibitor. Five microliters of each inhibitor concentration is added to the designated wells on the assay plate. This process is best suited for robotic liquid-handling instrumentation.
2. Add 10 µl of 2.5× enzyme buffer followed by 10 µl of 2.5× substrate buffer using a multidrop or similar liquid-handling instrument. Seal the plates, incubate at room temperature, and after 3 hr terminate with 45 µl of termination buffer.
[Figure 11.1: 384-well plate map showing the positions of the 0%, 100%, 20%, 50% and 70% inhibition control wells used for validation.]
Fig. 11.1. HTS plate layout format with controls and validation pattern.
3. Read the plates on the Caliper instrument and extract the percent inhibition values from the plates using the 0 and 100% inhibition controls designated in the Well Analyzer software.
4. Perform statistical analysis on the plate data, considering both well-to-well and plate-to-plate variation in the average and standard deviation of the values, to insure that assay performance meets quality standards. Additional experiments may be performed on consecutive days to assess day-to-day variation of the assay as well. For validation, it is more important that the inhibition values are consistent among the wells than that they are exactly the values predicted from the staurosporine IC50 curve.
3.4. Assay Plate Preparation
1. Assay plates containing compounds and controls at 5× concentration according to the plate map in Fig. 11.1 are created using automated liquid-handling instrumentation (e.g., Biomek FX, Beckman Coulter).
2. Daughter plates containing 1 mM compound in neat DMSO are diluted to 50 µM with assay buffer.
3. DMSO (0%) and 20 mM EDTA (100%) controls are added to the wells indicated in Fig. 11.1.
4. Five microliters of diluted compound are transferred to the assay plate, which is now ready for the addition of the kinase reaction reagents.
3.5. HTS Kinase Reaction
1. To an assay plate containing 5 µl of compound, add 10 µl of 2.5× enzyme buffer followed by 10 µl of 2.5× substrate buffer using a multidrop (Thermo Labsystems). Cover the plates and incubate for 3 hr at room temperature. Terminate the reactions with 45 µl of termination buffer. Measure phosphorylation of the peptide on the Caliper instrument.
3.6. Caliper Operation
1. Prepare a microfluidic chip by first rinsing the chip with deionized water and suctioning off the excess fluid with a vacuum line fitted with a micropipette tip. Take care not to suction the microfluidic channels directly.
2. Fill the waste wells with 0.42 ml of separation buffer and each of the upstream wells with 65 ml of separation buffer.
3. Load the chip into the cassette and secure the lid.
4. Flow separation buffer through the instrument lines and fill each of the two dye troughs with 1.5 ml of row marker dye.
5. Place the cassette with the microfluidic chip into the instrument, lock it into place, and, using the software, lower the sippers into the troughs containing the separation buffer.
6. Follow the directions of the Caliper software to align the optics, insure correct flow into all 12 channels, test the voltage of the chip, validate the baseline, and determine the simultaneous sample output of the chip.
7. Load the assay plates into robotic stacks and design a job using the instrument software with the upstream voltage set at –2700 V and the downstream voltage at –800 V. The pressure setting is –1.5 psi, the dye and sample sip times are set at 0.2 sec, the initial delay, post-dye, and post-sample sip times are set at 20 sec, and the final delay sip time is set at 120 sec.
3.7. Data Analysis
1. The data from each plate are first visually inspected as a color-coded plate map (formulated within the Caliper software) to identify any anomalous patterns associated with liquid handling or the Caliper readout before transferring into the corporate database (Fig. 11.2). 2. To eliminate fluorescent anomalies, an Excel-based template is used to automatically remove any data point whose fluorescent signal of both product and substrate is above or below 6 standard deviations of the average signal within each microfluidic channel.
Fig. 11.2. HTS screening data. Anomalous patterns caused by liquid-handling error in rows I through P would be noted by the yellow and green color.
3. Control outliers are eliminated and a statistical equilibration is performed on the 12 channels to adjust for variability in individual channels. The normalized 0 and 100% controls are then used to calculate percent inhibition values for each compound tested.
4. The assay threshold is determined using the interquartile range (IQR), rather than the sample standard deviation, to identify distribution outliers. The IQR is calculated as
IQR = Q(0.75) – Q(0.25)
where Q(x) represents the quantile below which the fraction x of the data is observed. The IQR represents the width of exactly 50% of the data. When applied to the assay data, the bulk of the distribution, representing the inactives, will fall inside the interval
[Q(0.25) – 1.5 × IQR, Q(0.75) + 1.5 × IQR].
All points outside the interval are potential actives. For a normally distributed population this method bounds 99.3% of the data, approximately the same percentage bound by an interval based on 3σ. Widening the interval to 3.95 × IQR bounds 99.9999999% of the data, which is equivalent to an interval based on 6σ (a short calculation sketch follows this list).
5. A Z′ value is calculated for each plate (11). The compounds from any plate whose Z′ is below 0.5 are rescreened.
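The IQR-based threshold in Step 4 is straightforward to compute; the Python sketch below uses simulated percent-inhibition data (the spread is an assumption chosen so that the 3.95 × IQR bound lands near the ~13.3% cutoff quoted in the text) and is not taken from the screen itself.

```python
import numpy as np

def iqr_threshold(pct_inhibition, width=1.5):
    """Return the [lower, upper] bounds outside which samples are potential actives.

    width = 1.5 reproduces the ~3-sigma interval; width = 3.95 the ~6-sigma interval.
    """
    q25, q75 = np.percentile(pct_inhibition, [25, 75])
    iqr = q75 - q25
    return q25 - width * iqr, q75 + width * iqr

# Hypothetical screen: most samples cluster around 0% inhibition.
rng = np.random.default_rng(0)
pct_inhibition = rng.normal(loc=0.0, scale=2.2, size=100_000)

low3, high3 = iqr_threshold(pct_inhibition, width=1.5)   # ~3-sigma equivalent
low6, high6 = iqr_threshold(pct_inhibition, width=3.95)  # ~6-sigma equivalent
print(f"grey area: {high3:.1f}% to {high6:.1f}% inhibition; actives above {high6:.1f}%")
```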
3.8. HTS Results
Statistics for the PKA screen are shown in Table 11.1. The total number of points is the number of wells screened, while the total number of samples is the number of compound batches screened. Based on the 3σ and 6σ determinations, there are three data populations:
1. Compounds without effect are those whose values fall within ±3σ.
2. A grey area is defined between 3σ and 6σ (<13.3% inhibition).
3. Potent hits are those whose values are >6σ (>13.3% inhibition).
Table 11.1
HTS statistics

Total number of points             261,432
Total number of samples            115,158
Number of samples (3σ–/3σ+)        111,497 (96.8%)
Number of samples (3σ+/6σ+)        1,375 (1.2%)
Number of samples (>6σ+)           224 (0.19%)
6σ cutoff                          13.34%
Boolean samples                    508
Boolean samples noted in Table 11.1 are those samples where replicate measurements fall in both the active and inactive populations. Figure 11.3 is a Gaussian distribution of the sample results around the center of the assay, or those compounds without effect. Figure 11.4 is the distribution of samples >3σ with a standard deviation determined among replicate points. Figure 11.5 is a plot of the minimum and maximum values for two replicate points that conform to a linear progression.
Fig. 11.3. Gaussian distribution of the data around the center of the assay (no effect). The percent inhibition of the assay points was binned within intervals of 2.96% and plotted as a histogram with the frequency of the range on the y-axis.
Fig. 11.4. Distribution and grey area of the outlier samples. The average and the standard deviation of duplicate assay points were plotted with rank order potency up to 100% inhibition of PKA activity. The 6σ cutoff for the screen was 13.3% inhibition.
Fig. 11.5. Plot of minimum and maximum outliers of the sample duplicates. Scatter diagram with linear R² highlights the precision of the duplicate outliers.
4. Notes
1. All solutions should be prepared in filtered water that has a resistivity of 18.2 MΩ·cm and a total organic content of less than five parts per billion.
2. The peptide is solubilized in 40% DMSO and water (v/v) to a concentration of 1 mM.
3. The fluorescently labeled peptide is quantitated by measuring the absorbance of the fluorescein fluorophore at A492 using an extinction coefficient of 79,000 cm⁻¹ M⁻¹. The concentrated peptide is diluted in 50 mM sodium carbonate, pH 9, prior to measuring the absorbance. Following quantitation, the peptide is examined on the Caliper to determine the degree of purity. The accepted level of purity, where the contaminant does not interfere with product peak assignment, is >95%.
4. The Caliper instrument can be run continuously for 5 days to shorten machine preparation and prolong the life of the microfluidic chip. All buffer solutions are stable throughout this time period. Only the chip needs to be refreshed daily with separation buffer.
5. Percent inhibition calculation:
% inhibition = [1 – (PSR_compound – PSR_100%) / (PSR_0% – PSR_100%)] × 100
6. Z′ calculation (both calculations are sketched in code below):
Z′ = 1 – [3 × (stdev_0% + stdev_100%)] / |average_0% – average_100%|
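The two formulas in Notes 5 and 6 translate directly into code; the values in the example calls below are invented purely for demonstration.

```python
def percent_inhibition(psr_compound, psr_0, psr_100):
    """Percent inhibition from product/sum ratios (Note 5)."""
    return (1.0 - (psr_compound - psr_100) / (psr_0 - psr_100)) * 100.0

def z_prime(mean_0, sd_0, mean_100, sd_100):
    """Z' assay-quality statistic from the 0% and 100% inhibition controls (Note 6)."""
    return 1.0 - 3.0 * (sd_0 + sd_100) / abs(mean_0 - mean_100)

# Illustrative values: a PSR of 0.21 with the 0% control at 0.40 and the 100% control at 0.02.
print(percent_inhibition(0.21, 0.40, 0.02))   # 50.0
print(z_prime(0.40, 0.012, 0.02, 0.004))      # ~0.87, comparable to the 0.85 quoted above
```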
References
1. O'Neill, L.A. (2006) Targeting signal transduction as a strategy to treat inflammatory diseases. Nat. Rev. Drug Discov. 5(7), 549–563.
2. Hennessy, B.T., Smith, D.L., Ram, P.T., Lu, Y., Mills, G.B. (2005) Exploiting the PI3K/AKT pathway for cancer drug discovery. Nat. Rev. Drug Discov. 4(12), 988–1004.
3. Vlahos, C.J., McDowell, S.A., Clerk, A. (2003) Kinases as therapeutic targets for heart failure. Nat. Rev. Drug Discov. 2(2), 99–113.
4. Druker, B.J., Tamura, S., Buchdunger, E., Ohno, S., Segal, G.M., Fanning, S., Zimmermann, J., Lydon, N.B. (1996) Effects of a selective inhibitor of the Abl tyrosine kinase on the growth of Bcr-Abl positive cells. Nature Med. 2(5), 561–566.
5. Warner, G., Illy, C., Pedro, L., Roby, P., Bosse, R. (2004) AlphaScreen kinase HTS platforms. Curr. Med. Chem. 11(6), 721–730.
6. Sportsman, J.R., Gaudet, E.A., Boge, A. (2004) Immobilized metal ion affinity-based fluorescence polarization (IMAP): advances in kinase screening. Assay Drug Dev. Technol. 2(2), 205–214.
7. Koresawa, M., Okabe, T. (2004) High-throughput screening with quantitation of ATP consumption: a universal non-radioisotope, homogeneous assay for protein kinase. Assay Drug Dev. Technol. 2(2), 153–160.
8. Dunne, J., Reardon, H., Trinh, V., Li, E., Farinas, J. (2004) Comparison of on-chip and off-chip microfluidic kinase assay formats. Assay Drug Dev. Technol. 2(2), 121–129.
9. Cheng, H.C., Kemp, B.E., Pearson, R.B., Smith, A.J., Misconi, L., Van Patten, S.M., Walsh, D.A. (1986) A potent synthetic peptide inhibitor of the cAMP-dependent protein kinase. J. Biol. Chem. 261(3), 989–992.
10. Janzen, W., Bernasconi, P., Cheatham, L., Mansky, P., Popa-Burke, I., Williams, K., Worley, J., Hodge, N. (2004) Optimizing the Chemical Genomics Process. In: Darvas, F., Guttman, A., and Dorman, G. (eds.), Chemical Genomics. Marcel Dekker, New York, pp. 59–100.
11. Zhang, J.H., Chung, T.D., Oldenburg, K.R. (1999) A simple statistical parameter for use in evaluation and validation of high throughput screening assays. J. Biomol. Screen. 4, 67–73.
Chapter 12
Use of Primary Human Cells in High-Throughput Screens
Angela Dunne, Mike Jowett, and Stephen Rees

Abstract
Traditionally, the objective of high-throughput screening (HTS) has been to identify compounds that interact with a defined target protein as the starting point for a chemistry lead optimisation programme. To enable this it has become commonplace to express the drug target in a recombinant expression system and use this reagent as the source of the biological material to support the HTS campaign. In this chapter we describe an alternative HTS methodology with the objective of identifying compounds that mediate a change in a defined physiological end point as a consequence of compound activity in human primary cells. Rather than screening at a defined molecular target, such "phenotypic" screens permit the identification of compounds that act at any target protein within the cell to regulate the end point under study. As an example of such a screen we will describe an HTS campaign to identify compounds that promote the production of the cytokine interferon-α from human peripheral blood mononuclear cells (PBMCs) isolated from whole blood. We describe the procedures required to obtain and purify human PBMCs and the electrochemiluminescence-based assay technology used to detect interferon-α and highlight the challenges associated with this screening paradigm.

Keywords: High-Throughput Screen, Primary cells, PBMCs, Electrochemiluminescence, MesoScale Discovery, Interferon-α.
1. Introduction
During the past 15 years, high-throughput screening (HTS) has become a central engine of drug discovery. As described in this volume, pharmaceutical and biotechnology companies, and increasingly academic institutions, have established the infrastructure to screen large libraries of chemically diverse molecules against drug targets, using automated robotic screening platforms (1, 2). In parallel with the development of HTS automation and instrumentation, a huge range of bioassay technologies have been
enabled, which share a number of common features: the assays are typically homogeneous, amenable to assay in sub-100 µl assay volumes, tolerant to compound solvents such as dimethyl sulphoxide (DMSO) and relatively cheap and simple to configure (3). Importantly, almost all HTS assays rely upon the use of recombinant protein or recombinant cell lines as the source of biological target due to the ability to generate a virtually limitless supply of material of consistent and high quality (4). Following hit identification, recombinant assays are usually complemented by downstream native tissue phenotypic assays used to profile hit or lead compounds for efficacy and mechanism of action (MOA). These assays typically rely upon the determination of compound activity in a cellular model of disease using either human primary cells or animal tissue in which the target protein is expressed in the native environment (5, 6). If the compounds are active in the phenotypic assay, the programme may progress towards the clinic; however, if the compounds are inactive, the compound series may be declared of no further interest. As a consequence, many years of effort may be wasted in optimizing molecules that ultimately have no activity in the disease-relevant phenotypic assay. For this reason it is attractive to move the phenotypic assay to an earlier point in the programme to avoid wasted work. Running the HTS using a native tissue phenotypic assay maximizes the possibility of identifying hits with the desired phenotypic activity, with recombinant assays then used to identify the MOA and to profile off-target activities. A phenotypic HTS enables the direct assessment of compound action on a pathway, rather than a defined target (7). This allows the scientist to probe all molecular targets on the pathway of interest and increases the likelihood of identifying compounds with the desired mechanism of action. However, it leads to a major question regarding the need to identify the mechanism of action of that molecule. There are two approaches to this issue: first, all hits can be profiled in recombinant assays against targets suspected to be of interest to identify the MOA, and subsequent compound optimisation can be performed in recombinant assays. Second, if knowledge of the MOA is not required, or if it is not possible to identify the MOA, then all subsequent activities could be performed using the phenotypic assay as the primary assay. In this chapter we explore the use of phenotypic assays involving primary human cells for hit discovery and discuss the issues to be addressed to enable this screening paradigm, using an HTS to identify activators of interferon-α (IFN-α) production from human peripheral blood mononuclear cells (PBMCs) as the example.
1.1. Issues to Be Addressed to Enable a Phenotypic Assay HTS
There are a number of practical issues to be addressed prior to running a phenotypic HTS. First, the supply of primary human or animal tissue is limited. Cryopreserved Primary cells can be purchased from a number of vendors and many cell lines including human lung fibroblasts, chondrocytes and neuronal cells are available. However, primary human tissue remains difficult to obtain and for this reason we have run our first phenotypic screens using human blood cells. Second, the range of assay technologies available for HTS in native tissue is relatively limited. Phenotypic assays often rely upon the detection of the level of expression of a surface protein or the determination of the concentration of a secreted analyte in the culture media using ELISA (enzyme-linked immunosorbent assay) for detection. The development of miniaturized ELISA technologies such as electrochemiluminescence (MesoScale Discovery) (8) or AlphaLISA (Perkin-Elmer) (9) enables the performance of these assays in 96- and 384-well microtitre plates, thus making these assays HTS compatible. The third issue is often organisational and comes from a perception that phenotypic assays cannot be run for HTS due to the logistical reasons mentioned here or a belief that the HTS department will not run such an assay.
1.2. HTS to Detect IFN-α Production by Human PBMCs
Toll-like receptors (TLR) are a family of at least 10 single-membrane spanning receptors, expressed in immune cells, that play a key role in mediating the innate immune response to the presence of pathogens (10). The activation of these receptors promotes leucocyte recruitment to the site of infection and causes the release of pro-inflammatory cytokines including IFN-a to cause the induction of the immune response to combat the presence of the foreign antigen (11). For this reason, TLR agonists are of interest as pro-inflammatory therapeutics to fight pathogen infection and as vaccine adjuvants (10–13). One such molecule that has been described is the imidazoquinoline compound Resiquimod (R-848) (Fig. 12.1). Resiquimod induces the production of IFN-a and a number of other pro-inflammatory cytokines from cultured human PBMCs. While the precise mechanism of action remains unclear, it has been
NH2 N
N
O
N OH
Resiquimod Fig. 12.1. Structure of the TLR agonist Resiquimod (R-848).
demonstrated to act as an agonist at both the TLR-7 and TLR-8 receptors (14). Rather than develop a recombinant HTS to identify agonists of TLR receptors, we elected to develop a screen to identify compounds capable of mediating the production of the physiologically relevant end point, IFN-a, from human PBMCs through any mechanism of action. This required the establishment of a robust supply chain for the collection and preparation of human PBMCs and the identification of an assay technology amenable to IFN-a detection with HTS performance characteristics. 1.3. MesoScale Discovery (MSD) Assay Platform
MesoScale Discovery (http://www.meso-scale.com) assay technology allows the performance of ELISA assays within 96- or 384well microtitre plates using electrochemiluminescence (ECL) detection (8). ECL is a non-isotopic, homogeneous and sensitive assay technology that allows the detection of analytes within the media of cultured cells (Fig. 12.2). MSD assays are performed using microtitre plates, which contain an electrode built into the
Fig. 12.2. Schematic representation of the electrochemiluminescence-based IFN-a detection assay. 384-well MSD plates are coated with capture antibody. Following the binding of analyte to this antibody, two ruthenium-labelled detection antibodies are added to the assay plate to form an ELISA sandwich at the base of the plate. Following the addition of MSD Read Buffer, an oxidation reaction occurs, which results in the generation of light. Light intensity is proportional to the concentration of captured analyte (see Section 3 and www.meso-scale.com for further details).
base of each well of the assay plate. To establish an MSD assay for the detection of human IFN-a, 384-well MSD plates, pre-coated with a goat anti-rabbit immunoglobulin, were coated with a rabbit polyclonal antibody to human IFN-a. Following the addition of cell culture media containing IFN-a, the cytokine is captured by the antibody. Captured analyte is detected following the addition of two monoclonal anti-IFN-a antibodies previously labelled with Ruthenium. We used two detection antibodies that recognize different epitopes on IFN-a. During assay development we determined that the signal window was enhanced through the use of an equimolar ratio of the two antibodies compared to the use of each antibody alone (data not presented). This is unusual; typically a single detection antibody is used. Following addition of MSD Read Buffer, the level of analyte is detected by reading the assay plate in the MSD Sector Imager and an electric current applied. This promotes the oxidation of Ruthenium with the resulting generation of a chemiluminescent signal, which is detected in the reader (Figs. 12.3 and 12.4). 1.4. Regulations Regarding the Use of Human Cells
DAY 1
It is necessary to consider whether there are any regulatory procedures that need to be adopted regarding the use of human tissue. Our screen was run in the United Kingdom and we briefly describe the regulatory issues encountered to perform this work. In 2004
Isolate PBMCs Re-suspend in Culture Media 50μl/well cells (4x10e5/ml) onto compound plate 48 hr incubation @ 37°C, 5% CO2 Add 20µl/well (1:8000)anti IFN-α polyclonal antibody to MSD plate Incubate at 4°C overnight
DAY 2
Harvest cell supernatant in compound plate by centrifugation (3 mins at 300g)
DAY 3
Wash MSD plate with 50μl/well PBS (Tecan Power Washer)
Transfer 20µl/well cell supernatant from compound plate to washed MSD plate using Cybiwell Add 20µl/well secondary mouse monoclonals sulpho-tagged antibodies Seal plate Incubate at 4°C overnight
DAY 4
Remove seal and aspirate contents (Tecan Power Washer) Add 30µl 2X MSD read buffer, incubate 10mins at Room Temp and read on Sector Imager
Fig. 12.3. Flow chart describing the assay protocol developed for the PBMC IFN-a production electrochemiluminescence HTS (see Section 3 for details).
ASSAY 1
MONDAY
TUESDAY
WEDNESDAY
Isolation of PBMCs
Preparation of MSD plates
Isolation of PBMCs
ASSAY 2
IFNα MSD assay
Preparation of MSD plates
THURSDAY
FRIDAY
Read plates
IFNα MSD assay
Read plates
Fig. 12.4. Chart outlining weekly work pattern required to support the PBMC IFN-a production electrochemiluminescence HTS (see Section 3 for details).
the Human Tissue Act came into force, which regulates research with human tissue (15). An institute wanting to undertake such work must apply to the Human Tissue Authority to obtain a license that describes the type of work and the mechanism of how it will be conducted. The license requires among other things the following: First, records of all scientists performing the work are kept, where the material is stored and by what manner, a description of all equipment used including service and maintenance schedules, the generation of appropriate safety documentation including risk assessments and standard operating procedures, records of all staff training, a description of what is the material going to be used for and finally all disposal records. It is a legal requirement that human tissue has an audit trail starting with when the sample was obtained through to disposal. This license will describe the consent process under which tissue is taken ensuring that the donor understands why the tissue is being taken and for what purpose it will be used. The material can be used only for the purpose for which it was taken. Finally, the vaccination status of employees handling human tissue should be considered.
2. Materials 2.1. PBMC Preparation
1. Human blood was obtained from healthy volunteers by the GSK Blood Donation Unit 2. RPMI 1640 Media (Gibco, Paisley, Scotland) 3.
L-Glutamine
(100 ) (Gibco, Paisley, Scotland).
4. Penicillin/streptomycin (Gibco, Paisley, Scotland) 5. Foetal bovine serum (FBS) (Low Endotoxin) (Invitrogen, Paisley, Scotland)
Use of Primary Human Cells in High-Throughput Screens
245
6. Cell culture media: 10% foetal bovine serum (FBS), 2% penicillin/streptomycin and 1% L-glutamine in RPMI 1640 media. Stored at 4C for up to 4 weeks 7. Leucosep tubes pre-filled with ficol-histopaque (Greiner, Kremsmunster, Austria) 8. Centrifuge 5810 (Eppendorf, Hamburg, Germany) 9. Human recombinant IFN-g (Peprotech, Rocky Hill, NJ) 10. Citrate buffer (Baxter HealthCare, Glendale, CA) 11. Phosphate-buffered saline (PBS) (Gibco, Paisley, Scotland) 12. Controlled-rate freezer (Planer, Sunbury-On-Thames, UK) 13. Dimethyl sulphoxide (DMSO) (Sigma-Aldrich, St Louis, MO). 14. Freezing media (10% DMSO/90% FBS) 15. Cryovials (Corning, Corning, NY) 16. 140C freezer (ThermoScientific, Waltham, MA) 2.2. Antibody Labelling
1. Rabbit polyclonal anti-IFN-a (Carrier Free) (Stratech Scientific, Tonbridge, UK) 2. Mouse monoclonal anti-hIFN-a (MMHA-2 Carrier Free) (Stratech Scientific, Tonbridge, UK). Diluted to 2 mg/ml in PBS 3. Mouse monoclonal anti-hIFN-a (MMHA-11 Carrier Free) (Stratech Scientific, Tonbridge, UK). Diluted to 2 mg/ml in PBS 4. MSD SULPHO-TAG NHS-Ester (MesoScale Discovery, Gaithersburg, MD) 5. PD-10 columns (SEPHADEX G-25 M) (GE Healthcare, Bucks, UK) 6. Biorad protein assay kit (BioRad, San Ramon, CA) 7. Tube rotator (Stuart Scientific, Stone, UK)
2.3. MSD Assay
1. MSD Sector Imager 6000 Reader (MesoScale Discovery, Gaithersburg, MD)
2. MSD Read Buffer T (4×) (MesoScale Discovery, Gaithersburg, MD). Dilute 4× stock to 2× with water
3. GAR-coated Standard MA6000 384-well plates (MesoScale Discovery, Gaithersburg, MD)
4. Plate seal (Weber Labelling, Arlington Heights, IL)
5. Water (Sigma-Aldrich, St Louis, MO)
2.4. Compound Plates
1. For HTS, compounds were supplied as 0.5 μl of 1 mM stock solutions in 100% DMSO in 384-well clear microtitre plates (Greiner, Kremsmunster, Austria). Compounds were
supplied in all columns of the plate except columns 6 and 18. Column 6 contained 0.5 μl of DMSO (low control) and column 18 contained 0.5 μl of 100 μM resiquimod (high control).
2. Resiquimod was prepared by GSK Medicinal Chemistry and supplied at 10 mM in 100% DMSO.
2.5. Automation Used for HTS
1. 384-well Tecan Power Washer (Tecan Trading AG, Zurich, Switzerland)
2. 384-well Multidrop (ThermoScientific, Waltham, MA)
3. 384-well Cybiwell (Cybio, Jena, Germany)
4. Plate incubator (ThermoScientific, Waltham, MA)
5. Cedex Cell Counter (Innovatis AG, Bielefeld, Germany)
6. Class II cell culture cabinet (ThermoScientific, Waltham, MA)
7. Spectrophotometer (Perkin-Elmer, Waltham, MA)
3. Methods
3.1. Blood Collection
1. Collect blood by vein puncture into 15% citrate buffer (blood anticoagulant) by blood volume (9 ml citrate for 60 ml of blood). See Notes 1 and 2.
3.2. Isolation of Peripheral Blood Mononuclear Cells (PBMCs)
1. Add 30 ml blood to 50-ml Leucosep tubes pre-filled with 15 ml Histopaque-1077.
2. Centrifuge for 20 minutes at 1000 × g at room temperature.
3. Pour off the enriched mononuclear fraction (upper phase) into a second 50-ml centrifuge tube. Rinse the walls of the Leucosep tube with PBS, add to the centrifuge tube and top up to 50 ml with PBS.
4. Centrifuge at 300 × g for 10 minutes at room temperature.
5. Discard the supernatant and wash the cell pellet once in PBS and once in culture media.
6. Resuspend the cell pellet in culture media and determine the cell number on the Cedex Cell Counter.
7. Dilute cells in culture media to 4 × 10⁵/ml (see the worked example after this list).
8. Store at 4°C for a maximum of 4 hours before use in the assay.
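Step 7 requires diluting the counted suspension down to the plating density. A minimal sketch of that dilution arithmetic is given below; the cell count and starting volume are invented example values, not data from the protocol.

    # Volume of culture media needed to dilute a PBMC suspension to the
    # 4 x 10^5 cells/ml plating density used in Step 7. Example values only.
    def media_to_add(cells_per_ml, current_volume_ml, target_density=4e5):
        """Return the volume of culture media (ml) to add to reach target_density."""
        total_cells = cells_per_ml * current_volume_ml
        final_volume_ml = total_cells / target_density
        return final_volume_ml - current_volume_ml

    # Example: 20 ml of suspension counted at 2.5 x 10^6 cells/ml on the Cedex
    print(media_to_add(2.5e6, 20.0))   # -> 105.0 ml of media to add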
3.3. Cryopreservation of Human PBMCs
See Notes 3 and 4.
1. Prepare freezing media and store at 4°C.
2. Follow the PBMC preparation method (Section 3.2) until Step 5. Resuspend the cell pellet in freezing media at a cell density of 4 × 10⁷/ml.
3. Immediately aliquot cells into 1 or 5 ml cryovials.
4. Transfer vials to a controlled-rate freezer and freeze using the following programme:
   - Start temperature 5°C
   - Hold at 5°C for 7 minutes
   - Cool 1°C per minute to −5°C
   - Cool 3°C per minute to −12°C
   - Cool 5°C per minute to −14°C
   - Cool 7.5°C per minute to −20°C
   - Cool 6.5°C per minute to −25°C
   - Hold at −25°C for 2 minutes
   - Warm 3°C per minute to −20°C
   - Hold at −20°C for 2 minutes
   - Cool 1°C per minute to −50°C
   - Cool 10°C per minute to −130°C
5. Transfer frozen vials to a −140°C freezer for storage. We have found that cells can be stored for a maximum of 6 months without any loss of viability.
3.4. Labelling of IFN-α Monoclonal Antibody with MSD Sulpho-TAG
1. Dilute both mouse monoclonal antibodies to 2 mg/ml in PBS.
2. Dilute Sulpho-TAG NHS-Ester in DMSO to 10 nmol/ml immediately before use.
3. Add the MSD Sulpho-TAG NHS-ester solution to the antibody preparation at a 20:1 molar excess of ester over antibody and mix (a worked example follows this list).
4. Wrap the tubes in foil and mix on a tube rotator at room temperature for 2 hours.
5. Prepare the Sephadex G25M PD-10 column by filling with PBS and allowing it to drain by gravity. Repeat three times before loading the antibodies.
6. Add the antibody label mix (Step 3) to the column and elute by gravity.
7. Elute from the column using PBS. Collect the eluate into 500 μl fractions.
8. Determine the concentration of labelled protein in each eluate using the BioRad protein assay, following the manufacturer's instructions.
9. Labelled antibody can be stored at 4°C at a concentration of 2 mg/ml for 6 months.
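The 20:1 molar excess in Step 3 can be translated into an amount of ester as follows. This is only an illustrative sketch: the IgG molecular weight of roughly 150 kDa is an assumption for a typical monoclonal antibody, not a figure taken from the protocol.

    # Nanomoles of SULPHO-TAG NHS-ester needed for a 20:1 ester:antibody molar
    # excess. The ~150 kDa IgG molecular weight is an assumed typical value.
    IGG_MW_G_PER_MOL = 150_000

    def ester_nmol_required(antibody_mg, molar_excess=20.0):
        """Return nmol of NHS-ester for the given mass of antibody (mg)."""
        antibody_nmol = antibody_mg / IGG_MW_G_PER_MOL * 1e6   # mg -> nmol
        return molar_excess * antibody_nmol

    # Example: labelling 1 ml of antibody at 2 mg/ml (2 mg of antibody)
    print(round(ester_nmol_required(2.0), 1))   # ~266.7 nmol of ester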
3.5. IFN-α MSD Assay Protocol
See Notes 5 and 6.
Day 1:
1. Using a ThermoLab 384-well Multidrop, add 50 μl of PBMCs in culture media at 4 × 10⁵/ml into each well of a 384-well compound plate (hereinafter referred to as the compound plate); the final assay conditions this produces are checked in the sketch after this protocol. All plates should be lidded.
2. Incubate at 37°C/5% CO2 for 48 hours in a Heraeus incubator.
Day 2:
3. Add 20 μl of diluted (1:8000 in culture media) anti-IFN-α polyclonal antibody to each well of a 384-well GAR-coated Standard MSD plate using a ThermoLab 384-well Multidrop (hereinafter referred to as the MSD plate).
4. Incubate at 4°C overnight.
Day 3:
5. Using a 384-well Tecan Power Washer, remove the antibody solution from the MSD plate and wash each well twice in 50 μl PBS.
6. In parallel, centrifuge the compound plate for 3 minutes at 300 × g (1200 rpm).
7. Using a 384-well Cybiwell, transfer 20 μl of cell supernatant from the compound plate to the MSD plate.
8. Add to the MSD plate 20 μl of the two mouse monoclonal sulpho-tagged antibodies (from Section 3.4). Cover plates using a plate seal.
9. Incubate at room temperature overnight in the dark.
Day 4:
10. Remove the plate seal. Using a 384-well Tecan Power Washer, aspirate the solution from the MSD plate.
11. Using a ThermoLab 384-well Multidrop, add 30 μl of 2× MSD Read Buffer.
12. Incubate for 10 minutes at room temperature.
13. Read the plate on the MSD Sector Imager.
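As a consistency check on Step 1, the compound plate layout in Section 2.4 (0.5 μl of 1 mM stock per well) plus the 50 μl of cell suspension gives the final assay conditions quoted later in Section 3.7. A minimal sketch of that calculation, using only figures from the text:

    # Final compound concentration and DMSO percentage after adding 50 ul of
    # cells to 0.5 ul of a 1 mM (1000 uM) compound stock in 100% DMSO.
    def final_assay_conditions(stock_uM, stock_ul, cells_ul):
        total_ul = stock_ul + cells_ul
        compound_uM = stock_uM * stock_ul / total_ul
        dmso_percent = 100.0 * stock_ul / total_ul
        return compound_uM, dmso_percent

    conc, dmso = final_assay_conditions(stock_uM=1000.0, stock_ul=0.5, cells_ul=50.0)
    print(f"~{conc:.1f} uM compound in ~{dmso:.2f}% DMSO")   # ~9.9 uM in ~0.99% DMSO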
3.6. Preparation of Human PBMCs
Our objective was to screen 1.2 M compounds in 384-well microtitre plates with a throughput of 180 plates/experiment and two screening experiments each week. Assay development data indicated that it was necessary to take blood, prepare PBMCs and add these to compound plates on the day of donation. To support the HTS we elected to take blood donations of 200 ml to allow for recycling of volunteers. We found that each donation generated sufficient cells to screen around twenty 384-well assay plates. Thus we had to establish a supply chain that enabled collection of blood
from nine volunteers on each day of assay, with 18 volunteers required each week. Following collection, each donation has to be processed individually as it is not possible to mix blood from separate donors due to surface antigen cross-reactivity. This required multiple parallel processing of samples.
The most significant challenge for this HTS was the effect of donor variability on assay performance. A requirement of any HTS assay is that the assay has a high signal window, usually defined as a Z′ of greater than 0.4 (16), which is consistent across plates and across days. In an HTS supported using a recombinant reagent it is possible to generate a reagent that enables consistent assay performance throughout the screening campaign. This is not possible in a phenotypic HTS. We observed differences in the ability of PBMCs from different donors to produce IFN-α in response to Resiquimod, which caused significant differences in assay window, biological activity cut-off and hit rate throughout the screen (Fig. 12.5). This led us to treat each donation as an individual batch within the HTS, with data processed on a donation-by-donation basis. We found that 20% of the donations failed to give a robust response to the standard compound Resiquimod and plates from these donors failed in the assay. The consequence of this was a high plate failure rate (20%) in the HTS.
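The screening logistics above reduce to a short calculation. The sketch below uses only the figures quoted in the text (180 plates per experiment, two experiments per week, roughly twenty assay plates per 200 ml donation):

    # Donors needed per assay day and per week for the stated plate throughput.
    import math

    plates_per_experiment = 180
    experiments_per_week = 2
    plates_per_donation = 20      # ~20 assay plates from one 200 ml donation

    donors_per_day = math.ceil(plates_per_experiment / plates_per_donation)
    donors_per_week = donors_per_day * experiments_per_week
    print(donors_per_day, donors_per_week)   # 9 donors per assay day, 18 per week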
Fig. 12.5. Variation in assay performance is donor dependent. PBMCs were prepared from 13 donors to support assay development. Data show the plate Z′ obtained in the IFN-α assay using PBMCs prepared from each donor (range = 0–0.73). Z′ was calculated according to Zhang et al. (16) using the response obtained from 16 wells of a 384-well plate containing a maximal concentration of Resiquimod against 16 wells of a 384-well plate containing DMSO alone (numbers are actual Z′ from each experiment; data are the mean of three experiments).
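For reference, the Z′ statistic of Zhang et al. (16) used throughout this chapter is computed from the high-control (Resiquimod) and low-control (DMSO) wells of each plate. The sketch below uses invented well values purely to illustrate the formula:

    # Plate Z' = 1 - 3*(SD_high + SD_low) / |mean_high - mean_low|  (ref. 16)
    import statistics

    def z_prime(high_wells, low_wells):
        mu_h, mu_l = statistics.mean(high_wells), statistics.mean(low_wells)
        sd_h, sd_l = statistics.stdev(high_wells), statistics.stdev(low_wells)
        return 1.0 - 3.0 * (sd_h + sd_l) / abs(mu_h - mu_l)

    # 16 Resiquimod wells and 16 DMSO wells (made-up ECL counts)
    high = [9500, 10200, 9800, 10050, 9900, 10100, 9700, 9950,
            10300, 9850, 10000, 9650, 10150, 9900, 9750, 10050]
    low = [450, 520, 480, 500, 470, 510, 460, 495,
           530, 485, 475, 505, 490, 465, 515, 480]
    print(round(z_prime(high, low), 2))   # ~0.92 for this well-separated example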
While this HTS used cells prepared on the day of assay, we later found that PBMCs could be cryopreserved for subsequent use (17, 18). Cryopreservation of cells allowed us to decouple blood preparation from the screening assay and led to a simpler and more flexible work pattern. In addition, the use of cryopreserved cells allows cells to be performance tested, such that cells from donors that do not show a robust response to Resiquimod can be discarded ahead of screening.
3.7. Assay Development
A number of factors were optimized ahead of HTS, including cell density, incubation times, antibody concentrations, plate types, screening concentration, assay stability across screen batches, pharmacological validation and solvent tolerance. The experiments required to develop the PBMC assay were no different to those required for any cell-based HTS, with the exception that all experiments had to be repeated on blood samples taken from multiple donors to account for the effects of donor variability (3). As an example of this, we studied the tolerance of the assay to the compound solvent DMSO in multiple donors. This was determined by monitoring the ability of resiquimod to promote the production of IFN-α over a range of DMSO concentrations. In most donors, assay performance was not affected by DMSO concentrations of up to 1%; however, in a minority of donors the assay window collapsed at concentrations of DMSO above 0.5% (Fig. 12.6). In the final assay conditions the standard agonist Resiquimod had a pEC50 of 7.5 ± 0.25, in agreement with other reports (11), and the assay gave a Z′ of 0.55 ± 0.32 (n = 84). Compounds were screened at a final assay concentration of 10 μM in 1% DMSO (see Notes 7–21).
Fig. 12.6. Sensitivity of the IFN-α assay to the solvent DMSO. The ability of an EC100 concentration of Resiquimod to promote IFN-α production by human PBMCs prepared from two donors (A and B) was determined in the presence of the indicated concentrations of DMSO. Each data point represents the mean ± SEM of quadruplicate determinations.
3.8. HTS Assay Validation
Prior to committing to HTS, a small validation screen is performed. At GSK we have constructed a validation compound set containing 9855 compounds, dispensed into twenty-eight 384-well assay plates, which is representative of compounds drawn from the GSK screening collection. This set is screened on three independent occasions to determine the performance of the assay during extended screen runs and the ability of the assay to reproducibly identify the same active molecules. A number of observations were made during the validation screen:
1. Each validation set was screened using cells prepared from separate donors. As expected, we saw data variation between donors (data not shown).
2. Using a statistical activity cut-off (compounds with activities greater than 3 standard deviations above the sample mean, corresponding to a cut-off of 3% of the Resiquimod response), the calculated hit rate for the screen was 1.5% (Fig. 12.7). This was not altogether surprising, as we typically see low hit rates in agonist screens. As a consequence, we elected to progress compounds from the HTS that exhibited activities greater than 10% of the Resiquimod response.
Fig. 12.7. Assay validation. The GSK validation compound set (9855 compounds) was screened at 10 μM compound concentration in the IFN-α assay. Compounds were screened against PBMCs prepared from two donors. Data show the correlation of activity between donor 1 and donor 2 (axes: TLR_VAL_0006 ACT vs. TLR_VAL_0008 ACT).
3. There is little correlation between active molecules in different screens (Fig. 12.7). During assay validation we routinely observed molecules that were active in specific donors. As we
are interested only in identifying molecules with activities across multiple donors, we elected to continue with the screen.
4. In contrast to a recombinant HTS, in which plates are failed if the Z′ is below 0.4, a number of additional QC criteria were put in place for the PBMC HTS to account for donor variation. We elected to fail all plates where the Resiquimod response was less than 2000 raw counts, and we passed any plates failed on Z′ if that plate contained hits displaying activities greater than 10% of the Resiquimod response.
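The plate-acceptance rules in point 4 can be expressed as a single decision function. This is a minimal sketch: the thresholds are those stated above, but the function and its inputs are otherwise illustrative.

    # Plate QC: fail on a weak Resiquimod response (<2000 raw counts), pass on
    # Z' >= 0.4, and rescue Z'-failed plates that contain hits >10% activity.
    def plate_passes(z_prime, resiquimod_counts, max_hit_activity_percent):
        if resiquimod_counts < 2000:            # no robust IFN-alpha response
            return False
        if z_prime >= 0.4:                      # standard signal-window criterion
            return True
        return max_hit_activity_percent > 10    # rescue plates containing hits

    print(plate_passes(0.55, 8500, 4))    # True  (good signal window)
    print(plate_passes(0.25, 8500, 15))   # True  (rescued by a >10% hit)
    print(plate_passes(0.25, 1500, 15))   # False (no robust control response)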
3.9. Primary Screen
We screened 1,212,006 compounds at 10 μM final assay concentration across 3388 384-well compound plates. Other than the logistical issues caused by donor variation, the HTS ran as predicted for a recombinant HTS using MSD detection. We saw a range of plate Z′ throughout the screen (Fig. 12.8). The mean Z′ for plates that passed quality control was 0.40 ± 0.23 and, as predicted, the plate failure rate was 20%, with all failures being due to the absence of a robust IFN-α response with certain donors. Using a cut-off of 10% of the Resiquimod response, we identified 2480 active compounds, a hit rate of 0.2% (Fig. 12.9).
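Percent activity is scaled per plate between the column 6 (DMSO, 0%) and column 18 (Resiquimod, 100%) controls before the 10% cut-off is applied. A minimal sketch with invented counts:

    # Normalise raw ECL counts to percent of the Resiquimod response.
    import statistics

    def percent_activity(sample_counts, dmso_wells, resiquimod_wells):
        low = statistics.mean(dmso_wells)
        high = statistics.mean(resiquimod_wells)
        return [100.0 * (x - low) / (high - low) for x in sample_counts]

    dmso = [480, 510, 495, 505]                  # column 6 wells (low control)
    resiquimod = [9800, 10100, 9950, 10050]      # column 18 wells (high control)
    samples = [520, 1600, 9900]
    print([round(v, 1) for v in percent_activity(samples, dmso, resiquimod)])
    # -> [0.2, 11.6, 99.2]; the 11.6% well would exceed the 10% hit cut-off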
Fig. 12.8. HTS quality control statistics. Z′ is calculated for each plate according to the signal window between column 6 (DMSO) and column 18 (Resiquimod) as described in Section 2.4. (A) Z′ for each screen plate plotted against each data set (one data set corresponds to an assay on PBMCs prepared from a single blood donation). (B) Binned Z′ for all plates screened during the HTS. The average Z′ for all plates that passed QC was 0.4 ± 0.23.
3.10. Concentration–Response Determinations
One thousand nine hundred and ninety-two active molecules were progressed to concentration–response testing. We generated concentration–response curves on four experimental occasions with each compound being tested against cells prepared from four donors. As anticipated some compounds were similarly active
against all donors, whereas others appeared to show donor-specific activity (Fig. 12.10). As our objective was to identify molecules with clinical efficacy in broad patient populations, we elected to progress molecules that had activity against all donors tested and did not progress apparently donor-selective molecules. As a consequence, this screen identified 17 molecules for progression, with pEC50 values in the range of 4.3–7.3. Following an assessment of the data, chemistry has been initiated on a number of series.

Fig. 12.9. HTS activity rates. (A) Data show the percent activity of each compound on each screening plate. Using a cut-off of 10% of the Resiquimod response (normalised to 100%), the activity rate in the screen was 0.2%. (B) Activity distribution of all molecules identified in the HTS with activities greater than 10% of the response to Resiquimod. Two thousand four hundred and eighty active compounds were identified. Numbers represent the number of compounds in each activity bin.

Fig. 12.10. Representative concentration–response curves for two hits from the HTS. HTS hits were screened against PBMCs prepared from four donors. Ten-point concentration–response curves were generated for each compound. Data are presented as a percentage of the maximum response to resiquimod in each donor. Each curve represents a single potency determination. Compound A generated reproducible potency determinations in all assays. Compound B was inactive on two of the four test occasions and displayed donor-dependent efficacy.
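The pEC50 values above come from fitting ten-point concentration–response data. A minimal sketch of such a fit, assuming NumPy and SciPy are available and using simulated data rather than screen results:

    # Four-parameter logistic fit to recover a pEC50 from a ten-point curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(log_conc, bottom, top, pec50, hill):
        return bottom + (top - bottom) / (1.0 + 10 ** ((-pec50 - log_conc) * hill))

    conc = 1e-5 / (10 ** (0.5 * np.arange(10)))        # 10 uM down in half-logs (M)
    log_conc = np.log10(conc)
    rng = np.random.default_rng(0)
    response = four_pl(log_conc, 0, 100, 7.0, 1.0) + rng.normal(0, 3, 10)

    popt, _ = curve_fit(four_pl, log_conc, response, p0=[0, 100, 6.5, 1.0])
    print(f"fitted pEC50 = {popt[2]:.2f}")             # ~7.0 for this example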
4. Conclusions
There are many factors to be considered prior to a phenotypic HTS. First, a phenotypic screen is molecular-target independent and allows the screener to identify molecules that regulate the disease-relevant end point. Second, a phenotypic screen may be considered for targets for which recombinant expression is difficult or for which the pharmacology of the target changes when expressed in a recombinant system. Third, in situations where a recombinant assay has failed to identify hits, screening against the target in its native environment may facilitate hit identification. While it may be possible to determine the MOA of hits from a phenotypic screen, it is likely that hits will be identified for which the MOA is unknown. Hence, prior to committing to such a screen, one should consider whether knowledge of the MOA is a requirement for progression and, if so, construct an experimental plan to allow its determination. In that regard, phenotypic HTS can be regarded as the natural heir to tissue-strip pharmacology, through which all drugs were identified prior to the 1980s and the dawn of the recombinant era.
Perhaps the major achievement of this work has been to show that it is possible to alter the paradigm of hit identification and run an HTS using primary human tissue. The PBMC HTS successfully identified a number of chemical series that regulate IFN-α production from human PBMCs, and we have since completed a number of other high- and low-throughput screens to identify modulators of cytokine release from human PBMCs. While it is possible to obtain sufficient human blood to support HTS, the availability of most other tissue types in sufficient quantity for HTS remains a challenge. We have run smaller screens using hepatocytes, chondrocytes and neuroblastoma cells. The ability to run HTS using other primary tissue will depend upon the development of assay technologies that reduce the cell requirement, or upon alternative sources of biological material such as terminally differentiated stem cells. However, it is clear that phenotypic screening offers an exciting alternative to recombinant screens that may enhance the success of early hit identification.
5. Notes
1. Rinse steps are included to collect as many cells as possible.
2. Samples from different donors are kept separate throughout the isolation and assay protocol.
3. Cryopreserved PBMCs were not used in the HTS described here. We have since found that this method can be used to prepare cells in advance of screening.
4. Samples from different donors must be kept separate throughout the freezing procedure.
5. See Figs. 12.3 and 12.4 for an overview of the HTS protocol.
6. For the HTS, plates were processed in 30-plate batches with two scientists.
7. Capture antibody is spotted directly onto the electrode in the assay well. MSD will supply plates where antibody has been spotted as a catalogue item or as a custom service (8). Alternatively, MSD will supply base plates for the customer to perform this exercise. Plate spotting requires specialist expertise and equipment and, in our experience, is difficult to perform reproducibly.
8. In the work described here we purchased plates from MSD in which a goat anti-rabbit immunoglobulin (GAR) had been spotted onto the electrodes (8). We used this to capture the IFN-α polyclonal capture antibody.
9. Following spotting of capture antibody, plates are stable for up to 1 year when stored at 4°C.
10. Plates spotted with capture antibody may need to be blocked with protein to prevent non-specific binding. This was not required in the assay described here. If required, this can be performed using 1% milk powder reconstituted in PBS. The requirement for blocking should be determined during assay development.
11. Excess blocking reagent should be removed by washing in PBS. We typically use a 384-well plate washer to do this.
12. When transferring reagents to the MSD assay plate, care should be taken not to damage the electrode in the bottom of the plate.
13. It is critical to define cell plating density during assay development and the cell number per well used should be minimized to conserve cells.
14. The addition of antibiotics to the culture media is advised to avoid bacterial contamination of the samples.
15. To minimize the number of steps in the assay, the detection antibody mixture should be added directly to the assay plate containing the analyte. However, assay performance may be enhanced if plates are washed before addition of the detection antibody.
16. It is a requirement that the detection antibody recognizes a different epitope to the capture antibody.
17. Following the addition of MSD Read Buffer to the plates, the optimal final volume should be 35 μl. At volumes of less than this, assay performance decreases as the camera in the Sector Imager is unable to detect the assay signal. For this reason, plates are sealed to prevent evaporation prior to reading. The camera height or the read time cannot be adjusted without engineer intervention.
18. Unbound detection antibody not washed away prior to the addition of the MSD Read Buffer will generate a background signal. Assay performance can be improved by washing the plate prior to the addition of read buffer. The use of the MSD Sector Imager to read plates is a requirement of this assay; MSD assay plates are not compatible with other readers.
19. As work with human tissue carries potential health risks, all work should be contained. In our laboratory, specific equipment is used for PBMC work and not for other purposes. All tissue culture is performed within a Class II safety cabinet.
20. A robust data-handling process should be established ahead of screening to facilitate the identification of any quality-control failures before large numbers of plates are committed to screening, to minimize waste.
21. Screen data from each PBMC batch were analysed separately to account for donor variation (a minimal example follows these notes).
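Notes 20 and 21 imply a per-donation data-handling step. The sketch below (assuming pandas is available; the column names and values are illustrative only) shows one way to normalise and hit-select each PBMC batch independently:

    # Donation-by-donation normalisation and hit selection (Notes 20-21).
    import pandas as pd

    df = pd.DataFrame({
        "donation": ["D1", "D1", "D1", "D2", "D2", "D2"],
        "well_type": ["dmso", "resiquimod", "sample"] * 2,
        "counts": [500, 9800, 1800, 650, 4200, 1100],
    })

    def normalise(batch):
        low = batch.loc[batch.well_type == "dmso", "counts"].mean()
        high = batch.loc[batch.well_type == "resiquimod", "counts"].mean()
        batch = batch.copy()
        batch["pct_activity"] = 100.0 * (batch["counts"] - low) / (high - low)
        return batch

    per_batch = df.groupby("donation", group_keys=False).apply(normalise)
    hits = per_batch[(per_batch.well_type == "sample") & (per_batch.pct_activity > 10)]
    print(hits[["donation", "pct_activity"]])   # both example wells exceed 10%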
6. Acknowledgements
The authors would like to acknowledge the expertise of the members of the Biological Reagents and Assay Development, Screening and Compound Profiling, Discovery Technology Group and the Infectious Diseases Centre of Expertise for Drug Discovery for their work to enable phenotypic HTS at GlaxoSmithKline: Ken Grace, Barbara Hebeis, Ketaki Shah, David Gray, Sian Lewis, Rupal Kapadia, Shie Chang, Claire Purkiss, Jason Signolet, Anesh Sitaram, Peter Morley, Lucy Reynell, Elena Sciamanna, Mike Sowa, Gavin Harper, Karen Amaratunga, Carolyn O'Malley,
Michael Wilson. Finally, we would like to thank the GSK Blood Donation Unit and the 200 blood donors who made this work possible.

References
1. Posner, B. A. (2005) High-throughput screening-driven lead discovery: meeting the challenges of finding new therapeutics. Curr. Op. Drug Disc. Dev. 8, 487–494.
2. Gribbon, P. and Andreas, S. (2005) High-throughput drug discovery: What can we expect from HTS? Drug Disc. Today 10, 17–22.
3. Walters, W. P. and Namchuck, M. (2003) Designing screens: How to make your hits a hit. Nat. Rev. Drug Disc. 2, 259–266.
4. Moore, K. and Rees, S. (2001) Cell-based versus isolated target screening: How lucky do you feel? J. Biomol. Scr. 6, 66–74.
5. Horrocks, C., Halse, R., Suzuki, R., and Shepherd, P. A. (2003) Human cell systems for drug discovery. Curr. Op. Drug Disc. Dev. 6, 570–575.
6. Clemons, P. A. (2004) Complex phenotypic assays in high-throughput screening. Curr. Op. Chem. Biol. 8, 334–338.
7. Rossi, C., Padmanaban, D., Ni, J., Yeh, L.-A., Glicksman, M., and Waldner, H. (2007) Identifying drug-like inhibitors of myelin-reactive T cells by phenotypic high-throughput screening of a small-molecule library. J. Biomol. Scr. 12, 481–489.
8. See http://www.meso-scale.com for literature describing the theory and application of electrochemiluminescence detection.
9. See http://las.perkinelmer.com/ for literature describing the theory and application of AlphaLisa detection.
10. Gay, N. J. and Gangloff, M. (2007) Structure and function of Toll receptors and their ligands. Ann. Rev. Biochem. 76, 141–165.
11. Uematsu, S. and Akira, S. (2007) Toll-like receptors and type-1 interferons. J. Biol. Chem. 282, 15319–15324.
12. Weeratna, R. D., Makinen, S. R., McCluskie, M. J., and Davis, H. L. (2005) TLR agonists as vaccine adjuvants: Comparison of CpG ODN and Resiquimod (R-848). Vaccine 23, 5263–5270.
13. Gerondakis, S., Grumont, R. J., and Banerjee, A. (2007) Regulating B-cell activation and survival in response to TLR signals. Immunol. Cell Biol. 85, 471–475.
14. Jurk, M., Heil, F., Vollmer, J., Schetter, C., Krieg, A. M., Wagner, H., Lipford, G., and Bauer, S. (2002) Human TLR7 or TLR8 independently confer responsiveness to the anti-viral compound R-848. Nat. Immunol. 3, 499–504.
15. Human Tissue Act (2004), available from http://www.opsi.gov.uk.
16. Zhang, J.-H., Chung, D. Y., and Oldenburg, K. R. (1999) A simple statistical parameter for use in evaluation and validation of high throughput screening assays. J. Biomol. Scr. 4, 67–73.
17. Kunapuli, P., Zheng, W., Weber, M., Solly, K., Mull, R., Platchek, M., Cong, M., Zhong, Z., and Strulovici, B. (2005) Application of division arrest technology to cell-based HTS: comparison with frozen and fresh cells. Assay & Drug Dev. Technol. 3, 17–26.
18. Smith, J. G., Joseph, H. R., Green, T., Field, J. A., Wooters, M., Kaufhold, R. M., Antonello, J., and Caulfield, M. J. (2007) Establishing acceptance criteria for cell-mediated-immunity assays using frozen peripheral blood mononuclear cells stored under optimal and suboptimal conditions. Clinical & Vaccine Immunol. 14, 527–537.
INDEX Apatchi-1TM ................................................................... 192 See also Reader ASDIC............................................100, 101, 102, 103, 104 Assay AlphaLISA ........................................................... 8, 241 automated patch clamp......................... 9, 192, 209–222 See also Electrophysiology Binding .............9, 13–29, 118, 129, 131, 132, 139, 141 BRET............................................................................ 9 cell-based.....................5–6, 7, 8, 9, 17–18, 20, 145, 153 CE, see Capillary Electrophoresis (CE) coupled ..................8, 14, 112, 113, 122, 145, 146, 147, 148, 156 ECL .................................................................. 7, 9, 242 See also Electrochemiluminescence (ECL) Efflux ................................................................ 154, 191 ELISA .......................................................... 8, 241, 242 end-point ............................................................ 14, 113 enzymatic ................................11, 28, 73, 108, 109, 137 FCS............................................................................... 7 See also Fluorescence Correlation Spectroscopy (FCS) FIDA ............................................................................ 9 See also Fluorescence Intensity Distribution Analysis (FIDA) FLINT .......................................................... 9, 137, 141 See also Fluorescence intensity (FLINT) Format............6, 7, 9, 10, 21, 23, 75, 76, 109, 110, 111, 112, 113, 115, 119, 123, 145, 190, 191, 205 FP..................9, 17, 114, 121–122, 130, 131, 134, 135, 136, 137, 138, 139, 140, 141 n.2 See also Fluorescence Polarization (FP) FPIA ......................................................... 114, 115, 116 See also Fluorescence Polarization Immunoassay (FPIA) FRET................................................8, 9, 113, 114, 190 See also Fluorescence Resonance Energy Transfer (FRET) HCS.................................................. 160, 161, 175, 177 See also High Content Screening (HCS) incubation time .............9, 14, 18, 20, 27, 43, 119, 120, 123 n.5, 166, 185 n.13, 250 microfluidic ....................................................... 112, 226 phenotypic..................................................... 5, 240, 241 PolarScreen ............................................................... 138 radioligand binding................................... 145, 190, 211
A AAO ........................................................................... 21, 22 See also Automated Assay Optimization (AAO) Absorbance......................6, 8, 9, 24, 26, 49, 112–113, 115, 117, 236 n.3 Activator .....................18, 42, 110, 116, 117, 164, 196, 240 Active ...............2, 37, 70, 72, 73, 82, 83, 85, 89, 92, 94, 96, 97, 99, 100, 109, 110, 131, 132, 135, 137, 138, 140, 175, 176, 177, 178, 179, 180, 181, 182, 198, 210, 226, 234, 240, 251, 252, 253 ADME-Tox Affinity binding ...................................................................... 140 km ............................................................................... 16 Agonist EC50................................................................. 250 Akt .................................................................................. 109 See also Enzyme ALA Scientific See also Manufacturer Alembic Instruments ...................................................... 199 See also Manufacturer Alkaline phosphatase ...................................................... 113 See also Enzyme Allosteric ligand.............................................................. 131 AlphaLISA ................................................................. 8, 241 See also Assay AlphaScreen........................................................7, 8, 9, 226 See also Reader Amlodipine ..................................................................... 189 See also Prescription Drugs Amphotericin-B.............................................................. 194 AMP kinase .................................................................... 110 See also Enzyme Analysis software ........................................ 47, 53, 168, 220 Analyst AD ..................................................... 132, 133, 136 See also Reader Anisomycin ....................160, 164, 165, 166, 167, 168, 170, 172, 173, 174, 175, 178, 179, 180, 183 n.4, 184 n.8 Anisotropy ..............70, 128, 129, 130, 132, 133, 134, 135, 136, 141 n.1 Antagonist IC50 Anthropomorphic arm.......................................... 35, 36, 39 See also Robot Antibiotic selection......................................................... 149
259
HIGH THROUGHPUT SCREENING
260 Index
Assay (continued) SPA...............................................7, 8, 9, 13, 14, 17, 34 See also Scintillation proximity assay (SPA) Transcreener PDE .................................................... 138 TR-FRET...................................6, 7, 8, 9, 77, 114, 116 Assay development..............2, 10, 23, 25, 34, 70, 108, 109, 110, 113, 115, 119, 120, 122, 131, 164, 165, 166, 170, 203, 228, 243, 248, 249, 250, 255 n.10, 255 n.13 Assay plate ..................40, 66, 67, 72, 74, 76, 80, 117, 130, 141, 146, 177, 196, 222, 228, 229, 230, 231, 232, 242, 243, 248, 251, 255 n.12, 256 n.18 ATP ......................12, 16, 109, 221 n.5, 226, 227, 228, 229 Automated Assay Optimization (AAO), see AAO Automated patch clamp.............................. 9, 192, 209–222 See also Assay The Automation Partnership ........................................... 46 See also Manufacturer Average .........47, 49, 51, 70, 76, 79, 81, 88, 101, 103, 111, 122, 123, 131, 154, 162, 163, 172, 174, 177, 179, 180, 181, 195, 201, 216, 221, 230, 232, 235, 252 See also Statistical analysis
B Barcode label BDTM Calcium Assay Kit .............................................. 153 See also Dye Beckman Coulter ..........................21, 22, 36, 170, 228, 231 See also Manufacturer Binding Bmax ........................................................................... 16 competitive........................................10, 11, 16, 17, 139 equilibrium................................................................ 134 uncompetitive...................................................... 16, 139 See also Assay Bioluminescence Resonance Energy Transfer (BRET)......................................................... 8, 9 See also Bioluminescence Resonance Energy Transfer (BRET) Biomek.............................................................. 22, 228, 231 Biomek 2000..................................................................... 22 See also Robot Bmax ......................................................................... 16, 154 Boltzmann’s equation ............................................. 216, 217 Boolean ........................................................................... 234 BRET.............................................................................. 8, 9 See also Assay
C Calcium....................................................................... 3, 153 See also Dye Calcium-activated potassium channel ............................ 196 See also Ion Channel
Calcium Green-1............................................................ 153 See also Dye Calcium Sensitive Dye............................ 146, 153, 154, 190 See also Dye Caliper BioSciences See also Manufacturer CAMP ............................................................................ 7, 8 Campaign....................2, 10, 12, 13, 19, 25, 34, 47, 69, 71, 83, 84, 85, 91, 94, 95, 96, 97, 99, 104, 136, 153, 160, 175, 198, 205, 249 Capillary Electrophoresis (CE) ...................................... 226 Carbamazepine ............................................................... 189 CCD cooled charge coupled device camera (CCD) ........146, 151, 161, 170 n.1–171 n.1, 201 Cell-based assay ......................5, 6, 7, 8, 9, 17, 18, 20, 145, 153, 185 n.13 Cellectricon..................................................................... 202 See also Manufacturer CE, see Assay Chemical library................................................................ 49 Chemiluminescent Nitrogen Detector (CLND)........................................................ 228 Cherry picking ...................................................... 46, 66, 67 Chloride channel .................................................... 188, 196 See also Ion Channel CHO cells....................................................... 154, 212, 218 CLND ............................................................................ 228 Clone selection................................................................ 197 Coefficient of variation of signal and background (CV) ........23, 25, 97, 122, 151, 184 n.12 Coelenterazine .................................................................... 7 Competitive ............................10, 11, 12, 16, 17, 114, 116, 119, 121, 135, 139, 140 Compound chemical library ........................................................... 49 collection .................................2, 3, 4, 6, 10, 14–15, 29, 55, 91, 117, 205, 206, 242, 251 concentration ..........................6, 10, 11, 12, 13, 14, 19, 66, 79, 83, 91, 92, 112, 139, 151, 156, 174, 175, 176, 177, 179, 180, 181, 185, 209, 215, 228, 251 fluorescent...............................116, 117, 137, 140, 146, 147, 156, 180, 181 identifier..............................................47, 48, 49, 59, 62 interference .............6, 9, 111, 112, 114, 115, 116, 123, 124, 136, 180 library ............................42, 55–68, 119, 136, 167, 175, 178, 205, 206 registration .................................................................. 59 storage ............................................................. 44, 46, 57 tracking
HIGH THROUGHPUT SCREENING
Index 261
Concentration ...............................6, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21, 27, 65, 66, 75, 79, 83, 91, 92, 111, 112, 113, 115, 116, 118, 119, 120, 132, 133, 134, 135, 136, 137, 139, 140, 141, 149, 151, 152, 154, 155, 156, 157, 160, 164, 165, 167, 168, 169, 170, 172, 174, 176, 177, 178, 179, 180, 181, 184, 185, 188, 194, 209, 212, 214, 215, 217, 218, 219, 220, 222, 226, 228, 229, 241, 250, 252, 253 Control software system ................................................... 36 Conventional patch clamp ...................................... 192, 206 See also Electrophysiology Cooled Charge Coupled Device Camera (CCD)...........146, 151, 161, 171, 184 n.11, 201 See also CCD CV.........................................................23, 25, 97, 151, 184 See also Statistical analysis CyBi1-Lumax ................................................................ 154 See also Reader Cyclic AMP-dependent protein kinase.................. 225–236 See also Enzyme Cystic fibrosis transmembrane conductance regulator (CFTR)......................................................... 196 See also Ion Channel Cytokine..................................160, 185 n.14, 241, 243, 254 Cytotoxicity.......................................177, 180, 181, 183 n.2
D DAPI ................................................................161, 184 n.9 See also Dye Database............39, 47, 48, 49, 50, 51, 52, 65, 68, 177, 232 Data handling .....................34, 47–48, 50, 54, 75, 256 n.20 Daughter plate ..........................................66, 213, 228, 231 Deacetylase...................................................... 108, 110, 226 See also Enzyme DecisionSite ...................................................................... 47 Diazepam ........................................................................ 188 Dimethyl sulfoxide (DMSO) ............18, 19, 44, 66, 67, 95, 109, 119, 120, 122, 134, 148, 149, 164, 167, 168, 170, 171, 172, 175, 185 n.13, 212, 214, 215, 218, 227, 228, 231, 240, 245, 246, 247, 249, 250, 252 See also DMSO tolerance Dispenser ................................................................ 148, 169 See also Robot DMSO tolerance ..................109, 119, 134, 164, 168, 171, 172, 185 DNA .......................................114, 127, 149, 161, 182, 212 Dye BDTM Calcium Assay Kit ........................................ 153 calcium .................................................................. 3, 153 calcium green–1 ........................................................ 153 calcium sensitive................................146, 153, 154, 190 DAPI ..........................................................161, 184 n.9 fluo-4/AM ........................................ 148, 150, 153, 154 fluo-8/NW Calcium Assay Kit ................................ 153
fluorescein ......................115, 116, 130, 141 n2, 235 n.3 Hoechst 33342..................160, 161, 162, 166, 169, 170 membrane potential-sensitive oxonol dye ................ 190 oxonol dye ................................................................. 190 voltage-sensing dyes...................................................... 8 Dynaflow1 ...................................................................... 202 Dynamic................................................38, 39, 77, 116, 188
E E-76, 132, 135, 136 EC50, 138, 160, 164, 167, 168, 171, 172, 173, 192 See also Statistical analysis EC100, 250 See also Statistical analysis ECL ........................................................................ 7, 9, 242 See also Assay Edge effect ..........................................18, 23, 153, 169, 170 EDTA.....................20, 120, 122, 124, 148, 150, 211, 226, 227, 228, 229, 231 Electrochemiluminescence (ECL) ...............7, 8, 239, 241, 242, 244 See also ECL Electronic lab notebook.................................................... 58 Electrophysiology automated patch clamp......................... 9, 192, 209–222 See also Assay conventional patch clamp ................................. 192, 206 See also Assay gigaseal.............................................. 192, 210, 218, 222 high resistance seal.................................... 192, 200, 210 perforated patch clamp See also Assay planar patch clamp ............................ 198, 201, 202, 212 PPC...........................................193, 195, 198, 203, 206 See also Population Patch Clamp (PPC) Electroporation ............................................................... 149 ELISA ......................................................6, 8, 13, 241, 242 See also Assay End-Point ................................................... 14, 15, 113, 121 See also Assay Enzymatic assay ............................11, 28, 73, 108, 109, 137 See also Assay Enzyme alkaline phosphatase ................................................. 113 Akt .................................................................................. 109 AMP kinase .............................................................. 110 cyclic AMP-dependent protein kinase ............. 225–236 deacetylase................................................. 108, 110, 226 FVIIa.................................131, 132, 133, 134, 135, 136 histone acetylase........................................................ 226 isomerase................................................................... 108 kinase ..............3, 4, 7, 9, 107, 108, 109, 110, 114, 115, 118, 120, 137, 138, 160, 175, 179, 226, 227, 228, 229, 230, 231
HIGH THROUGHPUT SCREENING
262 Index
Akt (continued) ligase............................................................ 22, 108, 114 phosphatase...............108, 109, 110, 113, 116, 117, 226 PKA .................................................................. 225–236 protease .............4, 9, 20, 107, 108, 110, 111, 113, 114, 120, 131, 226 recombinase............................................................... 108 RNA polymerase......................................................... 27 valyl-tRNA synthetase.......................................... 13, 14 Enzyme fragment complementation .............................. 8, 9 Equilibrium......................................................... 16, 17, 134 Error handling ................................................................ 38, 52 See also Statistical analysis Essen Instruments........................................................... 194 See also Manufacturer Experimental design ................................................... 21, 85
F False negative......................................2, 29, 82, 83, 91, 92, 100, 115 False positive.............2, 29, 82, 91, 92, 113, 115, 116, 131, 136, 138, 146, 151, 156, 182, 184 FBS .......................148, 149, 165, 166, 167, 168, 169, 173, 175, 212, 244, 245 FCS..................................................................................... 7 See also Assay FIDA .................................................................................. 9 See also Assay FLINT ................................................................ 9, 137, 141 See also Assay FLIPR...................7, 8, 9, 42, 43, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156 See also Reader FlipTip .................................................................... 200, 201 See also Reader Fluo-4/AM ............................................. 148, 150, 153, 154 See also Dye Fluo-8/NW Calcium Assay Kit ..................................... 153 See also Dye Fluorescein ........................115, 116, 130, 141, 227, 236 n.3 See also Dye Fluorescence................6, 7, 9, 13, 15, 42, 50, 77, 112, 113, 114, 115, 116, 117, 122, 127–142, 146, 147, 153, 154, 155, 157, 161, 164, 181, 182, 190, 191, 195, 196, 205 See also FCS Fluorescence Correlation Spectroscopy (FCS)................... 7 Fluorescence Intensity Distribution Analysis (FIDA) ....... 9 See also FIDA Fluorescence intensity (FLINT)...............6, 128, 132, 133, 134, 137, 141, 162 See also FLINT
Fluorescence Polarization (FP)......................... 13, 15, 114, 121, 122, 127–142 Boltzmann equation.......................................... 216, 217 See also FP Fluorescence Polarization Immunoassay (FPIA)...................................114, 115, 116, 121 See also FPIA Fluorescence Resonance Energy Transfer (FRET) .................................6, 8, 113, 114, 190 See also FRET Fluorescent compound..........................116, 117, 137, 146, 147, 156, 180, 181 See also Compound Fluorometric Imaging Plate Reader (FLIPR) ............................................ 9, 145–157 See also FLIPR Fluorophore acceptor .............................................. 113, 114 Flyion GmbH See also Manufacturer FlyScreen1.............................................................. 200, 201 See also Reader FP............................7, 9, 17, 114, 121, 122, 127, 128, 130, 135, 137, 138, 140 See also Assay FPIA ............................................................... 114, 115, 116 See also Assay Freezer..................................................................... 245, 247 FRET.............................................................. 8, 9, 113, 114 See also Assay FVIIa............................................... 131, 132, 134, 135, 136 See also Enzyme FVIIa/E-76 complex ...................................................... 132
G GABAA.......................................................................... 204 See also Ion Channel Gaussian distribution.......................................... 70, 99, 234 See also Statistical analysis GFP ..........................................................149, 159, 183 n.6 Gigaseal.............................................192, 210, 218, 222 n.2 See also Electrophysiology Gleevec............................................................................ 225 See also Prescription Drugs G-protein coupled receptor (GPCR) ....................... 3, 4, 6, 7, 8, 9, 145, 146, 147, 148, 149, 152, 153, 155, 156 See also GPCR Graphic User Interface (GUI)............................ 58, 78, 179 See also GUI Green Fluorescent Protein (GFP)............149, 160, 183 n.6 See also GFP GUI..................................................................... 58, 78, 179 See also Graphic User Interface (GUI)
HIGH THROUGHPUT SCREENING
Index 263
H
I
HCS................................................ 160, 161, 175, 177, 180 See also Assay HERG ..................196, 199, 203, 210, 211, 212, 213, 218, 219, 220, 221 n.1 See also Ion Channel High Content Screening (HCS) .................. 160, 161, 175, 177, 180 See also HCS High-Performance Liquid Chromatography (HPLC) ........................................................ 228 See also HPLC High resistance seal ........................................ 192, 200, 210 See also Electrophysiology High Throughput Screening (HTS).......................... 1–29, 33–54, 69–103, 107–125, 189, 190, 211, 225–236 See also HTS Hill slope......................................................................... 110 Histone acetylase ............................................................ 226 See also Enzyme Hit false negative ...........................2, 29, 71, 82, 83, 91, 92, 100, 115 positive ................21, 23, 26, 71, 89, 91, 92, 93, 96, 97, 100, 135, 138, 149 n.4, 156, 163, 228 Hoechst 33342................160, 161, 162, 166, 169, 170, 171 See also Dye Hotel ..................................................................... 36, 43, 44 See also Robot HPLC ............................................................................. 228 See also High-Performance Liquid Chromatography (HPLC) HTS ........... 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 14, 15, 17, 18, 19, 20, 21, 23, 25, 27, 28, 29, 33, 34, 35, 37, 39, 40, 46, 47, 48, 54, 65, 71, 74, 76, 77, 80, 82, 83, 84, 88, 89, 91, 92, 93, 94, 95, 96, 99, 100, 103, 104, 108, 109, 110, 112, 119, 120, 121, 122, 123 n.3, 124 n.6, 129, 130, 134, 137, 146, 153, 164, 174, 175, 189, 190, 191, 196, 204, 205, 206, 229, 232, 239, 240, 242, 244, 248, 249, 250, 252, 253, 256 HTS:campaign................................2, 3, 10, 12, 18, 25, 33, 34, 40, 71, 83, 84, 85, 94, 96, 99, 104, 136, 146, 153, 203, 205 Hudson Control Group.................................................... 37 See also Manufacturer Human Tissue Act.......................................................... 244 Humidity.........................................................23, 161, 167, 169, 170 Hyperpolarisation activated cyclic nucleotide-gated channel.......................................................... 196 See also Ion Channel
IC50 ........................11, 12, 15, 16, 132, 135, 138, 180, 182 Imatinib mesylate Inactivated state block ............................................ 189, 191 InCell ...................................................................... 159–185 Incubation time...................9, 14, 18, 20, 27, 43, 119, 120, 123, 166, 185 n.13, 250 Incubator...................18, 42, 43, 44, 45, 53, 149, 150, 153, 169, 170, 213, 218, 222 n.5, 246, 248 Inhibitor competitive................................10, 11, 12, 16, 119, 139 IC50 ................11, 12, 15, 110, 121, 138, 164, 171, 230 Interface patch clamp...................................................... 192 Interferon ........................................................................ 240 Intracellular calcium............................................ 7, 145–157 Intra-Plate.......................................................77, 78, 81, 85 See also Statistical analysis Inventory management ............................................... 58, 64 Ion channel calcium-activated potassium ..................................... 196 CFTR........................................................................ 196 chloride ............................................................. 188, 196 cystic fibrosis transmembrane conductance regulator (CFTR)......................................................... 196 GABAA.................................................................... 204 hERG......................196, 199, 203, 210, 211, 212, 213, 218, 219, 220, 221 hyperpolarisation activated cyclic nucleotide-gated . 196 KCNQ2/3................................................................. 196 Kv1........................................................................ 4, 196 Nav1..............2, 210, 211, 213, 214, 215, 216, 217, 221 TASK3...................................................................... 196 TRP........................................................................... 188 See also Transient Receptor Potential (TRP) two-pore domain ...................................................... 196 VGSC ....................................................................... 210 See also Voltage-gated Sodium channel (VGSC) IonWorks1 HT....................194, 195, 196, 197, 198, 203, 204, 205, 210 See also Reader IonWorks1 QuattroTM .................................................. 205 IonWorks............................................................................ 8 See also Reader Iressa See also Tipifarnib Isomerase ........................................................................ 108 See also Enzyme
K KCNQ2/3....................................................................... 196 See also Ion Channel Ki relationship between IC50 and Ki.......... 11, 12, 16, 120
HIGH THROUGHPUT SCREENING
264 Index
Kinase............110, 114, 115, 118, 120, 137, 138, 160, 175, 179, 226, 227, 228, 231 See also Enzyme Kinetic constants...................................................................... 14 read.................................................................... 117, 120 Km ..............................................10, 11, 12, 15, 16, 19, 138 Kv1.............................................................................. 4, 196 See also Ion Channel
L Label ....................................................................... 131, 247 Laboratory...........................................56, 68, 121, 256 n.19 Labware....................................................................... 23, 36 LED........................................................................ 150, 151 Library.....................42, 46, 49, 56, 57, 58, 60, 61, 66, 119, 136, 167, 175, 176, 178, 179, 205, 206 Lidocaine ........................................................................ 188 See also Prescription Drugs Ligand, allosteric....................................................... 16, 131 Ligase ...................................................................... 108, 114 See also Enzyme Light Emitting Diode (LED) ................................ 150, 151 See also LED Linear translation.............................................................. 36 Liquid handling ................7, 19, 21, 22, 23, 28, 64, 75, 83, 121, 150, 172, 174, 175, 176, 199, 200, 221, 228, 230, 231, 232 See also Robot Luminescence .............................6, 7, 8, 50, 127, 154, 241, 242, 243, 244
M MAD ................................................................................ 88 Manufacturer ALA Scientific alembic Instruments.................................................. 199 the Automation Partnership....................................... 46 Beckman Coulter ....................21, 22, 36, 170, 228, 231 Caliper BioSciences cellectricon ................................................................ 202 Essen Instruments..................................................... 194 Flyion GmbH Hudson Control Group.............................................. 37 molecular devices ..................7, 42, 132, 137, 146, 153, 194, 195, 198, 210, 226 multichannel systems ................................................ 203 Nanion Technologies................................................ 198 REMP....................................................................... 467 Sophion Bioscience........................... 192, 198, 211, 218 Velocity11 ................................................................... 36 Master buffer .......................................................... 227, 229 Master plate .............................................................. 66, 228
Mean .........................23, 70, 72, 78, 80, 88, 91, 92, 93, 94, 96, 97, 98, 100, 102, 110, 250, 252 See also Statistical analysis Mean Absolute Distance (MAD) .................................... 88 Microfluidic .................................................... 112, 225–236 See also Assay Microtitre plate.................................72, 241, 242, 245, 248 Minimum Significant Ratio (MSR)................................. 73 See also MSR Molecular Devices ......................7, 42, 132, 137, 146, 153, 194, 195, 198, 210, 226 Molecular Libraries Screening Center Network ............ 108 Mother plate ................................................................... 228 MSR.................................................................................. 73 See also Statistical analysis Multichannel Systems..................................................... 203 See also Manufacturer Multidrop......................................148, 169, 170, 228, 229, 230, 232, 246, 248 See also Robot
N Nanion Technologies...................................................... 198 See also Manufacturer Natural products ................................................................. 4 Nav1.2...........................................210, 211, 213, 214, 215, 216, 217 See also Ion Channel Nav1.7............................................................................. 196 See also Ion Channel NexavarTM See also Prescription Drugs Nifedipine ....................................................................... 189 NIH ........................................................................ 108, 122 Normal distribution ..............25, 70, 73, 80, 93, 94, 96, 97, 98, 99, 101, 102 See also Statistical analysis Normalized values................................................... 123, 233 See also Statistical analysis Norvasc1 ........................................................................ 189 See also Prescription Drugs NR................................................................... 131, 136, 137 See also Receptor Nuclear receptor (NR), see NR Nuclear trafficking ................161, 162, 164, 165, 166, 167, 168, 170, 173, 175, 181, 184 n.10
O Oocyte............................................................................. 203 Operation..............2, 35, 36, 48, 52, 53, 63, 64, 74, 75, 87, 165, 166, 171, 178, 179, 226, 232 OpusXpress1 6000A ...................................................... 203 See also Reader
HIGH THROUGHPUT SCREENING
Index 265
Outlier.................23, 47, 50, 51, 73, 78, 79, 85, 97, 98, 99, 100, 101, 233, 235 See also Statistical analysis Oxonol dye...................................................................... 190 See also Dye
P Patchbox.................................................................. 201, 202 See also Reader Patch clamp automated ............................................. 9, 192, 209–222 See also Electrophysiology; Assay gigaseal...................................................................... 210 high resistance seal.................................................... 210 planar.........................................194, 198, 201, 202, 212 voltage steps .............................................. 194, 214, 215 Patchliner# ..................................................... 200, 201, 205 See also NPC16 Patchliner# Patchliner NPC–16 ........................................................ 198 See also Reader PatchPlate ....................................................................... 197 PatchXpress......................................................................... 8 See also Reader PBS .................148, 169, 170, 245, 246, 247, 248, 255 n.10 Peptide ......................114, 226, 227, 228, 229, 232, 236 n.2 Percent inhibition calculation...................................236 n.5 Perforated patch clamp See also Electrophysiology Pharmacophore ................................................................... 4 Phenotypic assay ................................................. 5, 240, 241 See also Assay Phosphatase ....................108, 109, 110, 113, 116, 117, 226 See also Electrophysiology Phosphorylated product.......................................... 115, 226 Phosphorylation ..............................5, 9, 160, 226, 227, 232 Photina1 ......................................................................... 154 Photoprotein ............................................................... 7, 154 Pipetting workstation ................................................. 34, 35 See also Robot PKA ........................................................................ 225–236 See also Enzyme Planar patch clamp..........................194, 198, 201, 202, 212 See also Electrophysiology Plate 1536-well .................................................... 40, 146, 190 384-well ..............18, 20, 26, 40, 42, 66, 111, 117, 132, 136, 148, 150, 151, 156, 159, 161, 164, 165, 168, 169, 170, 173, 175, 176, 177, 179, 183 n.4, 193, 197 96-well ....................................18, 40, 66, 149, 160, 164 barcode ............................................................ 39, 46, 47 cherry picking........................................................ 46, 67 daughter ................................................ 66, 67, 228, 231 hotel ................................................................ 36, 43, 44
  microtiter .......................................... 6, 118, 146, 148
  mother .......................................................... 66, 228
  plastic ............................................ 161, 164, 170, 183
  transport system ............................................... 42, 43
Plate gripper .......................................... 35, 36, 42, 44  See also Robot
Plate reader ............. 7, 8, 40, 41, 42, 43, 116, 117, 131, 132, 137, 141 n.1
Plate sealer .......................................................... 44, 45  See also Robot
Plate washer ........................................ 43, 44, 150, 255  See also Robot
Polarization ............ 7, 13, 15, 114, 121, 122, 127–142
PolarScreen ............................................................... 138  See also Assay
Population Patch Clamp (PPC) ...... 193, 195, 198, 203, 206  See also PPC
Port-a-Patch ..................................................... 201, 202  See also Reader
Positive ............ 21, 22, 23, 24, 26, 71, 80, 92, 96, 135, 139, 149, 156, 163, 228
Power of an assay ........................................................ 93  See also Statistical analysis
PPC ........................................ 193, 195, 198, 203, 206  See also Electrophysiology
Precision radius ................................................... 73, 100  See also Statistical analysis
Preincubation ...................................... 16, 17, 18, 20, 27
Prescription drugs
  amlodipine .......................................................... 189
  Gleevec ................................................................ 225  See also Imatinib mesylate
  Iressa
  lidocaine .............................................................. 188
  Nexavar™  See also Sorafenib tosylate
  Norvasc® .............................................................. 189  See also Amlodipine
  Tegretol® ............................................................. 189  See also Carbamazepine
  Valium  See also Diazepam
  Zarnestra™  See also Tipifarnib
Price-Supplier Score .................................................... 61
Primary cells .............................................. 201, 240, 241
Probenecid .......................................................... 148, 154
Process ............ 2, 3, 19, 21, 23, 35, 38, 46, 49, 51, 52, 53, 58, 60, 61, 64, 65, 66, 68, 71, 74, 75, 77, 78, 81, 82, 91, 92, 103, 109, 129, 131, 141, 154, 184–185, 189, 193, 228, 230, 244, 256
Product/Sum ratio (PSR) .............. 226, 228, 229, 230, 280  See also PSR
Protease .................... 9, 20, 110, 111, 114, 120, 131, 226  See also Enzyme
Q
QC ............... 2, 25, 72, 74, 75, 76, 77, 79, 82, 93, 131, 137, 252, 256
QPatch™ HT ...................................................... 198, 200  See also Reader
QPlate ..................................... 198, 199, 216, 218, 219
QSAR .......................................................................... 71
Quality Assurance and Quality Control ....... 2, 25, 72, 74, 75, 76, 77, 79, 82, 93, 131, 137, 252, 256
Quality control (QC)
  spatial uniformity correction .................. 151, 156, 157
  troubleshooting ........................................... 23, 35, 71
  See also QC
Quencher .................................................. 113, 114, 137
R
R-848 ......................................................................... 241
Radiolabel ligand ................................................ 145, 190
Radioligand ............................................... 145, 190, 211
Radioligand binding .................................. 145, 190, 211  See also Assay
Raw data ..................................... 23, 49, 50, 76, 94, 96
Reader
  absorbance ....................................................... 24, 49
  AlphaScreen ............................................ 7, 8, 9, 226
  Analyst AD ........................................ 132, 133, 136
  Apatchi-1™ ......................................................... 192
  automated patch clamp .................. 9, 192, 209–222
  CyBi®-Lumax ...................................................... 154
  FLIPR ........ 9, 146, 147, 148, 149, 150, 151, 152 n.1, 154, 155, 156  See also Fluorometric Imaging Plate Reader (FLIPR)
  FlipTip ....................................................... 200, 201
  FlyScreen® ................................................. 200, 201
  IonWorks ................................................................ 8
  IonWorks® HT ........................................... 194, 195
  IonWorks® Quattro™ ........................................... 205
  OpusXpress® 6000A ............................................ 203
  Patchbox ..................................................... 201–202
  Patchliner NPC-16 .............................................. 198
  PatchXpress ..................... 8, 198, 199, 200, 205, 210
  Port-a-Patch ....................................................... 202
  QPatch™ HT .............................................. 198, 200
  Robocyte ............................................................. 203
Reagent
  addition .................. 15, 121, 123, 135, 154, 155, 157
  management ............................................. 60, 61, 62
  selection ................................................................ 58
Receptor
  concentration .................................................. 16, 17
  GPCR ........... 3, 4, 6, 7, 8, 9, 145, 146, 147, 149, 155, 157
  NR ..................................................... 131, 136, 137
  relationship between IC50 and Ki ...... 11, 12, 16, 120
  TLR .................................................... 241, 242, 251
Recombinase .............................................................. 108
Registration ..................................... 57, 58, 59, 60, 65
Relative fluorescence units (RFU) ..................... 133, 151  See also RFU
REMP ........................................................................... 46  See also Manufacturer
Resiquimod ............... 241, 246, 249, 250, 251, 252, 253
RFU ................................................................... 133, 151
RNA polymerase .......................................................... 27
Robocyte .................................................................... 203  See also Reader
Robot
  anthropomorphic arm ............................... 35, 36, 39
  Biomek 2000 ........................................................ 22
  cherry picking ................................................. 46, 66
  dispenser ..................................................... 148, 169
  hotel ............................................................... 43–44
  linear translation ................................................... 36
  liquid handling ......... 7, 19, 21, 22, 23, 28, 64, 75, 83, 121, 150, 172, 174, 175, 176, 199, 200, 221, 228, 230, 231, 233
  Multidrop ........... 148, 169, 170, 228, 229, 230, 232, 246, 248
  pipetting workstation ....................................... 34, 35
  plate gripper ...................................... 35, 36, 42, 44
  plate sealer ...................................................... 44, 45
  plate washer ......................... 43, 44, 150, 255 n.11
  shelf ............................ 43, 44, 56, 57, 58, 65, 206
  washer ........................................... 40, 43, 44, 150
Robust statistics ........................... 78, 97, 98, 99, 100
Row marker ........................................................ 227, 232
R-score ................................................................... 73, 94  See also Statistical analysis
Running median procedure ........................................... 87  See also Statistical analysis
S
Sample
  integrity ......................................................... 64, 65  See also Compound
Sample management ............................................ 57, 64
SAR ....................................................................... 14, 92  See also Structure Activity Relationship (SAR)
SB203580 ........ 164, 165, 166, 167, 168, 169, 172, 174, 175
SBS  See also Society for Biomolecular Sciences (SBS)
Scheduling ................................... 2, 17, 38, 39, 42, 46
Scheduling software ............................................. 42, 46
Scintillation proximity assay (SPA) ....... 7, 8, 9, 13, 14, 17, 34, 112  See also SPA
Screening Quality Control ........................ 74, 77, 79, 82
SDI .................................. 69, 95, 99, 100, 101, 102  See also Statistical analysis
Second messenger
SEL ....................................................................... 90, 91  See also Statistical analysis
Separation buffer ....................................... 227, 232, 236
Shelf ....................... 43, 46, 56, 57, 58, 64, 65, 206  See also Robot
Signal to background ....... 2, 12, 13, 18, 21, 23, 72, 97, 154  See also Statistical analysis
Signal Window ........ 24, 25, 73, 134, 135, 138, 140, 141, 165, 171, 172, 174, 175, 243, 249, 252  See also Statistical analysis
Society for Biomolecular Sciences (SBS)  See also SBS
Software ........ 34, 35, 36, 38, 39, 42, 46, 47, 48, 49, 51, 53, 57, 58, 62, 76, 77, 78, 79, 81, 82, 84, 103, 151, 156, 161, 164, 167, 170, 171, 176, 177, 179, 180, 193, 198, 200, 201, 202, 212, 220, 231, 232
Solution ........ 7, 8, 12, 21, 34, 42, 46, 54, 57, 58, 64, 107, 114, 118, 120, 122, 124, 127, 128, 129, 130, 148, 150, 153, 161, 169, 170, 171, 183, 190, 192, 193, 197, 199, 200, 202, 204, 211
Sophion Bioscience ............................ 192, 198, 211, 218  See also Manufacturer
Sorafenib tosylate
SPA .............................................. 7, 8, 9, 13, 14, 17, 34  See also Assay
SPC ....................................................................... 74, 82  See also Statistical analysis
SSMD .......................................................................... 93  See also Strictly standardised mean difference (SSMD)
Standard Deviation ......... 23, 51, 70, 72, 73, 78, 92, 93, 95, 96, 97, 98, 99, 101, 102, 122, 137, 231, 233, 234, 251  See also Statistical analysis
Standard Deviation of Inactives .......................... 95, 99  See also SDI
Statistical analysis
  average ............................................................... 231
  CV ................................................................ 97, 184
  distribution ........................................................... 96
  EC100 ................................................................ 250
  EC50 ......... 138, 160, 164, 167, 172, 183, 192, 250
  error .............................................. 75, 89, 92, 94
  Gaussian distribution ................................... 99, 234
  IC50 ......... 11, 12, 15, 110, 121, 138, 164, 171, 230
  Intra-Plate ...................................... 77, 78, 81, 85
  MAD ..................................................................... 88
  mean ........................... 49, 50, 70, 76, 87, 92, 96
  MSR, see Minimum Significant Ratio (MSR)
  normal distribution ............................... 70, 73, 96
  normalised values ................................................. 94
  outlier ........................................................... 73, 233
  power of an assay .................................................. 93
  precision radius ........................................... 73, 100
  R-score .......................................................... 73, 94
  running median procedure ..................................... 87
  SDI, see Standard Deviation of Inactives (SDI)
  SEL ............................................................... 90, 91  See also Systematic Error Level (SEL)
  signal to background ....... 2, 12, 13, 18, 21, 23, 72, 97, 154
  Signal Window ......... 24, 25, 73, 134, 135, 138, 141, 145, 171, 172, 175, 243, 249, 252
  spatial uniformity correction ................ 151, 156, 157
  SPC ............................................................... 74, 82  See also Statistical Process Control (SPC)
  SSMD ................................................................... 93
  standard deviation ........ 23, 51, 69, 70, 72, 73, 78, 92, 93, 95, 96, 97, 98, 99, 101, 122, 137, 231
  threshold ....... 49, 51, 73, 80, 91, 94, 97, 161, 176, 180, 197, 226, 233
  VEP ............................................................... 88, 89  See also Variance Explained by the Patterns (VEP)
  Z’ calculation ...................................................... 236
  Z’ factor ................................................. 25, 26, 47
Statistical Cut-Off .................. 92, 94, 97, 98, 100, 103
Statistical evaluation ........................................... 21, 71
Statistical Process Control (SPC) ............... 71, 74, 82  See also SPC
Staurosporine ....................................... 227, 230, 231
Storage ........ 2, 40, 44, 46, 57, 64, 75, 149, 213, 222, 247  See also Compound
Storage (PBMC) ........ 2, 37, 40, 44, 46, 57, 64, 75, 149, 213, 222, 240, 241, 243, 244, 246, 247, 248, 249, 252, 254, 256
Strictly standardised mean difference (SSMD) ............ 93  See also SSMD
Structure Activity Relationship (SAR) .................. 14, 92  See also SAR
Sub-cellular distribution ............................................ 159
Substrate ........ 4, 6, 10, 11, 12, 13, 15, 20, 73, 109, 111, 114, 116, 119, 120, 124, 138, 160, 193, 195, 198, 199, 200, 202, 205, 206, 226, 227, 229, 232
Subtract bias .............................................................. 156
Systematic Error Level (SEL) ................................ 90, 91  See also SEL
T
Target ........ 2, 3, 5, 9, 22, 27, 108, 109, 110, 116, 131, 132, 135, 139, 140, 146, 149, 152, 160, 162, 172, 181, 185, 187, 189, 191, 196, 206, 211, 226, 240, 254
Target-to-Lead effort ..................................................... 5
Target type .............................................................. 4, 9
TASK3 ........................................................................ 196
Tegretol® .................................................................... 189  See also Prescription Drugs
Temperature ........ 18, 19, 20, 28, 42, 44, 47, 50, 75, 83, 87, 120, 121, 124, 133, 135, 148, 150, 154, 161, 169, 170, 188, 229, 230, 232, 246, 248
Termination buffer ............................... 227, 229, 230, 232
Threshold ......... 49, 51, 72, 73, 80, 82, 91, 93, 97, 161, 176, 177, 179, 180, 181, 184, 197, 226, 233  See also Statistical analysis
Time-resolved fluorescence resonance energy transfer (TR-FRET) .......... 6, 8, 9, 77, 114, 116  See also TR-FRET
Tipifarnib
Tissue culture flask ............................................ 148, 169
Titration ........ 21, 115, 119, 132, 133, 134, 135, 136, 138, 139, 228
TLR ................................................. 241, 242, 251  See also Receptor
Toll-like receptor (TLR), see TLR
Tracking ........................................................... 65, 161  See also Compound
Transcreener PDE ....................................................... 138  See also Assay
Transfection .............................. 18, 148, 149, 152, 153
Transient Receptor Potential (TRP) ............................ 188
Transport system ........................................ 37, 42, 43
TR-FRET .......................................... 6, 8, 9, 77, 114, 116  See also Assay
Troubleshooting .............................................. 23, 35, 71
TRP channel ............................................................... 188  See also Ion Channel
Two-pore domain ion channel ..................................... 196  See also Ion Channel

U
Uncompetitive ........................... 11, 12, 16, 119, 139, 140
Use-dependent block ................................................... 189

V
Valium  See also Prescription Drugs
Valyl-tRNA synthetase ........................................... 13, 14  See also Enzyme
Variance Explained by the Patterns (VEP) ............. 88, 89  See also VEP
Velocity11 ..................................................................... 36  See also Manufacturer
Vendor ........ 34, 35, 38, 40, 42, 43, 46, 48, 52, 54, 228, 241
VEP ....................................................................... 88, 89  See also Statistical analysis
Verapamil .................................................................... 188
VGSC .......................................................................... 210  See also Ion Channel
Vmax .............................................................................. 16
Voltage-dependent block ............................................. 221
Voltage-gated sodium channel (VGSC) ............. 210, 211  See also VGSC
Voltage-sensing dye ........................................................ 8  See also Dye
Voltage steps .............................................. 194, 214, 215

W
Washer ..................... 40, 43, 44, 150, 243, 246, 248, 255  See also Robot
96-well
  Plate .................................................. 18, 40, 149, 160  See also Format
384-well
  Plate ........ 18, 20, 24, 26, 40, 66, 111, 117, 151, 161, 165, 174, 175, 176, 177, 179, 183, 197, 226, 229, 249, 255 n.11  See also Format
1536-well format .......................................................... 192

X
Xenopus oocyte ........................................................... 203

Z
Zarnestra™  See also Prescription Drugs
Z’ calculation .............................................................. 236
Z’ factor ................................................................. 25, 26  See also Statistical analysis