Statistical Techniques in Bioassay
Z. Govindarajulu
Statistical Techniques in Bioassay 2nd, revised and enlarged edition
5 figures and 12 tables, 2001
Basel · München · Paris · London · New York · New Delhi · Singapore · Tokyo · Sydney
Z. Govindarajulu PhD, Professor, University of Kentucky, Department of Statistics, Lexington, Ky. Fellow of the Institute of Mathematical Statistics, American Statistical Association, Royal Statistical Society (England), and the American Association for the Advancement of Science; member of the Bernoulli Society; member of the International Statistical Institute and member of the National Academy of Sciences (India).
Library of Congress Cataloging-in-Publication Data
Govindarajulu, Z.
Statistical techniques in bioassay / Z. Govindarajulu. – 2nd, rev. and enlarged ed.
p. cm. Includes bibliographical references and index.
ISBN 3805571194 (hardcover)
1. Biological assay – Statistical methods. I. Title.
QH323.5 .G68 2000
572′.36′0727 – dc21
00-063292
All rights reserved. No part of this publication may be translated into other languages, reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, microcopying, or by any information storage and retrieval system, without permission in writing from the publisher. Copyright 2001 by S. Karger AG, P.O. Box, CH–4009 Basel (Switzerland) Printed in Switzerland by Reinhardt Druck, Basel ISBN 3-8055-7119-4
Dedicated to the memory of my parents-in-law, Dr. and Mrs. Mahanand Gupta
‘Sound and sufficient reason falls, after all, to the share of but few men, and those few men exert their influence in silence.’
Johann Wolfgang von Goethe

‘Every man should use his intellect, not as he uses his lamp in the study, only for his own seeing, but as the lighthouse uses its lamps, that those afar off on the sea may see the shining, and learn their way.’
Henry Ward Beecher
Contents

Foreword
Foreword to the Second Edition
Preface
Addendum to the Preface

1 Introduction
  1.1 History of Bioassay
  1.2 Components of Bioassay
  1.3 Role of Statistics in Bioassay

2 Preliminaries
  2.1 Types of Biological Assays
  2.2 Direct Assays
  2.3 Ratio Estimates
  2.4 Asymptotic Distribution of Ratio Estimators √n₂(Ȳ/X̄ − ν/μ)
  2.5 Fieller's Theorem [Fieller, 1940, 1944]
  2.6 Behrens' Distribution
  2.7 Confidence Intervals for the Natriuretic Factor Assay
  2.8 Use of Covariates
  Appendix: Behrens-Fisher-Sukhatme Distribution

3 Algebraic Dose-Response Relationships
  3.1 Indirect Assays
  3.2 Dose-Response Relationship in Terms of Regression
  3.3 Preliminary Guess of the Regression Function
  3.4 Transformation Leading to Linear Relationships
  3.5 Nonlinear Regression
  3.6 Heterogeneity of Variance
  3.7 Maximum Likelihood Estimates of Parameters
  3.8 Maximum Likelihood: Iterative Scheme
  3.9 Estimation via the Relationship for the Standard Preparation
  3.10 Estimation of the Potency Based on the Standard Slope
  3.11 Estimation Based on Simultaneous Tests
  3.12 Generalized Logistic Regression Models
  3.13 Computer Programs
  Appendix: Regression Models

4 The Logit Approach
  4.1 Case when the Dose-Response Curve for the Standard Preparation Is Known
  4.2 Case when the Dose-Response Curve for the Standard Preparation Is Unknown
  4.3 Quantal Responses
  4.4 Linear Transformations for Sigmoid Curves: Tolerance Distribution
  4.5 Importance and Properties of the Logistic Curve
  4.6 Estimation of the Parameters
  4.7 Estimation of the Parameters in the Probit by the Method of Maximum Likelihood
  4.8 Other Available Methods
  4.9 Method of Minimum Logit χ²
  4.10 Goodness-of-Fit Tests
  4.11 Spearman-Karber Estimator
  4.12 Reed-Muench Estimate
  4.13 Dragstadt-Behrens Estimator
  4.14 Application of the Spearman Technique to the Estimation of the Density of Organisms
  4.15 Quantit Analysis (Refinement of the Quantal Assay)
  4.16 Planning a Quantal Assay
  4.17 Dose Allocation Schemes in Logit Analysis
  Appendix: Justification for Anscombe's Correction for the Logits lᵢ

5 Other Methods of Estimating the Parameters
  5.1 Case of Two Dose Levels
  5.2 Natural Mortality
  5.3 Estimation of Relative Potency
  5.4 An Optimal Property of the Logit and Probit Estimates
  5.5 Confidence Bands for the Logistic Response Curve
  5.6 Weighted Least Squares Approach
  5.7 Nonparametric Estimators of Relative Potency

6 The Angular Response Curve and Other Topics
  6.1 Estimation by the Method of Maximum Likelihood
  6.2 Alternative Method of Estimation
  6.3 Comparison of Various Methods
  6.4 Other Models
  6.5 Comparison of Maximum Likelihood (ML) and Minimum χ² Estimates (MCS)
  6.6 More on Probit Analysis
  6.7 Linear Logistic Model in 2 × 2 Contingency Tables
  6.8 Comparison of Several 2 × 2 Contingency Models
  6.9 Dose Response Models with Covariates

7 Estimation of Points on the Quantal Response Function
  7.1 Robbins-Monro Process
  7.2 Robbins and Monro Procedure
  7.3 Parametric Estimation
  7.4 Up and Down Rule
  7.5 Another Modified Up and Down Method

8 Sequential Up and Down Methods
  8.1 Up and Down Transformed Response Rule
  8.2 Stopping Rules
  8.3 Up and Down Method
  8.4 Finite Markov Chain Approach
  8.5 Estimation of the Slope
  8.6 Expected Values of the Sample Size
  8.7 Up and Down Methods with Multiple Sampling
  8.8 Estimation of Extreme Quantiles
  Appendix: Limiting Distribution of D

9 Estimation of ‘Safe Doses’
  9.1 Models for Carcinogenic Rates
  9.2 Maximum Likelihood Estimation of the Parameters
  9.3 Convex Programming Algorithms
  9.4 Point Estimation and Confidence Intervals for ‘Safe Doses’
  9.5 The Mantel-Bryan Model
  9.6 Dose-Response Relationships Based on Dichotomous Data
  9.7 Optimal Designs in Carcinogen Experiments

10 Bayesian Bioassay
  10.1 Introduction
  10.2 The Dirichlet Prior
  10.3 The Bayes Solution for Squared Error Loss
  10.4 The Alternate Bayes Approaches
  10.5 Bayes Binomial Estimators
  10.6 Selecting the Prior Sample Size
  10.7 Adaptive Estimators
  10.8 Mean Square Error Comparisons
  10.9 Bayes Estimate of the Median Effective Dose
  10.10 Linear Bayes Estimators of the Response Curve

11 Radioimmunoassays
  11.1 Introduction
  11.2 Isotope Displacement Immunoassay
  11.3 Analysis of Variance of the McHugh and Meinert Model
  11.4 Other Models for Radioimmunoassays
  11.5 Assay Quality Control
  11.6 Counting Statistics
  11.7 Calibration Curve-Fitting
  11.8 Principles of Curve-Fitting
  11.9 A Response Model using Nonlinear Kinetics

12 Sequential Estimation of the Mean Logistic Response Function
  12.1 Introduction
  12.2 Maximum Likelihood Estimation of Parameters
  12.3 Properties of the Spearman-Karber Estimate
  12.4 Fixed-Width Interval Estimation of θ
  12.5 Asymptotic Properties of Rule (12.19)
  12.6 Risk-Efficient Estimation of θ

References
Subject Index
Foreword
I am happy indeed to write this foreword. Raju and I were two of Richard Savage’s first thesis students at the University of Minnesota. My thesis was on certain aspects of quantal assay. Raju was interested and helpful. We have kept in touch and I am gratified to see this product of an interest partially fostered by me then and encouraged in the succeeding years. This is a book that has been needed for a long time. Every biostatistical consultant is faced, now and again, with a question on the design or analysis of a quantal assay. Quantal assays will always be an important tool in experimental biomedicine. This book presents the statistical aspects of quantal assay and analysis in a mathematically concise, rigorous and modern format. Coverage includes some material found at the present only in periodicals and technical reports. It will be a useful addition to the armamentarium of the biostatistical consultant, whether fledgling or veteran. I believe the book will also serve well as a text for a course that focuses on quantal assay, for students of statistics interested in biomedical consulting, or as a supplementary text for a more generally applied statistics course for such students. Byron Wm. Brown, Jr. Professor and Head Division of Biostatistics, Department of Family, Community and Preventive Medicine, Stanford University School of Medicine, Palo Alto, Calif.
Foreword to the Second Edition
The second edition of Professor Govindarajulu’s book on statistical aspects of biological assay will be of interest to students of this experimental area, to professional statisticians with an interest in research in this topic, to teachers in statistics and biology and to investigators in the biological and medical sciences who use bioassay in their work. Raju’s first edition was a valuable contribution to the topic and the second edition is a timely event. There is a fair amount of new material stemming from the recent statistical literature, some of it based on the author’s personal work. The material reflects recent modern trends in general applied statistical research and the efforts of statisticians to bring this work into practice in the biological and medical sciences. Examples are discussions of generalized logistic models, a response model using non-linear kinetics, additional discussion on design and planning, e.g. choices of dose levels, an additional section in the chapter on Bayes methods, and a new chapter on sequential estimation for the logistic model. The literature citations will also be of value to student, teacher, bioassayist and statistician. The book will have a proper place in the library of all applied statisticians and biologists who have a continuing interest in the subject of bioassay. Stanford, Calif., May 2000
Byron Wm. Brown, Jr. Professor of Biostatistics, Active Emeritus Division of Biostatistics Department of Health Research and Policy Stanford University, School of Medicine
Preface
Since the beginning of the 20th century, there has been a lot of activity in developing statistical methods for analyzing biological data. The development of the probit method is originally due to Gaddum and Bliss. The two important methods of analyzing biological data are: (1) the probit method, and (2) the logit method. Finney [1971] has written an exhaustive treatise on the probit method; Ashton [1972] a short monograph on the logit approach. Here we give equal importance to both the probit and the logit approaches and deal with other approaches to bioassay. Chapters first deal with direct and indirect assays. The logit approach is then covered. Further chapters focus on the angular response curve and other methods, while other chapters consider sequential methods. Readers will also find chapters devoted to estimation of low doses, Bayesian methods, and radioimmunoassays. There have been recent developments, especially in robust estimation methods in bioassay; however, these are not included in this book. More than 200 references are cited and given in a list at the end; this list is by no means complete. A basic course in statistical inference is all that is required of the readers. I shall appreciate readers drawing my attention to any shortcomings or errors found in this book. This book grew out of my lecture notes based on a course in bioassay given at the University of Kentucky during several summers. A quarter or semester’s course on bioassay can be taught out of this book. Selection of the appropriate chapters depends upon the emphasis of the course and the interests of the audience. I give special thanks to my students who were the involuntary ‘guinea pigs’ in the course I taught. I am thankful to Vicki Kenney, Debra Arterburn, Brian Moses and Susan Hamilton for the excellent typing of the manuscript. I thank the Department of Statistics for its support and other help. It is also a pleasure to thank the staff of S. Karger AG for generous help and excellent cooperation throughout this project. I thank Professors Byron Brown of Stanford University, Charles Bell of San Diego State University and Bartholomew Hsi of the University of Texas
at Houston for reading the manuscript in its early stages and making very helpful comments and suggestions. For generous permission to reproduce tables and/or to use material, my special thanks go to Dr. Margaret Wesley, American Association for the Advancement of Science, Association of Applied Biologists, the American Statistical Association, the Biometric Society, the Biometrika Trustees, the Biochemical Society, the Institute of Mathematical Statistics, the Royal Statistical Society, the Society for Industrial and Applied Mathematics, Charles Griffin Publishers, the Methuen Company, Cambridge University Press, the MacMillan Publishing Company, Freeman and Company, the Longman House (United Kingdom), Marcel Dekker Inc., MIT Press, and the University of California Press. Z. Govindarajulu
Addendum to the Preface
The present form constitutes a revision of the first edition of this book published in 1988. Among the changes, numerous typographical errors have been corrected, helpful elaborations have been provided wherever necessary in order to enhance the readability of the book, new sections, namely 3.12, 3.13, 5.7, 6.9, 7.5, 10.10, and 11.9 have been included in order to reflect the current developments in this area. Further, a new Chapter 12 entitled “Sequential Estimation of the Mean Logistic Response Function” has been added. 53 additional references that are cited in the book have been added to the list at the end. In addition to the people whom I thanked in the first edition, I would like to include the following people to the list: Mr. Rolf Steinebrunner, Product Manager of S. Karger Publishers and Ms. Esther Bernhard, Production Editor, S. Karger Publishers, for their patience and encouragement and cooperation in bringing out the second edition, Professor Byron Brown for writing a new foreword, Brian Moses for an excellent typing of all the revisions, and Yuhua Su for his assistance in preparing the subject index. Lexington, Ky., April 2000
Z. Govindarajulu
1
Introduction
Definition. Biological assays (bioassays, for short) are methods for estimating the potency of a drug or material by utilizing the reaction caused by its application to living experimental subjects. For example, how do pharmacists know that 6 aspirin tablets can be fatal to a child? Qualitative assessments of a material do not pose any great problems; quantitative assays of the material are our main concern.

Examples. (1) A study of the effects of different samples of insulin on the blood sugar of rabbits is not necessarily a bioassay problem. However, it becomes one if the investigator is interested in estimating the potencies of the samples on a scale of standard units of insulin. (2) A study of the responses of potatoes to various phosphate fertilizers would not be a bioassay; it is a bioassay if one is interested in using the yields of potatoes to assess the potency of a natural rock phosphate.

First, there is a physical quantity (like the weight of an animal) which we call factor X, whose levels or doses can be controlled. The effect of the factor X will be called the response. The relation between the dose and the response will be described by means of a graph or an algebraic equation.
1.1. History of Bioassay
Although bioassay is regarded as a recent development, the essentials of modern quantal response techniques were already used in early times. Emmens [1948] was the first to consider the statistical aspects of bioassay. Coward [1947] and Gaddum [1948] considered the biological aspects of the assay.
1.2. Components of Bioassay
The typical bioassay involves a stimulus (for example, a vitamin or a drug) applied to a subject (for example, an animal, a piece of animal tissue, etc.). The level of the stimulus can be varied, and the effect of the stimulus on the subject can be measured in terms of a characteristic which will be called the response. Although a relationship between stimulus and response (which can be characterized by means of an algebraic expression) might exist, the response is subject to random error. The relationship can be used to study the potency of a dose from the response it produces. The estimate of potency is always relative to a standard preparation of the stimulus, which may be a convenient working standard adopted in a laboratory. A test preparation of the stimulus, having an unknown potency, is assayed to find the mean response to a selected dose. Next we find the dose of the standard preparation which produces the same mean response. The ratio of the two equally effective doses is an estimate of the potency of the test preparation relative to that of the standard. We think of an ideal situation where the test and standard preparations are identical in their biologically active ingredient and differ only in the degree of dilution by inactive materials (such as solvents) to which they are subjected. The question is whether a relative potency estimated from one response or one species of subjects can be assumed to have even approximate validity for another response or species.
1.3. Role of Statistics in Bioassay
A statistician can make the following contributions: (1) advise on the general statistical principles underlying the assay method; (2) devise a good experimental design that gives the most useful and reliable results, and (3) analyze the data making use of all the evidence on potency. Here the design consists of choosing the number of dose levels at which each preparation is to be tested, the number of subjects to be used at each dose, the method of allocating subjects to doses, the order in which subjects under each dose should be treated and measured, and other aspects of the experiment. Needless to say, a sound design is as important as the statistical analysis of the data.
2
Preliminaries¹

¹ Finney [1971b] served as a source for part of this chapter.
2.1. Types of Biological Assays
Besides the purely qualitative assays, there are three main types of biological assays that are commonly used for numerical evaluation of potencies: (1) direct assays; (2) indirect assays based on quantitative responses, and (3) indirect assays based on quantal (‘all or nothing’) responses. Assays (2) and (3) are similar with respect to their statistical analysis. Both make use of dose-response regression relationships.
2.2. Direct Assays
Direct assays are of fundamental importance. A direct assay is based on the principle that doses of the standard and test preparations, sufficient to produce a specified response, are directly measured. The ratio of these two doses estimates the potency of the test preparation relative to the standard.

Definition. The potency is the amount of the standard equivalent in effect to one unit of the test preparation.

An example of a direct assay is the ‘cat’ method for the assay of digitalis. The standard or the test preparation is infused into the blood stream of a cat until the heart stops beating. The dose is equal to the total period of infusion multiplied by the rate. This experiment is replicated on several cats for each preparation and the average doses are obtained.

Example. Assay of Atrial Natriuretic Factor. The following table gives sodium excretions (µg/min) by male Sprague-Dawley rats. The atrial extract supernatant was injected intravenously (0.2 ml over 30–45 s). Sodium excretion rates for 15 min during the administration of the extracts are shown in table 2.1.
Table 2.1. Excretion by Sprague-Dawley rats [Chimoskey et al., 1984]

             Normal atrial extract    B10 14.6 atrial extract
             1.061                    0.406
             2.176                    1.490
             2.580                    0.594
             7.012                    1.064
             2.296                    1.332
             3.743                    4.172
             4.547                    0.917
Mean ± SEM   3.345 ± 0.745            1.425 ± 0.480

The potency of B10 14.6 atrial extract relative to the normal one is R_B = 3.345/1.425 = 2.347. That is, one unit of B10 14.6 extract is equivalent to 2.347 units of the normal extract.
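The summaries in table 2.1 are easy to check directly. The following is a minimal sketch using only the Python standard library; it recomputes the means, standard errors and the relative potency quoted above.

```python
# Recompute the table 2.1 summaries (a sketch; plain Python only).
from statistics import mean, stdev

normal = [1.061, 2.176, 2.580, 7.012, 2.296, 3.743, 4.547]
b10    = [0.406, 1.490, 0.594, 1.064, 1.332, 4.172, 0.917]

for name, xs in [("normal", normal), ("B10 14.6", b10)]:
    sem = stdev(xs) / len(xs) ** 0.5
    print(name, round(mean(xs), 3), "+/-", round(sem, 3))
# normal   3.345 +/- 0.745
# B10 14.6 1.425 +/- 0.480

print(round(mean(normal) / mean(b10), 3))   # relative potency R_B = 2.347
```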
2.3. Ratio Estimates
Let Xᵢ = μ + εᵢ (i = 1, ..., n₁) and Yⱼ = ν + ηⱼ (j = 1, ..., n₂). Then X̄ = μ + ε̄, Ȳ = ν + η̄ and

Ȳ/X̄ = (ν + η̄)/(μ + ε̄) = (ν/μ)(1 + η̄/ν)/(1 + ε̄/μ)
     = (ν/μ)(1 + η̄/ν)(1 − ε̄/μ + ε̄²/μ² − ···)
     = (ν/μ)(1 + η̄/ν − ε̄/μ − η̄ε̄/(μν) + ε̄²/μ² − ···).

Hence

E(Ȳ/X̄) = (ν/μ)[1 + σ₁²/(n₁μ²) − cov(ε̄, η̄)/(μν) + ···]
        = (ν/μ)[1 + σ₁²/(n₁μ²)]  if X and Y are independent,

and

var(Ȳ/X̄) = (ν/μ)²[var(η̄)/ν² + var(ε̄)/μ² − 2 cov(ε̄, η̄)/(μν) + ···]
          = (ν/μ)²[σ₂²/(n₂ν²) + σ₁²/(n₁μ²)]
          = σ²[1/(n₂μ²) + ν²/(n₁μ⁴)],

provided X and Y are independent and have a common variance σ². (Notice that we assume that |ε̄/μ| < 1.)
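The accuracy of this expansion is easy to see numerically. The sketch below (numpy is assumed to be available; μ, ν, σ and the sample sizes are arbitrary illustrative values) compares the simulated variance of Ȳ/X̄ with the approximation σ²[1/(n₂μ²) + ν²/(n₁μ⁴)].

```python
# Monte Carlo check of the approximate variance of a ratio of sample means.
import numpy as np

rng = np.random.default_rng(1)
mu, nu, sigma, n1, n2 = 5.0, 2.0, 1.0, 40, 40

X = rng.normal(mu, sigma, (100_000, n1)).mean(axis=1)   # 100,000 replicate X-bars
Y = rng.normal(nu, sigma, (100_000, n2)).mean(axis=1)   # 100,000 replicate Y-bars

approx = sigma ** 2 * (1 / (n2 * mu ** 2) + nu ** 2 / (n1 * mu ** 4))
print(np.var(Y / X), approx)   # the two numbers should be close
```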
2.4. Asymptotic Distribution of Ratio Estimators √n₂(Ȳ/X̄ − ν/μ)
Ȳ/X̄ − ν/μ = (μȲ − νX̄)/(X̄μ) = [(Ȳ − ν)μ − (X̄ − μ)ν]/(X̄μ),

so that

√n₂ (Ȳ/X̄ − ν/μ) = √n₂ (Ȳ − ν)/X̄ − (ν/μ) √n₂ (X̄ − μ)/X̄ ≈ √n₂ (Ȳ − ν)/μ − (ν/μ²) √n₂ (X̄ − μ).

Hence √n₂ (Ȳ/X̄ − ν/μ) is asymptotically normal with mean 0 and variance

σ₂²/μ² + (n₂/n₁)(ν²/μ⁴)σ₁²,

if X and Y are independent. If σ₁² = σ₂² = σ², then the asymptotic variance is

(σ²/μ²)[1 + n₂ν²/(n₁μ²)].

2.5. Fieller’s Theorem [Fieller, 1940, 1944]
Fieller’s theorem is not applicable in situations where more than one error mean square occurs.

Theorem. Let μ, ν be two unknown parameters and let ρ = μ/ν. Let a and b be unbiased estimators for μ and ν, respectively. Assume that a and b are linear in observations that are normally distributed. Let R = a/b be an estimate of ρ. Then the upper and lower confidence limits for ρ are

R_L, R_U = {R − gv₁₂/v₂₂ ± (ts/b)[v₁₁ − 2Rv₁₂ + R²v₂₂ − g(v₁₁ − v₁₂²/v₂₂)]^(1/2)}/(1 − g),

where g = t²s²v₂₂/b², s² is an error mean square having m degrees of freedom, the estimates of the variances and covariance of a and b are v₁₁s², v₂₂s² and v₁₂s², respectively, and t = t_(m, 1−α/2).

Proof. Consider a − ρb. Then E(a − ρb) = 0 and it has an estimated variance given by s²(v₁₁ − 2ρv₁₂ + ρ²v₂₂) with m degrees of freedom. Thus,

P[(a − ρb)² ≤ t²s²(v₁₁ − 2ρv₁₂ + ρ²v₂₂)] = 1 − α.

The two roots for ρ obtained by solving the quadratic in ρ within the probability statement yield the R_L and R_U given above.

This result will be used in setting confidence limits for the ratio of two means, two regression coefficients, or a horizontal distance between two regression lines (because the latter can be expressed algebraically in terms of the ratio of a difference of two means to a regression coefficient). When b is large in comparison to its standard error, we can approximately set g = 0 and obtain

R_L, R_U = R ± (ts/b)(v₁₁ − 2Rv₁₂ + R²v₂₂)^(1/2).

When g = 1, set g = 1 in the quadratic equation for 1/ρ and solve the resulting equation to obtain R_U = ∞ and R_L = (R²v₂₂ − v₁₁)/[2(Rv₂₂ − v₁₂)]. Also, often we will have v₁₂ = 0. The general formula is not valid when g exceeds 1.0.

Creasy [1954] and Fieller [1954] propose a solution to the problem of setting confidence intervals for the ratio of normal means using a fiducial approach. Fisher [1956] showed the simple relationship between the solutions of Creasy [1954] and Fieller [1954]. Fieller’s confidence intervals are always wider and thus are more conservative than the intervals based on Creasy’s fiducial distribution. However, James et al. [1974] point out that Fieller’s intervals are not conservative enough. The latter authors provide an interval estimation procedure which is similar to that of Creasy [1954] and is always wider than the Fieller confidence interval (see table 1 of James et al. [1974, p. 181]).
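Fieller’s limits are simple to compute once R, g and the v’s are available. The following sketch (Python; scipy is assumed to be available for the t quantile) implements the theorem as stated above. Applied to the natriuretic factor assay of table 2.1 (a = 1.425, b = 3.345, v₁₁ = v₂₂ = 1/7, v₁₂ = 0, pooled s² = 2.749 on 12 d.f.), it reproduces, up to rounding, the 90% interval near (0.09, 0.87) derived in section 2.7.

```python
# A sketch of Fieller confidence limits for rho = mu/nu.
from math import sqrt
from scipy.stats import t as t_dist   # scipy assumed available

def fieller(a, b, v11, v22, s2, df, conf=0.90, v12=0.0):
    t = t_dist.ppf(1 - (1 - conf) / 2, df)
    R = a / b
    g = t ** 2 * s2 * v22 / b ** 2
    disc = v11 - 2 * R * v12 + R ** 2 * v22 - g * (v11 - v12 ** 2 / v22)
    half = (t * sqrt(s2) / b) * sqrt(disc)
    lo = (R - g * v12 / v22 - half) / (1 - g)
    hi = (R - g * v12 / v22 + half) / (1 - g)
    return lo, hi

print(fieller(1.425, 3.345, 1/7, 1/7, 2.749, 12))   # roughly (0.09, 0.87)
```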
2.6. Behrens’ Distribution
Let a and b be independent and unbiased estimates of μ and ν which are linear combinations of observations that are normally distributed. Also assume that the estimates of var(a) and var(b) are

s₁²v₁₁  and  s₂²v₂₂,

where s₁² and s₂² are independent mean squares with m₁, m₂ degrees of freedom, respectively. Then the estimated variance of a − b is s²(a − b) = s₁²v₁₁ + s₂²v₂₂. Let

D = [a − b − (μ − ν)]/(s₁²v₁₁ + s₂²v₂₂)^(1/2).

Then D has Behrens’ distribution (D is also called Sukhatme’s D-statistic), which is tabulated by Finney [1971b, Appendix III]. Its distribution is defined in terms of the degrees of freedom m₁ and m₂ and the angle θ, where tan θ = (s₁²v₁₁/s₂²v₂₂)^(1/2). When θ = 0°, D has the t-distribution with m₂ d.f.; when θ = 90°, D has the t-distribution with m₁ d.f. For 0 < θ < π/2, the distribution of D lies between t_(m₁) and t_(m₂). When m₁ = m₂ = m, the value of D for any probability level is slightly less than the corresponding t-value with d.f. m, for all θ. The D-test provides a test of significance for the difference between two means, or two regression coefficients, whose variances are based on independent mean squares. For moderately large m₁, m₂, D tends to the standard normal variable in distribution.

Next, let a and b denote independent estimates of μ = ν. Then we consider the weighted estimate given by

ā = [a(s₁²v₁₁)⁻¹ + b(s₂²v₂₂)⁻¹][(s₁²v₁₁)⁻¹ + (s₂²v₂₂)⁻¹]⁻¹  and  s²(ā) = [(s₁²v₁₁)⁻¹ + (s₂²v₂₂)⁻¹]⁻¹.

Yates [1939] and Finney [1951] point out that (ā − μ)[(s₁²v₁₁)⁻¹ + (s₂²v₂₂)⁻¹]^(1/2) follows Behrens’ distribution with d.f. m₁ and m₂ and an angle θ defined by tan θ = (s₂²v₂₂/s₁²v₁₁)^(1/2). Hence one can test the significance of ā from a theoretical value μ and also set up a confidence interval for μ as

ā ± d_(1−α/2) [(s₁²v₁₁)⁻¹ + (s₂²v₂₂)⁻¹]^(−1/2),

where d_(1−α/2) denotes the tabulated value for the probability 1 − α/2. For incorporating the information about the magnitude of a − b into the distribution of ā, the reader is referred to Fisher [1961a, b].
2.7. Confidence Intervals for the Natriuretic Factor Assay
Let us assume that the sodium excretion rates are normally distributed with constant variance. The estimate of the common variance is s² = 7{0.745² + 0.480²}/2 = 2.749 and the pooled sample standard deviation is s = 1.658. Here we apply Fieller’s theorem (theorem 1) to set up confidence limits for ρ_B = μ/ν. Let R_B = a/b. Then the estimates of the variances and covariance of a and b are v₁₁s² = v₂₂s² = 2.749/7 = 0.393 and v₁₂ = 0. Thus, if 1 − α = 0.90, then t_(12, 0.95) = 1.78, g = (1.78)²(0.393)/(3.345)² = 0.11, and

R_L, R_U = {0.426 ± [1.78 × 1.658/(3.345√7)](1 + 0.426² − 0.111)^(1/2)}/(1 − 0.111)
         = {0.426 ± (1.78 × 1.658 × 1.034)/8.85}/0.889
         = {0.426 ± 0.345}/0.889 = 0.091, 0.867.

Remark. Notice that we assumed that the population variances for the two extracts are the same. If they are not the same, then one can apply the improvisation of Fieller’s theorem obtained by Finney [1971b, section 2.7]. For example, if the estimates of var(a) and var(b) are s₁²v₁₁ and s₂²v₂₂ with cov(a, b) = 0, then one can easily obtain the confidence limits for ρ = μ/ν to be

R_L, R_U = {R ± (D/b)[s₁²v₁₁ + R²s₂²v₂₂ − gs₁²v₁₁]^(1/2)}/(1 − g),

where g = D²s₂²v₂₂/b², D = D_(1−α/2) and tan θ = (1/ρ)(s₁²v₁₁/s₂²v₂₂)^(1/2). Note that D depends on θ, which is a function of ρ. So numerical evaluation of R_L and R_U requires interpolation or iteration. First set θ = π/4 and find D from the table. Then compute R_L and R_U. Using the value of R_L in place of ρ, find θ. With this θ, evaluate R_L. Repeat this process until no appreciable change occurs in R_L. Do a similar iteration for R_U.

2.7.1. Dilution Assays
In many assays, the test preparation behaves like a dilution (or concentration) of the standard preparation. An analytical dilution assay is one in which an analysis is made of the effective constituent of a preparation against a standard preparation which has in common all the constituents of the preparation except the one which has an effect on the response of the subjects. A comparative dilution assay is one in which the two preparations may look alike qualitatively, although they are not the same. For instance, two insecticides of related but different chemical compositions may, in their effects on an insect species, behave as though one were a dilution of the other.

For preparations A and B, let X_A and X_B denote equivalent doses and suppose that B behaves like a dilution of A by a factor ρ, the relative potency. Hence ρX_A = X_B, and then ρ² var(X_A) = var(X_B). Unless ρ is close to unity, we cannot assume homogeneity of variances. However, by taking logarithmic values we have ln X_A + ln ρ = ln X_B, which preserves the homogeneity of variances for log doses. It also seems reasonable to assume normality for the distribution of log dose, since it can vary from −∞ to ∞. The advantages of doing the analysis on log doses are:
(1) All variance estimates can be pooled, which enables one to have more precise estimation.
(2) Since the estimate of the relative potency is obtained as the antilog of the difference of two means rather than the ratio of two means, confidence intervals can be calculated from simple standard error formulae without the use of Fieller’s theorem.

Example. Natriuretic factor assay with log sodium excretion rates (table 2.2). One can perform an F test for the hypothesis of equality of variances:

F = (3.369/6)/(2.209/6) = 1.525,

which is not significant. Thus the pooled estimate of the variance is s² = (2.209 + 3.369)/12 = 0.465 and s = 0.682.
Table 2.2. Sodium excretion (logarithmic values)

              Normal atrial extract    B10 14.6 atrial extract
              0.059                    −0.901
              0.777                     0.399
              0.948                    −0.521
              1.948                     0.062
              0.831                     0.287
              1.320                     1.428
              1.514                    −0.087
ΣXᵢ           7.397                     0.667
X̄             1.057                     0.095
Σ(Xᵢ − X̄)²    2.209                     3.369
Let M_B = ln R_B = X̄_B − X̄_A = 0.095 − 1.057 = −0.962, where A denotes the normal extract, and

s²(M_B) = s²(1/N_B + 1/N_A) = 0.465(1/7 + 1/7) = (0.364)².

Hence, assuming that (M_B − ln ρ_B)/s(M_B) is distributed as Student’s t with 12 degrees of freedom, we obtain for the 90% confidence interval

M_L = −0.962 − 1.78(0.364) = −1.610,  M_U = −0.962 + 1.78(0.364) = −0.314.

Hence R_B = 0.382, R_(L,B) = exp(M_L) = 0.200, R_(U,B) = exp(M_U) = 0.731.
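This log-dose calculation is short enough to verify directly. The sketch below uses only the Python standard library; the t value 1.78 is the one quoted in the text.

```python
# Sketch of the log-dose analysis of section 2.7.1.
from math import exp, log, sqrt

ln_normal = [log(v) for v in [1.061, 2.176, 2.580, 7.012, 2.296, 3.743, 4.547]]
ln_b10    = [log(v) for v in [0.406, 1.490, 0.594, 1.064, 1.332, 4.172, 0.917]]

xa = sum(ln_normal) / 7
xb = sum(ln_b10) / 7
s2 = (sum((v - xa) ** 2 for v in ln_normal) + sum((v - xb) ** 2 for v in ln_b10)) / 12
m  = xb - xa                      # M_B = ln R_B
se = sqrt(s2 * (1 / 7 + 1 / 7))
t  = 1.78                         # t_{12, 0.95}, as used in the text
print(exp(m), exp(m - t * se), exp(m + t * se))   # about 0.38 and the limits (0.20, 0.73)
```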
2.7.2. Precision of Estimates
Consider the natriuretic factor data (without the ln transformation). We can assume that A and B have the same variability (since the F statistic is 2.41, which is not significant). Then the pooled variance is s² = 2.749 and the estimate of var(X̄_A) = var(X̄_B) is s²/7 = 2.749/7 = 0.393. Hence the estimate of

var(R_B) = (s²/X̄_A²)(1/7 + R_B²/7) = (2.749/3.345²)(1/7 + 0.426²/7) = 0.041,  s(R_B) = 0.204.

Thus ρ̂_B = 0.426 ± 0.204.
The confidence interval for ρ_B is 0.426 ± z_(1−α/2)(0.204). If α = 0.10, then R_L = 0.09 and R_U = 0.762.

Remark. Notice that the sample sizes are fairly small for the confidence interval based on the normal approximation to be meaningful.
2.8. Use of Covariates
In the example of the atrial natriuretic assay, we have considered sodium excretion due to injection of atrial extracts. It may be unreasonable to assume that the response is independent of body weight or the weight of the heart. A simple rule would be to adjust the dose in proportion to body weight; however, if such a rule is adopted, one has to administer extreme doses to some human subjects, which is prohibited by the medical profession. Moreover, for a wide range of body weights, proportional adjustment may be an oversimplification of the weight-tolerance relationship. Thus, a better procedure might be to carry out a regression (linear or nonlinear) analysis using body weight (or heart weight, etc.) as a covariate. However, if the precision of the estimates is not appreciably improved by the regression analysis, then this method should be abandoned.
Appendix: Behrens-Fisher-Sukhatme Distribution
Let X₁, X₂, ..., X_(m+1) be a random sample from normal (μ, σ²), and Y₁, Y₂, ..., Y_(n+1) be a random sample from normal (ν, τ²), and assume that the X and Y values are mutually independent. We wish to test H₀: μ = ν against H₁: μ > ν. Let X̄ and Ȳ denote the sample means and let s = s(X̄) and s′ = s(Ȳ). Consider T = (X̄ − μ)/s and T′ = (Ȳ − ν)/s′. Define

D = (Ȳ − X̄)/(s² + s′²)^(1/2).

If tan θ = s/s′, then under H₀,

D = [T′s′ + ν − Ts − μ]/(s² + s′²)^(1/2) = (T′ − T tan θ) cos θ = T′ cos θ − T sin θ,  when μ = ν.

Hence

P[D > d | H₀] = ∬_(t′ cos θ − t sin θ > d) f_m(t) f_n(t′) dt dt′ = ∫₋∞^∞ [∫_v^∞ f_n(t′) dt′] f_m(t) dt,

where v = (d + t sin θ)/cos θ, and f_k(t) denotes the density of the central t distribution with k degrees of freedom. Carrying out numerical integrations, Sukhatme [1938] has tabulated the upper 5% points of D for m, n = 6, 8, 12, 24, ∞ and θ = 0°, 15°, 30°, 45°. When θ = 0, D = T_n, and when θ = 90°, D = T_m, where T_k denotes a random variable having Student’s t distribution with k d.f.

Questions
(1) Is the distribution of D symmetric about 0 when H₀ is true? (It is easy to show that P(D < −d | H₀) = P(D > d | H₀).)
(2) Can the characteristic function be found?
(3) Is it easy to evaluate the distribution of D when m = n?
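When a table of Behrens’ distribution is not at hand, the representation D = T′ cos θ − T sin θ derived above makes it easy to approximate its percentage points by simulation. A minimal sketch (numpy is assumed; the argument values in the example call are illustrative):

```python
# Sketch: upper percentage points of Behrens' D by Monte Carlo simulation.
import numpy as np

def behrens_upper_point(m, n, theta_deg, prob=0.05, reps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.radians(theta_deg)
    T  = rng.standard_t(m, reps)          # t with m d.f.
    Tp = rng.standard_t(n, reps)          # independent t with n d.f.
    D = Tp * np.cos(theta) - T * np.sin(theta)
    return np.quantile(D, 1 - prob)

# theta = 0 gives the t_n point and theta = 90 the t_m point, in line with the text.
print(behrens_upper_point(6, 12, 30))
```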
3
Algebraic Dose-Response Relationships¹

¹ Finney [1971b, chapter 3] served as a source for part of this chapter.
3.1. Indirect Assays
Direct assays have their shortcomings, especially the bias introduced by the time-lag in producing the response. Even if this difficulty is overcome, it is not easy to administer the precise dosage that produces the desired response. For instance, it is almost impossible to determine individual tolerances of aphids (i.e. insects that live on plants by sucking their juice) for an insecticide. In an indirect assay, specified doses are each given to a set of experimental units and the resulting responses are recorded. The assay is said to have a quantal response if the response recorded is ‘all or nothing’, such as whether or not a death results. Other responses could be a change in the weight of a certain organ or the length of the subject’s survival; in such cases, the response is quantitative. Quantal responses are related to direct assays in this sense: if only the dosage that produced each subject’s ‘death’ is recorded and all other responses are deleted, then the quantal response becomes the response of a direct assay. However, quantitative assays are mathematically complicated. It will be more informative to establish a relationship between dose and the magnitude of the response produced by the dose. If this relationship is established for several preparations, equally effective doses may be estimated and, by suitable statistical procedures, the precision of the estimates of the relative potencies may be established.
3.2. Dose-Response Relationship in Terms of Regression
Let z denote the dose and U the response it induces. Then, if u = E(U), we assume that there exists a functional relationship between u and z given by

u = h(z),    (3.1)
where h(z) is a single-valued real function of z for all doses in the range of interest. If S and T are abbreviations for standard and test preparations, then the regression relationships for the two preparations can be denoted by

u_S = h_S(z)  and  u_T = h_T(z).    (3.2)
From (3.2) one can obtain the doses z_S and z_T that would induce the average response u. In other words, the doses z_S and z_T are equally effective. There may be some situations where z_S and z_T may not be determinable or may not be determined uniquely. However, in the case of analytical dilution assays these uncertainties disappear, since the two regression curves must be similar in shape. The relative potency z_S/z_T as a function of u can be plotted as a curve, or an analytical expression can be obtained. In analytical dilution assays we make the following assumptions: (1) the response is produced solely by the factor x and by nothing else, such as an impurity; (2) the response to be analysed is produced solely by the presence of factor x and not by any other substance; (3) the test preparation behaves like a dilution or a concentration of the standard preparation in a completely inert diluent. If the two preparations contain the same effective constituent (or the same effective constituents in fixed proportions) and all other constituents are without effect on u, so that one behaves like a dilution of the other, the potency ratio z_S/z_T must be independent of u, that is,

z_S/z_T = ρ,    (3.3)

hence

h_T(z) = h_S(ρz)  for all z,    (3.4)
where ρ is a constant; this is the condition of similarity that is satisfied by all dilution assays. In summary, the average response induced by a standard preparation has a dose-response regression function u = h(z). A test preparation has the dose-response regression function u = h(ρz), where ρ denotes the potency of the test preparation relative to the standard, namely, the number of units of effective constituent of the standard preparation equivalent to 1 unit dose of the test preparation. An estimate R of ρ is sought from observations made on dose-response tests.

Remark. If the data arising from a series of tests cannot be described by the same form of regression function for both preparations, then either the assumption of similarity is incorrect or the conditions of testing have been different for the two preparations. It may happen that over a wide range of responses the two functions are nearly the same, although at the extremes they may differ significantly. Then the condition of similarity holds reasonably well. In a comparative dilution assay, its validity need not hold under conditions other than those of its estimation. However, in an analytical dilution assay, the estimate of potency should be independent of the experimental method and the assay technique employed.
3.3. Preliminary Guess of the Regression Function
We should have some idea of the form of h(z) before we estimate the unknown parameters occurring in h(z). A preliminary investigation involving the past data that have accumulated over the years will be needed in order to establish the form of h(z), which may require the observed response at several values of z. Some people would separate the assay from this investigation involving the determination of h(z). For most of the assays that we encounter in practice, we assume that h(z) is strictly monotone (either increasing or decreasing). However, we focus our attention on strictly increasing functions; that is, if z₂ > z₁ then h(z₂) > h(z₁). It is quite possible that a nonmonotonic h(z) can occur provided the condition of similarity holds; however, the dose of the standard preparation equivalent in potency to some given dose of the test preparation is then not unique. Thus, we may confine ourselves to the range of doses over which the function h(z) is monotone.

3.4. Transformation Leading to Linear Relationships
Let us assume that

u_S = h(α + βz^λ),    (3.5)

where α, β and λ are unknown parameters, the form of h is completely specified, and h is monotone. The limiting case of (3.5), when λ = 0, is given by

u_S = h(α + β log z).    (3.6)

This can be justified as follows. The rate of change in α + βz^λ is

d(α + βz^λ)/dz = λβz^(λ−1).

The ratio of this rate of increase to its value at z = 1 is z^(λ−1), which is increasing in λ for z > 1, decreasing in λ for z < 1, and has a discontinuity at λ = 0. (To see this point more clearly, consider (λ − 1) log z and study its behavior.) However, the rate of increase in α + β log z is βz⁻¹, and the ratio of this rate of increase to its value at z = 1 coincides with lim_(λ→0) z^(λ−1). In this sense (3.6) is a limiting form of (3.5) when λ → 0.

Equations (3.5) and (3.6) can easily be transformed into linear relationships. Let x = z^λ or x = log z; then

u_S = h(α + βx).    (3.7)

Now set Y_S = h⁻¹(u_S), yielding

Y_S = α + βx.    (3.8)

Then, the relation for the test preparation would be

Y_T = α + βρ^λ x    (λ ≠ 0),
Y_T = α + β(log ρ + x)    (λ = 0).    (3.9)
For λ ≠ 0, the lines intersect at x = 0, and R, the estimate of ρ, is the (1/λ)-th power of the ratio of the slopes of the two linear equations. If λ = 0, the lines are parallel and R is equal to the antilog of the difference of the intercepts of the two linear equations divided by the slope. In the literature, x = z^λ is called the dose metameter and y = h⁻¹(u) is called the response metameter. Thus, if λ is known, since h is a known function, the usual method of estimating potency from data on two preparations is to express doses and responses metametrically and to fit two straight lines subject to the restriction that the lines either intersect at x = 0 or are parallel. Thus, the value of ρ will be a function of λ. Hence, the preliminary investigation consists of determining the form of h and the value of λ. The most common values of λ are 0 and 1; other simple values of λ are 1/2, 1/3, −1, −1/2. Potential candidates for h⁻¹(u) are u, log u, 1/u or u^(1/2). From tests with a large dose range, the relation between u and x would be determined empirically by plotting u, the mean response, against x and fitting a regression relationship by eye or by some other procedure. If h(z) = z and λ = 0, then the metametric regressions given by

Y_S = x,  Y_T = log ρ + x    (3.10)

yield the standard regression, in which case α = 0 and β = 1.

3.5. Nonlinear Regression
Emmens [1940] proposes that a logistic function might provide a good fit to the regression of a quantitative response on dose when the range of responses is too great for a simple linear regression of u on x = log z to hold. We can write this function as

u = ½H[1 + tanh(α + βx)],  where tanh θ = [exp(2θ) − 1]/[exp(2θ) + 1].    (3.11)

Emmens [1940] finds (3.11) to be a good representation for a number of data sets. If the constant H is known or could reasonably be estimated from preliminary investigations, the metametric transformation

y = tanh⁻¹[(2u − H)/H]    (3.12)

reduces equation (3.11) to the form of equation (3.8). Typically H will be unknown and hence must be estimated, for instance by the method of maximum likelihood. Also one could use

u = H ∫₋∞^(α+βx) (2π)^(−1/2) exp(−t²/2) dt = HΦ(α + βx),    (3.13)
3
Sometimes an adjustment to the regression equation will improve matters. For instance, we can take u_S = α + βx + γx², which yields

u_T = α + β(log ρ + x) + γ(log ρ + x)²    (λ = 0),    (3.14)
u_T = α + βρ^λ x + γρ^(2λ) x²    (λ ≠ 0).    (3.15)

However, the estimation of the parameters α, β, γ, λ may be troublesome. Consider the following example.

Example. Table 3.1 pertains to the proportion surviving, Y, of the beetle Tribolium castaneum at four densities (x denotes the number of eggs per gram of flour medium).
Table 3.1. Comparison of fitness characters and their responses to density in stock and selected cultures of wild type and black Tribolium castaneum [Sokal, 1967]

                          Density
                          x₁ = 5/g    x₂ = 20/g    x₃ = 50/g    x₄ = 100/g
Survival in proportions   0.775       0.862        0.730        0.640
                          0.725       0.844        0.725        0.585
                          0.875       0.800        0.725        0.584
                          0.775       0.763
                          0.875
Xᵢ = log₁₀ xᵢ             0.699       1.301        1.699        2.000
nᵢ                        5           4            3            3
Σⱼ Yᵢⱼ                    4.025       3.269        2.18         1.809
Ȳ                         0.805       0.817        0.727        0.603
Since X̄ = Σnᵢ Xᵢ/Σnᵢ, we obtain

X̄ = {5(0.699) + 4(1.301) + 3(1.699) + 3(2.0)}/15 = 19.796/15 = 1.3197,
Ȳ = (4.025 + 3.269 + 2.18 + 1.809)/15 = 11.283/15 = 0.752.

The fitted regression equation is Y = 0.929 − 0.134X, with s_a = standard error of the intercept = 0.049 and s_b = standard error of the slope = 0.035.
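The fit, and the sums of squares that appear in table 3.2 below, can be checked with a few lines of code. The following sketch (numpy is assumed to be available) reproduces the coefficients and the regression, lack-of-fit and pure-error sums of squares.

```python
# Check of the Tribolium fit (a sketch).
import numpy as np

dens = {0.699: [0.775, 0.725, 0.875, 0.775, 0.875],
        1.301: [0.862, 0.844, 0.800, 0.763],
        1.699: [0.730, 0.725, 0.725],
        2.000: [0.640, 0.585, 0.584]}

x, y, gmean = [], [], []
for X, ys in dens.items():
    m = sum(ys) / len(ys)
    for v in ys:
        x.append(X); y.append(v); gmean.append(m)
x, y, gmean = map(np.array, (x, y, gmean))

# ordinary least-squares fit of Y on X = log10(density)
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()
print(a, b)                      # about 0.929 and -0.134

# sums of squares: regression, deviation from linearity, pure error
fitted = a + b * x
ss_reg = np.sum((fitted - y.mean()) ** 2)
ss_dev = np.sum((gmean - fitted) ** 2)
ss_err = np.sum((y - gmean) ** 2)
print(ss_reg, ss_dev, ss_err)    # close to 0.0673, 0.0323, 0.0260 of table 3.2
```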
Table 3.2. Anova table for Tribolium castaneum data

Source                             d.f.   Sum of squares   Mean square   F value   Prob > F
Linear regression                   1     0.0673           0.0673        28.45     0.0002
Deviation from linear regression    2     0.0323           0.0162         6.82     0.012
Error                              11     0.0260           0.0024
Total                              14     0.1256
From table 3.2 we infer that the linear fit is significant and the deviation from the linear fit is not significant at the 1% level of significance.

3.6. Heterogeneity of Variance
If the variance of the response U for specified z depends upon u = E(U), a transformation is necessary before the analysis is performed. If

σ²(U) = ψ(u),  E(U) = u,

then the proposed transformation is

U* = ∫^U ψ^(−1/2)(v) dv.

Then dU* = dU/ψ^(1/2)(U). Hence, as a first approximation, σ²(U*) = σ²(U)/ψ(u) = 1. Bartlett [1937] considers such transformations. The transformed response U* will have homoscedastic variances. In the following we shall consider some special cases of ψ:
(1) σ²(U) = u implies that U* = 2U^(1/2);
(2) σ²(U) = u² implies that U* = ln U;
(3) σ²(U) = u(H − u)/(Ha) implies that U* = sin⁻¹[(U/H)^(1/2)], with a^(−1/2) = 2H^(1/2).
Using the test of Bartlett [1937], given by

χ²_(k−1) = [f ln s² − Σᵢ fᵢ ln sᵢ²]/c,

where

c = 1 + [3(k − 1)]⁻¹[Σᵢ (1/fᵢ) − 1/f],  f = Σᵢ₌₁^k fᵢ,  k = 4,

we obtain

c = 1 + (1/9)[(1/4 + 1/3 + 1/2 + 1/2) − 1/11] = 1 + 1.4924/9 = 1.1658.

Hence

χ²₃ = [77.4320 − 11(6.0470)]/1.1658 = 10.9152/1.1658 = 9.36,  and  P(χ²₃ > 9.36) = 0.025.
Table 3.3. Test for homoscedasticity on the Tribolium castaneum data

Dose level   Ȳᵢ      d.f. = fᵢ   Sample variance sᵢ²   ln sᵢ²
0.699        0.805   4           0.0045                −5.4037
1.301        0.817   3           0.0020                −6.2215
1.699        0.727   2           0.83 × 10⁻⁵           −11.6952
2.000        0.603   2           0.001027              −6.8811

Σfᵢ = 4 + 3 + 2 + 2 = 11;  s² = Σfᵢsᵢ²/Σfᵢ = 0.0260145/11 = 0.002365;  ln s² = −6.0470;  Σfᵢ ln sᵢ² = −77.4320.
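Bartlett's statistic for the four variances in table 3.3 can be recomputed in a few lines. The sketch below uses plain Python; small discrepancies from the text come only from the rounding of the tabulated sᵢ².

```python
# A sketch of Bartlett's test for the Tribolium variances of table 3.3.
import math

f  = [4, 3, 2, 2]                                # degrees of freedom f_i
s2 = [0.0045, 0.0020, 0.83e-5, 0.001027]         # sample variances s_i^2

ftot = sum(f)
s2_pooled = sum(fi * v for fi, v in zip(f, s2)) / ftot
c = 1 + (sum(1 / fi for fi in f) - 1 / ftot) / (3 * (len(f) - 1))
chi2 = (ftot * math.log(s2_pooled) - sum(fi * math.log(v) for fi, v in zip(f, s2))) / c
print(c, chi2)   # about 1.17 and 9.4, in line with the values in the text
```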
So, at the 1% level of significance we accept the null hypothesis of homogeneity of variance. Since the response is bounded, the angular transformation given by case (3) may be appropriate. So, using the transformation Z = arcsin √Y, we obtain the data in table 3.4.

Table 3.4. Survival data in degrees

Density
x₁ = 5/g    x₂ = 20/g    x₃ = 50/g    x₄ = 100/g
61.68       68.21        59.69        53.13
58.37       66.72        58.37        49.89
69.30       63.44        58.37        49.82
61.68       60.84
69.30

We also obtain the following table:

Dose = log₁₀ x   Z̄       d.f.   sᵢ²            ln sᵢ²
0.699            64.07   4      0.007498       −4.8930
1.301            64.80   3      0.003309       −5.7112
1.699            58.81   2      0.105 × 10⁻⁴   −11.4633
2.000            50.95   2      0.001083       −6.8282

Further computations yield:

s² = Σfᵢsᵢ²/Σfᵢ = 0.003828,  ln s² = −5.5654,  Σfᵢ ln sᵢ² = −73.2887.
Hence

χ²₃ = [73.2887 − 11(5.5654)]/c = 12.0688/1.1658 = 10.35,  P(χ²₃ > 10.35) = 0.015.

So, at the 1% level of significance, we still accept the hypothesis of homogeneity of variances. The arcsine transformation stabilizes variances if there are large numbers of values close to 0 and 1. Next, let us fit the linear regression on the transformed data. We obtain the estimated regression line (when the response is in radians) as

Z = 1.2591 − 0.1529x,    (3.16)

with s_a = standard error of the intercept = 0.0582 and s_b = standard error of the slope = 0.0412. The following Anova table results:

Table 3.5. Anova table for the transformed data

Source                             d.f.   Sum of squares   Mean square   F value   Prob > F
Linear regression                   1     0.0876           0.0876        24.28     0.00005
Deviation from linear regression    2     0.0432           0.0216         5.99     0.018
Error                              11     0.0397           0.0036
Total                              14     0.1711
From table 3.5 we infer that the linear fit is significant and the deviation from the linear fit is not significant at 1% level of significance.
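For reference, the same least-squares arithmetic applied after the angular transformation reproduces (3.16). A self-contained sketch (numpy assumed):

```python
# Refit after the variance-stabilizing transformation Z = arcsin(sqrt(Y)), in radians.
import numpy as np

surv = {0.699: [0.775, 0.725, 0.875, 0.775, 0.875],
        1.301: [0.862, 0.844, 0.800, 0.763],
        1.699: [0.730, 0.725, 0.725],
        2.000: [0.640, 0.585, 0.584]}

x = np.array([X for X, ys in surv.items() for _ in ys])
Z = np.arcsin(np.sqrt(np.array([v for ys in surv.values() for v in ys])))
b = np.sum((x - x.mean()) * (Z - Z.mean())) / np.sum((x - x.mean()) ** 2)
a = Z.mean() - b * x.mean()
print(a, b)   # roughly 1.259 and -0.153, matching (3.16)
```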
3.7. Maximum Likelihood Estimates of Parameters
There are several methods of estimating the parameters available in the literature, namely: (1) the method of moments; (2) the method of maximum likelihood; (3) the method of minimum χ², and (4) the method of generating best asymptotically normal (BAN) estimates. Among these, we prefer the method of maximum likelihood because (a) maximum likelihood estimators (MLEs) are asymptotically unbiased, (b) they are consistent, (c) they are asymptotically normal, (d) they are asymptotically efficient, (e) they are functions of the sufficient statistic, (f) they are invariant in the sense that if g is a continuous function of θ, the unknown parameter, then the MLE of g(θ) is equal to g evaluated at the MLE of θ, and (g) the MLE can be interpreted as the mode of the posterior density of θ when an (improper) uniform prior density is assumed for θ.
3.8. Maximum Likelihood: Iterative Scheme
Here we provide iterative schemes for obtaining maximum likelihood estimates that can be carried out with a hand calculator. Of course, these can be programmed on a computer if so desired. Suppose we observe the series of pairs of observations (xᵢ, Uᵢ), where x = log dose for a certain preparation (say, the standard one). Since we will be dealing with arithmetic means of responses, it is not unreasonable to assume a normal distribution for the response. Let U be distributed as normal (u, σ²). Then the log likelihood function is²

L = constant − Σ(Uᵢ − uᵢ)²/2σ²,    (3.17)
3.19
Suffix 1 on a and b indicates that the partial derivatives are evaluated at a1 and b1 . Consider @L/@a D s2 Ui ui @ui /@a. 3.20 i
In iterative solutions of mle, one can replace the second partial derivatives of L by their expected values since the expected values can be tabulated and by the strong laws of large numbers, the 2nd derivatives converge to their expected values. By substituting Ui D ui after differentiating (3.20), we obtain @ui /@a2 E@2 L/@a2 D s2 E@2 L/@a@b D s2 @ui /@a@ui /@b and @ui /@b2 . 3.21 E@2 L/@b2 D s2 2 We assume that all the responses are arranged in terms of the elements of a single vector. That is, not all x values are distinct and there may be several U values at the same dose level xi .
Maximum Likelihood: Iterative Scheme
21
Write @u/@y D h0 y where y D a C bx
3.22
and define the weight function W by W D [h0 y]2 .
3.23
Then, one can rewrite (3.21) as E@2 L/@a2 D s2 Wi @y/@a2 D s2 Wi E@2 L/@a@b D s2 Wi xi and Wi x2i . E@2 L/@b2 D s2
3.24
Corresponding to the initial estimates a1 and b1 the expected value of the response metameter Y is given by y1 where y1 D a1 C b1 x.
3.25
Using (3.23)–(3.25) in (3.19) we have da1 W1i C db1 W1i xi D W1i Ui u1i /h0 y1i , da1
W1i xi C db1
i
W1i x2i
D
[W1i xi Ui u1i /h0 y1i ].
3.26
i
Now add W1i y1i D W1i a1 C b1 xi to both sides of the first equation and W1i xi y1i to both sides of the second equation in (3.26). If the working response Y is defined by
Y D y C U u[h0 y]1
3.27
Equation (3.26) can be rewritten as a2 W1i C b2 W1i xi D W1i Y1i a2 W1i xi C b2 W1i x2i D W1i xi Y1i .
3.28
Now (3.28) implies that y2 D a 2 C b2 x
3.29
can be obtained as the weighted linear regression equation of Y1 on x. Now x1 D W1i xi W1i , 3.30 Y1 D W1i Y1i W1i , and 2 D Wi x2i W i xi Wi , xx
D
W i xi Y i
W i xi
W i Yi
Wi .
xY
3
Algebraic Dose-Response Relationships
22
Then b2 D
x1 ,Y1
x1 x1
a 2 D Y 1 b2 x1 .
3.31
Now the process can be iterated with a2 , b2 replacing a1 and b1 . The iteration can be stopped when one approximation does not differ appreciably from the immediately preceding one. Remark. Notice that the working response Y given by (3.27) does not coincide with h1 (U), unless hu D u. However, we always have EY D y D h1 u, u D EU. U represents the original response, or the value resulting after a scedasticity transformation. Unless otherwise stated, Y denotes the working response defined by (3.27).
Finney [1949b, 1952] extended the method of iterated maximum likelihood estimation to quantal responses. Towards the variances, we have varY D s2 Wi var b D s2 3.32 xx
denote the stage of iteration (that is, they where Wi D limj Wj,i i D 1, . . . and j are the limiting values.) Also, Wi and xx will be close to the values obtained at the last iteration. The unknown s2 will be replaced by the estimate s2 based on the residual sum of squares, namely, the sum of deviations of the individual values of U about the estimated regression equation. In summary, it is desirable to have a table of values of the metametric transformation that is used, i.e. a table of values of y D h1 u is required. By a table of the weighted coefficients given by equation (3.23), the minimum working response y0 D y u[h0 y]1 ,
3.33
and the range A D 1/h0 y
3.34
as functions of y can be made. Notice that the suffixes indicating the stage of iteration are dropped in equations (3.33) and (3.34). First we read the empirical response corresponding to each observed U namely, Y D h1 U from the table and then plot these against x. A straight line is drawn by eye through these points. Some allowance for unequal weights can be made when positioning the line. Expected responses y are read from the line which correspond with the observed x values. From the second table, the weighting coefficient for each y is read and the corresponding working response is formed as Y D y0 C UA.
3.35
Then the weighted regression of Y on x is calculated. Now with this new set of values for expected responses, a second iteration can be performed.
Maximum Likelihood: Iterative Scheme
23
This process of iteration is stopped when the pair of coefficients ar1 , br1 differs very little from the pair ar , br and the last set is regarded as the mle values of a and b. The process of iteration converges rapidly and it should not take more than two or three iterations, provided that the initial values, a1 , b1 are chosen judiciously. For quantitative responses, if the original observations are homoscedastic or if a scedasticity transformation has been performed, then y D u,
h0 y D 1
and
W D 1.
Hence YDU and the iterative process described earlier simplifies to calculating the unweighted linear regression and clearly one cycle suffices. In several practical situations, one can reasonably assume homogeneity of variances and the linearity of the regression. Example The following provides (artificial) data on weight gain in pounds by piglets in a fixed duration of time when a standard preparation of a certain diet is administered to them. U D gain in weight dose level z D 1 unit z D 2 units 3 2 4 2 3
4 3 5 4 5
Let xi D log zi (i D 1, 2) and let y D h1 u D log u. Then the points (xi , yi ), i D 1, . . . , 10 are 0, 1.099, 0, 0.693, 0, 1.386, 0, 0.693, 0, 1.099, 0.693, 1.386, 0.693, 1, 099, 0.693, 1.609, 0.693, 1.386, 0.693, 1.609. One can plot these points and draw a rough line. Let us take this line to be the one passing through the points 0, y1 D 0, 0.994 and 0.693, y2 D 0.693, 1.63 where y1 y2 denotes the average of the five y-values corresponding to x D 0 (x D 0.693). Thus the approximate equation of the line is y D 0.994 C 0.929x i.e., a1 D 0.994, b1 D 0.929. We make a table of values of u1i D exp0.994 C 0.929xi , y1i D 0.994 C 0.929xi , W1i D expf20.994 C 0.929xi g, Y1i D y1i C Ui u1i /u1i , W1i xi Y1i and W1i x2i (i D 1, . . . , 10). Numerical computations yield
3
Algebraic Dose-Response Relationships
24
b2 D
8.80 133.49 91.6839229.568/168.8 D D 0.640 2 63.54 91.6839 /168.8 13.74
a2 D Y1 b2 x1 D 1.36 0.6400.5432 D 1.01. With these values of a2 and b2 , we prepare tables of u2i , y2i , w2i , Y2i , W2i xi Y2i , W2i x2i , and obtain b3 D
90.66 63.41169.55/129.15 D 0.593 43.95 63.4102 /129.15
and a3 D 1.31 0.5930.49 D 1.01. Proceeding to the third iteration, we obtain b4 D
85.33 59.42161.654/123.4 D 0.595 41.2 59.422 /123.4
and a4 D 1.31 0.5950.481 D 1.02. Thus the MLE’s of a and b are approximately 1.02 and 0.595, respectively.
3.9. Estimation via the Relationship for the Standard Preparation
Suppose that in an assay of a cod liver oil for its vitamin D content, 10 rats received a dose of 50 mg and the average degree of curing initial rickets was 2.10 or a score of 8.40 on the scale of quarter units. The equivalent log dose having an expected response of 8.40 is given by x D 8.40 5.89/17.14 D 0.146Ł . Thus, the 50 mg cod liver oil has the same effect as antilog 0.146 D 1.40 IU of vitamin D. In other words, 1 g of cod liver oil is estimated to contain 28.0 IU vitamin D. Remark. Due to random fluctuations in experimental conditions within a laboratory, it is not good to put too much faith in the estimate of the potency. Thus, in general, a response once determined cannot indefinitely be used. It should be emphasized that the response curves for the standard as well as the test preparation should be based on data gathered in the same laboratory under identical experimental conditions. Ł Finney [1971, pp. 71–75] fits a linear regression to the response (in quarter units) to the middle four closes with vitamin D. (Let Y be the response running from 0 to 24 in quarter units and x be the log dose.) Then he obtains
Y D 5.89 C 17.14x.
Estimation via the Relationship for the Standard Preparation
25
3.10. Estimation of the Potency Based on the Standard Slope
Although the response regression might shift from day to day and under varied experimental conditions, it is not unreasonable to assume that the slope, namely the increase in response to per unit increase in log dose, may remain fairly constant. Also the foregoing method of estimating the standard slope should be restricted to linear regression relation between log dose and response. For instance, let a group of 8 rats receive 5 IU vitamin D each and show a mean response of 12.25. Also use the data on 10 rats that received 50 mg of cod liver oil and yielded an average response of 8.40 (table 3.6). If xS , xT denote log doses of S and T, respectively, and YS and YT the corresponding mean responses, the logarithm of the estimate of relative potency is (see p. 15) M D xS xT YS YT /b D aT aS /b, YS D aS C bxS
where
and
YT D aT C bxT Hence R D antilog M. For the above data M D log 5 log 50 3.85/17.14 D 1 0.225 D 2.775, yielding R D 0.0596. That is, 1 g of the cod liver oil is estimated to contain 59.6 IU vitamin D. Table 3.6. Hypothetical assay of vitamin D Response to 5 IU S
n y
y
50 mg T
15 10 18 6 9 14 12 14
4 10 12 7 5 5 9 14 10 8
8 12.25 98
10 8.40 84
y 1/2 24 50 mg T
Equivalent angles D Arcsin 5 IU S 52 40 60 30 38 50 45 50
8 45.62 365
24 40 45 33 27 27 38 50 40 35 10 35.90 359
Reproduced from Finney [1971b, p. 93].
3
Algebraic Dose-Response Relationships
26
Finney [1971, p. 77] fits a linear regression to log dose on the transformed angles for the vitamin D data and the fitted line is Y D 27.34 C 47.95x. Then 9.72 D 2.797 yielding 47.95 R D 62.7 i.u. per g, which is not a significant change from 59.6.
M D log 5 log 50
3.11. Estimation Based on Simultaneous Tests
In order to overcome the objections to estimating the potency via the standardcurve or standard-slope methods one can conduct simultaneous experiments on both the preparations under identical conditions, using two or more doses of each preparation. The required quantities can be evaluated from the present assay data. The present assay, because of a fewer number of observations made, leads to estimates of regression coefficients that are less precise; however, they will be current and upto-date. Empirical responses for both the standard and test preparations should be plotted against dose levels x and two preliminary regression lines be drawn subject to the constraint that either they intersect at x D 0 or they are parallel. The weighted regression calculations are carried out in order to improve the approximations to the estimates of all parameters, maintaining the constraint. Finally, r is estimated either from the ratio of the slopes or from the difference of the intercepts of the parallel lines on the response axis. If simultaneous trials are carried out on 3 or more doses of each preparation, then a test for deviations from linearity can be constructed. 3.12. Generalized Logistic Regression Models
Vølund [1978] has proposed a four parameter logistic model which is applicable to the free-face fat cell bioassay of insulin [Moody, Stan, Stan and Gliemann, 1974]. Healy [1972] also applied this model of immunoassay. In the following we will briefly describe the model. The usual logistic regression model in bioassay is given by EY D f1 C ebxm g1 exppx D . expbx C expbm According to Waud [1972] it can be expressed as EY D xl /xl C zl ; x > 0
3.36
with x D log z, l D b and m D log x. Vølund [1978] considers a slightly more general model which has been used in immunoassays and by Finney [1976] which is given by EY D q1 C q2 zl zl C xl 1 , x > 0
Generalized Logistic Regression Models
3.37
27
where x may be interpreted as ED50 . For z << x, the above model becomes a linear function of x D zl : EY D q1 C q2 xl x
3.38
and for z >> x it takes the approximate linear form in zl : EY D q1 C q2 xl x1 .
3.39
Thus the model (3.37) approximates the slope ratio model (3.9) in the low as well as the high dose region. For z/xl ³ 1, (3.37) becomes a linear function of x:
1 1 D q1 C q2 1 C ellog xx EY D q1 C q2 1 C x/zl D q1 C q2 2 l log x/4 C q2 lx/4
3.40
after taking the first two terms in the expansion of the function [1 C ellog xx ]1 around x D log x. Thus, the logistic model (3.37) approximates the parallel line model over a dose range region around x or ED50 . By plotting EY given by (3.37) and the approximations (3.38)–(3.40) with q1 D 0 and q2 D 1 against t D l logz/x, one surmises that the values of t corresponding to equal deviations between the models (3.37) and (3.40) on one hand and (3.37) and (3.38) or (3.39) on the other are approximately š1.405. Hence, the four-parameter logistic model can be fitted to assays in which a systematic sigmoid dose response curve is suitable. If only the low and high dose regions are represented, the slope ratio models (3.38) or (3.39) may be the best choice, since the data do not lend themselves to fitting (3.37). Similarly, if only the middle region is used, the parallel line model given by (3.40) is preferred. Statistical Analysis of Assays The analysis of the logistic model (3.37) based on large simple properties of the method of maximum likelihood is given by Vølund [1978] and will be outlined. Let nSi experimental units be administered the dose ZSi of the standard preparation S and yS,i,j denote ith response of the experimental unit (j D 1, . . . , nSi , i D 1, . . . , kS ). Also let Ns D ns1 C Ð Ð Ð C nsKs (the total number of experimental units given the standard preparation S. Analogously define nTi , ZTi , yT,i,j and NT for the test preparation T. Further, let ySi [yTi ] denote the mean of yS,i,j (j D 1, . . . , nSi ) [yT,i,j , (i D 1, . . . , nTi )]. If yS,i,j and yT,i,j have common variance s2 , this can be unbiasedly estimated by s2 where s2 D SSD/NS C NT kS kT
3.41
and SSD D
kS nS,i iD1 jD1
yS,i,j yS,i 2 C
nT,i kT
yT,i,j yT,i 2 .
3.42
iD1 jD1
The next step is to fit the model (3.37) to the standard and test data separately and estimate the parameters. If we assume that the nS,i and nT,i are sufficiently
3
Algebraic Dose-Response Relationships
28
large, then the yS,i is approximately normally distributed with mean mS,i D q1S C l l l q2S zS,iS /zS,iS C xSS and variance s2 /nS,i (i D 1, . . . , kS ). Similar statement holds for the distribution of yT,i . Then yS,i , yT,l , SSD, i D 1, . . . , kS and l D 1, . . . , kT are jointly sufficient for the unknown parameters. Also the MLE’s of the unknown parameters are obtained by minimizing SSRS D
kS
nS,i yS,i mS,i 2 .
3.43
iD1
Similarly the MLE’s of q1T , q2T , lT and xT are obtained by minimizing SSRT D
kT
nT,i yT,i mT,i 2 .
3.44
iD1
The similarity hypothesis HS , can be stated as HS : q1S , q2S , lS , xS D q1T , q2T , lT , rxT
3.45
where r denotes the potency of the test relative to the standard. It may be tested by a likelihood ratio (LR) or approximate F test. The MLE under HS are obtained by minimizing SSRST D
kS
2 nS,i yS,i q1 q2 ZlS,i xl C ZlS,i 1
iD1
C
kT
1 2 nT,i yT,i q1 q2 ZlT,i x/rl C ZlT,i .
3.46
iD1
The LR test of HS can be carried out using the following asymptotic c2 statistic with 3 degrees of freedom: c2 ³ SSRST SSRS SSRT /[SSD C SSRS C SSRT ]/NS C NT 8]. 3.47 When HS is accepted, a pooled estimate of s2 is obtained as s20 D SSD C SSRST /NS C NT 5.
3.48
ˆ the MLE of r as the estimated potency of the Then it is also appropriate to use r, test relative to the standard. The minimization of (3.43), (3.44) or (3.46) is carried out using the NewtonRaphson method. Initial estimates for the iteration scheme can be obtained as follows: (Ignore the indices S and T for the time being for the sake of simplicity). Set 1 1 q1 1 D minyi , q2 D maxfyi q1 g.
After excluding the extreme (average) responses, the remaining ones are transformed to logits: 1 1 log[yi q1 1 /q1 C q2 yiÐ ].
Generalized Logistic Regression Models
29
The slope and the intercept parameters of the linear equations between the logits and log zi are equal to l and l log x, respectively. These parameters can be esti1 1 mated by a weighted linear regression analysis using ni [yi q1 1 /q2 ] Ð [q1 C 1 1 j q2 yi /q2 ] as weights. Let t denote the jth iterated estimate of the parameter vector q D q1 , q2 , l, g, g D log x. Let Iz; q denote the logistic function given by (3.37). Let q0 denote the value of q that minimizes (3.43) or (3.44) which can be written as ni yi Izi ; q2 . SSRS D At step j, we have . Iz; q0 D Iz, tj C q0 tj @Iz; q/@qjqDtj . Thus djC1 D q0 tj can be estimated by a multiple linear weighted regression analysis with yi Izi , tj as the vector of the dependent variable, @Izi ; q/@qjqDtj as the matrix of the independent variables and ni as the vector of weights. Thus, the iteration proceeds from step j to step j C 1 with tjC1 D tj C djC1 . Vølund [1978, Appendix] suggests an improved estimate of q0 at step j C 1 to be tjC1 D tj C kdjC1 where k is given by j2 j1 j > SSRS > SSRS . k D 1 if SSRS 0 otherwise. This ad hoc value of k instead of k 1 seems to improve the convergence in terms of decreasing SSRS . The iterations can be continued until max 1 ts /tj1 < where D a preassigned number, say 103 . The final vector of estimates is denoted by t. The minimization in (3.46) is carried out in a similar fashion, the only difference being the incorporation of m D log r into the unknown parameter set. The initial estimate of m could be logxˆ S /xˆ T . However, when the test and standard have almost the same potency, starting value of m can be zero. Remark. While obtaining the ML estimates, the parameters xS , xT , x and r in (3.43), (3.44) and (3.46) can be replaced by exp gS , exp gT , exp g and exp m, respectively. This will not have any influence on the point estimates of the parameters. However, this transformation has been motivated by the estimation of the asymptotic covariance matrix of the parameter estimates. Considering the shape of the dose response curve of (3.37) it is conjectured the ˆ D log rˆ will be closer to their asymptotic distributions than distributions of gˆ D log xˆ and m ˆ are xˆ and r.
Covariance Matrix of the Estimates and Confidence Limits The asymptotic covariance matrix V of the MLE’s is the inverse of the matrix of the negative second-order derivatives of the log likelihood. That is, 1 V D s2 @2 12 SSR /@q2 jqDt where SSR is given by (3.43) or (3.44) and s2 by (3.42) or SSR is given by (3.46) and s2 D s20 given by (3.48). In the latter case, the estimate sˆ 2mˆ of the marginal ˆ from V can be used to set up an asymptotic confidence interval for m variance of m
3
Algebraic Dose-Response Relationships
30
ˆ Wˆsmˆ , m ˆ C Wˆsmˆ ] or for r by taking the antilog, where W is the appropriate as [m fractile of the standard normal or t distribution with NS C NT 5 d.f. The author has developed an APL computer program to carry out the above methods of iteration and are available from the author. These methods are applied to the data on free fat cell assay. 3.12.1. Concluding Remarks
In order to fit the logistic dose-response model (3.37), one requires at least four different dose levels. Two levels are needed to characterize the middle linear part of the dose-response curve in terms of the parameters x and l corresponding to LD50 and slope, respectively. The other two dose levels are required to estimate the parameters q1 and q2 . It would be difficult to determine the optimal number of doses and their distribution over the entire feasible dose range. One cannot, in general, answer the question whether the analysis based on the logistic model leads to a more precise potency estimate than the one that can be obtained with the slope ratio or parallel line assay analysis, because the comparisons will depend on the distribution of the dose levels. 3.13. Computer Programs
In the following, we will guide the reader to some computer programs that are available in the literature. Although SAS is a powerful statistical package, it does not cover a specialized area such as biological assay. Wang [1994] has developed a SAS program called SLOPE for analyzing bioassay in computer randomization based on the slope ratio method. This program calculates potencies and their corresponding confidence intervals for a group of bioassay, each assay having equal number of preparations. Wang [1996] also developed SAS programs for handling the statistical analysis for parallel line assays and assays based on quantal responses. These programs check the statistical validity of an assay and estimate the potencies of test preparation relative to a standard. An example in the probit procedure in SAS/STAT discusses how to test the parallelism between the dose-response curves for the standard and a test preparation. Wang’s [1996] SAS program QUA provides a check on the statistical validity of an assay using quantal responses and the required potencies. Further, a PROBIT procedure with option LACKFIT will perform two goodness of fit tests: a Pearson chi-square test and a log-likelihood ratio chi-square test. Wang [1996] provides an OUTPUT statement so that the results can be stored in an output data file for further use. Iznaga et al [1995] describe a program for symmetric parallel line analysis of bioassay with data with a logistic dose response relationship which will be studied in chapter 4. In a parallel line bioassay, the doses are identical so that the number of dose levels for the standard and test preparation are the same. Dose-response relationship is transformed into semi-log or log-log. The program calculates the regression lines for the dose-response curve for standard and test preparation and produces the estimate of potency of test preparation relative to the standard preparation based
Computer Programs
31
on parallel line bioassay methods. The program also provides chi-squared tests of the linearity of the two regression lines and the parallelism of the two lines. It also facilitates detection of ‘outliers,’ namely samples exhibiting non-parallelism. The authors first provide a review of the theory. They describe BIOASSAY, an IBM PC compatible program written in Turbo Pascal 6.0 which is proving to be a reliable and time-saving partner in performing parallel line bioassay.
Appendix: Regression Models
Linear Regression Model Let Yij D gi C eij or Yij D a C bxi C eij
j D 1, . . . , ni , i D 1, . . . , k
where not all x values are the same. We assume that Eeij D 0 and var eij D s2 . Hence we have ˆ aˆ D Y bx;
bˆ D
ni xi / ni and ni Yi D Yij ni i D 1, . . . , k,
where x D
ni xi xYi Y ni xi xYi D , ni xi x2 ni xi x2
ˆ i, ˜ i D aˆ C bx let Y
and n D n1 C Ð Ð Ð C nk .
1
We also have var bˆ D s2
ni xi x2
and ˆ var aˆ D var Y C x2 var bˆ 2x covY, b. Now
1 ˆ D n ni xi x2 cov ni Y i , ni xi xYi . covY, b
Hence,
nx2 s2 1C . var aˆ D n ni xi x2 Yij Yi 2 . Let sˆ 2 D n1 We wish to test
H0 : gi D a C bxi versus HA : the xi , gi are not collinear.
3
Algebraic Dose-Response Relationships
32
Consider
˜ i 2 D Yij Y
Yij Yi 2 C
k
˜ i 2 , ni Yi Y
iD1
that is, residual sum of squares D pure error sum of squares C lack of fit sum of squares. If we define F by ˜ i 2 /k 2 ni Yi Y F D Yij Yi 2 /n k then F, under H0 , is distributed as Snedecor’s F with degrees of freedom k 2 and n k, when the eij are independent and are normally distributed. Step-Wise Regression (Forward Selection Procedure) Let the model be Yij D hxi C eij j D 1, . . . , ni , i D 1, . . . , k. Assume that the eij are independent and normal 0, s2 . We wish to test H1 : hx D b1 H2 : hx D b1 C b2 x .. . Hk : hx D
k
bi xi1 .
iD1
In general, if model 1 involves m1 parameters and model 2 involves m2 parameters, with m1 < m2 , then the likelihood ratio for model 1 versus model 2 is D sup Lb1 . . . bm1 ; s2 / sup Lb1 . . . bm2 ; s2 D sˆ 21 /ˆs22 n/2 where L denotes the likelihood of the parameters, [Yij hˆ 1 xi ]2 D Q1 , nˆs21 D i
nˆs22 D
j
[Yij hˆ 2 xi ]2 D Q2 , n D
k
ni ,
1
and hˆ 1 and hˆ 2 are the least squares estimates of the regression function under models 1 and 2 respectively. According to the partition theorem (Cochran’s theorem, Lindgren [1976, p. 525]), Q1 /s2 and Q2 /s2 are c2 with n m1 and n m2 d.f. Also Q1 D Q1 Q2 C Q2 . We have the following facts from the general theory of linear models: (1) (2) (3)
Q1 Q2 /s2 is distributed as c2m2 m1 under model 1, Q2 /s2 is distributed as c2nm2 under model 2, Q2 and Q1 Q2 are independent under model 2.
Computer Programs
33
Then TD
Q1 Q2 /m2 m1 Q2 /n m2
d
D
Fm2 m1 ,nm2
when the null hypothesis, namely model 1, is true. Notice that the alternative hypothesis is that the true model is included in model 2, which is more complicated than model 1. In particular, if we wish to analyze the hierarchy of hypotheses given earlier, where H1 ² H2 ² Ð Ð Ð ² Hk , and H ² K denotes that model H can be deduced as a special case of model K. Then, we have the following Anova table: Source
Sum of squares
d.f.
Fitting H2 after H1 Fitting Hk after Hk1 Error
Q1 Q2 .. .. . . Qk1 Qk Qk
1 .. . 1 nk
Total (fitting H1 )
Q1
n1
Remark. If the predictor variables are more than 1, and one is interested in fitting a multiple linear regression, there are two ways of going about it: (1) backward elimination (BE) procedure and (2) forward selection (FS) procedure. Then the regressor variables need to be ordered in terms of their importance; for instance, with respect to their partial correlations with the predicted variable [Draper and Smith, 1981].
Problem For the following data obtain the MLE’s of a and b by the iterative scheme described in section 3.8. The (artificial) data pertains to weight gain in pounds by piglets in a fixed duration of time when a test preparation of a certain diet is administered to them. U D gain in weight dose level z D 1 unit z D 2 units 3 3 2 4 4
4 4 6 5 4
(Hint: Let x D loge z and y D h1 u D log u. Also, set a1 D 1.132 and b1 D 0.547.) Answer: aˆ D 1.63 and bˆ D 0.523 after the second iteration.
3
Algebraic Dose-Response Relationships
34
4 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
The Logit Approach1
An action curve in pharmacology describes the amount of the response to any physical or chemical stimulation expressed as a percentage of the maximum obtainable in that particular biological system. Also, the action curve is invariably a sigmoid, in the sense that the plot of the percentage of dead organisms against some function of dosage is S-shaped. The change in percentage kill per unit of the abscissa is smallest near mortalities 0 and 100% and largest near 50%. Thus, the dose mortality curve describes the variation in susceptibility among individuals of a population. One can reasonably expect this susceptibility to follow the cumulative normal curve. Then the question is what function of the dosage should be taken as the absissa. Typically, the dosages increase and decrease in equal additive increments. Galton [1879] points out that the variation between individuals in their susceptibility to biological material exhibits a geometrical rather than an arithmetical distribution and this has been confirmed by several investigators of toxic substances. Thus, the logarithm of the dosage can be viewed as an index to the inherent susceptibility of the individual to the drug or poison. The Weber-Fechner law [Clark, 1933] implies that the concentration of the poison in the dose is proportional to the amount of poison fixed by the tissues of the experimental animal; although there is no evidence to support such a relationship. Since the susceptibility of an animal can be viewed as the average susceptibility of its component cells, it is probable that the average susceptibility of an animal is normally distributed. Thus, Galton [1879] and others surmised that the tolerance curve of an experimental unit is approximately normal; that is, plotting the proportion of positive responses on normal probability paper against log-dose would result in a straight line. Several methods, both graphical and analytical, are available for fitting this line from which the EDp (effective dosage at which 100 p% respond) can be estimated for 0 < p < 1. The fixation of a drug or poison seems to be a phenomena of adsorption and two basic formulae describing this process were proposed by Freundlich [1922] and 1 Ashton
[1972] served as a source for part of this chapter.
35
Langmuir [1917]. (See, for instance, Bliss [1935, pp. 142–143] or Clark [1933, p. 4].) If x denotes the concentration of a drug, y denotes the amount fixed in this organism, m is the mass of adsorbing constituents within the organism, and k and n are constants, then Freundlich’s empirical formula is given by kx1/n D y/m. Since m will be constant from animal to animal, we have log x D n log y C K0 (where, typically n D 0.5) which establishes a linear relation between the log-dosage and the logarithm of the amount fixed by the cells of the animal. In several instances, Langmuir’s [1917] adsorption formula fits more satisfactorily the biological data on the fixation of drugs than the Freundlich’s formula. Langmuir’s hyperbolic adsorption formula is given by kxn D y/100 y [Suits and Way, 1961, pp. 95, 445], where x denotes the concentration of the drug, y the percentage of the maximum amount of drug which can be fixed by the cell, n is determined by the molecular state of the fixed drug as compared with its state before adsorption and is usually 1 or 2; and k is constant. However, y cannot directly be measured. Since the changes observed should be a direct result of the fixation of the drug by the cells, y is estimated by the response, namely, the percentage kill of animals at the dose level. Although Langmuir’s formula may cause some problems at the lower dose levels, in many cases, the log dose-mortality line agrees satisfactorily with higher kills. At lower dose levels, the straight line need to be bent up if it is to fit to the entire range of observations. For example, Shepard [1934] obtained a satisfactory linear fit for dose versus the ‘logit’, namely logarithm of the ratio of percent killed to percent surviving by means of the equation kxn D y/100 y
with
log k D 18.2
and n D 10.2.
As pointed out by Clark [1933, p. 5], ‘it must be remembered that a formula is merely a convenient form of a shorthand and is an aid to and not a substitute for reason’. Thus, the logit method has a scientific justification. Other nice features of the logit method are easy calculation of the parameter estimates and the theoretical appeal due to the existence of sufficient statistics for the unknown parameters. Hence, the logit method is emphasized in most of the recent literature on estimating EDp. In this chapter, we dwell upon the approach, based on the logistic distribution. 4.1. Case when the Dose-Response Curve for the Standard Preparation Is Known
Let x D log10 (dose) D log10 z. Then let the known dose response relation after a metametric transformation of the response be y D a C bx. Suppose we administer a dose of the standard xs to one group of subjects and a dose of the test preparation xt to another group. If YS and YT denote the
4
The Logit Approach
36
mean responses of the two groups of experimental units for the standard and test preparations, then xS D YS a/b,
xT D YT a/b.
So xT xS D YT YS /b gives (on the log scale) the excess potency of the test preparation over that of the standard and antilog xT xS gives the ratio of the number of standard units in the doses of the two preparations. Let XS and XT , respectively, denote the log doses of the standard and test preparations measured in milligrams. Then 10xs standard units are contained in 10Xs mg of the standard preparation and 10xt standard units are contained in 10Xt mg of the test preparation. Hence 1 mg of the standard (test) preparation contains 10xs Xs 10xt Xt standard units. Then, let M D log10 potency of test/potency of standard. The expression for M can be obtained in the following way. The equations of the parallel dose-response curves are given by Y YS D bX XS Y YT D bX XT . The horizontal difference between the lines yields the difference in log10 (dose) for equal responses, and thus log10 (potency ratio). When Y D 0 X D XS YS b1 and X D XT YT b1 , yielding M D XS XT C YT YS b1 . After using the formula . VarU/V D [EV2 var U C EU2 var V 2EUEVCovU, V]/EV4 , see p. 5, line 1 we obtain 2 . s 2 2 4 s2M D 2 n1 C n1 T C YT YS s b , where YT YS and s are independent, b S
s2 is the (known constant) variance of the response within a group of subjects receiving the same dose and s2b denotes the square of the standard error of b. And nS and nT denote the number of experimental units in the two groups. The second term goes to zero when either b is taken to be exact or when EYS and EYT are equal. As pointed out, using Fieller’s theorem, one can set up confidence intervals for the potency.
Case when the Dose-Response Curve for the Standard Preparation Is Known
37
4.2. Case when the Dose-Response Curve for the Standard Preparation Is Unknown
At least two dose levels of the two preparations are required for calculating the respective slopes. If the slopes are not significantly different, the pooled value of b will be used for both the lines. Let k dose levels be used and let XS D log10 (dose) for the standard preparation, XT D log10 (dose) for the test preparation, YS,i D mean response of nS,i units receiving the i-th dosage of the standard preparation, and YT,i D mean response of nT,i units receiving the i-th dose of the test preparation. Let the mean values be denoted by XS D
k
nS,i XS,i
nS,i ,
YS D
nS,i YS,i
nS,i .
1
and analogous definitions of XT and YT hold, where the summations are taken over the k levels. Then the equation of the two lines are Y YS D bXS XS Y YT D bXT XT , where nS,i YS,i XS,i XS C nT,i YT,i XT,i XT . bD nS,i XS,i XS 2 C nT,i XT,i XT 2 Then the potency ratio is given by M D XS XT C YT YS b1 and 1 nS,i XS,i XS 2 C nT,i XT,i XT 2 , s2b D s2 1 2 2 4 s2M D s2 b2 n1 S C nT C YT YS sb b nS D nS,i , nT D nT,i , where
s2 D sum of squared deviations from group means/
nS,i C nT,i 2 .
Note that as sb /b increases, the contribution made by the second term to SM increases. Example Consider the following (artificial) data pertaining to weight gain in pounds by piglets in a fixed duration of time when the standard and test preparation of a certain diet are administered to them.
4
The Logit Approach
38
Gain in weight, y standard preparation 1 unit
ni
Yi
Yi
2 units
test preparation 1 unit
2 units
3 2 4 2 3
4 3 5 4 5
3 3 2 4 4
4 4 6 5 4
5 14 2.8
5 21 4.2
5 16 3.2
5 23 4.6
Computations yield: Let log10 x D X XS D 0.1505,
XT D 0.1505
YS D 2.8 C 4.2/2 D 3.5 YT D 3.2 C 4.6/2 D 3.9 aS D 2.8, aT D 3.2,
SaS D 0.32,
Since aS D YS bxS b D 4.65,
SaT D 0.32
and aT D YT bxT ,
Sb D 1.23
M D 0.1505 0.1505 C 3.9 3.5/4.65 D 0.086 R D potency ratio D 100.086 D 1.219 s2 D 0.682,
s D 0.826 0.4 ð 1.23 2 2 2 0.6823 ð 2 sM D 4.65 C 10 4.65 D 4.652 [0.136 C .10582 ] D 4.652 0.1472 sM D 0.082. By the delta method, we obtain . sR D Rloge 10sM D 1.2192.30.082 D 0.23.
Case when the Dose-Response Curve for the Standard Preparation Is Unknown
39
4.3. Quantal Responses
If the percentage of subjects responding is plotted against log10 (dose), a sigmoid curve will generally be the result. A transformation is needed in order to obtain a straight line. If k dose levels are considered and the proportion of units responding at level i is pi i D 1, . . . , k, a plot of a suitable transformation t(p) of the pi against log10 (dose) will be an approximate straight line. The line can be fitted by the usual least-squares method. However, with quantal data, the points do not all have equal weights, even though the number of experimental units in each group is the same. This is the main difference between the case of quantal response and that of a quantitative response.
4.4. Linear Transformations for Sigmoid Curves: Tolerance Distribution
When the response is quantal (or binary) its occurrence or non-occurrence will depend upon the intensity of the stimulus administered. Tolerance is defined as the level of intensity below which the response does not occur and above which the response occurs. The tolerance varies from unit to unit and thus one can talk of the distribution of tolerances. If f(x) denotes the density of the tolerance distribution, then the proportion of people responding to a dose x0 is given by P where
x0
fx dx,
PD 0
or P can be interpreted as the probability that the unit chosen at random will respond to the stimulus at dose level x0 . Then one can look upon P, purely as a function of x0 satisfying certain postulates. Since P is zero for small x (and unity for large x and is strictly increasing in x), it acts like a distribution function. Then the models that are proposed in the literature are: (1)
Normal curve: aCbx
expu2 /2 du, 1 < x < 1,
1/2
P D 2p
1
probit P D normal deviate D a C bx.
(2)
Note: The term ‘probit’ was used by Bliss [1934] as an abbreviation for ‘probability unit’. Logistic curve: P D [1 C expa bx]1 ,
1 < x < 1.
The logit P is given by P logit P D ln D a C bx. 1P
4
The Logit Approach
40
(3)
The term ‘logit’ was used by Berkson which is an abbreviation of logistic unit. Some people call ln[P/1 P] as log odds. Fisher and Yates [1963] define logit P as 1/2 ln[P/1 P]. Sine curve: P D 1/2[1 C sina C bx],
p/2 a C bx p/2.
This curve, proposed by Knudson and Curtis [1947], allows for finite range of doses. One can have the following angular transformation: k D sin1 2P 1 D a C bx. (4)
The form given in Fisher and Yates [1963] is k D sin1 P1/2 . Urban’s curve: P D 1/2 C 1/p tan1 a C bx,
1 < x < 1.
This gives tan[2P 1p/2] D a C bx. Since dP D 1/p[1 C a C bx2 ], it is the Cauchy tolerance distribution. The last three curves have the general form: P D 1/2[1 C Fa C bx]. For the logistic Fa C bx D tan h[a C bx/2]. If the tails are ignored, the models in (1)–(4) look alike. For comparing the four models we set a D 0 since it denotes the origin of the scale. Also note that the curves are skew symmetric, that is Px D 1 Px, about the point x D 0, P D 1/2. Other transformations that are also used are P D 1 ebt so that ln1 P D bt. If ln1 P is plotted against t, then one should get an approximate straight line. The log transformation is given by ln[ ln1 P] D ln b C ln t. This will be useful if one is interested in estimating b1 /b2 rather than a single b.
4.5. Importance and Properties of the Logistic Curve
The logistic curve has been used as a model for growth. It is a special case of a general function considered by Richards [1959]. If y D [1 C expa bx]1
Importance and Properties of the Logistic Curve
41
then it satisfies the differential equation dy/dx D by1 y which enables one to give a more meaningful interpretation of the underlying phenomena. If y denotes the ‘mass’ or ‘size’ of some quantity, then the rate of change of this mass is proportional to the mass and a factor which decreases as the mass increases. Typically x denotes the time. Properties of the Logistic Curve The function y given above has all the properties of a cumulative distribution function; x D a/b is a point of inflection because d2 y/dx2 D b1 2ydy/dx vanishes at x D a/b; dy/dx achieves its maximum b/4 at x D a/b. The expected value of dy/dx is
1
by1 y dy D b/6 0
and is called the mean growth rate. Thus 6/b is called the average time required for the major part of the growth to be achieved.
4.6. Estimation of the Parameters
If P D [1 C expa bx]1 then the logit denoted by 1 is 1 D logit P D ln[P/1 P] D a C bx. Special graph papers are available for fitting a straight line to the logit data. Pearl [1924], Schultz [1930] and Davis [1941] provide some simple methods of estimating the parameters a and b. Oliver [1964] gives a review of the above three early methods of estimation. Also by writing expa bx D exp[bx m] where m D a/b, one infers that m denotes the median effective dose or the median lethal dose denoted by LD50 or ED50 . Its estimate is c D a/b. c is somewhat insensitive to the choice of origin on the x-axis and is invariant under scale changes. Method of Maximum Likelihood Berkson [1957b] extended Fisher’s [1935] iterative method of maximum likelihood to estimate the parameters of the logistic curve. Let Ri denote the number of experimental units responding to the i-th dosage level, namely xi . Then n r n r r n r PRi D ri jxi D i Pi i Qi i i D Ci Pi i Qi i i , Qi D 1 Pi , ri
4
The Logit Approach
42
where ni denote the number of experimental units given the dosage xi i D 1, . . . , k. Then the log likelihood of the parameters is LD
ln Ci C
ri ln Pi C
ni ri ln Qi , since
Px D [1 C expa bx]1 note that @P/@a D PQ and @P/@b D xPQ. Hence k @L ri @Pi ni ri @Pi D @a P @a Qi @a iD1 i k @Pi ri ni Pi D ri ni Pi and D @a Pi Qi iD1
@L @Pi ri ni Pi D xi ri ni Pi . D @b @b Pi Qi
Let ri /ni D pi D i D 1, . . . , k. Then @L ni pi Pi and D @a @L/@b D ni xi pi Pi . Anscombe [1956] points out that if ni n and h is the dose span, then the likelihood equations can be written as h xk C 2
1 1 ri D Pi D Pxi D Px dx n h h x1
2
1 xPxj h1 x dP h h D xk C m h1 2
D
where m denotes the population mean, and 1 1 r i xi D xi Pi D n h
x Ch/2 1 x2 1 xPx dx D Pxxk h/2 1 h 2 2h h 2 1 p2 1 2 xk C m C 2 , D 2h 2 2h 3b
x2 dP
where m D a/b.
Estimation of the Parameters
43
Also since LD ln Ci C ni ln Qi C ri lnPi /Qi , where lnPi /Qi D a C bxi , it is easy to note that ri and ri xi are jointly minimally sufficient for a, b. Berkson [1957b] has suggested an iterative method of determining a and b from the likelihood equations. An estimate of the logit li is ˆli D ln[ri /ni ri ]i D 1, . . . , k. By plotting ˆ li , xi i D 1, . . . , k, let a0 and b0 be the preliminary estimates of a and b obtained by fitting a line by eye to the observations. Then let ˆl0i D a0 C b0 xi be the provisional value of the logit ˆli , and Pˆ 0i D f1 C expˆl0i g1 denote the corresponding value for Pˆ i . Also since dl D dP/PQ we can approximately take ˆ 0i D da C xi dbPˆ 0i Q ˆ 0i . Pˆ 0i Pˆ i D ˆl0i ˆli Pˆ 0i Q Substitution of this formula in the likelihood equations, namely @L/@a D 0 and @L/@b D 0, we obtain (after writing pi Pˆ i D pi Pˆ i0 C Pˆ i0 Pˆ i ˆ 0i ] D 0 ni [pi Pˆ i0 ˆli ˆl0i Pˆ 0i Q ˆ 0i ] D 0. ni xi [pi Pˆ i0 ˆli ˆl0i Pˆ 0i Q Hence
ˆ 0i C db ˆ 0i xi D da ni Pˆ 0i Q ni Pˆ 0i Q ni pi Pˆ 0i , ˆ 0i xi C db ˆ 0i x2i D da ni Pˆ 0i Q ni Pˆ 0i Q ni xi pi Pˆ 0i .
Alternatively, one can write these equations as 2 @ L/@a2 @L/@a0 @2 L/@a0 @b0 da 0 D @2 L/@a0 @b0 @L/@b0 @2 L/@b20 db where the subscripts on the derivatives denote that after differentiation they are evaluated at a0 and b0 . 2 da ˆ ˆ ˆ ˆ D ni xi P0i Q0i ni xi P0i Q0i @L/@a0 /D db ni xi Pˆ 0i Q ˆ 0i ˆ 0i @L/@b0 ni Pˆ 0i Q 2 ˆ 0i ˆ 0i ˆ 0i xi . After evaluating da ni Pˆ 0i Q ni x2i Pˆ 0i Q ni Pˆ 0i Q where D D and db, one can go to the next iteration, namely a2 D a1 C da,
b2 D b1 C db where a1 D a0 C da,
b1 D b0 C db.
This process of iteration is repeated until a certain internal consistency is obtained. Tables providing antilogits Pi and weights Wi D Pi Qi for various values of li are provided by Berkson [1953].
4
The Logit Approach
44
Notice that the procedure requires initial starting values. Hodges [1958] suggests a method called the ‘transfer method’ which enables one to obtain reasonable starting values. The method is based on minimal sufficient statistics of r and r x . The i i i sets fri g and fr0i g are said to be equivalent if ri D r0i and xi ri D xi r0i . Now choose the set fr0i g such that the points xi , l0i , where l0i is the logit corresponding to r0i , are on the same line. Then the mles of a and b can be obtained from the line l0 D a C bxdrawn through these points. The problem is how to construct the set fr0i g. Now r0i D ri implies that ri can be interchanged among the levels of the doses, however, xi ri D xi r0i implies that the interchanges are subjected to the restriction that the line passes through the center of gravity of the responses. If the original data are plotted on logit paper and a straight line is drawn by eye, certain points fall below and some others fall above the line. A method of transfer of points can be designed in order to achieve collinearity of the points. ri can be adjusted to produce r0i and another line can be fitted. After 3 or 4 transfer of points, satisfactory collinearity can be achieved. This procedure is simple to carry out when the xi are equally spaced. Let s D k1 ri D r0i , t D k1 xi ri D xi r0i . The value of s remains the same if the total number of responses is left unchanged, which will happen whenever we transfer responses from one level to another. The value of t is preserved when the center of gravity of the responses is not altered by these transfers. For example, if the dose levels are equally spaced, as is customary, we may transfer one response from a given level to each of these on either side. This transfer may be denoted (C1, 2, C1). Other simple transfers preserving the value of t are (C1, 0, 3, C2), (C1, 3, C3, 1), (C1, 0, 2, 0, C1), (C1, 1, 1, C1), or any of these with the signs changed, or any multiple of the preceding. In general, the transfers are of the form (a1 , . . . , ak ) where k
ai D 0 and
xi a i D 0
1
irrespective of the dose levels being equally spaced. Let us illustrate this by the following example. Example i 1 2 3
xi 0 1 2
ni 20 15 10
ri 6 12 6
ri C1 2 C1
r0i l0i r0i 7 0.619 0.5 10 0.693 1.0 7 0.847 0.5
r00i l00i r00i 7.5 0.510 0.1 9 0.405 0.2 7.5 1.099 0.1
r000 l000 i i 7.6 0.49 8.8 0.35 7.6 1.153
D0 D 0.958, D00 D l001 2l002 C l003 D 0.731, D000 D 0.037 When the data is plotted, we see that l2 is high while l1 and l3 are low so that the transfer (C1, 2, C1) is called for. The points (xi , l0i ) are nearly collinear, but a further lowering of n2 by about 1.0 is tried. The points (xi , r00i ) are almost collinear.
Estimation of the Parameters
45
However, a careful examination of the graph suggests that the point (x2 , l002 ) is a bit above. Hence, a small transfer is necessary. Thus we obtain r000 i . Now we achieve almost collinearity of the points (xi , l000 ) (i D 1, 2, 3). i Variancesof the mle Values of a and b ˆ i and ˆl D a C bx D a0 C bx x, a0 D Let x D ni Wi xi / ni Wi , Wi D Pˆ i Q a C bx. The large-sample variances of a and b can be obtained from the inverse of where ni W i ni Wi xi D ni W i xi ni Wi x2i det D ni Wi ni Wi xi x2 , x D ni W i xi / ni Wi , and 2 1 1 ni Wi xi ni Wi xi D det . ni W i xi ni W i Thus
1 1 ni Wi x2i / det D ni W i C x2 ni Wi xi x2 1 s2b D ni Wi xi x2 cova, b D x/ ni Wi xi x2 .
s2a D
Hence s2a0 D s2a x2 s2b D 1/
ni W i .
If X/Y is designed to estimate q1 /q2 , then EX/Y ³ q1 /q2 . One can show by the differential approach that (see section 2.3) 4 3 2 varX/Y D var Xq2 2 C q1 var Yq2 2q1 covX, Yq2 .
So if c D a/b is an estimate of m D a/b, then s2c D vara/b D vara/b and hence s2c D b2 [s2a C a2 b2 s2b 2ab1 cova, b] D b2 [s2a C c2 2cxs2b ]
1 ni W i C c x2 / ni Wi xi x2 D b2 D b2 [s2a0 C c x2 s2b ]. The assumptions governing the validity of the above expressions for the largesample expressions for the variances of a, b and c are:
4
The Logit Approach
46
(1) (2) (3)
the true probabilities conform to the logistic function, the ni are large, and the dose levels are free of measurement errors.
Hence, the standard errors of a, b and c should be treated as approximations. 4.7. Estimation of the Parameters in the Probit by the Method of Maximum Likelihood
Let Pi D a C bxi , where denotes the standard normal distribution function and a, b are the unknown parameters and xi is the dose level. Let ni be the number of experimental units at dose level xi and ri denote the number responding to the dose i D 1, . . . , k. Then the likelihood of a and b is ni ln Ci C ri ln Pi C ni ri ln1 Pi , where Ci D LD ri @L pi Pi @Pi ni D , pi D ri /ni @a Pi Qi @a @L pi Pi @Pi ni D , where @b Pi Qi @b @Pi @Pi D fa C bxi , D xi fa C bxi . @a @b Let us compute the information matrix @2 L 1 @Pi 2 ni D ni f2 a C bxi /Pi Qi E 2 D @a Pi Qi @a ni @Pi @Pi @2 L D D E ni xi f2i /Pi Qi @a@b Pi Qi @a @b @2 L E 2 D ni x2i f2i /Pi Qi , fi D fa C bxi . @b Let Wi D f2i /Pi Qi . Then the information matrix is ni W i ni xi W i ni xi W i ni Wi x2i Consider @L pi Pi ni fa C bxi D 0. D @a Pi Qi Let Hi a, b D
pi Pi fa C bxi . Pi Qi
Estimation of the Parameters in the Probit by the Method of Maximum Likelihood
47
Let a0 and b0 be the initial values of a and b, the estimates of a and b. Then Hi a, b D Hi a0 , b0 C a a0
@Hi,0 @Hi0 C b b0 @a0 @b0
@H @P fa C bx p Pa C bxf p P 1 2Pf2 D @a @a PQ PQ PQ2 D
f2 p Pa C bxf p P 1 2Pf2 . PQ PQ PQ2
Let 1 D 1 P D GP D a C bx. Then @P @l @P @P D Ð D , @a @l @a @l 1 1 @l D G0 P D D , and 1 @P f[ P] f1 dP dP . D l ˆlf2 ˆl. p P D [l ˆl] @a @a Likelihood equations are li ˆli ni f2 ˆli D 0, Pi Qi li ˆli ni xi f2 ˆli D 0. Pi Qi Now writing li ˆli D li ˆli0 C ˆli0 ˆli , ˆli0 D a0 C b0 xi D li ˆli0 da0 C db0 xi we have
ni xi Wi0 da0 C db0 xi D
That is
ni Wi0 da0 C db0 xi D
ni Wi0 a1 C b1 xi D
ni xi Wi0 a1 C b1 xi D
ni Wi0 li ˆli0 ni xi Wi0 li ˆli0 .
ni Wi0 li ni xi Wi0 1i , where
ˆ i0 . Wi0 D f2 ˆli0 /Pˆ i0 Q Thus a1 D ni Wi0 b1 ni xi Wi0
4
The Logit Approach
1 nxW nW l i 2i i0 i i0 i . ni xi Wi0 ni xi Wi0 li
48
. The iteration is carried out with li D 1 pi until the estimates do not differ significantly. Remark. Adjustment for natural mortality rate can also be made. If c denotes the proportion of the population that will respond even to zero dose, then the probability that an experimental subject will respond at dose level x is PŁ x D 1 1 P1 c D c C 1 cPx. Here, c is called the natural mortality rate. This is known as Abbott’s formula. After obtaining an estimate of c from a controlled group the method of maximum likelihood can be carried out by replacing PŁ x by c C 1 cPx in the likelihood equations.
4.8. Other Available Methods
Even though response is a continuous variable we may classify the observations into mutually exclusive categories or the data might naturally arise as classified into mutually exclusive categories. Let n denote the total number of observations, ni be the number of observations falling in the i-th category and pi q denote the probability of an observation falling in the i-th category i D 1, . . . , k. Notice that q, the unknown parameter, could be vector-valued. In the following we shall give some well-known methods of estimation and the criterion employed. (1)
Minimum c2 method. Find the q which minimizes k
[ni npi q]2 /npi q D
k
iD1
(2)
1
n2i n. npi q
2
Minimum modified c . Find the q that minimizes
[ni npi q]2 /ni D n C
k
n2 p2i q/ni .
1
with ni replaced by unity if ni D 0. (3)
Hellinger Distance (HD). Find q that minimizes 1/2 HD D cos1 ni /npi q
(4)
Kullback-Leibler Separator (KLS) pi q log[pi q/ni /n] KLS D
(5)
Haldane’s Discrepancy n C r! ni !prC1 i q r 6D 1 n! ni C r! D n1 ni log pi q.
Dr D D1
Other Available Methods
49
Rao [1965, p. 289] points out that the maximum likelihood method is superior to any of the preceding ones, from the point of second-order efficiency. Under certain regularity the maximum likelihood estimators of q are asymptotically conditions, normal q, where 1 denotes the Fisher information matrix.
4.9. Method of Minimum Logit c2
0 We wish to find a and b that would minimize Wi pi Pi 2 , where W0i D ni /Pi Qi D reciprocal of the variance of pi . The above expression is equivalent to Pearson’s c2 test statistic because c2 D Oi Ei 2 /Ei i
2 2 ri ri P 1 Qi i ni ni D C ni Pi Qi
D
ni pi Pi 2 /Pi Qi .
Let ˆli D lnpi /qi , then one can write . pi Pi D Pi Qi li ˆli (for the logistic case only) or Since @l D dP/PQ . pi Pi 2 D Pi Qi pi qi li ˆli 2 . Hence . c2 D ni pi qi li ˆli 2 . One can easily minimize the above expression without iterations. This property is unique to the logistic case and is not shared by the cumulative normal curve. The method of estimation is the method of weighted least squares. Setting Wi D pi qi , the equations are ni Wi a C bxi D ni Wiˆli ni xi Wi a C bxi D ni Wi xiˆli , ˆli D lnpi /qi D lnri /ni ri . Alternatively, ni W i ni W i xi
n W x a n W ˆl i i 2i D i i i . ni W i xi b ni Wi xiˆli
Hence a ˆ D jSj1 ni Wi li b ni Wi xiˆli
4
The Logit Approach
50
where
n W x2 ni Wi xi jSj1 D i i i D ni W i xi ni W i ni W i ni Wi xi x2 . DD xD ni W i xi / ni Wi and l is defined analogously. So, ni W i ni Wi xiˆli ni Wiˆli ni W i xi bD D ˆ D ni Wi li lxi x/ ni Wi xi x2 a D l bx D ni Wiˆli b ni W i xi / ni W i .
Anscombe [1956] points out that ˆl is biased as an estimate of a C bx. In order to remove the bias almost completely, he proposes to refine ˆl to lr D ˆl D ln[r C 1/2/n r C 1/2]. Now expanding ˆl in Taylor series about r D nP and taking expectations, one can show that lnP D lnP/Q C Q P/2nPQ C On2 , l0 nP D nPQ1 P2 C Q2 /2n2 C On3 and l00 nP D nP2 C nQ2 C On3 . Hence Eˆl D a C bx C On2 ,
var 1 D nPQ1 C On2
and the skewness of ˆl ' 2P Q/nPQ1/2 . Notice that the skewness in 1 is twice that of r and is opposite in sign. After considering the size and sign of the bias for various situations, Anscombe [1956] advocates the modified definition of ˆl and the minimum c2 method, with the weight W, defined empirically by W D rn r/n2 , or W is replaced by a fitted ˆ if n W ˆ D Pˆ Q ˆ exceeds 1 and equal to 0 otherwise. Tukey (see Anscombe weight W [1956]) suggested a further improvement, especially for small n, namely setting r C 12 ˆl D ln C 12 when r D n n r C 12 r C 12 12 when r D 0. D ln n r C 12 so that the range of values of ˆl is widened a little. One might ask whether one should prefer the minimum logit method in comparison with the method of maximum likelihood. The minimum c2 method does not need the elaborate iterations whereas the method of maximum likelihood The does. r i , r i xi . latter produces estimates that are functions of the sufficient statistic However, the method of minimum c2 depends only on r1 , . . . , rk . Taylor [1953]
Method of Minimum Logit c2
51
has shown that the minimum logit c2 estimates are RBAN (regular best asymptotically normal) and efficient and hence are asymtotically equivalent to the mle values. A draw-back of the minimum logit method is that when the number of experimental units at the dose levels are small, then the ri will be small and hence the value of c2 will be unstable. Little [1968] has shown algebraically that the minimum logit method yields estimates for the slope that are consistently smaller (in absolute magnitude) than the mle estimates. When the slope is positive, the intercept is consistently larger than that yielded by the method of maximum likelihood. Berkson [1955, 1968] addresses himself to these issues and provides answers.
4.10. Goodness-of-Fit Tests
The minimum logit method is based on the assumption that c2 is not significant. However, it is desirable to test the goodness of the model before we estimate the parameters. We can use either Pearson’s c2 or the minimum logit c2 with the appropriate number of degrees of freedom in order to test the goodness of the model, given by ˆi ni pi Pˆ i 2 /Pˆ i Q Pearson c2 D logit c2 D ni pi qi li ˆli 2 . However, Anscombe [1956] points out that the above method of goodness-offit will be unsatisfactory since c2 will be sensitive to departures from the binomial distribution of responses at any dose level. A better procedure would be to estimate b2 and b3 which are assumed to be small where lnP/Q D
3
bj x mj .
jD0
The appropriate statistics to be used for estimating these parameters are j iD1 ri xi j D 0, 1, 2, 3. We need large series of observations at each dose level and we assume that the dose levels cover a wide range. If b2 6D 0, the response curve of P against x is not antisymmetrical about x D m. This may be remedied by a suitable transformation of the x scale. If b2 D 0 and b3 6D 0, the logistic curve is not suitable and one should look at other curves. k
4.11. Spearman-Karber Estimator
If in section 4.6 the x values are equally and not too widely spaced, such that PQ may be assumed to be practically equalto zero outside the range of x values, and Pi and xi Pi can be replaced by the appropriate all the ni values are equal, then
4
The Logit Approach
52
integrals using the Euler-MacLaurin formula. For instance, if xi is the i-th dose level, h is the spacing, and ri out of ni respond at dose level xi i D 1, . . . , k, let P D [1 C ebxm ]1 ,
m D a/b.
Then
xk
m D LD50 D
xdP D
xi Ch/2
k
xdP
iD1 x
0
i1 Ch/2
. D
k
[xi C h/2Pi xi1 C h/2Pi1 hPi ]
iD1
D xk C h/2 h
k
Pi , since Pk D 1
iD1
after performing partial integration once. Now, the estimate of m is ˆ D xk C h/2 h m
k
pi
pi D ri /ni , i D 1, . . . , k
iD1
which is the Spearman-Karber formula for ED50 . Since the second moment of the logistic distribution is m2 C p2 /3b2 , we have
2
2
2
m C p /3b D
xi Ch/2
. x dP D k
2
x2 dP
iD1 x
i1 Ch/2
[xi C h/22 Pi xi1 C h/22 Pi1 2Pi xi h] xi Pi , since Pk D 1. D xk C h/22 2h D
An estimate of b can be obtained from the estimate of the right side quantity given by xi pi . xk C h/22 2h After solving, we obtain 2 21 1 2 ˆb D 3/p2 2 h2xk C h pi h pi 2h xi pi .
Notice that the estimate for b was surmised by Anscombe [1956]. Also Anscombe [1956] obtains h2 /12 in addition to the expression for m2 C p2 /3b2 . This can be attributed to ‘Sheppard’s correction’. The Spearman-Karber estimator is nonparametric in the sense that it can be computed without specifying the functional form of the dose-response function.
Spearman-Karber Estimator
53
If xk does not denote the largest dose level (i.e. when Pk 6D 1), Spearman [1908] proposed the following estimator for LD50 by x D p1 x1 h/2 C
k1
xi C h/2piC1 pi C 1 pk xk C h/2,
iD1
since xi D xiC1 h D
k
xj h/2pj
k
jD2
D
k
xi Ch/2pi x1 Ch/2p1 Cp1 x1 h/2Cxk Ch/2
iD2
xj1 xj pj hp1 C xk C h/2 D xk C h/2 h
jD2
k
pj .
jD1
4.11.1. The Infinite Experiment
Consider an experiment having infinitely many dose levels given by xi D x0 C ihi D 0, š1, š2, . . .. Assume that the dose-mesh location x0 and the dose interval h will be arbitrarily chosen and fixed for the time being. The variance of ri at dose level xi is ni Pi 1 Pi . Hence, the variance at all but a finite number of dose levels is negligible. Thus, results pertaining to infinite experiments will be relevant to finite experiments with dose levels covering the range of variance. Brown [1961] has studied certain properties of the Spearman estimator for the infinite experiment given by 1
xD
xi C h/2piC1 pi
iD1
or x D x0 C h/2 C h
1
1 pi h
iD1
0
pi
iD1
because 0
ipiC1 pi D
i C 1piC1
1
iD1 1
1
ipiC1 pi D
iD1
1 iD1
1
ipi
1
iqi D
i 1
qi D
iD1 jD1
1
piC1 D
1 1 1 jD1 iDj
0
pi
and
iD1
qi D
1
1 pj ,
jD1
since qi D piC1 pi . These will be presented in the following. If P(x) has a finite mean m, then the series defining x converges with probability 1 and Exjx0 D
1
xi C h/2PiC1 Pi .
iD1
4
The Logit Approach
54
Lemma 1 Bxjx0 D Exjx0 m.
jBxjx0 j h/2, Proof.
1
Bxjx0 D Exjx0
xdP 1
D
1
xi C h/2PiC1 Pi
iD1
D
1
x 1 iC1
xdP
iD1 x i
xi C h/2 ci PiC1 Pi
iD1
where
xiC1
ci D
xiC1
xdP/ xi
dP and xi ci xiC1 . xi
Now the proof is complete since jxi C h/2 ci j h/2. For a one-point distribution P, with mass point located at one of the dose levels, the mean will be this dose level, whereas the Spearman estimator will be h/2 units lower than that dose level. Thus, the Spearman estimator achieves the bound h/2. Lemma 2 If P(x) is differentiable and P0 x unimodal, with m D max P0 x, then jBxjx0 j mh2 /8. Proof. See Brown [1961, pp. 296–297]. The bound on the bias of the Spearman estimator given by Lemma 2 is sharp since equality is achieved when P0 x is uniform and for suitable choices of x0 and h. 4.11.2. Variance of the Spearman Estimator
If n1 D n2 D Ð Ð Ð D n, then the variance of x is 1 Pi 1 Pi ; Vxjx0 D h2 /n 1
let the approximating integral be
1 Vx D h/n P1 P dx. 1
Lemma 3 For all P, x0 , h and n jVxjx0 Vxj h2 /4n.
Spearman-Karber Estimator
4.1
55
If P(x) is symmetric with two points of inflection, then jVxjx0 Vxj h3 m/3n where m D max P0 x.
4.2
Proof for (1). Let mŁ be a median of P(x). Then P1 P is nondecreasing for x mŁ and nonincreasing for x ½ mŁ . Number the xi values so that x0 mŁ x1 . Now
xi P1 P dx Pi 1 Pi h for i D 0, 1, 2, . . . xi1 xiC1
P1 P dx Pi 1 Pi h
for
i D 1, 2, 3, . . .
xi
x1
P1 P dx 1/2 Ð 1/2 Ð h. x0
Combining the preceding inequalities we obtain
1 1 Pi 1 Pi C h/4 ½ P1 P dx. h 1
1
Multiplying both sides of the above inequality by h/n we have Vxjx0 Vx ½ h2 /4n. By writing down inequalities going in the opposite direction, one can analogously show that the difference is bounded above by h2 /4n. Remark. The inequality for the difference Vxjx0 Vx is sharp since equality is achieved for the two-point distribution with equal probabilities at the two points and with h and x0 appropriately chosen. The computations of Brown [1961] yield V D 0.5642rs2 /n and 0.5513rs2 /n for the normal and logistic P(x) where s2 denotes the variance of the tolerance distribution and h D rs.
Proof for (2). Inequality (2) can be obtained by using Euler-MacLaurin expressions for the variance Vxjx0 . If x0 is random with a uniform distribution on (0, h), then Brown [1961, section 6] shows that Ex D m and Vx D Vx C Oh2 . 4.11.3. Asymptotic Efficiency of the Spearman Estimator
The size of the experiment can be measured by the number of subjects tested per unit interval on the dose scale. This is denoted by n0 D n/h. Let n0 increase and n D 1, that is h D 1/n0 . Then either for fixed or random x0
1 0 1 Vx D n P1 P dx C O1/n02 . 1
4
The Logit Approach
56
Let us denote the first term in Vx by VA x, which can be called the asymptotic variance. Brown [1961] extends Finney’s [1950, 1952] concept of information for the finite experiment to the infinite experiment. The former considers infinite series of information terms corresponding to dose levels. The expectation of this series is taken with respect to the uniform (0, h) distribution of x0 . Brown [1961] obtains the information for the infinite experiment as
1
IDn
[@/@mPx]2 [Px1 Px]1 dx.
0 1
Then the asymptotic efficiency E of the Spearman estimator is defined as the ratio of I1 to the asymptotic variance VA x. Thus E D I1 /VA x 1 1
1 2 1 D Px[1 Px] dx [@/@mPx] [Px1 Px] dx . 1
1
E D 0.9814 when P is normal and equals 1.000 when p is logistic which coincides with the values of Finney [1950, 1952]. Also if
1e
x t m2 Pe x D Ke 1C dt 1 C 2e 1
then E ! 0 as e ! 0, since the asymptotic variance of Pe x tends to infinity while the information is bounded. Notice that Pe x tends to the Cauchy distribution as e ! 0. The theoretical merit of the Spearman estimator has also been demonstrated by Miller [1973], and Church and Cobb [1973]. Chmiel [1976] proposed a Spearman type of estimator for the variance in the quantal response bioassay problem when the mean of the tolerance distribution is both known and unknown. The consistency, asymptotic normality and efficiency of the variance estimator are studied. A single nonparametric variance estimator was proposed also by Epstein and Churchman [1944]. They and Cornfield and Mantel [1950] give a few properties of this variance estimator. Chmiel [1976] also proposes Spearman-type percentile estimator and establishes similar asymptotic properties. The variance estimator when m is known is defined by k1 v D lim pk xk h/2 m2 C piC1 pi xi C h/2 m2 k!1
iDk
C 1 pk xk C h/2 m2
Spearman-Karber Estimator
57
2 h D piC1 pi xi C m 2 1 1
2 $ 1 1 h h D x0 C m C 2h x0 C m ipiC1 pi C h2 i2 piC1 pi . 2 2 1 1 $1 2 1 h ipiC1 pi C h2 ii C 1piC1 pi . D x0 C m C 2hx0 m 2 1 1
Now use the facts: 1
ipiC1 pi D
1
1 1
ii C 1piC1 pi D 2
1
1
piC1 pi
iD1
ii C 1piC1 pi D
1
1
2
0
pj
1
1
1 0
1 pj
i
jD2
jD1
1
j1 pj
1
ll 1plC1 pl D
0
plC1 pl
lD0
l1
jD2
jD0
1
jpj D 2
0
rpr .
1
0
Thus v D x0 C h/2 m2 C 2h
0
pi m xi 2h
iD1
1
1 pi m xi .
iD1
One might ask for which class of distributions the asymptotic efficiency of the Spearman estimator is equal to 1. If Px D Fx m where Fx D 1 C ex 1 , namely the logistic distribution, and hence fx D F0 x D Fx[1 Fx], the asymptotic efficiency equals 1. In the following we will show that for any other Fx m, differentiable everywhere and having mean m, the asymptotic efficiency will be strictly less than 1. Toward this we need the following definitions. If Px D Fx m, where F(x) is differentiable everywhere and having a mean of 0, then the asymptotic efficiency of the Spearman estimator is 1 1
[eF]1 D Fx[1 Fx] dx f2 xfFx[1 Fx]g1 dx . 1
1
Then we have the following theorem. Theorem eF 1 for all F differentiable everywhere and having a mean of 0, with equality if and only if Fx D [1 C ebx ]1 for some b > 0.
4
The Logit Approach
58
Proof. One can write % & % & FX[1 FX] fX 1 [eF] D E E . fX FX[1 FX] Now applying Cauchy-Schwarz inequality we obtain % & % & fX FX[1 FX] 1 E ½ E . FX[1 FX] fX Hence [eF]1 ½ 1 for all F which are differentiable everywhere with equality if and only if fx/Fx[1 Fx] D b (a constant). Integrating on both sides, we have ln [F/1 F] D bx C a or Fx D [1 C eaCbx ]1 . Now that F(x) has zero expectation, a D 0 is implied. Remark. The notion of an infinite experiment is somewhat artificial. Infinite experiments can be treated as limits of finite experiments. For the alternative definition of asymptotic efficiency of the Spearman estimator and other results, the reader is referred to Govindarajulu and Lindqvist [1987].
Historical Note. For the finite experiment, Finney [1950, 1952] evaluated the asymptotic variances of the maximum likelihood estimator, averaged over the choices of x0 , for the normal and logistic tolerance distributions. He also considered the limit of the same as the number of levels tended to infinity. He compared these values with the mean square error of the Spearman estimator over choices of x0 for the same two distributions. The ratios were 0.9814 and 1.000 for the normal and logistic cases, respectively. Cornfield and Mantel [1950] have shown that for the logistic tolerance distribution, the maximum likelihood estimator and the Spearman estimator were approximately equal and this algebraic approximation improved as h ! 0. Bross [1950] evaluates some sampling distributions via enumeration for the mle and the Spearman estimator (Se) using the logistic tolerance distribution and four dose levels, n D 2, and n D 5. He finds that the Se is more closely concentrated around the true mean than the mle. Haley [1952] performs similar computational results for the normal distribution. Problems (1)
Show that the asymptotic efficiency of the Spearman estimator is equal to (a) (b)
0.9814 when the tolerance distribution is normal, 0.8106 when the tolerance distribution is angular: its d.f. is Fx; m D sin2 x m C p/4,
(c)
p p xm ; 4 4
0.8319 when the d.f. is one particle, i.e. Fx, m D 1 expx/m, x > 0;
Spearman-Karber Estimator
59
(d)
(Hint: From Gradshteyn and Ryzhik [1965, formula no. 12 on p. 540] we '1 ln x2 3 D 21.202057 have dx D 2 1 jD0 j 1 x 0 E D 0 when the d.f. is algebraic, i.e. Fx; m D 1 xs , x, s ½ 1; '1 1 y1/s '1 1/2 ' '1 s2 dy. Write D C and obtain s 12s 1 0 y 0 0 1/2 lower bounds.) not defined when the d.f. is uniform, i.e.
(Hint: E1 D
(e)
Fx, m D 1/2 C x m, 1/2 x m 1/2; (f)
(Hint: F(x, m) is not differentiable for all values of m.) 1.00 when the d.f. is logistic, i.e. Fx; m D [1 C exm ]1 , 1 < x < 1. Hint for (a): After partial integration once we get
1
1
1 dx D 2 1
1 xfdx D 2f1 C 2
1
1
p f2 dx D 1/ p.
1
Also numerical integration yields
1
f2 [1 ]1 dx D 0.903. 1
Hint for (b): (2)
p/2 '
sin2 u cos2 udu D p/16.
0
Let
1e
x t m2 Pe x D Ke 1C dt 1 C 2e 1
( where Ke D 1 C e/ p1 C 2e1/2 C e.
Show that the asymptotic efficiency of the Spearman estimator for the above tolerance distribution is arbitrarily close to zero and will tend to zero as e ! 0. Hint: Show that
1
1 1/2 Pe x[1 Pe x]dx D 1 C 2e Fy[1 Fy]dy 1
1
1
D 21 C 2e
1/2
Fy[1 Fy]dy
0
4
The Logit Approach
60
1
½ 1 C 2e1/2
Fy[1 Fy]dy 1
1
½ Ke21e 1 C 2e1/2
y12e dy 1
where Pe x D F
1
@Pe x @m
D Ke2 xm p , and 1 C 2e
1e
1/2
/2e1 C 2e
,
2
fPe x[1 Pe x]g1 dx ½ 8K2 e22e 3 C 4e1 1 C 2e1/2 .
1
4.11.4. Derivation of Fisher’s Information for the Spearman Estimator
Let the dose levels be xi D x0 C ih
i D 0, š1, . . . , šk,
where x0 is chosen at random in an interval (0, h) for a specified dose interval h. Let n subjects be administered dose levels xi i D 0, š1, . . . , šk. If Fx m denotes the tolerance distribution, the Fisher’s information for this experiment is Ik x0 D E[@ ln L/@m2 jx0 ] where LD
kiDk
n [Fxi m]ri [1 Fxi m]nri . ri
Then k @ ln L @Fi /@m[ri nFxi m] D . @m Fxi m[1 Fxi m] iDk
Since the ri are independent, it easily follows that Ik x0 D
k
n[@Fxi m/@m]2 . Fxi m[1 Fxi m] iDk
For an infinite experiment, that is, when k ! 1, the information is I D lim Ex0 [Ik x0 ] k!1
1 D lim k!1 h
h k 0
k
n[@Fxi m/@m]2 dx0 Fxi m[1 Fxi m]
Spearman-Karber Estimator
61
n D lim k!1 h
iC1h
[@Fx m/@m]2 dx Fx m[1 Fx m]
ih
D
n h
1
f2 x n dx D E Fx[1 Fx] h
%
fX FX[1 FX]
&
.
1
Notice that I is free of the location parameter m.
4.12. Reed-Muench Estimate
We assume the dose levels are equally spaced with spacing h. n denotes the common number of experimental units at each dose level. Also, let ri denote the number of responses at the i-th dose level i D 1, . . . , k. Let j k k ri n ri D max[i : rj nk j]. s D max j : 1 j k and 1
jC1
1
Define ˆ RM D xk hk s. m It has been pointed out (see for instance Miller [1973]) that by setting pk D 1/2 in the Spearman-Karber estimator, one gets the Reed-Muench estimator. Also, note that h h ˆ RM . pi ½ C m h 2 2 1 k
ˆ SK D xk C m
4.13. Dragstadt-Behrens Estimator
With the above notation let ) s s k pˆ s D ri ri C n ri s D 1, . . . , k. 1
1
iDsC1
ˆ DB D xk hk s. If there is If there exists an s such that pˆ s D 1/2, then define m more than one s, take the average of the s values. Remark. It seems that the Reed-Muench and Dragstadt-Behrens estimators are equivalent. However, they are inefficient in comparison with the Spearman-Karber estimator. Also notice
4
The Logit Approach
62
that pˆ s D 1/2 implies that
s 1
ksD
k
ri /n D
k
1
ri D
k
n ri , which is turn implies that
iDsC1
pi
1
ˆ sk D m ˆ DB C h/2. Also m
4.14. Application of the Spearman Technique to the Estimation of the Density of Organisms
Johnson and Brown [1961] have given a procedure for estimating the density of organisms in a suspension by a serial dilution assay. The estimator based on the Spearman technique is simple and easy to compute. To estimate the density of a specific organism in a suspension, a common method is to form a series of dilutions of the original suspension. Then from each dilution, a specified volume (hereafter be called a dose) is placed in each of several tubes. The tubes are later examined for evidence of growth of the organism. Let Zi D ai Z0 denote the i-th dilution of Z0 , the highest concentration of original suspension and a > 1 denote the dilution factor i D 0, . . . , k; let xi D ln Zi i D 0, . . . , k. Suppose n tubes receive a unit volume (dose) for each dilution. Let ri denote the number of tubes showing signs of growth at dilution Zi , si D n ri , pi D ri /n and qi D 1 pi i D 0, . . . , k. The model specifies a density of organisms of q per unit volume in the original suspension and a Poison distribution of organisms in the individual doses. Thus, the probability of signs of growth (i.e. the probability of one or more organism) is PZi D 1 expqZi D 1 expqexi D Fxi say. The maximum likelihood method for estimating q will be quite tedious. Fisher [1922] proposed the maximum likelihood estimator (called ‘the most probable number’) for estimating q. Cochran [1950] has provided some guidelines for designing serial dilution assays using the most probable number. Johnson and Brown [1961] have given a procedure for estimating q that is based on the SpearmanKarber estimator. Suppose we are interested in estimating the mean m of F(x). The Spearman-Karber estimator for m is given by ˆ D x0 C d/2 d m
k
pi
iD0
where x0 D ln Z0 , d D ln a, and pi is the sample proportion of tubes having growth at dilution Zi .
Application of the Spearman Technique to the Estimation of the Density of Organisms
63
Also,
1
xqex expqex dx D
mD 1
1 ln y ln qey dy 0
x
D g ln q,
letting qe D y
where g D 0.57722 is Euler’s constant. Then, we can express the parameter of interest q as q D expg m and we propose to use as point estimator ˆ D exp g x0 d/2 C d pi . qˆ D expg m That is, the estimator of the number of organisms per Z0 volume is Z0 qˆ D exp g d/2 C d pi . Furthermore, ˆ 0 D d2 n1 varmjx '
d n
Fxi [1 Fxi ]
1
Fx[1 Fx] dx D
d n
1
d ln 2 , D n
1
1 u du ln u
0
after letting 1 Fx D u
since
1
[xb xa / ln x] dx D ln[1 C b/1 C a] for a, b > 1. 0
ˆ 0 D x0 C d/2 d Emjx D x0 C d/2 d
Fxi [1 expqex0 id ].
Hence ˆ 0 m D x0 C d/2 C g C ln q d ˆ 0 D Emjx Bmjx
[1 expex0 id ].
Numerical computations carried out by the authors indicate that if the range of doses is wide enough and the dilution factor a is 10 or less, then bias is approximately ˆ is asymptotically normal for large n (the common number of tubes at zero. Since m each dose level), the 95% confidence interval for m is ˆ š 1.96d ln 2/n1/2 m
4
The Logit Approach
64
ˆ m) and the corresponding interval estimator for q is (since q/qˆ D expm qˆ exp[1.96d ln 2/n1/2 ] < q < qˆ exp[1.96d ln 2/n1/2 ]. Johnson and Brown [1961] consider a uniform distribution for x0 on (A, A C d) and find that if the doses extend over a wide enough range . . ˆ D ˆ D Em m and varm d ln 2/n. Towards this consider 1 ˆ D EEmjx ˆ o D Em d
* ACd ACd$
k 1 d ˆ o dxo D Emjx xo C d Fxi dxo d 2 o A
A
d d D AC C 2 2
ACd
ACd
Fxo id dxo D A C d A
Fy dy Akd
ACd
D A C d yFyjACd Akd C
yfy dy D m. Akd
Since the doses extend over a wide range. Further, . d ln 2 ˆ o D ˆ D Efvarmjx ˆ o g C varEmjx ˆ o mg2 varm C EfEmjx n d ln 2 ˆ o g D C EfB2 mjx n ˆ o m. ˆ o D Emjx where Bmjx ˆ o D d/2 C od. Towards this write Next, we will show that Bmjx d d . ˆ o D xo C m d Fxo id D xo C m Bmjx 2 2 0 k
k
Fxo ud du 0
xo
d D xo C m 2
Fy dy. xo kd
Next, consider
xo
Fy dy D I1 I2 C I3 xo kd
A
where I1 D
A
Fy dy D Akd
yFyjAAkd
. ydFy D A m.
Akd
Application of the Spearman Technique to the Estimation of the Density of Organisms
65
x
o kd
xo AFA kd I2 D
Fy dy xo AFxo kd, Akd
xo
xo AFA I3 D
Fy dy xo AFxo A
Hence, we see that jI2 j D od and I3 D d C od since Io A d. Consequently ˆ o D xo C Bmjx ˆ o j jBmjx
d m A C m C d C od 2
3d C od. 2
Thus, when k is large and d is small, the bias is negligible. ˆ o does not depend on n. Thus Also note that Bmjx 9 . 1 . d ln 2 ˆ D fd ln 2 C odg C d2 C od2 D C od. varm n 4 n Also, ˆ ˆ ˆ D Eegmˆ D E[egCmmm Eq ] D q E[emm ]Dq 1 ˆ mj /j! . ð E 1j m jD0
ˆ we have Ignoring central moments 3 and higher of m,
d ln 2 . ˆ D . Eq q 1C 2n Thus, a less biased estimate of q would be ˆ qˆ 0 D 2n2n C d ln 21 q. ˆ CV can be approximated as The coefficient of variation of q, + . . ˆ ˆ D varq/q ˆ 1/2 /q D D [q2 varm] [d ln 2/n]1/2 . CVq . ˆ D The authors note that the varm d ln 2/n coincides with the asymptotic variance of the estimator for ln q proposed by Fisher [1922]. Thus, the asymptotic efficiency of the two procedures for estimating q will be the same, namely 88%. In choosing the range of dilutions, the conditions Fx0 ½ 0.99 and Fx0 kd 0.01 should be satisfied. That is, the expected number of organisms per dose at the highest
4
The Logit Approach
66
and lowest concentration should be greater than 5 and less than 0.01, respectively. (i.e. qz0 ½ log 100 and qzk < log 0.99 D 0.01) In designing a serial dilution assay, the desired precision of the estimator must be specified. It could be p in terms of the coefficient of variation of qˆ or in terms of the factor R D exp1.96 d ln 2/n > 1 which multiplies or divides qˆ in order to obtain the upper and lower limits of the 95% confidence interval. Example [Johnson and Brown, 1961, table 1] Consider the following data. Suppose it is known that the density of organisms q is between 102 and 105 organisms per unit volume. Let a 10-fold dilution factor be used. In order to ensure spanning the range from 0.01 to 5 organisms per dose, it is necessary to use 7 dilutions 101 , . . . , 107 of the original suspension, because k
1 Fx0 kd D eqZ0 10 qZ0 10k
½ 0.99 implies that . ln.99 D 0.01
10kC1 102 /q 107 . Dilution Proportion of tubes showing growth Thus n D 3,
101
102
103
104
105
106
107
3/3
3/3
2/3
0/3
0/3
0/3
0/3
pi D 8/3, d D ln 10.
ˆ D ln 101 C m
ln 10 ln 108/3 D 7.2915. 2
Hence qˆ D egmˆ D 824 organisms per unit volume of the original suspension. Since exp[1.96d ln 2/n1/2 ] D 4.2, the 95% confidence interval is 196 D 824/4.2 < q < 8244.2 D 3, 460. Also qˆ 0 D 2n2n C d ln 21 qˆ D 0.79824 D 651 organisms per unit volume.
4.15. Quantit Analysis (Refinement of the Quantal Assay)
Earlier we dealt with parametric methods for analysing quantal response assays based on normal, logistic, uniform and other two-parameter families of tolerance distribution. Besides LD50 , extreme percentages are of interest. For example in sterilization tests for fruit flies, the quarantine officials require small < 10% survival rates. In therapeutics one is interested in determination of the safety margin, namely the difference between curative and lethal doses. Here we wish to estimate ED99 (the dose that cures 99%) and/or the LD01 (the dose that kills 1%). However, one requires a large sample size to estimate the extreme doses or we need a more precise
Quantit Analysis (Refinement of the Quantal Assay)
67
mathematical model for the dose-response function. In general, asymmetric tolerance distribution will be adequate as a dose-response function provided logarithmic transformation of the dosage is used. Copenhaver and Mielke [1977] propose the omega distribution as a model for the tolerance distribution given by F[xq] D q, f[xq] D 1 j2q 1jvC1 0 < q < 1, v > 1. Since xq D F1 q,
dx D [fF1 q]1 dq D [fxq]1 dq,
q
[fxz]1 dz.
xq D 1/2
Special cases: v D 1, gives fxq D 4q1 q. xq D 1/4 ln[q/1 q], which is the logit of q. Fx D 1 C e4x 1 . v D 0 gives the double exponential density given by fx D exp2jxj. As v ! 1, fx D 1, 1/2 < x < 1/2. When v is near 2, the shape of this distribution is similar to that of the normal distribution. Prentice [1976] considered a family of tolerance distributions given by fx D exg1 1 C ex g1 Cg2 /Bg1 , g2 g1 g2 > 0 and Ba, b D ab/a C b. If g1 D g2 D g and g ! 0, we get the double-exponential density and if g ! 1, we get the normal density, if g D 1, we get the logistic density. The omega includes all the symmetric distributions included in the class of Prentice [1976] if 0 < v < 2. The latter, of course, includes many asymmetrical distributions when g1 6D g2 . If Pi is the probability of responding to the dose level xi , then Pi D Fa C bxi and the tolerance distribution is given by fa C bxi D 1 j2Pi 1jvC1 . Furthermore
Pi
1 j2z 1jvC1 1 dz.
a C bxi D hv Pi D 1/2
where hv Pi is called the ‘quantit’ of Pi and hv pi is the observed quantit corresponding to pi which equals the sample proportion responding to dose level xi . Some adjustment is made to pi when all or no subjects respond. 4.15.1. Computational Procedure for Estimating the Parameters
First an initial value v0 D 1 is used for v and a and b are, as the least squares solutions of h1 pi D a C bxi i D 1, . . . , k. An efficient search routine is used for
4
The Logit Approach
68
determining vˆ . The iterative process is continued until an accuracy of two decimal places for v is obtained. If vˆ exceeds 20, vˆ is set at 20 [if vˆ ½ 20, the omega distribution resembles the uniform on 1/2, 1/2]. The iterative method of maximum likelihood is simplified if the second-order partial derivatives of the likelihood function are replaced by their estimated expectations (this is called method of scoring). The quantit hv P can be expressed as infinite series given by
P
[1 j2z 1jvC1 ]1 dz D d/2
hv P D
1
j2P 1jjvC1C1 /[jv C 1 C 1],
jD0
1/2
C1 0 where d D 1
if P > 1/2 if P D 1/2 if P < 1/2.
(Note that the above series is obtained by expanding in power series of j2z 1jvC1 and integrating term by term.) Hence numerical integration can be avoided in computing hv P. The value of hv P can be obtained by summing the first M terms of the series and then adding a suitable remainder term RM , where
1
RM D
j2P 1jwvC1C1 dw D v C 11 wv C 1 C 1
MC1/2
1 ez /z dz y
where y D [M C 1/2v C 1 C 1] ln j2P 1j (Hint: set [wv C 1 C 1] ln j2P 1j D z.) Copenhaver and Mielke [1977] recommend using a well-known continued fraction approximation for the last integral representation of RM given by
1 1 1 1 2 2 3 3 z e /z dz D expx C C C C C C C . . . x > 0. x 1 x 1 x 1 x x
Magnus et al. [1977] provide a faster convergent series for hv P than the above approximation. A measure of the goodness-of-fit of the maximum likelihood solution is the value of the likelihood which also provides a test-of-fit for the logistic model. Let L! and L denote the value of the likelihoods for logit and quantit analyses, respectively, where the likelihood L is given by LD
k , ni iD1
ri
r
Pi i 1 Pi ni ri , where Pi D Fa C bxi .
Then for testing H0 : v D 1, we use T D 2[ln L ln L!] which is asymptotically distributed as c2 with 1 d.f.
Quantit Analysis (Refinement of the Quantal Assay)
69
Copenhaver and Mielke [1977] analysed 22 sets of published data with the quantit method. Except for a couple of data sets, the omega model yields smaller c2 goodness-of-fit (given by c2 D kiD1 ni pi Pi 2 /Pi 1 Pi pi D ri /ni , i D 1, . . . , k and smaller L . The authors also tabulate the estimates and standard errors of the log ED50 and ED99 for each of the logit, probit and quantit models. Although ED50 estimates are in close agreement, significant differences exist among the ED99 estimates, especially for those data sets in which the ML estimates of v is either near 1 or much larger than 1. Note that the omega distribution has lighter (heavier) tails than the logistic distribution when v is greater (less) than 1. Hence, for those omega model solutions with vˆ much greater (less) than 1, the logistic model will very likely over (under) estimate extreme dose levels such as ED99 and its standard error. In other words, in estimating the lower or upper extreme percentiles, the omega model estimates will be closer to (farther from) the log ED50 estimate than the logistic model estimates when vˆ is greater (less) than 1. 4.15.2. Polychotomous Quantal Response
The quantal response in a biological assay is usually based on two possible outcomes. A common example of the two possible outcomes is death or survival. However, there are several assays, such as that of some insecticide based on the mortality of the housefly. There are three possible outcomes, namely, active, moribund, and dead. There is a definite order of these outcomes: active precedes moribund, which in turn precedes dead. It may be more efficient to analyze the data with more than two outcomes, than combining the outcomes into two categories. An approach to the problem based on the method of maximum likelihood has been given by Aitchison and Silvey [1957] and Ashford [1959]. Gurland et al. [1960] provide a general method of analyzing the data when one uses the normits (probits), logits or other monotone transformation. The estimates of the parameters used in the general method are obtained by minimizing the appropriate c2 expressions and have the asymptotic properties of consistency and efficiency. The latter set of authors provide a numerical example involving the housefly insecticides.
4.16. Planning a Quantal Assay
If m denotes the mean and s2 the variance of the tolerance distribution, Brown [1970] discusses the design aspect for estimating LD50 of a single preparation within ˆ should be between m/R and mR. R-fold of its true value. That is, m Assume that bounds m1 and m2 and s1 and s2 exist such that m1 log LD50 m2 and s1 s s2 . Note that the standard deviation can be interpreted as roughly one third to one sixth of the ‘range’ of the dose-response function. Since it is prudent to use a range of doses that ensures a low dose with a low probability of response and a high dose
4
The Logit Approach
70
with a high probability of response, Brown [1966, 1970] recommends choosing x1 D m1 2s2 , xk D m2 C 2s2 . In order to ensure negligible bias due to a coarse dilution factor, the doses should be spaced no more than 2s. Thus, h, the distance on the log scale, should be h D 2s1 . The variance of the Spearman-Karber estimator is
1 1 h2 hs xi m xi m F 1F D Fx[1 Fx] dx n 1 s s n 1
D hs/n if logistic p D hs/n p if normal. Hence, one can approximately take the standard deviation of the Spearman-Karber estimate to be (because from Lemma 3 on p. 55, we have varS K estimator ½ hs h2 hs 2hs hs D D n 4n n 4n 2n SDlog LD50 D hs/2n1/2 . Setting two standard deviations equal to the desired error limit, log R gives log R D 2hs/2n1/2 . Setting h D 2s1 and since s s2 n 4s1 s2 /log R2 . For example, suppose an assay of the ED50 of an estrogen preparation be precise within 1.6 times the true value with 95% confidence. Also assume that 1.00 m 0.0 and 0.40 s 0.60. Hence x1 D 1.00 1.20 D 2.20, xk D 0.0 C 1.20 D 1.20, h D 20.4 D 0.8. Thus, the log doses are xi D 1.20, 0.40, 0.40, 1.20, 2.00, 2.80 and the number of subjects at each dose level should be n D 40.40.6/0.22 D 24. since log 1.6 D 0.2 Maximal Information from Bioassays Emmens [1966] draws the following conclusions based on empirical studies. It is advisable to utilize graded responses such as body weight, organ weight, percentage of blood sugar, or any measure that can be quantitatively expressed, rather than
Planning a Quantal Assay
71
quantal responses. Categorizing responses into grades 1, 2, 3, and 4 will usually be superior to a purely quantal measurement. The weights used in the probit analysis of quantal responses illustrate that a graded response provides, on the average, about twice the information of a quantal response. Also, graded response is more useful because the latter can be measured over a wider range than quantal responses without running into extremes. Quantal responses based on groups of animals are preferable to a series of single observations at several dose levels. Also, observations derived serially from the same animal may be four to six times as precise as observations made simultaneously upon different animals. If it is impossible to take replicated observation on the same animal, the use of littermates or sometimes of genetically homogeneous material is helpful. In large scale assays, especially those which extend over a period of time, some animals may die or some observations may be lost. Emmens [1966] recommends including one or two extra animals per group and to reject results at random from those groups whose eventual number is greater than the smallest. This method will yield an unbiased and balanced experiment as long as animals do not die because of treatment effects. 4.16.1. A Robust Optimal Design of an LD50 Bioassay
Markus et al. [1995] consider optimal design of an LD50 bioassay when the tolerance distribution is a mixture of two logistic distributions differing in locations but having the same scale parameter. Following the approach of Kalish [1990] and Chaloner and Larntz [1989], Markus et al. [1995] take the tolerance distribution to be Fxi ; q D cL1 C 1 cL2 where L1 D 1/[1 C expfxi m1 /sg]
and
L2 D 1/[1 C expfxi m2 /sg]
and q D m1 , m2 , s, c. The constants c and 1 c can be interpreted as the proportions of animals whose tolerance distribution functions are described by the location and scale parameters m1 , s and m2 , s, respectively. The mixed model enables one to represent a skewed tolerance distribution that describes, for example, the ‘super rats’ (i.e., the experimental units that are able to withstand greater doses) that may appear in an actual bioassay. We are interested in estimating the LD50 denoted by m, which is a function of m1 , m2 , s and c. For simplicity, the authors assume that m2 D m1 C gs, so that q D m1 , s, c, g. They impose a prior distribution on m1 , s as follows: m1 and s are independent and uniformly distributed on [mL , mR ] and [sL , sR ], respectively. For symmetry, they translate the location of Fx; q and the center of the dosing vector to zero. For instance, if we adopt a log-dose scheme with a
4
The Logit Approach
72
spacing factor of two (space D 2) to a five-dose bioassay and the prior estimate of the LD50 is 6 (that is, 106 ) then the translation factor is 6; such that doses 102 , 104 , 106 , 108 , 1010 become 104 , 102 , 100 , 102 , 104 . Let Pi denote the true probability of response to dose xi and pi the observed proportion of responses for the n units tested at dose xi . The Spearman-Karber estimator (to be abbreviated as SK) is SK D
k
piC1 pi xiC1 C xi /2
0
where x0 and xkC1 are the theoretical lower and upper doses of the design. That is, x0 (xkC1 has a lower (higher) concentration than the weakest (most potent) dose tested and hypothetically produces 0 (100) percent response. No experimental units are tested at these doses. We can rewrite SK as follows: Let d denote the dose span. Then, SK D 12 xkC1 C xk pkC1 pk C 12 x0 C x1 p1 p0 k1 d piC1 pi xi C . C 2 1 Now, k1 1
d piC1 pi xi C 2
D
k
pj xj1
k1
2
D
k
1
pj xj d
d pi xi C pk p1 2 k1
2
1
d pi xi C pk p1 2
d pj D pk xk p1 x1 C pk p1 d 2 2 k
k d d D pk xk C p1 x1 d pj . 2 2 1
Substituting this in SK we obtain SK D 12 x0 C x1 p0 C 12 xkC1 C xk pkC1 d
k
pj
1
D 12 x0 C x1 p0 C 12 xkC1 C xk
1 2
k
xjC1 C xj1 pj
1
D Ap0
k
Bi pi C CpkC1 say.
1
Planning a Quantal Assay
73
Hence, ESK D AP0
k
Bi Pi C cPkC1
1
and VarSK D
k
B2i Pi 1 Pi /n
1
with Bi D 12 xiC1 xi1 . Note that the above computations are for given m and s. EMSE D E[MSESK] D VarSK C fBiasSKg2 where BiasSK D ESK m. The variance and bias of SK represents the stability and the accuracy of SK, respectively. EMSE the expected mean square error is obtained by integrating MSE with respect to the prior distributions of m1 and s. The optimal design can now be specified by its center (i.e., the median dose level), its space (i.e., d D xiC1 xi ) and the minimal and maximum differences for the theoretical doses, dmin D x1 x0 and dmax D xkC1 xk , respectively. Since the tolerance distribution is not necessarily symmetric, the optimal design is not symmetric. Hence, the center and space are the important parameters of the design and thus the focus of the analysis. The authors consider the following ranges for the experimental parameters: k D 2, 3, . . . , 10; n D 50, 100, 150, 200; c D 1.0, 0.9, 0.8 corresponding to 0, 10 and 20 percent mixtures, respectively; g D 0, 1, 2; and m1 , s 2 mL , mR , sL , sR D f1, 1, 0.98, 1.02, (2,2,0.5,1.5), (3,3,0.33,3.00)g which correspond to wellknown, moderately known and little known priors, respectively. They wrote a Fortran program to determine the optimal design under all combinations of the experimental parameters, using the available subroutines for integration and optimization. The optimal design, determined by the minimum of EMSE is specified by the center, space, dmin and dmax . In order to study the robustness of the optimal design, they set k D 5 or 10, N D kn D 100, m1 2 [2, 2] and s 2 [0.5, 1.5] and determine the optimal design for an assumed Fx; q. For alternative mixtures of observed tolerance distributions, each with m1 D 0 and s D 1, the authors calculate the exact SK estimate, varSK, biasSK and the EMSE. The following are their findings: (1) (2)
4
EMSE is relatively smooth and continuous for the design parameter values considered. As the percentage of animals following a mixed logistic tolerance distribution increases by g D 1 or 2 standard deviations, the design parameters change as follows: the center increases, the space decreases, dmin increases and dmax decreases. For example, when c D 1, g D 0fc D 0.9, g D 1g, center, space, dmin , dmax D 0.0, 1.4, 2.0, 3.0 f0.1, 1.4, 3.5, 2.6g.
The Logit Approach
74
(3)
The optimal space, dmin , dmax and EMSE are independently and jointly affected by the amount of prior information on m1 and s, whereas the optimal center is unaffected when c D 1 (i.e., a standard logistic tolerance distribution). However, when c < 1 (say, 0.8) and g D 2, the center shifts to the right, the spacing factor decreases, and the resulting EMSE is larger. (4) Overall, when c D 1, the choice of the number of doses does not have as much effect on EMSE as does the choice of center and space. However, for the mixed model (c D 0.8, g D 2), the choice of the number of doses has much effect on EMSE. (5) The EMSE varies inversely with the total number of animals in the bioassay. This phenomenon is independent of the choice of the tolerance distribution model and the amount of prior information. Besides, one requires more experimental units as the prior information decreases, so as to yield equivalent values of EMSE. (6) The SK estimate of LD50 increases with c, the amount of mixture. (7) varSK dominates EMSE since the squared bias is close to zero in all cases. 4.16.2. Numerical Application
Malcolm et al. [1985] determined the LD50 for an experiment evaluating the effects of increasing doses of alcohol on mice at 1 atmosphere absolute air. They set k D 7 and n D 10 and found LD50 to be 5.8g/kg. Since a priori it is known that LD50 2 [5.5, 6.5], the authors apply their methods, with m D 0.5, 0.5, s D 0.5, 1, 5, k D 7 and N D 70. Note that they shifted m by 6 such that (0.5–0.5 corresponds to (5.5–6.5). The optimal design was to be centered at 6g/kg with a space of 0.38g/kg and dmin D dmax D 1.11. Thus the doses should be x1 D 4.86, x2 D 5.25, x3 D 5.62, x4 D 6.00, x5 D 6.36, x6 D 6.76, x7 D 7.14, with x0 D 3.75 and x8 D 8.25.
4.17. Dose Allocation Schemes in Logit Analysis
A common experimental design is to select a number of dose levels and administer each dose level to an equal number of subjects. Optimal designs have been discussed in the literature. Tsutakawa [1980] employed Bayesian methods to select dose levels for the logistic response function. Hoel and Jennrich [1979] showed that for a k-parameter response function, the optimal design has k dose levels. Hence, McLeish and Tosh [1985] consider two dose levels for the logit response function and search for optimal dose levels when cost constraints are placed on the experiment. The total cost is equal to the number of observations plus a constant times the expected number of ‘deaths’ or responses. Thus, experimental designs having the same variance but which reduce the expected number of responses have cost advantages over classical designs.
Dose Allocation Schemes in Logit Analysis
75
In the logit model, a subject is administered a dose x and the response Y D 1 or no response Y D 0 of the subject is observed. The probability of response is PY D 1jx D 1 PY D 0jx D Px, where Px D 1 C eabx 1 . Alternative parametrization of P is
1 qr Px D 1 C ebxr pr
4.1
4.2
where b is an unknown scale parameter, r is an unknown root of the equation Px D pr and pr D 1 qr is a known constant 0 < pr < 1. We can also parametrize P in terms of unknown fractiles p1 and p2 and known doses x1 and x2 which satisfy
1 qr pi D 1 C ebxi r i D 1, 2 4.3 pr The usual cost restraint for a two-dose experiment with ni observations at xi i D 1, 2 is c D n 1 C n2 .
4.4
A more general constraint is c D n1 C n2 C Dn1 p1 C n2 p2 ,
4.5
where D is the ‘extra’ cost of a response. For instance, when response corresponds to the death of an experimental unit due to the toxicity of the drug administered, the cost of the experiment would be the number of observations plus a constant times the number of deaths occurred. McLeish and Tosh [1985] consider the constraint (4.5); however, they obtain explicit solutions for (4.4) only. For the sake of simplicity we shall confine to constraint (4.4). Optimal Sampling Strategy We wish to estimate the root r of the equation Px D pr for given pr . A design is said to be optimal if it minimizes the variance of r subject to the restriction (4.4). If we parametrize P in terms of (4.3), the information matrix becomes p n p q 1 0 I 1 D 1 1 1 1 . p2 0 n2 p2 q2 Reparametrization in terms of (4.2) gives the information matrix r b2 w1 C w2 bx1 rw1 C bx2 rw2 I D b bx1 rw1 C bx2 rw2 x1 r2 w1 C x2 r2 w2
4.6
where wi D ni pi qi i D 1, 2.
4
The Logit Approach
76
(Hint: use @Pi /Qi /@r D b, @ lnPi /Qi /@b D xi r, @ ln Qi /@r D bPi and @ ln Qi /@b D xi rPi .) r The 1-1 element of the inverse of I is b
x2 r2 x1 r2 V D b2 x2 x1 2 C , 4.7 n2 p2 q2 n1 p1 q1 r since det I D b2 w1 w2 x2 x1 2 . b Note that V is the asymptotic variance of the maximum likelihood estimate ˆr of r. Thus, we wish to minimize V subject to (4.4). So, define C D V C ln1 C n2 c @C x2 r2 D b2 x2 x1 2 2 ClD0 @n1 n1 p1 q1 @C x1 r1 2 D b2 x2 x1 2 2 C l D 0. @n2 n2 p2 q2 Notice that ln1 C ln2 D V, so that l D V/c. Hence
n1 D
n2 D
c Vp1 q1 c Vp2 q2
1/2 1/2
jx2 rj and bjx2 x1 j jx1 rj . bjx2 x1 j
4.8
Substitution of n1 and n2 in (4.7) gives
jx1 rj 2 jx2 rj C . V D c1 b2 x2 x1 2 p1 q1 1/2 p2 q2 1/2
4.9
Now we determine the values of x1 and x2 which minimize (4.9). Towards this, let hx D fPx[1 Px]g1/2 ,
with
hxi D pi qi 1/2 ,
i D 1, 2.
4.10
Then (9) becomes V D c1 b2 x2 x1 2 [jx2 rjhx1 C jx1 rjhx2 ]2 . Define s D V1/2 D [jx2 rjhx1 C jx1 rjhx2 ]/c1/2 bjx2 x1 j.
Dose Allocation Schemes in Logit Analysis
4.11
77
Without loss of generality assume that x1 < x2 . Then we have the alternate forms for s.
x2 r hx2 sgn x1 r C hx1 sgn x2 r hx2 sgn x1 r p p sD b c x2 x1 b c
x1 r hx2 sgn x1 r C hx1 sgn x2 r hx1 sgn x2 r p p D C . b c x2 x1 b c 4.12 The partial derivative of the first form w.r.t. x1 gives hx1 C [sgnx1 rx2 r]hx2 D x2 x1 h0 x1 .
4.13
The partial derivative of the second form w.r.t. x2 gives hx1 C [sgnx1 rx2 r]hx2 D x2 x1 h0 x2 .
4.14
Now using the fact that dPx/dxjxDxi D pi qi b, we find that h0 xi D pi qi b/2pi qi 1/2 i D 1, 2.
4.15
Using (4.15) in (4.13) and (4.14) we obtain p1 q1 /p1 q1 1/2 D p2 q2 /p2 q2 1/2 .
4.16
Now let gp D 2p 1p p2 1/2 . Since g0 p D 12 p p2 3/2 > 0
for all p.
Thus gp is monotonic increasing and is symmetric about p D 1/2 (i.e., g1/2 u D g1/2 C u for all u > 0). Hence the only solutions of (16) are of the form p1 D q2 . Also, by the definition of r, pr D f1 C ebra g1 . Thus a D 0 and b D 1 imply that r D logpr /qr . Further, pi D f1 C qr /pr exi r g1 and the above value of r will imply that xi D logpi /qi , i D 1, 2. This together with p1 D q2 imply x1 D x2 . Next let us consider the following special cases: Case 1 Let r < x1 < x2 (or x1 < x2 < r). Then equation (14) becomes hx C hx D 2xh0 x
with x D x2 .
Since h is symmetric about zero, hx D xh0 x. From the definition of hy we find that h0 y D 2P 1hy/2.
4
The Logit Approach
78
Substituting this in the preceding equation, we obtain 2p2 1x/2 D 1,
x/2 D p2 q2 1/2
or
with a D 0 and b D 1, p2 q2 D 1 ex /1 C ex . Thus x/2 D 1 ex /1 C ex . Then the specific solutions are x1 D 2.39935 and
x2 D 2.39935
for which p1 D 0.08322 and p1 D 0.91678. Case 2 Let x1 r x2 . Then equation (14) becomes hx hx D 2xh0 x. hx D hx implies that xh0 x D 0. Then either x D 0 or h0 x D 0. x D 0 implies that x1 D x2 D r D 0. h0 x D 0 implies that 2p2 1 D 0, i.e., p2 D 1/2 D 1 C ex 1 which again implies that x D 0. In this case, the optimal solutions are x1 D 2.39935 and
x2 D 2.39935 for which
p1 D 0.08322 and p2 D 0.91678. Then the optimal strategy is to sample at the root r if p1 pr p2 and sample at P1 p1 and P1 p2 when either pr < p1 or p2 < pr . Remark. (1) If we consider constraint (5), then hx D f[1 C DPx]/Px[1 Px]g1/2 . McLeish and Tosh [1985], using the convexity of h(x), show the existence of optimal dose levels. (2) Generally, one hopes that the optimal asymptotic design would be practical for small sample sizes. Here it is not the case, because sampling entirely at the root or at widely spaced quantiles is inefficient for smaller sample sizes. (3) McLeish and Tosh [1990] provide an algorithm for sequentially designing the dose allocation x1 , x2 , . . . and the accompanying sample sizes n1 , n2 , . . .. By this sequential design, the authors achieve full efficiency asymptotically for estimating ED100p in logit or probit assay.
M¨uller and Schmitt [1990] consider the problem of choosing the number of doses for estimating the median dose of a symmetric dose-response curve by the method of maximum likelihood, when the scale parameter is unknown. Sequential methods are not applicable in practical dose response experiments which usually take several weeks to be carried out and one cannot wait for the results of an experiment at a few doses in order to determine the next doses to be administered. Hence, they assume that a fixed number of subjects is given and an optimal choice of doses is to
Dose Allocation Schemes in Logit Analysis
79
be made at the beginning of the experiment. They assume that dose-response curves are given by zm , z>0 Pz D G s where m and s are unknown, and Gu is a distribution function which is twice differentiable and symmetric at zero. It is further assumed that log G and log1 G are convex and g D G0 > 0 whenever 0 < G < 1. Note that ED50 equals m and the slope of the dose-response curve at ED50 is P0 m D g0/s. Given n subjects, dose di (1 i n of a drug is administered to the ith subject. Some doses might coincide. Let yi be the response on ith subject (yi D 1 if the response is a reaction and 0, otherwise). Assume that the doses di are symmetrically spaced around m0 , i.e., di m0 D m0 dniC1 , i D 1, . . . , n, where m0 is the center of the dose-range. The authors assume that the dose range is [0,1] and m0 D 0.5, h is a known density with support [0,1] and is symmetric about m0 . If N is the number of distinct doses, then
di
Hdi D
hxdx D i 1/N 1 or 0
di D H1
i1 N1
,
i D 1, . . . , N.
Examples of hx are 1. 2. 3.
uniform on [0,1], parabolic, hx D 6x2 C 6x and truncated normal h D fx/ f1 0g.
They estimate m by the method of maximum likelihood. The asymptotic variance of the MLE based on the information matrix (which happens to be diagonal) is ns2 / niD1 yxi where yx D g2 x/Gx [1 Gx] and xi D di m0 /s. Not all outcomes of an experiment would lead to the unique existence of a MLE. Outcomes not leading to well-defined MLE’s will be called degenerate. According to Silvapulle [1981], MLE’s are well defined (in addition to the present assumptions on G) if and only if the intersection of the interiors of the two convex cones generated by dose vectors 1, di associated with yi D 1 and those generated by dose vectors associated with yi D 0 is nonempty. For degenerate cases, we assume that the MLE is not computable. So, optimal designs are those for which ns2 1 niD1 yxi is a maximum. Thus, the problem is reduced to that of choosing the number of doses N given the design density h and the model function gx. Once N is specified, n/N subjects will be allocated to each dose. Now, the optimal N is the one that maximizes N n i1 1 m0 s Ðy H N N1 iD1
4
The Logit Approach
80
where the contribution of one subject at one dose is y[di m0 /s]. Since the design density has support [0,1] and m0 D 0.5, the above Riemann sum can be looked upon as an approximation to an integral. That is, N
1
N y
iD1
di m0 s
1/2s
³s
yxdxfor large N.
1/2s
If G is thrice differentiable, then one can obtain y00 0 D 8[g02 0 C g0g00 0 C 4g4 0] and for dose-response models that are unimodal and concave in a neighborhood of m0 , i.e., satisfy g0 0 D 0 and g00 0 < 1, it follows that y00 0 < 0. Therefore, for the probit model and similar dose-response curves, y is concave in the central part. For the probit model y is concave on [1.981, 1.1981] and hence for s, not too small (for the probit model 1/2s 1.981 implies that s ½ 0.417) the Riemann sum can be monotonically increasing towards the integral. Hence, specifying as many doses as possible is equivalent to maximizing the sum. 4.17.1. Numerical Evaluation in the Probit Model
Since the problem cannot be solved analytically for an arbitrary G, the authors ˜ i D consider the probit model, namely Gx D x. Since n is fixed, setting yd ydi m0 /s, minimizing the variance is equivalent to maximizing Ns2 1
N
˜ i yd
iD1
which can easily be computed when s2 and h are known. The authors investigate probit models with m D m0 , s D 0.10.10.6 and uniform, parabolic and truncated normal as the design densities. Their graphs are based on n D 48, and N D 3, 4, 6, 8, 12, 16, 48 doses were chosen according to the design density on [0,1] with 48/N subjects at each dose. Their main findings are as follows: The flatness of y near zero greatly influences whether N D 3 or N > 3. If y is peaked, designs with small N are optimal. Then only doses near m0 contribute to ˜ i , since y declines rapidly for x away from zero. This occurs Ns2 1 NiD1 yd for small s in the probit model which can also be explained by the concavity of y where for the probit model s ½ 0.417 ensures the concavity of y on [1.1981, 1.1981]. If y is peaked, the advantage of designs with large N is more visible for parabolic and truncated normal design densities than for the uniform design density. If y is not peaked, i.e., s is not small, N D 48, the maximum number of doses, is always
Dose Allocation Schemes in Logit Analysis
81
optimal. If s is large, y is flat and the differences between different designs are not very transparent, then N D 48 is always optimal, and the asymptotic variance decreases monotonically as the number of doses selected increases. 4.17.2. Discrete Delta Methods for Estimating Variances of Estimators of ED100p
Let the dose-response relation be denoted by p D Fx where p denotes the probability of a response at dose x. Let ti denote the binary response at dose xi (i D 1, . . . , m) where x1 x2 Ð Ð Ð xm . The individual random responses Ti are assumed to be mutually independent and Ti is Bernoulli with parameter pi D Fxi , i D 1, . . . , m. Note that the model allows the possibility of multiple responses at the same dose level. Let x D x1 , . . . , xm , t D t1 , . . . , tm , p D p1 , . . . , pm and let qˆ D ht; x be an estimate of q D hp, x, a parameter of interest to the experimenter, such as log ED50 (mean log tolerance), etc. If qˆ is approximately normal, then 1 a 100% confidence interval for q is qˆ š za/2 SE
4.17
where za/2 is the 1 a/2th quantile on the standard normal curve and SE denotes ˆ The performance of these intervals for small samples can the standard error of q. be evaluated via simulation studies. For instance, M¨uller and Wang [1990] consider the use of several bootstrap estimators for the SE of the ML estimator of log ED50 and log ED10 in the probit model. Their studies point out that the ML method yields mixed results, but none of the bootstrap methods show a uniform improvement over ML. Cobb and Church [1995] propose discrete delta methods for estimating the variances of estimator of ED100p , which will be described below. Let qˆ D ht; x. Given T D t and ht; x exists, define si to be the m dimensional vector with all coordinates equal to zero except for the ith which is si D 1 2ti so that ti C si D 1 ti and si D š1. Let Di D [ht C si ; x ht; x]/si
4.18
be a first order ith partial difference quotient for h if t C si ; x exists and Di D 0 otherwise (i D 1, . . . , m). ˆ If qR D hT; x, then one can approximately linearize it as . ˆ @h/dpi jpi DTi Ti pi . qT q D hT, x hp; x D m
iD1
Now, replacing the partial derivatives by Di , we have . ˆ Di Ti pi . qT qD m
4.19
iD1
4
The Logit Approach
82
Next, using the independence of T1 , . . . , Tm , an approximation for the variance of qˆ given by . 2 ˆ D varq Di varTi D D2i pi 1 pi m
m
1
1
4.20
ˆ D m D2 pˆ i 1 pˆ i and SE in (4.17) is the square root of var ˆ - q - q. is var i 1 Ł Ł Ł If x1 < x2 < Ð Ð Ð < xk are the equally-spaced distinct dose levels with dose mesh d D xŁ2 xŁ1 , ri denotes the number of responses out of ni independent Bernoulli trials at dose level xŁi where FxŁi D pi and r D r1 , . . . , rk , then varq given by (4.20) takes the form of . ˆ ˆ D ˆ 2 ri pi 1 pi C ˆ 2 ni ri pi 1 pi 4.21 varq qi q qˆ iC q i:ri >0
i:ri
where qˆ i is the estimate of q for the sample modified by changing the ith coordinate of r, namely ri , to ri 1 if ri > 0 and leaving the other coordinates unchanged. Similarly qˆ iC is the estimate of q when ri is changed to ri C 1 and the other coordinates of r are unchanged provided ri < ni . Define qˆ i D qˆ if ri D 0 and qˆ iC D qˆ if ri D ni . Also, the implicit assumption is that ni > 1 for all i. If Fx is nondecreasing in x, one can obtain maximum likelihood estimates of the pi (where p1 p2 Ð Ð Ð pk ) by using the ABERS pooling procedures of Ayer, Brunk et al. [1955]. These are given by v v ri ni . 4.22 p˜ j D max min 1uj jvk
u
u
Formula (4.22) enables us to determine explicitly the fpi g. However, these are not recommended for a practical computation and we proceed as follows: If 0 pˆ 1 pˆ 2 Ð Ð Ð pˆ k ,
then
p˜ i D pˆ i ,
i D 1, . . . , k.
If pˆ j ½ pˆ jC1 for some j (j D 1, . . . , k 1), then p˜ j D p˜ jC1 ; The ratios pˆ j and pˆ jC1 are replaced in the sequence pˆ 1 , . . . , pˆ k by the single ratio rj C rjC1 /nj C njC1 , thereby obtaining an ordered set of only k 1 ratios. This procedure is repeated until an ordered set of ratios is obtained that are monotone and non-decreasing. Then for each i, p˜ i is equal to that one of the final set of ratios to which the original ratio pˆ i D ri /ni contributed. Notice that this method of calculating the p˜ 1 , . . . , p˜ k depends on a grouping of observations which might be more appealing on purely intuitive grounds than equation (4.22). This alternate procedure is recommended especially when k is large. Alternatively, if nj ½ 2, we can estimate pj by pˆ j D rj /nj so that the corresponding estimate for pj 1 pj is pˆ j 1 can be modified . pˆ j . This estimate / by using the bias-corrected variance estimate nj /nj 1 pˆ j 1 pˆ j .
Dose Allocation Schemes in Logit Analysis
83
4.17.3. Application to the Probit Model
Let pi D zi where zi D log xi m/s, i D 1, . . . , k. Suppose we are interested in ML estimation of ED100a for a D 0.10 and a D 0.50. - 100a D m ˆ C sˆ 1 a log ED
ˆ and sˆ denote the MLE’s of m and s, respectively. where m Cobb and Church [1995] denote the three discrete delta estimators DDA, DDB, DDBC based on the three estimators of pj 1 pj by p˜ j 1 p˜ j , pˆ j 1 pˆ j and nj pˆ j 1 pˆ j /nj 1, respectively. Cobb and Church [1995, section 4] carry out a simulation study in order to evaluate the performance of the three discrete delta estimators. They consider k distinct dose levels and the number of trials equal to n at each dose level. They consider m D kn D 60 and two combinations of k, n D 5, 12 and (12,5), and m, s D 0.5, 0.25. Log doses symmetric about 0.5 were equally spaced in [0,1] with the smallest log dose at zero and the largest at one. For each simulation run, kn D 60 Bernoulli trials were performed and the resulting k binomial observations r1 , . . . , rk were checked for cases where ML estimates do not exist. Note that if all the ABERS estimates pˆ j are equal or if p˜ 1 D Ð Ð Ð D p˜ j1 D 0 and p˜ jC1 D Ð Ð Ð D p˜ k D 1 for some j (1 j k) then the joint ML estimates of m and s (s > 0) do not exist. When the ML estimates exist, then log ED10 and log ED50 were computed and the variances of these ML estimates were estimated using the observed information matrix [M¨uller and Wang, 1990] and using the three DD methods. Using these variances, normal 95% confidence intervals were constructed for log ED10 and log ED50 and hence for ED10 and ED50 . One thousand simulation runs were made out of which only 12 runs failed to have ML estimates of m and s. The following conclusions were drawn from the simulation study. DDA and DDP are just about equal in their performances. DDB 95% confidence intervals for ED100a are lower than nominal coverage probabilities in six out of eight cases and the uniformly wide DDBC confidence intervals give improved coverage in these six cases. Thus, DDBC is better than DDB regarding coverage probabilities. Also, DDBC method has small bias as an estimator of the standard error of log ED100a . James, James and Westenberger [1984] describe analogues of R-estimators for the median q (the ED50 ) of a symmetric tolerance distribution, Fx in quantal assay. Among the R estimators, those based on a score function of the form Jt D c logft/1 tg for 0 < t < 1, and c > 0 have desirable properties in terms of asymptotic efficiency when the tolerance distribution is symmetric. When c D 1, J0 t D logft/1 tg, the logit function, is called the logit scores (LS) estimator. James, James and Westenberger [1984], via a simulation study, compare the MSE’s (mean square errors) of the LS estimator and ten other estimators of ED50 including several robust estimators considered by Hamilton [1979]. They infer that the LS estimator has the smallest MSE in 13 out of 22 cases considered. It performed well, especially for heavy-tailed tolerance distributions, thereby justifying its robustness in
4
The Logit Approach
84
terms of asymptotic efficiency. Hoekstra [1991] points out the need for an estimate of the variance of the LS estimator. Cobb and Church [1999] provide an estimate of the variance using the discrete delta method. To extend the definition of R-estimators to quantal bioassay, James et al. [1984] ˜ propose an empirical tolerance distribution Fx based on the p˜ j defined in (4.22) as follows: ˜ Fx D 0 for x xŁ0 D xŁ1 d D p˜ i for x D xŁi i D 1, . . . , k D 1 for x > xŁkC1 D xŁk C d,
4.23
˜ and Fx is linear and continuous on xŁi , xŁiC1 ] for i D 0, . . . , k. A score function J is a nondecreasing integrable function defined on (0,1) with J1 t D Jt and Jt is not identically zero. then the associated R estimator qˆ is the solution of the equation
x kC1
J
˜ ˜ Fx C 1 F2q x ˜ dFx D0 2
4.24
x0
if the solution is unique. Cobb and Church [1999] claim that a solution of (4.24) always exists. If not unique, the solution set is a bounded interval. In the latter case, the R estimator qˆ is taken to be the mid-point of the interval of solutions. For the logit score function the associated R estimator is the LS estimator. 4.17.4. Computation of the LS Estimator
˜ Let Fx be the empirical tolerance distribution given by (4.23). Let
xiC1
˜ q D Ii F,
log xi
˜ ˜ Fx C 1 F2q x ˜ dFx ˜ ˜ F2q x Fx
with xi D x0 C id, i D 0, . . . , k C 1. Then the LS estimator qˆ for q is the solution ˜ q D k0 Ii F, ˜ q D 0. Note that Ii F, ˜ q D 0 if Fx ˜ iC1 D Fx ˜ i . of the equation: hF, h is continuous and nonincreasing in q (since the derivative is negative). It can be ˜ q > 0 and for q D xkC1 , hF, ˜ q < 0. (Check that the shown that, for q D x0 , hF, argument of log is > 1 and < 1, respectively) so that x0 < qˆ < xkC1 . The function ˜ ˜ x is also Fx is linear on the interval [xi , xiC1 ] and it can be shown that F2q 1 linear on [xi , xiC1 ] for q D xr C 2 d (then 2q xiC1 D x2ri and 2q xi D x2rC1i ) (r D 0, . . . , k). For other values of q, it .is piecewise linear on/[xi , xiC1 ] with two ˜ ˜ linear segments. Thus the function tq D 12 Fx C 1 F2q x is piecewise linear
Dose Allocation Schemes in Logit Analysis
85
on [xi , xiC1 ] and the integral
xiC1
˜ q D Ii F,
log xi
tq x 1 tq x
˜ dFx
can be evaluated as follows. Note that for xi < x < xiC1 , ˜ Fx D pi C x xi p˜ iC1 p˜ i /d,
˜ dFx D d1 p˜ iC1 p˜ i dx.
˜ If F2q x is linear in xi , xiC1 , then ˜ ˜ F2q x D F2q xi C
/ x xi . ˜ ˜ F2q xi F2q xiC1 . d
Consequently,
/ x xi . ˜ ˜ ˜ xi C p˜ iC1 p˜ i F2qx C F2qx 1 p˜ i F2q i iC1 tq d D . 1 tq [ ] 4.25 Thus, 1 xiC1
1 tq x log dx D d log1 a bydy loga C bydy 1 tq x xi
0
0
where ˜ xi a D p˜ i F2q and ˜ ˜ xi C F2q xiC1 . b D p˜ iC1 p˜ i F2q Also, by performing integration by parts and simplifying terms, we obtain p˜ iC1 p˜ i d
xiC1
log xi
tq x 1 tq x
dx D
p˜ iC1 p˜ i [1 a log1 a C a log a b 1 a b log1 a b a C b loga C b C 2b].
4.26
Further, by looking at the values pˆ i D ri /ni , we can infer in which interval qˆ may be. Suppose qˆ lies in xi , xiC1 , then we can split it into two intervals, xi , q ˜ ˜ and q, xiC1 , then Fx and F2q x will be linear in each of the intervals and we evaluate them on each of the subintervals by the method outlined above. After ˜ q (some of them may be zero) we can use an iterative evaluating all the Ii F, ˆ From equation (4.21), by procedure such as Newton’s approximation to find q.
using the “bias corrected” estimate fni /ni 1gˆpi 1 pˆ i for the Bernoulli variance at dose level xŁi , we obtain ni 2 ˆ ˆ ˆ - q D var qi q ri pˆ i 1 pˆ i ni1 i:ri >0 ˆ i ri ni pˆ i 1 pˆ i C qˆ iC qn 4.27 ni1 i:r
i
where p̂_i = r_i/n_i (i = 1, . . . , k). Cobb and Church [1999] carried out an extensive simulation study to investigate the performance of the DDBC estimate based on 1800 replications with k = 11, n_i = n = 10 or 20 and various tolerance distributions. They compute the mean, standard deviation and coefficient of variation of the DDBC estimate of RMSE (root mean squared error), and the coverage probability when the nominal confidence level is 95%. The estimated coverage probabilities do quite well for most cases, especially for n = 20. The bias is small and the precision of the standard error is good.
Example
Cobb and Church [1999] provide the following example. A bioassay by Chen and Ensor [1953] of anticonvulsive effects of Dilantin with cats as subjects yielded the following data:

Dose z_i (mg/kg)   log z_i   # of Subjects   # of Responses
8.3                2.116     10              1
10.0               2.303     10              1
12.0               2.485     10              3
14.4               2.667     10              7
17.3               2.850     10              9
The log doses are equally spaced here with xi D x0 C id, x0 D log12 3 log1.2 and d D log1.2, for i D 1, . . . , 5. Straightforward computations based on (4.22) yield p˜ 1 D 1/10,
p˜ 2 D 1/10,
p˜ 3 D 7/10,
p˜ 4 D p˜ 5 D 1,
Notice that p̃_4 and p̃_5 are forced to be 1 because the unrestricted estimates exceed 1, and h(F̃, θ) = I_0(F̃, θ) + I_2(F̃, θ) + I_3(F̃, θ) (since I_1 = I_4 = I_5 = 0). From the p̂ values, we conjecture that θ lies in (x_3, x_4). So we split (x_3, x_4) into the intervals (x_3, θ) and (θ, x_4) and write I_3 = I_31 + I_32. After evaluating each of the integrals explicitly using the formula in (4.26) we can use Newton's method to find the solution θ̂; we can use 2.485 as a starting value of θ and then iterate. Cobb and Church [1999] (without showing any details) report that the LS estimator of log ED_50 is θ̂ = 2.554, and hence the estimated ED_50 = exp(2.554) = 12.86 mg/kg. The DDBC estimate of the standard error of θ̂ is 0.055 and the resulting 95%
confidence interval for ED_50 is exp(2.554 ± 1.96 × 0.055) = (11.54, 14.32). This interval is very close to the intervals obtained by Cobb and Church [1995] for the same data using the probit model, the MLE for ED_50 and estimates of standard error based on both the observed information matrix and the DDBC method. These intervals were respectively (11.61, 14.40) and (11.57, 11.44).
Problem
For the data in the example above, verify that θ̂ = 2.554.
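As a numerical illustration of the computation just described, the following sketch (Python; not part of the original analysis) builds the empirical tolerance distribution F̃ of (4.23) from the p̃ values of this example, evaluates h(F̃, θ) by a simple midpoint quadrature instead of the closed form (4.25)–(4.26), and locates the root by bisection. The clipping constant eps is only a numerical safeguard against the logarithmic endpoints; the result should fall close to the reported value 2.554.

import numpy as np

# Dilantin example (Cobb and Church [1999]): log doses and the smoothed p-tilde values
x = np.log(np.array([8.3, 10.0, 12.0, 14.4, 17.3]))   # x_1, ..., x_5
d = np.log(1.2)                                        # common log-dose spacing
p = np.array([0.1, 0.1, 0.7, 1.0, 1.0])                # p~_1, ..., p~_5
grid = np.concatenate(([x[0] - d], x, [x[-1] + d]))    # x_0, x_1, ..., x_{k+1}
vals = np.concatenate(([0.0], p, [1.0]))               # F~ at the grid points

def F(t):
    # piecewise-linear empirical tolerance distribution of (4.23)
    return np.interp(t, grid, vals, left=0.0, right=1.0)

def h(theta, m=4000, eps=1e-9):
    # h(F~, theta): integral of logit{[F~(x) + 1 - F~(2 theta - x)]/2} dF~(x), midpoint rule
    z = np.linspace(grid[0], grid[-1], m + 1)
    mid = 0.5 * (z[:-1] + z[1:])
    t = np.clip(0.5 * (F(mid) + 1.0 - F(2 * theta - mid)), eps, 1 - eps)
    dF = np.diff(F(z))
    return np.sum(np.log(t / (1 - t)) * dF)

lo, hi = grid[0], grid[-1]          # h is nonincreasing with h(lo) > 0 > h(hi)
for _ in range(60):                 # bisection for the root theta_hat
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
theta_hat = 0.5 * (lo + hi)
print(theta_hat, np.exp(theta_hat))  # log ED50 and ED50 (mg/kg); close to 2.554 and 12.86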
Appendix2 : Justification for Anscombe’s Correction for the Logits li
Let n_i be the number of experimental units getting the dose x_i and let R_i denote the number responding to that dose level (i = 1, . . . , k). Let
Z_i(d) = ln[(R_i + d)/(n_i − R_i + d)];
we shall find d such that the Z_i(d) are asymptotically unbiased for a + bx_i. Hereafter let us drop the subscript i throughout. Let R = nP + Un^{1/2}. Then EU = 0, var U = P(1 − P) and U is bounded in probability. Let l = ln[P/(1 − P)]. Consider
Z(d) − l = {ln(R + d) − ln(nP)} − {ln(n − R + d) − ln[n(1 − P)]}
         = ln(1 + U/(Pn^{1/2}) + d/(nP)) − ln(1 − U/[(1 − P)n^{1/2}] + d/[n(1 − P)])
         = U/[P(1 − P)n^{1/2}] + d(1 − 2P)/[P(1 − P)n] − (1 − 2P)U²/[2P²(1 − P)²n] + o_p(n^{−1}),
where o_p(n^{−1}) denotes terms of smaller order of magnitude than n^{−1} in probability. Thus
E[Z(d) − l] = (1 − 2P)(d − 1/2)[P(1 − P)n]^{−1} + o(n^{−1}).
Hence Z(d) will be almost unbiased for l when n is sufficiently large, provided d = 1/2. Thus, a single choice of d which is free of P is 1/2. We set
Z = ln[(R + 1/2)/(n − R + 1/2)].
Gart and Zweifel [1967] have, by using similar methods, proposed
V = (n + 1)(n + 2)/[n(R + 1)(n − R + 1)]
such that var Z ≈ EV, so that V is nearly unbiased for the variance of Z.
2 Cox [1970] served as a source for the material of the appendix.
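The correction and the variance estimate of the appendix are easy to compute. The short sketch below (Python; the counts r = 16 out of n = 24 are made up for illustration) applies the adjusted logit Z and the Gart–Zweifel variance V to a single dose group.

import math

def anscombe_logit(r, n):
    # Z = ln{(R + 1/2)/(n - R + 1/2)}: nearly unbiased for the logit of P
    return math.log((r + 0.5) / (n - r + 0.5))

def gart_zweifel_var(r, n):
    # V = (n + 1)(n + 2) / {n (R + 1)(n - R + 1)}: nearly unbiased for var(Z)
    return (n + 1) * (n + 2) / (n * (r + 1) * (n - r + 1))

r, n = 16, 24
print(anscombe_logit(r, n), gart_zweifel_var(r, n))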
Problems
(1) The following data is taken from Morton [1942], which describes the application of rotenone in a medium of 0.5% saponin, containing 5% of alcohol. Insects were examined and classified 1 day after spraying. Test of the toxicity of rotenone to Macrosiphoniella sanborni:

Dose of rotenone, mg/l   log10 dosage, x   Number of insects, n   Number affected, r   Proportion killed, p
10.2                     1.01              50                     44                   0.88
 7.7                     0.89              49                     42                   0.86
 5.1                     0.71              46                     24                   0.52
 3.8                     0.58              48                     16                   0.33
 2.6                     0.41              50                      6                   0.12
 0                       –                 49                      0                   0

Because of the last row, we can assume that the natural mortality is zero. Estimate a and b by fitting the probit to the above data by the method of maximum likelihood. Starting with b_0 = 4.01 and a_0 = 2.75, obtain the maximum likelihood estimates of a and b. Answer: â = 2.59, b̂ = 4.22 (after 4 iterations).

(2) The following data is taken from Fisher and Yates [1963]:

x      3   4   5     6      7      8      9
n      8   8   8     8      8      8      8
Dead   0   0   2     3      3      7      8
p      0   0   0.25  0.375  0.375  0.875  1.000

Berkson [1957], fitting the logit by the method of maximum likelihood, obtains (after the third iteration) â = 8.0020 and b̂ = 1.2043. He starts with the provisional values of a_0 = 6.94 and b_0 = 1.04. Find a_0 and b_0 as the least-squares estimates based on (x_i, l̂_i), i = 1, . . . , k and then iterate by the method of maximum likelihood. Compare your values with those obtained by Berkson [1957]. Also, starting with Berkson's provisional values obtain the mle values of a and b.
Answer for (2a): â = 8.003, b̂ = 1.204 (after 6 iterations).
Answer for (2b): Using a_1 = 6.94 and b_1 = 1.04 as starting values, the final answers are â = 8.003, b̂ = 1.204.

(3) Let n subjects be given a dose x of a toxic substance and let r be the number responding to the same. Define
l = ln[(r + 1/2)/(n − r + 1/2)].
If P(x) = [1 + e^{−(a+bx)}]^{−1}, where P(x) is the probability of a single subject responding to dose level x, show that
E(l) = a + bx + O(n^{−2}) and var(l) = [nPQ]^{−1} + O(n^{−2}), where Q = 1 − P(x).
Hint: Refer to Anscombe [1956].

(4) Data taken from Rao [1965]. Artificial data on two response measurements (y1, y2):
Standard preparation, IU                              Test preparation, mg
1.25        2.50        5.00        0.125       0.250       0.500
y1    y2    y1    y2    y1    y2    y1    y2    y1    y2    y1    y2
38    51    53    49    85    47    28    53    48    48    60    43
39    55    102   53    144   51    65    53    47    51    130   50
48    46    81    46    54    39    35    52    54    48    83    48
62    51    75    51    85    41    36    54    74    50    60    51
Totals:
187   203   311   199   368   178   164   212   223   197   333   192
y1s = 866, y2s = 580 (standard);   y1t = 720, y2t = 601 (test)
(a) Find the relative potency of the test preparation based on y1 alone.
(b) Find the relative potency of the test preparation based on y2 alone.
(c) Consider y = a1 y1 + a2 y2. Rao [1965, p. 215] has found the best a1 and a2 in the sense that no other linear function has nonzero regression with the dose levels independently of the given function (i.e., any other response independent of the given function is not influenced by the amount of the drug given). The optimal constants are given by a1 = 0.42066 and a2 = 2.9060, except for a common multiplicative constant. Find the relative potency of the test preparation using y as the response and compare with those in (a) and (b).
Answers.
(a) (b) (c)
M D f0.39794 0.60206g C D 6.800956. M D 0.812692b D 9.342923 R D 6.496695 b D 57.72712, M D 0.823218 R D 6.656075
60 72.16667 D 0.83257. Hence R D 10M 72.66718
5 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
Other Methods of Estimating the Parameters1
Wilson and Worcester [1943a, b, c] and Worcester and Wilson [1943] estimated the parameters of a logistic response curve when two or three dose levels with dilutions in geometric progression are used. Wilson and Worcester [1943d] considered the maximum likelihood estimation of the location and scale parameters of a general response curve. The dose levels are said to be in geometric progression if they are of the form D1 , D2 D rD1 , D3 D r2 D1 , . . . Taking their logarithms and setting xi D ln Di ln D1 / log r, we see that the xi will take the values 0, 1, 2, . . . Bacteriologists prefer working with dilutions to working with doses. Notice that dilutions and doses are inversely related: the lowest dilution corresponds to the largest dose level (and typically to the largest response). Note that, throughout this chapter log and ln are the same. 5.1. Case of Two Dose Levels
Let D1 , D2 D rD1 be the two dose levels. Let the response at x D log D be denoted by P where P D [1 C eaCbx ]1 D [1 C ebxg ]1 ,
g D a/b.
Let x1 D log D1 , and x2 D x1 C log r. If Pi denotes the response at xi and Qi D 1 Pi i D 1, 2, then logP1 /Q1 D bx1 g and logP2 /Q2 D bx1 C log r g. Solving for b and g (after setting pi D Pˆ i which is the sample proportion of n animals responding to dose level xi i D 1, 2 we obtain b D bˆ D [logp2 /q2 logp1 /q1 ]/ log r and c D gˆ D [x1 logp2 /q2 x2 logp1 /q1 ]/[logp2 /q2 logp1 /q1 ]. 1 Ashton
[1972] served as a source for part of this chapter.
91
. If 1 D logp/q, then dl D dp/PQ and consequently var b D [P1 Q1 1 C P2 Q2 1 ]/nlog r2 . Now, let Wi D pi qi , Si D logpi /qi , i D 1, 2. Then applying the formula for the variance of a ratio, namely U 2 2 D q4 var 2 [q2 var U C q1 var V 2q1 q2 covU, V] V where EU D q1 and EV D q2 , after setting U D x1 S2 x2 S1 , and V D S2 S1 we have S2 S1 4 s2c D S2 S1 2 [x21 np2 q2 1 C x22 np1 q1 1 ] C x1 S2 x2 S1 2 [np2 q2 1 C np1 q1 1 ] 2S2 S1 x1 S2 x2 S1 [x1 np2 q2 1 C x2 np1 q1 1 ]. That is, nS2
S1 4 s2c
x22 x21 D S2 S1 S2 S1 C W2 W1 x2 x1 x1 S2 x2 S1 C W2 W1 1 C x1 S2 x2 S1 x1 S2 x2 S1 W1 2 C W1
S2 S1
x2 x1 C W2 W1
S1 x21 x1 x2 S2 x1 x2 S1 S2 x22 D S2 S1 C W1 W2 W1 W2 x2 S1 x2 S2 x1 S1 x1 S2 C x1 S2 x2 S1 C W1 W2 W1 W2 S2 x2 ln r x1 S1 ln r D S2 S1 C W1 W2 S2 ln r S1 ln r x1 S2 x2 S1 C , ln r D x2 x1 , W1 W2
where Wi D pi qi , i D 1, 2. Thus nS2 S1 4 2 sc D W1 1 [S2 x2 S2 S1 S2 x1 S2 x2 S1 ] ln r C W1 2 [S1 x1 S2 S1 S1 x1 S2 x2 S1 ] 1 2 2 D W1 1 [S2 ln r] C W2 [S1 ln r].
Hence
[lnp1 /q1 ]2 [lnp2 /q2 ]2 C p1 q1 p2 q2 [lnp1 /q1 lnp2 /q2 ]4
n1 ln r2 s2c D
.
If there are two values of b, each determined by two doses on n animals, then varb b0 D var b C var b0 and varc c0 D var c C var c0 . If the values of b and b0 are different, the result is of lesser value than if b D b0 , because standardization at the LD50 points does not guarantee standardization at other points. When the response rates are 0 or 1, the corresponding logits 1i will become infinite in magnitude. A reasonable approach suggested by Berkson [1953, p. 584] is to replace p D 0 by 2n1 and p D 1 by 1 2n1 . This makes sense because the values of p increase in steps of 1/n and 2n1 is halfway between 0 and n1 and also 1 2n1 is halfway between 1 and 1 n1 . Remark. If the number of experimental units are different at the two dose levels, we can replace n by n1 or n2 at the appropriate places.
Example
Consider the following data:

log10 x   n    r    p
0.41      50   6    0.12
1.01      50   44   0.88
Find b, c = γ̂ and s_c. Computations yield log r = 0.6,
b = b̂ = [log10(0.88/0.12) − log10(0.12/0.88)]/log r = 2.88,
c = γ̂ = [x_1 log10(0.88/0.12) − x_2 log10(0.12/0.88)]/(b log r) = 0.71,
s_c² = n^{−1}(log r)² [ {log10(0.12/0.88)}²/(0.12 × 0.88) + {log10(0.88/0.12)}²/(0.88 × 0.12) ] / [log10(0.12/0.88) − log10(0.88/0.12)]⁴ = 0.011383.
Thus s_c = 0.107.
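A small sketch (Python; it assumes the common-logarithm convention of the example and an equal number n of subjects at the two doses) that reproduces the calculation above:

import math

def two_point_logit(x1, p1, x2, p2, n, base=10):
    # two-dose-level estimates of section 5.1: slope b, gamma = -a/b, and s_gamma
    log = lambda u: math.log(u, base)
    S1, S2 = log(p1 / (1 - p1)), log(p2 / (1 - p2))
    logr = x2 - x1
    b = (S2 - S1) / logr
    gamma = (x1 * S2 - x2 * S1) / (S2 - S1)
    s2 = (logr ** 2 / n) * (S1 ** 2 / (p1 * (1 - p1)) + S2 ** 2 / (p2 * (1 - p2))) / (S1 - S2) ** 4
    return b, gamma, math.sqrt(s2)

# data of the example: x = log10(dose), p = proportion responding, n = 50 per dose
print(two_point_logit(0.41, 0.12, 1.01, 0.88, 50))   # roughly (2.88, 0.71, 0.107)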
5.2. Natural Mortality
The question is how to incorporate natural mortality among experimental units. So far, we have assumed that the response has occurred solely because of the dose. However, this may not be true; for instance, in using insects, some may die even though not treated with the toxic. Also in tests on eggs, an adjustment has to be made for the proportion of eggs that fail to hatch due to natural causes. In the case of an insecticide, let c denote the proportion of insects that would die due to natural causes, i.e., even in the absence of the toxic. Let P0 denote the proportion of deaths in the insects due to natural causes as well as the toxic. Let D denote the event of the death of an insect and T denote the effect of the toxic. Then P D PDjT D PDT/PT D [PD PDT]/PT D P0 c/1 c. If c is known or an estimate of it is available, one can make an adjustment with maximum likelihood or the logit using the above formula. So, replacing ni by ni 1 c and pi by p0i c/1 c, the likelihood equations become ni p0i P0i D 0, ni xi p0i P0i D 0. Hence, the equations are unaltered. Notice that ni and p0i are the unadjusted observed values. Also since @P0 /@P D 1 c ni Wi , etc., where Wi D Pi Qi , @2 L/@a2 D 1 c the error variances of the estimates a and b are increased by the factor 1 c1 and consequently the precision is decreased. c D 0.2 implies that a natural mortality of 20% will increase the standard errors of a and b and ED50 by a factor of 1.1. The above procedure is simple and approximately correct, although theoretically it is not very sound. The likelihood equations are obtained by replacing P by P0 (Q by Q0 ) in the expression for L since R follows the binomial distribution with parameter P0 . Furthermore, it is P and not P0 that follows the logistic law. Hence, the true likelihood equations are ni p0i P0i @P0i @L D D0 @a P0i Q0i @a ni p0i P0i @P0i @L D D 0. @b P0i Q0i @b Recall that @P/@a D PQ, @P/@b D xPQ and note that P0 p0 D 1 cp P, Q0 D 1 cQ, @P0 /@P D 1 c.
Using @P0 @P0 @P D D 1 cPQ, @a @P @a we have for the true likelihood equations Pi @L D ni pi Pi D0 @a Pi C c1 c1 Pi @L D ni xi pi Pi D 0. @b Pi C c1 c1 Since Pi /[Pi C c1 c1 ] < 1, the weights attached to the observations are diminished. The simplicity of the earlier likelihood equations is lost. Since . Pi 1 c/[Pi C c1 c1 ] D 1 c cP1 i ³ 1 c when Pi is large, the simple equations suggested earlier are reasonable unless quite a few Pi are small. Also differentiating the above likelihood equations and taking expectations we obtain @2 L D ni Wi gPi , c @a2 @2 L D ni Wi xi gPi , c E @a@b
E
E
@2 L D ni Wi x2i gPi , c, where @b2
gPi , c D Pi /[Pi C c1 c1 ], Wi D Pi Qi . Hence, we infer that the precision of the estimates is diminished. The likelihood equations are solved by using the iterative method suggested earlier. Now each of the quantities need to be multiplied by a factor gPˆ i , c where c D rc /nc and is obtained from a controlled experiment in which rc out of nc insects die in the absence of the insecticide. Notice that we use the adjusted pi in the place of the original observed p0i . It is suggested that the above method is sufficiently accurate for c 0.10. If c > 0.10 or the control batch is small then we should use an extended method of maximum likelihood procedure in which there are three unknown parameters. However, in practice, either we redo the control experiment so that the control batch is of a reasonable size or we cut short the duration of the experiment so that the resulting c is small. Estimation of Parameters with Non-Zero Natural Mortality Using GLIM one can iteratively fit the probit or logit model when the natural mortality is zero. Barlow and Feigl [1982, 1985] indicate how one can improvise the existing GLIM programs in order to fit the probit or the logit model with the non-zero natural rate c (which is known or estimated from a controlled experiment).
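The adjustment can be packaged as below (a minimal sketch in Python; the value c = 0.2 is only illustrative). abbott_adjust applies P = (P′ − c)/(1 − c) to an observed proportion, and g_weight returns the factor g(P, c) by which the usual maximum likelihood weights n_i W_i are deflated when the natural response rate c is nonzero.

def abbott_adjust(p_obs, c):
    # adjusted response probability P = (P' - c)/(1 - c); values below c are truncated at 0
    return max(0.0, (p_obs - c) / (1.0 - c))

def g_weight(P, c):
    # factor g(P, c) = P / (P + c/(1 - c)) that deflates the ML weights n_i W_i
    return P / (P + c / (1.0 - c))

c = 0.2                      # natural mortality, e.g. estimated from a control batch
for p_obs in (0.25, 0.50, 0.90):
    P = abbott_adjust(p_obs, c)
    print(p_obs, round(P, 3), round(g_weight(P, c), 3))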
Example In two alternative forced-choice psychophysical experiments of infant light perception, a light of known intensity is shown into the eyes of the infant from either the right or the left side; the side is randomly chosen. The response is correct if the baby’s eyes shift to the other side. Since ‘unknowns’ are not permitted for the response, we use the terminology ‘forced choice’. Different intensities of light will be termed different ‘dose’ levels. It is expected that in the absence of any light, i.e., at zero dose, 50% of random eye shifts will be ‘correct’ so that the natural response rate is c D 0.50. The following data is taken from Barlow and Feigl [1982].
Dose group i   n_i   Intensity (ftl.)   log10 dose, x_i   Correct, r*_i   p*_i   Abbott's transforms: r_i   p_i
1              40    100                2.000             24              0.6    8                          0.2
2              40    200                2.301             28              0.7    16                         0.4
3              40    400                2.602             36              0.9    32                         0.8
The parameter of interest is the ED075 , the intensity required to produce a 75% correct response pŁ D 0.75 which is equivalent to the ED50 on the p scale. Using GLIM3 the best fitting dose-response line by the method of maximum likelihood after two iteration is hˆ dose D 7.086 C 3.025x, where h D a C bx. The estimated ED075 D antilog10 x0 , where x0 is the value of x for which hˆ D 0, thus x0 D 2.34, ED075 D 220. The 95% confidence intervals for x0 obtained by Barlow and Feigl [1982] is (2.033, 2.509) and on the original intensity scale it becomes (108, 323). These wide limits can be attributed to the poor design, namely, the poor choice of doses. Next, towards fitting the logit by the method of least squares we have li D lnpi /qi , i D 1, 2, 3 . . . , then li D 1.386, l2 D 0.405 and l3 D 1.386. x
x      n    p     l = ln[p/(1 − p)]   W = np(1 − p)   Wl
2.00   40   0.2   −1.386              6.4             −8.87
2.30   40   0.4   −0.405              9.6             −3.89
2.60   40   0.8    1.386              6.4              8.87
ΣW_i = 22.4, ΣW_i x_i = 51.54, ΣW_i x_i² = 119.64, ΣW_i l_i = −3.89, ΣW_i l_i x_i = −3.63.
Hence x̄ = 51.54/22.4 = 2.30, l̄ = −3.89/22.4 = −0.17,
b = [22.4(−3.63) − (−3.89)(51.54)]/[22.4(119.64) − (51.54)²] = 119.18/23.56 = 5.06,
a = −0.17 − 5.06(2.30) = −11.80.
log ED075 D a/b D 2.33 and ED075 D 215. Also, b2 SE2 a/b D sˆ2l C ˆl/b2 s2b (see p. 46) 2 0.17 1 C [119.64 22.42.302 ] D 22.4 5.06 D 0.045 C 0.00111.141 D 0.046 SEa/b D 0.042. 95% confidence limits for a/b are 2.33 š 20.042 D 2.245, 2.415. Hence, 95% confidence limits for ED075 D antilog 2.245, 2.415 D 176, 260. In SAS/STAT User’s guide [1991, pp. 1325–1350] has a Probit Program which calculates maximum likelihood estimates of regression parameters and natural mortality rate for biological quantal assay response data arising from probit or logit regression models.
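A sketch of the corresponding maximum likelihood fit is given below (Python with scipy). It is a generic optimisation of the binomial likelihood with the natural response rate c = 0.5 held fixed, not the GLIM run used by Barlow and Feigl [1982]; the estimates should nevertheless come out close to the fitted line quoted above (intercept near −7.09, slope near 3.03), with x_0 ≈ 2.34 and ED_075 near 220.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# forced-choice data: r* correct out of n at log10 intensity x; natural response c = 0.5
x = np.array([2.000, 2.301, 2.602])
n = np.array([40, 40, 40])
r = np.array([24, 28, 36])
c = 0.5

def negloglik(theta):
    a, b = theta
    P = c + (1 - c) * norm.cdf(a + b * x)      # P(correct) = c + (1 - c) * Phi(a + bx)
    P = np.clip(P, 1e-10, 1 - 1e-10)
    return -np.sum(r * np.log(P) + (n - r) * np.log(1 - P))

fit = minimize(negloglik, x0=np.array([-7.0, 3.0]), method="Nelder-Mead")
a_hat, b_hat = fit.x
x0 = -a_hat / b_hat                            # log10 dose at which h = 0, i.e. p* = 0.75
print(a_hat, b_hat, x0, 10 ** x0)              # compare with x_0 = 2.34 and ED_075 about 220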
5.3. Estimation of Relative Potency
If the assays of a standard preparation S and a test preparation T are parallel, then the horizontal distance between the lines will be the same at all levels of response and it provides an estimate of the relative potency of T. If the lines are not parallel, then the ratio of the two ED50 points on the response curves will provide an estimate of the relative potency. In practice, we shall be comparing only two treatments of similar chemical structures. Hence the lines will be parallel at least in the range of doses of practical interest. Then, if we are fitting a logit model, we will minimize the logit c2 , given by k1
Σ_{i=1}^{k1} n_iT p_iT q_iT (l_iT − l̂_iT)² + Σ_{i=1}^{k2} n_iS p_iS q_iS (l_iS − l̂_iS)²
with respect to the estimates of the three unknown parameters aT , aS and b where li,T D aT C bxi and li,S D as C bxi , ˆli,T D logpiT /qiT , ˆli,S D logpiS /qiS .
The normal equations are
Σ n_iT p_iT q_iT (l_iT − l̂_iT) = 0,   Σ n_iS p_iS q_iS (l_iS − l̂_iS) = 0,
and
Σ_{i=1}^{k1} n_iT p_iT q_iT x_i (l_iT − l̂_iT) + Σ_{i=1}^{k2} n_iS p_iS q_iS x_i (l_iS − l̂_iS) = 0.
The horizontal distance between the lines fitted via least squares is M = (a_T − a_S)/b and the estimate of the relative potency is R, where log R = M. Writing
l̂_iS = a'_S + (x_i − x̄_S)b,   a'_S = a_S + b x̄_S,   and
l̂_iT = a'_T + (x_i − x̄_T)b,   a'_T = a_T + b x̄_T,
one can show (as in section 4.6, see p. 46) that the estimated variance of M is given by (after noting that M = x̄_S − x̄_T − (a'_S − a'_T)/b):
S²_M = [S²_{a'_S} + S²_{a'_T} + (x̄_S − x̄_T − M)² S²_b]/b².   (see p. 37)
In carrying out calculations, the appropriate sums of squares and cross-products are pooled in order to obtain b. A test for parallelism can then be made. The sum of squares due to regression can be decomposed into two parts, one part having one degree of freedom for the lines and the other for nonlinearity having k1 C k2 4 degrees of freedom. If we have two two-point assays, the calculations and fitting of the lines will be much simpler. If the doses are D1 and D2 D rD1 , the standard and the test preparation are diluted in the same proportion and if the lower and upper levels of each preparation are coded as x D 0 and x D 1, respectively, then log R D aT aS log r/b. Example Decamethonium bromide, known concentration and unknown, is tested by an experiment with 24 animals at each dosage [Berkson, 1953]. It yielded the following data:
Drug
Dose, mg/ml
Tested Standard
Designation
x
n
r
p
D 1.5D
1 2
0 1
24 24
6 16
0.250 0.667
0.1875 0.2060 0.2221 0.1543
0.016 0.024
3 4
0 1
24 24
7 21
0.292 0.875
0.2067 0.1831 0.1094 0.2128
W
W1
Berkson [1953] obtains: aT D 1.3309 aS D 0.6750 b D 2.2217 M D aT aS /b D 0.2952 log R D M log 1.5 D 0.0520, s2b D 0.2406, xT D 0.5422, s2M D 0.04778,
R D 0.8872
sb D 0.49
W i xi xS D 0.3461 x D Wi
sM D 0.2186
slog R D sM log 1.5 D 0.0385 see log R D M log r sR D Rslog R / log e D 0.0786. Problem Dose D number of nematodes/larvae; n D number of test subjects (larvae) per dose; r D number of larvae that died after 5 days; T spec D nematode species; instar D age of the larvae, each instar is proportional to the age of the insect as it appears between molts. Evaluate the slopes, R and the standard errors. In this bioassay, 10 insect larvae were exposed to a known number of entomogenous nematodes in a 100 ð 15 mm petri dish for 24 h, after which the larvae were separated and held individually in 1-ounce plastic cups. The mortality was recorded as follows, 5 days after the initial exposure to the nematode:
1st instar P. robinie larvae versus S. feltial nematode species Dose n r
0.5 50 3
1 50 3
10 50 25
50 50 44
1st instar P. robinie larvae versus S. bibionis nematode species 200 30 30
0.5 40 1
1 40 1
10 40 24
50 40 36
200 40 40
We have the following data (source: Brian Forschler of the Department of Entomology, University of Kentucky):
3rd instar P. robinie larvae versus S. feltial nematode species Dose n r
0.5 30 12
1 30 19
5 30 29
10 30 30
3rd instar P. robinie larvae versus S. bibionis nematode species 50 30 30
6th instar P. robinie larvae versus S. feltial nematode species Dose n r
0.5 1 30 130 15 53
5 130 90
10 130 104
0.5 1 30 30 7 7
5 30 22
10 30 30
50 30 30
6th instar P. robinie larvae versus S. bibionis nematode species 50 130 120
0.5 1 70 60 4 7
5 70 18
10 60 35
50 60 47
200 25 24
Note: Problems can be assigned using the above data sets.
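The mechanics of the parallel-line logit fit of section 5.3 can be sketched as follows (Python). The pairing of the two nematode species is only an illustration of the computation, not the intended solution of the problem, and Berkson's 1/(2n) replacement is used for the 0% and 100% response groups.

import numpy as np

def emp_logit(r, n):
    p = np.where(r == 0, 1 / (2 * n), np.where(r == n, 1 - 1 / (2 * n), r / n))
    return np.log(p / (1 - p)), n * p * (1 - p)          # empirical logit and Berkson weight

def parallel_logit_potency(xT, nT, rT, xS, nS, rS):
    lT, wT = emp_logit(rT, nT); lS, wS = emp_logit(rS, nS)
    # design for l = aT*zT + aS*zS + b*x, minimising the pooled logit chi-square
    X = np.vstack([np.r_[np.ones_like(xT), np.zeros_like(xS)],
                   np.r_[np.zeros_like(xT), np.ones_like(xS)],
                   np.r_[xT, xS]]).T
    w = np.r_[wT, wS]; l = np.r_[lT, lS]; sw = np.sqrt(w)
    aT, aS, b = np.linalg.lstsq(X * sw[:, None], l * sw, rcond=None)[0]
    M = (aT - aS) / b                                     # horizontal distance on the log scale
    return aT, aS, b, M, 10 ** M                          # relative potency R = 10^M (log10 doses)

# illustration with the 1st-instar data (S. feltiae as "test", S. bibionis as "standard")
dose = np.array([0.5, 1.0, 10.0, 50.0, 200.0]); x = np.log10(dose)
out = parallel_logit_potency(x, np.array([50, 50, 50, 50, 30]), np.array([3, 3, 25, 44, 30]),
                             x, np.array([40, 40, 40, 40, 40]), np.array([1, 1, 24, 36, 40]))
print(out)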
5.4. An Optimal Property of the Logit and Probit Estimates
Taylor [1953] has shown that a class of estimates which, in particular, includes the logit and probit estimates, is regular and best asymptotically normal (RBAN) in the sense of Neyman [1949]. In the following discussion, we shall specialize all the results from the multinomial response to the Bernoulli response. There are k sequences of independent trials and each sequence consists of ni trials. A trial of the i-th sequence is Bernoulli with probabilities Pi and Qi such that Pi C Qi D 1. Let N D k1 ni and li D ni /N. Let q denote a vector of parameters, q1 , . . . , qm and Pi D fi q, Qi D 1 fi where fi are continuous having continuous partial derivatives up to the second order. We assume that fi q > 0. Further, q1 , . . . , qm are assumed to be functionally independent. That is
fj1 ,1 fj1 ,m
@fi
.. and 1 j1 , . . . . , jm k.
6D 0 where fi,j D .
@qj
f
fjm ,m jm ,1 Definition. DP, P0 is said to be a distance function of P D P1 , . . . , Pk and P D P01 , . . . , P0k if 0
(1) (2) (3)
DP, P D 0, DP, P > 0 for P 6D P0 , DP, P0 is continuous with continuous partial derivatives up to the second order.
Let p D p1 , . . . , pk denote the observed values of P D P1 , . . . , Pk . Let si p be an estimate of qi , i D 1, . . . , m. Let @DP, p/@qj D j q, p. Lemma (Werner Leimbacher) Let D(P,p) satisfy: (4)
@2 DP, p/@pi @ q1 jpDf D @1 /@pi jpDf D cli fi,1 /fi
(5)
@2 DP, p/@ qr @ qt jpDf D @t /@ qr jpDf D ckiD1 li
fit fir fi 1 fi
where c is a constant. Then the sj p which minimize D(P,p) are RBAN estimates of qj j D 1, . . . , m. Proof. Verify the sufficient conditions of Neyman [1949, theorem 2, p. 248]. We define a class of distance functions F as follows. Definition. D(P,p) is said to belong to F if it is a distance function and it further satisfies conditions (4) and (5) of the lemma. Now we are ready to give the main theorem of Taylor. Taylor’s Theorem [1953] If h(x) is strictly monotone in x for 0 < x < 1, having continuous derivatives up to the 3rd order, and if the function g(u,v) is positive for 0 < u, v < 1, has continuous partial derivatives up to the second order, and satisfies the condition gfi , fi D [@h/@xjxDfi ]2 /fi for all i, then †
D1 P, p D
k
ni gfi , pi [hpi hfi ]2
iD1
Cg1 fi , 1 pi [h1 pi h1 fi ]2
D D11 P, p C D12 P, p belongs to the class F (that is, the estimates si p, i D 1, . . . , m which minimize D1 (P,p) are RBAN estimates).
† For our applications, gfi , pi is a function of one variable only. However, there may be other applications in which g is a function of both the variables. Then we define gfi , pi D [@h/@xjx D fi ]2 /pi or [@h/@xjx D pi ]2 /fi
Proof. D1 P,p is a distance function, since h(x) is strictly monotone and continuous and has continuous derivatives. Furthermore, one can easily verify that (since D12 is an explicit function of qi D 1 pi and consequently @D12 /@pi D 0, i D 1, . . . , k fil @2 D1 /@pi @q1 jpDf D 2Nli , i D 1, . . . , k. fi A similar formula holds for @2 D1 /@pi @q2 for i D 1, . . . , k. Also we have @2 D1 /@qr @qt jpDf D 2N
k iD1
li
fir fit . fi 1 fi
Thus, D1 belongs to the class F. Corollary 1 If hx D h1 x for 0 < x < 1, then D1 P, p takes the simple form DŁi P, p D
k
ni [gfi , pi C g1 fi , 1 pi ][hpi hfi ]2 .
iD1
Special Cases Let q D a, b and assume that
fia fib
f1a f1b 6D 0 for 1 i, 1 k. As a special case we take fi a, b D [1 C expa bxi ]1
i D 1, . . . , k
Consider the logit c2 c2A D
k
ni pi 1 pi ln
iD1
pi fi ln 1 pi 1 fi
2
.
Corollary 2 Let Pi D fi a, b, i D 1, . . . , k where the fi are any functions of a and b which are continuous with continuous partial derivatives up to the second order and which satisfy that 0 < Pi < 1, and there is no functional relationship between a and b. Then c2A as given above is a member of F. Proof. Set hx D ln[x/1 x] in Corollary 1. Then it satisfies all the conditions and moreover hx D h1 x. Also @h/@xjxDfi D 1/fi 1 fi . Obviously pi 1 pi D
p2i 1 pi 2 p2 1 pi 2 C i . pi 1 pi
Also, gfi , pi D p2i 1 pi 2 /pi satisfies the regularity conditions imposed by Taylor’s theorem. It is positive and has continuous partial derivatives up to the second order and in particular 2 1 @h
D f2i 1 fi 2 /fi . gfi , fi D fi @x xDfi ˆ Substituting the expressions for h and g, c2A takes the form of DŁ1 P, p. Thus, ap ˆ and bp which minimize c2A will be RBAN estimates of a and b, respectively. Remark. It is not necessary to write gfi , pi as a function of two variables when one argument does not appear.
Special Case Probit estimate Pi D fi m, s D [xi m/s] where denotes the cdf of the standard normal variable. Define k ni Gfi , pi [1 pi 1 Pi ]2 , where c2B D iD1
2
@1 Pi
Gfi , pi D [pi 1 pi ] @Pi Pi Dpi 2
1
D [pi 1 pi ]1 @x
dx xD1 pi 1
D [pi 1 pi ]1 [f1 pi ]2 d x is the standard normal density. We will find estimates of m and dx s that minimize c2B . k 2 xi m @c2B /@m D ni Gfi , pi 1 pi D0 s iD1 s
where f D
@c2B /@s D
k 2 xi m xi ni Gfi , pi 1 pi D 0. s iD1 s s
These two equations can be written as k k k Ai C m Bi D Ci s iD1
s
A i xi C m
iD1
Bi x i D
1
Ci xi , where
Ai D li Gfi , pi 1 pi ,
Bi D li Gfi , pi
Ci D li Gfi , pi xi . Corollary 3 Let c2B D ni [pi 1 pi ]1 [f1 pi ]2 [1 pi 1 fi ]2 where fi m, s D [xi m/s]. Then c2B is a member of F and hence it follows that ˆ mp and sˆ p, which minimize c2B are RBAN estimates of m and s. Proof. We apply Corollary 1 and set hx D 1 x where 1 x D 1 1 x. Also Gfi , pi can be written as 2 f1 pi C 1 pi 1 [f1 1 pi ]2 p1 i
D gfi , pi C g1 fi , 1 pi where g(u,v) satisfies the conditions of Taylor’s theorem. Hence, c2B can be put in the form of DŁi P, p and thus c2B belongs to F. Concluding Remarks. If the ni are sufficiently large such that li are held constant, then the above method of estimating the parameters a, b or m, s is better than the lengthy iterative process of maximum likelihood estimation.
5.5. Confidence Bands for the Logistic Response Curve
One of the problems of interest in logit analysis is the interval estimation of the logistic response curve. Let Y denote the response variable and let x1 , . . . , xk denote explanatory variables. We assume that Y is a dichotomous variable taking the values 0 or 1. Let the expectation of Y, namely, P(X), be related to the set of explanatory variables, x1 , . . . , xk by PX D exp[lX]/f1 C exp[lX]g where
5.1
0
X D 1, x1 , . . . , xk
lX D b0 X, b D b0 , . . . , bk 0 . Alternatively, one can rewrite the model (1) in terms of P(X) as lX D lnfPX/[1 PX]g. The unknown parameters bi are estimated from the random sample of size N X1 , Y1 . . . XN , YN . When the number of explanatory variables is one (that is k D 1), Brand et al. [1973] give a method for obtaining a confidence band for the logistic response
function based on the large sample distribution of the maximum likelihood estimators of the logit model parameters. Hauck [1983] gives an alternative method of obtaining the logistic confidence band which follows closely the corresponding method for regression models. The latter method is computationally easier than the method of Brand et al.[1973]. In the following we present the method of Hauck [1983]. We assume that b is estimated using the method of maximum likelihood assuming that the X values are fixed and that the assumptions on the sequence X1 , . . . , XN , necessary for asymptotic normality of the maximum likelihood estimators are satisfied (see, for instance, Bradley and Gart [1962]). Then, for sufficiently large N, N1/2 bˆ b is k C 1-variate normal with mean 0 and variancecovariance matrix . Using this result, it follows that Nbˆ b0 1 bˆ b is asymptotically c2 with k C 1 degrees of freedom. One can estimate the varianceˆ namely, /N by J1 where the (i,j) element of J is given by covariance matrix of b, Jij D
@2 IbjX1 , . . . , XN jbDbˆ , @bi @bj
i, j D 0, . . . , k.
5.2
and IbjX1 , . . . , XN is the natural logarithm of the likelihood of b. Thus, using (2) we have for large N P[bˆ b0 Jbˆ b c2kC1,1a ] D 1 a
5.3
which defines the 1 a confidence ellipsoid for b. However, from Rao [1965, p. 43] it follows that for any fixed vector U and positive definite matrix A: U0 A1 U D sup x
U0 X2 U0 X2 ½ . X0 AX X0 AX
5.4
Now, letting U D bˆ b and A1 D J, we obtain bˆ b0 Jbˆ b ½ [bˆ b0 X]2 /[X0 J1 X] for any X.
5.5
Using (5.5) in (5.3) we have 1 a P [bˆ b0 X]2 /[X0 J1 X] c2kC1,1a , for all X D P[jbˆ b0 Xj c2kC1,1a X0 J1 X1/2 for all X]. Thus, a conservative 1 a confidence band for lX D b0 X is [lˆ L x, lˆ U x] D [bˆ 0 X c2kC1,1a X0 J1 X1/2 , bˆ 0 X C c2kC1,1a X0 J1 X1/2 ].
5.6
The corresponding confidence band for P(X) is given by taking the inverse logit transform of (5.6):
[P̂_L(X), P̂_U(X)] = [ e^{l̂_L(X)}/(1 + e^{l̂_L(X)}), e^{l̂_U(X)}/(1 + e^{l̂_U(X)}) ].   (5.7)
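A sketch of the band computation (Python with scipy). The numbers below are hypothetical stand-ins for β̂ and J^{−1} (the printed values in the example that follows have lost their signs in this reproduction), so the output only illustrates the mechanics of (5.6)–(5.7).

import numpy as np
from scipy.stats import chi2

def hauck_band(beta_hat, J_inv, X, alpha=0.05):
    # conservative 1 - alpha band for l(X) = beta'X and for P(X), following (5.6)-(5.7)
    X = np.atleast_2d(X)
    c = chi2.ppf(1 - alpha, df=len(beta_hat))             # chi-square_{k+1, 1-alpha}
    l_hat = X @ beta_hat
    half = np.sqrt(c * np.sum((X @ J_inv) * X, axis=1))   # sqrt{c * X' J^{-1} X}
    lo, hi = l_hat - half, l_hat + half
    return 1.0 / (1.0 + np.exp(-lo)), 1.0 / (1.0 + np.exp(-hi))

# hypothetical two-covariate fit (intercept, smoking indicator, centred triglyceride level)
beta = np.array([-2.3, 0.77, 0.002])
Jinv = np.array([[0.065, -0.048, -0.0002],
                 [-0.048, 0.098, 0.00004],
                 [-0.0002, 0.00004, 0.0000026]])
tg = np.linspace(-50, 250, 7)
X_smokers = np.column_stack([np.ones_like(tg), np.ones_like(tg), tg])
low, high = hauck_band(beta, Jinv, X_smokers)
print(np.round(low, 4), np.round(high, 4))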
Example Hauck [1983] applies his formula to the data from the Ontario Exercise Collaborative Study [Rechnitzer et al., 1975, 1982]. Here the explanatory variables are x1 D smoking status taking values 0 or 1with p D 0.010 x2 D serum triglyceride level 100.00 with p D 0.238 N D 341. The response is whether the subject had a MI (myocardial infarction) during the study period (4 years). The logit analysis results are bˆ 0 D 2.2791, 0.7682, 0.001952 and
0.06511
1
. J D 0.04828 0.09839
0.0001915 0.00003572 0.000002586 The author plots the two sets of confidence bands 50 and 95% for smokers x1 D 1 and for nonsmokers x1 D 0. The effect of smoking status is clearly visible as is the large uncertainty in the response curves at the extremes of the observed triglyceride values. 5.6. Weighted Least Squares Approach
Here, we provide a noniterative procedure for estimating the unknown parameters a and b. A special case of this has been dealt with in section 4.9. Let Pi D Fa C bxi , i D 1, 2, . . . , k. Let ni subjects be administered the dose xi , and ri denote the number responding at xi . Also, let pi D ri /ni and li D Gpi ,
Gp D F1 p since
dl D gpdp, where gp D G0 p . var l D fvar pgg2 p. Hence . s21 D fp1 p/ngg2 p D 1/Wp (say). For instance, in the logistic case Gy D ln[y/1 y] and gy D [y1 y]1 . Hence Wp D np1 p. In the probit case Gy D 1 y and gy D 1/j[1 y], and consequently Wp D nj2 [1 p]/p1 p D nj2 l/p1 p. Now, by the method of weighted least squares, we wish to find a and b for which k
Σ_{i=1}^{k} W_i (l_i − a − bx_i)²,   W_i = W(p_i),
is minimized. Let x̄ = ΣW_i x_i / ΣW_i and l̄ = ΣW_i l_i / ΣW_i. Then
b = ΣW_i (x_i − x̄)(l_i − l̄) / ΣW_i (x_i − x̄)² = [ΣW_i ΣW_i x_i l_i − ΣW_i l_i ΣW_i x_i] / [ΣW_i ΣW_i x_i² − (ΣW_i x_i)²]
and a = l̄ − b x̄. The estimate of log LD50 is −a/b and
b² SE²(a/b) = (ΣW_i)^{−1} + (l̄/b)² [ΣW_i (x_i − x̄)²]^{−1}.
Using the above expression, one can set up confidence intervals for log LD50 . Remark. Brown [1970] points out that in the logit case, the least squares estimator of log LD50 will be fully efficient for large samples and will be reasonable for small samples; Monte Carlo studies, carried out with squared error as the criterion, indicate that a and b are as good as the corresponding mles. However, a/b seem to be slightly inferior to its mle.
Let us apply the method of least squares to the assay of estrone by Biggers [1950] in 50% aqueous glycerol by the intravaginal route. The response is the appearance of cornified epithelial cells in vaginal smears taken at a specified time after estrogen administration.

Dose, 10^{-3} mg   log10 dose, x   Number of mice, n   Proportion responding, p   l = ln[p/(1 − p)]   W = np(1 − p)
0.2                −0.7            27                  0.15                       −1.73               3.44
0.4                −0.4            30                  0.43                       −0.28               7.35
0.8                −0.1            30                  0.60                        0.41               7.20
1.6                 0.2            30                  0.73                        0.99               5.91

Here ΣW_i = 23.90, ΣW_i x_i = −4.89, ΣW_i x_i² = 3.17, ΣW_i l_i = 0.79, ΣW_i x_i l_i = 5.87. Hence
x̄ = −0.20, l̄ = 0.03,
b = [23.9(5.87) − (−4.89)(0.79)] / [23.9(3.17) − (4.89)²] = 144.156/51.85 = 2.78,
a = 0.03 + 2.78(0.20) = 0.59,
log LD50 = −a/b = −0.21,   LD50 = 0.62.
b2 SE2 a/b D 0.04 C 0.00005 D 0.04. Hence, SEa/b D 0.07. 95% limits are antilog [log LD50 š 2SEa/b] D antilog 0.35, 0.07 D 0.45, 0.85.
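The weighted least squares recipe of this section, collected into one routine (Python; logit transform and Berkson weights as above). Applied to the estrone data it should reproduce the hand computation to rounding accuracy.

import math

def wls_logit(x, n, p):
    # weighted least squares on empirical logits (section 5.6):
    # l = ln[p/(1 - p)], weight W = n p (1 - p); returns a, b, log LD50 = -a/b, SE(a/b)
    l = [math.log(pi / (1 - pi)) for pi in p]
    W = [ni * pi * (1 - pi) for ni, pi in zip(n, p)]
    Sw = sum(W)
    xbar = sum(wi * xi for wi, xi in zip(W, x)) / Sw
    lbar = sum(wi * li for wi, li in zip(W, l)) / Sw
    Sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(W, x))
    b = sum(wi * (xi - xbar) * (li - lbar) for wi, xi, li in zip(W, x, l)) / Sxx
    a = lbar - b * xbar
    se = math.sqrt((1 / Sw + (lbar / b) ** 2 / Sxx) / b ** 2)   # SE of a/b, as displayed above
    return a, b, -a / b, se

# estrone assay of Biggers [1950]: x = log10 dose, n mice per group, p responding
x = [-0.7, -0.4, -0.1, 0.2]
n = [27, 30, 30, 30]
p = [0.15, 0.43, 0.60, 0.73]
a, b, logLD50, se = wls_logit(x, n, p)
print(a, b, logLD50, 10 ** logLD50, se)   # roughly 0.59, 2.78, -0.21, LD50 ~ 0.6, SE ~ 0.07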
5.7. Nonparametric Estimators of Relative Potency
In a parallel bioassay, if Fx is the dose response curve associated with the standard preparation and Fx C q is the dose response curve associated with the test preparation (when the doses are on a logarithmic scale), then q is the relative potency of the test preparation. Cobb and Church [1990] discuss methods of estimating q using quantal responses when the form of F is unknown. In the following we provide some of these methods. Let us assume that F is continuous and strictly increasing. Let x1 < x2 < Ð Ð Ð < xk be the dose levels selected for the assay of the standard preparation. Let ri be the number of positive responses out of ni Bernoulli trials at dose level xi with Eri D ni Pi where Pi D Fxi (i D 1, . . . , k). Let y1 < Ð Ð Ð < yl be the chosen dose levels for the test preparation and let si denote the number of positive responses out of mi trials at dose level yi (i D 1, . . . , l). If we can choose constants a and b such that 1 < a < x1 < Ð Ð Ð < xk < b < 1 then the Spearman-Karber (SK) estimator of m D
∫_a^b x dF(x) is given by
I = Σ_{i=0}^{k} 0.5(x_i + x_{i+1})(p_{i+1} − p_i),   (5.8)
where x0 D a, xkC1 D b, p0 D 0, pkC1 D 1 and pi D ri /ni (i D 1, . . . , k). I is unbiased for the quadrature QD
Σ_{i=0}^{k} 0.5(x_i + x_{i+1})(P_{i+1} − P_i),   (5.9)
which is the trapezoidal approximation to m. For our problem, if I1 and I2 denote the SK estimators of the means of Fx and Fx C q, respectively, then an estimate of the shift parameter q is qˆ D I1 I2 . qˆ can also be interpreted as an estimator of the horizontal distance between two parallel dose-response curves. Using this interpretation, Morton [1981] proposes generalized Spearman (G-S) estimators of shift in terms of weighted average of horizontal distances between estimated doseresponse curves. For example, if GP is any distribution function on [0,1], then the SK estimator of the mean of the composite distribution function GFx is IŁ D
Σ_{i=0}^{k} 0.5(x_i + x_{i+1})[G(p_{i+1}) − G(p_i)].   (5.10)
Since q is also the shift parameter for the family GFx C q, one can estimate q by IŁ1 IŁ2 . Morton [1981] provides various choices for G including the derivation of an “asymptotically optimal” G that minimizes the asymptotic variance, given the form of F. For example, for the binomial data, the derivative of the “optimal” G is gopt P D constantffF1 P/[P1 P]g
5.11
where f denotes the derivative of F. Note that the unbiased estimators pi of a monotone sequences of probabilities Pi , need not be monotone. So, one can consider replacing the pi ’s in (5.8) and (5.10) by some smoothed sequence of estimates. One way is to use the distribution-free maximum likelihood (DFML) estimates of ordered probabilities P1 , . . . , Pk derived by Ayer, Brunk et al. [1955]. when the dose span is constant and ni n, Church and Cobb [1973] have shown that the SK estimator is unchanged. How accurate the GS estimator (5.10) is depends on an informed choice of dose levels, and the estimator may perform poorly when GPk GP1 is much less than unity. If the smoothed vectors of observed proportions q11 , . . . , q1k and q21 , . . . , q2c for standard and test preparation assays, respectively, are such that [q11 , q1k ] and [q21 , q2c ] have non-null intersection, then modified SK estimates of shift can be constructed using the two polygonal curves with vertices x1 , q11 , . . . , xk , q1k and y1 , q21 , . . . , yc , q2c , respectively. The polygonalized DFML estimate of Fx C q was used by Hamilton [1979] to define the trimmed Spearman-Karber (TSK) estimate of a median (ED50 ) of a symmetric dose-response curve based on quantal responses at one set of dose levels. The difference of two such estimates constitutes an estimate of the shift parameter. For a specified ‘trimming constant’ A (0 A 1/2), this estimate is the mean of the polygonalized DFML estimate of the dose response curve truncated at the Ath and 1 Ath quantiles and renormalized to become a cumulative distribution function. Hamilton [1979] imposes the additional restriction that the DFML estimates p1 and pk satisfy p1 A and pk ½ 1 A in order that this estimate be calculable. He infers that the TSK estimators of ED50 can be relatively efficient and robust in certain one-sample problems. The modified TSK estimators of ED50 proposed by James et al. [1984] lead to shift estimators that perform well under a different set of assumptions. In the following, we describe some of these estimators and examine their performances via the simulation study carried out by Cobb and Church [1990]. Nonparametric Maximum Likelihood (NPML) The likelihood function which is to be maximized is Lq D
k " [Fxi ]ri [1 Fxi ]nri [Fyi C q]si [1 Fyi C q]nsi . iD1
For a specified q, Lq is maximized as a function of F by the DFML estimate of the pooled sample, and moreover, Lq is constant on an interval for which the ordering
of x1 , . . . , xk , y1 C q, . . . , yk C q remains the same. The authors claim that it can be shown that Lq is maximized at some q among the finite set of values which we can choose them to be the midpoints of the intervals where Lq is constant, or the values which are 0.5d distant from the finite end of a semi-infinite interval where Lq is constant. The point estimate for q is taken to be the value from the preceding set that maximizes Lq, or in the case of ties, the average of such values. Trimmed Spearman-Karber Estimate (TSK) For a single set of dose levels x1 , . . . , xk with DFML estimates q1 , . . . , qk adjoin dose levels x0 D x1 d and xkC1 D xk C d with associated values q0 D 0 and ˆ be the polygonalized estimate of the dose-response curve with qkC1 D 1. Let Fx vertices x0 , q0 , . . . , xkC1 , 1. For some constant A (0 A < 1/2 set #b
ˆ xdFx
HA D 1 2A1 a
with a D Fˆ 1 A and b D Fˆ 1 1 A. Using the same value for the standard and test preparation designs, we determine HA1 and HA2 and define the TSK100A estimate to be qˆ D HA1 HA2 . Generalized Spearman Estimate (GS) Given an increasing function G, with G0 D 0 and G1 D 1, and extrapolated dose levels x0 D x1 d, xkC1 D xk C d, let [see Morton, 1981] I1 D xk C 0.5d d
k
Gpi
5.12
1
where pi D ri /ni , and define I2 analogously for the test preparation sample. Then qˆ D I1 I2 . If GP D P, then I1 and I2 are the usual SK estimators. If the pi ’s are replaced by the DFML estimates, the corresponding estimator is called a smoothed GS estimator. If G is distributed uniformly on A, 1 A, 0 < A < 1/2, the GS estimator is a trimmed type of estimator of the mean of F. The corresponding estimator of the shift is called a trimmed generalized Spearman 100A (GST100A) estimator. One can use either the unbiased pi ’s or the DFML estimators. The smoothed GST and the STK estimators of shift (with the same value of A) are very similar; however, the GST defining formula utilizes the approximations of the integrals that define TSK. Quadrature Unbiased GS Estimate (GSU) Mantel [1983] expressed his concern over the fact that Gpi is biased for GPi when G is nonlinear. In reply, Morton pointed out that when G is a polynomial of degree no larger than n, then unbiased estimators for GPi exist and can be used in the place of Gpi in (5.10).
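A sketch of the Spearman–Karber and generalized Spearman shift estimators for equally spaced log doses (Python; the two response vectors are artificial and constructed so that the test curve is the standard curve shifted by one dose step). G equal to the identity gives the SK estimator; G(P) = 3P² − 2P³ is the weight used for GS2 in the comparison described below.

import numpy as np

def gs_estimate(x, p, G=lambda u: u):
    # generalized Spearman estimate (5.12) of the mean of G(F(x)); G = identity gives SK
    d = x[1] - x[0]                      # assumes equally spaced log doses
    return x[-1] + 0.5 * d - d * np.sum(G(p))

def shift_estimate(x, pS, pT, G=lambda u: u):
    # theta_hat = I_1 - I_2: horizontal shift between standard and test response curves
    return gs_estimate(x, pS, G) - gs_estimate(x, pT, G)

x = np.linspace(-2.0, 2.0, 9)
pS = np.array([0.0, 0.05, 0.10, 0.30, 0.50, 0.70, 0.90, 0.95, 1.0])   # standard
pT = np.array([0.05, 0.10, 0.30, 0.50, 0.70, 0.90, 0.95, 1.0, 1.0])   # test, shifted by one step
print(shift_estimate(x, pS, pT))                 # close to the true shift of one dose step (0.5)
G2 = lambda u: 3 * u**2 - 2 * u**3               # weight G(P) = 3P^2 - 2P^3 (the GS2 estimator)
print(shift_estimate(x, pS, pT, G2))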
Matched Truncation Spearman Estimator (MTS) If Ux and Vx denote the polygonalized DFML estimates of Fx and Fx C q based on the responses at the x’s and y’s, respectively, let A D minfUx1 , Vy1 g and B D maxfUxk , Vyk g with Ux0 D Vy0 D A, UxkC1 D VykC1 D B. Then qˆ D J2 J1 /B A where J1 D
!b
zdUz, a D x0 , b D xkC1 and J2 is defined in an analogous fashion for
a
the second set of responses. Comparison of Estimates Cobb and Church [1990] carry out an extensive simulation study with two designs using equally spaced dose levels and equal number of trials per dose level (ni n): (a) (b)
k D 5, xi D yi D i 3, i D 2, . . . , 5, n D 8 trials per dose; k D 10, xi D yi D 0.5i 2.75, i D 1, . . . , 10, n D 4 trials per dose.
Both sets of dose levels are symmetrical about zero; the dose range for (a) is 4 and for (b) is 4.5. For each of the designs (a) and (b), a computation/simulation study was carried out for each of two location/scale models, namely the logistic (having Fx D 1 C ex 1 for the standard distribution function) and type I extreme value (having Fx D 1 expex for the standard distribution). For each model the scale parameter b and the location parameter q were selected in such a way that P1 D 0.1 and Pk D 0.9 in design (a) where Pi D Fbxi C q, i D 1, . . . , k. The resulting values of q, b are (0, 1.0986) and (0.9184, 0.7711) for the logistic and extreme value models, respectively. GS1 utilizes a weight function G which is asymptotically optimal for the extreme value distribution. GS2 employs the weight GP D 3p2 2p3 . GS2 U is the quadrature unbiased version of GS2 ; i.e., GP is estimated by 3M2 2M3 , where M2 and M3 are the unbiased moment estimators for P2 and P3 , respectively. The asymptotically optimal G for the logistic distribution is GP D P which can be considered to be the “weight” function for the SK, so that the optimal G for the logistic distribution is the SK, so that the optimal G for the logistic distribution is also included. Note that since the form of F is assumed to be unknown in the shift estimation problem, optimal weight functions cannot be determined in practice; they are used here only to help us in comparing performances of general methods. Another estimator considered by the authors is W D lSK C 1 lNPML which is a weighted average of SK and NPML where l D 1 C 0.02nk1 . Finally, parametric maximum likelihood (PML) estimators were considered for both models in order to compare the shift estimators for the parametric and nonparametric approaches. The PML estimates are obtained by assuming parallel dose response curves
of known form (either logistic or extreme value) with three unknown parameters q1 , q2 and b D b1 D b2 for each model. The maximization of the log likelihood function can be achieved by using Newton-Raphson iteration. The PML shift estimate is the difference of the MLE’s, qˆ 1 qˆ 2 . Based on the 800 simulated data sets per estimated design pair, the authors draw the following conclusions: (1) (2) (3)
(4) (5) (6)
Among the Spearman-type estimators, the TSK10, GST10 and GS2 have the lowest average RMSE’s (root mean square errors) which are almost the same. Smoothing provides little or no improvement. The NPML estimator has very little bias. However, its variance exceeds that of the SK-type estimators. The weighted average W was designed to combine the small bias of NPML with the small variance of SK so as to obtain an RMSE smaller than either of the two estimators. Keeping the dose levels fixed and increasing n from these levels yield smaller improvement over NPML and larger improvements over SK. The average RMSE for the PML estimator is more than 30% higher than for SK or for GS2 , and almost 20% higher than for W. For the design cases studied by the authors, the nonparametric estimators of shift outperform the parametric ones, even when the parametric model chosen is the correct one.
Interval Estimation Eight estimators, SK, GS2 , TSK10, TSK20, GST10, GST20, NPML and W were selected for an interval estimation simulation study. The intervals are of the form qˆ 2SE, qˆ C 2SE. 1000 replications of the interval were made. The following conclusions were drawn by the authors. (1) (2)
(3) (4)
5
The percentage coverage probability values of GST20 and TSK20 are close to the nominal ones. The SK method is recommended only when P1 and Pk are closely matched for the two experiments. Further, all are less sensitive than SK to P1 , Pk matching, and all performed better than SK. TSK20 confidence intervals are the shortest with near nominal coverage probability. Results of this study suggest that weighted averages of NPML and TSK estimators are worth considering.
Other Methods of Estimating the Parameters
112
6 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
The Angular Response Curve and Other Topics
Wilson and Worcester [1943d] have considered the bioassay on a general curve and in particular, the angular response curve.
6.1. Estimation by the Method of Maximum Likelihood
Let the response to dose level x be given by the general function, namely P D [1 C Fa C bx]/2 D [1 C Fy]/2
6.1
where F is some given function and y D a C bx. For the logistic (or growth curve) Fy D tanhy/2 and for the angular transformation Fy D sin y, p/2 y p/2. Assuming that at dose level xi , ri experimental units out of ni respond to the dose i D 1, . . . , k where ri has a binomial distribution with parameters ni and Pi , the log likelihood of a and b is ln Ci C ri ln1 C Fi C ni ri ln1 Fi N ln 2 6.2 LD where Ci D nr i , Fi x D Fa C bxi and N D n1 C Ð Ð Ð C nk . Letting f D F0 D @F/ i @y and f0 D @f/@y and noting that @y/@a D 1 and @y/@b D x we have @L/@a D 2ri fi 1 F2i 1 ni fi 1 Fi 1 D 0 @L/@b D 2ri xi fi 1 F2i 1 ni xi fi 1 Fi 1 D 0. 6.3
The above equations can be explicitly solved only when the number of dose levels is two, in which case we have, from the definition of F, F1 2Pi 1 D a C bxi
i D 1, 2.
113
Consequently, b D [F1 2P2 1 F1 2P1 1]/x2 x1 and a D F1 2P1 1 x1 x2 x1 1 [F1 2P2 1 F1 2P1 1] D [x2 F1 2P1 1 x1 F1 2P2 1]/x2 x1 . Now, estimates a and b for a and b, respectively can be obtained by replacing Pi by their estimates pi D ri /ni i D 1, 2 in the above equations. Special cases can be obtained by setting F1 u D 2 tanh1 u and F1 u D sin1 u for the logistic and angular cases, respectively. The mle values of a and b can be obtained from (6.3) by successive iterations. We need the matrix of second derivatives @2 L/@a2 D 2ri f0i 1 F2i 1 C 4ri f2i Fi 1 F2i 2 ni f0i 1 Fi 1 ni f2i 1 Fi 2 . After noting that Eri D 1 C Fi ni /2, we have ni f2i 1 F2i 1 . E@2 L/@a2 D Similarly, one can obtain E@2 L/@a@b D ni xi f2i 1 F2i 1 and ni x2i f2i 1 F2i 1 . E@2 L/@b2 D Let D D det. of the matrix of the expectation of second derivatives. Let si D F1 2pi 1,
pi D ri /ni i D 1, . . ..
One can fit a provisional line for the set of points xi , Si i D 1, . . . , k and let a0 , b0 be the initial values of the intercept and slope. Let Si0 D F1 2Pˆ i,0 1 where si0 corresponds to the ordinate at xi on the provisional line. @L/@a D 0 implies that ni pi Pˆ i fi 1 F2i 1 D 0 [Fi D Fˆsi D 2Pˆ i 1, sˆ i D a C bxi ] and @L/@b D 0 implies that ni pi Pˆ i xi fi 1 F2i 1 D 0. For the angular transformation fs/[1 F2 s] D sec .s. Also write pi Pˆ i D pi Pˆ i0 C Pˆ i0 Pˆ i and . Pˆ i0 Pˆ i D 12 [Fsi0 Fˆsi ] D 12 si0 sˆ i fsi0 D 12 da C xi dbfsi0 ; substituting these relations in the likelihood equations, we obtain, since Fs D sin s, ni pi Pˆ i0 sec si0 D 12 ni da C xi db, ni pi Pˆ i0 xi sec si0 D 12 ni da C xi dbxi
6
The Angular Response Curve and Other Topics
114
where s b0 xi . Thus, i0 D a0 C n n x 2 n p Pˆ i0 sec si0 i i i da D i i or ni xi ni x2i db 2 ni pi Pˆ i0 xi sec si0 da ni x2i ni xi 2 n p Pˆ i0 sec si0 D 1 i i . db D ni 2 ni pi Pˆ i0 xi sec si0 ni ni xi x2 , x D ni xi ni . where D D Notice that iteration here should be easy since the inverse of the information matrix can be evaluated once and for all, and only the right side vector needs to ˆ D 1 1 sin s, be evaluated at each iteration. Since Pˆ D 12 1 C sin s, 1 Pˆ D Q 2 1 ˆ ˆ 1/2 1 ˆ ˆ 1/2 . Thus sec si0 D 2 Pi0 Qi0 . sec s D 2 PQ
6.2. Alternative Method of Estimation
First-order Taylor’s expansion gives ˆ i 1/2 pi Pˆ i D si sˆ i Pˆ i Q Substituting this in the likelihood equations we readily obtain ni si sˆi D 0, and ni xi si sˆi D 0, where sˆ i D a C bxi Hence bD ni xi xsi s ni xi x2 a D s bx, where xD ni xi /N, s D ni si /N,
ND
ni .
Notice that the latter method does not require iteration. In either case
1 . s2b D ni xi x2 , since var si D n1 i D 1, . . . , k i (which follows from the method of differentials: cos si dsi D 2dpi ). 1 x2 and C N ni xi x2 ni xi N ni xi x2 . cova, b D s2a D
Hence, an estimate of the variance of c D a/b is
s2c D b2 [var b C c2 var a 2c cova, b] D b2 s2a0 C c x2 s2b , a0 D a C bx 2 1 x c b2 . D C N ni xi x2
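A sketch of the noniterative angular-transformation estimator of section 6.2 (Python; the dose–response counts are artificial). It returns a, b, the estimated log ED50 c = −a/b, and s_c computed from the variance expression above, using var(s_i) ≈ 1/n_i.

import math

def angular_fit(x, n, r):
    # s_i = arcsin(2 p_i - 1); weighted least squares of s on x without iteration
    s = [math.asin(2 * ri / ni - 1) for ri, ni in zip(r, n)]
    N = sum(n)
    xbar = sum(ni * xi for ni, xi in zip(n, x)) / N
    sbar = sum(ni * si for ni, si in zip(n, s)) / N
    Sxx = sum(ni * (xi - xbar) ** 2 for ni, xi in zip(n, x))
    b = sum(ni * (xi - xbar) * (si - sbar) for ni, xi, si in zip(n, x, s)) / Sxx
    a = sbar - b * xbar
    c = -a / b                                     # log ED50 estimate
    s2c = (1 / N + (c - xbar) ** 2 / Sxx) / b ** 2
    return a, b, c, math.sqrt(s2c)

# hypothetical assay: log10 doses with 30 subjects per dose
x = [-0.6, -0.3, 0.0, 0.3, 0.6]
n = [30] * 5
r = [3, 9, 16, 24, 28]
print(angular_fit(x, n, r))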
6.3. Comparison of Various Methods
There are three methods that are currently used in practice: (1) the probit method, (2) the logit method and (3) the angular transformation method. In the range of interest for the dosage, they all look alike and it does not matter which method you use. It seems to be a matter of taste. The logit method seems to be natural and easy to use. The minimum logit method is elegant. It is possible that one could vary the linear effect function from a C bx to a C bx C gx2 or to a polynomial function.
6.4. Other Models
Other functions considered in the literature are the rectangular and the one suggested by Wilson and Worcester [1943b] dP D bfa C bx dx,
where
fy D 12 1 < y < 1, or
fy D
1 1 2
2 3/2
Cy
and zero elsewhere
1 < y < 1.
The latter implies that y . Py D 12 1 C 1 C y2 1/2 Some investigators, for instance, Bartlett [1936] and Bliss [1937] suggested the transformation P D sin2 q0 q p/2. Since dp D 2 sin q cos qdq where p denotes the sample proportion responding to the treatment, 1 var p P1 P D , if q is expressed in radians, D 4nP1 P 4n 4 sin2 q cos2 q 180 2 1 821 D D , if q is expressed in degrees. p 4n n
var qˆ ³
When the above transformation is used in the bioassay problems, qˆ as an estimate of a C bx is biased. Anscombe [1956] has suggested the following modification in order to overcome this bias. He suggests to get r C 1/4 1/2 ˆq D sin1 , instead of qˆ D sin1 fr/n1/2 g. n C 1/2 Then, one can show (after expanding the right side in powers of nP) Eqˆ D a C bx C 0n2 ,
var qˆ ¾ D 4n1
1/2
ˆ ' P Q/2nPQ1/2 b1 q where b1 q denotes the skewness.
6
The Angular Response Curve and Other Topics
116
6.5. Comparison of Maximum Likelihood (ML) and Minimum c2 Estimates (MCS)
For some simple logit [Berkson, 1955] and probit [Berkson, 1957a] models, the MCS estimates had smaller mean square errors (MSE) than the ML estimates. Approximations to the MSEs of the two estimates to the order n2 (where n denotes the average number of observations per cell) were derived by Amemiya [1980] for the logit model and found that the second-order approximation to the MSE of the MCS estimates was smaller than the corresponding ML approximation in every example presented. Although the MCS estimates seem to be superior to the ML estimates with respect to the quadratic loss function, there is no theoretical justification. However, it should be noted that asymptotically MCS and ML estimates are equivalent. Smith et al. [1984] have carried out Monte Carlo studies in order to study the small-sample behavior of maximum likelihood and minimum c2 estimates in a simple dichotomous logit regression model. Statistics designed to test regression coefficients converge slowly to normality for designs in which doses are placed asymmetrically about the ED50 . Designs with doses symmetric about the ED50 can be used with confidence at moderate sample sizes. There is some evidence that the ML is preferable to MCS when statistical inferences are to be made with the logit model. Amemiya [1980] shows that the minimum chi-square estimator is superior to the MLE for a much broader class of models than those considered by Berkson. However, the difference between Berkson’s estimator and the bias-corrected MLE is mostly negligible. Berkson [1957] after evaluating the exact MSE’s of the MLE and the minimum probit chi-square estimator shows the superiority of the latter over the MLE. Amemiya [1980] has derived the mean squared error matrices to the order n2 of these estimators for the probit model. Fountain and C.R. Rao [1993] investigate Berkson’s original example of a bioassay experiment in which there are 3 dose levels, x1 , x2 , x3 , such that x1 D x2 1 and x3 D x2 C 1 and n 10. The sufficient statistics takes the form of T1 , T2 D ri , r3 r1 . Throughout they set a D 0 and b D log7/3. Let the logistic response probabilities at x1 , x2 , x3 be p1 , p2 and p3 , respectively. They generate different examples by specifying p2 D 0.5, 0.6, 0.7, 0.8 which determine the values of x2 . They generate four more examples imposing the monotonicity condition r1 r2 r3 . They consider seven estimators of a and b, namely, (a) (b) (c) (d) (e)
˜ ˜ b, minimum logit chi-square (denoted by a, ˜ ˜ 1 , T2 ), ˜ 1 , T2 , b˜ MN D EbjT Rao-Blackwellized a˜ and b (namely a˜ MN D EajT conditional median versions of the Rao-Blackwellized estimators (namely ˜ 1 , T2 ]) ˜ 1 , T2 ] and b˜ MD D Median[bjT a˜ MD D Median[ajT ˜ ˜ b to be denoted by a˜ U and b˜ U (see Fountain Amemiya’s [1980] bias corrected a, et al. [1991], for details regarding calculations), ˆ ) maximum likelihood estimators to be denoted by aˆ and b,
Comparison of Maximum Likelihood (ML) and Minimum c2 Estimates (MCS)
117
(f) (g)
partial bias corrected MLE’s to be denoted by aˆ M and bˆ M , bias corrected MLE’s to be denoted by aˆ U and bˆ U .
They use two criteria, namely mean-squared error and Pitman’s measure of closeness (PMC) defined by PMC qˆ 1 , qˆ 2 D Pqˆ 1 q < qˆ 2 q for a pair of estimators qˆ 1 and qˆ 2 of q. They draw the following conclusions: (a) (b) (c) (d) (e)
Among all the estimators of a, the clear winner, based on either MSE or PMC, ˆ was the worst. is aˆ U , the bias corrected MLE. By either criteria, aMLE With respect to MSE, the best estimator of b was bˆ U , the bias-corrected MLE. With PMC, however, the best estimator in 6 out of 8 examples was bˆ MN (see ii). The application of the monotonicity condition had little effect on the performance of the best and worst estimators. Berkson is justified in conjecturing that the minimum chi square estimator is preferable to the MLE. The modifications of the minimum chi-square and the MLE seem to outperform either of the unmodified versions. Further, the “best” need to be qualified with respect to the criterion of MSE or PMC.
6.6. More on Probit Analysis
Fitting a probit regression line by eye is illustrated with the results of an assay on a species of nematodes. The empirical probits are obtained as
Φ⁻¹(pᵢ) = (xᵢ − μ)/σ,  xᵢ = log(doseᵢ),  i = 1, ..., 6.
The first column suggests that the natural mortality of the P. robiniae larvae is zero. Hence there is no need to make any adjustment for natural mortality. When the empirical probits are plotted against the xᵢ, they lie nearly on a straight line, and such a line can be drawn by eye. From this line, probits corresponding to different values of x are read and are given in the last row of table 6.1. They are converted back to percentages by the standard normal table. From figure 6.1 we see that a probit value of 0.0 corresponds to x = 1.0, which is the estimate of log LD50. Thus, LD50 is estimated as 10.0. In order to estimate the slope, we note that a change of 0.7 in x results in an increase of 0.8 in the probit. Thus, the estimated regression coefficient of probit on log dose, or the rate of increase of probit per unit increase in x, is
Table 6.1. Data of 6th instar P. robiniae larvae versus S. bibionis nematode species (source: Brian Forschler of the Department of Entomology, University of Kentucky)
Fig. 6.1.
b = 0.8/0.7 = 1.14. Also b = 1/σ, so σ̂ = 1/1.14 = 0.875. Thus, the relation between probit and log dose is y = 1.14(x − 1.0).
Now we compute the expected probits using the above linear relation and test the goodness of fit of the model.

x       y        P = Φ(y)   n    r    nP      r − nP   (r − nP)²/[nP(1 − P)]
−0.30   −1.482   0.069      70   4    4.83    −0.83    0.15
0       −1.140   0.127      60   7    7.62    −0.62    0.06
0.70    −0.342   0.367      70   18   25.69   −7.69    3.64
1.0     0.00     0.500      60   35   30.00   5.00     1.67
1.70    0.798    0.787      60   47   42.22   4.78     2.54
2.30    1.482    0.931      25   24   23.28   0.72     0.32

χ²₄ = Σ_{i=1}^{6} (rᵢ − nᵢPᵢ)²/[nᵢPᵢ(1 − Pᵢ)] = 8.38.

P-value = P(χ²₄ > 8.38) ≈ 0.08. Hence the probit regression line is adequate. Notice that the number of degrees of freedom is 4, since we have estimated two parameters, namely μ and σ. Recall that for the probit
P = (σ√(2π))⁻¹ ∫_{−∞}^{x} exp{−(t − μ)²/(2σ²)} dt.
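A short sketch of this goodness-of-fit computation, assuming the eye-fitted line y = 1.14(x − 1.0) from above. Because the expected frequencies in the text were read from the graph by eye, the statistic computed from the exact normal CDF will not reproduce the hand value 8.38 exactly.

import numpy as np
from scipy.stats import norm, chi2

# Data of table 6.1: log dose x, number treated n, number responding r.
x = np.array([-0.30, 0.0, 0.70, 1.0, 1.70, 2.30])
n = np.array([70, 60, 70, 60, 60, 25])
r = np.array([4, 7, 18, 35, 47, 24])

# Probit line fitted by eye: y = 1.14*(x - 1.0); expected response probability P = Phi(y).
y = 1.14 * (x - 1.0)
P = norm.cdf(y)

expected = n * P
chisq = np.sum((r - expected) ** 2 / (n * P * (1 - P)))
df = len(x) - 2                     # two parameters (mu, sigma) were estimated
pval = chi2.sf(chisq, df)
print(chisq, df, pval)              # chi-square statistic, 4 d.f., and its P-value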
If y = a + bx = b(x + a/b), then μ = −a/b and σ = 1/b. Hence
∂P(y)/∂y = φ(y),  ∂P/∂a = φ(y)  and  ∂P/∂b = xφ(y).
6.7. Linear Logistic Model in 2 × 2 Contingency Tables¹
Let Pᵢ = [1 + exp(−x′ᵢβ)]⁻¹ and λᵢ = ln[Pᵢ/(1 − Pᵢ)], where xᵢ = (xᵢ₁, ..., xᵢₛ)′ and β = (β₁, ..., βₛ)′, s ≥ 2.
¹ Cox [1970] served as a source for parts of the material in sections 6.7 and 6.8.
Consider a 2 × 2 contingency table in which there are two different values for the probability of success, namely P₁ and P₂, corresponding to the two groups of observations. The associated linear logistic model is
λ₁ = α,  λ₂ = α + Δ,
where Δ denotes the difference between the two groups on the logistic scale:
Δ = λ₂ − λ₁ = ln{P₂(1 − P₁)/[P₁(1 − P₂)]}.
exp(Δ) is the ratio of the odds of success versus failure in the two groups. Among all criteria for studying the difference between groups, the measure Δ seems to be valid under a broader range of configurations of the parameters P₁ and P₂ than d = P₂ − P₁. This is clear because, with a fixed Δ, α can vary arbitrarily, so that we have a class of situations in which the overall proportion of success is arbitrary; however, a specified difference d is consistent with only a limited range of individual values for P₁ and P₂. It should be noted that Berkson [1958], Sheps [1958, 1981] and Feinstein [1973] are critical of taking the ratio of the rates as a measure of association. They point out that the level of the rates is lost. For further details see, for instance, Fleiss [1981, chapter 6].
Let us consider the following case. For each member of a population there are two binary variables U and W. Thus, there are four types of individuals, corresponding to (U = 0, W = 0), (U = 1, W = 0), (U = 0, W = 1) and (U = W = 1), the corresponding probabilities being p₀₀, p₁₀, p₀₁, p₁₁, respectively. For instance,
U = 1 if he is a nonsmoker, 0 if he is a smoker;  W = 1 if he has cancer, 0 if he has no cancer.
Suppose that we are interested in comparing the probabilities that W = 1 for the two groups of individuals for which U = 0 and U = 1. There are at least three sampling procedures by which data can be generated in order to study this problem.
(1) First, a random sample may be obtained from the entire population, which enables us to estimate any function of the pᵢⱼ, where pᵢⱼ is the probability of the (i, j)-th cell. In particular,
P(W = 1|U = 0) = p₀₁/(p₀₀ + p₀₁)  and  P(W = 1|U = 1) = p₁₁/(p₁₀ + p₁₁),
and the difference of the logistic transforms of these is
ln(p₁₁/p₁₀) − ln(p₀₁/p₀₀) = ln[p₁₁p₀₀/(p₁₀p₀₁)] = ln[P(W|U)/P(W̄|U)] − ln[P(W|Ū)/P(W̄|Ū)],
if U = 0 [W = 0] is denoted by Ū [W̄].
(2) We can draw two random samples, one from each of the subpopulations for which U = 0 and U = 1, respectively, the sample sizes being fixed and having no relation to the marginal probabilities p₀₀ + p₀₁ and p₁₀ + p₁₁. Suppose we take
equal sample sizes and observe the values of W in a prospective study (in which one of the populations is defined by the presence and the second by the absence of a suspected previous factor). Individual pᵢⱼ values cannot be estimated, and we can estimate only functions of the conditional probabilities of W given U = 0 and U = 1 — in particular, the logistic difference given above.
(3) Thirdly, we can draw random samples from the subpopulations for which W = 0 and W = 1. Thus, we might take equal subsample sizes, constituting a retrospective study (in which one of the two populations is defined by the presence and the second by the absence of the outcome under study). Here we find the value of U for each patient who has cancer and for each who does not have cancer. In this study we can estimate functions of
p₁₀/(p₀₀ + p₁₀)  and  p₁₁/(p₀₁ + p₁₁).
Now the logistic difference of these is the same as that obtained in scheme (1). Thus, the logistic difference is the same for both the prospective and retrospective studies. Instead, if we wish to estimate the difference
P(W = 1|U = 0) − P(W = 1|U = 1) = p₀₁(p₀₀ + p₀₁)⁻¹ − p₁₁(p₁₀ + p₁₁)⁻¹,
this can be done from scheme (1) but not from scheme (3).
Let Rⱼ denote the number responding at dose level j among nⱼ subjects, where the probability of responding is Pⱼ. Then recall that we noted from Anscombe [1956] that
Zⱼ = ln[(Rⱼ + ½)/(nⱼ − Rⱼ + ½)]
is nearly unbiased for λⱼ = ln[Pⱼ/(1 − Pⱼ)], and
Vⱼ = (nⱼ + 1)(nⱼ + 2)/[nⱼ(Rⱼ + 1)(nⱼ − Rⱼ + 1)]
is also nearly unbiased for the variance of Zⱼ [Gart and Zweifel, 1967]. In a 2 × 2 contingency table, Z₂ − Z₁ is an estimate of the logistic difference Δ and has a standard error of approximately (V₁ + V₂)^{1/2}. Hence, using the approximate normality of the distribution of Z₂ − Z₁, one can test H₀: Δ = 0 and set up confidence intervals for Δ. Since the logistic model is saturated (i.e. there are as many independent logistic parameters as there are independent binomial probabilities), the unweighted combination Z₂ − Z₁ is the unique estimate of Δ from the least-squares analysis.
Example 2
Data from Sokal and Rohlf [1981]. A sample of 111 mice was divided into two groups: 57 that received a standard dose of pathogenic bacteria followed by an antiserum, and a control group of 54 that
received the bacteria but no antiserum. After the incubation period and the time for the disease to run its course had elapsed, it was found that 38 mice were dead and 73 had survived the disease. Among the mice that died, 13 had received bacteria and antiserum while 25 had received bacteria only. It is of interest to find out whether the antiserum had in any way protected the mice. The data are displayed in the following 2 × 2 table:

         Bacteria and antiserum   Bacteria only   Total
Dead     13                       25              38
Alive    44                       29              73
Total    57                       54              111
Proportion of the mice that received bacteria and antiserum = 0.514; proportion of the mice that received bacteria only = 0.486.
Z₂ = ln(44/13) = 1.219,  V₂ = 0.095,
Z₁ = ln(29/25) = 0.148,  V₁ = 0.073,
Z₂ − Z₁ = 1.071, which has mean Δ and standard deviation (0.095 + 0.073)^{1/2} = 0.41. We reject H₀: Δ = 0 against H₁: Δ ≠ 0 at any significance level > 0.009. The 95% confidence interval for Δ is (0.267, 1.875).
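A small sketch of this calculation using the Anscombe transform and the Gart–Zweifel variance estimate quoted above. The text computes Z from the uncorrected ratios 44/13 and 29/25, so the numbers below (which use the +½ correction) differ slightly from the worked example.

import numpy as np
from scipy.stats import norm

def empirical_logit(r, n):
    # Anscombe's nearly unbiased estimate of ln[P/(1-P)] and the
    # Gart-Zweifel nearly unbiased estimate of its variance.
    z = np.log((r + 0.5) / (n - r + 0.5))
    v = (n + 1) * (n + 2) / (n * (r + 1) * (n - r + 1))
    return z, v

# survivors / group size
z2, v2 = empirical_logit(44, 57)    # bacteria + antiserum
z1, v1 = empirical_logit(29, 54)    # bacteria only

delta = z2 - z1
se = np.sqrt(v1 + v2)
p_two_sided = 2 * norm.sf(abs(delta / se))
ci = (delta - 1.96 * se, delta + 1.96 * se)
print(delta, se, p_two_sided, ci)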
6.8. Comparison of Several 2 × 2 Contingency Models
If we wish to combine the information from several 2 × 2 tables by a weighted mean, then Cox [1970, pp. 78–79] suggests that we use the weighted transforms given by
Zⱼ^w = ln[(Rⱼ − ½)/(nⱼ − Rⱼ − ½)]  and  Vⱼ^w = (nⱼ − 1)/[Rⱼ(nⱼ − Rⱼ)].
For the j-th 2 × 2 configuration, we consider
Δ̃ⱼ^w = ln[(Rⱼ₂ − ½)/(nⱼ₂ − Rⱼ₂ − ½)] − ln[(Rⱼ₁ − ½)/(nⱼ₁ − Rⱼ₁ − ½)]
with an associated variance
Vⱼ^w = (nⱼ₂ − 1)/[Rⱼ₂(nⱼ₂ − Rⱼ₂)] + (nⱼ₁ − 1)/[Rⱼ₁(nⱼ₁ − Rⱼ₁)].
If the logistic effect is the same for all k 2 × 2 tables, the weighted least-squares estimate of Δ is the weighted mean of the separate estimates, obtained by minimizing Σⱼ(Δ̃ⱼ − Δ)²/Vⱼ with respect to Δ:
Δ̃^w = Σⱼ(Δ̃ⱼ^w/Vⱼ^w) / Σⱼ(1/Vⱼ^w),
having the approximate variance [Σⱼ(1/Vⱼ^w)]⁻¹. Note that Δ̃^w is not a function of the sufficient statistic for the problem². If the logistic effects Δⱼ are the same, then the residual Sⱼ^w = [Δ̃ⱼ^w − Δ̃^w]/[Vⱼ^w]^{1/2} is approximately a standard normal variable. Thus, under H₀,
Σ_{j=1}^{k} [Δ̃ⱼ^w − Δ̃^w]²/Vⱼ^w ≈ χ²_{k−1}.
Further, the ranked values of Sⱼ^w can be plotted against the expected standard normal order statistics in a sample of size k. Other alternative procedures for combining information from several 2 × 2 tables have been dealt with in chapter 10 of Fleiss [1981]. For point and interval estimation of the common odds ratio from several 2 × 2 tables, the reader is referred to Gart [1970].
Example 3
Consider the data in Example 2 and some other fictitious data pertaining to the immunity provided to the mice by the antiserum.

Batch   Bacteria and antiserum    Bacteria only
        total      alive          total      alive
1       57         44             54         29
2       50         38             50         26
3       60         45             60         31
4       40         29             40         21
² If Y₁, ..., Yₙ are independent Bernoulli random variables with probability of success for Yᵢ being Pᵢ = [1 + exp(−x′ᵢβ)]⁻¹, then the likelihood function can be written as exp(Σ_{j=1}^{s} βⱼTⱼ) / ∏_{i=1}^{n} [1 + exp(x′ᵢβ)], where Tⱼ = Σ_{i=1}^{n} xᵢⱼYᵢ. Thus (T₁, ..., Tₛ) is the sufficient statistic.
Logistic analysis of the above data:

Batch   Δ̃ⱼ^w     Vⱼ^w     1/Vⱼ^w    (1/Vⱼ^w)^{1/2}   Residual Sⱼ^w   Logistic sum¹ Zⱼ^w
1       1.096    0.171    5.848     2.418            0.160           4.64
2       1.043    0.186    5.376     2.319            0.030           4.058
3       1.053    0.156    6.410     2.532            0.058           4.139
4       0.896    0.225    4.444     2.108            −0.282          3.822

¹ Sum of the weighted logistic transforms for bacteria and antiserum and for bacteria only, namely Zⱼ₁^w(Vⱼ₁^w)^{−1/2} + Zⱼ₂^w(Vⱼ₂^w)^{−1/2}.

Σ_{j=1}^{4} 1/Vⱼ^w = 22.078,  weighted mean Δ̃^w = 22.748/22.078 = 1.030,
Σ_{j=1}^{4} (Sⱼ^w)² = 0.109 (not significant).
Also, Δ̃^w(22.078)^{1/2} = 4.84 (highly significant). From the preceding analysis, we draw the following conclusions. The data strongly suggest that there are no significant differences between the batches. Thus, they can be combined to obtain a pooled estimate of Δ. When the ranked residuals are plotted against the corresponding expected normal order statistics, the points are almost collinear. The plot of the differences of the logistic transforms against their sums (see the last column) shows that batch No. 4 not only has a small difference, but also a small sum. We surmise that the effect of the antiserum is highly significant. Remark. The above data can also be analysed using the likelihood of the observations as functions of the differences Δ₁, ..., Δ_k (Δ is defined at the beginning of section 6.7). This likelihood approach was taken by Cornfield [1956] for studying the association between smoking and lung cancer.
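A sketch of the weighted combination just carried out, using the four batches above; it reproduces the per-batch transforms Δ̃ⱼ^w and Vⱼ^w, the pooled estimate, the heterogeneity chi-square, and the overall standardized effect.

import numpy as np

# (total, alive) per batch, for bacteria + antiserum and for bacteria only
treated = np.array([[57, 44], [50, 38], [60, 45], [40, 29]], dtype=float)
control = np.array([[54, 29], [50, 26], [60, 31], [40, 21]], dtype=float)

def wlogit(n, r):
    # weighted logistic transform and its variance, as in Cox [1970]
    z = np.log((r - 0.5) / (n - r - 0.5))
    v = (n - 1.0) / (r * (n - r))
    return z, v

z2, v2 = wlogit(treated[:, 0], treated[:, 1])
z1, v1 = wlogit(control[:, 0], control[:, 1])
delta = z2 - z1                  # per-batch logit difference (approx. 1.10, 1.04, 1.05, 0.90)
weight = 1.0 / (v1 + v2)

pooled = np.sum(weight * delta) / np.sum(weight)      # approx. 1.03
residuals = (delta - pooled) * np.sqrt(weight)
heterogeneity = np.sum(residuals ** 2)                # compare with chi-square on k-1 d.f.
z_overall = pooled * np.sqrt(np.sum(weight))          # approx. 4.8, highly significant
print(pooled, heterogeneity, z_overall)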
6.9. Dose Response Models with Covariates
In several practical situations, the probability of a response depends not only on the dose level, but also on certain characteristics (i.e., covariates) of the experimental unit, such as sex, age, etc. Precision of the estimates of the parameters may be diminished if this dependence of response on covariates is ignored when modeling the effects of the dose. Wijesinha and Piantadosi [1995] consider a dose-response model with covariates and test for the covariate effects. Let h(x, a) denote a general dose-response function, where x denotes the dose and a = (a₁, ..., aₛ)′ is a vector
of fixed (unknown) parameters and hx, a is the success rate as a function of the dose level. We assume that hx, a behaves like a cumulative distribution function. In some situations, like cancer study, the property that the response curve assumes the asymptotic upper limit of 1 may not be satisfied. Then, one can overcome this difficulty by considering the four-parameter dose response curve discussed by Batkowsky and Rudy [1986] which allows an asymptotic upper limit of less than one. Also, assume that the success rate depends on the covariate effects individually through the parameters ak (k D 1, . . . , s. That is, if Z D z1 , . . . , zp is a vector of covariate values (either continuous or discrete), we can express the dependence of ai on Z as a regression of the form ak D gk Z, gk , k D 1, . . . , s, where gk is the vector of regression coefficients associated with the kth model and the gk need not be the same for all k. Letting g D g01 , . . . , g0s 0 . We can write the final parametric model as px, Z, g D hx, g1 Z, g1 , . . . , gs Z, gs where px, Z, g is the success rate as a function of the dose level and the covariate effects. Let xi denote the ith dose level and yij and zij denote the binary response and the covariate vector, respectively, of the jth unit at the ith dose level (j D 1, . . . , ni and i D 1, . . . , k). Then the log likelihood function of the observed data can be written as ni k " " pyij 1 p1yij lY, g D log iD1 jD1
where p D pxi , Zij , g, Y is a N ð 1 vector of binary responses and N D kiD1 ni . 1 MLE of g can be determined and I g, the inverse of the information matrix can be calculated and the large sample properties of the MLE’s can be utilized.
Example
Let h(x, a) = {1 + exp[−a₁(x − a₂)]}⁻¹, where a₁ denotes the slope and a₂ denotes the median effective dose, and a = (a₁, a₂)′, where it is assumed that a₁ and a₂ are both positive. Hence, let
a₁ = exp(γ′₁Z)  and  a₂ = exp(γ′₂Z),
where γ₁ and γ₂ are p × 1 vectors of regression coefficients. For instance, suppose the parameters depend on the sex of the experimental unit. Then Z = (1, z)′, where z is an indicator variable denoting sex (female = 0, male = 1). Also, γ₁ = (b₀, b₁)′ and γ₂ = (a₀, a₁)′, where exp(b₀) and exp(b₀ + b₁) represent the slopes for females and males, while exp(a₀) and exp(a₀ + a₁) denote the median effective doses for females and males. The final parametric model is given by
p(x, Z, γ) = [1 + exp{−e^(b₀+b₁z)(x − e^(a₀+a₁z))}]⁻¹,
where γ = (γ′₁, γ′₂)′.
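A sketch of fitting this parametric model by maximum likelihood. The data, the starting values, and all function names below are illustrative assumptions; only the parametrization (slope and ED50 each exponentiated regressions on the sex covariate) follows the example above.

import numpy as np
from scipy.optimize import minimize

def success_prob(x, z, gamma):
    # slope = exp(b0 + b1*z), ED50 = exp(a0 + a1*z); z = 0 female, z = 1 male
    b0, b1, a0, a1 = gamma
    slope = np.exp(b0 + b1 * z)
    ed50 = np.exp(a0 + a1 * z)
    return 1.0 / (1.0 + np.exp(-slope * (x - ed50)))

def negloglik(gamma, x, z, y):
    p = np.clip(success_prob(x, z, gamma), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical experiment: 8 dose levels, 25 units each, random sex indicator.
rng = np.random.default_rng(0)
x = np.repeat(np.linspace(0.5, 4.0, 8), 25)
z = rng.integers(0, 2, size=x.size)
true_gamma = np.array([0.3, 0.4, 0.6, 0.3])
y = (rng.random(x.size) < success_prob(x, z, true_gamma)).astype(float)

fit = minimize(negloglik, x0=np.zeros(4), args=(x, z, y), method="Nelder-Mead",
               options={"maxiter": 5000})
print(fit.x)       # estimates of (b0, b1, a0, a1)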
The author’s second approach is a semi-parametric model in which it is assumed that the logit ratio (the difference between the logit of success rate at the covariate level and that of the baseline level) can be expressed as a regression model in the covariate effects and the dose level. However, it is beyond the scope of this book to deal with semi-parametric models. The authors conduct a simulation study in order to compare these two approaches. They find that in the first approach (i.e., the parametric model) the difference in intercepts seems to be poorly estimated. This can be attributed to the fact that the model is not being parameterized to estimate this quantity efficiently and that it must be calculated from the estimates of the parameter values. On the other hand, the estimate of the slope of the difference from the parametric model has a smaller bias and standard deviation than its counterpart from the semi-parametric model. Finally, the authors consider data from a hypothetical dose response experiment in which the smoking status and the age of the experimental units are the covariates. They analyze the data (which is too long to be reproduced here) using both the models. Here, a1 smoke,age D expfa0 C b1 smk C b2 ageg a2 smoke,age D expfb0 C a1 smk C a2 ageg, where the indicator variable smk (smoker D 1, non-smoker D 0) represents the smoking status and the continuous variable age denotes the age of the subject. The final parametric model is px, smk, age D [1 C expfa1 x a2 g]1 . Using the large-sample properties of the MLE’s, one can test the significance of each parameter based on the Wald statistic. Based on the p-values, the authors surmise that age has no significant effect on the dose-response relationship, whereas the slope and the median effect dose are significantly different for the smokers and non-smokers, indicating that the dose response curves are not the same for the two groups.
7 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
Estimation of Points on the Quantal Response Function
Example Suppose a number of plastic pipes of specified length are subjected to impacts of energies. Several levels of the response may be noted. In particular, we could record whether or not a brittle failure has happened. Let Y(x) denote the response to dose level x and assume that Y(x) takes the value 0 or 1; P[Yx D 1] D Px where P(x) increases with x. We will be interested in finding xp for which Pxp D p. So far we have been dealing with known forms for P(x). Here we assume that P(x) is unknown. This problem is closely related to the Robbins-Monro (R-M) process which is slightly more general and in the following we will review some aspects of the R-M process.
7.1. Robbins-Monro Process
Given a random response Y(x) at x having E[Y(x)] = M(x), we wish to estimate θ such that M(θ) = α. Robbins and Monro [1951] suggest the following sequential procedure. Guess an initial value x₁ and let yᵣ(xᵣ) denote the response at xᵣ (r = 1, 2, ...). Then choose x_{n+1} by the formula
x_{n+1} = xₙ − aₙ[yₙ(xₙ) − α],  n = 1, 2, ...,    (7.1)
where aᵣ, r = 1, 2, ..., is a decreasing sequence of positive constants and aₙ tends to zero. If we stop after n iterations, then x_{n+1} is the estimate of θ. As a special case, let α = 0; then the R-M procedure becomes
x_{n+1} = xₙ − aₙyₙ(xₙ).    (7.2)
Notice that x_{n+1} < xₙ if yₙ(xₙ) > 0 and x_{n+1} > xₙ if yₙ(xₙ) < 0. An appropriate choice for aₙ is c/n, where c is chosen optimally in some sense that will subsequently
be made clear. It is not unreasonable to assume that
M(x) > 0 for all x > θ  and  M(x) < 0 for all x < θ.
Derman and Sacks [1959] and Sacks [1958] studied the convergence of xₙ, the rate of convergence, the choice of aₙ and the asymptotic distribution of xₙ. Sacks [1958] has shown that when aₙ = c/n, the asymptotic distribution of √n(xₙ − θ) is normal (0, σ²c²/(2cβ − 1)), where β = M′(θ), provided cβ > 1/2, and
σ² = var(Y(x)) = E[Y(x) − M(x)]²  for all x.
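A minimal sketch of the Robbins–Monro recursion (7.1) with aₙ = c/n for a quantal response. The response model (standard normal tolerances, so the ED50 is 0), the constant c and the starting value are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def robbins_monro(alpha, c, x1, n_steps, respond, rng):
    # x_{n+1} = x_n - (c/n) * (y_n(x_n) - alpha)
    x = x1
    for n in range(1, n_steps + 1):
        y = respond(x, rng)           # binary (0/1) response observed at level x
        x = x - (c / n) * (y - alpha)
    return x

rng = np.random.default_rng(1)
respond = lambda x, rng: float(rng.random() < norm.cdf(x))   # M(x) = Phi(x)

estimate = robbins_monro(alpha=0.5, c=2.0, x1=2.0, n_steps=200,
                         respond=respond, rng=rng)
print(estimate)     # should be near 0, the true ED50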
For a stochastic approximation, we need the following definitions. Types of convergence: let X₁, X₂, ... be a sequence of random variables.
(1) Mean square convergence: Xₙ is said to converge to θ in quadratic mean if lim_{n→∞} E(Xₙ − θ)² = 0. (This is also called weak convergence.)
(2) Convergence in probability: for any ε > 0, lim_{n→∞} P(|Xₙ − θ| > ε) = 0.
(3) Strong convergence: P(lim_{n→∞} Xₙ = θ) = 1.
Note that 3 ) 2, 1 ) 2. Definition Stochastic approximation is concerned with procedures or schemes which converge to some value that is sought, when the observations are subject to error due to the stochastic nature of the problem. Interesting schemes are those that are selfcorrecting, namely those in which an error always tends to disappear in the limit and in which the convergence to the destined value is of some specified nature (one of the three types mentioned above). In bioassay, sensitivity testing or in fatigue trials, the statistician is often interested in estimating a given quantile of a distribution function on the basis of some data, each datum being of the 0–1 type. Example Let Fx D P (a metallic specimen will fracture if subjected to x cycles in a fatigue trial). A specimen tested in such a way will represent an observation and takes either 0 or 1. Then the problem of interest would be to estimate the number of cycles
x such that for a given α, F(x) = α. Probit analysis and up and down procedures depend upon the functional form of F. For real x, let Y(x) denote the response to an experiment carried out at a controlled level x, having the unknown distribution function H(y|x) and regression function
M(x) = E[Y(x)|x] = ∫_{−∞}^{∞} y dH(y|x).
No knowledge of M(x) and H(y|x) is assumed.
7.2. Robbins and Monro Procedure
Let a be a given real number. We wish to estimate the root of the equation, Mx D a, where we assume the existence of a unique root. One can use Newton’s method provided the form of the function M is known. Kiefer-Wolfowitz Process. We wish to find q such that M(q) is a maximum.
Properties of Interest.
(1) To find a recursion for x₁, ..., xₙ and to show convergence of xₙ, that is, to show that lim_{n→∞} E(xₙ − θ)² = 0,
(2) P(lim_{n→∞} xₙ = θ) = 1,
(3) asymptotic normality of xₙ or M(xₙ),
(4) confidence intervals for θ,
(5) an optimal stopping rule.
Robbins and Monro [1951] proved the following theorems.
Theorem 1
Let {aₙ} be a sequence of positive constants such that
0 < Σ_{n=1}^{∞} a²ₙ < ∞  and  Σ_{n=2}^{∞} [aₙ/(a₁ + ··· + a_{n−1})] = ∞,    (7.3)
which is implied by the condition Σ_{n=1}^{∞} aₙ = ∞,
P(|Y(x)| ≤ c) = ∫_{−c}^{c} dH(y|x) = 1    (7.4)
for all x and some finite c, and, for some δ > 0, M(x) ≤ α − δ for x < θ and M(x) ≥ α + δ for x > θ. Then the sequence
x_{n+1} = xₙ + aₙ(α − yₙ)
converges in quadratic mean and hence in probability, i.e.,
lim_{n→∞} bₙ = lim_{n→∞} E(xₙ − θ)² = 0  and  xₙ → θ in probability.
Theorem 2 (Special Case of Theorem 1)
If (1) and (2) of Theorem 1 hold and M(x) is nondecreasing, M(θ) = α, M′(θ) > 0, then lim_{n→∞} bₙ = 0.
Example Let F(x) be a distribution function such that Fq D a0 < a < 1, F0 q > 0, and fZn g be a sequence of independent random variables such that PZn x D Fx. We are not allowed to know the values of Zn , but for each n we are free to specify a value xn and we observe only the values fyn g where 1 if Zn xn ‘response’ yn D 0 if Zn > xn ‘no response’. Here Mx D Fx and other conditions are satisfied. Robbins and Monro [1951] feel that the assumption of boundedness of Y(x) with probability 1 for all x is somewhat too strong. Wolfowitz [1952] has shown the convergence of xn to q in probability under weaker conditions. Blum [1954] provided the least restrictive conditions for the convergence of xn to q with probability 1. 7.2.1. Blum’s Conditions
|M(x)| ≤ c + d|x| for some c, d ≥ 0    (7.5)
σ²(x) = ∫_{−∞}^{∞} [y − M(x)]² dH(y|x) ≤ σ² < ∞    (7.6)
M(x) < α when x < θ and M(x) > α when x > θ    (7.7)
inf_{δ₁ ≤ |x−θ| ≤ δ₂} |M(x) − α| > 0 for every δ₁, δ₂ > 0.    (7.8)
Under the above assumptions Blum [1954] showed that P(lim_{n→∞} xₙ = θ) = 1.
Notice that assumption (4) allows the possibility of Mx ! a as jxj ! 1 and in such a case one would expect that there might be a positive probability of jxn j converging to 1. Dvoretzky [1956] has shown that under the conditions of Blum [1954] xn converges in the mean to q. Derman [1956] points out that
using Kolmogorov’s inequality one can show that (since s2x s2 < 1) 1
an D 1,
a2n < 1 and
aj [yj Mxj ]
jD1
converges with probability 1 and consequently xnC1
n
aj [a Mxj ] D x1
jD1
since
n
aj [yj Mxj ]
jD1 n
aj yj a D
jD1
n
xj xjC1 D x1 xnC1
jD1
converges with probability 1. 7.2.2. Asymptotic Normality
Chung [1954], Hodges and Lehmann [1956], and Sacks [1958] have considered the asymptotic normality of xn . Under certain regularity assumptions, Chung [1954] has shown that n1/2 xn q is asymptotically normal [0, s2 c2 /2a1 c 1] when nan ! c and where a1 D M0 q > 0, c > 2K1 and K inf[Mx a]/x q. Hodges and Lehmann [1956] recommend taking c D 1/a1 since it minimizes s2 c2 /2a1 c 1, the minimum value being s2 /a21 . (In practice one has to guess the value of a1 .) Since K inf[Mx a]/x q where the infimum can be restricted to the value jx qj A, where A is an arbitrarily small number and since M0 q D a1 > 0, it suffices to require that c > 1/2a1 . Hodges and Lehmann [1956] also show that the condition lim Mx/x > 0 as jxj ! 1 is not necessary. Also, in practice we might be tempted to use a ‘safe’ small prior estimate for a1 and hence a correspondingly large c in order to avoid the possibility of c 1/2a1 . In this case the estimates would have unknown behavior. In the following we shall use the alternative approach of Hodges and Lehmann [1956] which works for all values of c > 0 and provides measures of precision for finite n. 7.2.3. Linear Approximation
Here the actual model is replaced by a linear model for which the exact error variances can be evaluated. We assume that Mx D a C bx q and Vx D varYjx D t2 , where b and t are known constants. We might take b D a1 and t2 D s2 [D Vq]. The justification for such an approximation is that M(x) is linear in the vicinity where the xn is likely to fall. Also without loss of generality we can set a D q D 0. That is, Mx D bx and var[Yx] s2 . Since xnC1 D xnC1 xn , yn D xn an yn xn , we have ExnC1 jxn D 1 an bxn ,
and hence by iteration
E(x_{n+1}) = x₁ ∏_{r=1}^{n} (1 − aᵣβ),
where x₁ denotes the initial value. The bias in x_{n+1} would be zero if x₁ = 0. Also, squaring x_{n+1} and taking conditional expectations, we obtain
E(x²_{n+1}|xₙ) = (1 − βaₙ)²x²ₙ + a²ₙσ² = (1 − βaₙ)²(1 − βa_{n−1})²x²_{n−1} + σ²[a²ₙ + a²_{n−1}(1 − βaₙ)²].
Hence
E(x²_{n+1}) = x²₁ ∏_{r=1}^{n} (1 − βaᵣ)² + σ² Σ_{r=1}^{n} a²ᵣ ∏_{s=r+1}^{n} (1 − βaₛ)².
Now, set aₙ = c/n and obtain
E(x_{n+1}) = x₁ fₙ(cβ)  and  E(x²_{n+1}) = x²₁ f²ₙ(cβ) + (σ²/β²) ψₙ(cβ),
where
fₙ(z) = ∏_{r=1}^{n} (1 − z/r) = Γ(n + 1 − z)/[Γ(1 − z) n!]
and
ψₙ(z) = z² Σ_{r=1}^{n} r⁻² ∏_{s=r+1}^{n} (1 − z/s)² = [z² Σ_{r=1}^{n} (r!)²/{r² Γ²(r + 1 − z)}] Γ²(n + 1 − z)/(n!)²,  z = cβ.
Wetherill [1966, p. 148] has provided more details for getting asymptotic approximations to fn z and yn z which will be given below. The first term in the expression for Ex2nC1 is due to the bias in xnC1 which vanishes when cb is an integer and cb n. Asymptotically fn z D 1 z/n1/2 /nz 1 z. Thus, the contribution of the bias term to the mean squared error of xnC1 is of the order 0n2cb . The evaluation of the second term yn t is somewhat complicated. Applying Stirling’s formula, we have n C 1 z 2 . D 1 z/nn z2z D [nn z2z1 ]1 . n C 1 which can be used in the summation part of yn z. Let n n n r C 1 2 2 2 Sz D r D r rr z2z1 D r z2z1 /r. r C 1 z rD1 rD1 rD1
If z = 1/2, the last summation is asymptotically equivalent to ln n. For z > 1/2,
S(z) ≈ ∫₁ⁿ (r − z)^{2z−1} r⁻¹ dr ≈ ∫₁ⁿ [(r − z)^{2z−2} − z(r − z)^{2z−3} + ···] dr ≈ (n − z)^{2z−1}/(2z − 1).
Thus
E(x²_{n+1}) ≈ σ² logₑ n/(4β²n)  if cβ = 1/2,
E(x²_{n+1}) ≈ σ²c²/[n(2cβ − 1)]  if cβ > 1/2.
Thus, when cb > 1/2, the mean squared error of xn is 0(n1 ) and it coincides with the asymptotic variance obtained by Chung [1954] and Sacks [1958] for the quasi-linear case, namely Mx D a C bx q C ojb qj. Sacks [1958] proved the asymptotic normality of (xn q), under very general conditions, having mean zero and variance s2 c2 /n2cb 1 with an D c/n and b D M0 q when 2bc > 1. For c D 0.2 (0.2) 0.8 and 1.2, n D 5 (5) 30; nfn c are tabulated by Hodges and Lehmann [1956] and for c D 0.2 (0.2) 0.8, 1.2 (0.4) 2.0 and 3.0, n D 5 (5) 30, 1, the value of nyn c is also tabulated. Also note that nfn 1 D 0, while nf2n 1/2 ! 1/p D 0.318 due to Wallis’ product. For n ½ 30, one can use fn z D f30 z30.5n1 C 21 2z . Also we have the recursion formula yn z D z/n2 C [1 z/n]2 yn1 z. Using a quadratic interpolation formula on nyn z against 1/n at the values 1/n D 0, 1/2, 1 one obtains n3 yn z/z2 D n 1n 22z 11 C 22 z2 n 1 C n. The shortcoming of the linear model is that we do not know how approximately linear M(x) is and how nearly constant V(x) is, where Vx D varYxjx.
7.3. Parametric Estimation
Although the R-M procedure is completely nonparametric since it does not assume any form for M(x) or for Hyjx, in several cases, especially in quantal response situations, Hyjx is known (is Bernoulli) except for the value of a real parameter g. We can reparametrize in such a way that g is the parameter to be estimated. Let E[Yx] D Mg x, varYxjx D Vg x and let these functions satisfy the conditions imposed earlier. Since g determines the model, q is a function of a and g. We further assume that there is a 1-1 correspondence between q and g so that there exists a function h such that g D ha q. Then we may use xn as the R-M
estimate of q and obtain the estimate of g as ha xn . Now the problem is choosing an (and perhaps a also) in order to minimize the asymptotic normal variance of the estimate of g. One can easily show that n1/2 [ha xn g] is asymptotically normal with mean 0 and variance [h0a q]2 s2 c2 /2a1 c 1. Example Consider the quantal response problem in which Yx D 0 or 1 with P[Yx D 1] D Mg x and Vg x D Mg x[1 Mg x]. By choosing 0 < a < 1 we estimate q by means of the R-M procedure, which yields a sequence of estimates xn such that n1/2 xn q ³ normal (0, [Vg qc2 /2a1 c 1]). Let @Mg x/@x D M0g x,
@Mg /@g D MŁg x.
For given a, the best value of c D lim nan is [M0g q]1 . With this c, the asymptotic variance of n1/2 ha xn is [h0a q]2 s2 q/[M0g q]2 D a1 a/[MŁg q]2 since s2 q D Vg q D a1 a and by differentiating the identity Mha q q D a w.r.t. q, we obtain h0a q D M0g q/MŁg q. Now the value of a, which minimizes a1 a/[MŁg q]2 , is independent of g provided that MŁg q factors into a function of q g or qg[like MŁg x D rsgtx]. For example, we can take P[Yx D 1] D Mg x D F[x g C F1 b]
for some 0 < b < 1,
where F is a distribution function. Now g can be interpreted as the dose x for which the probability of response is b. That is, g D LD100b (lethal dose 100 b). Then the formula for the asymptotic variance becomes a1 a/fF0 [F1 a]g2 since Mg q D a implies F[q g C F1 b] D a which implies F1 a D q g C F1 b, and MŁg q D F0 [q g C F1 b]. The asymptotic variance is independent of b since the problem is invariant under location shifts. Now the value of a that minimizes the asymptotic variance is a D 1/2 when F is normal or logistic. Suppose we want to estimate g D LD100b . Then we do not need the parametric model since we may set a D b and g D q and thus estimate g directly from xn via the R-M scheme. The advantage of this method is that it assumes very little about the form of F, the disadvantage may be a significant loss of efficiency, especially when b is not near 1/2. Example Suppose we wish to estimate the mean bacterial density g of a liquid by the dilution method. For a volume x of the liquid, let Yx D 1 if the number of bacteria is 1 or more. Then P[Yx D 1] D Mg x D 1 expgx under the Poisson model. Then MŁg q D q expgq D 1 a ln1 a/g since 1 expgq D a. Thus, the asymptotic variance becomes [a/1 a ln2 1 a]g2 and whatever be g, this is
minimized by minimizing the first factor. The best a is the solution of the equation 2a D ln1 a or a D 0.797. Thus, the recommended procedure is: carry out the R-M scheme with a D 0.797 and an D 4.93/gˆ n [since 1/c D a1 D M0g q D g1 a] where gˆ is our prior estimate of g. Our estimate for g after n steps is 2a/xnC1 D 1.594/xnC1 [since g D ha q D ln1 a/q] and the asymptotic variance is g2 /4a1 a D 1.544g2 since 2a D ln1 a. Block [1957] considers replication at each xi and replacing yi by yni when ni is the subsample size at xi . The rationale behind this is that it may be cheaper to observe a set of observations at a dose level xi rather than the same number of observations at different dose levels. Maximum Likelihood Estimate of q Let fx1 , . . . , xn denote the joint density of x1 , . . . , xn . If yn D Mxn C en D a C a1 xn q C en and if the ei are normal 0, s2 , then the joint density function of x1 , . . . , xn and y1 , . . . , yn is p 1 n 2 yi a a1 xi q fx1 , . . . , xn 2ps exp 2 2s where s2 is known and q is to be estimated. The mle of q is qˆ D x a1 1 y a which is distributed normally q, s2 /a21 n (first condition on x1 , . . . , xn and then uncondition). Thus, the R-M estimator with the optimal choice of c is asymptotically efficient (see section 7.2.2.). Venter [1967] proposed a modification of the Robbins-Monro (R-M) procedure of which the asymptotic variance is the minimum even when M0 q is unknown. He replaces the constant c in the R-M procedure by a sequence which converges to [M0 q]1 . Although Venter’s [1967] modification requires two observations at each stage, the asymptotic variance of Venter’s modification based on n (total) observations with M0 q unknown is the same as that of the R-M procedure under Sack’s [1958] assumptions with n observations with M0 q known. Rizzardi [1985] has shown that the original and Venter’s modification of R-M estimator are locally asymptotically minimax, for a class of regression functions that includes distribution functions in the case where the response is a 0–1 variable. Rizzardi [1985] studied some stochastic approximation procedures for estimating the EDp , namely, the effective dosage level to obtain a preassigned probability p of response when the response curve is assumed to be logistic and the response is a 0–1 variable. He compares the estimators by their expected squared error loss. In particular, when the tolerence distribution is logistic, and the parameter of interest is ED50 , namely the median of the distribution, Rizzardi [1985], based on numerical computations with sample sizes n D 10, 20, infers that the expected squared error loss of the R-M procedure and Venter’s modification are very close to the asymptotic variance. He also studied the problem of estimating the difference of the medians of two logistic distributions when the scale parameters are (1) equal and unknown and (2) unequal and unknown.
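A sketch of the dilution-method scheme described earlier in this section, with α = 0.797 and aₙ = 4.93/(γ̂n), where γ̂ is the prior estimate of γ. The true density, the prior guess, and the choice of starting level (the guessed root of M(x) = α) are illustrative assumptions.

import numpy as np

def dilution_rm(gamma_true, gamma_guess, n_steps, rng):
    # R-M scheme for P[Y(x) = 1] = 1 - exp(-gamma * x), alpha = 0.797;
    # after n steps gamma is estimated by 1.594 / x_{n+1}.
    alpha = 0.797
    x = 1.594 / gamma_guess          # start at the guessed root of M(x) = alpha
    for n in range(1, n_steps + 1):
        y = float(rng.random() < 1.0 - np.exp(-gamma_true * x))
        x = x - (4.93 / (gamma_guess * n)) * (y - alpha)
    return 1.594 / x

rng = np.random.default_rng(2)
print(dilution_rm(gamma_true=3.0, gamma_guess=2.0, n_steps=500, rng=rng))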
7.4. Up and Down Rule
Dixon and Mood [1948] have proposed a method of estimating LD50 which is simpler than the R-M procedure. In the latter, the dose levels are random and we cannot specify them ahead of time. The Dixon-Mood procedure chooses a series of equally spaced dose levels yi . The response is observed only at these levels. We take the first observation at the best initial guess of LD50 . If the response is positive, the next observation is made at the immediately preceding lower dose level. If the response is zero, the next trial is made at the immediately proceeding dose level. If the positive response is coded as 1 and no response is denoted by 0, then the data may look like the one in figure 7.1. (Suppose we start at 0, let the response be 1; then we go to dose 1 and the response is 0; then go to dose 0, and the response is 1; then go to dose 1 at which the response is 0; then go to zero dose at which the response is 0; then go to dose 1 and suppose the response is 1; then go to dose 0, the response is 0; then take dose 1, the response is 0; then go to dose 2, the response is 1 etc.) The main advantage of this up and down method is that it automatically concentrates testing near the mean. Furthermore, this increases the accuracy of the estimate. The savings in the number of observations may be 30–40%. Another advantage is that the statistical analysis here is quite simple. One possible disadvantage is that it requires each specimen be tested separately, which is not economical, for instance, in tests of insecticides.
Fig. 7.1.
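A sketch simulating the Dixon–Mood up and down rule on equally spaced dose levels, together with the Brownlee et al. [1953] dose-averaging estimate quoted later in this section. The normal tolerance distribution and all numerical settings are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def up_and_down(mu, sigma, y0, d, n_trials, rng):
    # After a positive response (1) move one level down; after a
    # non-response (0) move one level up. Spacing d is roughly sigma.
    levels, responses = [], []
    y = y0
    for _ in range(n_trials):
        resp = int(rng.random() < norm.cdf((y - mu) / sigma))
        levels.append(y)
        responses.append(resp)
        y = y - d if resp == 1 else y + d
    return np.array(levels), np.array(responses)

rng = np.random.default_rng(3)
d = 1.0
levels, responses = up_and_down(mu=0.0, sigma=1.0, y0=1.5, d=d, n_trials=40, rng=rng)

# Brownlee et al. [1953]: average of the levels used at trials 2, ..., n+1,
# where the (n+1)-th level is the one the next trial would have used.
next_level = levels[-1] - d if responses[-1] == 1 else levels[-1] + d
ld50_hat = np.mean(np.append(levels[1:], next_level))
print(ld50_hat)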
The method is not appropriate for estimating dose levels other than LD50 because of the assumption that the normal or the logistic distribution will be violated in the extreme tails of the distribution. Also, the sample size is assumed to be large since only large-sample properties of the estimator will be asserted. Also, one must have a rough estimate of the standard deviation in advance, because the intervals between testing levels should be equal to the standard deviation. This condition will be satisfied if the interval actually used is less than twice the standard deviation. If y denotes the dose let y0 be an estimate of LD50 . Then the tests are made at yi D y0 š id i D 0, 1, 2, . . . where d is an estimate of the standard deviation and y0 is the level of the initial test. Let ni denote the number of successes and mi the number of failures at yi i D 0, 1, . . .. Then the distribution of these variables is pn, mjy0 D K
C1
n
m
pi i qi i ,
n D n1 , n2 , . . .,
m D m1 , m2 , . . .,
where
iD1
pi D [yi m/s] D 1 qi D f1 C exp[a C byi ]g
i D 0, 1, . . . for the probit case, for the logit case,
and K does not involve the unknown parameters. The method of maximum likelihood will be used in order to estimate the unknown parameters. Since jni mi1 j D 0 or 1 for all i, either one of the sets fni g or fmi g contains practically all the information in the sample. If N D ni and M D mi , and assuming that N M, we can write the approximate likelihood function as pm, njy0 , M N D K0 pi qi1 ni , i
. . . since qk D 1 and pk D 1 with nkC1 D mk . Dixon and Mood [1948] maximize the above with respect to the unknown parameters m and s and obtain the estimates. Let Pi k denote the probability that the k-th observation is taken at yi . If the k-th observation is at yi , then the (k 1)-th observation was at either yiC1 or yi1 . Hence we have Pi k D PiC1 k 1piC1 C Pi1 k 1qi1 with boundary conditions P0 1 D 1, Pi 1 D 0 for i 6D 0. Also, because jni mi1 j D . 0 or 1, asymptotically the Pi satisfy (by writing Pi D Pi pi C Pi qi and taking PiC1 D Pi . and piC1 D qi ), Pi pi D Pi1 qi1 . Hence Pi D P0
∏_{j=0}^{i−1} qⱼ / ∏_{j=1}^{i} pⱼ,  for i > 0,
= P₀ ∏_{j=i+1}^{0} pⱼ / ∏_{j=i}^{−1} qⱼ,  for i < 0.
Notice that n_{j+1} ≈ mⱼ. Hence E(n_{j+1}) ≈ E(mⱼ) for fixed N and M, so that E(n_{j+1})/qⱼ = E(nⱼ)/pⱼ. From this we obtain, for i > 0,
∏_{j=0}^{i−1} {E(n_{j+1})/E(nⱼ)} = E(nᵢ)/E(n₀) = ∏_{j=0}^{i−1} (qⱼ/pⱼ) = Wᵢ (say).
So E(nᵢ) = E(n₀)Wᵢ for i > 0. Similarly, we obtain for i < 0, E(nᵢ) = E(n₀) ∏_{j=i}^{−1} (pⱼ/qⱼ). Since
N = Σᵢ E(nᵢ) = E(n₀) Σᵢ Wᵢ,  where  Wᵢ = ∏_{j=0}^{i−1} (qⱼ/pⱼ) for i > 0 and Wᵢ = ∏_{j=i}^{−1} (pⱼ/qⱼ) for i < 0,
it follows that E(n₀) = N / Σᵢ Wᵢ, and hence E(nᵢ) = N Wᵢ / Σᵢ Wᵢ.
7.4.1. mle Estimates of m and s
For estimation purposes, only the successes or only the failures will be used, depending on which has the smaller total. Assume that N < M. Then let
A = Σ i nᵢ,  B = Σ i² nᵢ.
Then Dixon and Mood [1948] obtain
μ̂ = y₀ + d[A/N ± 1/2],
where y₀ is the normalized level corresponding to the lowest level at which the less frequent event occurs. The plus sign (minus sign) is used when the analysis is based on the failures (successes). Also,
σ̂ = 1.620 d[(NB − A²)/N² + 0.029],
which is quite accurate when (NB − A²)/N² is larger than 0.3. When (NB − A²)/N² is less than 0.3, Dixon and Mood [1948, appendix B] give an estimate of σ based on an elaborate calculation. They also provide confidence intervals for μ that are based on the large-sample properties of the mle values. Brownlee et al. [1953] obtain an estimate which is asymptotically equivalent to the approximation of Dixon and Mood [1948] and to the maximum likelihood estimator and is given by
L̂₀.₅₀ = n⁻¹ Σ_{r=2}^{n+1} yᵣ
for a sequence of n trials, where yᵣ is the level used at the r-th trial. (Notice that the level of the first trial gives no information, although the level at which we would have taken the (n + 1)-th observation certainly has some information.)
7.4.2. Logit Model
For the logit model, one can obtain the mle of b and a/b and their standard errors based on the method of maximum likelihood. For some new alternative strategies and studies via Monte Carlo trials, the reader is referred to Wetherill [1966, section 10.3]. 7.4.3. Dixon’s Modified Up and Down Method
The up and down procedure proceeds in the manner described earlier until the nominal sample size is reached. The nominal sample size NŁ of an up and down sequence of trials is a count of the number of trials, beginning with the first pair of responses that are unalike. For example, in the sequence of trial result 000101, the nominal sample size is 4. Although the similar responses preceding the first pair of changed responses do not influence NŁ , they are used in estimating the ED50 . For 1 < NŁ 6, the estimate of ED50 is obtained from table 2 of Dixon [1970, p. 253] and is given by log ED50 D yf C kd where yf denotes the final dose level in an up and down sequence and k is read from table 2 of Dixon [1970]. For example, if the series is 011010 and yf D 0.6 and d D 0.3, then NŁ D 6 and the estimate of log ED50 is 0.6 C 0.8310.3 D 0.85. For nominal sample sizes greater than 6, the estimate of log ED50 is yi C dAŁ NŁ
where the yi values are the log dose levels among the NŁ nominal sample size trials, and where AŁ is obtained from table 3 of Dixon [1970, p. 254] and AŁ depends on the number of initial-like responses and on the difference in the cumulative number of 1 and 0 values in the nominal sample size NŁ . In order to plan an up and down experiment, one needs to specify (1) starting log dose, (2) the log dose spacing, and (3) the nominal sample size. It is desirable to have the starting dose as close to the ED50 as possible, because further the starting dose is from the ED50 , the greater is the likelihood of ending up with a string of similar responses prior to beginning of the nominal sample size. The choices for equal log dose spacings are 2s/3, s, 3s/2. Also, NŁ should be 3, 4, 5 or 6, because the mean square error of the up and down estimate of ED50 for 3 < NŁ 6 is essentially independent of the starting dose and is approximately 2s2 /NŁ , where s2 denotes the variance of the underlying population. An additional factor which is in favor of the up and down method is that its analysis procedure minimizes the sum of squares on log dose rather than on response. The computations are relatively easy and the interpretation follows the well-known regression analysis. The asymptotic maximum likelihood solution given by Dixon and Mood [1948] did not make full use of the initial scores of trials and the occurrence of an unequal number of 0 and 1. The estimate of Brownlee et al. [1953] gave a reduction in MSE by adjusting the sample size for series starting at some distance from the true ED50 . However, both the above estimates do depend on the initial dose level.
7.5. Another Modified Up and Down Method
Jung and Choi [1994] propose another modification of the up-and-down method which seems to perform better in terms of the mean squared error. In the following we will describe this method. It is based on altering the test space (dose span). The first stage consists of an original up-and-down experiment on the predetermined equally spaced dose (test) levels until k changes of response type are observed. The second stage consists of halving the initial test space and restarting the up-and-down procedure at the dose level nearest to the average based on the results up to that point and continuing the experiment at the next higher or the next lower level depending on the response type on the reduced test space. This modification was originally proposed by Wetherill [1963]. A typical modified up-and-down method with k D 6 might look like figure 7.2 where “1” and “0” denote the response (e.g., death) and nonresponse (e.g., survival), respectively. Note that in figure 7.2, at the beginning of stage 2 we observe the response at y0.5 since the proportion of ones at y0 is 4 out of 5. The idea behind this modification is that since the up-and-down method converges rapidly to LD50 , reducing the dose span at a certain stage would hopefully increase the precision of the final estimator. Reducing the dose span could be beneficial, especially if the initial dose span happens to be relatively wide.
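A sketch of the two-stage design just described: an ordinary up-and-down run until k changes of response type are observed, after which the dose span is halved and the run restarts at the grid level nearest the stage-1 average. The tolerance distribution, the restart rule details, and the crude dose-averaging summary (in place of the probit MLE discussed below) are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def modified_up_down(mu, sigma, y0, d, k_changes, n2, rng):
    def run(y, step, stop_changes=None, stop_trials=None):
        ys, rs, changes = [], [], 0
        while True:
            r = int(rng.random() < norm.cdf((y - mu) / sigma))
            ys.append(y); rs.append(r)
            if len(rs) > 1 and rs[-1] != rs[-2]:
                changes += 1
            y = y - step if r == 1 else y + step
            if stop_changes is not None and changes >= stop_changes:
                break
            if stop_trials is not None and len(rs) >= stop_trials:
                break
        return np.array(ys), np.array(rs)
    # Stage 1: spacing d until k changes of response type.
    ys1, rs1 = run(y0, d, stop_changes=k_changes)
    # Stage 2: halve the spacing, restart near the stage-1 mean, fixed number of trials.
    start2 = round(float(np.mean(ys1)) / (d / 2)) * (d / 2)
    ys2, rs2 = run(start2, d / 2, stop_trials=n2)
    return np.concatenate([ys1, ys2]), np.concatenate([rs1, rs2])

rng = np.random.default_rng(4)
levels, responses = modified_up_down(mu=0.0, sigma=1.0, y0=2.0, d=1.0,
                                     k_changes=6, n2=20, rng=rng)
print(np.mean(levels))     # crude dose-averaging estimate of LD50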
Fig. 7.2. Modified up and down method for estimating LD50 .
Let the tolerance distribution be normal with mean m and variance s2 . Let y1i , y2j be log-transformed dose levels for ith subject on the initial test space and jth subject on the reduced test space, respectively. Then, let x1i D y1i m/s D b0 C b1 y1i and x2j D y2j m/s D b0 C b1 y2j where m D b0 /b1 and s D 1/b1 . Suppose that there are n1i (n2j ) responses and m1i (m2j ) non-responses at y1i (y2j ) using the one (reduced) test space. Let p1i (p2j ) be the probability of a response at dose level y1i (y2j ) on the one (reduced) test space and q1i D 1 p1i and q2j D 1 p2j where p1i D b0 C b1 y1i and
p2j D b0 C b1 y2j .
Let J1 and J2 be the total number of test levels at which subjects are actually experimented on the initial test space and the reduced test space, respectively. It is assumed that J1 and J2 are fixed prior to conducting the experiment (although there is no meaningful stopping rule for the up-and-down method). Note that the distributions of n1i and n2j depend conditionally on the associated dose levels x1i and x2j and the number of tests performed at the levels. The test levels and the number of tests are completely determined by the previous history of the sample. Hence, using recursive conditioning on the previous history, it can be shown that the likelihood function is given by Ln1 , m1 , n2 , m2 ; b0 , b1 jy0 D K
∏_{i=1}^{J₁} p₁ᵢ^{n₁ᵢ} q₁ᵢ^{m₁ᵢ} ∏_{j=1}^{J₂} p₂ⱼ^{n₂ⱼ} q₂ⱼ^{m₂ⱼ},    (7.9)
where K does not depend upon β₀ and β₁. The likelihood equations are
∂ log L/∂β₀ = Σ_{i=1}^{J₁} φ₁ᵢ (n₁ᵢ/p₁ᵢ − m₁ᵢ/q₁ᵢ) + Σ_{j=1}^{J₂} φ₂ⱼ (n₂ⱼ/p₂ⱼ − m₂ⱼ/q₂ⱼ)    (7.10)
and
∂ log L/∂β₁ = Σ_{i=1}^{J₁} φ₁ᵢ y₁ᵢ (n₁ᵢ/p₁ᵢ − m₁ᵢ/q₁ᵢ) + Σ_{j=1}^{J₂} φ₂ⱼ y₂ⱼ (n₂ⱼ/p₂ⱼ − m₂ⱼ/q₂ⱼ),    (7.11)
where φ₁ᵢ = φ(x₁ᵢ) and φ₂ⱼ = φ(x₂ⱼ).
Since there is no hope of obtaining closed-form solutions for β₀ and β₁, only iterative solutions are possible. The estimate of LD50, say μ̂, is
μ̂ = −β̂₀/β̂₁,    (7.12)
where β̂₀ and β̂₁ are the MLEs obtained by the Levenberg-Marquardt method [1944, 1963]. The variance of μ̂ computed by the delta method is given by
var(μ̂) = (∂μ/∂β₀, ∂μ/∂β₁) var(β̂) (∂μ/∂β₀, ∂μ/∂β₁)′    (7.13)
       = (−1/β̂₁, β̂₀/β̂₁²) I⁻¹(β̂) (−1/β̂₁, β̂₀/β̂₁²)′,    (7.14)
where I⁻¹(β̂) is the inverse of the information matrix evaluated at β = β̂. Then the large-sample 100(1 − α)% confidence interval for μ is
μ̂ ± z_{α/2} (var μ̂)^{1/2}.    (7.15)
It should be pointed out that when the tolerance distribution is normal, Fieller's theorem can be applied in order to obtain the confidence interval.
Example
For the data in figure 3 (with J₁ = 9, J₂ = 20), Jung and Choi [1994] obtain β̂₀ = 0.288, β̂₁ = 2.998, μ̂ = −0.288/2.998 = −0.096, and
var(μ̂) = (−1/2.998, 0.288/8.988) I⁻¹(β̂) (−1/2.998, 0.288/8.988)′.
Hence, a 95% confidence interval for μ is −0.096 ± 0.190, or (−0.286, 0.094).
Also, the authors conduct a simulation study to compare the performances of the up-and-down method, their modified up-and-down method, Robbins and Monro
procedure and the Spearman-Karber estimate. Recall that the Robbins-Monro iterative procedure is given by c ynC1 D yn pˆ n 1/2 n
7.16
where pˆ n D 1 or 0 according to the experimental unit responds or not at dose level p yn . Also, the value of c that minimizes varynC1 is 2ps (see section 7.2.2 or Cochran and Davis [1964]). In the simulation study based on 1000 replications with J1 C J2 D 30, the authors also include chi-squared distribution with two degrees of freedom as a possible candidate for the tolerance distribution. They draw the following conclusions. The modified 2:1 up-and-down method (i.e., J1 /J2 D 2) is an efficient method for estimating LD50 , especially when the location of the LD50 and the standard deviation of the tolerance distribution are not known. However, if such information is available, the Robbins-Monro procedure seems to be the most efficient approach. The fixed-sample Spearman-Karber estimator can be recommended only because of its simplicity. With regards to the interval estimation, all up-and-down approaches considered by Jung and Choi [1994] seem to be unsatisfactory with respect to the coverage probability. The authors also point out that there is no meaningful or natural stopping rules for the up-and-down methods. As for the required optimal sample size, there is no practical solution. However, there may be situations where the up-and-down method makes a lot of sense, and hence, is applicable. For instance, the up-and-down method has been used in head injury research (see Choi [1990]) and in Phase I clinical trials involving cancer treatment (see Storer [1989]).
8 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
Sequential Up and Down Methods
8.1. Up and Down Transformed Response Rule
Wetherill [1963] and Wetherill et al. [1966] proposed an up and down transformed response rule (UDTR) for estimating LD (100p), p > 1/2. Use a fixed series of equally spaced dose levels and after each trial estimate the proportion p0 of positive responses (which are denoted by 1) at the dose level used for the current trial, counting only the trials after the last change of the dose level. If p0 > p, and p0 is estimated from n0 trials or more, go to the immediately lower dose level. If p0 < p, go to the immediately higher dose level, and if p0 D p, do not change the dose level. If p < 1/2, UDTR is modified accordingly by calculating the proportion of negative rather than positive responses. For instance, if n0 D 3, and p > 0.67, define the responses as type U or type D: U D 0, 1, 0, 1, 1, 0; D : 1, 1, 1. The first trial is performed at an arbitrary level x and more trials are carried out at the same level, if necessary, until we have a type U or type D response, and then move one level up or down respectively. If the consecutive results at any level are simply classified as type U or D, the following set of UDTR results can be classified as follows. (1) Results in original form ending in 2nd change: Level
3 2 1 0
Observation No. 1
2
3
1
1
1
4
5
6
1
1
1
7
8
9
1
0
10
11
12
1
1
1
0
(2) Results classified as U or D ending at 6th change: Level
Change No. 1
3 2 1 0
– D – –
– – D –
– – – U
– – U –
2
3
4
5
– D – –
– – U –
– D – –
– – U –
6 – U – –
D – – –
If the probability of a positive response at any level j is P(xⱼ), then the probability that the trial at this level will result in a change of level downward is [P(xⱼ)]³. If responses are classified as U or D, then the response curve is F(x) = [P(x)]³.
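A sketch of the UDTR rule for n₀ = 3: keep testing at the current level until either three consecutive positive responses occur (move down) or a negative response occurs (move up), so the procedure concentrates near the dose where [P(x)]³ = 1/2. The tolerance distribution and stopping value are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def udtr(mu, sigma, y0, d, n_changes, rng):
    y, levels = y0, []
    changes, prev_move = 0, None
    while changes < n_changes:
        run = 0
        while True:
            levels.append(y)
            if rng.random() < norm.cdf((y - mu) / sigma):
                run += 1
                if run == 3:            # type D: three straight responses
                    move = "down"
                    break
            else:                       # type U: a negative response ends the run
                move = "up"
                break
        y = y - d if move == "down" else y + d
        if prev_move is not None and move != prev_move:
            changes += 1
        prev_move = move
    return np.array(levels)

rng = np.random.default_rng(5)
levels = udtr(mu=0.0, sigma=1.0, y0=0.0, d=0.5, n_changes=10, rng=rng)
print(np.mean(levels))   # concentrates near the dose with P(x)**3 = 1/2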
8.2 Stopping Rules
Sometimes we are restricted to take only a fixed number of observations. However, often some flexibility is allowed as regards the number of observations, in order to compensate for poor initial guesses by taking additional observations. Hence, it is desirable to have some stopping rules. Wetherill [1966] give two stopping rules. Rule 1. Stop after a given number of changes of response type. Rule 2. Stop when the maximum number of trials at any dose level reaches a specified number. Rule 2 may not be easy to apply, therefore rule 1 is recommended.
8.3. Up and Down Method
Alternative Procedures for Estimating LD50. Let x₀, x₁, ... denote the dose levels and suppose we sample according to the up and down method of Dixon and Mood [1948] until k (k ≤ n) turning points (peaks and valleys) are observed. For instance, consider figure 4 [Choi, 1971], in which open and filled circles denote negative and positive responses, respectively. The method of estimating the LD50 of a response curve based on such data has been considered by Dixon and Mood [1948], Brownlee et al. [1953], Wetherill [1963], Wetherill et al. [1966], Dixon [1965], Cochran and Davis [1964], Tsutakawa [1967], Hsi [1969], and Choi [1971]. Recall that Brownlee et al. [1953] propose
μ̂ = (n + 1)⁻¹ Σ_{j=1}^{n+1} xⱼ,
Fig. 8.1.
where x_{n+1} is the dosage that would be used at the (n + 2)-th trial if the experiment were continued. Wetherill et al. [1966] propose w and ŵ, which are based only on the peaks and valleys of the response series. Let xᵢ denote the halfway value between the i-th turning point and the dosage level used at the immediately preceding trial (i = 1, 2, ..., k). Then
w = Σ_{i=1}^{k} xᵢ/k.
ŵ is defined as the average of the dosages at the turning points themselves. For instance, the data in figure 8.1 yield k = 7, d (spacing) = 1, x₁ = 4.5, x₂ = 5.5, etc., and w = 4.07, ŵ = (4 + 6 + ··· + 3)/7 = 4.0, while μ̂ = 4.07. One can easily see that
ŵ = w if k is even,  ŵ = w ± d/(2k) if k is odd,
i.e. ŵ = w + (d/2k)(number of valleys − number of peaks), because, when k is even, the number of peaks is equal to the number of valleys, whereas when k is odd, the number of peaks differs by at most 1 from the number of valleys. Using the Markov chain model approach discussed earlier by Wetherill et al. [1966], applied to the set of turning points, Choi [1971] was able to obtain the variances of w and ŵ. The phasing factor, defined as the distance of μ to the closest dosage level, plays an important role. Also, the question of spacing for the estimates w and ŵ is
of practical interest, since the smaller the spacing, the larger is the average sample number (ASN) required to yield the specified number of turning points. Dixon [1965] suggests using ASN times the mean square error (MSE) of an estimator as a criterion. Based on several exact and Monte Carlo studies, Choi [1971] reaches the following conclusions.
(1) Like μ̂, the estimators w and ŵ are biased, although the bias decreases as k increases.
(2) It is more advantageous to obtain an odd number of turning points before termination of the experiment.
(3) On the basis of Dixon's [1965] criterion, the optimal spacing seems to be d = 0.8σ for any k (σ² denoting the variance of the population).
(4) x₀ > μ if and only if x₀ > E(w), and x₀ − E(w) is a monotone function of x₀ − μ. The same property holds for ŵ.
(5) x₀ > E(w) if and only if E(w) > μ, and E(w) − μ is a monotone function of x₀ − E(w). An analogous property holds for ŵ.
(6) Var(w) ≤ var(ŵ); however, bias(w) ≥ bias(ŵ).
(7) Both w and ŵ appear to be somewhat more efficient than μ̂ on the basis of MSE if d = 0.5σ, but not when d = 1.0σ. However, uniform superiority of any one of the three estimators cannot be established.
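A sketch computing the turning-point estimates w and ŵ defined above from an up-and-down response series; the short series below is an illustrative assumption consistent with the up-and-down rule (spacing d = 1).

import numpy as np

def turning_point_estimates(levels, responses):
    # A turning point occurs at a trial whose response type differs from the
    # previous one; w averages the halfway values between each turning point
    # and the preceding dose, w_hat averages the doses at the turning points.
    levels = np.asarray(levels, dtype=float)
    responses = np.asarray(responses)
    turns = np.where(responses[1:] != responses[:-1])[0] + 1
    halfway = (levels[turns] + levels[turns - 1]) / 2.0
    return halfway.mean(), levels[turns].mean(), len(turns)

levels    = [0, 1, 0, 1, 0, -1, 0, 1]
responses = [0, 1, 0, 1, 1,  0, 0, 1]
w, w_hat, k = turning_point_estimates(levels, responses)
print(w, w_hat, k)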
8.4. Finite Markov Chain Approach
Wetherill et al. [1966] point out that the estimation by w or ŵ can be viewed in terms of a Markov chain. Define the state of a sequence of observations only when there has been a change of response type. Let us label a state which is a peak by p and a valley by v. Usually there will be an infinite number of states; however, we can impose restrictions such as P(z) = 1.0 for z ≥ 10 and P(z) = 0 for z ≤ −10, assume there is a finite number m of possible states, and denote the system of states by
{S₁ᵖ, S₂ᵖ, ..., Sₘᵖ, S₁ᵛ, ..., Sₘᵛ}.
Let P(positive response at dose level j) = Pⱼ and P(negative response at level j) = Qⱼ, j = 1, ..., m.
Also, without loss of generality let the levels be numbered in the decreasing order so that P1 > P2 > . . . > Pm and because of the preceding restriction, Pm D 0 and P1 D 1.
If the sequence of observations is at a state Sᵢᵖ, then only the valleys S_{i+1}ᵛ, ..., Sₘᵛ are possible for the next state. Hence
tᵢⱼ = P(arriving at Sⱼᵛ from Sᵢᵖ) = Qⱼ ∏_{r=i+1}^{j−1} Pᵣ, for i < j, and zero elsewhere,
and
tᵢⱼ = P(arriving at Sⱼᵖ from Sᵢᵛ) = Pⱼ ∏_{r=j+1}^{i−1} Qᵣ, for i > j, and zero elsewhere.
Thus, the matrix of transition probabilities of the chain is
T = [ 0  B ]
    [ A  0 ],
where B = (tᵢⱼ)_{m×m}, A = (tᵢⱼ)_{m×m},
and A is lower triangular and B upper triangular. The Markov chain is periodic with period 2. There is strong dependence between successive xᵢ and between averages of successive pairs of xᵢ. For a sequence started at any given state, let
a^(n) = [a₁^(n), ..., aₘ^(n)],  b^(n) = [b₁^(n), ..., bₘ^(n)]
denote the probabilities that the n-th turning point is a specified peak or valley, respectively. Obviously aₘ^(n) = 0, since there is no peak at level m, and b₁^(n) = 0, since there is no valley at level 1 (n = 1, 2, ...). One can easily establish the following recurrence relations for a and b:
a^(n+1) = Bb^(n) and b^(n+1) = Aa^(n),  or  a^(n+2) = BAa^(n) and b^(n+2) = ABb^(n).
As the sample size increases, an and bn approach a steady state and the asymptotic distribution of peaks and valleys can be obtained by computing the principal eigen vectors of BA and AB. Also, one can obtain the exact small sample ˆ Wetherill et al. distribution of peaks and valleys and of the estimators w and w. [1966] calculate the mean of w and Choi [1971] computes the variance of w. Notice that the UDTR of Wetherill [1963] can be regarded as an up and down procedure if performed on a transformed response curve. Thus, the method of estimation and Markov chain theory etc. carry over to UDTR.
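A sketch of this finite-chain calculation. The response curve (a normal curve truncated so that P₁ = 1 and Pₘ = 0, as assumed above) and the number of levels are illustrative; the steady-state peak and valley distributions are obtained by iterating the recurrences, which is equivalent to taking the principal left eigenvectors of BA and AB.

import numpy as np
from scipy.stats import norm

doses = np.linspace(2.0, -2.0, 9)
P = norm.cdf(doses)            # decreasing in j, so P_1 > P_2 > ... > P_m
P[0], P[-1] = 1.0, 0.0         # boundary restriction P_1 = 1, P_m = 0
Q = 1.0 - P
m = len(P)

# B[i, j]: peak at level i -> valley at level j (j > i)
# A[i, j]: valley at level i -> peak at level j (j < i)
B = np.zeros((m, m))
A = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if j > i:
            B[i, j] = Q[j] * np.prod(P[i + 1:j])
        if j < i:
            A[i, j] = P[j] * np.prod(Q[j + 1:i])

# Iterate a_{n+1} = b_n A, b_{n+1} = a_n B (row-vector convention) to steady state.
a = np.full(m, 1.0 / m)
b = np.full(m, 1.0 / m)
for _ in range(500):
    a, b = b @ A, a @ B
    a, b = a / a.sum(), b / b.sum()
print("stationary peak distribution:  ", np.round(a, 3))
print("stationary valley distribution:", np.round(b, 3))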
8.5. Estimation of the Slope
Let Pᵢ = F(a + bxᵢ). Suppose we are interested in estimating τ = 1/b. We conduct two separate experiments: one to estimate LD(100p) and the other to estimate LD(100q), where
q = 1 − p, p > 1/2. In both cases we use either w or ŵ. If F is symmetric, then τ̂ = (w_p − w_q)/[2F⁻¹(p)]. An analogous estimate in terms of ŵ holds. In the case of the logistic F, τ̂ = (w_p − w_q)/[2 ln(p/q)].
8.6. Expected Values of the Sample Size
Wetherill et al. [1966] give some discussion and methods based on the Markov chain approach for computing the average sample number. For further details, see Wetherill et al. [1966, section 12].
8.7. Up and Down Methods with Multiple Sampling
So far we have discussed up and down methods with an observation at each stage. Tsutakawa [1967] proposed up and down methods with data obtained sequentially in blocks. If L D f. . . , d1 d0 , d1 , . . .g is a set of equally spaced levels with interval d, the experiment starts with K observations at some level Y1 in L and continues for an additional n 1 trials at levels Y2 , . . . , Yn determined by Yi C d if 0 ri k0 YiC1 D Yi if k0 < ri < K k0 Yi d if K k0 ri K where ri denotes the number of responses at the i-th trial and k0 is a specified integer such that 0 k0 < K k0 . (Notice that, for given Yi D dj , ri is binominally distributed with probability of response Fdj for each observation.) The procedure for generating the sequence fYi , ri , i ½ 1g will be called the random walk design and is denoted by WK, k0 . The design W(1,0) is the up and down method of Dixon and Mood [1948]. Using the results of the Markov chain theory, the asymptotic distribution of the sample average, y of the dose levels, is studied and an estimate of its variance is proposed by Tsutakawa [1967]. Numerical studies of the bias and mean square error of y indicate that there is often a loss of efficiency in using K up to 5 instead of K D 1. Tsutakawa [1967] also surmises from the asymptotic properties of the Spearman-Karber estimator, that its large sample efficiency can decrease as the number of distinct levels increases, whereas in small samples, it is quite robust. Hsi [1969] proposed an up and down method using multiple samples (MUD). The procedure is as follows:
8
Sequential Up and Down Methods
150
(1) (2)
A series of dose levels is chosen, usually in units of log-doses. A sequence of trials, using k experimental units at a time is carried out. At each trial the dosage level is selected depending upon the proportion of units responding, as follows: (a) Go to the next higher dose level if s or less responses among k units are found in the current trial; (b) go to the lower dose level if r or more out of k respond at the current level r > s; (c) use the same dose level if the number of responses lies strictly between s and r. (3) The experiment is terminated after n trials and the estimator is computed (usually n is prespecified). Apparently the MUD procedure is slightly inefficient when compared with the sequential up and down method (SUD method), especially if the initial dose level is far away from the percentage dose to be estimated. Hsi [1969] computes the bias and precision of the estimators for LD50 based on the dose-averaging formula for the probit model. A method of estimating the extreme percentage doses is also indicated. The MUD method is more efficient than the nonsequential procedures and is comparable to other SUDs.
8.8. Estimation of Extreme Quantiles
The estimation of low dose levels is of much interest, especially in carcinogenic models. Classical sampling procedures involve placing the doses into equally sized k values, equally spaced groups and then using maximum likelihood methods for estimating the unknown parameters. Sequential methods call for selecting a dose and observing the response before the next dose is administered. Let Y1 be the response at dose level x1 given to a subject and we administer dose x2 to the second subject. The same point estimation procedures that are used in a fixed dose level case may also be used in the sequential case; however, the dose levels are randomly selected. Typically xnC1 depends on x1 , . . . , xn and the respective responses Y1 , . . . , Yn . These were discussed in earlier sections of this chapter. McLeish and Tosh [1983] propose a sequential method for logit analysis for the following reasons: (1) (2) (3)
The sequential method is reasonably robust against changes in response function form; it allows other forms of estimation which are easier to handle than the method of maximum likelihood; it selects the dose levels in such a way that more information pertaining to the extreme quantiles of interest from the data is gained while controlling the number of deaths of experimental units.
Note that property (3) is important in which the major control of a study is associated with the number of deaths among the test units as a result of an experiment, for example, in studies that involve the use of higher mammals such as chimpanzees.
Estimation of Extreme Quantiles
151
8.8.1. Sequential Procedure of McLeish and Tosh [1983]
Let P[Yx D 1] D Fx D [1 C ebxq ]1
8.1
where b is the scale parameter and q is the location parameter. We will be interested in estimating the root r of the equation Fx D p,
8.2
when p is close to zero. For instance, LD5 is the root of Fx D 0.05. Observations will be made at dose levels x1 , x2 , . . . , xn until the desired response is observed. Let Y1 , Y2 , . . . , Yn be the responses corresponding to x1 , x2 , . . . , xn and let > 0 be given. Then for estimating a root r of (2) when p is near 0, the procedure is to sample sequentially at x1 , x2 D x1 C , . . . , xN D x1 C N 1, where
8.3
N D inf fj : Yj D 1g.
8.4
Estimates will be based on x1 , and N or x1 , and the range of doses N 1. After adding a standard continuity correction factor /2, define D D N 1/2. 8.8.2. Estimation of Parameters
We now investigate the estimation of parameters based on the exponential approximation (By the theorem in Appendix, for ! 0 and x1 ! 1, [expbD 1] is approximately negative exponential with mean 1/m where m D ebx1 q /[eb 1].) Hereafter the ‘phrases’-like maximum likelihood (unbiased) will be based on this approximating density. For k realizations of D, d1 , . . . , dk , starting points x11 , . . . , x1k increment sizes 1 , . . . , k and corresponding values of m1 , . . . , mk , the mle values of q and b are obtained from the log likelihood ln L D k ln b C
k jD1
ln mj C b
k jD1
dj
k
mj ebdj 1.
jD1
. likelihood D ln gdi where gd D pdf of D D expedb 1mbmedb . However, one cannot obtain explicit solutions. When b is known and xli x1 , i , the authors claim that the mle of q differs from the uniformly minimum variance unbiased estimator of q by the constant function [yk ln k]/b where y denotes the digamma function [yx D d ln x/dx]. Further, moment estimates of b and q based on the extreme value distribution approximation are ˜ ˜ b˜ D p[6svarxn ]1/2 , q˜ D avfxN C [g lneb 1]/bg,
where svar denotes sample variance, av(Ð) denotes mean and g is Euler’s constant. The above estimates of b, q and hence of r have considerable bias, but small variance.
8
Sequential Up and Down Methods
152
Remark. The convergence to an exponential distribution is valid when Fx ¾ c expax as x ! 1, which is satisfied by a broad class of distributions, but fails when F is normal. However, the authors find the first response estimates, namely b˜ and q˜ to be good when applied to the probit model. The estimation scheme is strictly sequential, that is, Yn must be observed before dose xnC1 is administered. The authors suggest the following remedy. For example, we plan to continue sampling until we observe 10 deaths. Then administer dose x1 to 10 test units and observe the number z1 of deaths. Next, we administer dose x2 D x1 C to 10 z1 units and observe the number z2 deaths. Then administer dose x3 D x2 C to 10 z1 z2 units and observe z3 ; continue in this manner until a total of 10 deaths is observed. The 10 values of N are the indices of the doses xj at which the deaths were recorded. The above procedure is simply a more efficient way of obtaining 10 replicate values of the variable D
8.8.3. Choice of x1 and 1
Based on the exponential approximation given in the appendix and the cost of administering a dose being 1 and the additional cost of death being C1 , the authors propose a method of optimally choosing x1 and by minimizing a certain criterion function, namely the product of the asymptotic variance of ˆr (based on the information matrix) and [EN C C1 ]. For further details see McLeish and Tosh [1983, section 4]. Appendix: Limiting Distribution of D = .N − 1=2/1
The following result is used by McLeish and Tosh [1983]. Theorem As x1 ! 1 and ! 0, mebD 1 converges in distribution to a standard exponential variable where m D ebx1 q /eb 1. Proof. For arbitrary x > 0, define d and the integer n by d D b1 ln1 C x/m0 n d < . Now, if x1 ! 1 and ! 0, n ! 1 provided d is bounded away from zero. However, if a subsequence of d values approaches zero, for this subsequence . d ¾ b1 x/m D x/ebx1 q and consequently d/ ! 1. So, in any case n ! 1 and let us approximately set D D N. Next, since n 1 < d < n, PN > n 1 > PD > d > PN > n, hence ln PN > n 1 < ln PD > d < ln PN > n.
Estimation of Extreme Quantiles
153
Now ln PD > d D
n
ln1 C ebxi q .
1
Also we have the inequality a a2 ln1 C a a for a > 0. Hence ln PD > d < ln PN > n D ln PYj D 0, j D 1, . . . , n D
n
lnf1 C ebxi q g.
1
Thus ln PD > d
n
ebxi q D ebx1 q
iD1
n
ei1b menb 1.
iD1
Similarly, one can obtain ln PD > d ½ ln PN > n 1 D PYi D 0, i D 1, . . . , n 1 D
n1
ln1 C ebxi q ½
iD1
n1
febxi q e2bxi q g.
iD1
Now n1 iD1
ebxi q D
n
ebxi q ebxn q
iD1
where ebxn q D ebx1 qCn1bq ebx1 qCdbq ! 0 as x1 ! 1 and ! 0. Next consider n1
e2bxi q D e2bxi q Ð fe2bn1 1gfe2b 1g1
iD1
mebx1 q Ð eb C 11 e2bd 1 ! 0 as x1 ! 1 and ! 0. Also, note that under the same conditions, ebn 1/ebd 1 ! 1. Hence, applying the above approximations, we have ln PD > d D ln P[mebD 1 > ebd 1m] D ln P[mebD 1 > x] ! x, as x1 ! 1 and ! 0, which completes the proof of the theorem.
8
Sequential Up and Down Methods
154
Next, if Gx denotes the extreme value survivor function Gx D exp[ebxy ], where y D q C b1 lneb 1, we have, from the exponential approximation for x > x1 , . PxN > x D PD > x x1 D Gx/Gx1 , which is the survivor function of a truncated extreme value distribution. The mass truncated is 1 Gx1 D 1 em . Hence, if x1 ! 1 and ! 0 in such a way that m ! 0, then xN q b1 lneb 1 converges in distribution to an extreme value distribution with location zero and scale b. Furthermore, from the theorem, we compute the density of D to be fd D mbebd exp[mebd 1].
Estimation of Extreme Quantiles
. sincePD > d D expfmebd 1g.
155
9 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
Estimation of ‘Safe Doses’
The ‘Delaney Clause’ published in 1958, as part of the Food Additive Amendments to the Food, Drug, and Cosmetic Act, says that if a substance is found to induce cancer in man or animal, after ‘appropriate’ experimental testing, then this substance may not be used as an additive in food. The clause requires unconditional banning of the use of a substance found to induce cancer at extremely high dose levels while ignoring possible beneficial effects available at much lower ‘use’ levels where the carcinogenic risk might be very minimal. This raises questions regarding the definition and determination of ‘safe’ residual doses for potential cancer-inducing substances. A dose of a substance is currently said to be ‘safe’ if the induced cancer rate does not differ appreciably from the zero dose cancer incidence rate.
9.1. Models for Carcinogenic Rates
Models for carcinogenic hazard rates have been primarily proposed by Mantel and Bryan [1961], Armitage and Doll [1961], Peto and Lee [1973] and Hartley and Sielkin [1977]. Here we shall present the model considered by Hartley and Sielkin [1977]. Although in some experiments the time to tumor is available, the model of Hartley and Sielkin [1977] covers situations in which for some or all animals the time to tumor is not available, but it is only known whether or not the animal developed cancer before the time of sacrifice or before death. Let Ft, x; a, b D P (the time to tumor is less than or equal to t, the dose is x) 9.1 where a and b denote vector parameters. Also let ft, x; a, b D dFt, x, a, b/dt
9.2
Ft, x; a, b D 1 Ft, x; a, b.
9.3
156
Let the age-specific tumor incidence rate or hazard rate be denoted by Ht, x; a, b D ft, x; a, b/Ft, x; a, b D
d ln Ft, x; a, b. dt
9.4
If only the tumor incidence counts are available at the termination T of the experiment, we use the symbol Fx D FT, x; a, b to denote dependence of the tumor incidence rate on the dose level. The product model for the hazard rate H stipulates that H can be factorized as follows Ht, x; a, b D gx; a1t; b.
9.5
This model is acceptable for experiments in which an agent is applied at a constant rate continuously over time. Cox [1972] maintains that gx; a should be completely specified and 1t; b need not be specified, that is, choose a non-parametric form for 1t; b. If a is a parameter of prime importance, then 1t; b is a ‘nuisance function’. However, in carcinogenic safety testing the definition of safe dose requires the estimation of both 1t; b and gx; a although the inferences may be more sensitive to the precise form of the function gx; a than to that of 1t; b. Hartley and Sielkin [1977] assume that t
Lt; b D
1t; b dt D
b
br tr
br ½ 0
9.6
rD1
0
where bo D 0, since at t D 0 the hazard rate must be zero and where without loss of generality we can standardize the coefficients so that L1; b D
b
br D 1.
9.7
rD1
The polynomial form (9.6) can be regarded as a weighted average of Weibull hazard rates with positive weight coefficients br . Since positive polynomials constitute a system for representing any continuous function, the model considered by Cox [1972] can be construed as a limiting case of (9.6) with b ! 1. Further, we assume that gx; a D
a
as xs ,
where
9.8
sD0
as ½ 0 (hence gx; a ½ 0 and d2 gx; a/dx2 ½ 0.
9.9
Note that Armitage and Doll [1961] take gx; a to be gx; a D c
a 1 C as x.
9.10
sD1
Models for Carcinogenic Rates
157
9.2. Maximum Likelihood Estimation of the Parameters
Let D denote the number of dose levels and nd denote the number of experimental units at dose level xd . In order to write down the likelihood function, the following assumptions will be made. Each experimental unit is associated with (1) the dose xd , (2) the prespecified time T at which it will be destroyed, (3) the time to cancer L if observable, and (4) the time of death t provided t T. Computation of Likelihood under Various Experimental Data (1) We observe time to cancer t without necropsy (e.g. palpability). Then t is a random variable from uncensored range t T. The associated component of the likelihood is ft, x; a, b. (2) There is negative necropsy at termination T. Then time to cancer occurs some time after T. Hence the associated component of the likelihood is FT, x; a, b. (3) There is negative necropsy at death t and the death is due to causes independent of cancer; t is censored at random value t from the life distribution. Here the factor of likelihood is Ft, x; a, b. (4) There is positive necropsy at termination T or at death t; t can be estimated from the necropsy information. The estimated value of t, denoted by t0 , is equivalent to random observation from the time to cancer distribution. Then the likelihood factor is ft0 , x; a, b. (5) Positive necropsy at termination T or death t; however, t T or t t is all that is known about t. Since t cannot be estimated at necropsy, it is random in the ˆ interval (0, T) or 0, t. Consequently, the likelihood component is FT, x; a, b or Ft, x; a, b. In experiments in which only counts of experimental units with or without cancer are made, we would be confined to data types 2, 3 and 5. In the computation of the likelihood, T is used equivalently to t and t0 equivalently to t. The nd units are divided into three categories with nd1 D the number of experimental units for which the time to tumor is either recorded or estimated on the basis of a positive necropsy (data types 1 and 4) nd2 D the number of units with a positive necropsy, but no known time to tumor or estimated time to tumor (data type 5), and nd3 D the number of units with a negative necropsy (data types 2 and 3). Then the likelihood of a and b is nd1 nd2 nd3 D fti Ftj Ftk , where 9.11 LD dD1
iD1
jD1
kD1
Ft, x; a, b D 1 exp[gx; aLt; b] Ft, x; a, b D 1 Ft, x; a, b ft, x; a, b D gx; a1t; b exp[gx; aLt; b] Lt; b D
b
br tr .
9.12
rD1
9
Estimation of ‘Safe Doses’
158
ˆ bˆ can now be computed by the iterative Maximum likelihood estimators (mle) a, convex programming algorithms, which will be described below.
9.3. Convex Programming Algorithms
We would like to find aˆ and bˆ that would minimize D ln L. For given b, the problem of minimizing , with respect to a subject to as ½ 0 or constraints in (9.10), can be handled by a suitable convex programing algorithm leading to a unique global minimum of . Similarly, we can handle the problem of minimizing for given a as a function of b subject to br ½ 0 or subject to the linear inequality constraints 1t; b ½ 0. Let a0, b0 be the initial consistent estimates. Step 1. Minimize [a, b0] with respect to a subject to linear constraints and let a1 be the minimizing solution. Step 2. Minimize [a1, b] with respect to b subject to the linear constraints and let b1 be the solution. For moderate values of a and b steps 1 and 2 consist of simple convex programming problems, especially if the simple constraints as ½ 0, br ½ 0 and (9.7) are used. In step 1, the constraints as ½ 0 are automatically satisfied, and in step 2 only one linear constraint (9.7) need to be met. Thus, will be nonincreasing at every cycle step and hence will converge to a lower limit Ł . Since we introduce upper bounds for the as , br , the generated sequence aI, bI (I denoting the cycle) will have at least one point of accumulation. Now, the Kuhn-Tucker equations are @/@as C ls D 0 s D 0, . . . , a 9.13 as ½ 0, ls as D 0; ls ½ 0 @/@br C mr C m D 0 r D 1, . . . , b. br ½ 0; mr br D 0; mr ½ 0
b br D 0 9.14 m 1 rD1
because the step 1 and step 2 problems are both satisfied for the step 1 and step 2 problems at every point of accumulation and the objective function has the same value Ł at every point of accumulation. Hence, if we assume that equations (9.13) and (9.14) cannot have two different solution points a, b with the same value of the objective function, then there can be only one point of accumulation; therefore ˆ lim aI D a,
I!1
ˆ lim bI D b.
I!1
That is, the two-step iteration process converges to a solution of equations (9.13) and (9.14).
Convex Programming Algorithms
159
Procedure for Obtaining the Consistent Estimators a0, b0 The cancer data are split into D separate dose groups, and using the convex programming algorithm of step 2 separately for the data in each group omitting the restriction (9.7) and using the values a0 D 1, as D 0, s D 1, . . . , a, we obtain estimators b˜ r,d for each group. These
estimators are global mle values of the true a s parameter functions br xd , and they are consistent. Next, we sD0 as xd at dose compute for each d the quantities Bd D brD1 b˜ r,d which are consistent estimators of the parameters b
a
a s br as xd D as xsd . rD1
sD0
sD0
Next, we compute the unweighted least-square estimators as 0 by fitting the form (9.8) to Bd . The as 0 are then consistent estimators of the as since the Vandermonde matrix xsd does not depend on n. Finally, the consistent estimators br 0 are computed from a D 1 ˜ s br 0 D brd as 0xd . 9.15 D dD1 sD0 Thus, the above method involves D applications of the simple reduced step 2 convex programming algorithm and a least-squares computation. There are basically three nonstandard features involved: namely, (1) the time to tumor data is in general, partially or totally, censored; (2) the varying dose levels make the observation not identically distributed, (3) the parameter space is restricted by linear inequalities. Much progress has been made in recent years to obtain mle values subject to constraints. The following property can easily be verified: the matrix of second derivatives of with respect to a is always positive semidefinite and strictly positive definite if (1) there is at least one time to tumor observation tdi , for each dose and/or at least one tdi record is available for each dose, and (2) the number of doses is larger than a. Similarly, the matrix of second derivatives of with respect to b is always positive semidefinite and is strictly positive definite for at least one dose, d. There are at least b different values of tdi or tdi . The above conditions assure the uniqueness of ˆ ˆ b. the mle a,
9.4. Point Estimation and Confidence Intervals for ‘Safe Doses’
A ‘safe dose’ x specifies a tolerable increment in the cancer rate over the spontaneous cancer rate. For example, let D 108 and then define x by the equation D FTŁ , x; a, b FTŁ , 0; a, b
9
Estimation of ‘Safe Doses’
9.16
160
where a, b are the true parameter vectors and TŁ is a conveniently chosen ‘exposure time’ (typically the duration of the experiment). Equation (9.16) implies that ‘tolerance’, TolTŁ , which is a permissible percentage reduction in the spontaneous tumor-free proportion, is given by TolTŁ D FTŁ , x; a, b/FTŁ , 0; a, b D 1 [FTŁ , 0; a, b]1 .
9.17
However, if TolTŁ rather than is specified, then x is the solution of the equation Ł
[ ln TolT ]
b
1
br T
Łr
D
rD1
a
as xs .
9.18
sD1
The mle of x is obtained by replacing br by bˆ r and as by aˆ s , (the respective mle). Obtaining a lower confidence limit for x is more difficult since the asymptotic ˆ bˆ is not known. Hartley and Sielkin [1977] propose a more covariance matrix of a, direct approach. Split the data randomly into G groups obtaining separate mle aˆ s g, bˆ r g for each group and determining the safe dose estimate xˆ g from the equations Ł
[ ln TolT ]
b
1
bˆ r gTŁr
D
rD1
a
aˆ s gˆxgs ,
g D 1, . . . , G.
9.19
sD1
Now, assuming an approximate normal distribution for zg D ln xˆ g, lower confidence limits z1 can be computed from the equation
z1 D ln x1 D z t0.95,G1
1/2 G [zg z]2
gD1
GG 1
.
9.20
The above procedure may be biased even for moderate sample sizes. Hence, the authors suggest replacing z by ln xˆ where xˆ D [ ln TolT ]/u; u D G Ł
1
b G gD1
bˆ r gT
Łr
aˆ 1 g
9.21
rD1
If aˆ s g D 0 for g D 1, . . . , G and s D 1, . . . , s0 1, then 0 xˆ D [ ln TolTŁ ]1/s /us0 ; b G 1 Łr bˆ r gT aˆ s0 g. us 0 D G
gD1
rD1
Point Estimation and Confidence Intervals for ‘Safe Doses’
161
Table 9.1. Life data and its analysis Data set 1
2
Number of mice
Dose level
Developing cancer, %
137 130 121 211 39 125
100 250 500 100 250 500
3 45 64 27 100 92
Data set
1 2
Estimated safe dose
Lower confidence limit xl
PWM
NPWM
GPM
PWM
NPWM
GPM
0.71 0.08
0.21 0.03
1.1 0.07
0.17 0.01
0.01 0.02
0.31 0.01
Example Hartley and Sielkin [1977] analyzed the data of Dr. D. W. Gayor of the Biometry Division of the National Center for Toxicological Research, which consists of two sets of tumor-rate data. In each set there were D D 3 non-zero dosage levels. Doses of 2-acetylamine fluorene (2-AAF) were fed to a specified strain of mice. The spontaneous rate of bladder tumor was given as zero. Times to tumor were taken to be the number of days from weaning (21 days) of tumor incidence. In each of the two data sets the observations at each dosage level were randomly subdivided into G D 6 groups of approximately equal size, here TŁ D 550 which was approximately the termination time of the experiment. The TolTŁ D 0.9999. In the parametric Weibull model (PWM) a D 3, and in both PWM and the nonparametric Weibull model (NPWM) kˆ D 0.2, 0.4, . . . , 8.0. (see p. 163 for these models.) In the general product model (GPM) a D 3 and b D 8. Also, since the probability of cancer at dose x D 0 was given as zero, we set a0 D 0 in PWM and GPM while go D 0 in NPWM (see (9.8), (9.12) and p. 163 for these models). Table 9.1 contains the observed tumor frequency, the estimated safe dose xˆ and the lower 95% confidence limit xl on the true safe dose x for each data set.
9.5. The Mantel-Bryan Model
Let Y(x) denote the probit transform given by Fx L D 1 LYx
9.22
where L denotes the spontaneous cancer rate. For the conservative procedure, Mantel and Bryan [1961] assume that whatever form the true response curve may take over the region of extrapolation, the average slope is not less than the assumed one.
9
Estimation of ‘Safe Doses’
162
However, by setting z D ln x, the above probit model should satisfy z2 1
z2 z1
dY dz ½ c dz
or
9.23
jYz2 Yz1 j ½ z2 z1 c
9.24
z1
where z2 is the log dose from which downward extrapolation is made, and z1 is the log dose to which extrapolation is made. Since we are concerned with extremely low doses, we must note that the range of extrapolation is 1 < z1 < z2 . Hartley and Sielkin [1977] show that if F(x) is analytic at x D 0, the inequalities (9.23) and (9.24) . may be violated for small doses xi . For instance, if Fx L D an xn an > 0, n ½ 1 for small x, then from (9.22) and using the approximation x D fx/jxj for small x we have . ln x1 D z1 D lnf[Fz1 L]/an g1/n . D ln an /n C n1 lnf1 L2p1/2 exp[Y2 z1 /2]jYz1 j1 g . D [const Y2 z1 /2 ln jYz1 j]/n. Substituting this for z1 in (9.24) we see that (9.24) will not be satisfied as Yz1 ! 1. The authors carry out Monte Carlo and real data studies on the model considered here, the PWM and the NPWM given, respectively, by a
s Ft, x; a, b D 1 exp as x tk , as ½ 0, k > 0, and sD0 k
Ft, xd ; a, b D 1 expgd t , 0 g1 g2 , Ð Ð Ð , gD gdC1 gd gdC2 gdC1 ½ , xdC2 xdC1 xdC1 xd
d D 1, . . . , D
d D 1, . . . , D 2
where gd is the nonparametric representation of gxd ; a. The simulation study indicates that the safe dose estimates are robust to changes in the model for L(t;b). For estimation of x, the parametric models perform better than the nonparametric model.
9.6. Dose-Response Relationships Based on Dichotomous Data
Due to the number of chemicals which must be tested, the number of experimental units that are used in any one experiment is small. Hence, the dose levels should be set high enough to produce tumors in an appreciable fraction (for example,
Dose-Response Relationships Based on Dichotomous Data
163
101 or more) of the experimental units. The statistical problem is to use these high-dose data to estimate the dose level at which the cancer risk in the units would exceed the background cancer risk (that is, the risk at zero dose) by no more than some specified low amount, say 106 . This problem is commonly known as the low-dose extrapolation problem. Crump et al. [1977] address themselves to this problem when the data is dichotomous with a model different from that of Hartley and Sielkin [1977]. 9.6.1. Description of the Model
Any low-dose extrapolation procedure depends strongly on the assumed doseresponse relationship. Many dose-response models provide reasonably good fits to the data in the experimental dose range, but yield risk estimates that differ by several orders of magnitude in the low-dose range. Some models such as the one-hit or onestage model of Armitage and Doll [1961] assume that the risk is approximately linear in dose in the low-dose range. Also, the probit model of Mantel and Bryan [1961] and Mantel et al. [1957] has a dose-response function which has derivatives of all orders approaching zero as the dose level tends to zero. Thus, the dose-response function is extremely flat in the very low-dose region. As one alternative, Crump et al. [1977] consider the dose-response function of the form Fd D 1 expQd
9.25
where Q(s) is a polynomial with nonnegative coefficients and of known degree, that is Qd D
1
qi di
qi ½ 0 for all i
9.26
iD0
and qi D 0 for all but a finite number of i. If qi > 0, F(d) is linear in d for small d. On the other hand F(d) can be made arbitrarily flat by taking a sufficient number of . the low-order coefficients equal to zero, because at very low doses Fd F0 D 1 expq0 q1 d where 1 is the smallest positive integer for which q1 > 0. Notice that F(d) denotes the probability that a unit will develop a particular type of tumor during the duration of the experiment while under continuous exposure to a dose d of the drug. Point estimates of the coefficients qi i D 0, 1, . . . and hence the response probabilities F(d) for any dose level d can be obtained by maximizing the likelihood function of the data. When the degree of the polynomial Q is unknown, we need to estimate an infinite number of the qi . Conditions for existence and uniqueness of solutions to this infinite parameter maximization problem are given by Guess and Crump [1978]. The likelihood L does not achieve a maximum within the class of functions determined by (9.26). However, Guess and Crump [1978] show that the likelihood will achieve a maximum if the class of functions over which the maximum is taken is enlarged so as to include polynomials of infinite degree. That is, if we
9
Estimation of ‘Safe Doses’
164
redefine Q as Qd D
1
qi di C q1 d1 ,
qi ½ 0
i D 0, 1, . . .
9.27
iD0
where we define d1 D 1 if d ½ dn and zero otherwise (dn equals the highest test dose appearing in the likelihood function) and where all but finitely many of the qi are zero. The inclusion of the step function assures the existence of a maximum likelihood solution. The procedure for computing the function Q of the form (9.27) that maximizes the likelihood is called a global maximization. The asymptotic distribution of the maximum likelihood coefficient vector qˆ is derived by Guess and Crump [1978]. Because of the nonnegative constraints qi ½ 0, the asymptotic distribution of qˆ is not multinormal [Crump et al., 1977, p. 450]. Crump et al. [1977] compute confidence intervals for F(d) assuming that there exists a positive integer J such that qi > 0 for 0 i J and qi D 0 for i > J (because then, the usual asympˆ totic multinormality of qˆ and hence of Fd is valid). The simulation studies indicate that the asymptotic confidence intervals are nearly correct. 9.6.2. Testing of Hypotheses
The authors construct likelihood ratio test procedures for H01 : q0 > 0, q1 D Ð Ð Ð D qJ D 0 versus H11 : q0 > 0, qj ½ 0 for j D 1, . . . , J, qi > 0 for some J D 1, . . . , J, for H02 : q1 D 0, qj > 0 for all j D 0, 2, . . . , J versus H12 : qi > 0 for all j D 0, 1, . . . , J. Notice that when J D 1, the two test procedures coincide. Since nonparametric tests are available [Barlow et al., 1972, pp. 192–194] for H01 versus H11 , Crump et al. [1977] focus their attention on H02 versus H12 . They conduct a number of simulation experiments. For the results of the simulation, refer to table 2 of Crump et al. [1977] who also carry out some Monte Carlo goodness-of-fit studies.
Table 9.2. Experimental data and estimates of Q(d) obtained by global maximization of the likelihood function Dose level ppm Dieldrin [Walker et al., 1972] 0.00 1.25 2.50 5.001 DDT [Tomatis et al., 1972] 0 2 10 50 100 1
Number of responders/ number of animals tested
Estimated coefficients from a global fit to data
17/156 11/60 25/58 44/60
qˆ 0 D 0.113 qˆ 1 D 0.051 qˆ 2 D 0.040 qˆ i D 0, i > 2
4/11 4/105 11/124 13/104 60/90
qˆ 0 D 0.045 qˆ 1 D 0.002 qˆ 1 D 0.541 qˆ i D 0, otherwise
Data at higher dose levels are omitted.
Dose-Response Relationships Based on Dichotomous Data
165
ˆ The asymptotic theory of the estimate qˆ , Fd and dˆ a for some small a (0 < a) is given in appendix 1 of Crump et al. [1977], where da is such that Fda F0 D a and a is a known fixed value. Remark. The approach of Hartley and Sielkin [1977] utilizes time to tumor data whereas the approach of Crump et al. [1977] does not. However, when the data is dichotomous, the mle for the dose that yields a given risk should be the same by both procedures. However, the methods are different for obtaining the asymptotic confidence intervals. Crump et al. [1977] fit the polynomial Q(d) to certain data and part of their numerical study is presented in table 9.2.
Crump et al. [1977, p. 443] have graphed upper confidence intervals for added risk and lower confidence intervals on dose for the data sets in table 9.2.
9.7. Optimal Designs in Carcinogen Experiments
Optimal designs associated with estimating response probabilities in low-dose carcinogen are of much interest. Hoel and Jennrich [1979] develop optimal designs for a specific class of models and study their robustness with respect to the prior information regarding the model. They also provide an algorithm for constructing the optimal design. Regression with Nonconstant Variance Let f0 x, . . . , fk x be a Chebyshev system of linearly independent continuous functions on the interval [t,b]. Let Y(x) be a random variable with mean E[Yx] D b0 f0 x C Ð Ð Ð C bk fk x and variance varYx D s2 x > 0
for x 2 [t, b].
ˆ Let Yx be the weighted, by reciprocal variance, least squares estimate of E[Y(t)] based on size samples of n observations taken at points xi in the subinterval [a,b] of [t,b]. Since gj x D fj x/sx, j D 0, . . . , k is also a Chebyshev system on [t,b], one may use the results of Hoel [1966] pertaining to that system to conclude ˆ that var[Yt] can be minimized by using only k C 1 points in [a,b], provided the points and the number of observations at each point are suitably chosen. These points xi are such that they make a regression function of the above form satisfy E[Yxi ] D 1i sxi
i D 0, . . . , k
and are such that the graph of this regression function in the interval [a,b] will be inside the band formed by the two curves y D sx and y D sx. Figure 5 shows the geometry that characterizes the optimizing points for the special case of k D 3 for a regression function that satisfies the preceding conditions.
9
Estimation of ‘Safe Doses’
166
Fig. 9.1.
Since the optimizing points in [a,b] are such that the value of the regression function is either sx or sx and such that its derivative is the same as the derivative of sx or sx at each such point, except a and b, a set of equations based on these facts can be written down and solved for the xi . The weights can be obtained using the theory of Hoel and Jennrich [1979]. Finding an Optimal Design for Estimating P(t) Let the probability of an animal developing cancer due to a dose of level x of a carcinogenic material be given by k j Px D 1 exp aj x aj ½ 0. 9.28 jD0
The above model is widely used in the field of cancer dose response. Hoel and Jennrich [1979] take the model to be k bj xj D 1 exp[Bx], 9.29 Px D 1 exp jD0
where bj need not be nonnegative. Using the theory of Hoel [1966] pertaining to the Chebyshev’s system, one can show that k C 1 dose levels are sufficient for
Optimal Designs in Carcinogen Experiments
167
asymptotic optimality. For both theoretical and practical considerations, one is better off using as few points as possible, and k C 1 is the smallest number that enables the model to be estimated. Hoel and Jennrich [1979] show how to find a design that will minimize the asymptotic variance of an estimate of P(t) where t is any point in (0,a) and observations are taken in [a,b]. They do so by reducing the problem to that of optimal design for extrapolation in a Chebyshev regression model. Let x0 , . . . , xk be any set of k C 1 distinct points in [a,b]. Let N0 , . . . , Nk be the number of trials of an experiment that is performed at those points and let R0 , . . . , Rk denote the number of positive responses at the corresponding dose levels. Since there are k C 1 parameters and k C 1 sample points, estimates of the b values can be obtained either by the method of maximum likelihood or least squares by solving for b from the equations k
j
bj xi D ln1 pˆ i D zˆ i i D 0, . . . , k,
where
9.30
jD0
ˆ i DRi /Ni D pˆ i i D 0, . . . , k. Px Also, p
Ni ˆzi zi ¾
p
Ni pˆ i pi g0 pi D
p
Ni pˆ i pi /1 pi
9.31
where gx D ln1 x, pi D Px and zi D ln[1 Pxi ]. i Let Ni D ci N, ci ½ 0 and ci D 1. Then zˆ 0 , . . . , zˆ k are independent and asymptotically normal variables. Hence, as N ! 1, p
p Ni pˆ i pi pi qi pi Ni ˆzi zi ¾ var D D . 9.32 var 1 pi 2 1 pi 2 qi ˆ ˆ Since the sample function Px D 1 exp[Bx] passes through the k C 1 sample points, its equation can be written in the form k ˆ Px D 1 exp Lj xˆzj 9.33 jD0
where Lj x is the Lagrange polynomial of degree k that satisfies Lj xi D dij . Then, letting gx D ex , and proceeding as in (9.31), one obtains p p ˆ N[Px Px] ¾ Lj xQx Nˆzj zj , where Qx D 1 Px. If v2j D pj /qj , then var
9
p
2 ˆ N Px Lj xQ2 xv2j /cj . Px ¾
Estimation of ‘Safe Doses’
168
If the cj are chosen to minimize the asymptotic variance at x D t, subject to cj D 1 then at that point 2 p
ˆ Pt ¾ N Pt jLj tQtjvj . 9.34 var j
So, the minimization of this variance is now reduced to minimizing jLj tQtjvj Gt D
9.35
over all choices of x0 , . . . , xk where a x0 < . . . < xk b. Now applying the theory developed by Hoel and Jennrich [1979], (which assures the existence of an optimal set of points) one can obtain an optimal set of points by finding a regression function of the form yx D
k
bj xj Qx
9.36
jD0
that satisfies the geometrical properties of figure 9.1 with sx D [PxQx]1/2 . The authors recommend the method of nonlinear least squares technique for finding optimal designs (if a computer is available) as an alternative approach. Optimal Design for Estimating Pt P0. Hoel and Jennrich [1979] show that the same set of k C 1 points that are optimal for estimating P(t) are also optimal for estimating Pt P0. Optimal Design for Estimating t Such That Pt D c. Let Px D Fx, b where the b values occur in the exponent of P(x). As an estimate of ˆt, choose t that satisfies ˆ D c. By expanding F about t and b, one can show that asymptotically Fˆt, b 1 p p @F Nˆt t ³ N[Fˆt, b Ft, b]. 9.37 @x Hence the asymptotic variance of ˆt is a constant multiple of the asymptotic variance of Fˆt, b D Pˆt. This constant depends on t and b, but not on the design. A design that is asymptotically optimal for estimating P(t) is also asymptotically optimal for estimating t. A corresponding result holds if one is interested in estimating t such that Pt P0 D c. Hoel and Jennrich [1979] point out that it does not matter asymptotically j whether we consider restricted polynomials a x or unrestricted polynomials j bj xj in their model. They provide numerical techniques for obtaining optimal designs. Examples For obtaining optimal designs, one should know the response function P(x). Preliminary experiments can be carried out in order to determine the prior probability function P(x).
Optimal Designs in Carcinogen Experiments
169
For illustrating the relative advantage of using optimal designs on models of the type considered here, the data of two experiments that were considered appropriate for this type of modelling will be studied. The first data is cited in Guess et al. [1977] and the second by Guess and Crump [1976]. The maximum likelihood fit of P(x) to the data of vinyl chloride in table 9.3 using model (1) as given by Guess et al. [1977] is P1 x D 1 exp0.000267377x. The corresponding fit of P(x) to the data of benzopyrene in table 9.3 using model (1) of Guess and Crump [1976] is P2 x D 1 exp0.000097x2 0.0000017x3 . Let t D 0.5 be the target value and let [a, b] D [1, 500]. Then t falls outside of this interval as required by the theory of extrapolation [Hoel and Jennrich, 1979]. Since there will not be any appreciable difference in the optimal design, we consider [a, b] D [0, 500], thereby reducing the problem to that of interpolation. Using P1 x and carrying out the computations, the authors obtain, for estimating Pt P0, x0 D 0,
x1 D 47.2,
n0 D 0,
n1 D 0.84N,
x2 D 326,
x3 D 500
n2 D 0.12N,
n3 D 0.04N
ˆ P0] ˆ var[Pt D 3.36 ð 106 , where N denotes the total number of observations. (Note that k D 3 since 4 dose levels were used.) The corresponding variance for the spacing and weighting of table 9.3 was found to be 3.6 times as large as that for the optimal design. General Comments. Empirical results carried out by the authors indicate that an optimal design is likely to yield better estimates of a response function value than will a traditional design. Optimal designs are insensitive to target values selected, provided it is small relative to the interval of experimentation. Also, optimal designs
Table 9.3. Data on vinyl chloride and benzopyrene Vinyl chloride
Benzopyrene
dose (x)
number of animals
number responding
dose (x)
number of animals
number responding
0 50 250 500
58 59 59 59
0 1 4 7
6 12 24 48
300 300 300 300
0 4 27 99
9
Estimation of ‘Safe Doses’
170
are somewhat insensitive to those modifications of the prior function which do not change the nature of the polynomial in the exponent of P(x). Since Px P0 is approximately linear for small values of x, one might consider fitting a linear function and use it for estimation, realizing of course that it would be biased. Although the variance decreases rapidly as the degree of the polynomial in P(x) decreases, the variance increases rapidly as the interval of observations is shortened. The gain in decreasing the bias by considering a short interval near the origin may be considerably setoff by the increase in the variance. Asymptotic confidence intervals for P(t) or Dt D Pt P0 can be obtained ˆ ˆ D Pt ˆ P0 ˆ by using the asymptotic normality of Pt or Dt and the use of formula (7) and p !2
ˆ var N Dx jLj 0Q0 Lj xQxjvj . 9.38 Dx ¾ Corresponding confidence intervals for t in the inverse problem can be obtained by using asymptotic normality and formulas (9.34) and (9.37) or (9.38) and (9.37).
Optimal Designs in Carcinogen Experiments
171
10 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
Bayesian Bioassay
10.1. Introduction
In a quantal bioassay, there is a set of dose levels at which experimental units are tested. Each unit has a threshold or tolerance level. If the dose is less than the tolerance level, the subject does not respond; otherwise a response is observed. Let Ft denote the tolerance distribution which is the proportion of the population with tolerance levels less than or equal to dose level t. At each dose level we observe a binomial variable with parameters ni , the number of units tested at the ith dose and pi D Fti where ti is the ith dose. Often, there is some prior information available from previous assays using similar subjects and similar dose levels. We would like to incorporate such prior information into our estimation procedures. Kraft and van Eeden [1964] characterize the class of all prior distributions for F, find the corresponding Bayes estimates for quadratic loss functions and apply the results of LeCam [1955] to show completeness of the closure of this class of estimates for a certain topology. Let Wx be an arbitrary (fixed) distribution function. If G, a nondecreasing bounded between 0 and 1 function on the real line, is the statistician’s decision, and F is the true distribution determining the distribution of k (the number of responses out of n at dose level t) then the loss is L(F,G) D F G2 dW. 10.1 For this loss the Bayes estimator is the conditional expectation of the process for given k. In a typical bioassay problem, F D Fq D F0 x q for some fixed F0 . Then, the usual loss function is L(Fq1 , Fq2 D q1 q2 2
172
since [Fq1 u Fq2 u]2 D q1 q2 2 [F0x u]2 q1 < x < q2 . The loss function of Kraft and van Eeden [1964] is Fq1 Fq2 2 dW D q1 q2 2 [F0x u]2 dW. The two loss functions are in local agreement if [F0x u]2 dW < M. Based on the method called z-interpolation, Kraft and van Eeden [1964] were able to compute explicitly the estimates of Ft for t of the form j/21 , j is odd and 1 > 3. The well-known nonparametric estimator for the mean of Ft is the SpearmanKarber estimator which is the maximum likelihood estimator when we have equal spacing of doses and equal sample sizes at each dose. This is given by ˆ sk D tT C d/2 d m
T
pˆ i
10.2
iD1
where d is the spacing between doses and tT is the largest dose, and pˆ i D ki /ni , ki denoting the number of responses at dose i. Notice that this estimator makes no use of the available prior information. Bayesian approach to the bioassay problem has been considered by Kraft and van Eeden [1964], Freeman [1970], Ramsey [1972], Tsutakawa [1972] and Wesley [1976]. The last author, besides proposing new estimators, has surveyed the previous work and has served as a basis for part of this chapter.
10.2. The Dirichlet Prior
An important and useful prior in nonparametric Bayes approach is the Dirichlet process prior. A good description of this prior and its properties are given by Ferguson [1973]. Definition Let be a set and b a s-field of subsets of . Let m be a finite, nonnull, nonnegative, finitely additive measure on (, b). A random probability measure P on (, b) is said to be a Dirichlet process on (, b) with parameter m if for every K D 1, 2, . . . and measurable partition B1 , B2 , . . . , BK of , the joint distribution of the random probabilities [P(B1 , . . . , P(BK ] is a Dirichlet process with parameters [m(B1 , . . . , m(BK ] [Antoniak, 1974].
The Dirichlet Prior
173
We can think of m as a measure generated by a multiple of a distribution function m1, t] D M Ð at where M is a positive real number and a is a distribution function. For any integer T ½ 1 and any set of real numbers t1 , . . . , tT such that t1 . . . tT , let Y1 D P1, t1 ]jF D F(t1 Y2 D P((t1 , t2 ]jF D F(t2 F(t1 .. . YTC1 D P((tT , 1jF D 1 F(tT and b1 D m1, t1 ] D M Ð at1 b2 D mt1 , t2 ] D M Ð [at2 at1 ] .. . bTC1 D mtT , 1 D M[1 atT ]. Then 0
Y D Y1 , . . . , YTC1 ¾ Dirichletb1 , . . . , bTC1 D DT b1 , . . . , bTC1 . Notice that the family of prior distributions DT is a consistent family. That is, all marginal distributions belong to the same class of priors. For instance, the marginal distribution of Y1 , . . . , YT is DT1 b1 , . . . , bT1 , bT C bTC1 . Thus, the tolerance distribution F is defined as a random variable. The density of Y1 , . . . , YTC1 can be written as fY y1 , . . . , yT D
TC1 M bj 1 yj , TC1 jD1 bi iD1
where yTC1 D 1
T
yi , on S D
iD1
y1 , . . . , yT : y1 ½ 0, . . . , yT ½ 0,
T
yi 1
iD1
and zero elsewhere. Alternatively, we can write the density of Y1 , . . . , YTC1 as proportional to TC1
bj
yj
10.3
jD1
10
Bayesian Bioassay
174
where averaging is done with respect to the measure TC1 T dyj yj . dv D jD1
10.4
jD1
However, the modes of the two densities are not the same. For the sake of convenience, we use the first representation of fT y1 , . . . , yT as the prior density. One can interpret a as a ‘prior’ distribution function summarizing one’s prior idea of the tolerance distribution function. It is the mean of the random distribution function F. M is equivalent to the amount of confidence we have in the prior, or the number of observations that the prior distribution function is worth.
10.3. The Bayes Solution for Squared Error Loss
With the notation developed in sections 10.1 and 10.2, the Bayes estimate of Ft under squared error loss is the mean of the posterior distribution of Ft. According to Antoniak [1974], for more than one dose, the posterior distribution can be written as a mixture of Dirichlet distributions, and hence the mean can easily be calculated. Then the mean of the tolerance distribution can be estimated by the mean of the ˆ estimated tolerance distribution Ft: If at is normal mp , s2 then the mean of the estimated tolerance distribution is T ˆ i ˆ iC1 Ft Ft mFˆ D mp C s t mp ti mp iD0 iC1 s s
ti mp tiC1 mp ð j j 10.5 s s ˆ i denotes the estimate of Fti at dose ti and we take t0 D 1 and tTC1 D where Ft 1. This estimate is cumbersome to compute since it involves nested summation signs. For a proof of (10.5), see Wesley [1976, Appendix C].
10.4. The Alternate Bayes Approaches
Ramsey’s Approach The posterior distribution of F is proportional to the product of the prior density and the likelihood. The joint mode of the posterior will be used to summarize the posterior distribution. Note that the log of the posterior density is proportional to TC1
ki ln Fti C
ni ki ln[1 Fti ] C
iD1
The Alternate Bayes Approaches
TC1
Mdi ln[Fti Fti1 ],
iD1
175
TC1 where di D atiC1 ati , i D 1, . . . , T C 1, di D 1 and M is a positive i constant. First, we reparametrize the posterior density by setting qi D Fti Fti1 for i D 1, . . . , T. Then Fti D q1 C Ð Ð Ð C qi and the qi are subjected to the constraints q1 C Ð Ð Ð C qT D 1. Now, we maximize the natural logarithm of the posterior density subject to the constraint using the Lagrange method. The difference between the partial derivatives with respect to qi and qiC1 yields (10.6). Setting@/@Fti ln posterior D 0 we obtain
di ki diC1 ni ˆ Fti D M 10.6 ˆ i ] ni ˆ i Ft ˆ i1 ˆ i [1 Ft ˆ iC1 Ft ˆ i Ft Ft Ft
if ti is an observational dose and 0D
at ai aiC1 at , ˆ ˆ iC1 Ft ˆ Ft ˆ i Ft Ft
ai D ati
10.7
if t is not an observational dose and ti < t < tiC1 . Notice that we were able to obtain (10.7) because at t the prior is still defined and for ti < t < tiC1 , we replace the interval ti ; tiC1 by the two intervals ti ; t and t;tiC1 . However, the likelihood does not have the term associated with t. When M D 1, the posterior distribution is dominated by the prior distribution. The prior d.f. tends to a degenerate d.f. giving probability one to the prior mode. On the other hand when M ! 0, the mode of the joint density is the isotonic regression estimator introduced by Ayer et al. [1955]. The solution may be written as 0 for t < t1
s s ˆ D min max kj nj for ti t tiC1 and i D 1, . . . , T, Ft i s T 1 r i
jDr
10.8
jDr
where t1 , . . . , tT are the observational doses and tTC1 is taken to be 1. The modal function given by (10.8) is uniquely defined only at the observational doses t1 , . . . , tT . The interpolation method is arbitrary, subject to the constraint of monotonicity. For 0 < M < 1, the joint posterior density is convex and unimodal. A unique ˆ referred to as the posterior modal function, exists such nondecreasing function Ft that for any set of doses t1 , . . . , tT the mode of the posterior density occurs at ˆ 1 , . . . , Ft ˆ T ]. [Ft Setting the bracketed quantities on the left and right side of (10.6) equal to ˆ as the mode of the likelihood function and the mode of the zero would define Ft prior density, respectively. Since the bracketed quantity on the left side of (10.6) is ˆ i , one can interpret weighted by the Fisher information in the likelihood about Ft ˆ first solve the TM as a measure of the information in the prior. To determine Ft ˆ simultaneous equations in (10.6) for the Fti . Equation (10.7) tells how one should interpolate or extrapolate. ˆ ˆt D g. If for Suppose we now wish to determine the dose level ˆt such that F ˆ some ti , Fti D g, then ˆt D ti . If not, determine the pair of observed doses ti and tiC1
10
Bayesian Bioassay
176
ˆ i < g < Ft ˆ iC1 (with the understanding that t0 D 1 and tTC1 D 1). for which Ft The potency at ˆt between ti and tiC1 may be included in the prior since this has no effect on the posterior at the observational doses; hence we include it in the posterior. Setting the partial derivative w.r.t. Fˆt of the joint posterior density to zero yields: aˆ ai aiC1 aˆ D where aˆ D aˆt. ˆ ˆt ˆFtiC1 F ˆFˆt Ft ˆ i That is aˆ ai aiC1 aˆ D . ˆ i ˆ iC1 g g Ft Ft Letting ˆ iC1 Ft ˆ i ] ˆ i ]/[Ft x D [g Ft
10.9
we obtain aˆ D aiC1 x C ai 1 x D ai C xaiC1 ai .
10.10
The interpolation formulae (10.9) and (10.10) are linear and they impose the condiˆ should have the same shape as the prior mode at tion that the posterior mode Ft piece-wise between observational doses. Ramsey [1972] provides several examples based on artificial data. In all these examples the posterior mode is found by maximizing the posterior density subject to inequality constraints rather than solving (10.6) directly. The author employs an algorithm which converts the problem into an unconstrained maximization problem by the introduction of a ‘penalty’ function. The examples suggest that one observation per dose may be the best design for estimating ED50 . Turnbull’s Approach Turnbull [1974, 1976] describes a method for estimating (nonparametrically) a distribution F when the data are incomplete due to censoring. In the bioassay, the data is either right or left censored. The technique of Turnbull [1974, 1976] consists of maximizing the likelihood by an iterative procedure called the ‘self-consistency algorithm.’ To adapt his procedure to the Bayesian situation, we treat the prior distribution as ‘prior’ observations. Let T 3, and consider the Dirichlet prior b
b
b
b
fY D cy1 1 y2 2 y3 3 y4 4 where y1 D Ft1 , y2 D Ft2 Ft1 , y3 D Ft3 Ft2 and y4 D 1 Ft3 , and c is a constant. We set bi D 1 for i D 1, . . . , 4 so that M, the prior sample size, is 4. Suppose that we have a response at t1 , a non-response at t2 and a response at t3 . Then the usual likelihood is k
1k3
fKjY D c0 y11 1 y1 1k1 y1 C y2 k2 1 y1 y2 1k2 y1 C y2 C y3 k3 y4 D c0 y1 1 y1 y2 y1 C y2 C y3 .
The Alternate Bayes Approaches
177
On the other hand, if we treat the prior as additional observations, the likelihood becomes fŁK D c00 y1 y2 y3 y4 y1 1 y1 y2 y1 C y2 C y3 , which is proportional to the posterior distribution fY fKjY dY fY fKjY
10.11
S
under prior fY and observations K. Thus the self-consistency algorithm with the prior as additional observations maximizes the posterior as we hoped. In general let bi D gi /g where gi and g are integers. Then the posterior is proportional to 1/g TC1 T TC1 T b kj g kj g n k n k g i i yi zj 1 zj j j D yi zj 1 zj j j , f1 D iD1
jD1
iD1
jD1
where zj D
j
yi .
iD1
If we assume that there are gi prior observations in the interval yi , then multiplying each actual observation by the common denominator g, the self-consistency algog rithm will maximize f D f1 . Since the transformation is monotone, the maximizing values of fyi g will also maximize f1 which is proportional to the posterior. Suppose we have doses t1 , t2 , t3 and we desire an estimate at dose tŁ where t1 tŁ t2 . Then we set up the new intervals 1, t1 , t1 , tŁ , tŁ , t2 , t2 , t3 , t3 , 1. The number of prior observations assigned to the new intervals t1 , tŁ and tŁ , t2 will be according to the prior distribution function and their sum will equal the number previously assigned to t1 , t2 . The estimates of the tolerance distribution at a nonobservational dose t are more easily obtained from (10.7) yielding ˆ D Ft ˆ i C Ft
at ati ˆ i ] for ti < t tiC1 i D 0, . . . , T ˆ iC1 Ft [Ft atiC1 ati 10.12
where we set t0 D 1 and tTC1 D 1. The expression for the mean of Ft is again given by (10.5). The estimates based on the mode of the posterior are easier to compute than those based on quadratic loss and seem to give estimates very close to those obtained from the mean of the posterior, especially for symmetric distributions.
10.5. Bayes Binomial Estimators
Let us consider a simple dose experiment consisting of n subjects. The conjugate prior is the beta distribution B[Mat, M1 at]. The Bayes estimate of p, the
10
Bayesian Bioassay
178
probability of a response at t, is pˆ BB D [Mat C k]/M C n
10.13
where k is the number of responses out of n trials. If we follow this procedure for each dose level, we would have estimates of the tolerance d.f. at each dose and these can be combined to obtain a Spearman-Karber type of an estimate for the mean. This will be called the Bayes-binomial estimator ˆ BB . For equally spaced dose introduced by Wesley [1976] and will be denoted by m levels (d being the common distance) and equal sample sizes n at each dose, we have ˆ BB D tT C d/2 d m
T
pˆ i BB D tT C
iD1
dM d dn ki ai 2 MCn MCn n
ˆ sk C Dm
dM pˆ i ai , MCn
since d pˆ i , pˆ i D ki /n. d 2 ˆ BB will be close to m ˆ sK if the observed proportion of responses at each dose is m close to the prior d.f. Although this estimating procedure treats each dose as an independent binomial experiment, it is relatively easy to compute.
ˆ SK D tT C m
10.6. Selecting the Prior Sample Size
In the nonparametric Bayes framework, the prior sample size M can be viewed as the number of ‘observations’ we are willing to give to the expected proportion of responses compared to the number of actual observations we are going to take. In the parametric Bayes approach, let F be normal with mean m and variance t2 and let m be normal with mean 0 and variance s2 . Since we will be concerned with the ratio s/t, without loss of generality we can set t D 1. Hence, 1 1
E[F(t)] D s
t mfm/s dm 1
1 2
2 t mfm/s dm.
1
E[F t] D s
1
Then let VarŁ F(t) D E[F2 t] fE[F(t)]g2 . On the other hand, if we approach the problem using the Dirichlet prior, we would take our prior d.f. at to be t (since t D 1). Also a is the mean of
Selecting the Prior Sample Size
179
the Dirichlet prior and should correspond to the underlying tolerance distribution function Ft. Var0 F(t) D at[1 at]/M C 1 0
where Var denotes the variance under the Dirichlet prior. Equating VarŁ F(t) with 0 Var Ft and solving for M, we have at[1 at] M(t) D 1. VarŁ F(t) We hope that this estimate of M will be fairly constant over the dose range; it also depends on s/t. We could consider averaging over the doses tj or consider a conservative estimate of M to be M D min Mtj . 1 j T
Example For s D 1 (or s D t), VarŁ F0 D 1/12. If at D t, then a0 D 1/2. Thus M0 D 2.
10.7. Adaptive Estimators
Wesley [1976] has modified the cross-validation approach of Stone [1974] and the pseudo-Bayes estimators of Bishop et al. [1975] to the bioassay problem. We will consider these in the following. At each dose we have a separate binomial experiment. Hence, we can apply the multinomial techniques at each dose level to estimate the probability of a response. Alternatively, we can estimate the multinomial cell frequencies by Np˜ j p˜ j1 where N is the total number of experimental units and p˜ j is the isotonic regression estimate of the tolerance distribution at dose j. Then, the estimate of the mean of the tolerance distribution is of the Spearman-Karber type and is given by T i T d d ˆ D tT C d q˜ j D tT C d T i C 1˜qi m 2 2 iD1 jD1 iD1 where q˜ j is the estimate of the multinomial cell probability of having a tolerance greater than dose j 1 and not exceeding dose j. Using the multinomial formulas of Stone [1974] with xi estimated from the differences between successive isotonic regression estimates gives TC1 TC1 w D T Ð n2 x˜ j T Ð n 2 x˜ j [˜xj lj T Ð n 1] jD1
jD1
1 TC1 CT Ð n [˜xj lj T Ð n 1]2
10.14
jD1
10
Bayesian Bioassay
180
where xj D number of observations in the j-th cell, lj D prior probability for the j-th cell, x˜ j D T Ð np˜ j p˜ j1 assuming equal sample sizes at each dose. Then qˆ j w D wlj C 1 wp˜ j p˜ j1 and d T i C 1ˆqi w. d 2 iD1 T
ˆ Stone D tT C m
10.15
If w D M/M C N, then we get a form similar to that of the Bayes-binomial estimator. This cross-validation approach can be viewed as choosing a value of M based on the data. Wesley [1976] obtains another adaptive estimator by applying the pseudo-Bayes approach [Bishop et al., 1975] to the Bayes-binomial estimate. Recall M Ð atj n kj C Ð and MCn MCn n dM dn ki d D tT C ai . 2 MCn MCn n
pˆ j BB D ˆ BB m So
ˆ BB D MSEm
2 d dM pi ai C tT C d pi m MCn 2 2 2 n d C pi 1 pi . 2 M C n n
Maximizing this for M gives
d pi 1 pi n tT C d pi m pi ai d 2 M D 2 !. d d pi ai C tT C d pi m pi ai 2 ˆ SK and pi by pˆ i D ki /n, then If we estimate m by m T pˆ i pˆ 2i
ˆ BB D " M
1 T
#2 D
pˆ i ai
1
Adaptive Estimators
n T 1
T
T
1
2
ki
ki
2n
T 1
k2i
1
ki
T 1
ai C n2
T
2 .
ai
1
181
Hence, Wesley’s estimate of the probability of response at tj is ˆ BB Ð aj n kj M C Ð , and ˆ BB C n M ˆ BB C n n M d ˆ BB . ˆ BB D tT C d ˆ M pˆ j M m 2
ˆ BB D pˆ j M
10.16
10.8. Mean Square Error Comparisons
Let us assume 3 dose levels and 5 subjects at each dose, and the doses are equally spaced d D 2 units apart and are symmetric about the prior mean D 0.6. Also assume that the tolerance distribution is normal m, s2 , where s2 D 0.25. Wesley [1976] graphs the mean square errors of various estimators. Note that the ˆ m for different values of m is symmetric about the prior mean. graph of MSE m, For the Turnbull-Bayes and Bayes-binomial estimators, M is taken to be 1.0. Wesley [1976] draws the following conclusions: Changing M has no effect on the MSE of the Spearman-Karber or the adaptive estimators. For large M, the MSE of Bayes-binomial approaches that of TurnbullBayes estimator. For small M, the Bayes-binomial is equivalent to the SpearmanKarber estimate. When M is large, the Turnbull-Bayes and Bayes-binomial estimators have lower MSE at values of the true mean close to the prior mean (within 0.5–1 SD). However, they do increasingly worse as the true mean moves away from the prior mean. The Bayes-binomial estimator is not much worse than the Turnbull-Bayes estimator when the prior is accurate, and is better for true means far from the prior since it is less dependent on the means. Wesley [1976] graphs the mean square errors for distributions other than normal, namely uniform, Cauchy, and angular. The proposed estimators are fairly robust w.r.t. the specific form of the tolerance distributions. In conclusion, the experimenter must decide the relative merits of protection when the prior is wrong and accuracy when the prior is correct. If no prior knowledge is available use the Spearman-Karber estimate or go to a two-stage assay. If we have a good prior knowledge, the Turnbull-Bayes estimator is suitable. If the prior is accurate within 0.5–1 SD, setting M D 5 or 10 does a much better job than disregarding the prior information and simply using the Spearman-Karber estimator.
10.9. Bayes Estimate of the Median Effective Dose
Although the up and down and the stochastic approximation methods are simple to apply and have high efficiency, the sequence of dose levels they propose may be suboptimal, especially for small sample sizes. Furthermore, in the case of the up and down method, the choice of the stopping rule is left to the experimenter, so that the payoff between the cost of further experimentation and that of less accurate
10
Bayesian Bioassay
182
estimation is not considered explicitly. Marks [1962] considered the problem of Bayesian design in estimating (sequentially) the mean of a quantal probit response curve and obtained detailed results in the special cases when the prior for the median g is a two-point distribution or when the dose-response curve is a step function. Freeman [1970] considered the sequential design of experiments for estimating the median lethal dose parameter of a quantal logistic dose-response curve. He employs the Bayesian decision theory to obtain a stopping rule and a terminal decision rule for minimizing the prior expectation of the total cost of observation plus (quadratic) estimation loss. He numerically evaluates the optimal strategies for the special cases in which observations can be obtained only at one, two or three dose levels. He compared the expected losses with those of the up and down method using a fixed sample size equal to the prior expectation of the number of trials under the sequential design. Surprisingly, he found that the efficiency of the up and down method is in excess of 90%. Freeman [1970] assumes that the scale parameter in the logistic response function is known. He employs the dynamic programming equations. Recall that the logistic response function is given by p D [1 C ebxg ]1 1 < g < 1 where g denotes the median and b the scale parameter. We observe a quantal response variable Y where PY D 1 D p and PY D 0 D 1 p. Let r be the number of 1’s in n independent trials. Then r is distributed binomially with parameters n and p. g is assumed to have the conjugate prior given by fgjr0 , n0 /
er0 bxg [1 C ebxg ]n0 r 1
which corresponds to the usual beta prior / p00 1 p0 n0 r0 1 for p. Freeman [1970] takes r0 D 1 and n0 D 2 to make the relevant distributions proper. Let c denote the cost per observation (assumed to be independent of the outcome) and kgˆ g2 denote the loss incurred in estimating g by gˆ . This corresponds to the loss function $ % p1 pˆ 2 k log b2 pˆ 1 p for p. At a general point (r,n), following Lindley and Barnett [1965], let D(r,n) D loss incurred by stopping and using the optimal estimate for g, BŁ r,n D the loss incurred by taking one further observation and using the optimal strategy thereafter, B(r,n) D the loss incurred by using the optimal strategy.
Bayes Estimate of the Median Effective Dose
183
The dynamic programming equations are BŁ r,n D c C r/nBr C 1, n C 1 C [n r/n]Br,n C 1, B(r,n) D min[BŁ (r,n),D(r,n)], D(r,n) D posterior variance of g. Computations yield fgjr,x D
b Ð n C n0 1! erCr0 bxg . Ð n C n0 r r0 1!r C r0 1! [1 C ebxg ]nCn0
In order to find the posterior mean and variance of g, one can make the transformation 1z z1 D 1 C ebxg i.e. ln D bg x. z and reduce the posterior density of g to a beta density. Upon noting that jdg/dzj D fbz1 zg1 we find fzjr, x D
n C n0 1! znCn0 rr0 1 1 zrCr0 1 , n C n0 r r0 1!
when n0 D 2 and r0 D 1, we can write the density of z as n C 1! 1 z r n fzjr, x D z. r!n r! z Now since g x D b1 logf1 z/zg, we can obtain the mean and variance of g in terms of the mean and variance of logf1 z/zg. Also note that @k @rk
$
r!n r! n C 1!
%
D
%k 1 $ 1z 1z k n ln z dz. z z 0
Thus,
1z k dk /drk fr!n r!g , k D 1, 2, . . . . D E ln r!n r! z
Hence, $ % d 1z d D ln r! C flnn r!g E ln z dr dr 1 1 1 C C ÐÐÐ C 1 C ÐÐÐ C 1 D r r1 nr . D logfr/n rg, and
10
$ %2 n!00 1z 2r!0 fn r!g0 fn r!g00 D C C E ln z r! r! n r! n r!
Bayesian Bioassay
184
where primes denote derivatives with respect to r. Thus, $ % 1z var ln D ln r!00 C flnn r!g00 z . D ln r!0 C f lnn r!g0 D n/rn r. From these we can readily obtain the expressions for Eg and var g. D(r,n) D k Vargjr,x.
10.10. Linear Bayes Estimators of the Response Curve
Bayesian nonparametric estimators of the tolerance distribution (especially these based on Dirichlet priors) are hard to compute. Kuo [1988] obtains linear Bayes estimators which are fairly easy to compute. These will be described below. Using the usual notation, let ni subjects be given dose ti and ri denote the number of responses at dose ti i D 1, . . . , T. Let Ft denote the response function at dose t. The Bayes estimator of F using Ferguson’s [1973] Dirichlet process prior require the specification of the index m where m1, t D Mat, M D m1, 1. It is well-known that EfFtg D at and varfFtg D atf1 atg/M C 1. Note that a represents the prior belief on the shape of F, and M denotes the degree of concentration of F around a. Since the Bayes estimator of F for the dose levels is a mixture (see, for instance, equation (10.5)), which becomes increasingly intractable when T increases. Hence the need for methods of approximating the Bayes rule. If the loss function is LF, d D fFt dtg2 dWt where Wt is a known weight function. We restrict the estimators dt to the linear space generated by r1 , . . . , rT and 1. A linear Bayes rule is the Bayes rule in the linear space. The solution is obtained by point-wise minimization. That is, for each t, we find constants l1 , . . . , lT , l0 depending on t which minimize EfFt l0 l1 r1 Ð Ð Ð lT rT g2 . Note that it is easier to evaluate the linear Bayes estimate than to evaluate the Bayes estimate, namely, EfFtjr1 , . . . , rT g. Thus, the linear Bayes estimate is given by [Kuo, 1988, theorem 1] ˆ j D atj C dt
T
$
ni lˆ i j
iD1
% ri ati ni
10.17
where lˆ i j D jAi, tj j/j covrj.
Linear Bayes Estimators of the Response Curve
10.18
185
covr denotes the variance-covariance matrix of r1 , . . . , rT , Ai, tj denotes this matrix with the ith column replaced by [covr1 , Ftj , . . . , covrT , Ftj ]0 , and j Ð j denotes the determinant of (Ð). Also, note that lˆ 0 j D atj
T
lˆ i jni ati .
10.19
iD1
Then, ˆ D lˆ 0 j C dt
T
lˆ j jri .
10.20
iD1
Straightforward computations yield varri D Efvarri jFg C varfEri jFg D Efni Fti gf1 Fti g C varfni Fti g D ni M C ni ati f1 ati g/M C 1,
10.21
covri , rj D Eri rj Eri Erj D ni nj [EfFti Ftj g ati atj ] $ % ni nj ati f1 atj g/M C 1 if i j D ni nj atj f1 ati g/M C 1 if i > j covri , Ftj D ni EfFti Ftj g ni EfFti gEfFtj g $ % ni ati f1 atj g/M C 1 if i j D . ni atj f1 ati g/M C 1 if i > j
10.22
10.23
Next, suppose we want to estimate F(t) at t not coinciding with the dose levels, say ˆ is obtained by linearly interpolating between dt ˆ j and dt ˆ jC1 tj < t < tjC1 . Then dt [see Kuo, 1988, theorem 2]. Thus, ˆ D atjC1 at dt ˆ j C at atj dt ˆ jC1 dt atjC1 atj atjC1 at
10.24
for tj < t < tjC1 ; j D 0, . . . , T with ˆ 0 , atTC1 D 1 D dt ˆ TC1 , at0 D 0 D dt where lˆ 0 t D at
T
lˆ i tni ati ,
iD1
10
Bayesian Bioassay
186
and lˆ i t is obtained from the right-hand side of (10.18) with Ftj replaced by F(t). Notice that covri , Ft is obtained from (10.23) by replacing tj by t. One can easily verify that for i D 1, . . . , L, covri , Ft D
atjC1 at at atj covri , Ftj C covri , FtjC1 . atjC1 atj atjC1 atj 10.25
Also note that the coefficients lˆ i j can also be obtained from [lˆ 1 j, . . . , lˆ T j]0 D fcovrg1 [covr1 , Ftj , . . . , covrT , Ftj ]0 .
10.26
ˆ j is asymptotically unbiased and consistent Kuo [1988, theorem 3] shows that dt for Ftj as ni ! 1 for i D 1, . . . , T. Further, dtj ! rj /nj as M ! 0 and dtj ! atj as M ! 1since lˆ i j ! 0. Remarks. One of the shortcomings of the linear Bayes estimator is that it may not be nondeˆ creasing. If monotonicity of dt is a must, then the pool-adjacent-violators algorithm [see Barlow et al., 1971, pp. 13–18] can be used for obtaining the isotonic regression ˆ 1 , . . . , dt ˆ T . on dt (2) Kuo analyzes the data of Cox [1970, p. 9] where T D 5, ni 30 dose (log2 (concentration)) levels 0, 1, 2, 3 and 4, and a D Normal(2, 0.5) with various values of m and plots the linear Bayes estimators.
(1)
Example Let M D 1 and n1 D n2 D 100, r1 D 1, r2 D 99, t1 D 1/3, t2 D 2/3 and at D t0 t 1. Computations yield: 10050.5t1 1 t1 10050t1 1 t2 cov r D 10050t1 1 t2 10050.5t2 1 t1 with t1 D 1/3 and t2 D 2/3. We obtain 9 101 50 . cov r1 D 10015151 50 101 Since
$
50ti 1 tj for i j 50tj 1 ti for i > j, 3 101 50 1 tj [lˆ 1 j, l2 j]0 D tj 102151 50 101 3 101 151tj D . 102151 151tj 50 covri , Ftj D
Linear Bayes Estimators of the Response Curve
187
Thus,
ˆ j D tj C 100 dt
1 1 100 3
lˆ 1 j C
99 2 100 3
lˆ 2 j
1 [97101 151tj C 97151tj 50] 102151 97 97 tj D 1C D 2.902tj 0.951. 51 102 D tj C
ˆ 2 D 0.984, and ˆ 1 D 0.016, dt Hence, dt $ % 1 2 ˆ D3 t 0.016 C t dt 0.984 3 3 D 2.904t 0.952 ˆ ˆ < t < 2/3 with d0 D 0 and d1 D 1. Gelfand and Kuo [1991] show how fully Bayesian analysis to obtain the posterior distribution of Ft, for any t, given the data r1 , . . . , lT can be implemented under two rich classes of X prior distributions, namely the Dirichlet process prior and the product beta prior. They employ sampling-based approach, namely the Gibbs sampler which is an iterative Markovian updating scheme. Features of the posterior distributions such as mode, mean and quantiles can be obtained. Second, the authors provide an extension of the quantal bioassay model which allows ordered polytomous response arising from stochastically ordered tolerance distributions. Ramgopal, Laud and Smith [1993] consider the same problem of implementing nonparametric Bayesian analysis of bioassay when the prior restricts the shape of the potency curve (for example, to be convex, concave or ogive). They employ samplingbased approaches for computing the posterior features of interest and provide some numerical examples. for
10
1 3
Bayesian Bioassay
188
11 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
Radioimmunoassays
11.1. Introduction
Radioligand assays (RLA) are primarily used for clinical estimation of antigens. They also have the added feature of estimating potency from very small quantities of materials with high precision. RLA’s include radioimmunoassays (RIA’s) in which antigens are labelled with radio-isotopes and immunoradiometric assays (IRMA’s) in which antibodies are labelled. Although RLA’s are not strictly bioassays in the sense, they do not use responses measured from living organisms or tissues, they are very similar in structure and hence should be considered from the point of view of bioassay. Response curves constructed by early users of RLA’s from free-hand or graphic methods were so precise and smooth that they did not bother to go beyond the graphical or interpolatory methods. Radioimmunoassays had their origin more than a decade ago, when antibodies capable of binding 131 I-labeled insulin were demonstrated in human subjects treated with a mixture of commercial beef and pork insulin. It was shown in the literature that the percentage of insulin bound to antibody decreased as the insulin concentration in the incubation mixtures was increased and unlabeled insulin could displace labeled insulin from the insulin-antibody complexes. Yalow and Berson [1970] provided an excellent survey of the technical aspects of radioimmunoassays which, unlike the traditional bioassays, are dependent on specific chemical reactions that obey the law of mass action and are not subject to errors introduced by the biological variability of test systems. McHugh and Meinert [1970] provided a theoretical model for statistical inference in isotope displacement immunoassay. Radioimmunoassay (RMI) is only one of a series of related techniques. In general, we can call them ‘binding assays’ which are procedures in which quantitation of a material depends on that material and the subsequent determination of its distribution between ‘bound’ and ‘free’ phases. Chard [1990] gives an excellent exposition of the various ‘binding assays’.
189
11.2. Isotope Displacement Immunoassay
McHugh and Meinert [1970] have developed a theoretical model for the bioassay of human insulin in an isotape displacement immunoassay, which include the following steps: (1) (2) (3) (4)
the biochemical system as a basis for the model, statistical considerations including nonlinear curve fitting and the derivation of formulas for confidence limits, comparison of the theoretical model to the logit model, estimating inversely the unknown (‘test’) antigen potency from the results of steps (1) and (2).
A brief outline of the biochemical model is essential in understanding the theoretical model proposed by Meinert and McHugh [1968]. The immune system of the body produces antigens to fight the antibodies which threaten it. These antigens bind to the antibodies, rendering them harmless. The reaction can be formulated as ! Ag C Ab AgAb
11.1
! where Ag denotes antigen, Ab denotes antibody and denotes a two-way reaction. In developing an immunoassay, it has been found that the concentration of bound antigen, AgAb, is a curvilinear function of the initial concentration of antigen and antibody in the system. Thus, by measuring the AgAb in the system, one can infer the initial concentrations of either the antigen or antibody. However, measuring the bound antigen antibody complex may prove difficult. For this reason, part of the antigen in the system is labelled with radioactive traces such as iodine-131. The known quantities in the system are labelled antigen and the antibody. The presence of an unknown amount of unlabelled antigen in the system decreases the labelled antigen that is bound to the antibody and increases the proportion of the unbound labelled antigen. Measurements of the unbound antigen, or the radioactive antigen antibody complex, then allows for an indirect measure of the unlabelled antigen, the amount of which is unknown. Meinert and McHugh [1968] explain how a standard curve is developed, with which specimens of unknown potency can be compared. This standard curve is based on biochemical theory and the law of mass action. At equilibrium, the reaction in (11.1) continues with the reaction going to the right at a velocity vF , and the reaction going to the left at velocity vR . The rate of these velocities is denoted by kk D vF /vR . By the law of mass action k D [AgAb]/[Ag] [Ab]
11.2
where [ ] denotes concentration. Then k can be written in terms of the initial concentration of antigen and antibody denoted by [Ag0 ] and [Ab0 ]. Thus, kD
11
[AgAb] [Ag0 ] [AgAb][Ab0 ] [AgAb]
Radioimmunoassays
11.3
190
where [Ag0 ] [AgAb] D [Ag]
and [Ab0 ] [AgAb] D [Ab].
For ease of computations, let x0 D [Ag0 ],
q1 D [Ab0 ],
h0 D [AgAb] and
q2 D k.
Then (11.3) becomes q2 D h0 /x0 h0 q1 h0
11.4
where the following conditions are dictated by the physical properties of the model. 0 < h0 < x0 ,
q1 > 0,
h0 < q1
and
q2 > 0.
Solving for h0 gives 1 2 0 0 1/2 g/2. h0 D fq1 C x0 q1 2 [q1 C x q2 4x q1 ]
11.5
Note that here we took the negative root of the equation because the positive root will violate the assumption that h0 q1 . Let us write x0 D x C A L where AL is the isotope-labelled antigen and x is the unlabelled antigen, and let h0 D hAL C x. Then (5) can be rewritten as 1 2 h D fq1 q1 2 C AL C x [q1 q2 C AL C x
4q1 AL C x]1/2 g/2AL C x
11.6
which is the equation for the theoretical calibration curve for the deterministic model. Now let hi D Eyij which is the true proportion of radioactive counts associated with bound antigen in the j-th tube in the i-th set of tubes. Thus, due to errors in the measurement process, we have the stochastic model yij D fexpression (11.6) with x replaced by xi g C eij ð j D 1, . . . , ni and i D 1, . . . , s.
Isotope Displacement Immunoassay
11.7
191
The inverse of this calibration (standard) curve will give a value for xi which is the concentration of unlabelled antigen in the system. Thus xi D [q1 q2 hi 1 hi 1 ]/q2 hi AL .
11.8
Then the estimate of xi , namely xˆ i is obtained by substituting the estimates for q1 and q2 in (11.8). However, estimation of q1 and q2 is somewhat difficult because equation (11.6) is nonlinear in these parameters. McHugh and Meinert [1970] make use of the Gauss-Newton iterative method in order to numerically approximate q1 and q2 . The rates of convergence is rapid provided one uses the initial values proposed by the authors: qˆ 01,0 D n1
zij n1
wij qˆ 2,0
2 1 2 ˆq2,0 D n wij zij wij zij n wij wij
qˆ 1,0 D qˆ 01,0 /qˆ 2,0 ,
and
where
zij D yij /1 yij , wij D yij x0i , n D
s
ni .
1
Meinert and McHugh [1968] propose the following termination rule for the iterative processes: Stop at the rth iteration if 8 ˆ 1 2 qˆ 1,r qˆ 1,r1 2 C qˆ 1 2,r q2,r1 10 .
Confidence limits for xi , the expected concentration of unlabelled antigen in the i-th set of tubes, can be computed in two steps. First, the lower and upper confidence interval limits for hi are obtained as follows: yi. L D yi. tn2,1a/2 varyi. 1/2 yi. U D yi. C tn2,1a/2 varyi. 1/2 where varyi is the estimated variance function, which is obtained numerically (an expression for it will be given later), and yi. is the mean proportion of bound antigen observed in the number of test tubes containing the i-th unknown test preparation. In the second step, the confidence limits for xi are calculated by substituting yi. L or yi. U for hi in (11.8) yielding: xi U D fqˆ 1 qˆ 2 yi. U [1 yi. L ]1 g/qˆ 2 yi. L AL xi L D fqˆ 1 qˆ 2 yi. U [1 yi. U ]1 g/qˆ 2 yi. U AL .
11
Radioimmunoassays
11.9
192
Example Data generated in an insulin immunoassay [Yalow and Berson, 1960, 1970]: Concentration of unlabelled insulin of standards, ng/ml
Observed proportion of labelled insulin bound
0 0.008 0.020 0.04 0.10 0.20 0.40 0.6 0.8 1.4
0.581 0.550 0.521 0.515 0.425 0.348 0.264 0.232 0.174 0.114
The computer routine of McHugh and Meinert [1970] gave the starting values qˆ 1,0 D 0.1795 ng/ml qˆ 2,0 D 9.6297 ml/ng. After three iterations, the following least-square estimates were obtained. qˆ 1 D qˆ 1,3 D 0.1641 ng/ml qˆ 2 D qˆ 2,3 D 11.3125 ml/ng, which satisfies the termination rule: 8 ˆ 1 2 qˆ 1,3 qˆ 1,2 2 C qˆ 1 2,3 q2,2 10 .
From these estimates, the estimated standard curve can be obtained by replacing q1 and q2 by qˆ 1 and qˆ 2 in (11.6). Furthermore, (11.7) can be used to calculate xˆ i , the unknown insulin concentration, by knowing the observed proportion of counts of bound insulin, yi and the estimates of q1 and q2 . As an example, suppose that yi D 0.425 is the observed proportion of count of bound insulin with qˆ 1 D 0.1641 ng/ml qˆ 2 D 11.3125 ml/mg
yi D 0.425 AL D 0.1 ng/ml.
So 0.164111.3125 [0.425/1 0.425] 0.1 D 0.1324 ng/ml, 11.31250.425 and the 95% confidence limits are xˆ 0i D
x0i L D 0.078 ng/ml x0i U D 0.194 ng/ml.
Isotope Displacement Immunoassay
193
In this case, McHugh and Meinert [1970] were able to fit a curve that was based on a knowledge of the equation for the chemical reaction that was involved. They also state that in many cases when the biochemical theory is not so well known, they use an empirical function, such as the logistic for their curve-fitting. As a comparison, these same data were fitted using a logistic regression on SAS. The mean-square error obtained for the logistic model is 0.0135327, whereas for the McHugh and Meinert model it is 0.000445. For the logistic model F is 65.77 with P < 0.0001 and r2 D 0.891558.
11.3. Analysis of Variance of the McHugh and Meinert Model
Source
d.f.
Sum of squares
Mean square
Model Error Total
2 8 10
0.281283 0.003567 0.28485
1406.415 ð 104 4.45875 ð 104 284.85 ð 104
F ratio D MS(regression)/MS(error) D 1406.415/4.45875 D 315.4281 > F0.005,2,8 D 11.0. Thus the model is highly significant at a < 0.0001. Remark. The logistic regression model can also provide a good fit for the data at a highly significant level. It is reassuring to note that an empirical model which is commonly used in practice, namely the logistic, compares favorably with the model based on the theory of McHugh and Meinert, which has slightly smaller mean-square error than the logistic model. In their model q1 and q2 have a physical interpretation, namely q1 representing the initial concentration of antibody and q2 denoting the ratio of the concentration of bound antibody-antigen complex to the product of the concentrations of free antibody and antigen at equilibrium. Also, it is worthwhile to point out that a model with a theoretical basis provides a more stable basis for the design of future experiments.
The confidence interval for xi given by (11.9) depends upon the variance of yi . The approximation to this function is:
2 2 1 0 ˆ 0 ˆ km i. D qi C vary fk xi ; qfm xi ; qS sˆ 2 kD1 mD1
where qi is the number of times the i-th unknown is replicated, and where fk x0i ; q D @f(x0i ; q/@qk , f(x0i ; q D hi D E(yij , 1 0 0 2 0 1/2 f1 x0i ; q D 1/2x0i f1 q1 q1 g 2 xi [q1 q2 C xi 4q1 xi ] 1 1 0 0 0 2 0 1/2 f2 x0i ; q D q2 1g 2 /2xi fq1 q2 C xi [q1 q2 C xi 4q1 xi ]
sˆ 2 D
ni s
yij hˆ i 2 /n 2,
n D n1 C Ð Ð Ð C ns ,
iD1 jD1
11
Radioimmunoassays
194
and Skm is the km-th element in the inverse of the matrix Skm where Skm D
s
ˆ m x0 ; q]n ˆ i [fk x0i ; qf i
k D 1, 2 and m D 1, 2.
iD1
From these expressions, we infer that various ways can be found to increase the precision of the assay; they include (1) (2) (3)
increase qi , the number of times the i-th unknown is replicated, increase s, the number of standards, increase n, the total number of tubes upon which the calibration curve is based.
11.4. Other Models for Radioimmunoassays
The kinetic relations of enzyme chemistry, which have a lot in common with the antigen-antibody, can often be reduced to rectangular hyperbolae. Bliss [1970] fits a three-parameter hyperbola (two asymptotes and a constant) to the measurements of human luteinizing hormone presented by Rodbard et al. [1970]. The hyperbola is given by x0 x00 y0 y00 D x00 y00 C c
11.10
where x00 and y00 denote the asymptotes, c is a constant, x0 , the independent variable is the dose and y0 is the dependent variable or the response, in its initial form. We assume that the dose is free of random error. We can convert the hyperbola into a straight line Y D y00 C
x00 y00 C c D a0 C b(x0 C d1 D a0 C bx x0 x00
11.11
where d D x00 and x D x0 C d1 . Since this equation is nonlinear in its parameters, its solution is indirect and stepwise. Bliss [1970] proposes obtaining a preliminary estimate of d as follows: the values of Y are plotted against k values of log x0 and fitted with a smooth curve drawn by free hand. At three equally and widely spaced levels of log x01 , log x02 and log x03 the corresponding values of y1 , y2 , and y3 are interpolated from the curve. Let h D x03 x02 /x02 x01 .
11.12
Then, the initial value of d is computed as d0 D [hx01 y1 h C 1x02 y2 C x03 y3 ]/[h C 1y2 hy1 y3 ].
11.13
(One could average over such values of d0 obtained by selecting different sets of equally and widely spaced levels of log x01 , log x02 and log x03 .) The initial estimate of the statistic d, with which the doses are transformed to x D 100/x0 C d, can be improved by a simple adjustment. For each of several
Other Models for Radioimmunoassays
195
values of di , the slope of the response y upon the transformed dose x is computed by least squares and also the standard deviation s˜ for the scatter in y about each line. The improved value of d is that one for which s˜ is a minimum. When once d is chosen, we can estimate a0 and b by the linear regression method. Bliss [1970] illustrates this method by the data of Rodbard et al. [1970]. Rodgers [1984] has written a survey paper on data analysis and quality control of radioimmunoassays. This will be reviewed here.
11.5. Assay Quality Control1
Recall that a plot of the response versus the analyte concentration (dose) constitutes a calibration curve which represents the quantitative relationship between what we observe, namely, the response, and what we wish to estimate, namely, the concentration of analyte. Using the calibration curve in order to determine the analyte concentration that corresponds to a specified response is called the analyte concentration interpolation, or simply interpolation. Calibration and interpolation are the core aspects of bioassay. However, sometimes, the assay concentration estimates are subject to error; this error must be quantified and controlled. This is the task of assay quality control. The error in dose levels (or analyte concentrations) can be a random or bias (systematic) error. Random error can be controlled to a limited extent by replication of a given measurement. Bias error poses a more difficult problem. Spot samples (also called quality control samples) constitute the main source of information about bias error in an assay.
11.6. Counting Statistics
The response, for instance, in radioimmunoassays is in the form of counts of radioactive decays. As events dispersed randomly in time, these counts behave like Poisson variables. If the number of counts (including background) in the bound fraction of a given sample of a given assay is B, then . var(B) D B, s.d.(B) D B1/2
and
. c.v.(B) D B1/2 /B D B1/2 .
Many assayists use the following sampling rule: stop counting in each assay tube until a preset number of counts or preset counting time has been reached, whichever is realized first. Usually, the preset number of counts is 10,000. Then the standard deviation of such a count is 100. However, if one follows a more rational approach to counting, it can result in a 2- to 17-fold improvement in the efficiency of counting device utilization. Under some statistical assumptions, the error in an assay concentration estimate is proportional to the error in the corresponding response. Hence, 1 Butt
11
[1984] served as a source for sections 11.5–11.8.
Radioimmunoassays
196
we can confine to controlling the error in the assay response. Let the response be b D B/Tt where T D total counts and t D counting time used for obtaining B counts in the bound fraction. Then . var(b) D b/tT,
. s.d.(b) D B1/2 /tT,
. c.v.(b) D B1/2 .
Notice that the standard deviation of b is much smaller than that of B and the coefficients of variation of b and B are equal. This reexpression of counting error in terms of the response, although helpful, does not directly improve our counting strategy. If s2c denotes the variance in b due to counting and s2e denotes the variance due to experimental error, then we can write the total variance in b as s2 D s2e C s2c . Further, often we have the relationship s2e D a C bb where a and b are some constants. Further, if f is such that s2 D f2 1s2e , then the appropriate counting time t may be defined by: t D fTf2 2[a/b C b]g1 ,
since f2 1s2e D s2e C b/Tt
Finney [1978, p. 735] advocates the use of the formula: var B D EBv for some v0 v 2 where EB denotes the expectation of B. One can transform B to BŁ by BŁ D B1J/2 , when J 6D 2, and BŁ D ln B when J D 2 so as to obtain constant variance for B (see also section 3.6).
11.7. Calibration Curve-Fitting
More attention has been given to presenting new formulae for curve-fitting than to good statistical and computing procedures. Two basic problems are: (1) to find an appropriate function for the calibration curve, and (2) to fit it correctly. Although there are a large number of calibration formulae that have been proposed, they fall into three categories: (a) empirical, (b) semi-empirical and (c) model-based. Empirical Methods These are called empirical because their use is not based on some physicochemical model for the assay, and they have been very successful in practice. Polynomials, splines and polygonals are examples of such empirical fits.
Calibration Curve-Fitting
197
Semi-Empirical Methods The techniques are called semi-empirical because they have some theoretical justification why they should fit calibration data. The most popular method is the logistic technique. The four-parameter logistic response curve, introduced by Heally [1972], is given by y D a d[1 C x/cb ]1 C d, a, b, c and d being the unknown parameters. Parameters a and d correspond to the upper and lower asymptotes of the curve, respectively: c is the value of x (analyte concentration) which corresponds to the center or inflection point, and b is related to the slope of the center of the curve. Also, one can rewrite the curve as ln[y d/a y] D b ln x C b ln c The logit transformation is Y D ln[y d/a y]. So, one can fit a straight line Y versus ln x with a slope of b and Y intercept of b ln c. Rodbard et al. [1969] were the first to fit a logistic function to binder-log and assay (this method is known as the logit-log technique) by setting a D 1 and d D 0. The value used for y is the bound counts for a given analyte concentration divided by the bound counts at zero analyte concentration (after correcting all bound count values for the effects of nonspecific binding). The practical difficulty with the four-parameter logistic method is that it is non-linear in its parameters b and c. However, Zivitz and Hidalgo [1977] proposed a strategy by which linear regression may still be applied to the four parameter logistic [see also Davis et al., 1980]. If one has good estimates of a and d based on previous experience, the logit transformation can be performed on the calibration curve data and a straight line fitted to the transformed data, using weighted linear least squares. This yields estimates for b and c. One can then calculate the value of x0 D [1 C x/cb ]1 for each datum and fit a straight line to y versus x0 . This yields new estimates for a and d. Iterate this process until the two linear curve-fitting procedures converge to a desirable accuracy. The most difficult component of this simple procedure is the suitable determination of weights to be used for the two linear fitting procedures. The above method is known as 2 C 2 logistic technique. The Variance Function Rodbard and Hutt [1974] point out that most workers have assumed that the RIA dose-response variable has uniformity of variance, i.e., homoscedasticity, which is, in general, not true. This can be seen if one performs analyses in replicate (i.e., a fixed number ½2 test tubes) for each of the dose levels. Then one can compute s2Y (sample variance) of the response variable and plot this versus Y. Typically, there will be a significant trend of s2Y with respect to Y. Thus, Rodbard and Hutt [1974]
11
Radioimmunoassays
198
propose to take s2Y D a0 C a1 Y C a2 Y2 where Y is a generic for the response variable. In determining the weighted least squares estimates of the parameters in the four or two parameter logistic model, one should use the weight Wi D 1/ˆs2Yi . The relationship s2Y D a2 Y2 appear to hold for IRMA systems. Finney [1978, p. 335] advocates the use of the weights WŁi D 2j asYi 0 j 2. Rodbard and Hutt [1974] conclude that in the four parameter model together with the general weighting function Wi seem to provide a more satisfactory method for data analysis. Rodbard and Hutt [1974] provide the following applications of the general model. 1. In SDS electrophoresis, there is a curvilinear relationship between relative mobility (Rf ) and molecular weight (M). If M is plotted on a logarithmic scale, one obtains a smooth logistic function ad C d. 1 C M/cb
Rf D
Usually in this case d D 0, so a three parameter model can be fitted, assuming non-uniformity of variance in the Rf measurement. 2. In gel filtration there is a curvilinear relationship between Ve (elution volume) and molecular weight Ve D
ad Cd 1 C M/cb
where a D Vt , d D V0 . These last two examples illustrate the versatility of the logistic function. A Fortran program has been developed by Rodbard and Hutt [1974] to “fit” parameters to the model using Wi as a weighting function. They also use the Gauss-Newton method. (The user must provide estimates of a0 , a1 and a2 , when there are several classes of antibody species with differing affinities and specifications, the authors propose to use logit Y D a C b loge x C gloge x2 . Notice that the four-parameter logistic model is symmetrical about a central inflection point. If this is not appropriate for the data, then a generalized logistic can be used. Note that the four-parameter logistic is of the form y D a df1 C exp[fln x]g1 C d,
where
fln x D bln x ln c. Now, a generalized logistic is obtained by setting f to be an arbitrary function. If f is a polynomial, a 2 C n logistic method could be followed in analogy to the 2 C 2 technique, using only linear regression methods. Here n denotes the order of the
Calibration Curve-Fitting
199
polynomial C1, or the number of parameters to be fitted to describe the polynomial chosen. Usually n 4 is adequate. One should make sure that the resulting curve is monotonic. Also, one can employ one of the nonlinear curve-fitting computer programs which are readily accessible. Model-Based Methods The final group of calibration curve-fitting methods is based on simple chemical models for the assay reactions in use. A sample scheme for the central reactions of binder-ligand radioimmunoassay is kf
! P C Q PQ kr kŁf
! PŁ C Q PQ kŁr
where PŁ denotes the labeled ligand, P the unlabelled ligand (the analyte) and Q denotes the binder, often the antibody. These two competing reactions are assumed to be governed by the simple first-order mass action law: KD
kf [PQ] D kr [P][Q]
and
KŁ D
kŁf [PŁ Q] D kŁr [PŁ ][Q]
where the square brackets indicate the concentrations of the contained species at chemical equilibrium, and the K values are the equilibrium constants describing these two reactions. Let q denote the total concentration of binder, and pŁ and p are the total concentrations of labeled and unlabeled ligands, respectively. An example of such a model is that of Naus et al. [1977]. Some presently available models assume that K D KŁ . Others add a parameter Bn to describe nonspecific binding, which is often assumed to be a set fraction of the ligand present (an assumption which is rarely validated). Several models allow for a multiplicity of binding sites (binder heterogeneity). Raab [1983] and Finney [1983] have demonstrated the practical defects in model-based calibration curve-fitting.
11.8. Principles of Curve-Fitting
In the early years, manual techniques were used in immunoassays. Calibration results were plotted and a curve was drawn by hand through these points. However, due to compelling reasons, a statistical fit is preferable. For automated curve-fitting, the most commonly used technique is the method of least squares which makes the following assumptions: (1) the x values are free of error; (2) the formula selected for fitting truly describes the data, and (3) that there are no outliers in the data. Besides, it would be helpful if the errors in the y values are normally distributed. The method of least squares could be linear or nonlinear. The number of parameters in the model plays an important role in determining how flexible a given equation will be. For
11
Radioimmunoassays
200
example, a four-parameter polynomial will be able to fit a wider variety of shapes than a straight line. It is often said jokingly that a four-parameter model is used to describe an elephant, and with a fifth parameter the elephant can be made to wag its tail. By Anova techniques, one can test the validity of the model. The problems one is faced with are: (1) the random errors in the values tend to vary as a function of y (and consequently as a function of x), and (2) the outliers. The problem of heteroscedasticity can be overcome by using weighted least squares regression methods. However, the rejection of outliers has never been satisfactorily resolved. Robust regressionn methods are desirable in this regard. For instance, the method of Tiede and Pagano [1979] has been criticized on several counts by Raab [1981]. The assayist’s judgement should be brought into play regarding outliers. The presence of a large number of outliers is a sign of a loss of quality control. Rodgers [1984] discusses how to overcome other errors in assays, such as bias (or systematic) error, and the assay design. Rodgers [1984] recommends using automated packages rather than writing one’s own program. We should also be wary of commercial software. Select a program that represents the current state of the art, such as the Edinburgh FORTRAN program [McKenzie and Thompson, 1982], the NIH FORTRAN program [Rodbard et al., 1975]. Rodgers [1984] recommends establishing a comprehensive internal quality control program.
11.9. A Response Model using Nonlinear Kinetics
It is of much interest to quantify the risk associated with a given exposure to potentially hazardous chemicals in the environment. Some models have been proposed in the literature to relate the exposure dose to the probability of toxic response. Given such a dose-response model, the problem of estimating the risk is equivalent to a problem of low-dose extrapolation with the adopted model. Human exposure data is usually not available. So the adopted approach has been to estimate the low-dose risk in laboratory animals and convert these risks to humans based on a suitable conversion factor (e.g. on the basis of mg/kg body weight in the diet). However, before such a conversion factor is applied, it is usually necessary to extrapolate downward from the high-dose levels needed to obtain toxic responses from a number of experimental animals to the low-dose levels consistent with the human environmental exposure. For this low-dose extrapolation the selection of the doseresponse model is crucial. Further, as noted by many research workers (for instance, see Hoel, Kaplan and Anderson, [1983]), the shape of the dose-response curve can be greatly influenced by the kinetics governing the transformation of the given dose to the amount of biologically “effective” dose toxic to the responding target organs. Typically, this transformed “effective” dose at the target organ cannot be observed in the test animal without sacrifice. Thus, in chronic animal bioassays, the parameters of the dose response model used for extrapolation are estimated using the “incorrect” given doses rather than the “correct” transformed doses. For example, in the vinyl chloride case (see Van Ryzin and Rai, [1980]), a one-hit (linear) model fits
A Response Model using Nonlinear Kinetics
201
very well the dose response model with the metabolized (transformed) doses which the (untransformed) dose-response curves were extremely non-linear. Consequently, the low-dose extrapolations using the fitted nonlinear model yielded risk estimates that are several orders of magnitude higher in the dosage range than the linear model suitably adjusted for the metabolized doses. The resulting safe-dose estimates using the unadjusted dose levels were several orders of magnitude lower than those using the adjusted doses. Van Ryzin and Rai [1987] give a dose-response model for quantal responses over a two-year period of study. It is assumed that the administered dose at concentration d is transformed into a biologically “effective” toxic concentration D at the target organ via incoming and outgoing Michaelis-Menton nonlinear kinetic equations. The dose-response model used for this transformed dose D is assumed to follow a Weibull model. The Model A dose-response model expresses the probability P of a toxic response in an experimental unit at dose level D (d ½ 0) as a function of d, i.e., P D fd. The choice of f dictates the dose-response model chosen. Depending on the route of administering the dose, the chemical involved in the dose, the species of animal on test and the target organ, the actual “effective” toxic dose at the target organ may be changed to D D gd where g denotes the pharmacokinetic transformation operating on d. For example, the chemical of interest needs to be metabolized to a reactive metabolite in order to cause a toxic response, which is the case with many carcinogens including vinyl chloride and nitrite (activated to nitrosamines). Van Ryzin and Rai [1987] consider the following compartmental model: Compartment C1
Compartment C2
Administered dose at time t,
Compartment C3
Toxic dose at T1
!
D1 (t)
target site at time t, D2 t
T2
!
Eliminated toxic dose at time t
T1 D first process (outgoing from C1 D incoming to target site); T2 D second process (outgoing from C2 D outgoing from target site). Thus, an animal on test at time t is administered a dose D1 t which is transformed by the outgoing process T1 from C1 into an internal toxic dose D2 t at the target organ. This toxic is then eliminated from the target organ or C2 by the outgoing process C2 into a non-toxic compartment C3 . For instance, the process T1 may be a metabolic activation of an administered chemical to a reactive metabolite which then binds genetically to DNA causing a tumor (toxic response). The process T2 may be DNA repair or elimination before binding to DNA. For a sample compartment, the process T is said to follow Michaelis-Menton (MM) nonlinear kinetics if D0 t D bDt/fc C Dtg,
11
Radioimmunoassays
b, c > 0
11.14
202
where b denotes the maximum rate of change and c is the MM-constant, i.e., the dose concentration in the compartment at which the rate of change is b/2. If b/c ! bŁ > 0 as c ! 1, then (11.14) becomes D0 t D bŁ Dt,
bŁ > 0
11.15
which is the case of linear kinetics when the rate of change is proportional to the dose concentration in the compartment. Equation (11.14) denotes nonlinear kinetics common to saturable processes. In C2 , the rate of change in the toxic dose D2 t is governed by two processes following (11.14), an incoming process from C1 with constants b1 , c1 > 0 and an outgoing process from C2 with b2 , c2 > 0. Thus, the resulting rate change in D2 t is D02 t D
b1 D1 t b2 D2 t . c1 C D1 t c2 C D2 t
11.16
Now setting D02 t D 0 (i.e., steady state conditions have been reached and letting D1 t be a constant d and solving for D D D2 t, we obtain D D gd D
a1 d b1 c 2 d D c1 b2 C b2 b1 d 1 C a2 d
11.17
where a1 D b1 c2 /c1 b2 > 0 and a2 D b2 b1 /c1 b2 > M1 with M equal to the maximum administered constant dose d. Note that the restriction on a2 is a necessary condition for a steady state solution of (11.16) requiring D > 0. One can obtain special cases of (11.17) when (1) (2) (3)
T1 is nonlinear and T2 is linear as in (11.15), T1 is linear as in (11.15) and T2 is nonlinear as in (11.14), and both T1 and T2 are linear as in (11.14).
For example, in chronic animal bioassays, equation (11.17) represents the “effective” toxic dose at the target organ provided the above assumptions are satisfied. The next step in building the dose-response model Pd D fgd D fD is to specify the function f. Towards this, the authors take f to be fD D 1 exp[a C lDb ], a, l, b > 0.
11.19
When b D 1, fD is the usual one-hit model or the multi-stage model with onestage where the spontaneous background response is given by f0 D 1 expa. When b > 1 is a positive integer, the model (11.19) is a multi-event model with independent background response and can be viewed, for instance, as a multi-stage model in which all b-stages have the same transition rate. When b > 0, the model (11.19) is the Weibull dose-response model. Thus (11.19) appears to be a reasonable dose-response model to adopt. Now substituting (11.17) into (11.19), we obtain a
A Response Model using Nonlinear Kinetics
203
four-parameter dose response model given by
q3 d Pd; q D 1 exp q1 C q2 1 C q4 d
11.20 b
with q D q1 , . . . , q4 0 where q1 D a > 0, q2 D la1 > 0, q3 D b > 0 and q4 D a2 > M1 , M D maximum administered dose level. Let D fq D q1 , . . . , q4 0 : q1 , q2 , q3 > 0, q4 > M1 g denote the natural parameter space of the model (11.20). The boundary cases q1 D 0 or q2 D 0 are not included in the natural parameter space since the asymptotic theory of maximum likelihood estimation is not valid. Additional work is necessary if the parameters are on the boundary. Note that q3 influences the behavior of the dose-response curve in the low dose range. In the low dose range, this curve is approximately linear if q3 D 1, concave if q3 < 1 and convex if q3 > 1. However, q3 < 1 has limited biological applications. Maximum Likelihood Estimation (MLE) of the Parameters Let ni animals be exposed to dose level di and let ri out of ni have the toxic response of interest (e.g., tumor of a specified type) for i D 0, . . . , m. Assuming that the animal responses are mutually independent, the log-likelihood of q for model (11.20) is
m ni r Pi i 1 Pi ni ri 11.21 l D lq D log Lq, where Lq D ri iD0
where Pi D Pdi ; q given by (11.20). Since there is no hope of obtaining explicit expressions for the MLE’s of q, the authors resort to an IMSL iterative maximization subroutine developed by the Columbia University Computer Center using initial values qˆ 0 D qˆ 10 , . . . , qˆ 40 with qˆ 10 D r0 C 1/n0 C 1, qˆ 20 D expˆa, qˆ 30 D bˆ and qˆ 40 D ˆc/bˆ where aˆ , bˆ and cˆ are the least squares estimates obtained from fitting the equation y D a C blog d C cd through the m-pairs of points fyi , di g where yi D log log
1 [r0 C 1/n0 C 1] , i D 1, . . . , m. 1 [ri C 1/ni C 1]
The motivation for these starting values is to take yi D log log[1 Pˆ 0 /1 Pˆ i ], where Pˆ i D ri C 1/ni C 1, i D 0, . . . , m. Note that one’s are added to both ri and ni in order to avoid undefined logarithms in the ratios. Also, while estimating a, b, . c by the least squares method, we use log1 C q4 d D q4 d. The authors show the uniqueness and the strong consistency of the MLE of q when ci D lim ni /n, n!1
11
Radioimmunoassays
0 c0 < 1,
0 < ci < 1,
i D 1, . . . , m,
11.22
204
Ł Ł Ł nD m 0 ni , m ½ 4, where m D m if c0 D 0 and m D m C 1 if c0 > 0. Further, one can test the goodness of the model by the statistic
SD
m
ˆ i, ri ni Pˆ i 2 /ni Pˆ i Q
11.23
iD0
ˆ and Q ˆ i D 1 Pˆ i . Note that S is asymptotically distributed as where Pˆ i D Pdi , q chi-square with mŁ 4 degrees of freedom (mŁ > 4) under the model (20) with q0 2 . One can also test H0 : q 2 0 versus H1 : q 2 0 via a likelihood ratio test procedure. In particular, one can test H0 : q3 D 1 (i.e., one-hit model in the transformed dose D is suggested), or HŁ0 : q4 D 0 (nonlinear (Michaelis-Menten) kinetic model is suggested by the data). Both of these test procedures are chi-squared tests with one degree of freedom. Low Dose Extrapolation By using the model adopted, we wish to estimate the virtually safe dose (VSD) defined to be that value dŁ such that for some specified small risk P0 , pdŁ D PdŁ , q P0; q D P0 .
11.24
typically, EPA, FDA regulated values for P0 are 104 , 105 and 106 . so the MLE of dŁ is obtained by solving ˆ P0; q ˆ D P0 . ˆ dˆ Ł D PdŁ ; q p
11.25 p
p
The authors surmise that under condition (11.22), ndˆ Ł dŁ and nlog dˆ Ł 2 log dŁ are asymptotically normal with mean zero and variances s21 and s22 D s21 /dŁ , respectively, where
s21 D
@p jdDdŁ @d
2 4 4 rD1 sD1
srs q0
@p @qr
@p @qs
with srs q0 and @p/@qr r D 1, . . . , 4 evaluated at dŁ . The above result will enable us to set up a 100(1 a)% confidence bounds for dŁ using dˆ Ł or log dˆ Ł which are given by p dˆ Ł1 D dˆ Ł za sˆ 1 / n and p p dˆ Ł2 D dˆ Ł expza sˆ 2 / n D dˆ Ł exp[za sˆ 1 / ndˆ Ł ]
A Response Model using Nonlinear Kinetics
205
where za is the 1 ath quantile of the standard normal distribution. By using dˆ Ł2 we can avoid the negative lower confidence bound on the VSD. Note that the lower confidence bound on the VSD is of interest because of our desire to protect against increased excess risk due to exposure. The authors illustrate the procedures given earlier on three animal carcinogenicity bioassays pertaining to vinyl chloride, DDT and saccharin. The response of interest, respectively, was hepatic angiosarcoma, liver hepatoma and bladder tumors. The dose response model (11.20) was fitted to the three data sets. The parameter q3 is most important for low-dose extrapolation. The authors take a conservative approach by taking a small integer value for q3 that is consistent with the data, the rationale being that q3 represents the number of hits of the target site caused by the transformed dose. They test the hypothesis, H0 : q3 D 1, and also test the goodness of fit of the model with q3 D 1, and obtain lower confidence bounds for VSD with P0 D 104 and P0 D 106 .
11
Radioimmunoassays
206
12 žžžžžžžžžžžžžžžžžžžžžžžžžžžž
Sequential Estimation of the Mean Logistic Response Function
12.1 Introduction
In bioassays, the main interest is estimation of the quantile of the tolerance distribution, especially the LD50 . In carcinogenic studies, we will be interested in estimating the low dosages that correspond to small percentages, 100p% for 0 < p < 0.1. In the case of the logistic tolerance distribution, the pth quantile is m s log1/p 1 when m is the LD50 and s is the scale parameter of the tolerance distribution. Several methods are available for estimating the LD50 and the parametric methods such as the maximum likelihood, least squares method, require several iterations in estimating LD50 . However, the Spearman-Karber (SK) estimator is superior to other nonparametric estimators such as the Reed-Muench and Dragstadt-Behrens estimators (see sections 4.12 and 4.13). Finney [1950, 1952] has carried out some computations and showed that the SK estimator based on a fixed number of dose levels, equally spaced and covering a wide range yields small bias and high efficiency when used for estimating LD50 for symmetrical tolerance distributions such as the normal and logistic. Brown [1961] considers the SK estimator in an infinite (dose levels) experimental set-up. His results seem to reinforce with the findings of Finney [1950, 1952]. Church and Cobb [1973] consider the equivalence of the SK estimator and the MLE when the doses are equally spaced and equal number of experimental units are placed at each dose level. Miller [1973] points out that the most economical design is the one in which k (the number of dose levels) tends to infinity while holding n (the number of units at each dose level) fixed in such a way that kh tends to infinity as h (dose span) tends to zero. Following Miller’s [1973] suggestion, Nanthakumar and Govindarajulu [1994, 1999] (to be abbreviated as NG) have considered sequential (risk-efficient and fixed-width interval) estimation of LD50 using the SK estimator when the tolerance distribution is logistic. Govindarajulu and Nanthakumar (GN) [2000] obtain simplified sequential rules and strengthen the results of NG [1994, 1999].
207
12.2. Maximum Likelihood Estimation of Parameters
Let the probability of a subject responding at dose level x be given by Px D 1/f1 C expx q/sg. Let ni be the number of units at dose level xi D x0 C ih where h denotes the dose span and x0 , the initial dose level is chosen close to the guessed value of q. However, for mathematical ease, we choose x0 at random in 0, h. Typically, the xi denote logs of the real dose levels which are usually either concentrations of distributions of the initial dose by a constant factor. Hence, it is reasonable to assume that the xi are equally spaced. Let ri denote the number of responses at xi i D k, . . . , 0, . . . , k. Then the likelihood equations are given by ni pi Pi D 0 12.1 and
ni xi pi Pi D 0 (see sections 4.6).
Straightforward computations yield the information matrix to be s2 ni W i ni xi qWi 4 IDs ni xi qWi ni xi q2 Wi
12.2
where Wi D Pi Qi . When ni n, then the likelihood equations become
pi D
. 1 Pi D h
xkCh/2
h Px dx D xk C q h 2
12.3
xk h/2
. . after integrating by parts and setting Pxk C h/2 D 1 and Pxk h/2 D 0. Also, 1 . 1 r i xi D xi Pi D xPx dx n h h 2 p2 s2 1 2 xk C q C 12.4 D 2h 2 3 after integrating by parts. Now one can estimate q and s in one step (i.e., without iteration) from (12.3) and (12.4). Note also that Anscombe [1956] used similar approximations in the case of logistic regression. Also, when ni n and k is large, the information matrix takes a simpler form, n hs 0 0 . hs n 1 ID 12.5 and I D . np2 3hs 0 0 3hs np2 12
Sequential Estimation of the Mean Logistic Response Function
208
ˆ sˆ is, for Thus, from the large-sample properties of the MLE’s, we infer that q, large k, asymptotically bivariate normal with mean q, s and variance-covariance I1 . Also, qˆ and sˆ are mutually independent for large k.
12.3. Properties of the Spearman-Karber Estimate
Throughout we assume that k is a function of h when h is small. As h tends to zero, both k and kh become large. Recall that the SK estimator of θ is given by

θ̂_k = x_0 + h/2 − (h/n) Σ_{i=−k}^{0} r_i + (h/n) Σ_{i=1}^{k} (n − r_i) = x_k + h/2 − (h/n) Σ_{i=−k}^{k} r_i,    (12.6)

which coincides with the MLE given by (12.3). Cornfield and Mantel [1950] and Church and Cobb [1973] have also noted this fact. Since

E(θ̂_k) = x_k + h/2 − h Σ_i P_i ≐ ∫ x dP(x) = θ,    (12.7)

θ̂_k is approximately unbiased for θ. In the following we present some results without proofs. From Lemma 3.1 of GN [2000] we have

B = E(θ̂_k) − θ ≐ (h − 2θ) exp{−(kh + h/2)/σ},    (12.8)

where the dot over the equality implies that terms smaller than O(h) are neglected. GN [1999, eqn. (3.8)] obtain

σ̂²_k = [3h/(nπ²)] [(2x_k + h) Σ_i r_i − (h/n)(Σ_i r_i)² − 2 Σ_i x_i r_i],    (12.9)

which is different from that of Chmiel [1976]. The estimate of σ² given by (12.9) coincides with the MLE given by (12.4). Thus the MLEs and the SK-type estimators of θ and σ are equivalent when k is sufficiently large. Although the variance of θ̂_k is, for large k, given by hσ/n [see eqn. (12.5)], GN [2000, Lemma 3.2] obtain a more precise expression:

var(θ̂_k) = (hσ/n)[1 − 2 e^{−(kh+h/2)/σ}] + o(h²)    (12.10)

or

σ_{θ̂_k} = (hσ/n)^{1/2} [1 − e^{−(kh+h/2)/σ}] + o(h).    (12.11)
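As a small numerical illustration of how the bias (12.8) and the standard deviation (12.11) behave as dose levels are added, here is a minimal Python sketch; the function names are mine and the parameter values are illustrative (they happen to match Example 1 below).

```python
import math

def sk_bias(theta, sigma, h, k):
    """Approximate bias of the SK estimator theta_hat_k, as in (12.8)."""
    return (h - 2 * theta) * math.exp(-(k * h + h / 2) / sigma)

def sk_sd(sigma, h, n, k):
    """Approximate standard deviation of theta_hat_k, as in (12.11)."""
    return math.sqrt(h * sigma / n) * (1 - math.exp(-(k * h + h / 2) / sigma))

# Illustrative values: both the bias and the sd change as more dose levels are added.
for k in (5, 10, 15):
    print(k, sk_bias(1.25, 1.0, 0.2, k), sk_sd(1.0, 0.2, 3, k))
```

The exponential factor exp{−(kh + h/2)/σ} common to both expressions is what drives the stopping rules derived in the next two sections.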
Next we will consider fixed-width interval estimation of θ.
12.4. Fixed-Width Interval Estimation of θ
Let 2D be the specified width and γ the specified confidence coefficient. Then we wish to determine k such that

P(|θ̂_k − θ| ≤ D) ≥ γ.    (12.12)

In view of (12.8) it is reasonable to assume that D > |B| = |E(θ̂_k) − θ|. Then, for sufficiently large k,

P(|θ̂_k − θ| ≤ D) ≐ Φ[(D − B)/σ_{θ̂_k}] + Φ[(B + D)/σ_{θ̂_k}] − 1.    (12.13)

When B > 0, P(|θ̂_k − θ| ≤ D) ≥ 2Φ[(D − B)/σ_{θ̂_k}] − 1, and setting the latter ≥ γ and solving for σ²_{θ̂_k}, we obtain

σ²_{θ̂_k} ≤ (D − B)²/z²,   z = Φ⁻¹[(1 + γ)/2].    (12.14)

Similarly, when B < 0, P(|θ̂_k − θ| ≤ D) ≥ 2Φ[(D + B)/σ_{θ̂_k}] − 1, and proceeding as before, we obtain

σ²_{θ̂_k} ≤ (D + B)²/z².    (12.15)

Now combining (12.14) and (12.15), we obtain

σ²_{θ̂_k} ≤ (D − |B|)²/z².    (12.16)

However, since σ²_{θ̂_k} is unknown, we cannot solve (12.16) for k. Hence, we resort to the following adaptive sequential rule. The stopping variable K is given by

K = inf{k ≥ k_0 : σ_{θ̂_k} ≤ (D − |B|)/z}.    (12.17)

Now using (12.8) and (12.11) for B and σ_{θ̂_k}, and upon noting that z σ_{θ̂_k} + |B| ≤ D implies

e^{(kh+h/2)/σ} ≥ [|2θ − h| − (z²hσ/n)^{1/2}] / [D − (z²hσ/n)^{1/2}],    (12.18)

provided D > (z²hσ/n)^{1/2}, the adaptive rule (12.17) becomes

K = inf{k ≥ k_0 : 2k + 1 ≥ (2σ̂/h) log([|2θ̂ − h| − (z²hσ̂/n)^{1/2}] / [D − (z²hσ̂/n)^{1/2}])},    (12.19)

with K = k_0 if |2θ̂ − h| ≤ D (i.e., if the argument of the log is at most 1), where θ̂ and σ̂ are computed from the dose levels observed so far. Note that the sequential rule given by (12.19) is more efficient than the sequential rule obtained by NG [1999] in the sense that it stops earlier, because NG [1999] assume that σ_{θ̂_k} ≐ (hσ/n)^{1/2}. Let 2k* + 1 denote the number of dose levels required when θ and σ are known (see (12.18)). Consider the following example.
Example 1. Let h = 0.2, D = 0.62, z = 1.645 (i.e., γ = 0.90) and n = 3. Also, let θ = 1.25 and σ = 1, and choose x_0 = 0.05. Rule (12.19) yields k* = 11, whereas the rule of NG [1999] gives k* = 18. A random sample of binomial random variates with n = 3 was generated using the logistic distribution for the probability of observing a success, with θ = 1.25 and σ = 1 as the location and scale parameters. We choose x_0 = 0.05 and k_0 = 10. We obtain the following data, where each entry gives the observed value of r_i in increasing order of dose, the bracketed entries being those at x_{−1}, x_0 and x_1:

0 0 0 0 0 1 0 0 0 0 0 0 0 0 2 0 1 2 0   [0 1 1]   0 3 1 1 1 1 2 1 1 0 2 1 3 3 3 3 3 3 3

Rule (12.19) stopped at K = 14 with θ̂_14 = 1.35, and NG [1999]'s rule stopped at K = 20 with θ̂_20 = 1.283.
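To show how rule (12.19) operates in a simulation of this kind, here is a minimal Python sketch that generates binomial responses from the logistic model, recomputes the SK estimates (12.6) and (12.9) as dose levels are added in pairs, and stops according to (12.19) as reconstructed above. All names are hypothetical, and a particular run will not reproduce K = 14 exactly, since the result depends on the simulated responses.

```python
import math, random

def logistic_p(x, theta, sigma):
    """P(x) = 1 / (1 + exp(-(x - theta)/sigma)), the logistic response probability."""
    return 1.0 / (1.0 + math.exp(-(x - theta) / sigma))

def run_rule_12_19(theta, sigma, h, n, D, z, x0, k0, kmax=200, seed=1):
    """Simulate one sequential experiment and stop according to rule (12.19)."""
    rng = random.Random(seed)
    x, r = {}, {}

    def observe(i):
        # dose x_i = x0 + i*h;  r_i ~ Binomial(n, P(x_i))
        x[i] = x0 + i * h
        r[i] = sum(rng.random() < logistic_p(x[i], theta, sigma) for _ in range(n))

    for i in range(-k0, k0 + 1):
        observe(i)
    k = k0
    while True:
        xs = [x[i] for i in sorted(x)]
        rs = [r[i] for i in sorted(r)]
        xk = xs[-1]
        theta_hat = xk + h / 2 - (h / n) * sum(rs)                   # SK estimate, eqn (12.6)
        s2 = (3 * h / (n * math.pi ** 2)) * ((2 * xk + h) * sum(rs)
              - (h / n) * sum(rs) ** 2
              - 2 * sum(a * b for a, b in zip(xs, rs)))              # eqn (12.9)
        sigma_hat = math.sqrt(max(s2, 1e-12))
        root = math.sqrt(z ** 2 * h * sigma_hat / n)
        if abs(2 * theta_hat - h) <= D:                              # log argument <= 1: stop
            return k, theta_hat
        if D > root:
            bound = (2 * sigma_hat / h) * math.log(
                (abs(2 * theta_hat - h) - root) / (D - root))
            if 2 * k + 1 >= bound:                                   # stopping condition (12.19)
                return k, theta_hat
        if k >= kmax:
            return k, theta_hat
        k += 1
        observe(k)                                                   # add one dose at each end
        observe(-k)
```

For instance, run_rule_12_19(1.25, 1.0, 0.2, 3, 0.62, 1.645, 0.05, 10) runs the Example 1 configuration; the returned K and θ̂ will vary from run to run.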
12.5. Asymptotic Properties of Rule (12.19)
In the following we list the asymptotic properties of rule (12.19).
(a) The sequential procedure terminates finitely with probability one.
(b) Asymptotic efficiency: E{Kh/(k*h)} → 1 as h = h_0/m → 0 (m → ∞) for some h_0 > 0 (the fixed-sample value k* is illustrated numerically in the sketch following this list).
(c) Asymptotic consistency: P(|θ̂_K − θ| ≤ D)/P(|θ̂_{k*} − θ| ≤ D) → 1 as h → 0 with h proportional to D².
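For reference, the fixed-sample value k* appearing in properties (b) and (c) can be computed directly from (12.18) when θ and σ are known. The following minimal sketch does this under the reconstruction of (12.18) above; the function name is hypothetical, and with the configuration of Example 1 it returns k* = 11.

```python
import math

def k_star_rule_12_18(theta, sigma, h, n, D, z):
    """Smallest k satisfying (12.18) with theta, sigma known (requires D > z*sqrt(h*sigma/n))."""
    root = math.sqrt(z ** 2 * h * sigma / n)
    if abs(2 * theta - h) <= D:
        return 0                     # the bound is met for every k >= 0
    ratio = (abs(2 * theta - h) - root) / (D - root)
    return math.ceil((sigma / h) * math.log(ratio) - 0.5)

print(k_star_rule_12_18(1.25, 1.0, 0.2, 3, 0.62, 1.645))   # 11, as in Example 1
```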
GN [2000] carry out a simulation study consisting of 500 replications with γ = 0.95, n = 3 and σ = 1 and various combinations of values for h, D and θ. They find that the observed coverage probability exceeds 0.95 and that E(K)/k* is close to unity. Next, we will turn to risk-efficient estimation of θ.
12.6. Risk-Efficient Estimation of θ
Let the risk of θ̂_k be defined as

R = var(θ̂_k) + [Bias(θ̂_k)]² + (2k + 1)cn,    (12.20)

where c is proportional to the cost per observation. Note that

var(θ̂_k) = σ²_{θ̂_k} = E[var(θ̂_k | x_0)] + var[E(θ̂_k | x_0)].
However, from (12.10) we see that var(θ̂_k | x_0) is approximately free of x_0 for small h. Also, using (12.8), we have

R ≐ (hσ/n)[1 − 2 e^{−(kh+h/2)/σ}] + (2θ − h)² e^{−(2k+1)h/σ} + (2k + 1)cn.    (12.21)

We can write R as

R ≐ (hσ/n)[1 − 2 e^{−lh/(2σ)}] + (2θ − h)² e^{−lh/σ} + lcn,   l = 2k + 1.    (12.22)

Now, set x = exp{lh/(2σ)}. Then l = (2σ/h) log x. Hence, we can write

R ≐ (hσ/n) − (2hσ/n) x⁻¹ + (2θ − h)² x⁻² + (2cnσ/h) log x.    (12.23)

Setting ∂R/∂x = 0, taking only the positive root of the resulting quadratic equation and doing some manipulations, we obtain [see GN, 2000, eqns. (5.5) and (5.6)]

x = exp{lh/(2σ)} = [{4n³cσh(2θ − h)² + h⁴σ²}^{1/2} − h²σ] / (2n²cσ).    (12.24)

If θ and σ are known, the optimal l (to be denoted by l*) is given by (12.24). However, since θ and σ are unknown, we have the following adaptive sequential rule: stop when the total number of dose levels is L, where

L = inf{l ≥ l_0 : e^{lh/(2σ̂)} ≥ [{4n³cσ̂⁻¹h(2θ̂ − h)² + h⁴}^{1/2} − h²] / (2n²c)},    (12.25)

or, approximately, we can take

L = inf{l ≥ l_0 : e^{lh/(2σ̂)} ≥ |2θ̂ − h| {h/(2cσ̂)}^{1/2}}.    (12.26)

Note that rule (12.25) or (12.26) will be meaningful when c = O(h^{1+η}) for some η > 0; here θ̂ and σ̂ are based on the l dose levels observed so far. Furthermore, one can easily establish that the sequential rule terminates finitely with probability one when n, c and h are held fixed.

Example 2. Let θ = 0.625, σ = 0.5, x_0 = 0.05, h = 0.2, n = 3 and c = 0.00055. Computations yield l* = 13 and hence k* = 6 for rule (12.25), and l* = 15 and hence k* = 7 for rule (12.26). In order to illustrate the sequential procedure, we generate a random sample with the above parameter configuration and obtain the responses

0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 1, 2, 2, 2, 3, 3, 3.
We stop at L = 15 using (12.26), whereas with the rule of NG [1994] we obtain L = 17 (l* = 23). Typically, rule (12.25) gives slightly smaller values for the stopping time than rule (12.26). Also, rules (12.25) and (12.26) are much simpler than the stopping rule in NG [1994, eqn. (2.5)]; a numerical sketch of these rules is given at the end of this section. GN [2000] carry out a simulation study with n = 3, x_0 = 0.05, k_0 = 5 and 500 replications. They find that the risk ratio is close to unity and that the ratio of E(L) to l* is close to unity. They establish the following asymptotic properties of rule (12.25) or (12.26):
(a) E(Lh)/(l*h) → 1 as h = h_0/m → 0 (m → ∞) for some h_0 > 0 (asymptotic efficiency).
(b) R_L/R_{l*} → 1 as h → 0 (risk-efficiency).
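To see how the stopping boundaries of (12.25) and (12.26) behave numerically, here is a minimal Python sketch of the "oracle" versions in which θ and σ are taken as known; in the sequential rules they would be replaced by θ̂ and σ̂ recomputed as dose levels are added. The function names are hypothetical, the formulas follow the reconstruction of (12.24)–(12.26) given above, and with the parameter configuration of Example 2 the printed values agree with l* = 13 and l* = 15 reported there.

```python
import math

def l_star_rule_12_25(theta, sigma, h, n, c):
    """Smallest odd l = 2k+1 satisfying rule (12.25) when theta and sigma are known."""
    threshold = (math.sqrt(4 * n ** 3 * c * h * (2 * theta - h) ** 2 / sigma + h ** 4)
                 - h ** 2) / (2 * n ** 2 * c)
    l = math.ceil((2 * sigma / h) * math.log(threshold))
    return l if l % 2 == 1 else l + 1

def l_star_rule_12_26(theta, sigma, h, c):
    """Smallest odd l = 2k+1 satisfying the simpler approximate rule (12.26)."""
    threshold = abs(2 * theta - h) * math.sqrt(h / (2 * c * sigma))
    l = math.ceil((2 * sigma / h) * math.log(threshold))
    return l if l % 2 == 1 else l + 1

# Parameter configuration of Example 2.
print(l_star_rule_12_25(0.625, 0.5, 0.2, 3, 0.00055))   # 13, i.e. k* = 6
print(l_star_rule_12_26(0.625, 0.5, 0.2, 0.00055))      # 15, i.e. k* = 7
```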
žžžžžžžžžžžžžžžžžžžžžžžžžžžž
References
Aitchison J, Silvey SD: The generalization of probit analysis to the case of multiple responses. Biometrika 1957;44:131–140. Amemiya J: The n2 order mean squared errors of the maximum likelihood and the minimum logit chi-square estimator. Ann Statist 1980;8:488–505. Anscombe FJ: On estimating binomial response relations. Biometrika 1956;43:461–464. Antoniak CE: Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Ann Statist 1974;2:1153–1174. Armitage P, Doll R: Stochastic models for carcinogenesis. Proc 4th Berkeley Symp Math Statist Probl, vol 4. Berkeley, University of California Press, 1961, pp 19–38. Ashford JR: An approach to the analysis of data for semiquantal responses in biological assay. Biometrics 1959;15:573–581. Ashton W: The Logit Transformation. With Special Reference to Its Uses in Bioassay. New York, Hafner, 1972. Aspin AA: An examination and further development of a formula arising in the problem of comparing two mean values. Biometrika 1948;35:88–96. Aspin AA: Tables for use in comparisons whose accuracy involves two variances. Biometrika 1949;36:290–296. Ayer M, Brunk HD, Ewing GM, Reid WT, Silverman E: An empirical distribution function for sampling with incomplete information. Ann Math Statist 1955;26:641–647. Barlow RE, Bartholomew DJ, Bremer JM, Brunk HD: Statistical Inference Under Order Restrictions. New York, Wiley, 1972. Barlow WE, Feigl P: Fitting probit and logit models with nonzero background using GLIM. Unpublished manuscript, 1982. Barlow WE, Feigl P: Analyzing binomial data with a non-zero baseline using GLIM. Comput Statist Data Anal 1985;3:201–204. Barnard G: The Behrens-Fisher test. Biometrika 1950;37:203–207. Bartlett MS: Properties of sufficiency and statistical tests. Proc R Soc 1937;A160:268–282. Bartlett MS: The use of transformations. Biometrics 1947;3:39–52. Behrens WU: Ein Beitrag zur Fehler-Berechnung bei wenigen Beobachtungen. Landw Jb 1929;68:807–837. Berkson J: A statistically precise and relatively simple method of estimating the bioassay with quantal response, based on the logistic function. J Am Statist Assoc 1953;48:565–599.
Berkson J: Maximum likelihood and minimum chi-square estimates of the logistic function. J Am Statist Assoc 1955;50:130–162. Berkson J: Tables for use in estimating the normal distribution function by normit analysis. Biometrika 1957a;44:441–435. Berkson J: Tables for the maximum likelihood estimate of the logistic function. Biometrics 1957b;13:28–34. Berkson J: Smoking and lung cancer: Some observations on two recent reports. J Am Statist Assoc 1958;53:28–38. Berkson J: Application of minimum logit c2 estimate to a problem of Grizzle with a notation on the problem of no interaction. Biometrics 1968;24:75–95. Berkson J: Minimum chi-square, not maximum likelihood! Ann Statist 1980;8:447–487. Biggers JD: Observations on the intravaginal assay of natural oestrogens using aqueous egg albumin as the vehicle of administration. J Endocr 1950;7:163–171. Bishop YMM, Fienberg SE, Holland PW: Discrete Multivariate Analysis: Theory and Practice. Cambridge, MIT Press, 1975, pp 401–433. Bliss CI: The method of probits. Science 1934;79:409–410. Bliss CI: The calculation of the dosage-mortality curve. Ann Appl Biol 1935;22:134–167. Bliss CI: The analysis of field experimental data expressed in percentages. Plant Protect 1937;12:67–77. Bliss CI: The Statistics of Bioassay. New York, Academic Press, 1952. Bliss CI: Dose-response curves for radioimmunoassays; in McArthur, Colton (eds): Statistics in Endocrinology. Cambridge MIT Press, 1970, pp 399–410. Block HD: Estimates of error for two modifications of the Robbins-Munro stochastic approximation process. Ann Math Statist 1957;28:1003–1010. Blum JR: Multidimensional stochastic approximation methods. Ann Math Statist 1954;25: 737–744. Bradley RA, Gart JJ: The asymptotic properties of ML estimators when sampling from associated populations. Biometrika 1962;49:205–214. Brand RJ, Pinnock DE, Jackson KL: Large sample confidence bands for the logistic response curve and its inverse. Am Statist 1973;27:157–160. Brown BW Jr: Some properties of the Spearman estimator in bioassay. Biometrika 1961;48: 293–302. Brown BW Jr: Planning a quantal assay of potency. Biometrics 1966;22:322–329. Brown BW Jr: Quantal response assays; in McArthur, Colton (eds): Statistics in Endocrinology. Cambridge, MIT Press, 1970, pp 129–143. Brownless KA, Hodges JL, Rosenblatt M: The up and down method with small samples. J Am Statist Assoc 1953;48:262–277. Burn JH, Finney DJ, Goodwin LG: Biological Standardization. London Oxford University Press, 1950. Butt WR (ed): Practical Immunoassay: The State of the Art. New York, Dekker, 1984, pp 253–368. Chaloner K, Larntz K: Optimal Bayesian design applied to logistic regression experiments. J Statist Plann Infer 1989;21:197–208. Chard T: An Introduction to Radioimmunoassay and Related Techniques, ed 4. New York, Elsevier, 1990. Chen G, Ensor CR: The combined activity and toxicity of dilantin and N-methyl-5-phenylsuccinimide. J Lab Clin Med 1953;41:78–83. Cheng PC, Johnson EA: Some distribution-free properties of the asymptotic variance of the Spearman estimator in bioassays. Biometrics 1972;28:882–889. Chernoff H: Asymptotic studentization in testing hypotheses. Ann Math Statist 1949;20: 268–278.
Chimosky JE, Spielman WS, Brant MA, Heiderman SR: Cardiac atria of B10 14.6 hamsters are deficient in natriuretic factor. Science 1984;223:820–821. Chmiel JJ: Some properties of Spearman-type estimators of the variance and percentiles in bioassay. Biometrika 1976;63:621–626. Choi SC: An investigation of Wetherill’s method of estimation for the up and down experiment. Biometrics 1971;27:961–970. Choi SC: Interval estimation of LD50 based on up-and-down experiment. Biometrics 1990;46: 485–492. Chung KL: On a stochastic approximation method. Ann Math Statist 1954;25:463–483. Church JD, Cobb EB: On the equivalence of Spearman-Karber and maximum likelihood estimates of the mean. J Am Statist Assoc 1973;68:201–202. Clark AJ: The Mode of Action of Drugs on Cells. London, Arnold, 1933, p. 4. Cobb EG, Church JD: Nonparametric estimators for shift in quantal bioassay. J Statist Comput Simul 1990;37:159–170. Cobb EG, Church JD: A discrete delta method for estimating standard errors for estimators in quantal bioassay. Biomet J 1995;37:691–699. Cobb EG, Church JD: Discrete delta estimation of the variance of the logistic scores estimator of ED50 . Biomet J 1999;41:45–51. Cochran WG: Estimation of bacterial densities by means of the most probably number. Biometrics 1950;6:105–116. Cochran WG, Davis M: Stochastic Approximation to the Median Effective Dose in Bioassay. Stochastic Model in Medicine and Biology, Madison, University of Wisconsin Press, 1964, pp 281–300. Colguhown D: Lectures in Biostatistics. An Introduction to Statistics with Applications in Biology and Medicine. Oxford, Clarendon Press, 1971. Copenhaver TW, Mielke PW: Quantit analysis: A quantal assay refinement. Biometrics 1977; 33:175–186. Cornfield J: A statistical problem arising from retrospective studies. Proc 3rd Berkeley Symp Math Statist Probl, vol 4. Berkeley, University of California Press, 1956, pp 135–148. Cornfield J, Mantel N: Some new aspects of the application of maximum likelihood to the calculation of the dosage response curve. J Am Statist Assoc 1956;45:181–201. Coward KH: The Biological Standardization of the Vitamin, ed 2. London, Bailli`ere, Tindall & Cox, 1947. Cox DR: Some procedures connected with the logistic quantitative response curve; in David (ed): Research Papers in Statistics. Festschrift for J. Neyman. New York, Wiley, 1966, pp 55–71. Cox DR: The Analysis of Binary Data. London, Methuen, 1970. Cox DR: Regression models and life tables. J R Statist Soc Ser B 1972;34:187–202. Cramer EM: Some comparisons of methods of fitting the dosage response curve for small samples. J Am Statist Assoc 1964;59:779–793. Creasy MA: Limits for the ratio of means. J R Statist Soc Ser B 1954;16:186–192. Crump KS, Guess HA, Deal KL: Confidence intervals and tests of hypotheses concerning dose response relations inferred from animal carcinogenicity data. Biometrics 1977;33:437–451. Davis HT: The Analysis of Economic Time Series, San Antonio, Trinity University Press, 1941. Davis SE, Jaffe ML, Munson PJ, Rodbard D: RIA data processing with a small programmable calculator. J Immunoassay 1980;1:15–25. Derman G: Stochastic approximation. Ann Math Statist 1956;27:879–886. Dixon WJ: The up and down method for small samples. J Am Statist Assoc 1965;60:967–978.
Dixon WJ: Quantal response variable experimentation: The up and down method; in McArthur, Colton (eds): Statistics in Endocrinology. Cambridge, MIT Press, 1970, pp 251–267. Dixon WJ, Mood AM: A method for obtaining and analysing sensitivity data. J Am Statist Assoc 1948;43:109–126. Dorn HF: The relationship of cancer of the lung and the use of tobacco. Am Statist 1954;8: 7–13. Draper NR, Smith H: Applied Regression Analysis, ed 2. New York, Wiley, 1981, sect 6.3, 6.4. Dvoretzky A: On stochastic approximation. Proc 3rd Berkeley Symp Math Statist Probl, vol 1. Berkeley, University of California Press, 1956, pp 39–55. Emmens CW: The dose/response relation for certain principles of the pituitary gland, and of the serum and urine of pregnancy. J Endocr 1940;2:194–225. Emmens CW: Principles of Biological Assay. London, Chapman & Hall, 1948. Epstein B, Churchman CW: On the statistics of sensitivity data. Ann Math Statist 1944;15: 90–96. Feinstein AR: Clinical biostatistics. XX. The epidemiologic trohoc, the ablative risk ratio, and retrospective research. Clin Pharmacol Ther 1973;14:291–307. Ferguson TS: A Bayesian analysis of some nonparametric problems. Ann Statist 1973;1:200–230. Fieller EC: The biological standardization of insulin. J R Statist Soc 1940;(suppl 7):1–64. Fieller EC: A fundamental formula in the statistics of biological assay, and some applications. Q J Pharm Pharmacol 1944;17:117–123. Fieller EC: Some problems in interval estimation. J R Statist Soc Ser B 1954;16:175–186. Finney DJ: The principles of biological assay. J R Statist Soc 1947;(suppl 9):46–91. Finney DJ: The estimation of the parameters of tolerance distributions. Biometrika 1949;36: 239–256. Finney DJ: The estimation of the mean of a normal tolerance distribution. Sankhya 1950;10: 341–360. Finney DJ: Two new uses of the Behrens-Fisher distribution. J R Statist Soc Ser B 1951;12: 296–300. Finney DJ: Probit Analysis, A Statistical Treatment of the Sigmoid Response Curve, ed 2. London, Cambridge University Press, 1952. Finney DJ: The estimation of the ED50 for logistic response curve. Sankhya 1953;12:121–136. Finney DJ: Probit Analysis, ed 3. London, Cambridge University Press, 1971a. Finney DJ: Statistical Methods in Biological Assay, ed 2. London, Griffin, 1971b. Finney DJ: Radioligand assays. Biometrics 1976;32:721–740. Finney DJ: Statistical Methods in Biological Assay, ed 3. London, Griffin, 1978, chapter 16, pp 328–346. Finney DJ: Response curves for radioimmunoassay. Clin Chem 1983;29:1562–1566. Fisher RA: On the mathematical foundations of theoretical statistics. Phil Trans R Soc 1922; A222:309–368. Fisher RA: The case of zero survivors. Ann Appl Biol 1935a;22:164–165. Fisher RA: The fiducial argument in statistical inference. Ann Eugen 1935b;6:391–398. Fisher RA: The asymptotic approach to Behrens integral with further tables for the d test of significance. Ann Eugen 1941;11:141–172. Fisher RA: Sampling the reference set. Sankhya Ser A 1961a;23:3–8. Fisher RA: The weighted mean of two normal samples with unknown variance ratio. Sankhya Ser A 1961b;23:103–144. Fisher RA, Yates F: Statistical Tables for Biological, Agricultural and Medical Research, ed 6. Edinburgh, Oliver & Boyd, 1963.
Fleiss J: Statistical Methods for Rates and Proportions, ed 2. New York, Wiley, 1981. Fountain RL, Keating JP, Rao CR: A controversy arising from Berkson’s conjecture. Commun Statist Theory Method 1991;20:3457–3472. Fountain RL, Rao CR: Further investigations of Berkson’s example. Commun Statist Theory Method 1993;22:613–629. Freeman PR: Optimal Bayesian sequential estimation of the median effective dose. Biometrika 1970;57:79–89. Freundlich H: Colloid and Capillary Chemistry. New York, Dutton, 1922, p 141. Gaddum JH: Pharmacology, ed 3. London, Oxford University Press, 1948. Galton F: The geometric mean in vital and social statistics. Proc R Soc 1879;29:365–367. Gart JJ: Point and interval estimation of the common odds ratio in the combination of 2 ð 2 tables with fixed marginals. Biometrika 1970;57:471–475. Gart JJ, Zweifel JR: On the bias of various estimators of the logit and its variance with applications to quantal bioassay. Biometrika 1967;54:181–187. Garwood F: The application of maximum likelihood to dosage-mortality curves. Biometrika 1941;32:46–58. Gelfand AE, Kuo L: Nonparametric Bayesian bioassay including ordered polytomous response. Biometrika 1991;78:657–666. Grizzle JE: A new method of testing hypotheses and estimating parameters for the logistic model. Biometrics 1961;17:372–385. Govindarajulu Z, Lindqvist BH: Asymptotic efficiency of the Spearman estimator and characterizations of distributions. Ann Inst Statist Math Tokyo 1987;39A:349–361. Govindarajulu Z, Nanthakumar A: Sequential estimation of the mean logistic response function. Statistics 2000;33:309–332. Gradshteyn IS, Ryshik IN: Tables, Integrals and Series and Products. New York, Academic Press, 1965, formula No. 12 on p 540. Guess HA, Crump KS: Low dose-rate extrapolation of data from animal carcinogenicity experiments: Analysis of a new statistical technique. Math Biosci 1976;30:15–36. Guess H, Crump K: Can we use animal data to estimate safe doses for chemical carcinogens?; in Whittmore (ed): Environmental Health: Quantitative Methods, Philadelphia, Society for Industrial and Applied Mathematics, 1977, pp 13–28. Guess HA, Crump KS: Maximum likelihood estimation of dose-response functions subject to absolutely monotonic contraints. Ann Statist 1978;6:101–111. Guess H, Crump K, Peto R: Uncertainty estimates for low dose rate extrapolation of animal carcinogenecity data. Cancer Res 1977;37:3475–3483. Gurland J, Lee I, Dahm P: Polychotomous quantal response in biological assay. Biometrics 1960;16:382–398. Hamilton MA: Robust estimates of the ED50 . J Am Statist Assoc 1979;74:344–354. Hartley HO: The modified Gauss-Newton method for the fitting of nonlinear regression functions by least squares. Technometrics 1961;3:269–280. Hartley HO, Sielkin RL: Estimation of ‘safe dose’ in carcinogenic experiments. Biometrics 1977;33:1–30. Hauck WW: A note on confidence bands for the logistic response curve. Am Statist 1983;37: 158–160. Heally MJR: Statistical analysis of radioimmunoassay data. Biochem J 1972;130:207–210. Hodges JL Jr: Fitting the logistic by maximum likelihood. Biometrics 1958;14:453–461. Hodges JL Jr, Lehmann EL: Two approximations to the Robbins-Monro process. Proc 3rd Berkeley Symp Math Statist Probl, vol 1. University of California Press, 1956, pp 95–104. Hoekstra JA: Estimation of the LC50 . A review. Environmetrics 1991;2:139–152. Hoel DG, Kaplan NL, Anderson MW: Implication of nonlinear kinetics on risk estimation in carcinogenesis. Science 1983;219:1032–1037.
Hoel PG: A simple solution for optimal Chebyshev regression extrapolation. Ann Math Statist 1966;37:720–725. Hoel PG, Jennrich RI: Optimal designs for dose-response experiments in cancer research. Biometrika 1979;66:307–316. Hsi BP: The multiple sample up and down method in bioassay. J Am Statist Assoc 1969;64: 147–162. Hsu PL: Contributions to the theory of Student’s t-test as applied to the problem of two samples. Statist Res Mem 1938;2:1–24. Ihm P, M¨uller HG, Schmitt T: PRODOS: An interactive computer program for probit analysis of dose-response curves. Am Statist 1987;41:79. Iznaga N, Nunez G, Solozabal J, Morales A, Artaza E, Rubio R, Cardenas E: A personal computer-based system for parallel line analysis of bioassay data. Comput Meth Progr Biomed 1995;47:167–175. James AT, Wilkinson GN, Venables WW: Interval estimates for a ratio of means. Sankhya Ser A 1974;36:177–183. James BR, James KL, Westenberger H: An efficient R-estimator for the ED50 . J Amer-Kuss. Statist Assoc 1984;79:164–173. Jerne NK, Wood EC: The validity and meaning of the results of biological assays. Biometrics 1949;5:273–299. Johnson EA, Brown BW Jr: The Spearman estimator for serial dilution assays. Biometrics 1961;17:79–88. Jung H, Choi SC: Sequential method of estimating the LD50 using a modified up-and-down rule. J Biopharmaceut Statist 1994;4/10:19–30. Kalish LA: Efficient design for estimation of medial lethal dose and quantal dose-response curves. Biometrics 1990;46:737–748. Karber G: Beitrag zur kollectiven Behandlung pharmakologischer Reihenversuche. Arch Exp Path Pharmak 1931;162:480–487. Kraft CH, van Eeden C: Bayesian bioassay. Ann Math Statist 1964;35:886–890. Kundson LF, Curtis JM: The use of the angular transformation in biological assays. J Am Statist Assoc 1947;42:282–296. Kuo L: Linear Bayes estimators of the potency curve in bioassay. Biometrika 1988;75:91–96. Langmuir I: The shapes of group molecules forming the surfaces of molecules. Proc Natn Acad Sci USA 1917;3:251–257. LeCam L: An extension of Wald’s theory of statistical decision functions. Ann Math Statist 1955;26:69–81. Levenberg K: A method for the solution of certain problems in least squares. Q Appl Math 1944;2:164–168. Lindgren BW: Statistical Theory, ed 3. New York, McMillan, 1976. Little RE: A note on estimation for quantal response data. Biometrika 1968;55:578–579. McHugh RB, Meinert CL: A theoretical model for statistical inference in isotope displacement immunoassay; in McArthur, Colton (eds): Statistics in Endocrinology. Cambridge, MIT Press, 1970, pp 399–410. McKenzie GM, Thompson RCH: Design and implementation of a software package for analysis of immunoassay data; in Hunter, Corrie (eds): Immunoassays for Clinical Chemistry. A Workshop Meeting. Edinburgh, Churchill Livingstone, 1982. McLeish DL, Tosh DH: The estimation of extreme quantiles in logit bioassay. Biometrika 1983;70:625–632. McLeish DL, Tosh DH: Two-dose allocation schemes in logit analysis with cost restraints (unpublished 1985). McLeish DL, Tosh D: Sequential designs in bioassay. Biometrics 1990;46:103–116. Maggio EG: Enzyme-Immunoassay. Boca Raton, CRC Press, 1980.
Magnus A, Mielke PW, Copenhaver TW: Closed expression for the sum of an infinite series with application to quantal response assays. Biometrics 1977;33:221–224. Malcolm RD, Finn DA, Syapin PJ, Alkana RL: Reduced lethality from ethanol or ethanol plus pentobarbital in mice exposed to 1 or 12 atmospheres absolute helium-oxygen. Psychopharmacology 1985;86:409–412. Mantel N: Morton’s generalization of Spearman estimates (letter, with reply by R Morton). Biometrics 1983;39:529–530. Mantel N, Bohidar N, Brown C, Ciminera J, Tukey J: An improved Mantel-Bryan procedure for ‘safety’ testing of carcinogens. Cancer Res 1975;35:865–872. Mantel N, Bryan WR: ‘Safety’ testing of carcinogenic agents. J Natn Cancer Inst 1961;27: 455–470. Mantel N, Haenszel W: Statistical aspects of the analysis of data from retrospective studies of disease. J Natn Cancer Inst 1959;22:719–748. Marks BL: Some optimal sequential schemes for estimating the mean of a cumulative normal quantal response curve. J R Statist Soc Ser B 1962;24:393–400. Markus RA, Frank J, Groshen S, Azen SP: An alternative approach to the optimal design of an LD50 bioassay. Statist Med 1995;14:841–852. Marquardt D: An algorithm for least squares estimation of nonlinear parameters. SIAM J Appl Math 1963;11:431–441. Meinert CL, McHugh RB: The biometry of an isotope displacement immunologic microassay. Math Biosci 1968;2:319–338. Miller RG: Nonparametric estimators of the mean tolerance in bioassay. Biometrika 1973;60: 535–542. Moody AJ, Stan MA, Stan M, Gliemann J: A simple free-fat cell bioassay for insulin. Horm Metab Res 1974;6:12–16. Morton JT: The problem of the evaluation of retenone-containing plants. VI. The toxicity of l-elliptone and of poisons applied jointly, with further observations on the retenone equivalent method of assessing the toxicity of derris root. Ann Appl Biol 1942;29:69–81. Morton R: Generalized Spearman estimators of relative dose. Biometrics: 1981;37:223–233. M¨uller HG, Schmitt T: Choice of number of doses for maximum likelihood estimation of the ED50 for quantal dose response data. Biometrics 1990;46:117–129. M¨uller HG, Wang JL: Bootstrap confidence intervals for effective doses in the probit model for dose-response data. Biome J 1990;32:117–129, 529–544. Nanthakumar A, Govindarajulu Z: Risk-efficient estimation of the mean of the logistic response function using the Spearman-Karber estimator. Statist Sin 1994;4:305–324. Nanthakumar A, Govindarajulu Z: Fixed-width estimation of the mean of logistic response function using Spearman-Karber estimator. Biomet J. 1999;41:445–456. Naus AJ, Kuffens PS, Borst A: Calculation of radioimmunoassay standard curves. Clin Chem 1977;23:1624–1627. Neyman J: Contributions to the theory of the chi-square test. Proc Berkeley Symp Math Statist Probl, Berkeley, University of California Press, 1949, pp 239–272. Oliver FR: Methods of estimating the growth function. J R Statist Soc Ser C 1964;13:57–66. Pearl R: Studies in Human Biology. Baltimore, Williams & Wilkins, 1924. Peto R, Lee P: Weibull distributions for continuous carcinogenesis experiments. Biometrics 1973;29:457–470. Prentice RL: A generalization of the probit and logit methods for dose-response curves. Biometrics 1976;32:761–768. Raab GM: Robust calibration and radioimmunoassay (letter). Biometrics 1981;37:839–841. Raab GM: Comparison of a logistic and a mass-action curve for radioimmunoassay data. Clin Chem 1983;29:1757–1761.
Ramgopal P, Laud PW, Smith AFM: Nonparametric Bayesian bioassay with prior constraints on the shape of the potency curve. Biometrika 1993;80:489–498. Ramsey FL: A Bayesian approach to bioassay. Biometrics 1972;28:841–858. Rao CR: Estimation of relative potency from multiple response data. Biometrics 1954;10:208–220. Rao CR: Linear Statistical Inference and Its Applications. New York, Wiley, 1965. Ratnakowsky DA, Rudy TJ: Choosing near-linear parameters in the four-parameter logistic model for radioligand and related assays. Biometrics 1986;42:575–583. Rechnitzer PA, Cunningham DA, Andrew GM, Buck CW, Jones NL, Kavanagh TB, Oldridge NB, Parker JO, Shephard RJ, Sutton JR, Donner A: The relationship of exercise to the recurrence rate of myocardial infarction in men – Ontario Exercise Heart Collaborative Study. Am J Cardiol 1983;51:65–69. Rechnitzer PA, Sango S, Cunningham DA, Andrew G, Buck C, Jones NL, Kavanagh T, Parker JO, Shepherd RJ, Yuhasz MS: A controlled prospective study of the effect of endurance training on the recurrence of myocardial infarction – A description of the experimental design. Am J Epidem 1975;102:358–365. Richards FJ: A flexible growth function for empirical use. J Exp Bot 1959;10:290–300. Rizzardi FR: Some asymptotic properties of Robbins-Monro type estimators with applications to estimating medians from quantal response. Diss. University of California, Berkeley, 1985. Robbins H, Monro S: A stochastic approximation method. Ann Math Statist 1951;22:400–407. Rodbard D, Bridson W, Rayford PL: Rapid calculation of radioimmunoassay results. J Lab Clin Med 1969;74:770–781. Rodbard D, Faden VB, Knisley S, Hutt DM: Radio-Immunoassay Data Processing; LogitLog, Logistic Method, and Quality Control, ed 3. Reports No. PB246222, PB246223, and PB246224. Springfield, National Technical Information Science, 1975. Rodbard D, Hutt DM: Statistical analysis of radioimmunoassays and immunoradiometric (labelled antibody) assays: A generalized weighted, iterative least-squares method for logistic curve-fitting. Radioimmunoassay and Related Procedures in Medicare. Vienna, International Atomic Energy Agency, vol 1, 1974, pp 165–192. Rodbard D, Rayford PL, Ross GT: Statistical quality control of radioimmunoassays; in McArthur, Colton (eds): Statistics in Endorcrinology. Cambridge, MIT Press, 1970, pp 411–429. Rodgers RPC: Data analysis and quality control of assays: A practical primer; in Butt (ed): Practical Immunoassay, the State of the Art, New York, Marcel Dekker, 1984, pp 253–368. Sacks J: Asymptotic distribution of stochastic approximation procedures. Ann Math Statist 1958;29:373–405. SAS-STAT: User’s Guide, ed 4. GLM-Van Comp. Version 6. SAS Institute, 1991, vol 2, chapt 35, pp. 1325–1350. Scheff´e H: On solutions to the Behrens-Fisher problem based on the t-distribution. Ann Math Statist 1943;14:35–44. Schultz H: The standard error of a forecast from a curve. J Am Statist Assoc 1930;25:139–185. Shepard HH: Relative toxicity at high percentages of insect mortality. Nature 1934;134: 323–324. Sheps MC: Shall we count the living or the dead? N Engl J Med 1958;259:1210–1214. Sheps MC: Marriage and mortality. Am J Publ Hlth 1961;51:547–555. Silvapulle MJ: On the existence of maximum likelihood estimators for the binomial response models. J R Statist Soc B 1981;43:310–313. Smith KC, Savin NE, Robertson JL: A Monte Carlo comparison of maximum likelihood and minimum chi-square sampling distributions in logit analysis. Biometrics 1984;40:471–482.
Sokal RR: A comparison of fitness characters and their response to density in stock and selected cultures of wild type and black Tribolium castaneum. Tribolium Inf Bull 1967;10: 142–147. Sokal RR, Rohlf FJ: Biometry, ed 2. New York, Freeman, 1981, p 733. Solomon L: Statistical estimation. J Inst Act Stud Soc 1948a;7:144–173. Solomon L: Statistical estimation. J Inst Act Stud Soc 1948b;7:213–234. Spearman C: The method of ‘right and wrong cases’ (‘constant stimuli’) without Gauss’ formulae. Br J Psychol 1908;2:227–242. Spurr WA, Arnold DR: A short-cut method of fitting a logistic curve. J Am Statist Assoc 1948;43:127–134. Stevens WL: Mean and variance of an entry in a contingency table. Biometrika 1951;38: 468–470. Stone M: Cross-validation and multinomial prediction. Biometrika 1974;61:509–515. Storer B: Design and analysis of phase I clinical trials. Biometrics 1989;45:925–937. Suits CG, Way EE: The Collected Works of Irwing Langmuir, vol 9, Surface Phenomena. New York, Pergamon, 1961, pp 88, 95, 445. Sukhatme PV: On Fisher and Behrens test of significance for the difference in means of two normal populations. Sankhya 1938;4:39–48. Taylor WF: Distance functions and regular best asymptotically normal estimates. Ann Math Statist 1953;24:85–92. Thomas DG, Gart JJ: A table of exact confidence limits for differences and ratios of two proportions and their odds ratios. J Am Statist Assoc 1977;72:73–76. Tiede JJ, Pagano M: The application of robust calibration to radioimmunoassay. Biometrics 1979;35:567–574. Tomatis L, Turusov V, Day N, Charles RT: The effect of long-term exposure to DOT on CF-1 mice. Int J Cancer 1972;10:489–506. Tsutakawa RK: Random walk design in bioassay. J Am Statist Assoc 1967;62:842–856. Tsutakawa RK: Design of experiments for bioassay. J Am Statist Assoc 1972;67:584–590. Tsutakawa RK: Selection of dose levels for estimating a percentage point of a logistic quantal response curve. Appl Satist 1980;29:25–33. Turnbull BW: Nonparametric estimation of a survivorship function with doubly censored data. J Am Statist Assoc 1974;69:169–173. Turnbull BW: The empirical distribution function with arbitrarily grouped, censored, and truncated data. J R Statist Soc Ser B 1976;38:290–295. Van Ryzin J, Rai K: The use of quantal-response data to make predictions; in Witschi HR (ed): Scientific Basis of Toxicity Assessment. New York, Elsevier/North Holland Biomedical Press, 1980, pp 273–290. Van Ryzin J, Rai K: A dose response model incorporating nonlinear kinetics. Biometrics 1987;43:95–105. Venter JH: An extension of the Robbins-Monro procedure. Ann Math Statist 1967;38:181–190. Vølund A: Application of the four-parameter logistic model to bioassay: Comparison with slope ratio and parallel line models. Biometrics 1978;34:357–365. Wald A: Testing the difference between the means of two normal populations with unknown standard deviations; in Wald (ed): Selected Papers in Statistics and Probability. Stanford, University Press, 1955, pp 669–675. Walker AIT, Thorpe E, Stevenson DE: The toxicology of dieldrin (HEOD). I. Long-term oral toxicity studies in mice. Food Cosm Toxicol 1972;11:415–432. Wallace D: Asymptotic approximations to distributions. Ann Math Statist 1958;29:635–654. Wang MY: The statistical analysis of slope ratio assays using SAS software. Comput Meth Progr Biomed 1994;45:233–238.
Wang MY: On the statistical analysis of bioassays. Comput Meth Progr Biomed 1996;45: 191–197. Waud DR: On biological assays involving quantal responses. J Pharmacol Exp Ther 1972;183: 577–607. Welch BL: The generalization of Student’s problem when several different population variances are involved. Biometrika 1947;34:23–35. Wesley MN: Bioassay: Estimating the mean of the tolerance distribution. Stanford University Technical Report No: 17 (1 R01 GM 21215-01) 1976. Wetherill GB: Sequential estimation of quantal response curve. J R Statist Soc Ser B 1963;25: 1–48. Wetherill GB: Sequential Methods in Statistics. London, Methuen, 1966. Wetherill GB, Chen H, Vasudeva RB: Sequential estimation of quantal response curves: A new method of estimation. Biometrika 1966;53:439–454. Wijesinha MC, Piantadosi S: Dose-response models with covariates. Biometrics 1995;51: 977–987. Wilks SS: Mathematical Statistics. New York, Wiley, 1962, p 370. Wilson EB, Worcester J: The determination of LD50 and its sampling error in bioassay. Proc Natn Acad Sci USA 1943a, part I, pp 19–85, part II, pp 114–120, part III, pp 257–262. Wilson EB, Worcester J: Bioassay on a general curve. Proc Natn Acad Sci USA 1943b;29: 150–154. Wolfowitz J: On the stochastic approximation method of Robbins and Monro. Ann Math Statist 1952;23:457–461. Worcester J, Wilson EB: A table determining LD50 or the 50 percent end point. Proc Natn Acad Sci 1943;29:207–212. Yalow RW, Berson SA: Plasma insulin in man (editorial). Am J Med 1960;29:1–8. Yalow RW, Berson SA: Radioimmunoassays; in McArthur, Colton (eds): Statistics in Endocrinology. Cambridge, MIT Press, 1970, pp 327–344. Yates F: An apparent inconsistency arising from tests of significance based on fiducial distributions of unknown parameters. Proc Camb Phil Soc 1939;35:579–591. Zivitz M, Hidalgo JV: A linearization of the parameters in the logistic function; curve fitting radioimmunoassays. Comput Progr Biomed 1977;7:318.
žžžžžžžžžžžžžžžžžžžžžžžžžžžž
Subject Index
*
Please note that an asterisk on a word denotes that it appears too many times to be mentioned. Only the page number on which the word appears for the first time is given. Abbott formula 49 ABERS pooling procedures 83 Acetylamine fluorene 162 Action curve 35 Adaptive rule 210 Adsorption 36 Algorithm 79, 159, 160, 166, 177, 178 self-consistency 177, 178 Ł Analysis 2 least squares 122 logit 75, 77, 79, 104, 106, 151 probit 72, 118, 119, 130 statistical 2, 3, 28, 31, 137 Analyte concentration 196, 198 Anova 17, 20, 34, 194, 201 Antibodies 189, 190, 194, 195, 199, 200 species 199 Antigens 89, 189–192, 194, 195 isotope labelled 191 Antilogits 44 Antiserum 122–125 Ł Assay(s) 1 analytical 9, 14 binding 189 comparative 9 digitalis 3
dilution 9, 14, 63, 67 direct 3, 13 estrone 107 free fat cell 31 immunoradiometric 189 indirect 3, 13 logit 79 natriuretic factor 3, 8, 9, 11 parallel 108 line 31 probit 79 qualitative 3 quantal 67, 69–71, 84, 97, 188 quantitative 1, 13 technique 14 two-point 98 validity 14 Asymptotes 195, 198 Asymptotic approximations 133 consistency 187, 211 distribution 5, 20, 30, 57, 105, 122, 129–131, 134, 135, 149, 150, 165, 171 efficiency 20, 56–60, 66, 84, 85, 211, 213 normality 57, 105, 130, 132, 134, 171
optimality 168 properties 57, 70, 150, 211, 213 variance 5, 57, 59, 66, 77, 80, 82, 109, 134–136, 153, 168, 169 Asymptotically optimal 109 unbiased 20, 88, 187 Atrial natriuretic factor 3 Average sample number 148, 150 Backward elimination procedure 34 Bacterial density 135 Ł Bayes (Bayesian) 75 analysis 188 approach 179 binomial estimator 178, 179, 181, 182 design 183 estimator 172, 180, 182, 185, 187 methods 75 nonparametric (estimators) 179, 185, 188 pseudo-estimator 180, 181 Benzopyrene 170 Behrens distribution 7 Bernoulli trials 83, 84, 108 Beta prior 183, 188 product prior 188 Ł Bias 5 squared 75 Binary response 82, 126 Binder heterogeneity 200 Ł Bioassay 1 Bayesian 172, 174, 176, 178, 180, 182, 184, 186, 188 binding 139 carcinogenicity 206 chronic animal 201, 203 fat-cell insulin 27 quantal 57, 85, 97, 172, 188 Biochemical theory 190, 194 Biological assays 1, 3, 31, 70 parallel line 31, 32, 108 types 3 variability 189 Bladder tumors 162, 202, 206 Blum conditions 131 Bootstrap estimators 82 Boundary conditions 138
Calibration 196, 197 formula 197 Carcinogenic experiments 167 Carcinogenic models 151 Carcinogenic rates 156, 157 Carcinogenic studies 207 Carcinogenicity bioassays 206 Cat method 3 Center of gravity 45 Chebyshev 166–168 regression model 168 system 166, 167 Chi-square tests 32, 205 Cochran theorem 33 Coefficient of variation 66, 67, 87 Collinearity 32, 45, 46 Completeness 172 Computing procedure 197 Concavity 81 Ł Concentration 9 analyte 196 dose 203 Confidence 71 band 104–106 bounds 192, 205, 206 ellipsoid 105 intervals or limits 6–11, 30, 31, 37, 64, 67, 82, 84, 88, 96, 97, 107, 112, 122, 123, 130, 140, 143, 150, 160–162, 165, 166, 171, 172, 190, 192–194 asymptotic 30, 143, 165, 166, 171 lower bounds 206 Conjugate prior 178, 183 Consistency 20, 44, 57, 70, 177, 178, 204, 211 Consistent family 174 Constraint 27, 75, 76, 79, 159, 160, 165, 176, 177 linear 159 Contingency table 120–122 models 128 Convergence 30, 129–131, 153, 192 mean 131 quadratic 129, 131 square 129, 131 probability 129, 131 rate 129, 192 strong 129, 132
225
Conversion factor 201 Convex cones 80 programming 159, 160 Covariance 6, 8, 30, 105, 161, 186, 209 asymptotic 30 Coverage probability 84, 87, 112, 144, 211 observed 211 Covariates 11, 125–127 effect 126, 127 Criterion function 153 Cross validation 180, 181 Ł Curve 14 action 35 calibration 191, 195–200 inverse 192 dose-response 28, 30, 31, 36–39, 79–81, 108–111, 126, 127, 183, 201, 202, 204 four parameter 126 growth 113 logistic 40–42, 52, 113 nonlinear 202 parallel 37, 108, 111 polygonal 109 probit response 183 Ł response 25 sigmoid 40 sine 41 standard 27, 190, 192, 193 Urban 41 Curve-fitting 190, 194, 197–200 Curvilinear function 190 relationship 199 DDT 165, 206 Degenerate outcomes 80 Degrees of freedom 6, 7, 10, 12, 29, 33, 34, 52, 98, 105, 120, 144, 205 Delaney clause 156 Delta method 39 Density of organisms 63, 65, 67 Ł Design 2 density 81 experimental 75 most economical 207 optimal 72–75, 80, 166–171 asymptotic 79
Subject Index
parameter values 74 sequential 79, 183 traditional 170 Deviation 8, 17, 18, 20, 23, 27, 28, 38, 70, 71, 74, 87, 123, 127, 138, 144, 196, 197 not significant 18 DFML, polygonalized estimate 109, 110 Dichotomous data 104, 163, 165 Difference quotient 82 Differential approach 46 Digamma function 152 Dilution 2, 9, 14, 63, 64, 66, 67, 71, 91, 135 geometric progression 91 Direct assay 3, 13 Dirichlet 173–175, 177, 179, 180, 185, 188 distribution 175 prior 173, 177, 179, 180, 185 process 173, 185, 186, 188 Discrete delta method 82, 83, 85 estimator 84 Distance function 100–102 Ł Distribution 5 algebraic 60 angular 59 asymptotic 5, 30, 105, 129, 149, 150, 165 Behrens 7, 11 beta 178, 184 binomial 113 Cauchy 57 central t 10, 12, 31 chi-square 69, 144, 205 Dirichlet prior 177, 185, 188 double exponential 68 empirical 85 extreme value 111, 112, 152, 155 fiducial 6 logistic 36, 53, 58–60, 67, 68, 70, 72, 111, 136, 138, 211 normal 21, 31, 47, 59, 67, 68, 103, 161, 206 omega 68–70 one-particle 59 one-point 55 Poisson 63 posterior 175, 176, 178, 188
226
prior 72, 74, 172, 174–178, 188 belief 185 product-beta 188 small sample 149 step function 183 Ł tolerance 40 asymmetric 68 empirical 85 heavy tailed 84 mixed 74 skewed 72 stochastically ordered 188 symmetric 178, 207 two-parameter 67 two-point 183 uniform 56, 60, 65, 67, 69 Distribution-free 109 Dixon-Mood procedure 137 Ł Dose 1 allocation 75, 77, 79 differences 16 equally spaced 45, 62, 83, 111, 137, 141, 145, 179 extrapolated 110 extreme 11, 67, 70 high 70, 156 Ł level 45 log 9, 21, 25–27, 35–37, 71, 72, 84, 87, 118, 119, 141, 151, 163 low 70, 151, 163 median effect 42, 126 metameter 15 mortality curve or line 35, 36 Ł response 3 curves 37, 38, 68, 108, 111, 126, 164, 202 model 31, 81, 125, 127, 164, 201–203, 206 parabolic 80, 81 truncated normal 80, 81 uniform 80, 81 risk 201 safe 156, 157 estimates 161, 163, 202 spacing 141, 173 span 43, 73, 109, 141, 207, 208 virtually safe 205
Subject Index
Dragstadt-Behrens estimator 62, 207 Dynamic programming 183, 184 ED50 26, 28, 42, 53, 70, 71, 80, 82, 84, 87, 88, 94, 96, 97, 109, 117, 136, 140, 141, 177 ED99 67, 70 Effective toxic dose 202, 203 Efficiency 50, 56–60, 66, 70, 79, 84, 85, 135, 150, 182, 183, 196, 207, 211, 213 full 79 loss 135, 150 Eigen vectors 149 Empirical probits 118 Enzyme chemistry 195 Ł Estimate 2 asymptotically normal 5, 20, 50, 52, 64, 100, 132, 135, 168, 205 best 20, 52, 100 consistent 159 efficient 52 iterated 30 interval 65 maximum likelihood 20, 21, 77, 83, 89, 97, 136, 174 large sample properties 209 mean square error 148 root 112 moment 152 nonparametric 112, 174 optimal 183 parametric 112 precision 10 prior 73, 132, 136 ratio 4 risk-efficient 211 unbiased 5, 7 weighted 7 Ł Estimation 1 clinical 189 interval 6, 104, 112, 124, 144, 207, 209, 210 fixed-width interval 207, 209, 210 points on quantal response function 128, 130, 132, 134, 136, 138, 140, 142, 144 potency 26 risk-efficient 207, 211, 213
227
Ł
Estimation (continued) simultaneous trial 27 unbiased 152 uniformly minimum variance 152 Ł Estimator 5 adaptive 180–182 maximum likelihood 20, 50, 59, 63, 105, 117, 140, 159, 173, 208 nonparametric 108, 109, 111, 112, 173, 185, 207 Estrogen preparation 71 Euler constant 64, 152 Euler-MacLaurin formula 53, 56 Excess risk 206 Existence 36, 79, 80, 130, 164, 165, 169 Experiment balanced 72 two-dose 96 Experimental error 197 Ł units 13 Exponential approximation 152, 153, 155 variable 153 Exposure time 161 Extrapolation 162–164, 168, 170, 201, 202, 205, 206 low-dose 164, 201, 202, 205, 206 Extreme value distribution 111, 152, 155 Fatigue trials 129 Fiducial distribution 6 approach 6 Fieller theorem 5, 8, 9, 37, 143 improvisation 8 Finite experiment 54, 57, 59, 61 termination 211, 212 Fixation 36 Forced choice 96 Fractile 31, 76 Freundlich formula 36 Ł Function 8 curvilinear 190 nuisance 157 Gauss-Newton method 199 Geometric progression 91 Gibb sampler 188 GLIM 95, 96 Global maximization 165
Subject Index
Goodness-of-fit 31, 52, 69, 70, 165, 206 of the model 120, 205, 206 Haldane discrepancy 49 Hazard rate 156, 157 Weibull 157 Hellinger distance 49 Hepatic angiosarcoma 206 Hepatoma 206 Heteroscedasticity 198, 201 Homoscedasticity 18, 19, 23, 24, 198 Horizontal distance 6, 37, 97, 98, 108 Hyperbola 195 Hypothesis alternative 34 null 19, 34 Immunoassay 27, 189–196, 198, 200, 202, 204, 206 Independent variables 30 Inference, statistical 117, 189 Infinite experiment 54, 57–59, 61, 207 Inflection point 42, 55, 198, 199 Information 8, 47, 50, 57, 61, 71, 72, 75, 76, 80, 84, 88, 115, 123, 124, 126, 138, 140, 143, 144, 151, 153, 158, 166, 172, 173, 176, 182, 196, 208 matrix 47, 50, 76, 80, 84, 88, 115, 126, 143, 153, 208 maximal 71 observed 84, 88 prior 75, 166, 172, 173, 182 Insecticide 9, 13, 70, 94, 95, 137 Insulin 1, 27, 189, 190, 193 immunoassay 193 Intercept 15, 17, 20, 27, 30, 52, 114, 127, 198 Integration by parts 86, 208 Internal consistency 44 Interpolation 8, 134, 170, 173, 176, 177, 196 interval estimation 104, 112, 209, 210 quadratic 134 Invariant 20, 42, 135 Inverse problem 171 Isotonic regression 176, 180, 187 Isotope displacement 189–191, 193
228
Linear approximation 132 Iteration 8, 9, 23–25, 29–31, 34, 44, Bayes estimators 185, 187 49–51, 89, 96, 112, 114, 115, 128, 133, space 185 159, 192, 193, 207, 208 Liver hepatoma 206 two-step 159 Ł Logistic 16 Iterative process 24, 42, 44, 69, 95, 104, analysis 125 177, 192 difference 122 Gauss-Newton method 192, 199 effects 124 maximization subroutine 204 equations 15 scheme 21, 23, 34 function 16, 30, 40, 42, 47, 53, 198, 199 solution 21 model 27, 28, 31, 69, 70, 120–122, 194, 199 Kiefer-Wolfowitz process 130 parameters 122 Kinetics 201–203, 205 response curve 31, 75, 91, 104, 105, linear 201–203, 205 146, 198 Michael-Menton 205 sum 125 nonlinear 201–203, 205 technique 198 Ł Kolmogorov inequality 132 Logit 29 Ł Kuhn-Tucker equation 159 approach 35 Kullback-Leibler separator 49 assay 79 estimate 100 optimal property 100 LACKFIT program 31 method 36, 51, 52, 116 Lagrange 168, 176 modified 51 method 176 ratio 127 polynomial 168 scores estimator 35, 84 Langmuir formula 36 Ł Loss function 117, 172, 173, 183, 185 LD5 152 quadratic 117, 136, 172, 178, 183 LD50 31, 42, 53, 54, 67, 70–73, 75, 93, 107, 108, 118, 137, 138, 141–143, 146, Lung cancer 125 Luteinizing hormone 195 151, 207 LDp 145, 149 Least squares 33, 40, 50, 68, 89, 96, 98, Mantel-Bryan model 162, 164 106, 107, 122, 160, 166, 168, 169, 196, Markov chain 147–150 198–201, 204, 207 Matrix 30, 47, 50, 76, 80, 84, 88, 105, estimates 33, 107, 193 114, 115, 126, 143, 149, 153, 160, 161, nonlinear 169 186, 195, 208 unweighted 160 inverse 30, 46, 75, 77, 91, 105, 115, weighted 50, 106, 107, 124, 166, 198, 126, 143, 171, 190, 192, 195 199, 201 positive definite 105, 160 Levenberg-Marquardt method 143 second derivatives 114, 160 Ligand 189, 200 transition probabilities 149 Ł Likelihood 16 Vandermonde 160 equation 21, 43, 44, 48–50, 94, 95, variance-covariance 30, 105, 161, 186, 114, 115, 143, 208 209 function 21, 69, 109, 112, 124, 126, consistent 187 138, 142, 158, 164, 165, 176 unbiased asymptotically 187 Ł ratio 29, 31, 33, 165, 205 Maximum likelihood 16 bias corrected 117, 118 test procedure 205
Subject Index
229
Ł
Maximum likelihood (continued) consistent 187 Ł estimates 30 fit 170 iterated 23 large-sample properties 126, 127, 140 strong consistency 204 uniqueness 160, 204 Mean square 7, 17, 20 error 6, 59, 84, 112, 117, 118, 133, 134, 141, 148, 150, 182, 194 root 87 Means 7, 188 Measures of association 121 Median 42, 56, 74, 79, 84, 109, 117, 126, 127, 136, 182, 183 effective dose 182 Ł Method or approach 1 best asymptotically normal 20 cat 3 comparison of various 116 Davis 42 differentials 115 dilution 135 Gauss-Newton 199 graphical 189 interpolatory 189 least squares 40, 107, 168, 196, 204, 207 linear regression 196, 199 logit 116 maximum likelihood 16, 20, 28, 43, 50–52, 63, 69, 70, 79, 89, 94–96, 104, 105, 138, 140, 151, 168, 207 minimum chi-square 20, 49, 51, 117, 118 logit chi-square 50–52, 117 modified chi-square 49 probit chi-square 117 moments 20 nonparametric 111, 207 parametric 67, 111, 207 parametric Bayes 179 Pearl 42 probit 116 pseudo-Bayes 181 rectangular 116 Schultz 42
Subject Index
scoring 69 sequential 79, 151 Michaelis-Menton kinetic equation 202 model 205 Minimax 136 Minimum chi-square method, see Method or approach Minimum modified chi-square, see Method or approach Ł Mode 1 posterior 178 prior 177 Ł Model 16 compartmental 202 growth 42 logistic 27, 28, 31, 69, 70, 120–122, 194, 199 four parameter 27, 28, 203 two and three parameter 199 logit 76, 95, 97, 105, 117, 140, 190 multi-stage 203 Poisson 135 probit 95, 117, 151 sigmoid 40 Moments central 66 Monotone 15, 70, 83, 101, 102, 109, 148, 178 increasing 78, 81 Monotonicity 117, 118, 176, 187 Monte Carlo studies 57, 82, 84, 107, 109, 111, 112, 117, 127, 143, 144, 148, 163, 165, 211, 213 Mortality line 36 MUD procedure 150, 151 Multinomial techniques 180 formulas 180 Multiple sampling 150 Myocardial infarction 106 Natural mortality 49, 89, 94, 95, 97, 118 parameter space 204 Necropsy 158 Nematode species 99, 100, 119 Newton method 29, 87, 130, 199 Raphson iteration 112 Nitrite 202 Noniterative procedure 106 Nonlinear kinetics 201–203, 205
230
Percentile estimator 57, 70 Pharmacology 95 Phase 144, 189 Phasing factor 147 Piece-wise linear 85 Pitman measure of closeness 118 Point of accumulation 159 Point-wise minimization 185 Polygonals 109–111, 197 Polynomials 110, 116, 157, 164, 166, 168, 169, 171, 197, 184, 188, 199–201 Pool-adjacent-violators algorithm 187 Pooled standard deviation 8 Posterior density 20, 175–177, 184, 188 Objective function 159 features 188 Odds ratio 124 mean 184 Ogive 188 variance 184 Omega distribution 68–70 homogeneity 9 One-hit linear model 201, 205 Potency 1–4, 9, 14–16, 25–27, 29–31, Ł Optimal 31 37–39, 90, 97–99, 108, 109, 111, 177, asymptotic design 79 188–190 center 75 curve 188 choice of doses 79 estimate 2, 31 design 72–75, 79, 80, 166–171 ratio 14, 38, 39 dose level 75, 79 relative 2, 9, 13, 14, 26, 90, 97–99, sample size 144 108, 109, 111 solutions 79 Precision space 75 estimate 10, 11, 125 stopping rule 130 assay 195 strategy 79 Ł Preparation 2 Optimization 74 standard 2, 9, 14, 15, 24, 25, 28, 31, Optimizing points 166, 167 36–39, 90, 97, 108 Order statistics 124, 125 test 2, 3, 9, 14, 15, 25, 27, 28, 34, 38, Outgoing process 202, 203 39, 90, 97, 98, 110 Outliers 32, 200, 201 Prior 20, 72–75, 132, 136, 141, 142, 166, 169, 171–183, 185, 188 P value 120 belief 185 Parallelism 31, 32, 98 density 20, 172, 175, 176 Parameter configuration 212 estimate 132, 136 Parametrize 76, 127, 134, 176 information 75, 166, 172, 173, 182 Parametrization 76 mean 182 Partial derivative 21, 69, 78, 82, 100–103, Probit 31, 40, 47, 70, 72, 79, 81, 82, 84, 176, 177 88, 89, 95, 97, 100, 101, 103, 106, Pathogenic bacteria 122 116–120, 130, 138, 151, 153, 162–164, Peaks 81, 146–149 183 Pearson chi-square 31 analysis 72, 118, 119, 130 Penalty function 177 assay 79 bound 189 free 189 estimate 100, 101, 103 Nonlinearity 98 Nonparallelism 32 Nonparametric 53, 57, 108, 109, 111, 112, 134, 157, 162, 163, 165, 173, 177, 179, 185, 188, 207 Bayes approach 173 estimators 112, 185 maximum likelihood 109 tests 165 Normal approximation 11 Normal equation 98 Normit 70 Numerical integration 12, 69
Subject Index
231
Probit (continued) model 81, 82 optimal property 100 procedure 31 value 118 Prospective study 122 Provisional line 114 value 44, 89 Psychophysical experiment 96 Qualitative assays 3 Quality control 196, 201 Quantal 1, 3, 13, 23, 31, 40, 57, 67, 69–72, 84, 85, 97, 108, 109, 128, 130, 132, 134–136, 138, 140, 142, 144, 172, 183, 188, 202 Quantile 79, 82, 109, 129, 151, 188, 206, 207 extreme 151 Quantit 67–70 method 70 Radioligand assays 189 Radioimmunoassay 189, 190, 192, 194–196, 198, 200, 202, 204, 206 binder-ligand 200 Radioisotope 189 Random walk design 150 Range of doses 15, 41, 64, 70, 97, 152 Ratio estimates 4, 5 RBAN 52, 100, 101, 103, 104 Recursion 130, 134 Recurrence relation 149 Reed-Muench estimate 62, 207 Regression 3, 6, 7, 11, 13, 14, 16, 17, 20, 22–27, 29–34, 90, 97, 98, 105, 117, 118, 120, 126, 127, 130, 136, 141, 166–169, 176, 180, 187, 194, 196, 198, 199, 201, 208 analysis 30 coefficients 6, 7, 27, 117, 118, 126 deviation from linear 17, 20, 27 equation 17 function 130, 166, 167, 169 isotonic 176, 180, 187 linear 11, 17, 26, 27, 32 unweighted 24
Subject Index
lines 6, 31, 33 logistic 27, 29, 194, 208 metametric 16 model 32, 97, 168 nonlinear 11, 16, 17 relationship 14 robust methods 201 step-wise 33 weighted 23, 27, 30, 201 Relative potency, estimation 97, 99 Reparametrize 134, 176 Residual sum of squares 23, 33 Residuals 125 Ł Response 1 angular 113, 114, 116, 118, 120, 122, 124, 126 dose 3, 13, 14, 16, 18, 20, 22, 24, 26, 28, 30–32, 34, 36–39, 53, 68, 70, 79–82, 96, 108–111, 125–127, 163–165, 167, 183, 198, 201–204, 206 graded 71, 72 logistic 75, 91, 104, 105, 117, 183, 198, 207, 208, 210, 212 mean 2, 16, 26, 37, 38 metameter 15, 22 multinomial 100 natural 96 observed 15 ordered polytomous 188 polychotomous quantal 70 quantal 1, 13, 23, 31, 40, 57, 67, 70, 72, 108, 109, 128, 130, 132, 134–136, 138, 140, 142, 144, 183, 202 quantitative 3, 16, 24, 40 toxic 201, 202, 204 type D 145 type U 145 working 21–23 Riemann sum 81 Retenone 89 Retrospective study 122 Risk-efficiency 213 Robbins-Monro estimator 136 modification 136 procedure 135–137, 143, 144 process 128
232
Robust 72–74, 84, 109, 150, 151, 163, 166, 182, 201 estimators 84 Robustness 74, 84, 166 Rule, adaptive 210, 212 S shaped 35 Saccharin 206 SAS programs 31 Scale parameter 72, 76, 79, 91, 111, 136, 152, 183, 207, 211 Scatter 196 Search routine 68 Second order efficiency 50 Semi-parametric model 127 Sensitivity testing 129 Sequential up and down method 145, 146, 148, 150–152, 154 Sheppard correction 53 Sigmoid (systematic) 28, 35, 40 Similarity 14, 15 Simulation, see Monte Carlo studies Skew symmetric 41 Skewness 51, 116 Slope 15, 17, 20, 26–28, 30, 31, 38, 52, 80, 99, 114, 118, 126, 127, 149, 162, 196, 198 Smoothed GS estimator 110 Ł Solutions 6 closed form 143 iterative 21, 143 Spacing 53, 62, 73, 75, 141, 147, 148, 170, 173 Spearman-Karber estimator 52, 53, 55–63, 71, 73, 108, 144, 150, 173, 174, 179, 180, 182, 207, 209 generalized 110 modified 109 properties 207 smoothed generalized 110 trimmed 109, 110 Splines 197 Squared error 87, 107, 117, 118, 133, 134, 136, 141, 175 Standard error 6, 9, 17, 20, 37, 47, 70, 82, 84, 87, 88, 94, 99, 122, 140 of slope 26, 27
Subject Index
Statistical package 31 procedure 13 Steady state 149 conditions 203 solution 203 Stirling formula 133 Stimulus 2, 40 Stochastic approximation 129, 136, 182 Stopping rules 130, 142, 144, 146, 182, 183, 213 natural 144 time 213 Strategy, optimal 76, 79, 183 Strong law of large numbers 21 Sufficient statistics 20, 36, 45, 51, 117, 124 minimal 44, 45 Sukhatme D-statistic 7 Sum of squares 17, 20, 23, 33, 34, 98, 141, 194 error 33, 194 lack of fit 33 residual 23, 33 Super rats 72 Survival 13, 17, 19, 67, 70, 141 data 19 Survivor functions 155 Susceptibility 35 Suspension 63, 67 Taylor series 51 Terminal decision rule 183 Ł Test 2 Bartlett 18 likelihood ratio 165, 205 nonparametric 165 sensitivity 129 simultaneous 27 Time to tumor 156, 158, 160, 166 Ł Tolerance 11 relationship 11 Topology 172 Toxicity 76, 89 Transfer method 45 of response 45 Transformation 10, 15, 16, 18–20, 23, 24, 30, 36, 40, 41, 52, 68, 70, 113, 114, 116, 178, 198, 201, 202
233
Transformation (continued) angular 19, 41, 113, 114, 116 arc sine 20 inverse logit 105 log 41, 68 logit 198 metametric 16, 23, 36 monotone 70 pharmacokinetic 202 scedasticity 23, 24 Transforms 10, 15, 16, 18–20, 23, 24, 27, 29–31, 36, 40, 41, 52, 68, 70, 96, 121, 123, 125 inverse logit 105 logistic 121, 125 probit 162 weighted 123, 125 Transition probabilities 149 rate 203 Tumor incidence 157, 162 Turnbull-Bayes estimators 182 Turning points 146–149 Unimodal 55, 81, 176 Uniqueness 160, 164, 204 Up and down method 137, 140–152, 154, 182, 183 modification 141
modified 140–144 procedure 130, 137, 140, 141, 143, 144, 149, 182, 183 UDTR 145, 149 Validity, statistical 31 Valleys 146, 147, 149 Ł Variance 5 asymptotic 5, 57, 59, 66, 77, 80, 82, 109, 134–136, 153, 168, 169 error 132 estimate (bias corrected) 183 heterogeneity 18, 19, 200 homogeneity 9, 19, 20, 24 homoscedasticity 18, 19, 198 of ratio 92 pooled 10 Vinyl chloride 170, 201, 202, 206 Weibull model 162, 163, 202, 203 parametric 162, 163 nonparametric 162, 163 Wallis product 134 Weight function 22, 111, 185 Weighted mean 123–125 Weighting coefficient 23 Wesley estimate 182