Log-Linear Models and Logistic Regression
Ronald Christensen
Springer
To Sharon and Fletch
Preface to the Second Edition
As the new title indicates, this second edition of Log-Linear Models has been modified to place greater emphasis on logistic regression. In addition to new material, the book has been radically rearranged. The fundamental material is contained in Chapters 1-4. Intermediate topics are presented in Chapters 5 through 8. Generalized linear models are presented in Chapter 9. The matrix approach to log-linear models and logistic regression is presented in Chapters 10-12, with Chapters 10 and 11 at the applied Ph.D. level and Chapter 12 doing theory at the Ph.D. level.

The largest single addition to the book is Chapter 13 on Bayesian binomial regression. This chapter includes not only logistic regression but also probit and complementary log-log regression. With the simplicity of the Bayesian approach and the ability to do (almost) exact small sample statistical inference, I personally find it hard to justify doing traditional large sample inferences. (Another possibility is to do exact conditional inference, but that is another story.) Naturally, I have cleaned up the minor flaws in the text that I have found.

All examples, theorems, proofs, lemmas, etc. are numbered consecutively within each section with no distinctions between them, thus Example 2.3.1 will come before Proposition 2.3.2. Exercises that do not appear in a section at the end have a separate numbering scheme. Within the section in which it appears, an equation is numbered with a single value, e.g., equation (1). When reference is made to an equation that appears in a different section, the reference includes the appropriate chapter and section, e.g., equation (2.1.1).
The primary prerequisite for using this book is knowledge of analysis of variance and regression at the masters degree level. It would also be advantageous to have some prior familiarity with the analysis of two-way tables of count data. Christensen (1996a) was written with the idea of preparing people for this book and for Christensen (1996b). In addition, familiarity with masters level probability and mathematical statistics would be helpful, especially for the later chapters. Sections 9.3, 10.2, 11.6, and 12.3 use ideas of the convergence of random variables. Chapter 12 was originally the last chapter in my linear models book, so I would recommend a good course in linear models before attempting that. A good course in linear models would also help for Chapters 10 and 11.

The analysis of logistic regression and log-linear models is not possible without modern computing. While it certainly is not the goal of this book to provide training in the use of various software packages, some examples of software commands have been included. These focus primarily on SAS and BMDP, but include some GLIM (of which I am still very fond).

I would particularly like to thank Ed Bedrick for his help in preparing this edition and Ed and Wes Johnson for our collaboration in developing the material in Chapter 13. I would also like to thank Turner Ostler for providing the trauma data and his prior opinions about it.

Most of the data, and all of the larger data sets, are available from STATLIB as well as by anonymous ftp. The web address for the datasets option in STATLIB is http://www.stat.cmu.edu/datasets/. The data are identified as "christensen-llm". To use ftp, type ftp stat.unm.edu and login as "anonymous", enter cd /pub/fletcher and either get llm.tar.Z for Unix machines or llm.zip for a DOS version. More information is available from the file "readme.llm" or at http://stat.unm.edu/∼fletcher, my web homepage.

Ronald Christensen
Albuquerque, New Mexico
February, 1997
BMDP Statistical Software is distributed by SPSS Inc., 444 N. Michigan Avenue, Chicago, IL, 60611, telephone: (800) 543-2185.

MINITAB is a registered trademark of Minitab, Inc., 3081 Enterprise Drive, State College, PA 16801, telephone: (814) 238-3280, telex: 881612.

MSUSTAT is marketed by the Research and Development Institute Inc., Montana State University, Bozeman, MT 59717-0002, Attn: R.E. Lund.
Preface to the First Edition
This book examines log-linear models for contingency tables. Logistic regression and logistic discrimination are treated as special cases and generalized linear models (in the GLIM sense) are also discussed. The book is designed to fill a niche between basic introductory books such as Fienberg (1980) and Everitt (1977) and advanced books such as Bishop, Fienberg, and Holland (1975), Haberman (1974a), and Santner and Duffy (1989). It is primarily directed at advanced Masters degree students in Statistics but it can be used at both higher and lower levels. The primary theme of the book is using previous knowledge of analysis of variance and regression to motivate and explicate the use of log-linear models. Of course, both the analogies and the distinctions between the different methods must be kept in mind.

[From the first edition, Chapters I, II, and III are about the same as the new 1, 2, and 3. Chapter IV is now Chapters 5 and 6. Chapter V is now 7, VI is 10, VII is 4 (and the sections are rearranged), VIII is 11, IX is 8, X is 9, and XV is 12.]

The book is written at several levels. A basic introductory course would take material from Chapters I, II (deemphasizing Section II.4), III, Sections IV.1 through IV.5 (eliminating the material on graphical models), Section IV.10, Chapter VII, and Chapter IX. The advanced modeling material at the end of Sections VII.1, VII.2, and possibly the material in Section IX.2 should be deleted in a basic introductory course. For Masters degree students in Statistics, all the material in Chapters I through V, VII, IX, and X should be accessible. For an applied Ph.D. course or for advanced Masters students, the material in Chapters VI and VIII can be incorporated.

Chapter VI recapitulates material from the first five chapters using matrix notation. Chapter VIII recapitulates Chapter VII. This material is necessary (a) to get standard errors of estimates in anything other than the saturated model, (b) to explain the Newton-Raphson (iteratively reweighted least squares) algorithm, and (c) to discuss the weighted least
squares approach of Grizzle, Starmer, and Koch (1969). I also think that the more general approach used in these chapters provides a deeper understanding of the subject. Most of the material in Chapters VI and VIII requires no more sophistication than matrix arithmetic and being able to understand the definition of a column space. All of the material should be accessible to people who have had a course in linear models.

Throughout the book, Chapter XV of Christensen (1987) is referenced for technical details. For completeness, and to allow the book to be used in nonapplied Ph.D. courses, Chapter XV has been reprinted in this volume under the same title, Chapter XV.

The prerequisites differ for the various courses described above. At a minimum, readers should have had a traditional course in statistical methods. To understand the vast majority of the book, courses in regression, analysis of variance, and basic statistical theory are recommended. To fully appreciate the book, it would help to already know linear model theory.

It is difficult for me to understand but many of my acquaintance view me as quite opinionated. While I admit that I have not tried to keep my opinions to myself, I have tried to clearly acknowledge them as my opinions.

There are many people I would like to thank in connection with this work. My family, Sharon and Fletch, were supportive throughout. Jackie Damrau did an exceptional job of typing the first draft. The folks at BMDP provided me with copies of 4F, LR, and 9R. MINITAB provided me with Versions 6.1 and 6.2. Dick Lund gave me a copy of MSUSTAT. All of the computations were performed with this software or GLIM. Several people made valuable comments on the manuscript; these include Rahman Azari, Larry Blackwood, Ron Schrader, and Elizabeth Slate. Joe Hill introduced me to statistical applications of graph theory and convinced me of their importance and elegance. He also commented on part of the book. My editors, Steve Fienberg and Ingram Olkin, were, as always, very helpful. Like many people, I originally learned about log-linear models from Steve's book.

Two people deserve special mention for how much they contributed to this effort. I would not be the author of this book were it not for the amount of support provided in its development by Ed Bedrick and Wes Johnson. Wes provided much of the data used in the examples.

I suppose that I should also thank the legislature of the state of Montana. It was their penury, while I worked at Montana State University, that motivated me to begin the project in the spring of 1987. If you don't like the book, blame them!

Ronald Christensen
Albuquerque, New Mexico
April 5, 1990 (Happy Birthday Dad)
Contents

Preface to the Second Edition
Preface to the First Edition

1 Introduction
  1.1 Conditional Probability and Independence
  1.2 Random Variables and Expectations
  1.3 The Binomial Distribution
  1.4 The Multinomial Distribution
  1.5 The Poisson Distribution
  1.6 Exercises

2 Two-Dimensional Tables and Simple Logistic Regression
  2.1 Two Independent Binomials
    2.1.1 The Odds Ratio
  2.2 Testing Independence in a 2 × 2 Table
    2.2.1 The Odds Ratio
  2.3 I × J Tables
    2.3.1 Response Factors
    2.3.2 Odds Ratios
  2.4 Maximum Likelihood Theory for Two-Dimensional Tables
  2.5 Log-Linear Models for Two-Dimensional Tables
    2.5.1 Odds Ratios
  2.6 Simple Logistic Regression
    2.6.1 Computer Commands
  2.7 Exercises

3 Three-Dimensional Tables
  3.1 Simpson's Paradox and the Need for Higher-Dimensional Tables
  3.2 Independence and Odds Ratio Models
    3.2.1 The Model of Complete Independence
    3.2.2 Models with One Factor Independent of the Other Two
    3.2.3 Models of Conditional Independence
    3.2.4 A Final Model for Three-Way Tables
    3.2.5 Odds Ratios and Independence Models
  3.3 Iterative Computation of Estimates
  3.4 Log-Linear Models for Three-Dimensional Tables
    3.4.1 Estimation
    3.4.2 Testing Models
  3.5 Product-Multinomial and Other Sampling Plans
    3.5.1 Other Sampling Models
  3.6 Model Selection Criteria
    3.6.1 R²
    3.6.2 Adjusted R²
    3.6.3 Akaike's Information Criterion
  3.7 Higher-Dimensional Tables
    3.7.1 Computer Commands
  3.8 Exercises

4 Logistic Regression, Logit Models, and Logistic Discrimination
  4.1 Multiple Logistic Regression
    4.1.1 Informal Model Selection
  4.2 Measuring Model Fit
    4.2.1 Checking Lack of Fit
  4.3 Logistic Regression Diagnostics
  4.4 Model Selection Methods
    4.4.1 Computations for Nonbinary Data
    4.4.2 Computer Commands
  4.5 ANOVA Type Logit Models
    4.5.1 Computer Commands
  4.6 Logit Models for a Multinomial Response
  4.7 Logistic Discrimination and Allocation
  4.8 Exercises

5 Independence Relationships and Graphical Models
  5.1 Model Interpretations
  5.2 Graphical and Decomposable Models
  5.3 Collapsing Tables
  5.4 Recursive Causal Models
  5.5 Exercises

6 Model Selection Methods and Model Evaluation
  6.1 Stepwise Procedures for Model Selection
  6.2 Initial Models for Selection Methods
    6.2.1 All s-Factor Effects
    6.2.2 Examining Each Term Individually
    6.2.3 Tests of Marginal and Partial Association
    6.2.4 Testing Each Term Last
  6.3 Example of Stepwise Methods
    6.3.1 Forward Selection
    6.3.2 Backward Elimination
    6.3.3 Comparison of Stepwise Methods
    6.3.4 Computer Commands
  6.4 Aitkin's Method of Backward Selection
  6.5 Model Selection Among Decomposable and Graphical Models
  6.6 Use of Model Selection Criteria
  6.7 Residuals and Influential Observations
    6.7.1 Computations
    6.7.2 Computing Commands
  6.8 Drawing Conclusions
  6.9 Exercises

7 Models for Factors with Quantitative Levels
  7.1 Models for Two-Factor Tables
    7.1.1 Log-Linear Models with Two Quantitative Factors
    7.1.2 Models with One Quantitative Factor
  7.2 Higher-Dimensional Tables
    7.2.1 Computing Commands
  7.3 Unknown Factor Scores
  7.4 Logit Models
  7.5 Exercises

8 Fixed and Random Zeros
  8.1 Fixed Zeros
  8.2 Partitioning Polytomous Variables
  8.3 Random Zeros
  8.4 Exercises

9 Generalized Linear Models
  9.1 Distributions for Generalized Linear Models
  9.2 Estimation of Linear Parameters
  9.3 Estimation of Dispersion and Model Fitting
  9.4 Summary and Discussion
  9.5 Exercises

10 The Matrix Approach to Log-Linear Models
  10.1 Maximum Likelihood Theory for Multinomial Sampling
  10.2 Asymptotic Results
  10.3 Product-Multinomial Sampling
  10.4 Inference for Model Parameters
  10.5 Methods for Finding Maximum Likelihood Estimates
  10.6 Regression Analysis of Categorical Data
  10.7 Residual Analysis and Outliers
  10.8 Exercises

11 The Matrix Approach to Logit Models
  11.1 Estimation and Testing for Logistic Models
  11.2 Model Selection Criteria for Logistic Regression
  11.3 Likelihood Equations and Newton-Raphson
  11.4 Weighted Least Squares for Logit Models
  11.5 Multinomial Response Models
  11.6 Asymptotic Results
  11.7 Discrimination, Allocation, and Retrospective Data
  11.8 Exercises

12 Maximum Likelihood Theory for Log-Linear Models
  12.1 Notation
  12.2 Fixed Sample Size Properties
  12.3 Asymptotic Properties
  12.4 Applications
  12.5 Proofs of Lemma 12.3.2 and Theorem 12.3.8

13 Bayesian Binomial Regression
  13.1 Introduction
  13.2 Bayesian Inference
    13.2.1 Specifying the Prior and Approximating the Posterior
    13.2.2 Predictive Probabilities
    13.2.3 Inference for Regression Coefficients
    13.2.4 Inference for LDα
  13.3 Diagnostics
    13.3.1 Case Deletion Influence Measures
    13.3.2 Model Checking
    13.3.3 Link Selection
    13.3.4 Sensitivity Analysis
  13.4 Posterior Computations and Sample Size Calculation

Appendix: Tables
  A.1 The Greek Alphabet
  A.2 Tables of the χ² Distribution

References

Author Index

Subject Index
1 Introduction
This book is concerned with the analysis of cross-classified categorical data using log-linear models and with logistic regression. Log-linear models have two great advantages: they are flexible and they are interpretable. Log-linear models have all the modeling flexibility that is associated with analysis of variance and regression. They also have natural interpretations in terms of odds and frequently have interpretations in terms of independence. This book also examines logistic regression and logistic discrimination, which typically involve the use of continuous predictor variables. Actually, these are just special cases of log-linear models.

There is a wide literature on log-linear models and logistic regression and a number of books have been written on the subject. Some additional references on log-linear models that I can recommend are: Agresti (1984, 1990), Andersen (1991), Bishop, Fienberg, and Holland (1975), Everitt (1977), Fienberg (1980), Haberman (1974a), Plackett (1981), Read and Cressie (1988), and Santner and Duffy (1989). Cox and Snell (1989) and Hosmer and Lemeshow (1989) have written books on logistic regression. One reason I can recommend these is that they are all quite different from each other and from this book. There are differences in level, emphasis, and approach. This is by no means an exhaustive list; other good books are available.

In this chapter we review basic information on conditional independence, random variables, expected values, variances, standard deviations, covariances, and correlations. We also review the distributions most commonly used in the analysis of contingency tables: the binomial, the multinomial, product multinomials, and the Poisson. Christensen (1996a, Chapter 1) contains a more extensive review of most of this material.
1.1 Conditional Probability and Independence
This section introduces two subjects that are fundamental to the analysis of count data. Both subjects are quite elementary, but they are used so extensively that a detailed review is in order. One subject is the definition and use of odds. We include as part of this subject the definition and use of odds ratios. The other is the use of independence and conditional independence in characterizing probabilities. We begin with a discussion of odds. Odds will be most familiar to many readers from their use in sporting events. They are not infrequently confused with probabilities. (I once attended an American Statistical Association chapter meeting at which a government publication on the Montana state lottery was disbursed that presented probabilities of winning but called them odds of winning.) In log-linear model analysis and logistic regression, both odds and ratios of odds are used extensively. Suppose that an event, say, the sun rising tomorrow, has a probability p. The odds of that event are Odds =
Pr(Event Occurs) / Pr(Event Does Not Occur) = p / (1 − p) .
Thus, supposing the probability that the sun will rise tomorrow is .8, the odds that the sun will rise tomorrow are .8/.2 = 4. Writing 4 as 4/1, it might be said that the odds of the sun rising tomorrow are 4 to 1. The fact that the odds are greater than one indicates that the event has a probability of occurring greater than one-half. Conversely, if the odds are less than one, the event has probability of occurring less than one-half. For example, the probability that the sun will not rise tomorrow is 1 − .8 = .2 and the odds that the sun will not rise tomorrow are .2/.8 = 1/4. The larger the odds, the larger the probability. The closer the odds are to zero, the closer the probability is to zero. In fact, for probabilities and odds that are very close to zero, there is essentially no difference between the numbers. As for all lotteries, the probability of winning big in the Montana state lottery was very small. Thus, the mistake alluded to above is of no practical importance. On the other hand, as probabilities get near one, the corresponding odds approach infinity. Given the odds that an event occurs, the probability of the event is easily obtained. If the odds are O, then the probability p is easily seen to be p=
O / (O + 1) .
For example, if the odds of breaking your wrist in a severe bicycle accident are .166, the probability of breaking your wrist is .166/1.166 = .142 or about 1/7. Note that even at this level, the numerical values of the odds and the probability are similar.
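These conversions between odds and probabilities are easy to automate. The following short Python sketch is mine, not part of the original text; it simply codes the two formulas above.

    # Convert a probability to odds and an odds value back to a probability.
    def odds(p):
        return p / (1 - p)

    def prob(o):
        return o / (o + 1)

    print(odds(0.8))    # sun rises tomorrow: 4.0
    print(prob(0.166))  # broken wrist: about 0.142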
Examining odds really amounts to a rescaling of the measure of uncertainty. Probabilities between zero and one half correspond to odds between zero and one. Probabilities between one half and one correspond to odds between one and infinity. Another convenient rescaling is the log of the odds. Probabilities between zero and one half correspond to log odds between minus infinity and zero. Probabilities between one half and one correspond to log odds between zero and infinity. The log odds scale is symmetric about zero just as probabilities are symmetric about one half. One unit above zero is comparable to one unit below zero. From above, the log odds that the sun will rise tomorrow are log(4), while the log odds that it will not rise are log(1/4) = − log(4). These numbers are equidistant from the center 0. This symmetry of scale fails for the odds. The odds of 4 are three units above the center 1, while the odds of 1/4 are three-fourths of a unit below the center. For most mathematical purposes, the log odds are a more natural transformation than the odds.

Example 1.1.1. N.F.L. Football
On January 5, 1990, I decided how much of my meager salary to bet on the upcoming Superbowl. There were eight teams still in contention. The Albuquerque Journal reported Harrah's Odds for each team. The teams and their odds are given below.

    Team                           Odds
    San Francisco Forty-Niners     even
    Denver Broncos                 5 to 2
    New York Giants                3 to 1
    Cleveland Browns               9 to 2
    Los Angeles Rams               5 to 1
    Minnesota Vikings              6 to 1
    Buffalo Bills                  8 to 1
    Pittsburgh Steelers            10 to 1
These odds were designed for the benefit of Harrah’s and were not really anyone’s idea of the odds that the various teams would win. (This will become all too clear later.) Nonetheless, we examine these odds as though they determine probabilities for winning the Superbowl as of January 5, 1990, and their implications for my early retirement. The discussion of betting is quite general. I have no particular knowledge of how Harrah’s works these things. The odds on the Vikings are 6 to 1. These are actually the odds that the Vikings will not win the Superbowl. The odds are a ratio, 6/1 = 6. The probabilities are Pr (Vikings do not win) =
6 / (6 + 1) = 6/7
and

    Pr(Vikings win) = (1/6) / [(1/6) + 1] = 1/(1 + 6) = 1/7 .

Similarly, the odds on Denver are 5 to 2 or 5/2. The probabilities are

    Pr(Broncos do not win) = (5/2) / [(5/2) + 1] = 5/(5 + 2) = 5/7

and

    Pr(Broncos win) = (2/5) / [(2/5) + 1] = 2/(5 + 2) = 2/7 .
San Francisco is even money, so their odds are 1 to 1. The probabilities of winning for all eight teams are given below.

    Team                           Probability of Winning
    San Francisco Forty-Niners     .50
    Denver Broncos                 .29
    New York Giants                .25
    Cleveland Browns               .18
    Los Angeles Rams               .17
    Minnesota Vikings              .14
    Buffalo Bills                  .11
    Pittsburgh Steelers            .09
There is a peculiar thing about these probabilities: They should add up to 1 but do not. One of these eight teams had to win the 1990 Superbowl, so the probability of one of them winning must be 1. The eight events are disjoint, e.g., if the Vikings win, the Broncos cannot, so the sum of the probabilities should be the probability that any of the teams wins. This leads to a contradiction. The probability that any of the teams wins is .50 + .29 + .25 + .18 + .17 + .14 + .11 + .09 = 1.73 ≠ 1. All of the odds have been deflated. The probability that the Vikings win should not be .14 but .14/1.73 = .0809. The odds against the Vikings should be (1 − .0809)/.0809 = 11.36. Rounding this to 11 gives the odds against the Vikings as 11 to 1 instead of the reported 6 to 1.

This has severe implications for my early retirement. The idea behind odds of 6 to 1 is that if I bet $100 on the Vikings and they win, I should win $600 and also have my original $100 returned. Of course, if they lose I am out my $100. According to the odds calculated above, a fair bet would be for me to win $1100 on a bet of $100. (Actually, I should get $1136 but what is $36 among friends.) Here, "fair" is used in a
technical sense. In a fair bet, the expected winnings are zero. In this case, my expected winnings for a fair bet are 1136(.0809) − 100(1 − .0809) = 0. It is what I win times the probability that I win minus what I lose times the probability that I lose. If the probability of winning is .0809 and I get paid off at a rate of 6 to 1, my expected winnings are 600(.0809) − 100(1 − .0809) = −43.4. I don't think I can afford that. In fact, a similar phenomenon occurs for a bet on any of the eight teams. If the probabilities of winning add up to more than one, the true expected winnings on any bet will be negative. Obviously, it pays to make the odds rather than the bets.

Not only odds but ratios of odds arise naturally in the analysis of logistic regression and log-linear models. It is important to develop some familiarity with odds ratios. The odds on San Francisco, Los Angeles, and Pittsburgh are 1 to 1, 5 to 1, and 10 to 1, respectively. Equivalently, the odds that each team will not win are 1, 5, and 10. Thus, L.A. has odds of not winning that are 5 times larger than San Francisco's and Pittsburgh's are 10 times larger than San Francisco's. The ratio of the odds of L.A. not winning to the odds of San Francisco not winning is 5/1 = 5. The ratio of the odds of Pittsburgh not winning to San Francisco not winning is 10/1 = 10. Also, Pittsburgh has odds of not winning that are twice as large as L.A.'s, i.e., 10/5 = 2.

An interesting thing about odds ratios is that, say, the ratio of the odds of Pittsburgh not winning to the odds of L.A. not winning is the same as the ratio of the odds of L.A. winning to the odds of Pittsburgh winning. In other words, if Pittsburgh has odds of not winning that are 2 times larger than L.A.'s, L.A. must have odds of winning that are 2 times larger than Pittsburgh's. The odds of L.A. not winning are 5 to 1, so the odds of them winning are 1 to 5 or 1/5. Similarly, the odds of Pittsburgh winning are 1/10. Clearly, L.A. has odds of winning that are 2 times those of Pittsburgh. The odds ratio of L.A. winning to Pittsburgh winning is identical to the odds ratio of Pittsburgh not winning to L.A. not winning. Similarly, San Francisco has odds of winning that are 10 times larger than Pittsburgh's and 5 times as large as L.A.'s.

In logistic regression and log-linear model analysis, one of the most common uses for odds ratios is to observe that they equal one. If the odds ratio is one, the two sets of odds are equal. It is certainly of interest in a comparative study to be able to say that the odds of two things are the same. In this example, none of the odds ratios that can be formed is one because no odds are equal. Another common use for odds ratios is to observe that two of them are the same. For example, the ratio of the odds of Pittsburgh not winning
relative to the odds of L.A. not winning is the same as the ratio of the odds of L.A. not winning to the odds of Denver not winning. We have already seen that the first of these values is 2. The odds for L.A. not winning relative to Denver not winning are also 2 because (5/1)/(5/2) = 2. Even when the corresponding odds are different, odds ratios can be the same.

Marginal and conditional probabilities play important roles in logistic regression and log-linear model analysis. If Pr(B) > 0, the conditional probability of A given B is

    Pr(A|B) = Pr(A ∩ B) / Pr(B) .
It is the proportion of the probability of B in which A also occurs. To deal with conditional probabilities when Pr(B) = 0 requires much more sophistication. It is an important topic in dealing with continuous observations, but it is not something we need to consider.

If knowing that B occurs does not change your information about A, then A is independent of B. Specifically, A is independent of B if

    Pr(A|B) = Pr(A) .

This definition gets tied up in details related to the requirement that Pr(B) > 0. A simpler and essentially equivalent definition is that A and B are independent if

    Pr(A ∩ B) = Pr(A)Pr(B) .

Example 1.1.2. Table 1.1 contains probabilities for nine combinations of hair and eye color. The nine outcomes are all combinations of three hair colors, Blond (BlH), Brown (BrH), and Red (RH), and three eye colors, Blue (BlE), Brown (BrE), and Green (GE).

    Table 1.1. Hair-Eye Color Probabilities
                          Eye Color
    Hair Color      Blue     Brown    Green
    Blond           .12      .15      .03
    Brown           .22      .34      .04
    Red             .06      .01      .03

The (marginal) probabilities for the various hair colors are obtained by summing over the rows:

    Pr(BlH) = .12 + .15 + .03 = .3
    Pr(BrH) = .6
    Pr(RH) = .1 .
Probabilities for eye colors come from summing the columns. Blue, Brown, and Green eyes have probabilities .4, .5, and .1, respectively. The conditional probability of Blond Hair given Blue Eyes is Pr(BlH|BlE)
= Pr(BlH, BlE)/Pr(BlE) = .12/.4 = .3 .
Note that Pr(BlH|BlE) = Pr(BlH), so the events BlH and BlE are independent. In other words, knowing that someone has blue eyes gives no additional information about whether that person has blond hair. On the other hand, Pr(BrH|BlE)
= .22/.4 = .55 ,
while Pr(BrH) = .6, so knowing that someone has blue eyes tells us that they are relatively less likely to have brown hair. Now condition on blond hair, Pr(BlE|BlH) = .12/.3 = .4 = Pr(BlE) . We again see that BlE and BlH are independent. In fact, it is also true that Pr(BrE|BlH) = Pr(BrE) and Pr(GE|BlH) = Pr(GE) . Knowing that someone has blond hair gives no additional information about any eye color. Example 1.1.3. Consider the eight combinations of three factors: economic status (High, Low), residence (Montana, Haiti), and beverage of preference (Beer, Other). Probabilities are given below.
                 Beer                 Other
              Montana    Haiti     Montana    Haiti     Total
    High      .021       .009      .049       .021      .1
    Low       .189       .081      .441       .189      .9
    Total     .210       .090      .490       .210      1.0
The factors in this table are completely independent. If we condition on either beverage category, then economic status and residence are independent. If we condition on either residence, then economic status and beverage
are independent. If we condition on either economic status, residence and beverage are independent. No matter what you condition on and no matter what you look at, you get independence. For example, Pr(High|Montana, Beer)
= .021/.210 = .1 = Pr(High) .
Similarly, knowing that someone has low economic status gives no additional information relative to whether their residence is Montana or Haiti. The phenomenon of complete independence is characterized by the fact that every probability in the table is the product of the three corresponding marginal probabilities. For example, Pr(Low, Montana, Beer) = .189 = (.9)(.7)(.3) = Pr(Low)Pr(Montana)Pr(Beer) .
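Complete independence can also be checked numerically. The following Python sketch is an illustration of mine (not from the book): it verifies that every cell probability in the table above equals the product of the three corresponding marginal probabilities.

    # Cells of the status x residence x beverage table from Example 1.1.3.
    # Keys are (status, residence, beverage); values are probabilities.
    p = {('High', 'Montana', 'Beer'):  .021, ('High', 'Haiti', 'Beer'):  .009,
         ('High', 'Montana', 'Other'): .049, ('High', 'Haiti', 'Other'): .021,
         ('Low',  'Montana', 'Beer'):  .189, ('Low',  'Haiti', 'Beer'):  .081,
         ('Low',  'Montana', 'Other'): .441, ('Low',  'Haiti', 'Other'): .189}

    # Marginal probability of one level of one factor (factor index 0, 1, or 2).
    def marginal(index, level):
        return sum(v for k, v in p.items() if k[index] == level)

    # Complete independence: each cell equals the product of its three marginals.
    for (s, r, b), v in p.items():
        assert abs(v - marginal(0, s) * marginal(1, r) * marginal(2, b)) < 1e-9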
Example 1.1.4. Consider the eight combinations of socioeconomic status (High, Low), political philosophy (Liberal, Conservative), and political affiliation (Democrat, Republican). Probabilities are given below.
                 Democrat                 Republican
              Liberal  Conservative   Liberal  Conservative   Total
    High      .12      .12            .04      .12            .4
    Low       .18      .18            .06      .18            .6
    Total     .30      .30            .10      .30            1.0
For any combination in the table, one of the three factors, socioeconomic status, is independent of the other two, political philosophy and political affiliation. For example, Pr(High, Liberal, Republican)
= .04 = (.4)(.1) = Pr(High)Pr(Liberal, Republican) .
However, the other divisions of the three factors into two groups do not display this property. Political philosophy is not always independent of socioeconomic status and political affiliation, e.g., Pr(High, Liberal, Republican)
= .04 ≠ (.4)(.16) = Pr(Liberal)Pr(High, Republican) .
Also, political affiliation is not always independent of socioeconomic status and political philosophy, e.g., Pr(High, Liberal, Republican)
= .04 ≠ (.4)(.16) = Pr(Republican)Pr(High, Liberal) .
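A similar numerical check, again my own sketch rather than the author's, confirms that socioeconomic status is independent of the philosophy-affiliation combination in every cell, while the other splits fail.

    # Example 1.1.4 cell probabilities, keyed by (status, philosophy, affiliation).
    p = {('High', 'Liberal', 'Dem'): .12, ('High', 'Conservative', 'Dem'): .12,
         ('High', 'Liberal', 'Rep'): .04, ('High', 'Conservative', 'Rep'): .12,
         ('Low',  'Liberal', 'Dem'): .18, ('Low',  'Conservative', 'Dem'): .18,
         ('Low',  'Liberal', 'Rep'): .06, ('Low',  'Conservative', 'Rep'): .18}

    def pr(match):
        # Probability that the listed coordinates take the listed values,
        # e.g. pr({0: 'High'}) is the marginal probability of high status.
        return sum(v for k, v in p.items()
                   if all(k[i] == m for i, m in match.items()))

    # Status is independent of (philosophy, affiliation): holds in every cell.
    print(all(abs(v - pr({0: k[0]}) * pr({1: k[1], 2: k[2]})) < 1e-9
              for k, v in p.items()))                                    # True

    # But philosophy is not independent of (status, affiliation).
    print(abs(p[('High', 'Liberal', 'Rep')]
              - pr({1: 'Liberal'}) * pr({0: 'High', 2: 'Rep'})) < 1e-9)  # False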
Example 1.1.5. Consider the twelve outcomes that are all combinations of three factors, one with three levels and two with two levels. The factors and levels are given below. They are similar to those in a study by Reiss et al. (1975) that was reported in Fienberg (1980).

    Factor                              Levels
    Attitude on Extramarital Coitus     Always Wrong, Not Always Wrong
    Virginity                           Virgin, Nonvirgin
    Use of Contraceptives               Regular, Intermittent, None
The probabilities are

    Use of Contraceptives: Regular
                      Virgin    Nonvirgin
    Always Wrong      3/50      12/50
    Not Always        3/50      12/50

    Use of Contraceptives: Intermittent
                      Virgin    Nonvirgin
    Always Wrong      1/80      2/80
    Not Always        3/80      2/80

    Use of Contraceptives: None
                      Virgin    Nonvirgin
    Always Wrong      3/40      1/40
    Not Always        6/40      2/40
Consider the relationship between attitude and virginity given regular use of contraceptives. The probability of regular use is

    Pr(Regular) = 3/50 + 12/50 + 3/50 + 12/50 = 30/50 .
The conditional probabilities given regular use are computed by dividing the entries in the 2×2 subtable for regular use by the (marginal) probability of regular use, 30/50, e.g., the probability for Always Wrong, Virgin given Regular is (3/50)/(30/50) = .1.
    Conditional Probabilities Given Regular Use of Contraceptives
                      Virgin    Nonvirgin    Total
    Always Wrong      .1        .4           .5
    Not Always        .1        .4           .5
    Total             .2        .8           1.0

Note that each entry is the product of the row total and the column total, e.g.,

    Pr(Always Wrong and Virgin|Regular) = .1 = (.2)(.5)
        = Pr(Always Wrong|Regular)Pr(Virgin|Regular) .

Because this is true for the entire 2 × 2 table, attitude and virginity are independent given regular use of contraceptives. Similarly, the conditional probabilities given no use of contraceptives are

                      Virgin    Nonvirgin    Total
    Always Wrong      3/12      1/12         1/3
    Not Always        6/12      2/12         2/3
    Total             3/4       1/4          1
Again, it is easily seen that attitude and virginity are independent given no use of contraceptives. Although we have independence given either no use or regular use, the probabilities of virginity and attitude change drastically. For regular use, nonvirginity is four times more probable than virginity. For no use, virginity is three times more probable. For regular use, attitudes are evenly split. For no use, the attitude that extramarital coitus is not always wrong is twice as probable as the attitude that it is always wrong. If the conditional probabilities given intermittent use also display independence, we can describe the entire table as having attitude and virginity independent given use. Unfortunately, this does not occur. Conditional on intermittent use, the probabilities are

                      Virgin    Nonvirgin    Total
    Always Wrong      1/8       1/4          3/8
    Not Always        3/8       1/4          5/8
    Total             1/2       1/2          1
Virgins are three times as likely to think extramarital coitus is not always wrong, but nonvirgins are evenly split. Conditional odds are readily obtained from the unconditional probabilities and other conditional probabilities. The odds that a virgin intermittent
contraceptive user thinks that extramarital coitus is not always wrong are

    Pr(Not Always|Virgin, intermittent use) / Pr(Always Wrong|Virgin, intermittent use)
        = Pr(Not Always, Virgin|intermittent use) / Pr(Always Wrong, Virgin|intermittent use)
        = Pr(Not Always, Virgin, intermittent use) / Pr(Always Wrong, Virgin, intermittent use)
        = 3 .

The reader should verify that all of these probability ratios give 3/1. Similarly, the odds that a nonvirgin intermittent contraceptive user thinks that extramarital coitus is not always wrong are

    Pr(Not Always|Nonvirgin, intermittent use) / Pr(Always Wrong|Nonvirgin, intermittent use)
        = (1/4) / (1/4) = (2/80) / (2/80) = 1 .
The odds for virgin intermittent users are different than for nonvirgin intermittent users; thus, independence does not hold. For nonusers, the odds for both virgins and nonvirgins are 2, so independence holds. For regular users, the odds for both virgins and nonvirgins are 1, so again independence holds. Rather than checking for equality of the odds for virgins and nonvirgins, we could look at the ratio of the odds. If the odds ratio is one, then the odds are equal and conditional independence given a particular use holds.
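These conditional odds can be verified directly from the cell probabilities. The following Python sketch is mine, using exact fractions for the intermittent-use subtable.

    from fractions import Fraction as F

    # Intermittent-use cell probabilities from Example 1.1.5:
    # rows are attitude (Always Wrong, Not Always); columns are Virgin, Nonvirgin.
    always_wrong = {'Virgin': F(1, 80), 'Nonvirgin': F(2, 80)}
    not_always   = {'Virgin': F(3, 80), 'Nonvirgin': F(2, 80)}

    # Odds that an intermittent user thinks the behavior is not always wrong,
    # separately for virgins and nonvirgins.
    odds_virgin    = not_always['Virgin'] / always_wrong['Virgin']        # 3
    odds_nonvirgin = not_always['Nonvirgin'] / always_wrong['Nonvirgin']  # 1

    # Conditional independence given intermittent use would require this
    # odds ratio to equal 1; here it is 3, so independence fails.
    print(odds_virgin / odds_nonvirgin)   # 3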
1.2 Random Variables and Expectations
A random variable is simply a function from a set of outcomes to the real numbers. A discrete random variable is one that takes on values in a countable set. The distribution of a discrete random variable is a list of the possible values for the random variable along with the probabilities that the values will occur. The expected value of a random variable is a number that characterizes the middle of the distribution. For a random variable y with a discrete distribution, the expected value is

    E(y) = Σ_all r  r Pr(y = r) .

Distributions with the same expected value can be very different. For example, the expected value indicates the middle of a distribution but
does not indicate how spread out it is. The variance is a measure of how spread out a distribution is from its expected value. Let E(y) = µ; then the variance of y is

    Var(y) = Σ_all r  (r − µ)² Pr(y = r) .

One problem with the variance is that it is measured on the wrong scale. If y is measured in meters, Var(y) involves the terms (r − µ)²; hence, it is measured in meters squared. To get things back on a comparable scale, we consider the standard deviation of y,

    Std. dev.(y) = √Var(y) .
Standard deviations and variances are useful as measures of the relative dispersions of different random variables. The actual numbers themselves do not mean much. Moreover, there are other equally good measures of dispersion that can give results that are inconsistent with these. One reason standard deviations and variances are so widely used is because they are convenient mathematically. Of particular importance in applied work is the fact that the commonly used normal (Gaussian) distributions are completely characterized by their expected values (means) and variances. With these two numbers, one knows everything about a normal distribution. Normal distributions are widely used in statistics, so variances and their cousins, standard deviations, are also widely used.

The covariance is a measure of the linear relationship between two random variables. Suppose y1 and y2 are random variables. Let E(y1) = µ1 and E(y2) = µ2. The covariance between y1 and y2 is

    Cov(y1, y2) = Σ_all (r,s)  (r − µ1)(s − µ2) Pr(y1 = r, y2 = s) .
It is immediate that Var(y1) = Cov(y1, y1). In an attempt to get a handle on what the numerical value of the covariance means, it is often rescaled into a correlation coefficient,

    Corr(y1, y2) = Cov(y1, y2) / √[Var(y1)Var(y2)] .
A perfect increasing linear relationship is indicated by a 1. A perfect decreasing linear relationship gives a −1. The absence of any linear relationship is indicated by a value of 0. Exercise 1.6.5 contains important results on the expected values, variances, and covariances of linear combinations of random variables.
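For a finite discrete distribution these definitions translate directly into code. The sketch below is my own illustration; the joint distribution used is made up purely for the example.

    import math

    # A joint distribution for (y1, y2): list of (r, s, probability) triples.
    joint = [(0, 0, 0.2), (0, 1, 0.3), (1, 0, 0.1), (1, 1, 0.4)]

    def mean(i):
        # E(y_i), with i = 0 for y1 and i = 1 for y2.
        return sum(pt[i] * pt[2] for pt in joint)

    def cov(i, j):
        # Cov(y_i, y_j); cov(i, i) is the variance of y_i.
        mi, mj = mean(i), mean(j)
        return sum((pt[i] - mi) * (pt[j] - mj) * pt[2] for pt in joint)

    def corr(i, j):
        return cov(i, j) / math.sqrt(cov(i, i) * cov(j, j))

    print(mean(0), cov(0, 0), cov(0, 1), corr(0, 1))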
1.3 The Binomial Distribution
There are a few distributions that are used in the vast majority of statistical applications. The reason for this is that they tend to occur naturally. The normal distribution is one. It occurs in practice because the central limit theorem dictates that other distributions will approach the normal. Two other distributions, the binomial and the multinomial, occur in practice because they are so simple. A fourth distribution, the Poisson, also occurs in nature because it is the distribution arrived at in another limit theorem. In this section, we discuss the binomial. Subsequent sections discuss the multinomial and the Poisson. If you have independent identical trials and are counting how often something (anything) occurs, the appropriate distribution is the binomial. What could be simpler? Typically, the outcome of interest is referred to as a success. If the probability of a success is p in each of N independent identical trials, then the number of successes n has a binomial distribution with parameters N and p. Write n ∼ Bin(N, p) . The distribution of n is
    Pr(n = r) = C(N, r) p^r (1 − p)^(N−r)

for r = 0, 1, . . . , N. Here, C(N, r) denotes the binomial coefficient

    C(N, r) = N! / [r!(N − r)!]

and for any positive integer m, m! = m(m − 1)(m − 2) · · · (2)(1). Given the distribution, we can find the mean (expected value) and variance. By definition, the mean is

    E(n) = Σ_{r=0}^{N}  r C(N, r) p^r (1 − p)^(N−r) .

By writing n as the sum of N independent Bin(1, p) random variables and using Exercise 1.6.5a, it is easily seen that E(n) = Np. The variance of n is

    Var(n) = Σ_{r=0}^{N}  (r − Np)² C(N, r) p^r (1 − p)^(N−r) .
Again, by writing n as the sum of N independent Bin(1, p) random variables and now using Exercise 1.6.5b, it is easily seen that Var(n) = Np(1 − p).

In this book, we will often need to look at both the number of successes and the number of failures at the same time. If the number of successes is n1 and the number of failures is n2, then

    n2 = N − n1 ,    n1 ∼ Bin(N, p) ,    and    n2 ∼ Bin(N, 1 − p) .

The last result holds because, with independent identical trials, the number of outcomes that we call failures must also have a binomial distribution. If p is the probability of success, the probability of failure is 1 − p. Of course,

    E(n2) = N(1 − p)    and    Var(n2) = N(1 − p)p .

Note that Var(n1) = Var(n2), regardless of the value of p. Finally,

    Cov(n1, n2) = −Np(1 − p)    and    Corr(n1, n2) = −1 .

There is a perfect linear relationship between n1 and n2. If n1 goes up one unit, n2 goes down one unit. When we look at both successes and failures, write

    (n1, n2) ∼ Bin(N, p, (1 − p)) .
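These binomial facts are easy to verify numerically. The following Python sketch is mine; the parameters N = 7 and p = .5 are chosen only for illustration (they happen to match Exercise 1.6.3).

    from math import comb

    N, p = 7, 0.5

    # Binomial probabilities Pr(n = r) for r = 0, ..., N.
    pmf = [comb(N, r) * p**r * (1 - p)**(N - r) for r in range(N + 1)]

    mean = sum(r * pmf[r] for r in range(N + 1))
    var  = sum((r - mean)**2 * pmf[r] for r in range(N + 1))

    print(sum(pmf))              # 1.0
    print(mean, N * p)           # both 3.5
    print(var, N * p * (1 - p))  # both 1.75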
1.4 The Multinomial Distribution
The multinomial distribution is a generalization of the binomial to more than two categories. Suppose we have N independent identical trials. On each trial, we check to see which of q events occurs. In such a situation, we assume that on each trial, one of the q events must occur. Let ni , i = 1, . . . , q, be the number of times that the ith event occurs. Let pi be the probability that the ith event occurs on any trial. Note that the pi ’s must satisfy p1 +p2 +· · ·+pq = 1. In this situation, we say that (n1 , . . . , nq ) has a multinomial distribution with parameters N, p1 , . . . , pq . Write (n1 , . . . , nq ) ∼ Mult(N, p1 , . . . , pq ) .
The distribution is

    Pr(n1 = r1, . . . , nq = rq) = [N! / (r1! · · · rq!)] p1^{r1} · · · pq^{rq}
                                 = N! Π_{i=1}^{q} (pi^{ri} / ri!)

for ri ≥ 0 and r1 + · · · + rq = N. Note that if q = 2, this is just a binomial distribution. In general, each individual component is ni ∼ Bin(N, pi), so E(ni) = N pi and Var(ni) = N pi(1 − pi). Also, it can be shown that

    Cov(ni, nj) = −N pi pj    for    i ≠ j .
Example 1.4.1. In Example 1.1.4, probabilities were given for the eight categories determined by combining high and low socioeconomic status, liberal and conservative political philosophy, and Democratic and Republican political affiliation. Suppose a sample of 50 individuals was taken from a population that had the probabilities associated with Example 1.1.4,
                 Democrat                 Republican
              Liberal  Conservative   Liberal  Conservative   Total
    High      .12      .12            .04      .12            .4
    Low       .18      .18            .06      .18            .6
The number of individuals falling into each of the eight categories has a multinomial distribution with N = 50 and these pi ’s. The expected numbers of observations for each category are given by N pi . It is easily seen that the expected counts for the cells are
                 Democrat                 Republican
              Liberal  Conservative   Liberal  Conservative
    High      6        6              2        6
    Low       9        9              3        9
Note that the expected counts need not be integers. The variance for, say, the number of high liberal Republicans is 50(.04)(1 − .04) = 1.92. The variance of the number of high liberal
Democrats is 50(.12)(1 − .12) = 5.28. The covariance between the number of high liberal Republicans and the number of high liberal Democrats is −50(.04)(.12) = −.24. The correlation between the numbers of high liberal Democrats and Republicans is −.24/√[(1.92)(5.28)] = −0.075.

Now, suppose that the 50 individuals fall into the categories as listed in the table below.

                 Democrat                 Republican
              Liberal  Conservative   Liberal  Conservative
    High      5        7              4        6
    Low       8        7              3        10

The probability of getting this particular table is

    [50! / (5! 7! 4! 6! 8! 7! 3! 10!)] (.12)^5 (.12)^7 (.04)^4 (.12)^6 (.18)^8 (.18)^7 (.06)^3 (.18)^10 = .000007 .

The fact that this is a very small number is not surprising. There are a lot of possible tables, so the probability of getting any particular one is small. In fact, the table that has the highest probability can be shown to have a probability of only .000142. Although this probability is also very small, it is more than 20 times larger than the probability of the table given above.
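The probability of this particular table can be checked with a few lines of code. The sketch below is mine, not the author's; it just evaluates the multinomial formula for the counts and probabilities listed above.

    from math import factorial

    # Cell probabilities and observed counts from Example 1.4.1, in the order
    # (HLD, HCD, HLR, HCR, LLD, LCD, LLR, LCR).
    probs  = [.12, .12, .04, .12, .18, .18, .06, .18]
    counts = [5, 7, 4, 6, 8, 7, 3, 10]

    def multinomial_prob(counts, probs):
        N = sum(counts)
        denom = 1
        for n in counts:
            denom *= factorial(n)
        prob = factorial(N) / denom
        for n, p in zip(counts, probs):
            prob *= p**n
        return prob

    print(multinomial_prob(counts, probs))   # about 7e-06, the .000007 in the text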
Product-Multinomial Distributions

For i = 1, . . . , t, take independent multinomials where the ith has si possible outcomes, i.e.,

    (ni1, . . . , ni si) ∼ Mult(Ni, pi1, . . . , pi si) ;

then we say that the nij's have a product-multinomial distribution. By independence, the probability of any set of outcomes, say Pr(nij = rij all i, j), is the product of the multinomial probabilities for each i. In other notation,

    Pr(nij = rij all i, j) = Π_{i=1}^{t} Pr(nij = rij all j)

and for rij ≥ 0, j = 1, . . . , si, with Σ_{j=1}^{si} rij = Ni, we have

    Pr(nij = rij all j) = Ni! Π_{j=1}^{si} (pij)^{rij} / rij! .

Thus,

    Pr(nij = rij all i, j) = Π_{i=1}^{t} [ Ni! Π_{j=1}^{si} (pij)^{rij} / rij! ] ,
where rij ≥ 0 all i, j and ri1 + · · · + risi = Ni for all i. Expected values, variances, and covariances within a particular multinomial are obtained by ignoring the other multinomials. Covariances between counts in different multinomials are zero because such observations are independent. Example 1.4.2. In Example 1.4.1 we considered taking a sample of 50 people from a population with the probabilities given in Example 1.1.4. Suppose we can identify and sample two subpopulations, the high socioeconomic group and the low socioeconomic group. If we take independent random samples of 30 from the high group and 20 from the low group, the numbers of individuals in the eight categories has a product-multinomial distribution with t = 2, N1 = 30, s1 = 4, N2 = 20, and s2 = 4. The probabilities of the four categories associated with high socioeconomic status are the conditional probabilities given high status. For example, the probability of a liberal Republican in the high group is .04/.4 = .1; the probability of a liberal Democrat is .12/.4 = .3. Similarly, the probability of a liberal Republican in the low socioeconomic group is .06/.6 = .1. The table of probabilities appropriate for the product-multinomial sampling described is the table of conditional probabilities given socioeconomic status:
                 Democrat                 Republican
              Liberal  Conservative   Liberal  Conservative   Total
    High      .3       .3             .1       .3             1.0
    Low       .3       .3             .1       .3             1.0
Although the probabilities for each category are the same for both high and low status, this is just an oddity of the particular example under consideration. Typically, the probabilities will be different in the different groups. In fact, the different groups do not even need to be divided into the same categories, although in most of our applications, the categories will be identical for all groups. The expected counts for cells are computed separately for each multinomial. The expected count for high liberal Republicans is 30(.1) = 3. With samples of 30 from the high group and 20 from the low group, the expected counts are
                 Democrat                 Republican
              Liberal  Conservative   Liberal  Conservative   Total
    High      9        9              3        9              30
    Low       6        6              2        6              20
Similarly, variances and covariances are found for each multinomial separately. The variance of the number of high liberal Republicans is 30(.1)(1 − .1) = 2.7. The covariance between the numbers of low liberal Democrats and low liberal Republicans is −20(.3)(.1) = −0.6. The covariance between
counts in different multinomials is zero because counts in different multinomials are independent, e.g., the covariance between the numbers of high liberal Democrats and low liberal Republicans is zero because all high status counts are independent of all low status counts. To find the probability of any particular table, find the probability associated with the high group and multiply it by the probability of the low group. The probability of getting the table
                 Democrat                 Republican
              Liberal  Conservative   Liberal  Conservative   Total
    High      10       10             2        8              30
    Low       5        8              1        6              20

is

    [30! / (10! 10! 2! 8!)] (.3)^10 (.3)^10 (.1)^2 (.3)^8 × [20! / (5! 8! 1! 6!)] (.3)^5 (.3)^8 (.1)^1 (.3)^6
        = (.045716)(.008117) = .000371 .
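Because the two samples are independent, the probability of the full table is the product of two ordinary multinomial probabilities, which is easy to confirm numerically. The sketch below is mine, not part of the text.

    from math import factorial

    def multinomial_prob(counts, probs):
        # Probability of one particular multinomial outcome.
        N = sum(counts)
        denom = 1
        for n in counts:
            denom *= factorial(n)
        prob = factorial(N) / denom
        for n, p in zip(counts, probs):
            prob *= p**n
        return prob

    # High- and low-status samples from Example 1.4.2; the conditional cell
    # probabilities are .3, .3, .1, .3 in each group.
    high = multinomial_prob([10, 10, 2, 8], [.3, .3, .1, .3])
    low  = multinomial_prob([5, 8, 1, 6],  [.3, .3, .1, .3])

    print(high, low, high * low)   # about .045716, .008117, and .000371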
Exercise 1.1. Find the expected counts for a sample of size 20 from the population with probabilities given in Example 1.1.3. Now, conditioning on residence, find the expected counts for a sample of size 8 from Montana and a sample of size 12 from Haiti.
1.5 The Poisson Distribution
The binomial and multinomial distributions are appropriate and useful when the number of trials is not too large (whatever that means) and the probabilities of occurrences are not too small. For phenomena that have a very small probability of occurring on any particular trial, but for which an extremely large number of trials are available, the Poisson distribution is appropriate. For example, the number of suicides in a year might have a Poisson distribution. The probability of anyone committing suicide is very small, but in a large population, a substantial number of people will do it.

One of the most famous examples of a Poisson distribution is due to Bortkiewicz (1898). He examines the yearly total of men in the Prussian army corps who were kicked by horses and died of their injuries. Again, the probability that any individual will be mortally hoofed on a given day is very small, but for an entire army corps over an entire year, the number is fairly substantial. In particular, Fisher (1925) cites the 200 observations from 10 corps over a 20-year period as:

    Deaths         0      1     2     3    4    5+
    Frequencies    109    65    22    3    1    0
The idea is to view these as the results of a random sample of size 200 from a Poisson distribution. (Incidentally, the identity of the individual who introduced this example is one of the compelling mysteries in the history of statistics. It has been ascribed to at least four different people: Bortkiewicz, Bortkewicz, Bortkewitsch, and Bortkewitch.)

A third example of Poisson sampling is the number of microorganisms in a solution. One can imagine dividing the solution into a very large number of hypothetical units with very small volume (i.e., just big enough for the microorganism to be contained in the unit). If microorganisms are rare in the solution, then the probability of getting an organism in any particular unit is very small. Now, if we extract say one cubic centimeter of the solution, we have a very large number of trials. The number of organisms in the extracted solution should follow a Poisson distribution. Note that the Poisson distribution depends on having relatively few organisms in the solution. If that assumption is not true, one can dilute the solution until it is true.

Finally, and perhaps most importantly, the number of people who arrive during a 5-minute period to buy tickets for a Bruce Springsteen concert can be modeled with a Poisson distribution. Time can be divided into arbitrarily small intervals. The probability that anyone in the population will show up during any particular interval is very small. However, in 5 minutes there are a very large number of intervals.

The Poisson distribution can be arrived at as the limit of a Bin(N, p) distribution where N → ∞ and p → 0. However, the two convergences must occur in such a way that Np → λ. (To do this rigorously, we would let p be a function of N, say pN.) The value λ is the parameter of the Poisson distribution. If n is a random variable with a Poisson distribution and parameter λ, write n ∼ Pois(λ). The distribution is defined by giving the probabilities and outcomes, i.e.,

    Pr(n = r) = λ^r e^{−λ} / r!                                    (1)
for r = 0, 1, . . . . It is not difficult to arrive at (1) by looking at binomial probabilities. The corresponding binomial probability for n = r is

    C(N, r) p^r (1 − p)^(N−r) = [(Np)^r (1 − p)^N / r!] (1 − p)^{−r} N! / [(N − r)! N^r] .      (2)

With N → ∞, p → 0, and Np → λ,

    (Np)^r → λ^r ,
    (1 − p)^N → e^{−λ} ,
    (1 − p)^{−r} → 1 ,
    N! / [(N − r)! N^r] → 1 .
Substituting these limits into the right-hand side of (2) gives the probability displayed in (1). Using (1), we can compute the expected value and the variance of n. It is not difficult to show that

    E(n) = λ    and    Var(n) = λ .

Lindgren (1993) gives a more detailed discussion of the assumptions behind the Poisson model. Fisher (1925) gives a nice discussion of the uses of the Poisson model and the analysis of Poisson data.

We close with two facts about independent Poisson random variables. If n1, . . . , nq are independent with ni ∼ Pois(λi), then the total of all the counts is

    n1 + n2 + · · · + nq ∼ Pois(λ1 + · · · + λq)

and the counts given the total are

    (n1, . . . , nq) | N ∼ Mult(N, p1, . . . , pq)

where N = n1 + · · · + nq and pi = λi/(λ1 + · · · + λq), i = 1, . . . , q.
The conditional distribution is important for the analysis of log-linear models. If we have a table of counts that is comprised of independent Poisson random variables, we can always compute the grand total for the table. Looking at the conditional distribution given the total leads us to an analysis based on a multinomial distribution. The multinomial is the most commonly assumed distribution for tables of counts. Our discussion will focus almost entirely on multinomial and product-multinomial sampling.
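As a concrete illustration (mine, not the author's), the horse-kick data above can be fit by estimating λ with the sample mean; the estimate λ̂ = 0.61 and the expected frequencies below are my own calculations from the tabled counts.

    from math import exp, factorial

    # Observed frequencies of 0, 1, 2, 3, 4 deaths in the Prussian cavalry data.
    deaths = [0, 1, 2, 3, 4]
    freqs  = [109, 65, 22, 3, 1]
    n = sum(freqs)                                        # 200 corps-years

    # Estimate lambda by the sample mean number of deaths.
    lam = sum(d * f for d, f in zip(deaths, freqs)) / n   # 0.61

    # Expected frequencies under Pois(lambda): n * Pr(n = r).
    expected = [n * exp(-lam) * lam**r / factorial(r) for r in deaths]
    for d, f, e in zip(deaths, freqs, expected):
        print(d, f, round(e, 1))   # e.g. 0: 109 observed vs. about 108.7 expected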
1.6
Exercises
Exercise 1.6.1. In a Newsweek article on “The Wisdom of Animals” (May 23, 1988), one of the key issues considered was whether animals (other than humans) understand relationships between symbols. Some animals can associate symbols with objects; the question is whether they can tell the difference between commands such as “take the red ball to the blue ball” and “take the blue ball to the red ball.” In discussing sea lions, it was indicated that out of a large pool of objects, they correctly identify symbols 95% of the time but are only correct 45% of the time on relationships. Presumably, this referred to a simple relationship between two objects; for
example, a sea lion could be shown symbols for “blue ball,” “take to,” “red ball.” It was then concluded that, “considering the number of objects present in the pool, the odds are exceedingly long of getting even that proportion [45%] right by sheer chance.” Assume a simple model in which sea lions know the nature of the relationship (it is repeated in a long series of trials), e.g., take one object to another, but make independent choices for identifying each object and the order in the relationship. Assume also that they have no idea what the correct order should be in the relationship, i.e., the two possible orders are equally probable. Compute the probability a sea lion will perform the task correctly. Why is the conclusion given in the article wrong? What does the number of objects present in the pool have to do with all this?

Exercise 1.6.2. Consider a 2 × 2 table of multinomial probabilities that models how subjects respond on two separate occasions.

                    Second Trial
First Trial         A        B
A                   p11      p12
B                   p21      p22
Show that Pr(A second trial | B first trial) = Pr(B second trial | A first trial) if and only if the event that a change occurs between the first and second trials is independent of the outcome on the first trial.

Exercise 1.6.3. Weisberg (1975) reports the following data on the number of boys among the first seven children born to a collection of 1,334 Swedish ministers.

Number of Boys    0    1     2     3     4     5     6    7
Frequency         6    57    206   362   365   256   69   13
Assume that the number of boys has a Bin(7, .5) distribution. Compute the probabilities for each of the eight categories 0, 1, . . . , 7. From the sample of 1,334 families, what is the expected frequency for each category? What is the distribution of the number of families that fall into each category? Summarize the fit of the assumed binomial model by computing

    X² = Σ (Observationi − Expectedi)² / Expectedi ,

where the sum runs over the categories i = 0, 1, . . . , 7.
The statistic X 2 is known as Pearson’s chi-square statistic. For large samples such as this, if the Expected values are correct, X 2 should be one observation from a χ2 (7) distribution. (The 7 is one less than the number
of categories.) Compute X² and compare it to tabled values of the χ²(7) distribution. Does X² seem like it could reasonably come from a χ²(7)? What does this imply about how well the binomial model fits the data? Can you distinguish which assumptions made in the binomial model may be violated?

Exercise 1.6.4. The data given in the previous problem may be 1,334 independent observations from a Bin(7, p) distribution. If so, use the defining assumptions of the binomial distribution to show that this is the same as one observation from a Bin(1334 × 7, p) distribution. Estimate p with

    p̂ = (Total number of boys) / (Total number of trials) .

Repeat the previous problem, replacing .5 with p̂. Compare X² to a χ²(6) distribution, reducing the degrees of freedom by one because the probability p is being estimated from the data.

Exercise 1.6.5. Let y1, y2, y3, and y4 be random variables and let a1, a2, a3, and a4 be real numbers. Show that the following relationships hold for finite discrete distributions.
(a) E(a1y1 + a2y2 + a3) = a1E(y1) + a2E(y2) + a3.
(b) Var(a1y1 + a2y2 + a3) = a1²Var(y1) + a2²Var(y2) for y1 and y2 independent.
(c) Cov(a1y1 + a2y2, a3y3 + a4y4) = Σi Σj ai aj Cov(yi, yj), where i runs over 1, 2 and j runs over 3, 4.

Exercise 1.6.6.
Assume that (n1 , . . . , nq ) ∼ Mult(N, p1 , . . . , pq )
and let t be an integer less than q. Define y = n1 + · · · + nt and p˜ = p1 + · · · + pt . Show that (y, nt+1 , . . . , nq ) ∼ Mult(N, p˜, pt+1 , . . . , pq ) so that E(y) = N p˜ and Var(y) = N p˜(1 − p˜). Exercise 1.6.7. Suppose y ∼ Bin(N, p). Let pˆ = y/N . Show that E(ˆ p) = p and that Var(ˆ p) = p(1 − p)/N .
2 Two-Dimensional Tables and Simple Logistic Regression
At this point, it is not our primary intention to provide a rigorous account of logistic regression and log-linear model theory. Such a treatment demands extensive use of advanced calculus and asymptotic theory. On the other hand, some knowledge of the basic issues is necessary for a correct understanding of applications of logistic regression and log-linear models. In this chapter, we address these basic issues for the simple case of two-dimensional tables and simple logistic regression. For a more elementary discussion of two-dimensional tables and simple logistic regression including substantial data analysis, see Christensen (1996a, Chapter 8). In fact, we assume that the reader is familiar with such analyses and use the topics in this chapter primarily to introduce key theoretical ideas.
2.1
Two Independent Binomials
Consider two binomials arranged in a 2 × 2 table. Our interest is in examining possible differences between the two binomials.
Example 2.1.1. A survey was conducted to examine the relative attitudes of males and females about abortion. Of 500 females, 309 supported legalized abortion. Of 600 males, 319 supported legalized abortion. The data can be summarized in tabular form:
OBSERVED VALUES
           Support    Do Not Support    Total
Female     309        191               500
Male       319        281               600
Total      628        472               1,100
Note that the totals on the right-hand side of the table (500 and 600) are fixed by the design of the study. The totals along the bottom of the table are observed random variables. It is assumed that for each sex, the numbers of supporters and nonsupporters form a binomial random vector (ordered pair). We are interested in whether these numbers indicate that a person’s sex affects their attitude toward legalized abortion. Note that the categories are Support and Do Not Support legalized abortion. Not supporting legalized abortion is distinct from opposing it. Anyone who is indifferent neither supports nor opposes legalized abortion. We now introduce the notation that will be used for tables of counts in this book. For a 2 × 2 table, the observed values are denoted by nij , i = 1, 2 and j = 1, 2. Marginal totals are written ni· ≡ ni1 +ni2 and n·j ≡ n1j +n2j . The total of all observations is n·· ≡ n11 + n12 + n21 + n22 . The probability of having an observation fall in the ith row and jth column of the table is denoted pij . The number of observations that one would expect to see in the ith row and jth column (based on some statistical model) is denoted mij . For independent binomial rows, mij = ni· pij . Marginal totals pi· , p·j , mi· , and m·j are defined like ni· and n·j . All of this notation can be summarized in tabular form. OBSERVED VALUES
                Columns
Rows            1       2       Totals
1               n11     n12     n1·
2               n21     n22     n2·
Totals          n·1     n·2     n··

PROBABILITIES
                Columns
Rows            1       2       Totals
1               p11     p12     p1·
2               p21     p22     p2·
Totals          p·1     p·2     p··
EXPECTED VALUES
                Columns
Rows            1       2       Totals
1               m11     m12     m1·
2               m21     m22     m2·
Totals          m·1     m·2     m··
Our interest is in finding estimates of the pij ’s, developing models for the pij ’s, and performing tests on the pij ’s. Equivalently, we can concern ourselves with estimates, models, and tests for the mij ’s. In Example 2.1.1, our interest is in whether sex is related to support for legalized abortion. Note that p11 + p12 = 1 and p21 + p22 = 1. (Equivalently m11 + m12 = 500 and m21 + m22 = 600.) If sex has no effect on opinion, the distribution of support versus nonsupport should be the same for both sexes. In particular, it is of interest to test the null hypothesis (model) H0 : p11 = p21 and p12 = p22 . With pi1 + pi2 = 1, the equality p11 = p21 holds if and only if p12 = p22 holds. In other words, females and males have the same probability of “support” if and only if they have the same probability for “do not support.” It suffices to test that the probability of support is the same for both sexes, i.e., H0 : p11 = p21 or, equivalently, H0 : p11 − p21 = 0 . To test this hypothesis, we need an estimate of p11 − p21 and the standard error (SE) of the estimate. Each row is binomial with sample size ni· , so a natural estimate of pij is the proportion of observations falling in cell ij relative to the total number of observations in the ith row, i.e., pˆij = nij /ni· . For the abortion example, pˆ11 = 309/500 and pˆ21 = 319/600 . The estimate of p11 − p21 is pˆ11 − pˆ21 = (n11 /n1· ) − (n21 /n2· ) . The two rows of the table were sampled independently so the variance of pˆ11 − pˆ21 is Var(ˆ p11 − pˆ21 )
= Var(p̂11) + Var(p̂21) = p11p12/n1· + p21p22/n2· ,

cf. Exercise 1.6.7. Finally,

    SE(p̂11 − p̂21) = √(p̂11p̂12/n1· + p̂21p̂22/n2·) .
For the abortion example,

    SE(p̂11 − p̂21) = √[(309/500)(191/500)/500 + (319/600)(281/600)/600] = .0298 .

One other thing is required before we can perform a test. We need to know the distribution of [(p̂11 − p̂21) − (p11 − p21)]/SE(p̂11 − p̂21). By appealing to the Central Limit Theorem and the Law of Large Numbers (cf. Lindgren, 1993), if n1· and n2· are large, we can use the approximate distribution

    [(p̂11 − p̂21) − (p11 − p21)] / SE(p̂11 − p̂21) ∼ N(0, 1) .

To perform a test of H0: p11 − p21 = 0 versus HA: p11 − p21 ≠ 0, assume that H0 is true and look for evidence against H0. If H0 is true, the approximate distribution is

    [(p̂11 − p̂21) − 0] / SE(p̂11 − p̂21) ∼ N(0, 1) .

If the alternative hypothesis is true, p̂11 − p̂21 still estimates p11 − p21, so the test statistic

    [(p̂11 − p̂21) − 0] / SE(p̂11 − p̂21)

tends to be either a large positive value if p11 − p21 > 0 or a large negative value if p11 − p21 < 0. An α = .05 level test rejects H0 if

    [(p̂11 − p̂21) − 0] / SE(p̂11 − p̂21) > 1.96

or if

    [(p̂11 − p̂21) − 0] / SE(p̂11 − p̂21) < −1.96 .

The values −1.96 and 1.96 cut off the probability .025 from the bottom and top of a N(0, 1) distribution, respectively. Thus, the total probability of rejecting H0 when H0 is true is .025 + .025 = .05. Recall that this test is based on a large sample approximation to the distribution of the test statistic. For the abortion example, the test statistic is

    [(309/500) − (319/600)] / .0298 = 2.90 .
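The arithmetic above is easy to reproduce. A minimal Python sketch (not part of the original text; standard library only) computes the standard error and the test statistic for the abortion data:

    import math

    n11, n12, n21, n22 = 309, 191, 319, 281     # observed counts
    n1, n2 = n11 + n12, n21 + n22               # row totals 500 and 600
    p1, p2 = n11 / n1, n21 / n2                 # 309/500 and 319/600

    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    print(round(se, 4), round(z, 2))            # approximately .0298 and 2.90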
Because 2.90 > 1.96, H0 is rejected at the α = .05 level. There is evidence of a relationship between sex and attitudes about legalized abortion. These data indicate that females are more likely to support legalized abortion. Before leaving this test procedure, we mention an alternative method for computing SE(ˆ p11 − pˆ21 ). Recall that Var(ˆ p11 − pˆ21 ) = p11 p12 /n1· + p21 p22 /n2· . If H0 is true, p11 = p21 and p12 = p22 . These facts can be used in estimating the variance of pˆ11 − pˆ21 . A pooled estimate of p ≡ p11 = p21 is (n11 + n21 )/(n1· + n2· ) = n·1 /n·· = 628/1100. A pooled estimate of (1 − p) ≡ p12 = p22 is n·2 /n·· = 472/1100. This yields Var(ˆ p11 − pˆ21 ) = p(1 − p)(1/n1· + 1/n2· ) and SE(ˆ p11 − pˆ21 )
= √[(628/1100)(472/1100)[(1/500) + (1/600)]] = .0300 .

The test statistic computed with the new standard error is

    [(309/500) − (319/600)] / .0300 = 2.87777 .

For these data, the results are essentially the same. The test procedures discussed above work nicely for two independent binomials, but, unfortunately, they do not generalize to more than two binomials or to situations in which there are more than two possible outcomes (e.g., support, oppose, no opinion). An alternative test procedure is based on what is known as the Pearson chi-square test statistic. This test is equivalent to the test given above using the pooled estimate of the standard error. Moreover, Pearson's chi-square is applicable in more general problems. The Pearson test statistic is based on comparing the observed table values in the 2 × 2 table with estimates of the expected values that are obtained assuming that H0 is true. In the abortion example, if H0 is true, then p = p11 = p21 and p̂ = 628/1100. Similarly, (1 − p) = p12 = p22 and (1 − p̂) = 472/1100. As before, the expected values are mij = ni·pij. The estimated expected values under H0 are m̂ij(0) = ni·p̂ij, where p̂ij is p̂ if j = 1 and (1 − p̂) if j = 2. More generally,

    m̂ij(0) = ni·(n·j/n··) .    (1)

The Pearson chi-square statistic is defined as

    X² = Σi Σj (nij − m̂ij(0))² / m̂ij(0) ,

where the sums run over i = 1, 2 and j = 1, 2.
If H0 is true, then nij and m̂ij(0) should be near each other, and the terms (nij − m̂ij(0))² should be reasonably small. If H0 is not true, then the m̂ij(0)'s, which are estimates based on the assumption that H0 is true, should do a poor job of predicting the nij's. The terms (nij − m̂ij(0))² should be larger when H0 is not true. Note that a prediction m̂ij(0) that is, say, three away from the observed value nij can be either a good prediction or a bad prediction depending on how large the value in the cell should be. If nij = 4 and m̂ij(0) = 1, the prediction is poor. If nij = 104 and m̂ij(0) = 101, the prediction is good. The m̂ij(0) in the denominator of each term of X² is a scale factor that corrects for this problem. The hypothesis H0: p11 = p21 and p12 = p22 is rejected at the α = .05 level if

    X² ≥ χ²(.95, 1) .

The test is based on the fact that if H0 is true, then as n1· and n2· get large, X² has approximately a χ²(1) distribution. This is a consequence of the Central Limit Theorem and the Law of Large Numbers, cf. Exercise 2.1. For the abortion example,
m̂ij(0)       Support    Do Not Support    Totals
Female       285.5      214.5             500
Male         342.5      257.5             600
Totals       628        472               1100
    X² = 8.2816 ,    χ²(.95, 1) = 3.84 .

Because 8.2816 > 3.84, the α = .05 test rejects H0. Note that 8.2816 = (2.8777)² and that 3.84 = (1.96)². For 2 × 2 tables, the results of Pearson chi-square tests are exactly equivalent to the results of normal theory tests using the pooled estimate in the standard error. By definition, χ²(1 − α, 1) = [z(1 − α/2)]² for α ∈ (0, .5]. Also,

    X² = (p̂11 − p̂21)² / [p̂(1 − p̂)(1/n1· + 1/n2·)] .    (2)

Exercise 2.1.    Prove equation (2).
By comparing the nij's to the m̂ij(0)'s, we can examine the nature of the differences in the two binomials. One simple way to do this comparison is to examine a table of residuals, i.e., the (nij − m̂ij(0))'s. In order to make
accurate evaluations of how well m̂ij(0) is predicting nij, the residuals need to be rescaled or standardized. Define the Pearson residuals as

    r̃ij = (nij − m̂ij(0)) / √(m̂ij(0)) ,    i = 1, 2, j = 1, 2.

Note that X² = Σij r̃ij². The Pearson residuals for the abortion data are

r̃ij         Support    Do Not Support
Female       1.39       −1.60
Male         −1.27      1.46
The positive residual 1.39 indicates that more females support legalized abortion than would be expected under H0 . The negative residual −1.27 indicates that fewer males support abortion than would be expected under H0 . Together, the values 1.39 and −1.27 indicate that proportionately more females support legalized abortion than males. Equivalently, proportionately more males do not support legalized abortion than females.
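A small Python sketch (an illustration added here, not part of the original text) reproduces the estimated expected counts, the Pearson residuals, and X² for the abortion data from the formulas above:

    import math

    n = [[309, 191], [319, 281]]                # observed counts, rows = Female, Male
    rowtot = [sum(row) for row in n]            # 500, 600
    coltot = [sum(col) for col in zip(*n)]      # 628, 472
    total = sum(rowtot)                         # 1100

    X2 = 0.0
    for i in range(2):
        for j in range(2):
            m0 = rowtot[i] * coltot[j] / total          # estimated expected count under H0
            resid = (n[i][j] - m0) / math.sqrt(m0)      # Pearson residual
            X2 += resid ** 2
            print(i, j, round(m0, 1), round(resid, 2))
    print(round(X2, 2))    # residuals about 1.39, -1.60, -1.27, 1.46; X2 about 8.28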
2.1.1
The Odds Ratio
A commonly used technique in the analysis of count data is the examination of odds ratios. In the abortion example, the odds of females supporting legalized abortion is p11/p12. The odds of males supporting legalized abortion is p21/p22. The odds ratio is

    (p11/p12) / (p21/p22) = p11p22 / p12p21 .

Note that if the two binomials are identical, then p11 = p21 and p12 = p22, so

    p11p22 / p12p21 = 1 .

An alternative to using Pearson's chi-square for examining whether two binomials are the same is to examine the estimated odds ratio. Using p̂ij = nij/ni· gives

    p̂11p̂22 / p̂12p̂21 = n11n22 / n12n21 .

For the abortion example, the estimate is

    (309)(281) / [(191)(319)] = 1.425 .

This is only an estimate of the population odds ratio, but it is fairly far from the target value of 1. In particular, we have estimated that the odds
of a female supporting legalized abortion are about one and a half times as large as the odds of a male supporting legalized abortion. We may wish to test the hypothesis that the odds ratio equals 1. Equivalently, we can test whether the log of the odds ratio equals 0. The log odds ratio is log(1.425) = .354. The asymptotic (large sample) standard error of the log odds ratio is

    [1/n11 + 1/n12 + 1/n21 + 1/n22]^(1/2) = [1/309 + 1/191 + 1/319 + 1/281]^(1/2) = .123 .

The estimate minus the hypothesized value over the standard error is

    (.354 − 0) / .123 = 2.88 .

Comparing this to a N(0, 1) distribution indicates that the log odds ratio is greater than zero, thus the odds ratio is greater than 1. Note that this test is not equivalent to the other tests considered, even though the numerical value of the test statistic is similar to the other normal theory tests. A 95% confidence interval for the log odds ratio has the end points .354 ± 1.96(.123). This gives an interval (.113, .595). The log odds ratio is in the interval (.113, .595) if and only if the odds ratio is in the interval (e^.113, e^.595). Thus, a 95% confidence interval for the odds ratio is (e^.113, e^.595), which simplifies to (1.12, 1.81). We are 95% confident that the true odds of women supporting legalized abortion is between 1.12 and 1.81 times greater than the odds of men supporting legalized abortion.
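As a check on the numbers above, the following Python sketch (illustrative only, not part of the original text) computes the estimated odds ratio, its log, the asymptotic standard error, the test statistic, and the 95% confidence interval for the abortion data:

    import math

    n11, n12, n21, n22 = 309, 191, 319, 281
    odds_ratio = n11 * n22 / (n12 * n21)                 # about 1.425
    log_or = math.log(odds_ratio)                        # about .354
    se = math.sqrt(1/n11 + 1/n12 + 1/n21 + 1/n22)        # about .123
    z = log_or / se                                      # about 2.88
    lo, hi = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
    print(round(odds_ratio, 3), round(z, 2), round(lo, 2), round(hi, 2))   # CI roughly (1.12, 1.81)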
2.2
Testing Independence in a 2 × 2 Table
In Section 1, we obtained a 2 × 2 table by looking at two populations, each divided into two categories. In this section, we consider only one population but divide it into two categories in each of two different ways. The two different ways of dividing the population will be referred to as factors. In Section 1, we examined differences between the two populations. In this section, we examine the nature of the one population being sampled. In particular, we examine whether the two factors affect the population independently or whether they interact to determine the nature of the population. Example 2.2.1. As part of a longitudinal study, a sample of 3182 people without cardiovascular disease were cross-classified by two factors: personality type and exercise. Personality type was categorized as type A or type B. Type A persons show signs of stress, uneasiness, and hyperactivity. Type
B persons are relaxed, easygoing, and normally active. Exercise is categorized as persons who exercise regularly and those who do not. The data are given in the following table:
                     Personality
Exercise   nij       A        B        Totals
           Regular   483      477      960
           Other     1101     1121     2222
           Totals    1584     1598     3182
Although notations for observations (nij ’s), probabilities (pij ’s), and expected values (mij ’s) are identical to those in Section 1, the meaning of these quantities has changed. In Section 1, the rows were two independent binomials, so p11 +p12 = 1 = p21 +p22 . In this section, there is only one population, so the constraint on the probabilities is that p11 +p12 +p21 +p22 = 1. In this section, our primary interest is in determining whether the row factor is independent of the column factor and if not, how the factors deviate from independence. The probability of an observation falling in the ith row and jth column of the table is pij . The probability of an observation falling in the ith row is pi· . The probability of the jth column is p·j . Rows and columns are independent if and only if for all i and j pij = pi· p·j .
(1)
The sample size is n·· , so the expected counts in the table are mij = n·· pij . If rows and columns are independent, this becomes mij = n·· pi· p·j .
(2)
It is easily seen that condition (1) for independence is equivalent to mij = mi· m·j /n·· .
(3)
Pearson's chi-square can be used to test independence. Pearson's statistic is again

    X² = Σi Σj (nij − m̂ij(0))² / m̂ij(0) ,

where m̂ij(0) is an estimate of mij based on the assumption that rows and columns are independent. If we take m̂i· = ni· and m̂·j = n·j, then equation (3) leads to

    m̂ij(0) = ni·n·j/n·· .    (4)
Equation (4) can also be arrived at via equation (2). An obvious estimate of pi· is p̂i· = ni·/n··. Similarly, p̂·j = n·j/n··. Substitution into equation (2) leads to equation (4). It is interesting to note that equation (4) is numerically identical to formula (2.1.1), which gives m̂ij(0) for two independent binomials. Just as in Section 1, for the purposes of testing, X² is compared to a χ²(1) distribution. Pearson residuals are again defined as

    r̃ij = (nij − m̂ij(0)) / √(m̂ij(0)) .

For the personality-exercise data, we get

                     Personality
m̂ij(0)              A         B         Totals
           Regular   477.9     482.1     960
           Other     1106.1    1115.9    2222
           Totals    1584      1598      3182

    X² = .156 .

The test is not significant for any reasonable α level. (The P value is greater than .5.) There is no significant evidence against independence of exercise and personality type. In other words, the data are consistent with the interpretation that knowledge of personality type gives no new information about exercise habits or, equivalently, knowledge of exercise habits gives no new information about personality type.
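The computation is short enough to verify directly. A Python sketch (added for illustration; not part of the original text) computes the estimated expected counts and X² for the personality-exercise data:

    n = [[483, 477], [1101, 1121]]              # rows = Regular, Other; columns = A, B
    rowtot = [sum(row) for row in n]
    coltot = [sum(col) for col in zip(*n)]
    total = sum(rowtot)                         # 3182

    X2 = sum((n[i][j] - rowtot[i] * coltot[j] / total) ** 2
             / (rowtot[i] * coltot[j] / total)
             for i in range(2) for j in range(2))
    print(round(X2, 3))                         # about .156, far below chi-square(.95, 1) = 3.84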
2.2.1
The Odds Ratio
Just as in examining the equality of two binomials, the odds ratio can be used to examine the independence of two factors in a multinomial sample. In the personality-exercise data, the odds that a person exercises regularly are p1· /p2· . In addition, the odds of exercising regularly can be examined separately for each personality type. For type A personalities, the odds are p11 /p21 , and for type B personalities, the odds are p12 /p22 . Intuitively, if exercise and personality types are independent, then the odds of regular exercise should be the same for both personality types. In particular, the ratio of the two sets of odds should be one. Proposition 2.2.2.
If rows and columns are independent, then the
odds ratio
    (p11/p21) / (p12/p22) = p11p22 / p12p21

equals one.

Proof.    By equation (1),

    p11p22 / p12p21 = (p1·p·1)(p2·p·2) / [(p1·p·2)(p2·p·1)] = 1 .    □
If the odds ratio is estimated under the assumption of independence, p̂ij = p̂i·p̂·j = ni·n·j/(n··)², so the estimated odds ratio is always one. A more interesting approach is to estimate the odds ratio without assuming independence and then see how close the estimated odds ratio is to one. With this approach, p̂ij = nij/n·· and

    p̂11p̂22 / p̂12p̂21 = n11n22 / n12n21 .

In the personality-exercise example, the estimated odds ratio is

    (483)(1121) / [(477)(1101)] = 1.03 ,

which is very close to one. The log odds are .0305, the asymptotic standard error is [1/483 + 1/477 + 1/1101 + 1/1121]^(1/2) = .0772, and the test statistic for H0 that the log odds equal 0 is .0305/.0772 = .395. Again, there is no evidence against independence.

Exercise 2.2.    Give a 95% confidence interval for the odds ratio. Explain what the confidence interval means.
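The log odds ratio computation reported above can be reproduced with a few lines of Python (an illustrative sketch, not part of the original text):

    import math

    n11, n12, n21, n22 = 483, 477, 1101, 1121
    log_or = math.log(n11 * n22 / (n12 * n21))            # about .0305
    se = math.sqrt(1/n11 + 1/n12 + 1/n21 + 1/n22)         # about .0772
    print(round(math.exp(log_or), 2), round(log_or / se, 3))   # about 1.03 and .395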
2.3
I × J Tables

The situation examined in Section 1 can be generalized to consider samples from I different populations, each of which is divided into J categories. We assume that the samples from different populations are independent and that each sample follows a multinomial distribution. This is product-multinomial sampling. Similarly, a sample from one population that is categorized by two factors can be generalized beyond the case considered in Section 2. We allow one factor to have I categories and the other factor to have J categories. Between the two factors, the population is divided into a total of IJ categories. The distribution of counts within the IJ categories is assumed
to have a multinomial distribution. Consequently, this sampling scheme is multinomial sampling. An I × J table of the observations nij , i = 1, . . . , I, j = 1, . . . , J, can be written
                                Factor 2 (Categories)
Factor 1         nij      1      2      ...    J      Totals
(Populations)    1        n11    n12    ...    n1J    n1·
                 2        n21    n22    ...    n2J    n2·
                 .        .      .             .      .
                 I        nI1    nI2    ...    nIJ    nI·
                 Totals   n·1    n·2    ...    n·J    n··
with similar tables for the probabilities pij and the expected values mij . The analysis of product-multinomial sampling begins by testing whether all of the I multinomial populations are identical. In other words, we wish to test the model H0 : p1j = p2j = · · · = pIj for all j = 1, . . . , J .
(1)
against the alternative HA : model (1) is not true . This is described as testing for homogeneity of proportions. We continue to use Pearson’s chi-square test statistic to evaluate the appropriateness of the null hypothesis model. Pearson’s chi-square requires estimates of the expected values mij . Each sample i has a multinomial distribution with ni· trials, so mij = ni· pij . If H0 is true, pij is the same for all values of i. A pooled estimate of the common value of the pij ’s is (0)
pˆij = n·j /n·· . In other words, if all the populations have the same probability for category j, an estimate of this common probability is the total number of observations in category j divided by the overall total number of observations. From this probability estimate we obtain (0)
m ˆ ij = ni· (n·j /n·· ) . (0)
(0)
ˆ ij , the superscript (0) is used to indicate that the In both pˆij and m estimate was obtained under the assumption that H0 was true. Pearson’s
chi-square test statistic is
    X² = Σi Σj (nij − m̂ij(0))² / m̂ij(0) ,

where the sums run over i = 1, . . . , I and j = 1, . . . , J.
For large samples, if H0 is true, the approximation X 2 ∼ χ2 ((I − 1)(J − 1)) is valid. H0 is rejected in an α level test if X 2 > χ2 (1 − α, (I − 1)(J − 1)) . Note that if I = J = 2, these are precisely the results discussed in Section 1. The analysis of a multinomial sample begins by testing for independence of the two factors. In particular, we wish to test the model H0 : pij = pi· p·j ,
i = 1, . . . , I, j = 1, . . . , J .
(2)
We again use Pearson’s chi-square. The marginal probabilities are estimated as pˆi· = ni· /n·· and pˆ·j = n·j /n·· . Because mij = n·· pij , if the model in (2) is true, we can estimate mij with (0)
m ˆ ij
= n·· pˆi· pˆ·j = n·· (ni· /n·· )(n·j /n·· ) = ni· n·j /n··
(0)
where the (0) in m ˆ ij indicates that the estimate is obtained assuming that (2) holds. The Pearson chi-square test statistic is
    X² = Σi Σj (nij − m̂ij(0))² / m̂ij(0) ,
m ˆ ij
which, if (2) is true and the sample size is large, is approximately distributed as a χ2 ((I − 1)(J − 1)). H0 is rejected at the α level if X 2 > χ2 (1 − α, (I − 1)(J − 1)) . Once again, if I = J = 2, we obtain the previous results given for 2×2 tables. Moreover, the test procedures for product-multinomial sampling and
for multinomial sampling are numerically identical. Only the interpretations of the tests differ. Example 2.3.1. Fifty-two males between the ages of 11 and 30 were operated on for knee injuries using arthroscopic surgery. The patients were classified by type of injury: twisted knee, direct blow, or both. The results of the surgery were classified as excellent (E), good (G), and fair or poor (FP). These data can reasonably be viewed as either multinomial or productmultinomial. As a multinomial, we have 52 people cross-classified by type of injury and result of surgery. However, we can also think of the three types of injuries as defining different populations. Each person sampled from a population is given arthroscopic surgery and then the result is classified. Because our primary interest is in the result of surgery, we choose to think of the sampling as product-multinomial. The form of the analysis is identical for both sampling schemes. The data are
                           Result
Injury     nij       E      G      F-P    Totals
           Twist     21     11     4      36
           Direct    3      2      2      7
           Both      7      1      1      9
           Totals    31     14     7      52
The estimated expected counts under H0 are

                           Result
Injury     m̂ij(0)    E       G      F-P    Totals
           Twist     21.5    9.7    4.8    36
           Direct    4.2     1.9    .9     7
           Both      5.4     2.4    1.2    9
           Totals    31      14     7      52
with X 2 = 3.229 and df = (3 − 1)(3 − 1) = 4 . If the sample size is large, X 2 can be compared to a χ2 distribution with four degrees of freedom. If we do this, the P value for the test is .52. Unfortunately, it is quite obvious that the sample size is not large. The number of observations in many of the cells of the table is small. This is a serious problem and aspects of the problem are discussed in Section 4, the subsection of Section 3.5 on Other Sampling Models, and Chapter 8. However, to the extent that this book focuses on distribution theory, it focuses on asymptotic distributions. For now, we merely state that in this
example, the nij ’s and m ˆ ij ’s are such that, when taken together with the very large P value, we feel safe in concluding that these data provide no evidence of different surgical results for the three types of injuries. (This conclusion is borne out by the fact that an exact small sample test yields a similar P value, cf. Section 3.5.)
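For readers who want to reproduce the computation, the following Python sketch (not part of the original text; standard library only) forms the estimated expected counts under homogeneity and computes X² and the degrees of freedom for the knee injury data:

    n = [[21, 11, 4], [3, 2, 2], [7, 1, 1]]     # rows = Twist, Direct, Both; columns = E, G, F-P
    rowtot = [sum(row) for row in n]            # 36, 7, 9
    coltot = [sum(col) for col in zip(*n)]      # 31, 14, 7
    total = sum(rowtot)                         # 52

    X2 = 0.0
    for i in range(3):
        for j in range(3):
            m0 = rowtot[i] * coltot[j] / total  # estimated expected count under H0
            X2 += (n[i][j] - m0) ** 2 / m0
    df = (3 - 1) * (3 - 1)
    print(round(X2, 3), df)                     # about 3.23 on 4 degrees of freedom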
2.3.1
Response Factors
In Example 2.3.1, the result of surgery can be thought of as a response, whereas the type of injury is used to explain the response. Similarly, in Example 2.1.1, opinions on abortions can be considered as a response and sex can be considered as an explanatory factor. The existence of response factors is often closely tied to the sampling scheme. Product-multinomial sampling is commonly used with an independent multinomial sample taken for every combination of the explanatory factors and the categories of the multinomials being the categories of the response factors. This is illustrated in Example 2.1.1 where there are two independent multinomials (binomials), one for males and one for females. The categories for each multinomial are Support and Do Not Support legalized abortion. Example 3.5.2 in the next chapter involves two explanatory factors, Sex and Socioeconomic Status, and one response factor, Opinion on Legalized Abortion. Each of the four combinations obtained from the two sexes and the two statuses define an independent multinomial. In other words, there is a separate multinomial sample for each combination of sex and socioeconomic status. The categories of the response factor, Support and Do Not Support legalized abortion, are the categories of the multinomials. More generally, the categories of a response factor can be cross-classified with other response factors or explanatory factors to yield the categories in a series of independent multinomials. This situation is of most interest when there are several factors involved. Some factors can be cross-classified to define the multinomial populations while other factors can be cross-classified with the response factors to define the categories of the multinomials. Example 2.3.1 illustrates the simplest case in which there is one explanatory factor, Injury, crossed with one response factor, Result, to define the categories of the multinomial. Both Injury and Result have three levels so the multinomial has a total of nine categories. With only two factors in the table, there can be only one multinomial sample because there are no other factors available to define various multinomial populations. Example 3.5.1 is more general in that it has two independent multinomials, one for each sex. Each multinomial has six categories. The categories are obtained by cross-classifying the explanatory factor, Political Party, having three levels, with the response factor, Abortion Opinion, having two levels.
In this more general sampling scheme, one often conditions on all factors other than the response so that the analysis is reduced to that of the original sampling scheme in which every combination of explanatory factors defines an independent multinomial. Again, this is illustrated in Example 2.3.1. While the sampling was multinomial, we treated the sampling as productmultinomial with an independent multinomial for each level of the Injury. The justification for treating the data as product-multinomial is that we conditioned on the Injury. While the sampling techniques described above are probably the most commonly used, there are alternatives that are also commonly used. For example, in medicine a response factor is often some disease with levels that are various states of the disease. If the disease is at all rare, it may be impractical to sample different populations and see how many people fall into the various levels of the disease. In this case, one may need to take the disease levels as populations, sample from these populations, and investigate various characteristics of the populations. Thus the “explanatory” factors discussed above would be considered descriptive factors here. This sampling scheme is often called retrospective for obvious reasons. The other schemes discussed above are called prospective. These issues are discussed in more detail in the introduction to Chapter 4 and in Sections 4.7 and 11.7.
2.3.2
Odds Ratios
The null hypotheses (1) and (2) can be rewritten in terms of odds ratios.

Proposition 2.3.2.    Under product-multinomial sampling, p1j = · · · = pIj > 0 for all j = 1, . . . , J if and only if

    pij pi′j′ / (pij′ pi′j) = 1

for all i, i′ = 1, . . . , I and j, j′ = 1, . . . , J.

Proof.    a) Equality of probabilities across rows implies that the odds ratios equal one. By substitution,

    pij pi′j′ / (pij′ pi′j) = pij pij′ / (pij′ pij) = 1 .

b) All odds ratios equal to one implies equality of probabilities across rows. Recall that pi· = 1 for all i = 1, . . . , I, so that p·· = I. In addition, pij pi′j′ / (pij′ pi′j) = 1 implies pij pi′j′ = pij′ pi′j. Note that

    pij = pij p·· / I
        = (1/I) Σi′ Σj′ pij pi′j′
        = (1/I) Σi′ Σj′ pij′ pi′j
        = (1/I) Σj′ pij′ Σi′ pi′j
        = (1/I) Σj′ pij′ p·j
        = (1/I) p·j pi·
        = p·j / I ,

where the sums run over i′ = 1, . . . , I and j′ = 1, . . . , J. Because this holds for any i and j, p·j/I = p1j = p2j = · · · = pIj for j = 1, . . . , J.    □

Proposition 2.3.3.    Under multinomial sampling, 0 < pij = pi·p·j for all i = 1, . . . , I, j = 1, . . . , J if and only if

    pij pi′j′ / (pij′ pi′j) = 1

for all i, i′ = 1, . . . , I and j, j′ = 1, . . . , J.

Proof.
a) Independence implies that the odds ratios equal one. By substitution,

    pij pi′j′ / (pij′ pi′j) = (pi·p·j)(pi′·p·j′) / [(pi·p·j′)(pi′·p·j)] = 1 .

b) All odds ratios equal to one implies independence. If pij pi′j′ / (pij′ pi′j) = 1 for all i, i′, j, and j′, then pij pi′j′ = pij′ pi′j. Moreover, because p·· = 1,

    pij = pij p··
        = Σi′ Σj′ pij pi′j′
        = Σi′ Σj′ pij′ pi′j
        = Σi′ pi′j Σj′ pij′
        = Σi′ pi′j pi·
        = pi· Σi′ pi′j
        = pi· p·j ,

where the sums run over i′ = 1, . . . , I and j′ = 1, . . . , J.    □
There is a great deal of redundancy in specifying that

    pij pi′j′ / (pij′ pi′j) = 1

for all i, i′, j, and j′. For example, if i = i′, then pij pi′j′ / (pij′ pi′j) = pij pij′ / (pij′ pij) = 1 and no real restriction has been placed on the pij's. A similar result occurs if j = j′. More significantly, if p12p23/p13p22 = 1 and p12p24/p14p22 = 1, then

    1 = (p12p23/p13p22)(p14p22/p12p24) = p14p23/p13p24 .

In other words, the fact that two of the odds ratios equal one implies that a third odds ratio equals one. It turns out that the condition

    p11pij / (p1jpi1) = 1

for i = 2, . . . , I and j = 2, . . . , J is equivalent to the condition that all odds ratios equal one.

Proposition 2.3.4.    pij pi′j′ / (pij′ pi′j) = 1 for all i, i′, j, and j′ if and only if p11pij / (p1jpi1) = 1 for all i ≠ 1, j ≠ 1.

Proof.    Clearly, if the odds ratios are one for all i, i′, j, and j′, then p11pij/(p1jpi1) = 1 for all i and j. Conversely,

    1 = [p11pij/(p1jpi1)] [p11pi′j′/(p1j′pi′1)] / {[p11pij′/(p1j′pi1)] [p11pi′j/(p1jpi′1)]}
      = pij pi′j′ / (pij′ pi′j) .    □

Of course, in practice the pij's are never known. We can investigate independence by examining the estimated odds ratios

    p̂ij p̂i′j′ / (p̂ij′ p̂i′j) = nij ni′j′ / (nij′ ni′j)
or, equivalently, we can look at the log of this. For large samples, the log of the estimated odds ratio is normally distributed with standard error

    SE = √(1/nij + 1/nij′ + 1/ni′j + 1/ni′j′) .

This allows the construction of asymptotic tests and confidence intervals for the log odds ratio. Of particular interest is the hypothesis

    H0: pij pi′j′ / (pij′ pi′j) = 1 .

After taking logs, this becomes

    H0: log[pij pi′j′ / (pij′ pi′j)] = 0 .
Example 2.3.5.    We continue with the knee injury data of Example 2.3.1. From Proposition 2.3.4, it is sufficient to examine

    n11n22/(n12n21) = 21(2)/[11(3)] = 1.27 ,
    n11n23/(n13n21) = 21(2)/[4(3)] = 3.5 ,
    n11n32/(n12n31) = 21(1)/[11(7)] = .27 ,
    n11n33/(n13n31) = 21(1)/[4(7)] = .75 .

Although the X² test indicated no difference in the populations (populations = injury types), at least two of these estimated odds ratios seem substantially different from 1. In particular, relative to having an F-P result, the odds of an excellent result are about 3.5 times larger with twist injuries than with direct blows. Also, relative to having a good result, the odds of an excellent result from a twisted knee are only .27 of the odds of an excellent result with both types of injury. These numbers seem quite substantial, but they are difficult to evaluate without some idea of the error to which the estimates are subject. To this end, we use the large sample standard errors for the log odds ratios. Testing whether the log odds ratios are different from zero, we get

odds ratio    log(odds ratio)    SE        z
1.27          0.2412             0.9858    0.24
3.5           1.2528             1.0635    1.18
.27           −1.2993            1.1320    −1.15
.75           −0.2877            1.2002    −0.24
The large standard errors and small z values are consistent with the result of the X 2 test. None of the odds ratios appear to be substantially different from 1. Of course, it should not be overlooked that the standard errors are really only valid for large samples and we do not have large samples. Thus, all of our conclusions about the individual odds ratios must remain tentative.
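The four odds ratios, their logs, the asymptotic standard errors, and the z values in the table above can be recomputed with the short Python sketch below (added as an illustration; it is not part of the original text):

    import math

    n = {(1, 1): 21, (1, 2): 11, (1, 3): 4,
         (2, 1): 3,  (2, 2): 2,  (2, 3): 2,
         (3, 1): 7,  (3, 2): 1,  (3, 3): 1}

    for i in (2, 3):
        for j in (2, 3):
            odds = n[(1, 1)] * n[(i, j)] / (n[(1, j)] * n[(i, 1)])
            log_or = math.log(odds)
            se = math.sqrt(1/n[(1, 1)] + 1/n[(1, j)] + 1/n[(i, 1)] + 1/n[(i, j)])
            print(i, j, round(odds, 2), round(log_or, 4), round(se, 4), round(log_or / se, 2))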
2.4
Maximum Likelihood Theory for Two-Dimensional Tables
In this section, we introduce the likelihood function, maximum likelihood estimates, and (generalized) likelihood ratio tests. A valuable result for maximum likelihood estimation is given below without proof. r Lemma 2.4.1. Let f (p1 , . . . , pr ) = i=1 ni log pi . If ni > 0 for i = 1, . . . , r, then, subject to the conditions 0 < pi < 1 and p· = 1, the maxp1 , . . . , pˆr ) imum of f (p1 , . . . , pr ) is achieved at the point (p1 , . . . , pr ) = (ˆ where pˆi = ni /n· . In this section, we consider product-multinomial sampling of I populations, with each population divided into the same J categories. The I populations will form the rows of an I × J table. No results will be presented for multinomial sampling in an I × J table. The derivations of such results are similar to those presented here and are left as an exercise. The probability of obtaining the data ni1 , . . . , niJ from the ith multinomial sample is J ni· ! n pijij .
J j=1 nij ! j=1 Because the I multinomials are independent, the probability of obtaining all of the values nij , i = 1, . . . , I, j = 1, . . . , J, is I J ! n n i· (1) pijij . J n ! j=1 ij j=1 i=1 Thus, if we know the pij ’s, we can find the probability of obtaining any set of nij ’s. In point of fact, we are in precisely the opposite position. We do not know the pij ’s, but we do know the nij ’s. The nij ’s have been observed. If we think of (1) as a function of the pij ’s, we can write I J ni· ! nij pij (2) L(p) =
J j=1 nij ! J=1 i=1 where p = (p11 , p12 , . . . , pIJ ). L(p) is called the likelihood function for p. Some values of p give a very small probability of observing the nij ’s that
were actually observed. Such values of p are unlikely to be the true value of p. The true value of p is likely to be some value that gives a relatively large probability of observing what was actually observed. If we wish to estimate p, it makes sense to use the value of p that gives the largest probability of seeing what was actually observed. In other words, it makes sense to estimate p with a value pˆ that maximizes the likelihood function L(p). Such a value is called a maximum likelihood estimate (MLE) of p. Rather than maximizing the likelihood function (which involves many products), it is often easier to maximize the log of the likelihood function (in which products change to sums). Because the logarithm is a strictly increasing function, the maximum of the likelihood and the maximum of the log of the likelihood occur as the same point. For product-multinomial sampling, the log-likelihood function is I J J log(ni· !) − log(nij !) + nij log pij . log L(p) = i=1
j=1
j=1
To maximize this as a function of p, we can ignore any terms that do not depend on p. It suffices to maximize (p) =
I J
nij log pij .
i=1 j=1
The J maximum is achieved when we maximize each of the terms ˆ, j=1 nij log pij . By Lemma 2.4.1, the maximum is achieved at p = p where pˆij ≡ nij /ni· . We can also obtain maximum likelihood estimates for the expected counts mij . Because mij = ni· pij , the MLE of mij is m ˆ ij = ni· pˆij = nij . This follows from the invariance of maximum likelihood estimates: For any ˆ the MLE of a function of θ, say f (θ), is the parameter θ and MLE θ, ˆ cf. Cox and Hinkley (1974, p. corresponding function of the MLE, f (θ), 287). If we change the model so that the null hypothesis H0 : p1j = . . . = pIj ,
j = 1, . . . , J,
is true, we get different maximum likelihood estimates. Let πj = p1j = · · · = pIj . The log-likelihood function becomes I J J log(ni· !) − nij ! + nij log πj . log L(p) = i=1
j=1
j=1
Ignoring terms that do not involve the pij ’s, we are led to maximize I J
nij log πj
i=1 j=1
or, equivalently,
J
n·j log πj .
j=1
By Lemma 2.4.1, the maximum likelihood estimates become (0)
pˆij = π ˆj = n·j /n·· (0)
where the (0) in pˆij is used to indicate that the estimate was obtained assuming that H0 was true. Maximum likelihood estimates of the mij ’s are easily obtained under the null model H0 . Because mij = ni· pij , (0)
(0)
m ˆ ij = ni· pˆij = ni· n·j /n·· . (0)
(0)
ˆ ij are precisely the estimates used in Section 3 to Note that pˆij and m test H0 . The likelihood function can also be used as the basis for a test of whether H0 is true. The data have a certain likelihood of being observed, which can be summarized as the maximum value that the likelihood function achieves. If we place any restriction on the possible values of the pij ’s, we will reduce the likelihood of observing the data. If placing a restriction on the pij ’s reduces the likelihood too much, we can infer that the restriction on the pij ’s is not likely to be valid. The relative reduction in the likelihood can be measured by looking at the maximum of L(p) subject to the restriction divided by the overall maximum of L(p). If this ratio gets too small, we will reject the assumption that the restriction on the pij ’s is valid. In particular, if the restriction on the pij ’s is that H0 is true, we reject H0 when the likelihood ratio is too small. Again, we can simplify the mathematics by examining the log of the likelihood ratio and rejecting H0 when the log gets too small. Of course, the log of the likelihood ratio is just the difference in the log-likelihoods. The maximum value of the log-likelihood when the reduced model H0 is true is I J J log(ni· !) − log(nij !) + nij log(n·j /n·· ) . log L(ˆ p(0) ) = i=1
j=1
j=1
The overall maximum of the log-likelihood is I J J log(ni· !) − log(nij !) + nij log(nij /ni· ) . log L(ˆ p) = i=1
j=1
j=1
The difference is I J i=1 j=1
nij log(n·j /n·· ) −
I J
nij log(nij /ni· ) .
i=1 j=1
If we multiply by −2 and simplify, we get a likelihood ratio test statistic

    G² = 2 Σi Σj m̂ij log(m̂ij / m̂ij(0))
ˆ ij = where m ˆ ij = nij is the MLE of mij in the unrestricted model and m ni· n·j /n·· is the MLE of mij under the restriction that H0 is true. The reason for multiplying by −2 is that with this multiplication, the approximation G2 ∼ χ2 ((I − 1)(J − 1)) is valid when H0 is true and the samples are large. Note that because H0 was to be rejected for small values of the likelihood ratio, after taking logs and multiplying by −2, H0 should be rejected for large values of G2 . In particular, for large samples, an α level test of H0 is rejected if G2 > χ2 (1 − α, (I − 1)(J − 1)) .
Example 2.4.2. Computing the likelihood ratio test statistic using the data and estimated expected cell counts on knee operations in Example 2.3.1 gives G2 = 3.173 . This is similar to, but distinct from, the Pearson test statistic X 2 = 3.229. Both are based on 4 degrees of freedom. In this example, formal tests using either statistic suffer from the fact that the sample is not large. Larntz (1978) indicates that, for small samples, the actual size of the test that rejects H0 if X 2 > χ2 (1 − α, (I − 1)(J − 1)) tends to be nearer the nominal size α than the corresponding likelihood ratio test. This is related to the fact that G2 becomes too large when the observations are small but the estimated expected cell counts are not. Kreiner (1987) comes to similar conclusions. From the results of Larntz and others, Fienberg (1979) concludes that (a) if the minimum expected cell count is about 1, χ2 tests often work well and (b) if the sample size is 4 to 5 times the number of cells in the table, χ2 tests give P values with the correct order of magnitude. In practice, the first of these conclusions must
compare the estimated expected cell counts to 1. In Example 2.4.2, the test statistics are similar, so the choice of test is not important. The data also pass both of the criteria mentioned by Fienberg. The rules of thumb given in this paragraph can be applied to higher-dimensional tables. Although X 2 has the advantage alluded to above, G2 is more convenient to use in analyzing higher-dimensional tables. The likelihood ratio test statistic will be used almost exclusively after Chapter 3.
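Both statistics are simple to compute directly from the table. The following Python sketch (an illustration added here, not part of the original text) evaluates G² and X² for the knee injury data of Examples 2.3.1 and 2.4.2:

    import math

    n = [[21, 11, 4], [3, 2, 2], [7, 1, 1]]
    rowtot = [sum(row) for row in n]
    coltot = [sum(col) for col in zip(*n)]
    total = sum(rowtot)

    G2, X2 = 0.0, 0.0
    for i in range(3):
        for j in range(3):
            m_hat = n[i][j]                         # MLE of m_ij in the unrestricted model
            m0 = rowtot[i] * coltot[j] / total      # MLE of m_ij under H0
            if m_hat > 0:
                G2 += 2 * m_hat * math.log(m_hat / m0)
            X2 += (m_hat - m0) ** 2 / m0
    print(round(G2, 3), round(X2, 3))               # about 3.173 and 3.229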
Discussion There are philosophical grounds for preferring the use of G2 . The likelihood principle indicates that one’s conclusions should depend on the relative values of the likelihood function. The likelihood function depends only on the data that actually occurred. Because G2 is computed from the likelihood, its use can be consistent with the likelihood principle. Unfortunately, the standard use of G2 is to compute a P value or to perform an α level test. Both of these procedures depend on data that could have happened but did not, so these uses for G2 violate the likelihood principle. An excellent discussion of the likelihood principle is given by Berger and Wolpert (1984). Although formal tests are conducted throughout this book, the real emphasis is on informal evaluation of models using G2 and other tools. The emphasis is on modeling and data analysis, not formal inferential procedures. Nevertheless, a relatively complete account is given of the standard results in formal log-linear model methodology. Bayesian methods are the primary inferential methods that satisfy the likelihood principle. Chapter 13 discusses Bayesian logistic regression — but not log-linear models. Santner and Duffy (1989) include discussion of Bayesian methods. Exercise 2.3. For multinomial sampling, H0 is the restriction that pij = pi· p·j for all i and j. Show that (a) the unrestricted MLE of pij is pˆij = nij /n·· (b) the unrestricted MLE of mij is m ˆ ij = nij (0)
(c) the MLE of pij under H0 is pˆij = pˆi· pˆ·j = (ni· /n·· )(n·j /n·· ) (0)
(d) the MLE of mij under H0 is m ˆ ij = ni· n·j /n·· (e) the likelihood ratio test rejects H0 when 2
G =2
I J i=1 j=1
gets too large.
m ˆ ij log
m ˆ ij (0)
m ˆ ij
2.5
Log-Linear Models for Two-Dimensional Tables
It is our intention to exploit the similarities between analysis of variance (ANOVA) and regression on the one hand and log-linear and logistic regression models on the other. We begin by discussing two-factor analysis of variance. Consider a balanced ANOVA model yijk = µ+αi +βj +γij +eijk . We can change the symbols used to denote the parameters and rewrite the model as (1) yijk = u + u1(i) + u2(j) + u12(ij) + eijk , i = 1, . . . , I , j = 1, . . . , J , and k = 1, . . . , K. The eijk ’s are assumed to be independent N (0, σ 2 ). We can estimate σ 2 and test for interaction. If interaction exists, we can look at contrasts in the interaction; if no interaction exists, we can test for main effects and look at contrasts in the main effects. If some factor levels correspond to quantitative values, then regression ideas can be incorporated into the ANOVA. The estimate of σ 2 is the mean squared error 1 2 (yijk − y¯ij· ) . IJ(K − 1) i=1 j=1 I
MSE =
J
K
k=1
Everything in the analysis other than the estimate of σ 2 is a function of the y¯ij· ’s. In particular, we can form an I × J table of the y¯ij· ’s. The goal of the analysis is to explore the structure of this table. The ANOVA model (1) and the corresponding contrasts in interactions and main effects have proved to be very useful tools in exploring this I × J table. Let us reconsider what the ANOVA model is really saying. Basically, the ANOVA model is saying that the yijk ’s are independent and that yijk ∼ N (mij , σ 2 ) where mij = u + u1(i) + u2(j) + u12(ij) .
(2)
Our goal is to examine the structure of the mij ’s. To do that, we use the MLEs of the mij ’s, which are m ˆ ij = y¯ij· . Our statistical inferences are based on the fact that the m ˆ ij ’s are independent with m ˆ ij ∼ N (mij , σ 2 /K) and that the MSE is an estimate of σ 2 , which is independent of the m ˆ ij ’s. It is of interest to note that although the MSE is not the MLE of σ 2 , exactly
48
2. Two-Dimensional Tables and Simple Logistic Regression
the same tests and confidence intervals for the mij ’s would be obtained if the MLE for σ 2 was used in place of the MSE (and suitable adjustments in distributions were made). If we impose a restriction on the mij ’s – for example, the restriction of no interaction (3) mij = u + u1(i) + u2(j) – the MLEs of the mij ’s change. In particular, yi·· − y¯··· ) + (¯ y·j· − y¯··· ) m ˆ ij = y¯··· + (¯
(4)
and the MLE of σ 2 also changes. It can be shown that the usual F test for no interaction is just the likelihood ratio test for no interaction. To examine an I ×J table of counts, we use similar techniques. The table entries have the property that E(nij ) = mij . Again, we are interested in the structure of the mij ’s; however, instead of considering linear models like (2) and (3), we consider log-linear models such as log(mij ) = u + u1(i) + u2(j) + u12(ij) and log(mij ) = u + u1(i) + u2(j) . Our analysis will again rely on the MLEs of the mij ’s and on likelihood ratio tests; however, there are some differences. The nij ’s are typically multinomial or product-multinomial. Small sample results similar to those from standard analysis of variance are not available. Traditionally, tests and confidence intervals have been based on large sample approximate distributions. On the other hand, multinomial distributions depend only on the pij ’s or, equivalently, the mij ’s, so there is no need to deal with a term analogous to σ 2 in normal theory. Finally, the ANOVA model (1) is balanced; it has K observations in each cell of the table. This balance leads to simplifications in the analysis. If there are different numbers of observations in the cells, the simplifications are lost. For example, the simple formula (4) for MLEs under the no-interaction model does not apply. Log-linear models are analogous to ANOVA models with unequal numbers of observations. They almost never display all the simplifications associated with balanced observations in ANOVA and they only occasionally have simple formulas for MLEs. Although most work on log-linear models has used large sample (asymptotic) distributions, recently there has been considerable work on exact conditional inference and Bayesian inference for small samples. See the subsection on Other Sampling Methods in Section 3.5 for further discussion of exact conditional inference and Chapter 13 for a discussion of Bayesian methods.
There are several reasons for writing ANOVA type models for the log(mij )’s rather than the mij ’s. One is that the large sample theory can be worked out. In other words, one reason to do it is because it can be done. Another reason is that log-linear models arise in a natural fashion from the mathematics of Poisson sampling, cf. Chapter 9. Multinomial expected cell counts are bounded between 0 and the sample size N , these bounds place awkward limits on the parameters of ANOVA type models for the mij ’s. Such problems do not arise in log-linear modeling. One of the best reasons for considering log-linear models is that they often have very nice interpretations. We now examine the interpretations of log-linear models for two factors. Consider a multinomial sample. We know that mij = n·· pij . We can write a log-linear model log(mij ) = u + u1(i) + u2(j) + u12(ij) .
(5)
This has accomplished absolutely nothing! The terms u12(ij) are sufficient to explain the mij ’s. The u, u1(i) , and u2(j) terms are totally redundant; they can have any values at all, and yet, by choosing the u12(ij) ’s appropriately, equation (5) can be made to hold. Because model (5) has enough u terms to completely explain any set of mij ’s, model (5) is referred to as a saturated model. A more interesting example of a log-linear model occurs when the rows and columns of the table are independent. If mij = n·· pi· p·j , then log mij = log n·· + log pi· + log p·j . In other words, if rows and columns are independent, a log-linear model of the form (6) log mij = u + u1(i) + u2(j) holds. However, if we are to base our analysis on log-linear models, it is even more important to know that if model (6) holds, then rows and columns are independent. Theorem 2.5.1. For multinomial sampling in an I × J table, log(mij ) = u + u1(i) + u2(j) , i = 1, . . . , I , j = 1, . . . , J, if and only if pij = pi· p·j , i = 1, . . . , I , j = 1, . . . , J. Proof. We have already shown that independence implies the log-linear model. If the log-linear model holds, then mij = eu+u1(i) +u2(j) .
Let a = e^u, a1(i) = e^(u1(i)), and a2(j) = e^(u2(j)). Let a1(·) = Σi a1(i) and similarly for a2(·). Note that

    pij = mij/n·· = a a1(i) a2(j)/n·· ,
    pi· = a a1(i) a2(·)/n·· ,
    p·j = a a1(·) a2(j)/n·· ,

and

    1 = p·· = a a1(·) a2(·)/n·· .

Substitution gives

    pi·p·j = [a a1(i) a2(·)][a a1(·) a2(j)]/n··²
           = (a a1(i) a2(j)/n··)(a a1(·) a2(·)/n··)
           = a a1(i) a2(j)/n··
           = pij .
Thus, the log-linear model implies independence.
2
For product-multinomial sampling, mij = ni· pij
(7)
and the log-linear model log mij = u + u1(i) + u2(j) + u12(ij) holds trivially. Now, consider the model under H0 . If πj = p1j = · · · = pIj for all j = 1, . . . , J, then mij = ni· πj .
Theorem 2.5.2. For product-multinomial sampling in an I × J table where rows are independent samples, log mij = u+u1(i) +u2(j) , i = 1, . . . , I, j = 1, . . . , J, if and only if p1j = · · · = pIj , j = 1, . . . , J. Proof. If for each j the probabilities pij are equal, we have mij = ni· πj and log mij = log ni· + log πj . Taking u = 0, u1(i) = log(ni· ), and u2(j) = log(πj ) shows that the log-linear model holds. Conversely, if log mij = u + u1(i) + u2(j) , then mij = aa1(i) a2(j) , where a = eu , a1(i) = eu1(i) , and a2(j) = eu2(j) . Note that pi· = 1, so from (7), mi· = ni· and ni· = a a1(i) a2(·) .
Because pij = mij/ni·,

    pij = a a1(i) a2(j)/ni· = a a1(i) a2(j)/[a a1(i) a2(·)] = a2(j)/a2(·) .
This is true for any i, so a2(j) /a2(·) = p1j = p2j = · · · = pIj , j = 1, . . . , J. 2
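The content of Theorems 2.5.1 and 2.5.2 can be illustrated numerically. In the sketch below (added for illustration, not part of the original text), the personality-exercise data of Example 2.2.1 are fitted with the additive model by taking m̂ij = ni·n·j/n··; the resulting log expected counts are exactly additive, so the interaction contrast log m̂11 − log m̂12 − log m̂21 + log m̂22 is zero up to rounding error.

    import math

    n = [[483, 477], [1101, 1121]]
    rowtot = [sum(row) for row in n]
    coltot = [sum(col) for col in zip(*n)]
    total = sum(rowtot)

    # fitted expected counts under log m_ij = u + u1(i) + u2(j)
    m = [[rowtot[i] * coltot[j] / total for j in range(2)] for i in range(2)]

    contrast = (math.log(m[0][0]) - math.log(m[0][1])
                - math.log(m[1][0]) + math.log(m[1][1]))
    print(round(contrast, 12))                  # 0.0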
2.5.1
Odds Ratios
In applications with high-dimensional tables, it is rare that there are no important interactions. In order to explore the nature of the interactions, we need to look at contrasts in the interactions. To do this, we need a method of defining contrasts in the interactions. We begin by reviewing methods for examining interactions in analysis of variance. Let qij, i = 1, . . . , I, j = 1, . . . , J, be any set of numbers with the property that qi· = q·j = 0. In balanced analysis of variance,

    Σi Σj qij mij    (8)

is a contrast in the interactions. Using model (2) and the fact that qi· = q·j = 0, the contrast (8) can also be written as

    Σi Σj qij u12(ij) ,
which involves only the interactions. The most interpretable way of obtaining a contrast in the interactions is to define the interaction contrast in terms of contrasts in the main effects. Let ai , i = 1, . . . , I, determine a contrast in the rows (thus, a· = 0) and let bj , j = 1, . . . , J, determine a contrast in the columns (so b· = 0). Then, if we take qij = ai bj , we get a contrast in the interactions. Recall that if there is no interaction, all interaction contrasts equal zero. Conversely, the interaction has (I − 1)(J − 1) degrees of freedom, so specifying that any (I − 1)(J − 1) linearly independent contrasts in the interaction are all zero is equivalent to specifying that there is no interaction. A valuable data analytic technique for examining interactions in twoway analysis of variance is the interaction plot. It consists of plotting the I curves determined by connecting the points (j, m ˆ ij ), j = 1, . . . , J, with line segments. In this plot, m ˆ ij = y¯ij· , the estimate of mij in model (2). If there is no interaction, mij = u + u1(i) + u2(j) and the I theoretical curves (j, mij ) are parallel. If interaction exists, the theoretical curves are not
parallel. The curves (j, m̂ij ) estimate the theoretical curves. If the curves (j, m̂ij ) are approximately parallel, it suggests that there is no interaction. If interaction exists, the estimated curves can suggest the nature of the interaction. Whether the plots are approximately parallel depends on the variability of the m̂ij 's. Rather than plotting the I curves based on (j, m̂ij ), one can plot the J curves based on (i, m̂ij ), i = 1, . . . , I. Again, in the absence of interactions, the curves should be approximately parallel. If the column treatments correspond to quantitative levels, say, xj , j = 1, . . . , J, then plots of (xj , m̂ij ) are appropriate. Again, one looks for parallelism. Similar plots can be constructed for row treatments with quantitative levels.

In log-linear models, the same procedures can be applied to the log(mij )'s. In particular, specifying that an odds ratio equals one is equivalent to specifying that an interaction contrast is zero. First, note that odds ratios can be written in terms of expected values. For product-multinomial sampling, mij = ni· pij and for multinomial sampling, mij = n·· pij . In either case,

pij pi′j′ /(pij′ pi′j ) = mij mi′j′ /(mij′ mi′j ) .

If

mij mi′j′ /(mij′ mi′j ) = 1 ,

then taking logs gives

log mij − log mij′ − log mi′j + log mi′j′ = 0 .

This is precisely the statement that the interaction contrast

Σ_{r=1}^{I} Σ_{s=1}^{J} qrs log(mrs )    (9)

equals zero, where qij = qi′j′ = 1, qij′ = qi′j = −1, and qrs = 0 for all other pairs (r, s). In particular, the coefficients qrs can be obtained by combining the contrast in the rows ai = 1, ai′ = −1, and ar = 0 for all other r with the contrast in the columns bj = 1, bj′ = −1, and bs = 0 for all other s. Observe that the contrast (9) can also be written

Σ_{r=1}^{I} Σ_{s=1}^{J} qrs u12(rs)
where we have used model (5) and the fact that qr· = q·s = 0. If we specify that

m11 mij /(m1j mi1 ) = 1

for all i = 2, . . . , I and j = 2, . . . , J, then we have specified that (I − 1)(J − 1) linearly independent interaction contrasts in the log(mij )'s are all equal to zero; hence, there is no interaction.

As with analysis of variance, an interaction plot can be a valuable tool in the analysis of log-linear models. The I curves that connect the sets of points (j, log(m̂ij )), j = 1, . . . , J, are the basis of the interaction plot. The estimated expected counts m̂ij are estimated using model (5), which contains interaction. Under model (5), m̂ij = nij . The I curves estimate the theoretical curves based on (j, log(mij )). If there is no interaction, the theoretical curves are parallel and estimated curves should indicate this. If interaction exists, the nature of the interaction should be suggested by the estimated curves.

Example 2.5.3. Consider the data given below on the relationship between college of enrollment and political affiliation for university students.

                   Political Affiliation
College          Rep.   Dem.   Ind.   Total
Letters           34     61     16     111
Engineering       31     19     17      67
Agriculture       19     23     16      58
Education         23     39     12      74
Totals           107    142     61     310
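The test statistics reported next can be computed directly from this table. The following minimal sketch (Python with numpy; illustrative only, not part of the book's software) shows one way to do it.

import numpy as np

# Observed counts: colleges (rows) by political affiliation (columns).
n = np.array([[34, 61, 16],
              [31, 19, 17],
              [19, 23, 16],
              [23, 39, 12]], dtype=float)

# Estimated expected counts under independence: m_ij = n_i. n_.j / n..
row = n.sum(axis=1, keepdims=True)
col = n.sum(axis=0, keepdims=True)
m = row * col / n.sum()

X2 = ((n - m) ** 2 / m).sum()          # Pearson statistic
G2 = 2 * (n * np.log(n / m)).sum()     # likelihood ratio statistic
df = (n.shape[0] - 1) * (n.shape[1] - 1)
print(round(X2, 2), round(G2, 2), df)  # roughly 16.16, 16.39, 6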
The Pearson and likelihood ratio test statistics for independence (no interaction) are X 2 = 16.16 and G2 = 16.39 . The test has (4 − 1)(3 − 1) = 6 degrees of freedom. The 99th percentile of a χ2 (6) is 16.81, so the P value for either statistic is a little above .01. An interaction plot uses the values log(nij ) given below.
                   Political Affiliation
College          Rep.   Dem.   Ind.
Letters           3.5    4.1    2.8
Engineering       3.4    2.9    2.8
Agriculture       2.9    3.1    2.8
Education         3.1    3.7    2.5
The interaction plot is given in Figure 2.1. The curves for Letters and Education are almost parallel. The curve for Agriculture is similar but not
2.6 Simple Logistic Regression
is viewed as an independent trial. The result of a trial is 1 if any field O-rings failed on the flight and 0 if all the O-rings functioned properly. A simple logistic regression uses temperature to model the probability that any O-ring failed. Such a model allows us to predict O-ring failure from temperature.

TABLE 2.1. O-Ring Failure Data
Case   Flight   Failure   Success   Temperature
  1      14        1         0          53
  2       9        1         0          57
  3      23        1         0          58
  4      10        1         0          63
  5       1        0         1          66
  6       5        0         1          67
  7      13        0         1          67
  8      15        0         1          67
  9       4        0         1          68
 10       3        0         1          69
 11       8        0         1          70
 12      17        0         1          70
 13       2        1         0          70
 14      11        1         0          70
 15       6        0         1          72
 16       7        0         1          73
 17      16        0         1          75
 18      21        1         0          75
 19      19        0         1          76
 20      22        0         1          76
 21      12        0         1          78
 22      20        0         1          79
 23      18        0         1          81

Let pi be the probability that any O-ring fails in case i. A simple linear logistic regression model for these data is

logit(pi ) ≡ log[pi /(1 − pi )] = β0 + β1 τi ,

where τi is the known temperature and β0 and β1 are unknown intercept and slope parameters (coefficients). The logistic regression model presents the log odds of O-ring failure as a linear function of temperature. We again use maximum likelihood estimates. The likelihood function for logistic regression is discussed later in this section. The procedure for finding maximum likelihood estimates is discussed later in the book. For now, we merely present results and use analogies to standard regression. The coefficient estimates, standard errors, and z values are
Variable        Estimate   Std. Error      z
Intercept        15.04       7.316        2.06
Temperature     −0.2321      0.1073      −2.16

The z values are simply the estimate divided by the standard error. They are test statistics for testing whether a coefficient equals zero. In particular, z = −2.16 yields a P value for H0 : β1 = 0 that is approximately .03. An alternative and preferred test is presented later.

To predict the probability of any O-ring failures for a flight at a temperature of τ ,

log[p/(1 − p)] = β0 + β1 τ ,

which can be rearranged into

p = exp(β0 + β1 τ ) / [1 + exp(β0 + β1 τ )] .

The estimated probability is

p̂ = exp(β̂0 + β̂1 τ ) / [1 + exp(β̂0 + β̂1 τ )] .
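As a quick numerical check of these formulas, the following minimal sketch (Python; the coefficient values are simply the rounded estimates reported above) evaluates p̂ at a few temperatures.

from math import exp

b0, b1 = 15.04, -0.2321          # rounded estimates from the table above

def p_hat(tau):
    """Estimated probability of any O-ring failure at temperature tau."""
    eta = b0 + b1 * tau
    return exp(eta) / (1 + exp(eta))

for tau in (81, 65, 53, 31):
    print(tau, round(p_hat(tau), 4))
# At 53 degrees p_hat is about .94; at 31 degrees it is about .9996.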
Figure 2.2 gives a plot of the estimated probabilities as a function of temperature. The Challenger was launched at τ = 31 degrees Fahrenheit, so the predicted log odds are 15.04 − (.2321)31 = 7.8449 and the predicted probability of an O-ring failure is e^7.8449 /(1 + e^7.8449 ) = .9996. Actually, there are problems with this prediction because we are predicting very far from the observed data. The lowest temperature at which a shuttle had previously been launched was 53 degrees, very far from 31 degrees. According to the fitted model, a launch at 53 degrees has probability .939 of O-ring failure, so even with the caveat about predicting beyond the range of the data, the model indicates an overwhelming probability of failure.

Before discussing logistic regression in general, we review standard one-way ANOVA and simple linear regression with normal errors. Suppose we have independent observations yij on I populations. The one-way ANOVA model is

yij = mi + εij ,    (1)

εij 's independent N(0, σ²), i = 1, . . . , I, j = 1, . . . , Ni . Here, E(yij ) ≡ mi . Alternatively, when a predictor variable xi is available for each population, a simple linear regression model for the yij 's is

yij = β0 + β1 xi + εij .    (2)

Model (2) is specifying a linear structure for the mi 's defined in model (1), i.e., for i = 1, . . . , I,

mi = β0 + β1 xi .
i. Also, pi ≡ pi1 and 1 − pi ≡ pi2 with similar definitions for the expected values. In particular, mi = Ni pi = ni· pi1 = mi1 and mi2 = Ni (1 − pi ), so pi /(1 − pi ) = mi1 /mi2 . The log-linear version of model (3) is

log(mij ) = u1(i) + u2(j) + ηj xi ,    (4)

where the usual interaction term u12(ij) from (2.5.5) is being replaced in the model by a more specific interaction term, ηj xi . Of course, xi is the known predictor variable, but ηj is an unknown parameter. This is an interaction term because it involves both the i and j subscripts, just like u12(ij) . The relationship between the logistic model (3) and the log-linear model (4) is that

log[pi /(1 − pi )] = log(mi1 /mi2 )
                   = log(mi1 ) − log(mi2 )
                   = [u1(i) + u2(1) + η1 xi ] − [u1(i) + u2(2) + η2 xi ]
                   = [u2(1) − u2(2) ] + [η1 − η2 ]xi
                   ≡ β0 + β1 xi ,

where β0 ≡ u2(1) − u2(2) and β1 ≡ η1 − η2 .

As in Section 4, we can use maximum likelihood to estimate the parameters and to generate tests. The likelihood function L(p) for a two-dimensional table was given in (2.4.2). Equation (3) can be rearranged to give

pi = exp(β0 + β1 xi ) / [1 + exp(β0 + β1 xi )] ,
1 − pi = 1 / [1 + exp(β0 + β1 xi )] .

Recalling that pi ≡ pi1 , (1 − pi ) ≡ pi2 , yi ≡ ni1 , and Ni − yi ≡ ni2 , substitution into (2.4.2) gives the likelihood function

L(β0 , β1 ) = Π_{i=1}^{I} [ni· ! / (ni1 ! ni2 !)] [exp(β0 + β1 xi ) / (1 + exp(β0 + β1 xi ))]^{ni1} [1 / (1 + exp(β0 + β1 xi ))]^{ni2} .

It is by no means obvious what values of β0 and β1 will maximize this function. In Chapters 10 and 11, we discuss the Newton–Raphson method for obtaining such maxima. For now, we rely on a computer program to give us the maximizing values. (See Subsections 2.6.1 and 4.4.2 for SAS, BMDP, and GLIM computer commands.)
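For readers curious about the iteration itself, the following hedged sketch (Python with numpy; illustrative only, not the book's code or software) maximizes L(β0 , β1 ) for the O-ring data by Newton–Raphson and also computes the resulting deviance against the saturated model.

import numpy as np

# O-ring data from Table 2.1: temperature and failure indicator (Ni = 1).
temp = np.array([53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70,
                 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81], dtype=float)
y = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0,
              1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0], dtype=float)

X = np.column_stack([np.ones_like(temp), temp])   # design matrix (intercept, temp)
beta = np.zeros(2)
for _ in range(25):                               # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))               # current fitted probabilities
    W = p * (1 - p)                               # binomial weights
    score = X.T @ (y - p)                         # gradient of the log-likelihood
    info = X.T @ (X * W[:, None])                 # Fisher information matrix
    beta = beta + np.linalg.solve(info, score)

p = 1 / (1 + np.exp(-X @ beta))
G2 = -2 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
print(beta)          # roughly (15.04, -0.232)
print(round(G2, 3))  # deviance versus the saturated model, roughly 20.3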
As in the example, if p is the probability for a predictor x,

log[p/(1 − p)] = β0 + β1 x   and   p = exp(β0 + β1 x) / [1 + exp(β0 + β1 x)] .

Given the MLEs β̂0 and β̂1 , we get the estimated probability associated with x:

p̂ = exp(β̂0 + β̂1 x) / [1 + exp(β̂0 + β̂1 x)] .

In particular, this formula provides the p̂i 's when doing predictions at the xi 's. It also provides m̂ij 's through m̂ij = ni· p̂ij . We can try to test model (4) against the more general saturated model (2.5.5). Recall that the MLEs for the expected cell counts under model (2.5.5) are just the nij 's, so

G² = 2 Σ_{i=1}^{I} Σ_{j=1}^{2} nij log(nij /m̂ij )
   = 2 Σ_{i=1}^{I} [ni1 log(ni1 /m̂i1 ) + ni2 log(ni2 /m̂i2 )]
   = 2 Σ_{i=1}^{I} [yi log(yi /Ni p̂i ) + (Ni − yi ) log((Ni − yi )/Ni (1 − p̂i ))] .

In this formula, if yi = 0, then yi log(yi ) is taken as zero. The Pearson test statistic is

X² = Σ_{i=1}^{I} Σ_{j=1}^{2} (nij − m̂ij )² / m̂ij
   = Σ_{i=1}^{I} { (yi − Ni p̂i )² / (Ni p̂i ) + [(Ni − yi ) − Ni (1 − p̂i )]² / [Ni (1 − p̂i )] }
   = Σ_{i=1}^{I} [ (yi − Ni p̂i )² / (Ni p̂i ) + (yi − Ni p̂i )² / (Ni (1 − p̂i )) ]
   = Σ_{i=1}^{I} (yi − Ni p̂i )² / [Ni p̂i (1 − p̂i )] .

The degrees of freedom for the tests are 23 − 2 = 21, i.e., the number of cases minus one for the intercept and one for temperature. This computation is based on model (3). Alternatively, based on model (4), the degrees of freedom are the number of cells in the two-way table, 23 × 2, minus 23 for fitting row effects and the grand mean, minus 1 for column effects, and minus 1 for fitting the interaction term based on temperature, i.e., 46 − 23 − 1 − 1 = 21.

G² and X² are appropriate test statistics, but, unfortunately, for them to have large sample χ² distributions, we need the nij 's to get large. In this example, the nij 's are 0 or 1, so a χ² test is inappropriate for this example. In general, a χ² test of a logistic regression model against the saturated model (2.5.5) is appropriate only when the sample sizes Ni for the I populations are all large.

We can also use model (4) as a full model and test it against a reduced model. Since models (4) and (3) are equivalent, we specify a reduced model for model (3), say

log[pi /(1 − pi )] = β0 .    (5)

Testing model (5) against model (3) is equivalent to testing H0 : β1 = 0. Given an estimate β̂0 for model (5), we get p̂i = e^β̂0 /(1 + e^β̂0 ), estimates p̂ij , and estimates, say, m̂ij^(0) = ni· p̂ij , where the (0) indicates that the expected cell count is estimated under H0 : β1 = 0. The test statistic is

G² = 2 Σ_{i=1}^{I} Σ_{j=1}^{2} m̂ij log(m̂ij /m̂ij^(0) ) .

Unlike standard regression analysis where the t test for H0 : β1 = 0 is equivalent to the F test, in logistic regression the z test described earlier can give different results than the G² test described here. Both tests of H0 will generally be valid whenever I is large.

Exercise 2.4 Show that the independence model (2.5.6) implies model (5). Hint: Use the same method as was used to show that model (4) implies model (3).

Crude standardized residuals can be defined as

r̃i = (yi − Ni p̂i ) / √[Ni p̂i (1 − p̂i )] ,    (6)

so that Pearson's chi-squared is X² = Σ_{i=1}^{I} r̃i² . Note that Var(yi ) = Ni pi (1 − pi ), making this definition of crude standardized residuals an estimate of [yi − E(yi )]/√Var(yi ). (These are "crude" in that they ignore the variability of p̂i .) When Ni = 1, the residuals will not have an asymptotic normal distribution, which is a major reason why these residuals do not behave like residuals in normal theory models.

Example 2.6.1 Continued. For the simple linear logistic regression model, G² = 20.315 with 21 degrees of freedom. For the intercept-only model, G² = 28.267 with 22 degrees of freedom. Since Ni = 1 for all i, neither of these G² 's is compared directly to a chi-squared distribution.
The model based test for H0 : β1 = 0 has G2 = 28.267 − 20.315 = 7.952 on df = 22−21 = 1. Comparing this to a χ2 (1) distribution, the P value for the test is approximately .005. It is considerably smaller than the P value for the z test of H0 . This test is generally preferred to the z test. Since Ni = 1 for all i, we delay consideration of residuals until Chapter 4. All of the methods presented in this section carry over to multiple logistic regression in which there is more than one predictor variable. Such models are discussed in Chapter 4.
2.6.1 Computer Commands

The data are in a file 'oring.dat' that looks like Table 2.1 except it has an extra column at the right (which contains the actual number of O-rings that failed on each flight). Perhaps the simplest way to fit the logistic regression model in SAS is to use PROC GENMOD.

options ps=60 ls=72 nodate;
data oring;
infile 'oring.dat';
input ID flt f s temp junk;
n = 1;
proc genmod data = oring;
model f/n = temp / link=logit dist=binomial;
run;

The first line controls printing of the output. The next four lines define the data. The variable "n" is used to specify that there is only one trial in each of the 23 binomials. PROC GENMOD needs the data specified: "data = oring". GENMOD also needs information on the model. "link = logit" and "dist = binomial" are both needed to specify that a logistic regression is being fitted. "model f/n = temp" indicates that we are modeling the number of failures in "f" out of "n" trials using the predictor "temp" (and implicitly an intercept). A more powerful SAS program for logistic regression is PROC LOGISTIC. Commands for this, BMDP-LR, and GLIM are given in Subsection 4.4.2.

2.7 Exercises
Exercise 2.7.1. The data in Table 2.2 are on graduate admissions by sex at the University of California, Berkeley, and are given by Bickel et al. (1975) and Freedman et al. (1978). Test for independence, examine the
Pearson residuals, and evaluate the odds ratio. What conclusions do you reach? (Do not put too much credence in your analysis; the data will be reanalyzed in Exercise 3.6.4.)

TABLE 2.2. Graduate Admissions at Berkeley
            Male    Female
Admitted    1198      557
Rejected    1493     1278

Exercise 2.7.2. Cramér (1946) presents data on the distribution of birth dates for males and females born in Sweden in 1935. The data given in Table 2.3 presume a natural ordering for the months of the year that Cramér does not specify. Analyze the data. Is it better to think of this as one multinomial sample or as two independent multinomial samples?

TABLE 2.3. Swedish Birth Dates
Month        Female    Male
January       3537     3743
February      3407     3550
March         3866     4017
April         3711     4173
May           3775     4117
June          3665     3944
July          3621     3964
August        3596     3797
September     3491     3712
October       3391     3512
November      3160     3392
December      3371     3761
Exercise 2.7.3. Gilby (1911) presents data on the relationships among instructor’s evaluation of general intelligence, quality of clothing, and school standard. General intelligence was classified using a system of Karl Pearson’s that was reported in Waite (1911). Briefly, the Intelligence classifications are A – Mentally Defective, B – Dull, C – Slow, E – Fairly Intelligent, F – Capable, and G – Very Able. Clothing was classified as I – Very Well Clad, II – Well Clad, III – Poor but Passable, IV – Insufficient, V – Worse than Insufficient. Throughout, intelligence category A was combined with B and clothing category V was combined with IV. This was done because of small numbers of observations. The third variable, Standard, seems to
be similar to the American idea of a school grade. For example, roughly half of 10-year-olds were in Standard III with most of the others in II or IV. For 10½-year-olds, about two-thirds were in standards III or IV with most of the rest in II or V. Data were collected from 36 instructors spread over eight different primary schools. Tables 2.4, 2.5, and 2.6 summarize some of the data; use the methods of Chapter 2 to analyze these data.

TABLE 2.4. Intelligence versus Clothing
                        Intelligence
Clothing     B     C     D     E     F     G
I           33    48   113   209   194    39
II          41   100   202   255   138    15
III         39    58    70    61    33     4
IV,V        17    13    22    10    10     1

TABLE 2.5. Intelligence versus Standard
                        Intelligence
Standard     B     C     D     E     F     G
I           17    27    45    50    27     1
II          23    34    61    66    36     1
III         42    42    69   117    72    10
IV          16    25    41    75    53    11
V           18    38    66    77    45     6
VI          10    32    73    80    98    18
VII          4    19    39    52    35    11
VIII         0     2    13    18     9     1

TABLE 2.6. Clothing versus Standard
                     Clothing
Standard      I     II    III   IV,V
I            20     87     56     4
II           71     88     42    20
III         157    134     41    20
IV           82     77     45    17
V           101    117     29     3
VI          127    145     32     7
VII          59     81     18     2
VIII         19     22      2     0
Exercise 2.7.4. Partitioning Tables. The examination of odds ratios and residuals provides two ways to investigate lack of independence in a two-way table. The partitioning methods of Irwin (1949) and Lancaster (1949) provide another. Christensen (1996a, Section 8.6) gives extensive examples of the application of these methods. Table 2.7 gives data on the occupation of family heads for families of various religious groups. The occupations are A – Professions, B – Owners, Managers, and Officials, C – Clerical and Sales, D – Skilled, E – Semiskilled, F – Unskilled, G – Farmers, H – No Occupation. The data were extracted from Lazerwitz (1961). Although the data were collected using a complex sampling design (cf. Section 3.5), ignore this fact in your analysis.

To establish the effect of the Protestant groups on the lack of independence, we can isolate the Protestant groups in a separate reduced table. We can also pool the Protestants together in a collapsed table that includes the non-Protestant groups. These are both given in Table 2.8. Test each of the three tables for independence. Note that G² for the full table equals the sum of the G² 's for the reduced table and the collapsed table. Continue the analysis of these data by using the partitioning procedure on the reduced and collapsed tables and on subsequent generations of reduced and collapsed tables. Note that tables can also be partitioned on their columns. At its logical extreme, this leads to a collection of 2 × 2 tables, each with one degree of freedom for testing independence. The Lancaster-Irwin partitioning provides a method of breaking the interaction (lack of independence) G² for the full table into one degree of freedom components that add up to the original G². This is similar to using orthogonal contrasts to break up the interaction sum of squares in a balanced analysis of variance. For a theoretical justification of the Lancaster-Irwin procedure, see Exercise 8.4.3.

TABLE 2.7. Occupation and Religion
Religion           A     B     C     D     E     F     G     H
White Baptist     43    78    64   135   135    57    86   114
Black Baptist      9     2     9    23    47    77    18    41
Methodist         73    80    80   117   102    58    66   153
Lutheran          23    36    43    59    46    26    49    46
Presbyterian      35    54    38    46    19    22    11    46
Episcopalian      27    27    20    14     7     5     2    15
Roman Catholic   102   140   127   279   254   127    51   190
Jewish            36    60    30    17    17     2     0    26
No Religion       19    12     6    12    25     9    14    28
TABLE 2.8. Partitioned Tables

Reduced Table
Religion           A     B     C     D     E     F     G     H
White Baptist     43    78    64   135   135    57    86   114
Black Baptist      9     2     9    23    47    77    18    41
Methodist         73    80    80   117   102    58    66   153
Lutheran          23    36    43    59    46    26    49    46
Presbyterian      35    54    38    46    19    22    11    46
Episcopalian      27    27    20    14     7     5     2    15

Collapsed Table
Religion           A     B     C     D     E     F     G     H
Protestant       210   277   254   394   356   245   232   415
Roman Catholic   102   140   127   279   254   127    51   190
Jewish            36    60    30    17    17     2     0    26
No Religion       19    12     6    12    25     9    14    28

Exercise 2.7.5. Fisher's Exact Test. Consider the problem of testing whether the probability of success is the same for two independent binomials. Let yi ∼ Bin(Ni , pi ), i = 1, 2. Write the 2 × 2 table as

     y1        N1 − y1          N1
     y2        N2 − y2          N2
     t       N1 + N2 − t      N1 + N2

(a) Find Pr(y1 = r1 and t = t0 ) for arbitrary r1 and t0 .
(b) Assuming p1 = p2 , find Pr(y1 = r1 | t = t0 ).
(c) Consider the following subset of the knee injury data of Example 2.3.1

            Result
Injury      E    G
Direct      3    2
Twist       7    1

Using the conditional distribution of (b), find the probability of getting the observed value 3. The P value for Fisher's exact test is the sum of the Pr(y1 = r1 |t = 10)'s for every r1 value that satisfies Pr(y1 = r1 |t = 10) ≤ Pr(y1 = 3|t = 10). Find the P value for the data given above. Note that this test does not depend on any large sample approximations, so it is exact even for small samples. On the other hand, the computations become difficult with large samples.

Exercise 2.7.6. Yule's Q. For 2 × 2 tables, a measure of association similar to a correlation coefficient is Yule's Q, which is defined as

Q = (p11 p22 − p12 p21 ) / (p11 p22 + p12 p21 ) .
Find Q in terms of the odds ratio. Show that Q lies between −1 and 1.

Exercise 2.7.7. Freeman-Tukey Residuals. Freeman and Tukey (1950) suggest a variance stabilizing transformation for Poisson data that leads to using the quantities

√nij + √(nij + 1) − √(4m̂ij^(0) + 1)

as residuals, cf. Bishop, Fienberg, and Holland (1975, Section 4.4). Reexamine the data of Example 2.3.1 using the Freeman-Tukey residuals.

Exercise 2.7.8. Power Divergence Statistics. Cressie and Read (1984) and Read and Cressie (1988) have introduced the power divergence family of test statistics

2I^λ = [2/(λ(λ + 1))] Σ_{ij} nij [ (nij /m̂ij^(0) )^λ − 1 ] ,

where for λ = −1, 0 the statistics are defined by taking limits. They establish that for any λ, the large sample distribution under H0 is χ² with the usual degrees of freedom. Show that X² = 2I¹ and G² = 2I⁰. Find the relationship between 2I^{−1/2} and the Freeman-Tukey residuals discussed in Exercise 2.7.7.

Exercise 2.7.9. Compute the power divergence test statistics 2I^{−1/2} and 2I^{1/2} for the knee injury data of Example 2.3.1. Compare the results to G² and X². What conclusions can be reached about knee injuries?

Exercise 2.7.10. Testing for Symmetry. Consider a multinomial sample arranged in an I × I table. In square tables with similar categories for the two factors, it is sometimes of interest to test H0 : pij = pji for all i and j.
(a) Give a procedure for testing this hypothesis based on testing equality of probabilities (homogeneity of proportions) in a 2 × I(I − 1)/2 table. If you think of the I × I table as a matrix, the rows indicate whether a cell is above or below the diagonal. The columns are corresponding off diagonal pairs. Illustrate the test for a 4 × 4 table.
(b) Give a justification for the procedure in terms of a (conditional) sampling model.
(c) The data in Table 2.9 were given by Fienberg (1980), Yule (1900), and earlier by Galton. They report the relative heights of 205 married couples.
TABLE 2.9. Heights of Married Couples
                     Wife
Husband     Tall   Medium   Short
Tall         18      28       14
Medium       20      51       28
Short        12      25        9

Test for symmetry and do any other appropriate analysis for these data. Do the data display symmetry?

Exercise 2.7.11. Correlated Data. There are actually 410 observations in Exercise 2.7.10 and Table 2.9. There are 205 men and 205 women. Why was Table 2.9 set up as a 3 × 3 table with only 205 observations rather than as Table 2.10, a 2 × 3, sex versus height table with 410 observations?

TABLE 2.10. Heights of Married Couples
                    Height
Sex         Tall   Medium   Short
Wife         50     104       51
Husband      60      99       46

Exercise 2.7.12. McNemar's Test. McNemar (1947) proposes a method of testing for homogeneity of proportions among two binary populations when the data are correlated. (A binary population is one in which all members fall into one of two categories. Homogeneity means that the proportions in each category are the same for both groups.) If we restrict attention in Exercise 2.7.10 and Table 2.9 to the subpopulation of Tall and Medium people, we get an example of such data. The data on a husband and wife pair cannot be considered as independent, but this problem is avoided by treating each pair as a single response. The data from the subpopulation are given below.

                  Wife
Husband     Tall   Medium
Tall         18      28
Medium       20      51
Conditionally, these data are a multinomial sample of 117. The probability of a tall woman is p11 + p21 and the probability of a medium woman is
one minus that. The probability of a tall man is p11 + p12 and again the probability of a medium man can be found by subtraction. It follows that the probability of a tall woman is the same as the probability of a tall man if and only if p21 = p12 . Thus, for 2 × 2 tables, the problem of homogeneity of proportions is equivalent to testing for symmetry. McNemar's test is just the test for symmetry in Exercise 2.7.10 applied to 2 × 2 tables. Check for homogeneity of proportions in the subpopulation. For square tables that are larger than 2 × 2, the problem of testing for marginal homogeneity is more difficult and cannot, as yet, be addressed using log-linear models. Nonetheless, a test can be obtained from basic asymptotic results, cf. Exercise 10.8.6.

Exercise 2.7.13. Suppose the random variables nij , i = 1, 2, j = 1, . . . , Ni , are independent Poisson(µi ) random variables. Find the maximum likelihood estimates for µ1 and µ2 and find the generalized likelihood ratio test statistic for H0 : µ1 = µ2 .

Exercise 2.7.14. Yule's Q (cf. Exercise 2.7.6) is one of many measures of association that have been proposed for 2 × 2 tables. Agresti (1984, Chapter 9) has a substantial discussion of measures of association. It has been suggested that measures of association for 2 × 2 multinomial tables should depend solely on the conditional probabilities of being in the first column given the row, i.e., p11 /p1· and p21 /p2· , or, alternatively, on the conditional probabilities of being in the first row given the column, i.e., p11 /p·1 and p12 /p·2 . Moreover, it has been suggested that the measure of association should not depend on which set of conditional probabilities is used. Show that any measure of association f(p11 /p1· , p21 /p2· ) can be written as some function of the odds, g(p11 /p12 , p21 /p22 ). Show that if

f(p11 /p1· , p21 /p2· ) = f(p11 /p·1 , p12 /p·2 )

for any sets of probabilities, then g(x, y) = g(ax, ay) for any x, y, and a. Use this to conclude that any such measure of association is a function of the odds ratio.
3 Three-Dimensional Tables
Just as a multinomial sample can be classified by the levels of two factors, a multinomial sample can also be classified by the levels of three factors.
Example 3.0.1. Everitt (1977) considers a sample of 97 ten-year-old school children who were classified using three factors: classroom behavior, risk of home conditions, and adversity of school conditions. Classroom behavior was judged by teachers to be either nondeviant or deviant. Risk of home conditions either identifies the child as not at risk (N) or at risk (R). Adversity of school condition was judged as either low, medium, or high. The observations are denoted as nijk , i = 1, 2, j = 1, 2, k = 1, 2, 3. The three-dimensional table of nijk 's is

                                 Adversity of School (k)
                              Low          Medium         High
                            Risk (j)       Risk (j)      Risk (j)
Classroom Behavior (i)      N     R        N     R       N    R     Total
Nondeviant                 16     7       15    34       5    3       80
Deviant                     1     1        3     8       1    3       17
Total                      17     8       18    42       6    6       97
The totals at the right-hand margin are n1·· = 80 and n2·· = 17. The totals along the bottom margin are n·11 = 17, n·21 = 8, n·12 = 18, n·22 = 42, n·13 = 6, n·23 = 6, and n··· = 97.
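A hedged sketch of how such marginal totals can be obtained from the 2 × 2 × 3 array of counts (Python with numpy; purely illustrative, not part of the book):

import numpy as np

# n[i, j, k]: i = behavior (nondeviant, deviant), j = risk (N, R),
# k = adversity (low, medium, high)
n = np.array([[[16, 15, 5],
               [ 7, 34, 3]],
              [[ 1,  3, 1],
               [ 1,  8, 3]]])

print(n.sum(axis=(1, 2)))   # n_i.. : [80 17]
print(n.sum(axis=0))        # n_.jk : rows N, R by adversity level
print(n.sum())              # n_... : 97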
In general, a three-dimensional table of counts is denoted nijk , i = 1, . . . , I, j = 1, . . . , J, k = 1, . . . , K. Marginal totals are denoted

nij· = Σ_{k=1}^{K} nijk ,   ni·k = Σ_{j=1}^{J} nijk ,   n·jk = Σ_{i=1}^{I} nijk ,

ni·· = Σ_{j=1}^{J} Σ_{k=1}^{K} nijk = Σ_{j=1}^{J} nij· = Σ_{k=1}^{K} ni·k ,

n·j· = Σ_{i=1}^{I} Σ_{k=1}^{K} nijk ,   n··k = Σ_{i=1}^{I} Σ_{j=1}^{J} nijk ,

and

n··· = Σ_{ijk} nijk .
Similar notations are used for tables of probabilities pijk , tables of expected values mijk , and tables of estimates of the pijk 's and mijk 's. Note that the values nij· define a two-dimensional I × J marginal table. The values ni·k and n·jk also define marginal tables. Product-multinomial sampling is raised to a new level of complexity in three-dimensional tables. For example, we could have samples from I populations with each sample cross-classified into JK categories, or we could have samples from IJ (cross-classified) populations where each sample is classified into K categories. Section 2 of this chapter discusses independence and odds ratio models for three-dimensional tables under multinomial sampling. Section 3 examines the iterative proportional fitting algorithm for finding estimates of expected cell counts. Section 4 introduces log-linear models for three-dimensional tables. Section 5 considers the modifications necessary for dealing with product-multinomial sampling and comments on other sampling schemes. Section 6 introduces model selection criteria and Section 7 introduces tables with four or more dimensions. We begin with a discussion of Simpson's paradox and the need for tables with more than two factors.
3.1 Simpson's Paradox and the Need for Higher-Dimensional Tables
It really is necessary to deal with three-dimensional tables; accurate information cannot generally be obtained by examining each of the three simpler two-dimensional tables. In fact, the conclusions from two-dimensional marginal tables can be contradicted by the accurate three-dimensional information. In this section, we demonstrate and examine the problem via an example.
Example 3.1.1. Consider the outcome (success or failure) of two medical treatments classified by the sex of the patient. The data are given below.

                            Patient Sex
                      Male                  Female
Treatment      Success   Failure      Success   Failure
    1             60        20           40        80
    2            100        50           10        30
Considering only the males, we have a two-way table of treatment versus outcome. The estimated probability of success under treatment 1 is 60/80 = .75. For treatment 2, the estimated probability of success is 100/150 = .667. Thus, for males, treatment 1 appears to be more successful. Now consider the table of treatment versus outcome for females only. Under treatment 1, the estimated probability of success is 40/120 = .333. Under treatment 2, the estimated probability of success is 10/40 = .25. For women as for men, treatment 1 appears to be more successful. Now examine the marginal table of treatment versus outcome. This is obtained by collapsing (summing) over the sexes. The table is given below.
                   Outcome
Treatment    Success   Failure
    1           100       100
    2           110        80
The estimated probability of success for treatment 1 is 100/200 = .50, while the estimated probability of success for treatment 2 is 110/190 = .579. The marginal table indicates that treatment 2 is better than treatment 1, whereas we know that treatment 1 is better than treatment 2 for both males and females! This contradiction is Simpson’s paradox. Simpson’s paradox can occur because collapsing can lead to inappropriate weighting of the different populations. Treatment 1 was given to 80 males and 120 females, so the marginal table is indicating a success rate for treatment 1 that is a weighted average of the success rates for males and females with slightly more weight given to the females. Treatment 2 was given to 150 males and only 40 females, so the marginal success rate is a weighted average of the male and female success rates with most of the weight given to the male success rate. It is only a slight oversimplification to say that the marginal table is comparing a success rate for treatment 1 that is the mean of the male and female success rates, to a success rate for treatment 2 that is essentially the male success rate. Since the success rate for males is much higher than it is for females, the marginal table gives the illusion that treatment 2 is better.
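A minimal sketch of the arithmetic behind the paradox (Python; illustrative only):

# successes and totals by treatment within each sex
male   = {1: (60, 80),  2: (100, 150)}
female = {1: (40, 120), 2: (10, 40)}

for trt in (1, 2):
    s_m, n_m = male[trt]
    s_f, n_f = female[trt]
    print(trt,
          round(s_m / n_m, 3),                    # male success rate
          round(s_f / n_f, 3),                    # female success rate
          round((s_m + s_f) / (n_m + n_f), 3))    # marginal success rate
# Treatment 1 wins within each sex (.75 > .667 and .333 > .25),
# yet loses in the marginal table (.50 < .579).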
The moral of all this is that one cannot necessarily trust conclusions drawn from marginal tables. It is generally necessary to consider all the dimensions of a table. Situations in which marginal (collapsed) tables yield valid conclusions are discussed in Section 5.3.
3.2 Independence and Odds Ratio Models

For multinomial sampling and a two-dimensional table, there was only one model of primary interest: independence of rows and columns. With three-dimensional tables, there are at least eight interesting models. Half of these are very easy to imagine. If we refer to the three dimensions of the table as rows, columns, and layers, we can have (0) rows, columns, and layers all independent, (1) rows independent of columns and layers (but columns and layers not necessarily independent), (2) columns independent of rows and layers, and (3) layers independent of rows and columns. Three of the remaining four models involve conditional independence: (4) given any particular layer, rows and columns are independent, (5) given any column, rows and layers are independent, and (6) given any row, columns and layers are independent. The last of the eight models is that certain odds ratios are equal. Section 4 discusses these models in relation to log-linear models. We now examine the models in detail.

The essential part of the log-likelihood for multinomial sampling is ℓ(p) ≡ Σ_{i,j,k} nijk log(pijk ). For all of the (conditional) independence models, the MLEs can be obtained using Lemma 2.4.1. The trick is to break ℓ(p) into a sum of terms, each of which can be maximized separately. The discussion below emphasizes a more general approach to finding MLEs.

3.2.1 The Model of Complete Independence

To put it briefly, the model of complete independence is that everything (rows, columns, and layers) is independent of everything else, cf. Example 1.1.3. Technically, the model is

M(0) : pijk = pi·· p·j· p··k ,

where the superscript (0) is used to distinguish this model from the other models that will be considered. The MLE of pijk under this model is

p̂ijk^(0) = p̂i·· p̂·j· p̂··k = (ni·· /n··· )(n·j· /n··· )(n··k /n··· ) .
Since mijk = n··· pijk , the MLE of mijk is

m̂ijk^(0) = n··· p̂ijk^(0) = ni·· n·j· n··k /n···² .

This is another application of the general result, discussed in Section 2.4, that the MLE of a function of the parameters is just the function applied to the MLEs. The Pearson chi-square statistic for testing lack of fit of M(0) is

X² = Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (nijk − m̂ijk^(0) )² / m̂ijk^(0) .

The likelihood ratio test statistic is

G² = 2 Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} nijk log(nijk /m̂ijk^(0) ) .

An α level test is rejected if the test statistic is greater than χ²(1 − α, IJK − I − J − K + 2). As in Section 2.4, the maximum likelihood estimate of mijk without any restrictions is m̂ijk = nijk ; thus, nijk is used in the formula for G². Degrees of freedom for the tests given in this section will be discussed in Section 4.

Chapter 10 establishes that the MLEs for M(0) are characterized by

m̂ijk = n··· p̂i·· p̂·j· p̂··k = m̂i·· m̂·j· m̂··k /n···²

and the marginal constraints m̂i·· = ni·· , m̂·j· = n·j· , and m̂··k = n··k . In other words, any set of values m̂ijk that satisfy the marginal constraints and satisfy model M(0) must be the maximum likelihood estimates. The estimates m̂ijk^(0) given above are, under weak restrictions on the nijk 's, the unique values that satisfy both sets of conditions.

Example 3.2.1. In the longitudinal study mentioned in Example 2.2.1, out of 3182 people without cardiovascular disease, 2121 neither exercised regularly nor developed cardiovascular disease during the 4½-year study. We restrict our attention to these 2121 individuals. The subjects were cross-classified by three factors: Personality type (A,B), Cholesterol level (normal, high), and Diastolic Blood Pressure (normal, high). The data are

nijk
                               Diastolic Blood Pressure
Personality   Cholesterol       Normal       High
    A           Normal            716          79
    A           High              207          25
    B           Normal            819          67
    B           High              186          22
The fitted values assuming complete independence are

m̂ijk^(0)
                               Diastolic Blood Pressure
Personality   Cholesterol       Normal       High
    A           Normal           739.9       74.07
    A           High             193.7       19.39
    B           Normal           788.2       78.90
    B           High             206.3       20.65

Note that the m̂ijk 's satisfy the property that ni·· = m̂i·· , n·j· = m̂·j· , and n··k = m̂··k for all i, j, and k. For example, the Type A totals are n1·· = 716 + 79 + 207 + 25 = 1027 and m̂1·· = 739.9 + 74.07 + 193.7 + 19.39 = 1027.06. The difference is roundoff error. Pearson's chi-square is

X² = (716 − 739.9)²/739.9 + · · · + (22 − 20.65)²/20.65 = 8.730 .

The likelihood ratio chi-square is

G² = 2 [716 log(716/739.9) + · · · + 22 log(22/20.65)] = 8.723 .

The degrees of freedom for either chi-square test are

df = (2)(2)(2) − 2 − 2 − 2 + 2 = 4 .

Since χ²(.95, 4) = 9.49, an α = .05 level test will not reject the hypothesis of independence. In particular, the P value is .07. There is no clear evidence of any relationships among personality type, cholesterol level, and diastolic blood pressure level for these people who do not exercise regularly and do not have cardiovascular disease. Although the test statistics give no clear evidence that complete independence does not hold, similarly they give no great confidence that complete independence is a good model. Deviations from independence can be examined using the Pearson residuals

r̃ijk = (nijk − m̂ijk )/√m̂ijk .

The Pearson residuals for these data are

r̃ijk
                               Diastolic Blood Pressure
Personality   Cholesterol       Normal       High
    A           Normal          −0.879       0.573
    A           High             0.956       1.274
    B           Normal           1.097      −1.340
    B           High            −1.413       0.297
In particular, the residual for Type A, Normal, Normal is −.879 = (716 − 739.9)/√739.9. Relative to complete independence, high blood pressure and high cholesterol are overrepresented in Type A personalities (those showing signs of stress) and normal blood pressure and cholesterol are overrepresented in Type B personalities (relaxed individuals). These results agree well with conventional wisdom. Note also that for Type B personalities, individuals with only one high categorization are underrepresented. The patterns in the residuals are interesting, but remember that there is no clear evidence (at this point) for rejecting the hypothesis of complete independence.
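For readers who want to reproduce the fitted values, test statistics, and residuals of this example, here is a hedged sketch (Python with numpy; illustrative only, not the book's software).

import numpy as np

# n[i, j, k]: personality (A, B) by cholesterol (normal, high)
# by diastolic blood pressure (normal, high)
n = np.array([[[716, 79], [207, 25]],
              [[819, 67], [186, 22]]], dtype=float)
N = n.sum()

# MLEs under complete independence: m_ijk = n_i.. n_.j. n_..k / N^2
ni = n.sum(axis=(1, 2))
nj = n.sum(axis=(0, 2))
nk = n.sum(axis=(0, 1))
m = ni[:, None, None] * nj[None, :, None] * nk[None, None, :] / N**2

X2 = ((n - m) ** 2 / m).sum()
G2 = 2 * (n * np.log(n / m)).sum()
resid = (n - m) / np.sqrt(m)               # Pearson residuals
print(round(X2, 3), round(G2, 3))          # roughly 8.730 and 8.723
print(np.round(resid, 3))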
3.2.2 Models with One Factor Independent of the Other Two

With three factors, there are three ways in which one factor can be independent of the other two. For example, we can have rows independent of columns and layers, cf. Example 1.1.4. This model says nothing about the relationship between columns and layers. Columns and layers can either be independent or not independent. If they were not independent, typically we would be interested in examining how they differ from independence. Specifically, the three models are rows independent of columns and layers,

M(1) : pijk = pi·· p·jk ,

columns independent of rows and layers,

M(2) : pijk = p·j· pi·k ,

and layers independent of rows and columns,

M(3) : pijk = p··k pij· .

All three of these models include the model of complete independence M(0) as a special case. If M(0) is true, then all three of these are true. The analyses for all three models are similar; we will consider only M(1) in detail. Under M(1) , no distinction is drawn between columns and layers. In fact, this model is equivalent to independence in an I × (JK) two-dimensional table where the columns of the two-dimensional table consist of all combinations of the columns and layers of the three-dimensional table.

Example 3.2.2. Consider again the classroom behavior data of Example 3.0.1. The test of M(1) is simply a test of the independence of the two rows: nondeviant, deviant, and the six columns: Low-N, Low-R, Medium-N, Medium-R, High-N, High-R. From our results in Chapter 2, the MLE of pijk under M(1) is

p̂ijk^(1) = p̂i·· p̂·jk = (ni·· /n··· )(n·jk /n··· ) .
The MLE of mijk is

m̂ijk^(1) = n··· p̂ijk^(1) = ni·· n·jk /n··· .

The superscript (1) in p̂ijk^(1) and m̂ijk^(1) is used to indicate that these estimates are obtained assuming that M(1) is true. The Pearson chi-square test statistic for M(1) is

X² = Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (nijk − m̂ijk^(1) )² / m̂ijk^(1) .

The likelihood ratio test statistic is

G² = 2 Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} nijk log(nijk /m̂ijk^(1) ) .

These are compared to percentage points of a chi-square distribution with degrees of freedom

df = (I − 1)(JK − 1) = IJK − I − JK + 1 .

Similar to M(0) , the MLEs for model M(1) are any values m̂ijk that satisfy M(1) and the marginal constraints

m̂i·· = ni··   and   m̂·jk = n·jk .

Again, the values m̂ijk^(1) given above are the unique MLEs under mild restrictions on the nijk 's. It is interesting to note that choosing m̂ijk = nijk satisfies the marginal constraints but, except in the most bizarre cases, does not satisfy model M(1) . On the other hand, taking the estimated cell counts from the complete independence model m̂ijk = ni·· n·j· n··k /n···² satisfies M(1) but typically does not satisfy the marginal constraints.

Example 3.2.2, continued. The table of m̂ijk^(1) 's is

m̂ijk^(1)
                                  Adversity (k)
                            Low          Medium         High
Classroom Behavior (i)    N      R      N      R       N      R     m̂i··
Non.                    14.02   6.60  14.85  34.64    4.95   4.95     80
Dev.                     2.98   1.40   3.15   7.36    1.05   1.05     17
m̂·jk                    17      8     18     42       6      6       97
As displayed in the margins of the nijk and m̂ijk^(1) tables, the MLEs satisfy the conditions m̂i··^(1) = ni·· and m̂·jk^(1) = n·jk . The Pearson chi-square test statistic is

X² = 6.19 = (16 − 14.02)²/14.02 + · · · + (3 − 1.05)²/1.05 .

The likelihood ratio test statistic is

G² = 5.56 = 2 [16 log(16/14.02) + · · · + 3 log(3/1.05)] .

The degrees of freedom for the chi-square test are

df = (2 − 1)[(2)(3) − 1] = 5 .

The 95th percentile of a chi-square with 5 degrees of freedom is χ²(.95, 5) = 11.07. Both X² and G² are less than χ²(.95, 5), so an α = .05 level test provides no evidence against M(1) . In other words, we have no reason to doubt that classroom behavior is independent of risk and adversity. It is quite possible in this study that our primary interest would be in explaining classroom behavior in terms of risk and adversity. Unfortunately, classroom behavior seems to be independent of both of the variables with which we were trying to explain it.

On the other hand, examining the relationship between risk and adversity becomes very simple. If classroom behavior is independent of risk and adversity, we can study the marginal table of risk and adversity without worrying about Simpson's paradox. The marginal table of counts is

n·jk
                Adversity (k)
Risk (j)     Low   Medium   High    n·j·
N             17     18       6      41
R              8     42       6      56
n··k          25     60      12      97

The model of independence for this marginal table is

M : p·jk = p·j· p··k ,   j = 1, . . . , J, k = 1, . . . , K .

The expected counts for the marginal table under M are

m̂·jk = n··· p̂·j· p̂··k = n·j· n··k /n··· .

The table of estimated expected counts is
                Adversity (k)
Risk (j)     Low    Medium   High    m̂·j·
N           10.57    25.36    5.07     41
R           14.43    34.64    6.93     56
m̂··k        25       60      12       97

yielding

X² = 10.78   and   G² = 10.86

on df = (2 − 1)(3 − 1) = 2. Both X² and G² are significant at the α = .01 level. Either the residuals or the odds ratios can be used to explore the lack of independence. The table of residuals is

                Adversity (k)
Risk (j)     Low    Medium    High
N            1.98    −1.46    0.35
R           −1.69     1.25   −0.35

For highly adverse schools, the residuals are near zero, so independence seems to hold. For schools with low adversity (i.e., good schools), the not-at-risk students are overrepresented and at-risk students are underrepresented. For schools with medium adversity, the at-risk students are overrepresented and the not-at-risk students are underrepresented. (I wonder if the criteria for determining whether a student is at risk may have been applied differently to students in high-adversity schools.)

Using odds ratios, we see that the odds of being not at risk for low-adversity schools (17/8) are about five times greater than for medium-adversity schools (18/42). In particular, the odds ratio is

(17)(42)/[(8)(18)] = 4.96 .

The odds of being not at risk in a low-adversity school are only about twice as large as the odds of being not at risk in a high-adversity school [i.e., (17)(6)/(8)(6) = 2.125]. Finally, the odds of being not at risk in a medium-adversity school are only about half as large [18(6)/42(6) = .429] as the odds of being not at risk in a high-adversity school. Of course, the sample is small, so there is quite a bit of variability associated with these estimated odds ratios.

Before leaving this example, it is of interest to note a relationship between the two likelihood ratio test statistics that were considered. The models and statistics are

M(1) : pijk = pi·· p·jk ,   G² = 5.56 ,    df = 5
M : p·jk = p·j· p··k ,      G² = 10.86,    df = 2 .

Taken together, these models imply that M(0) : pijk = pi·· p·j· p··k holds. The likelihood ratio test statistic for M(0) with these data is G² = 16.42 with 7 degrees of freedom. As will be seen later, it is no accident that the test statistics G² satisfy 5.56 + 10.86 = 16.42 and that the degrees of freedom satisfy 5 + 2 = 7.

Exercise 3.1. Examine the residuals from fitting M(1) . Are any of them large enough to call in question the further analysis that was based on tentatively assuming that M(1) was true?
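A hedged sketch (Python with numpy; illustrative only) that checks the G² additivity noted just before Exercise 3.1, using the classroom behavior counts:

import numpy as np

# n[i, j, k]: behavior (nondeviant, deviant) by risk (N, R) by adversity
n = np.array([[[16, 15, 5], [7, 34, 3]],
              [[ 1,  3, 1], [1,  8, 3]]], dtype=float)
N = n.sum()

def G2(obs, fit):
    return 2 * (obs * np.log(obs / fit)).sum()

# M(1): behavior independent of (risk, adversity)
m1 = n.sum(axis=(1, 2))[:, None, None] * n.sum(axis=0)[None, :, :] / N

# Independence in the risk-by-adversity marginal table
njk = n.sum(axis=0)
mjk = njk.sum(axis=1, keepdims=True) * njk.sum(axis=0, keepdims=True) / N

# M(0): complete independence
m0 = (n.sum(axis=(1, 2))[:, None, None] * n.sum(axis=(0, 2))[None, :, None]
      * n.sum(axis=(0, 1))[None, None, :] / N**2)

print(round(G2(n, m1), 2), round(G2(njk, mjk), 2), round(G2(n, m0), 2))
# roughly 5.56, 10.86, and their sum 16.42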
3.2.3 Models of Conditional Independence

Given that one is at a particular level of some factor, the other two factors could be independent. For example, for any given category in the levels, the rows and the columns may be independent, cf. Example 1.1.5. By the definition of conditional probability, the probability of row i and column j given that the layer is k is

Pr(row = i, col = j | layer = k) = Pr(row = i, col = j, layer = k)/Pr(layer = k)
                                 = pijk /p··k .    (1)

Conditional independence of rows and columns for each layer means that for all i, j, and k

Pr(row = i, col = j | layer = k) = Pr(row = i | layer = k) Pr(col = j | layer = k)
                                 = (pi·k /p··k )(p·jk /p··k ) .    (2)

Assuming that every layer has a possibility of occurring (i.e., p··k > 0 for all k), then the model of conditional independence can be rewritten. Setting (1) and (2) equal and multiplying both sides by p··k gives the requirement

pijk = pi·k p·jk /p··k

for independence of rows and columns given layers.
The nature of the conditional independence between rows and columns may or may not depend on the particular layer. For example, if rows, columns, and layers are all independent, then M(0) holds and for any layer k

Pr(row = i, col = j | layer = k) = pi·· p·j· .

This does not depend on the layer. However, if rows are independent of columns and layers so that M(1) holds, then

Pr(row = i, col = j | layer = k) = pi·· (p·jk /p··k ) .

The column probabilities depend on the layer, but the row probabilities do not. Similarly, for M(2) , the columns are independent of rows and layers; thus,

Pr(row = i, col = j | layer = k) = (pi·k /p··k ) p·j· .

The row structure depends on layers, but the column structure does not. Of course, the most interesting case of rows and columns independent given layers is when none of these simpler cases apply.

If two factors are to be independent given the third factor, there are three ways in which the conditioning factor can be chosen. This leads to three models: rows and columns independent given layers

M(4) : pijk = pi·k p·jk /p··k ,

rows and layers independent given columns

M(5) : pijk = pij· p·jk /p·j· ,

and columns and layers independent given rows

M(6) : pijk = pij· pi·k /pi·· .

As in the previous subsection, the analyses for all three models are similar. We consider only M(4) in detail. The MLE for pijk is

p̂ijk^(4) = p̂i·k p̂·jk /p̂··k = (ni·k /n··· )(n·jk /n··· ) / (n··k /n··· ) = ni·k n·jk /(n··k n··· ) .

The MLE for mijk = n··· pijk is

m̂ijk^(4) = n··· p̂i·k p̂·jk /p̂··k = ni·k n·jk /n··k .
The MLEs are any numbers m̂ijk that satisfy model M(4) and the marginal relations m̂i·k = ni·k , m̂·jk = n·jk , and (redundantly) m̂··k = n··k . The test statistics are

X² = Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (nijk − m̂ijk^(4) )² / m̂ijk^(4)

and

G² = 2 Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} nijk log(nijk /m̂ijk^(4) ) .

The tests actually pool the K separate tests for independence of rows and columns which are computed for each individual layer. There are (I − 1)(J − 1) degrees of freedom for the test at each layer. The degrees of freedom for the pooled test is

df = (I − 1)(J − 1)K .
Example 3.2.3. Consider again the data of Example 3.2.1. In that example, we found that the P value for testing complete independence of personality type, cholesterol level, and diastolic blood pressure level was .067. Our residual analysis pointed out some interesting differences between personality types. We now examine the model M (6) that cholesterol level and diastolic blood pressure level are independent given personality type. The test is really a simultaneous test of whether independence holds in each of the tables given below.
n1jk (Personality Type A)
                   Diastolic Blood Pressure
Cholesterol         Normal       High
Normal                716          79
High                  207          25

n2jk (Personality Type B)
                   Diastolic Blood Pressure
Cholesterol         Normal       High
Normal                819          67
High                  186          22

Each table has (2 − 1)(2 − 1) = 1 degree of freedom, so the overall test has 2 degrees of freedom. The table of estimated cell counts under conditional independence is
m̂ijk^(6)
                               Diastolic Blood Pressure
Personality   Cholesterol       Normal       High
    A           Normal           714.5       80.51
    A           High             208.5       23.49
    B           Normal           813.9       72.08
    B           High             191.1       16.92

giving

X² = 2.188   and   G² = 2.062

and df = 2. This is a very good fit. Given the personality type, there seems to be no relationship between cholesterol level and diastolic blood pressure level. As in the previous examples, observe that the estimates m̂ijk^(6) satisfy the likelihood equations m̂i·k^(6) = ni·k and m̂ij·^(6) = nij· . (The likelihood equations are just the marginal constraints.) Note that the odds of being normal in either cholesterol or blood pressure is higher for Type B personalities than for Type A personalities.

If for each personality type, cholesterol and blood pressure are independent, we can examine the relationship between either personality and cholesterol or between personality and blood pressure from the appropriate marginal table, cf. Section 5.3. For example, to examine personality and cholesterol, the marginal table is

                  Cholesterol
Personality    Normal     High
    A            795       232
    B            886       208

The odds of having normal cholesterol for Type A personalities is 795/232 = 3.427. The odds of having normal cholesterol for Type B personalities is 886/208 = 4.260. The odds ratio is

p̂11· p̂22· /(p̂12· p̂21· ) = 795(208)/[232(886)] = .804 .

The odds of having a normal cholesterol level with personality Type A are only about 80% as large as the odds for personality Type B. A similar analysis shows that the odds of having a normal diastolic blood pressure level with personality Type A is 78.6% of the odds for personality Type B. Although this odds ratio of .786 is further from one than the odds ratio for cholesterol, it turns out to be less significant. The variabilities of these point estimates depend on the sample sizes in all the cells. The personality–blood pressure marginal table has some smaller cells than the
cholesterol–blood pressure table; thus, the personality–blood pressure odds ratio is subject to more variability. We will see in Example 3.4.1 that one could reasonably take personality and blood pressure to be independent, but personality and cholesterol are not independent.
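For readers who want to reproduce the conditional independence fit in Example 3.2.3, here is a hedged sketch (Python with numpy; illustrative only) using the characterization m̂ijk = nij· ni·k /ni·· .

import numpy as np

# n[i, j, k]: personality (A, B) by cholesterol (normal, high)
# by diastolic blood pressure (normal, high)
n = np.array([[[716, 79], [207, 25]],
              [[819, 67], [186, 22]]], dtype=float)

# M(6): cholesterol and blood pressure independent given personality
nij = n.sum(axis=2)        # n_ij.
nik = n.sum(axis=1)        # n_i.k
ni = n.sum(axis=(1, 2))    # n_i..
m = nij[:, :, None] * nik[:, None, :] / ni[:, None, None]

X2 = ((n - m) ** 2 / m).sum()
G2 = 2 * (n * np.log(n / m)).sum()
print(np.round(m, 2))                  # roughly 714.5, 80.51, 208.5, ...
print(round(X2, 3), round(G2, 3))      # roughly 2.188 and 2.062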
3.2.4 A Final Model for Three-Way Tables

The last of the standard models for three-way tables is due to Bartlett (1935) and must be stated in terms of odds ratios. To look at an odds ratio in a three-way table, one fixes a factor and looks at the odds ratio relating the other two factors. For example, we can fix layers and look at the odds ratio p11k pijk /p1jk pi1k . The last of our models is that these odds ratios are the same for every layer. In particular, the model is

M(7) : p111 pij1 /(pi11 p1j1 ) = p11k pijk /(pi1k p1jk )

for all i = 2, . . . , I, j = 2, . . . , J, and k = 2, . . . , K. M(7) is stated as if layers are fixed, but, in fact, it is easily shown that the model is unchanged if stated for rows fixed or columns fixed.

There are no simple formulae for p̂ijk^(7) or m̂ijk^(7) . Iterative computing methods (cf. Section 3) must be used to obtain the MLEs. It can be shown that the MLEs must satisfy the marginal constraints m̂ij· = nij· , m̂i·k = ni·k , m̂·jk = n·jk and also the model; i.e., we need m̂111 m̂ij1 /(m̂i11 m̂1j1 ) = m̂11k m̂ijk /(m̂i1k m̂1jk ) for i, j, k ≥ 2. Given the MLEs, the test statistics are computed as usual.

X² = Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (nijk − m̂ijk^(7) )² / m̂ijk^(7)

and

G² = 2 Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} nijk log(nijk /m̂ijk^(7) ) .

Although there is, at this point, no obvious reason for this figure, the degrees of freedom for the chi-square test is

df = (I − 1)(J − 1)(K − 1) .

Example 3.2.4. Fienberg (1980) and Kihlberg, Narragon and Campbell (1964) report data on severity of drivers' injuries in auto accidents along with the type of accident and whether or not the driver was ejected from the vehicle during the accident. We consider the results only for small cars. The data are listed below.

nijk
                                    Accident Type (k)
                             Collision               Rollover
                             Injury (j)              Injury (j)
Driver Ejected (i)     Not Severe   Severe      Not Severe   Severe
No                         350        150            60        112
Yes                         26         23            19         80
Since I = J = K = 2, the model becomes

M(7) : p111 p221 /(p121 p211 ) = p112 p222 /(p122 p212 ) .

Using the methods of Section 3, the MLEs of the mijk 's are found.

m̂ijk^(7)
                                    Accident Type (k)
                             Collision               Rollover
                             Injury (j)              Injury (j)
Driver Ejected (i)     Not Severe   Severe      Not Severe   Severe
No                        350.5      149.5         59.51      112.5
Yes                        25.51      23.49        19.49       79.51

The test statistics have (2 − 1)(2 − 1)(2 − 1) = 1 degree of freedom and are

X² = .04323   and   G² = .04334 .

The model of equality of odds ratios fits the data remarkably well. The reader can verify that none of the models involving independence or conditional independence fit the data. Another way to examine M(7) is to look at the estimated odds ratios and see if they are about equal. For this purpose, we use the unrestricted estimates of the pijk 's, i.e., p̂ijk = nijk /n··· . The estimated odds ratios are

p̂111 p̂221 /(p̂121 p̂211 ) = 350(23)/26(150) = 2.064

and

p̂112 p̂222 /(p̂122 p̂212 ) = 60(80)/19(112) = 2.256 .
These values are quite close. In summary, for both collisions and rollovers, the odds of a severe injury are about twice as large if the driver is ejected from the vehicle than if not. Equivalently, the odds of having a nonsevere injury are about twice as great if the driver is not ejected from the vehicle than if the driver is ejected. It should be noted that the odds of being severely injured in a rollover are consistently much higher than in a collision. What we have concluded in our analysis of M (7) is that the relative effect of the driver being ejected is the same for both types of accident and that being ejected substantially increases one’s chances of being severely injured. So you see, it really does pay to wear seat belts.
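Section 3.3 describes iterative proportional fitting, which is how fitted values like those above are actually obtained. The following hedged sketch of that algorithm for M(7) (Python with numpy; illustrative only, not the book's code) reproduces the table of m̂ijk^(7) 's above.

import numpy as np

# n[i, j, k]: ejected (no, yes) by injury (not severe, severe)
# by accident type (collision, rollover)
n = np.array([[[350,  60], [150, 112]],
              [[ 26,  19], [ 23,  80]]], dtype=float)

m = np.ones_like(n)               # starting values
for _ in range(50):               # cycle through the three fitted margins
    m *= n.sum(axis=2, keepdims=True) / m.sum(axis=2, keepdims=True)  # n_ij.
    m *= n.sum(axis=1, keepdims=True) / m.sum(axis=1, keepdims=True)  # n_i.k
    m *= n.sum(axis=0, keepdims=True) / m.sum(axis=0, keepdims=True)  # n_.jk

print(np.round(m, 2))   # roughly 350.5, 149.5, 59.51, 112.5, 25.51, ...
G2 = 2 * (n * np.log(n / m)).sum()
print(round(G2, 5))     # roughly 0.04334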
3.2 Independence and Odds Ratio Models
3.2.5
85
Odds Ratios and Independence Models
For a two-dimensional table, Proposition 2.3.3 established that the model of independence was equivalent to the model that all odds ratios equal one. Similarly, all eight of the models discussed for three-dimensional tables can be written in terms of odds ratios. So far, only our characterization of M (7) is in terms of odds ratios. Since models M (1) , M (2) , and M (3) are similar, we will examine only M (1) . Similarly, we will consider M (4) as representative of M (4) , M (5) , and M (6) . To begin, consider M (4) . If M (4) is true, then pijk = pi·k p·jk /p··k ; M (4) is the model that has rows and columns independent given layers. Let us consider an odds ratio with layers fixed. We can write a typical odds ratio as pijk pi j k /pij k pi jk . Assuming M (4) with positive pijk ’s, we get pijk pi j k pij k pi jk
= = =
(pi·k p·jk /p··k )(pi ·k p·j k /p··k ) (pi·k p·j k /p··k )(pi ·k p·jk /p··k ) pi·k p·jk pi ·k p·j k pi·k p·j k pi ·k p·jk 1.
Thus, for a fixed value of k, the odds ratio is one. Conversely, if pijk pi j k /pij k pi jk = 1 for all i, i , j, and j , then pijk p··k = pijk pi j k = pij k pi jk = pij k pi jk i ,j
i j
j
i
= pi·k p·jk , (4)
so M holds. It is also of interest to note that, under M (4) , odds ratios with the row (column) fixed are equal for all rows (columns). In particular, pijk pij k pijk pij k
= = =
(pi·k p·jk /p··k )(pi·k p·j k /p··k ) (pi·k p·jk /p··k )(pi·k p·j k /p··k ) pi·k p·jk pi·k p·j k pi·k p·jk pi·k p·j k p·jk p·j k . p·jk p·j k
(3)
Thus, the odds ratio does not depend on i and must be the same for each row. These facts imply that M (4) is a special case of M (7) . Perhaps the simplest way to examine odds ratios in relation to M (1) is to use the results obtained for M (4) . The following proposition allows this. Proposition 3.2.5. are true. Proof.
M (1) is true if and only if both M (4) and M (5)
If M (1) is true, then pijk = pi·· p·jk . Summing over j gives
86
3. Three-Dimensional Tables
pi·k = pi·· p··k ; thus, pi·· = pi·k /p··k . Substitution yields pijk = pi·· p·jk = pi·k p·jk /p··k ; thus, M (4) holds. A similar argument summing over k shows that M (5) holds. Conversely, if both M (4) and M (5) are true, then pijk = pi·k p·jk /p··k and pijk = pij· p·jk /p·j· . It follows that
pi·k pij· pijk = = p·jk p··k p·j·
and from the last equality, pi·k p·j· = p··k pij· . Summing over j gives pi·k p··· = p··k pi·· ; recalling that p··· = 1 and rearranging terms gives pi·· = pi·k /p··k . Substituting this into M (4) gives pijk
= pi·k p·jk /p··k = pi·· p·jk 2
and M (1) holds. It follows from Proposition 3.2.5 and the discussion of M model M (1) is equivalent to pijk pi j k /pij k pi jk = 1
(4)
that the
(layers fixed)
for all i, i , j, j , and k, and pijk pi jk /pijk pi jk = 1
(columns fixed)
for all i, i , k, k , and j. In addition, all odds ratios with rows fixed will be equal (but not necessarily equal to one). M (1) is thus a special case of M (7) . Similar arguments establish that if M (0) is true, then all odds ratios equal one regardless of whether rows, columns, or layers have been fixed. Finally, note that as in Chapter 2, odds ratios remain unchanged when the pijk ’s are all replaced with mijk ’s.
3.3 Iterative Computation of Estimates
3.3
87
Iterative Computation of Estimates
To fit the model M (7) that all odds ratios are equal, we need to com(7) pute the m ˆ ijk ’s. There are two standard algorithms for doing this: the Newton-Raphson algorithm and the iterative proportional fitting algorithm. Newton-Raphson amounts to doing a series of weighted regression analyses. It is commonly referred to as iteratively reweighted least squares. The Newton-Raphson algorithm is discussed in Section 10.5. Iterative proportional fitting was introduced by Deming and Stephan (1940) for purposes other than fitting models to discrete data, but the algorithm gives maximum likelihood estimates for the models discussed in the previous section and for the balanced ANOVA type log-linear models that will be discussed later. Meyer (1982) presents methods of transforming various other loglinear models so that iterative proportional fitting can be applied. In this section, we describe the method of iterative proportional fitting for (7) finding the m ˆ ijk ’s. The method can be easily extended to find estimates ˆ ijk ’s for more complicated higher-dimensional tables. Under M (7) , the m are characterized by the model itself and the fitted margins m ˆ ij· = nij· , m ˆ i·k = ni·k , and m ˆ ·jk = n·jk . The method is based on the fact that 1 = (nij· /m ˆ ij· ) = (ni·k /m ˆ i·k ) = (n·jk /m ˆ ·jk ) and, thus, m ˆ ijk
=
m ˆ ijk
=
and m ˆ ijk =
nij· m ˆ ijk , m ˆ ij· ni·k m ˆ ijk , m ˆ i·k n·jk m ˆ ijk . m ˆ ·jk
The iterative procedure begins with some initial guesses for the m ˆ ijk ’s, [0] [3t] ˆ ijk , say m ˆ ijk , and modifies the initial guess iteratively. Given estimates m the modifications are nij· [3t] [3t+1] m ˆ ijk = m ˆ , [3t] ijk m ˆ ij· ni·k [3t+2] [3t+1] = m ˆ , m ˆ ijk [3t+1] ijk m ˆ i·k and
[3(t+1)]
m ˆ ijk
=
n·jk [3t+2]
m ˆ ·jk
[3t+2]
m ˆ ijk
.
The initial guesses can be any positive numbers that satisfy M (7) . Typically, one takes [0] for all i, j, k . m ˆ ijk = 1
88
3. Three-Dimensional Tables
The iterations continue until the estimates stop changing; i.e., for all i, j, k, [3t] . [3t+1] . [3t+2] . [3(t+1)] . m ˆ ijk = m ˆ ijk = m ˆ ijk = m ˆ ijk
If convergence occurs to a set of values, say m ˆ ijk , then these must be the maximum likelihood estimates. To see this, we need to show two things: first, that m ˆ ij· = nij· , m ˆ i·k = ni·k , and m ˆ ·jk = n·jk , and second, that the m ˆ ijk ’s satisfy M (7) . Because of the nature of the iterative proportional fitting algorithm, at convergence we have nij· m ˆ ijk = m ˆ ijk , m ˆ ij· so nij· 1= m ˆ ij· and m ˆ ij· = nij· . Similarly, m ˆ i·k = ni·k and m ˆ ·jk = n·jk . To see that M (7) is satisfied, we must show that ˆ ij1 /m ˆ i11 m ˆ 1j1 = m ˆ 11k m ˆ ijk /m ˆ i1k m ˆ 1jk m ˆ 111 m for any values of i, j, and k greater than one. The key point here is that if [3t] m ˆ ijk satisfies M (7) , then the modifications also satisfy M (7) . Thus, if the initial values satisfy M (7) , the result of the iterative procedure also satisfies M (7) . [3t] Specifically, assume that the m ˆ ijk ’s satisfy M (7) . We will show that the [3t+1]
m ˆ ijk
’s satisfy M (7) . Since [3t+1]
m ˆ ijk we have [3t+1] [3t+1] ˆ ij1 m ˆ 111 m [3t+1] [3t+1] m ˆ i11 m ˆ 1j1
and [3t+1]
[3t+1]
m ˆ 11k m ˆ ijk [3t+1]
m ˆ i1k
[3t+1]
m ˆ 1jk
=
nij· [3t]
m ˆ ij·
[3t]
m ˆ ijk ,
[3t] [3t] [3t] [3t] n11· m ˆ 11· nij· m ˆ ij· m ˆ 111 m ˆ ij1 = [3t] [3t] [3t] [3t] m ˆ i11 m ˆ 1j1 ni1· m ˆ i1· n1j· m ˆ 1j·
[3t] [3t] [3t] [3t] n11· m ˆ 11· nij· m ˆ ij· m ˆ m ˆ ijk 11k = . [3t] [3t] [3t] [3t] m ˆ i1k m ˆ 1jk ni1· m ˆ i1· n1j· m ˆ 1j· [3t]
ˆ ijk ’s and the multipliers do not depend on Since M (7) is satisfied for the m [3t+1]
k, clearly M (7) is satisfied for the m ˆ ijk the
[3t+2] m ˆ ijk ’s
and
[3(t+1)] m ˆ ijk ’s
’s. Similar arguments show that
also satisfy M (7) , cf. Exercise 3.8.11.
3.4 Log-Linear Models for Three-Dimensional Tables
89
In fact, iterative proportional fitting can be used to fit any of the stan[0] ˆ ijk to satdard models. To fit, say M (6) , the iterative procedure chooses m [2t]
isfy M (6) and then modifies estimates m ˆ ijk using the marginal conditions ˆ i·k = ni·k . Specifically, m ˆ ij· = nij· and m [2t+1]
m ˆ ijk and
[2(t+1)]
m ˆ ijk
=
=
nij· [2t] m ˆ ij·
ni·k [2t+1] m ˆ i·k
[2t]
m ˆ ijk
[2t+1]
m ˆ ijk
.
The equations for modifying the m’s ˆ are determined by the marginal conditions. It is easily checked that if the sequence converges, then the m’s ˆ satisfy both the marginal conditions and M (6) . In fact, since there are closed form estimates for the m’s, ˆ it takes only one set of modifications to obtain the MLEs. It was mentioned earlier that the initial guesses are typically taken as [0]
m ˆ ijk = 1
for all
ijk .
The reason is that this initial guess satisfies all of the standard models. Thus, to use iterative proportional fitting for any model, one need only specify the marginal conditions and the algorithm automatically provides the estimates. Finally, since the method is based on multiplication, any initial guess [0] of zero will always remain zero. In other words, if for any ijk, m ˆ ijk = 0, [t]
then m ˆ ijk = 0 for all t. Thus, if it is known ahead of time that mijk = 0, [0]
iterative proportional fitting can handle the situation by taking m ˆ ijk = 0. On the other hand, if it is not known that mijk = 0, then we must choose [0] m ˆ ijk > 0 (cf. Section 8.1).
3.4
Log-Linear Models for Three-Dimensional Tables
In a three-dimensional table for continuous data, the basic model is a threeway analysis of variance model with all interactions, i.e., yijk = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) + u23(jk) + u123(ijk) + eijk . For tables of counts, the same form model is used for the log(mijk )’s: A saturated model is written log mijk = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) + u23(jk) + u123(ijk) .
90
3. Three-Dimensional Tables
We will be primarily interested in eight reduced versions of this model. Each of the eight reduced models corresponds to one of the eight independence – odds ratio models for three-dimensional tables. In particular, for tables of positive probabilities, an odds ratio model is true if and only if the corresponding log-linear model is valid. The eight submodels are log(mijk ) = u + u1(i) + u2(j) + u3(k) ,
(0)
log(mijk )
= u + u1(i) + u2(j) + u3(k) + u23(jk) ,
(1)
log(mijk )
= u + u1(i) + u2(j) + u3(k) + u13(ik) ,
(2)
log(mijk )
= u + u1(i) + u2(j) + u3(k) + u12(ij) ,
(3)
= u + u1(i) + u2(j) + u3(k) + u13(ik) + u23(jk) , = u + u1(i) + u2(j) + u3(k) + u12(ij) + u23(jk) ,
(4) (5)
= u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) ,
(6)
log(mijk ) log(mijk ) log(mijk ) and
log(mijk ) = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) + u23(jk) .
(7)
For i = 0, . . . , 7, the log-linear model (i) holds if and only if M (i) holds. One way to see this is to go through a series of arguments similar to those used in Section 2.4; however, the equivalence can most easily be seen by examining odds ratios. For example, model (7) holds if and only if the u123(ijk) terms can be dropped from the full model. As in standard analysis of variance, the three-factor interaction terms can be dropped if and only if any set of (I −1)(J −1)(K −1) linearly independent three-factor interaction contrasts are all zero. In general, a three-factor interaction contrast is J K I
qijk u123(ijk)
i=1 j=1 k=1
or, equivalently,
I J K
qijk log(mijk )
i=1 j=1 k=1
where qij· = qi·k = q·jk = 0 for all i, j, k. The condition in M (7) is that m111 mij1 m11k mijk = mi11 m1j1 mi1k m1jk for all i, j, and k strictly greater than 1. Taking logs of both sides gives log(m111 ) − log(mi11 ) − log(m1j1 ) + log(mij1 ) = log(m11k ) − log(mi1k ) − log(m1jk ) + log(mijk ).
3.4 Log-Linear Models for Three-Dimensional Tables
91
Rearranging terms, we see that M (7) is equivalent to log(m111 ) − log(mi11 ) − log(m1j1 ) + log(mij1 ) − log(m11k ) + log(mi1k ) + log(m1jk ) − log(mijk ) = 0 for each of the (I − 1)(J − 1)(K − 1) possible choices of i, j, and k greater than 1. All of these are three-factor interaction contrasts. Since these contrasts are linearly independent, M (7) holds if and only if the three-factor interaction terms can be dropped from the model, hence, if and only if model (7) holds. Now consider M (4) , that rows and columns are independent given layers. In terms of odds ratios, all odds ratios with any one factor fixed are equal, and all odds ratios with layers fixed equal one. To show that M (4) is equivalent to model (4), we need to show that M (4) is equivalent to having no three-factor interaction and no row-column interaction. As above, the fact that all odds ratios are equal is equivalent to having no three-factor interaction. We need to establish that all odds ratios with layers fixed equal one if and only if there is no row-column interaction. Under M (4) , for all k, mijk mi j k /mij k mi jk = 1 or, taking logs, µijk − µij k − µi jk + µi j k = 0 where µijk ≡ log mijk . Recall that a contrast in the row-column interactions is an interaction contrast in the µ ¯ij· ’s. Averaging the odds ratio contrasts over k gives the row-column interaction contrast ¯ij · − µ ¯i j· + µ ¯ i j · = 0 . µ ¯ij· − µ If we take i = 1 and j = 1, we have (I − 1)(J − 1) linearly independent contrasts in the row-column interaction equal to 0; the u12(ij) terms can be dropped from the full model. Conversely, if there is no three-factor interaction and no row-column interaction, then the contrasts µijk −µij k − µi jk + µi j k are all equal and µ ¯ij· − µ ¯ij · − µ ¯i j· + µ ¯ij· = 0. Thus, all odds ratios are equal and those with layer fixed equal one. Similarly, M (1) is true if and only if all odds ratios are equal and those with either columns or layers fixed equal one. All odds ratios equal is equivalent to no three-factor interaction; all odds ratios with layers fixed equal to one is equivalent to no row-column interaction (no u12(ij) terms); and all odds ratios with columns fixed equal to one is equivalent to no row-layer interaction (no u13(ik) terms). Nearly all of models (0)-(7) are grossly overparametrized. For example, in model (1), the terms u, u2(j) , and u3(k) are all totally redundant. The parameters u1(i) and u23(jk) are sufficient to explain everything. The u, u2(j) , and u3(k) terms can take any values, yet by choosing the u1(i) ’s and u23(jk) ’s appropriately, model (1) holds. Rewriting the models in a less overparametrized fashion leads to a very convenient shorthand notation for the models
92
3. Three-Dimensional Tables
(0) (1) (2) (3) (4) (5) (6) (7)
MODEL log(mijk ) = u1(i) + u2(j) + u3(k) log(mijk ) = u1(i) + u23(jk) log(mijk ) = u2(j) + u13(ik) log(mijk ) = u3(k) + u12(ij) log(mijk ) = u13(ik) + u23(jk) log(mijk ) = u12(ij) + u23(jk) log(mijk ) = u12(ij) + u13(ik) log(mijk ) = u12(ij) + u13(ik) + u23(jk)
SHORTHAND [1][2][3] [1][23] [2][13] [3][12] [13][23] [12][23] [12]][13] [12][13][23]
In addition, the unrestricted (saturated) model, log(mijk ) = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) + u23(jk) + u123(ijk) can be rewritten log(mijk ) = u123(ijk) and abbreviated as [123]. The shorthand can also be used to remember the conditional independence interpretations of the models. For example, [1][2][3] has everything in different brackets so everything is independent. In [1][23], the rows ([1]) are in a different bracket from columns and layers ([23]); thus rows are independent of columns and layers. In [13][23], rows 1 and columns 2 are in different brackets but layers 3 is in both brackets. Thus, given layers, rows and columns are independent. No such separation of factors works for [12][13][23], and there is no interpretation in terms of independence for the corresponding model. The shorthand identifies both the model and the margins that must be fitted to obtain MLEs. Thus, the shorthand provides all the information necessary for fitting the models using iterative proportional fitting (or Newton-Raphson). For example, [1][23] requires that the margins m ˆ i·· = ni·· and m ˆ ·jk = n·jk be fitted and the model [12][23] requires that m ˆ ij· = nij· and m ˆ ·jk = n·jk be fitted. As discussed in Chapter 10, under mild restrictions, any values m ˆ ijk that satisfy the fitted margins and the log-linear model are the MLEs. In particular, for model (1) the unique MLEs are (1) m ˆ ijk = ni·· n·jk /n··· . These satisfy the marginal conditions and, because (1) log m ˆ ijk = log(ni·· ) + log(n·jk /n··· ) = u ˆ1(i) + u ˆ23(jk) , they satisfy the log-linear model (1).
3.4.1
Estimation
Estimation of the expected cell counts mijk has already been considered. Given the m ˆ ijk ’s, estimation of the model parameters can proceed in a manner similar to analysis of variance. Consider a standard ANOVA model, say yijk = ξ + αi + βj + γk + (αβ)ij + (αγ)ik + eijk
3.4 Log-Linear Models for Three-Dimensional Tables
93
with i = 1, . . . , I, j = 1, . . . , J, k = 1, . . . , K, and the eijk ’s independent N (0, σ 2 ). A contrast in, for example, the (αβ) interaction is determined by numbers qij i = 1, . . . , I, j = 1, . . . , J, that satisfy qi· = q·j = 0. The contrast is qij (αβ)ij . ij
The maximum likelihood estimate is qij y¯ij· ij
and
Var
qij y¯ij· =
ij
σ2 2 q . K ij ij
The log-linear model log(mijk ) = ξ + αi + βj + γk + (αβ)ij + (αγ)ik can be rewritten as µijk = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) , where µijk ≡ log(mijk ) . Consider an interaction contrast qij u12(ij) ij
which is equivalent to
qij µ ¯ij· .
ij
ˆ ijk , so the MLE of µijk is µ ˆijk = log(m ˆ ijk ). Write The MLE of mijk is m ˆijk = log(m ˆ ijk ) . wijk ≡ µ Then the estimated contrast is
qij w ¯ij· .
ij
In general, the MLE of any function of the µijk ’s is just the same function applied to the µ ˆijk ’s. In particular, techniques from analysis of variance, when applied to the µ ˆijk ’s, give the estimates for contrasts in the corresponding log-linear models. In other words, whatever you would do to the
94
3. Three-Dimensional Tables
yijk ’s in ANOVA to estimate a parameter, apply the same method to the µ ˆijk ’s to estimate the corresponding parameter in a log-linear model. Unfortunately, computation of asymptotic variances is not straightforward. It requires the use of matrices and, even for contrasts, is similar in difficulty to finding the variance of a linear combination of regression coefficient estimates. Estimation is considered in detail in Section 10.2. In the author’s opinion, the most interesting aspects of estimation are ˆ ijk ’s, those directly related to the mijk ’s, odds, and odds ratios. Given the m estimates of odds and odds ratios are easy to obtain. Many examples of this have already been given. Again, the more difficult aspect of estimation is in obtaining asymptotic standard errors so that formal inferential procedures can be used.
3.4.2
Testing Models
In regression analysis it is well known that one can test a model against a larger model to see whether the smaller model is an inadequate explanation of the data. This technique is also used in analysis of variance but often it is not discussed explicitly because in balanced ANOVA it is possible to skirt the issue. For example, in a balanced ANOVA with two factors A and B and no interaction, the test for main effects in A does not depend on whether the main effects for B are included in the model. The technique of testing models against larger models is fundamental in log-linear model analysis. The sense in which one model is larger than another is illustrated below. All of the tests discussed in Section 2 can be viewed as testing models (r) ˆ ijk and against the saturated model. The test of M (r) was based on m ˆ ijk , where m ˆ ijk is the the nijk ’s. The nijk ’s are used because nijk = m unrestricted MLE of mijk . The unrestricted MLE of mijk is obtained by using a model that puts no restrictions on the mijk ’s, namely, the saturated model log(mijk ) = u+u1(i) +u2(j) +u3(k) +u12(ij) +u13(ik) +u23(jk) +u123(ijk) . (8) Again, this model is grossly overparametrized; an equivalent model is log(mijk ) = u123(ijk) . A saturated model has at least one parameter for every cell in the table, so the model always fits the data perfectly. More generally, one can test any model against a strictly larger model. For instance, model (1) log mijk = u + u1(i) + u2(j) + u3(k) + u23(jk) can be tested against model (4) log mijk = u + u1(i) + u2(j) + u3(k) + u13(ik) + u23(jk) ,
3.4 Log-Linear Models for Three-Dimensional Tables
95
because model (4) contains all of the terms in model (1) plus additional terms, the u13(ik) ’s. In other words, model (1) is a special case of model (4). The test is simply a test of whether the u13 ’s are needed in model (4) (or equivalently a test of M (1) versus M (4) ). (1) To test [1][23] (model (1)) against [13][23] (model (4)), we use the m ˆ ijk ’s, (4)
the m ˆ ijk ’s, and the Pearson or likelihood ratio chi-squares. The Pearson chi-square is 2 (4) (1) I J K ˆ ijk m ˆ ijk − m . X2 = (1) m ˆ ijk i=1 j=1 k=1 The likelihood ratio chi-square is G2 = 2
I J K
(4) (4) (1) m ˆ ijk log m ˆ ijk /m ˆ ijk .
i=1 j=1 k=1
Since this is a test of no row-layer interactions, the degrees of freedom are the degrees of freedom for row-layer interactions, i.e., (I −1)(K −1), exactly as in analysis of variance. Similarly, model (1): [1][23] can be tested against model (5): [12][23] because model (5) contains model (1). On the other hand, [1][23] cannot be tested against [12][13] because [1][23] contains u23(jk) terms, but [12][13] does not contain u23(jk) terms; thus, [12][13] is not strictly larger than [1][23]. In this case, we say that [1][23] and [12][13] are not comparable. To perform tests, we need to be able to identify the degrees of freedom associated with each model. For standard analysis of variance type models, the degrees of freedom for a model are just the sum of the degrees of freedom for each term in the model. The degrees of freedom for terms are the same as in standard analysis of variance. Term u u1 u2 u3 u12 u13 u23 u123
Degrees of Freedom 1 I −1 J −1 K −1 (I − 1)(J − 1) (I − 1)(K − 1) (J − 1)(K − 1) (I − 1)(J − 1)(K − 1)
The degrees of freedom for testing [1][23] versus [13][23] are the degrees of freedom for [13][23] minus the degrees of freedom for [1][23]. Adding up the degrees of freedom for individual u terms, the degrees of freedom for [13][23] are 1+(I −1)+(J −1)+(K −1)+(I −1)(K −1)+(J −1)(K −1). The degrees
96
3. Three-Dimensional Tables
of freedom for [1][23] are I + (I − 1) + (J − 1) + (K − 1) + (J − 1)(K − 1). The degrees of freedom for the test are the difference in the model degrees of freedom, which is (I − 1)(K − 1). As mentioned before, this is just the degrees of freedom for the terms that are in [13][23] but not in [1][23], i.e., the u13(ik) ’s. In general, to test model (r) against model (s), where model (s) is strictly larger than model (r), 2 (s) (r) J K I ˆ ijk m ˆ ijk − m X2 = (r) m ˆ ijk i=1 j=1 k=1 and G2 = 2
I J K
(s) (s) (r) m ˆ ijk log m ˆ ijk /m ˆ ijk .
(9)
i=1 j=1 k=1
These values can be compared to a chi-square distribution. The degrees of freedom for the chi-square is the difference in the degrees of freedom for models (s) and (r). One of the advantages of using G2 instead of X 2 is that it simplifies the process of testing models against each other. Any of the usual tests of models can be obtained easily from the eight tests in Section 2. In fact, this is the standard way of performing tests on models. The tests in Section 2 are all tests of a reduced log-linear model against the saturated model (8). (Note that the saturated model is strictly larger than any of the other eight models.) Example 3.4.1. In Example 3.2.2 on classroom behavior, it was remarked that there were relationships between various likelihood ratio test statistics. Specifically, the following results were given: Model M : [1][2][3] M (1) : [1][23] p·jk = p·j· p··k (0)
G2 16.42 5.56 10.86
df 7 5 2
The test statistics for all the models are for testing against the saturated model. (In the third case, it is the saturated model for the two-dimensional table.) The model [1][23] includes u23 terms in addition to the terms in [1][2][3]. Because the model [1][23] is strictly larger than [1][2][3], a test for the adequacy of the smaller model can be performed. Rather than using equation (9) directly, the test of the adequacy of the smaller log-linear model can be obtained by subtraction from the saturated model test statistics given above. Specifically, for testing [1][2][3] versus [1][23], G2 = 16.42 − 5.56 = 10.86
3.4 Log-Linear Models for Three-Dimensional Tables
97
with 7 − 5 = 2 degrees of freedom. Not only can this be viewed as a test of the log-linear models but also as a test of the independence models, i.e., M (0) versus M (1) . Moreover, it is precisely the test given in Example 3.2.2 for M : p·jk = p·j· p··k . (0)
Exercise 3.2. Using the data of Example 3.2.2, compute the m ˆ ijk ’s and 2 use (9) to verify that G = 10.86 for testing [1][2][3] versus [1][23]. We now develop these results in general. Consider testing each of models (r) and (s) against the saturated model. Recalling that for the saturated model m ˆ ijk = nijk , we get likelihood ratio test statistics of (r) nijk log nijk /m ˆ ijk G2 (r vs. 8) = 2 ijk
and G2 (s vs. 8) = 2
(s) nijk log nijk /m ˆ ijk .
ijk
As shown at the end of Section 10.1, maximum likelihood estimates for log-linear models satisfy (s) (s) (r) m ˆ ijk log m ˆ ijk /m ˆ ijk G2 (r vs. s) = 2 ijk
=
2
(s) (r) nijk log m ˆ ijk /m ˆ ijk .
ijk
Given this property, it is a simple matter to check that G2 (r vs. s) = G2 (r vs. 8) − G2 (s vs. 8) . Moreover, the degrees of freedom for the tests satisfy df (r vs. s) = df (r vs. 8) − df (s vs. 8) .
(10)
To see (10), note that (a) the degrees of freedom for the saturated model (8) are IJK, (b) df (r vs. s) = df (model (s)) − df (model (r)), (c) df (r vs. 8) = IJK − df (model (r)), and (d) df (s vs. 8) = IJK − df (model (s)). Substitution into (10) gives the correct result. The methods of obtaining G2 (r vs. s) and df (r vs. s) from G2 ’s and df ’s for testing against saturated models are basic to log-linear model practice. Typically, computer programs only provide G2 ’s and df ’s for testing against saturated models, so reduced model tests must be constructed using this method. In fact, this approach to testing (r) versus (s) using G2 ’s for the saturated model is not restricted to G2 ’s for the saturated model. It can be used with
98
3. Three-Dimensional Tables
any model (t) that is larger than both (r) and (s). The use of the saturated model is a convenience because it is strictly larger than all other models. Example 3.4.2. Once again, we consider the data on personality (1), cholesterol (2), and diastolic blood pressure (3) of Example 3.2.3. In the table below are given the degrees of freedom, values of X 2 and G2 , and the P value associated with G2 for testing all eight of the standard models against the saturated model. Model [12][13][23] [12][13] [12][23] [13][23] [1][23] [2][13] [3][12] [1][2][3]
df 1 2 2 2 3 3 3 4
X2 0.617 2.188 2.985 4.566 7.102 6.189 4.543 8.730
G2 0.613 2.062 2.980 4.563 7.101 6.184 4.601 8.723
P .434 .358 .224 .100 .067 .102 .207 .067
Using a criterion of α = .05, all of these models fit the data; however, consider testing [1][2][3] against [12][13]. The test statistic is G2 = 8.723 − 2.062 = 6.661 with df = 4 − 2 = 2 . Because χ2 (.95, 2) = 5.99 , the model [12][13] fits significantly better than [1][2][3]. In other words, the model with cholesterol level and diastolic blood pressure level independent given personality type fits significantly better than the model of complete independence. The reader can also verify that the models [3][12] and [12][13][23] also fit significantly better than [1][2][3]. (The model [12][23] is almost significantly better than [1][2][3].) We are left with a sequence of hierarchical models [3][12], [12][13], and [12][13][23] that all fit better than complete independence. Testing [3][12] against [12][13] gives G2 df χ2 (.95, 1)
= 4.601 − 2.062 = 2.539 , = 3 − 2 = 1, =
3.84 ,
so there seems to be no reason to take the larger model. Similarly, testing [3][12] against [12][13][23] gives G2 df
= 4.601 − 0.613 = 3.988 = 3−1=2
χ2 (.95, 2) =
5.99 ,
3.5 Product-Multinomial and Other Sampling Plans
99
so again, the model [3][12] seems adequate. The model that posits blood pressure being independent of personality type and cholesterol is the smallest model that adequately fits the data. (It is interesting to note that based on Akaike’s information criterion, cf. Section 6, model [12][13] is the best model.) To complete the analysis, we need to examine the nature of the relationship between personality and cholesterol. This was done in Example 3.2.3. That analysis remains valid.
3.5
Product-Multinomial and Other Sampling Plans
In this section, we consider the implications of product-multinomial sampling and give a brief discussion of the effect of complex sampling plans involving stratified sampling and cluster sampling. In addition, the use of conditional distributions as a basis for statistical inference is mentioned. Recall the data from Example 2.1.1 on opinions about legalized abortion.
Female Male Total
Support 309 319 628
Do Not Support 191 281 472
Total 500 600 1100
This is product-multinomial sampling. A sample of 500 females was taken. An independent sample of 600 males was also taken. The results were combined into a 2 × 2 table. We consider two extensions of these data to illustrate product-multinomial sampling in three-dimensional tables. Example 3.5.1. Suppose that each population is further classified according to political affiliation. We might then get the table Sex (i) Female
Male
Party (j) Republican Democrat Independent Total Republican Democrat Independent Total
Opinion (k) Support Do Not Support 79 40 132 71 98 80 309 191 65 141 113 319
94 95 92 281
Total 119 203 178 500 159 236 205 600
100
3. Three-Dimensional Tables
The totals for females and males are fixed. We know the female total is 500, so we must expect the female total to be 500. This means that m1·· = n1·· = 500 . Similarly, for male totals, m2·· = n2·· = 600 . More briefly, we write simply mi·· = ni·· , i = 1, 2. Any model that we fit must accommodate these facts. In other words, our estimates must satisfy the constraints m ˆ i·· = ni·· ,
(1)
i = 1, 2. Fortunately, the estimates for all of the ANOVA type models that we have discussed satisfy this condition. Any model that includes the u1(i) terms (or their equivalents) will satisfy (1). We now consider a slightly more complex sampling scheme. Example 3.5.2. Consider a three-factor table based on sex, socioeconomic status, and opinion about legalized abortion. Socioeconomic status has two categories: low and not low. The table of counts is Sex (i) Female
Male
Status (j) Low Not Low Total Low Not Low Total
Opinion (k) Support Do Not Support 171 79 138 112 309 191 152 167 319
148 133 281
Total 250 250 500 300 300 600
In this table, four independent samples have been incorporated into the table. The samples are (1) a sample of 250 low-status females, (2) a sample of 250 females not of low status, (3) a sample of 300 low status males, and (4) a sample of 300 males not of low status. The sampling design has fixed the sex-status marginal totals, so the expected sex-status totals equal the observed totals, i.e., mij· = nij· . Any model that estimates expected cell counts must also incorporate the condition that m ˆ ij· = nij·
3.5 Product-Multinomial and Other Sampling Plans
101
for all i and j. In particular, any model that includes the u12(ij) terms will have these margins fixed. If we restrict attention to models that include the u12(ij) terms, we do not have to concern ourselves further with the product-multinomial nature of the sampling design. The restriction that u12(ij) terms must be in the model reduces the possible number of models. The possible models are listed below. Possible Models with mij· Fixed by the Sampling Design [123] [12][13][23] [12][13] [12][23] [12][3]
Finally, these ideas extend easily to higher-dimensional tables. Suppose we have a four-dimensional table with indices h, i, j, k. If the sampling design fixes the margins mh·jk = nh·jk , then we restrict attention to log-linear models that include the u134(hjk) terms. If the sampling design fixes the margins m·i·k = n·i·k , then we consider only models with u24(ik) terms. Note that if the model includes, say, the u234(ijk) terms, then the u24(ik) terms are implicitly in the model. With the u234(ijk) terms in the model, the u24(ik) terms are redundant and it is irrelevant whether the u24(ik) ’s are explicitly stated as part of the model or not. Examples 3.5.1 and 3.5.2 illustrate the two primary sampling schemes for response factors that were discussed in Section 2.3. In both examples, Opinion can be viewed as a response factor. In Example 3.5.2, both Sex and Status are explanatory factors. For every combination of the levels of the explanatory factors, there is an independent multinomial sample. The categories for each multinomial are the levels of the response factor Opinion. This is the first of the sampling schemes discussed in Section 2.3. In Example 3.5.1, only the two levels of the explanatory factor Sex are used to define the independent multinomial populations. The categories of the multinomials are defined by all combinations of the levels of the response factor Opinion and the levels of the explanatory factor Party. This is the generalized sampling scheme discussed in Section 2.3. If Opinion is regarded as a response, it is not unusual to condition on all of the explanatory factors, i.e., Sex and Party, in the analysis. Thus, the data may be treated as if they were product-multinomial with an independent multinomial for each
102
3. Three-Dimensional Tables
combination of the explanatory factors. These issues are discussed again in Chapter 4.
3.5.1
Other Sampling Models
As mentioned in Section 1.5, the other commonly used sampling scheme for log-linear models is Poisson sampling. In Poisson sampling, an independent Poisson random variable is observed for each cell in the table. It is easily seen that Poisson sampling leads to the same methods of analysis that are used for multinomial sampling, cf. Chapter 12. Although Poisson, product-multinomial, and multinomial sampling are the only sampling models considered in this book, it should not be concluded that these are the only sampling schemes used to generate and analyze categorical data. The hypergeometric distribution is often taken as the appropriate sampling model. Also, most large sample surveys are not conducted so as to generate multinomial or Poisson data. Of course, these alternative sampling models may require substantial changes in the statistical analysis. Hypergeometric sampling arises quite naturally in discrete data problems. For example, 2×2 tables in which both the row totals and the column totals are fixed can be generated by hypergeometric sampling. Hypergeometric sampling is easily generalized to I × J tables. The hypergeometric distribution is also appropriate if one conditions on the row and column totals. The reason for conditioning on the row and column totals is that they are sufficient statistics under the model of independence. Tests for model adequacy based on the conditional distribution, given the sufficient statistics of the model, play a major role in Statistics generally and have a particularly important history in the analysis of categorical data. Fisher’s exact conditional test for 2 × 2 tables, cf. Exercise 2.7.5 or Plackett (1981), may be the most famous single methodology in categorical data analysis. A key reason for using conditional tests is that they are appropriate even for small samples. The main problem with conditional tests is that for log-linear models, they are generally difficult to compute. McCullagh (1986) and Hirji, Mehta, and Patel (1987) use alternative approaches to examine conditional tests. McCullagh concentrates on the use of asymptotic conditional distributions with the idea that conditional asymptotics are more appropriate for small and moderate sample sizes than unconditional asymptotics. Hirji et al. enumerate all of the tables that have the same values for the sufficient statistics. This work also applies to logistic regression. While enumeration is a very demanding computational task, with modern algorithms and computing equipment it has become a realistic alternative to the methods discussed here. Haberman (1974a, p. 14-33) details conditional inference for categorical data. Balmer (1988), Bedrick and Hill (1990), Mehta and Patel (1980, 1983), Mehta, Patel, and Gray (1985), Mehta, Patel, and Tsiatis (1984), and Pagano and Taylor-Halvorson
3.5 Product-Multinomial and Other Sampling Plans
103
(1981) all address issues of conditional inference and table enumeration. Recent reviews of these methods have been given by Agresti (1992) and Mehta (1994). The computer programs StatXact and/or LogXact perform the necessary computations. Davison (1988) uses saddlepoint methods to approximate conditional distributions. In some cases, random samples of the tables can be used rather than enumerating all of the tables. Kreiner (1987) discusses model selection using conditional tests and, specifically, the use of random samples from the conditional distribution. The generation of such random samples is based on the work of Agresti, Wackerly and Boyett (1979), Boyett (1979), and Patefield (1981). Berkson (1978) and Kempthorne (1979) give alternative views of Fisher’s exact test. For a general discussion of conditional tests see Lehmann (1986). Large sample surveys typically involve the use of stratification and cluster sampling, cf. Kish (1965). Multinomial sampling corresponds to simple random sampling with replacement. Product-multinomial sampling involves independent samples on a number of subpopulations. This is just stratified sampling. As we have seen, if strata are included as a factor in the table, then stratified sampling causes no problems in a log-linear model analysis of the data. The difficulty with stratified sampling is that often the individual strata are not of interest in the analysis. The desired conclusions are for the population as a whole and must be arrived at by weighting results from the separate strata. In sampling theory, the point of selecting strata is to reduce the variability of the overall results. If this is accomplished and if data from a stratified sample are analyzed as though they are multinomial, the variability is overestimated and results appear to be less significant than they actually are. The incorporation of cluster sampling is fundamentally more difficult to deal with. The whole point of cluster sampling is that the observations within a cluster are not independent. Typically, they display a positive correlation. All of our standard sampling plans assume independence between individual observations, so the standard plans are clearly inappropriate for cluster sampling. Inferences based on independence will underestimate the variability of data with a positive correlation. Thus, analyzing cluster sampling data as if they were multinomial will typically overstate the significance of results. In a complex survey involving both stratification and cluster sampling, the tendencies to understate significance and to overstate significance will, to some extent, offset each other. While this is a positive sign for the standard multinomial analysis, it by no means ensures that any particular set of survey data can be analyzed accurately when the complex sampling structure is ignored. In all likelihood, one or the other tendency to misstate the variability will dominate. Any serious analysis of complex survey data must involve an evaluation of the effect of the survey design on the analysis. Some of the key references on the analysis of complex survey data are Koch, Freeman, and Freeman (1975), Fienberg (1979), Brier (1980), Holt, Scott,
104
3. Three-Dimensional Tables
and Ewings (1980), Rao and Scott (1981, 1984, 1987), Bedrick (1983), Binder et al. (1984), Gross (1984), Fay (1985), and Thomas and Rao (1987). The collection of papers edited by Skinner, Holt, and Smith (1989) provides a useful summary of methods for analyzing data from complex surveys, including categorical data.
3.6
Model Selection Criteria
In analysis of variance and regression, the three measures most commonly used to evaluate the fit of models are R2 , Adjusted R2 , and Mallows’ Cp . Each of these measures has natural analogues in log-linear models. R2 measures how much of the total variation is being explained by the model. R2 has the property that models must explain as much or more of the variation than their submodels (models in which some terms have been deleted). Adjusted R2 modifies the definition of R2 so that larger models are penalized for being larger. Mallows’ Cp statistic is related to Akaike’s information criterion. We will apply Akaike’s information criterion to model selection for log-linear and logistic regression models and discuss the relation of Akaike’s criterion to Mallows’ Cp . Discussions of model selection criteria in general settings are given by Akaike (1973) and Schwarz (1978). A good review and comparison is presented in Clayton, Geisser, and Jennings (1986).
3.6.1
R2
In standard regression analysis, R2 is defined as R2 =
SSReg SSTot − C
where SSReg is the sum of squares for regression and SSTot−C is the sum of squares total corrected for the grand mean. In fact, SSTot−C is just the error sum of squares for the model that includes only an intercept. If we denote SSE(X) as the error sum of squares for an arbitrary model called X (e.g., with design matrix X) and SSE(X0 ) as the error sum of squares for a model with only an intercept, then R2 =
SSE(X0 ) − SSE(X) . SSE(X0 )
SSE(X0 ) is the total variation and SSE(X0 ) − SSE(X) is the variability explained by the model X. The ratio of these two, R2 , is the proportion of the total variation explained by the model. In general, there is no reason that SSE(X0 ) has to be the sum of squares for a model with just an intercept. In general, SSE(X0 ) could be the error
3.6 Model Selection Criteria
105
sum of squares for the smallest interesting model. In regression analysis, the smallest interesting model is almost always the model with only an intercept. In log-linear models, the smallest interesting model may well be the model of complete independence. (Recall from Exercise 2.4 that the independence model for a two-way table is also the intercept-only model for logistic regression.) In log-linear models, G2 plays a role similar to that of SSE in regression. If X0 indicates the smallest interesting model and X indicates the log-linear model of interest, we define R2 =
G2 (X0 ) − G2 (X) G2 (X0 )
where G2 (X) and G2 (X0 ) are the likelihood ratio test statistics for testing models X and X0 against the saturated model. If the X0 model is the smallest interesting model, then G2 (X0 ) is a measure of the total variability in the data. (It tests X0 against a model that fits the data perfectly.) It follows that G2 (X0 ) − G2 (X) measures the variability explained by the X model. R2 is the proportion of the total variability explained by the X model. Alternative definitions of R2 are available. As in standard regression analysis, R2 cannot be used to compare models that have different numbers of degrees of freedom. In regression analysis, this is caused by the fact that larger models have larger R2 ’s. Exactly the same phenomenon occurs with log-linear models. In fact, R2 for the saturated model will always equal one because G2 for the saturated model is zero.
3.6.2
Adjusted R2
Having defined R2 for log-linear models, the same adjustment for model size used in standard regression analysis can be used for log-linear models. The adjusted R2 is Adj. R2 = 1 −
q − r0 [1 − R2 ] q−r
where q is the number of cells in the table and r and r0 are the degrees of freedom for the models X and X0 . Note that there are q − r degrees of freedom for testing X against the saturated model and q − r0 degrees of freedom for testing X0 . A little algebra shows that Adj. R2 = 1 −
G2 (X)/(q − r) . G2 (X0 )/(q − r0 )
A large value of Adj. R2 indicates that the model X fits well. The largest value of Adj. R2 will occur for the model X with the smallest value of
106
3. Three-Dimensional Tables
G2 (X)/(q −r). Just as in regression analysis, the Adj. R2 criterion suggests the inclusion of many (probably too many) explanatory terms.
3.6.3
Akaike’s Information Criterion
We now consider Akaike’s information criterion as a method for selecting log-linear models. After describing Akaike’s method, we demonstrate its close relationship to standard regression model selection based on Mallow’s Cp statistic. Akaike (1973) proposed a criterion of the information contained in a statistical model. He advocated choosing the model that maximizes this information. For log-linear models, maximizing Akaike’s information criterion (AIC) amounts to choosing the model X that minimizes AX = G2 (X) − [q − 2r] ,
(log-linear)
where G2 (X) is the likelihood ratio test statistic for testing the X model against the saturated model, r is the number of degrees of freedom for the X model, and there are q degrees of freedom for the saturated model, i.e., q cells in the table. Given a list of models to be compared along with their G2 statistics and the degrees of freedom for the tests, a slight modification of AX is easier to compute by hand. AX − q
= G2 (X) − 2[q − r] = G2 (X) − 2 (test degrees of freedom) .
Because q does not depend on the model X, minimizing AX −q is equivalent to minimizing AX . Note that for the saturated model, A − q = 0. Before continuing our discussion of the AIC, we give an example of the use of A, R2 , and Adj. R2 . Example 3.6.1. For the personality (1), cholesterol (2), blood pressure (3) data of Examples 3.2.1 and 3.2.3, testing models against the saturated model gives Model [12][13][23] [12][13] [12][23] [13][23] [1][23] [2][13] [3][12] [1][2][3]
df 1 2 2 2 3 3 3 4
G2 0.613 2.062 2.980 4.563 7.101 6.184 4.602 8.723
A−q −1.387 −1.938 −1.020 0.563 1.101 0.184 −1.398 0.723
R2 .885 .764 .658 .477 .186 .291 .472 0
Adj. R2 .719 .527 .318 −.046 −.085 .055 .297 0
3.6 Model Selection Criteria
107
In Example 3.4.2, we established that there were three eligible models: [3][12], [12][13], and [12][13][23] and that model [12][23] was almost eligible. The AIC criterion A − q easily picks out all four of these models, with [12][13] the best of them. The adjusted R2 criterion also identifies these four models, but the values seem rather strange. For example, [12][23], which is not significantly better than [1][2][3], has a higher Adj. R2 than [3][12], which is significantly better than [1][2][3]. The R2 values seem like reasonable measures. The author’s inclination is to use the AIC, cf. Clayton, Geisser, and Jennings (1986). With only three factors, it is easy to look at all possible models. Model selection criteria become more important when dealing with tables having more factors.
Relation to Mallows’ Cp The approach to using Akaike’s information criterion outlined above is quite general. If we are considering a collection of models indexed by ξ, we can denote individual models as Mξ . For any ξ, let pξ be the dimension of the parameter space of the model. This is the number of independent parameters in the model. For models with linear structure, this is the degrees of freedom for the model (rank of the design matrix) plus the number of any independent nonlinear parameters. Log-linear models do not involve any nonlinear parameters. Standard regression and ANOVA involve one nonlinear parameter, the variance σ 2 . Suppose that there exists a most general model M with s independent parameters. In other words, any model Mξ is just a special case of M . For log-linear models, M is the saturated model and s is the number of cells in the table. For selecting variables in standard regression analysis, M is the full model that includes an intercept plus all s − 1 available variables. Finally, let Λ(ξ) be the likelihood ratio test statistic for testing Mξ against the larger model M . Maximizing Akaike’s information criterion is equivalent to choosing ξ to minimize Aξ = Λ(ξ) − (s − 2pξ ) . Consider applying this to the problem of variable selection in regression analysis. Let SSE(F ) be the sum of squares for error of the full model and SSE(X) be the error sum of squares for a reduced model with design matrix X and rank(X) = p. Let s be the degrees of freedom for the full model. If we assume that the variance σ 2 is known, then AX =
SSE(X) − SSE(F ) − (s − 2p) . σ2
(regression)
Because σ 2 will not really be known, an ad hoc procedure would be to ˆ 2 = SSE(F )/(n − s), the mean squared error for the full estimate σ 2 with σ
108
3. Three-Dimensional Tables
model where n is the regression sample size. An estimate of AX is AˆX
SSE(X) SSE(F ) − − (s − 2p) σ ˆ2 σ ˆ2 SSE(X) − (n − s) − (s − 2p) = σ ˆ2 SSE(X) − (n − 2p) = σ ˆ2 = Cp =
where Cp is Mallows’ well-known criterion for selecting regression models, cf. Christensen (1996b, Section 14.1).
3.7
Higher-Dimensional Tables
Log-linear models can easily be extended to tables with more than three factors. All of the basic principles from three-dimensional tables continue to apply. However the models, as well as independence and odds ratio relationships, become more complex. Independence relationships for highdimensional tables are discussed in Chapter 5. With more factors, there are many more models to consider. Systematic methods of model selection are discussed in Chapter 6. In this section, we just illustrate some examples.
Example 3.7.1. A study was performed on mice to examine the relationship between two drugs and muscle tension. For each mouse, a muscle was identified and its tension was measured. A randomly chosen drug was given to the mouse and the muscle tension was measured again. The muscle was then tested to identify which type of muscle it was. The weight of the muscle was also measured. Factors and levels are tabulated below. Factor Change in Muscle Tension Weight of Muscle Muscle Drug
Abbreviation T W M D
Levels High, Low High, Low Type 1, Type 2 Drug 1, Drug 2
The sampling is product-multinomial with the total count for each muscle type fixed. The data are
3.7 Higher-Dimensional Tables
Tension(h)
109
Drug(k) Drug 1 Drug 2 3 21 23 11
Weight(i) High
Muscle(j) Type 1 Type 2
Low
Type 1 Type 2
22 4
32 12
High
Type 1 Type 2
3 41
10 21
Low
Type 1 Type 2
45 6
23 22
High
Low
For illustration, we fit three log-linear models to this four-factor table: the model of all main effects log(mhijk ) = γ + τh + ωi + µj + δk ,
(1)
the model of all two-factor interactions log(mhijk ) = γ + τh + ωi + µj + δk + (τ ω)hi + (τ µ)hj + (τ δ)hk + (ωµ)ij + (ωδ)ik + (µδ)jk ,
(2)
and the model of all three-factor interactions log(mhijk ) = γ + τh + ωi + µj + δk + (τ ω)hi + (τ µ)hj + (τ δ)hk + (ωµ)ij + (ωδ)ik + (µδ)jk (3) + (τ ωµ)hij + (τ ωδ)hik + (τ µδ)hjk + (ωµδ)ijk . Getting rid of some of the redundant parameters, these can be rewritten as (1) log(mhijk ) = τh + ωi + µj + δk , log(mhijk ) = (τ ω)hi + (τ µ)hj + (τ δ)hk + (ωµ)ij + (ωδ)ik + (µδ)jk ,
(2)
and log(mhijk ) = (τ ωµ)hij + (τ ωδ)hik + (τ µδ)hjk + (ωµδ)ijk
(3)
respectively, leading to the shorthand notations [T][W][M][D],
(1)
[TW][TM][WM][TD][WD][MD],
(2)
[TWM][TWD][TMD][WMD].
(3)
As discussed in Section 4, the shorthand provides all the information necessary for fitting the model (other than the actual cell counts).
110
3. Three-Dimensional Tables
The test statistics for testing these models against the saturated model are given below. Clearly, the only model that fits the data is the model of all three-factor interactions. Model [TWM][TWD][TMD][WMD] [TW][TM][WM][TD][WD][MD] [T][W][M][D]
df 1 5 11
G2 0.11 47.67 127.4
P .74 .00 .00
As before, we can also test any model against reduced models. For example the test of [TWM][TWD][TMD][WMD] versus the reduced model [TW][TM][WM][TD][WD][MD] has G2 = 47.67 − 0.11 = 47.56 on df = 5 − 1 = 4. The data of Example 3.7.1 and the following data will be used to illustrate techniques in subsequent chapters. Example 3.7.2. Consider a data set in which there are four factors defining a 2 × 2 × 3 × 6 table. The factors are Factor Race Sex Opinion
Abbreviation R S O
Age
A
Levels White, Nonwhite Male, Female Yes = Supports Legalized Abortion No = Opposed to Legalized Abortion Und = Undecided 18-25, 26-35, 36-45, 46-55, 56-65, 66+ years
The sex and opinion factors are reminiscent of Example 2.1.1, but the data are distinct. The data are given in Table 3.1. See also Exercise 3.8.10.
3.7.1
Computer Commands
The muscle tension data are listed in file ‘tension.dat’ as counts for each cell with indices for tension, weight, muscle type, and drug, respectively. The file is as given below. 3 21 23 11 22 32 4
1 1 1 1 1 1 1
1 1 1 1 2 2 2
1 1 2 2 1 1 2
1 2 1 2 1 2 1
3.7 Higher-Dimensional Tables
111
TABLE 3.1. Abortion Opinion Data Race
Sex
Opinion
18-25
26-35
Age 36-45 46-55
56-65
66+
Male
Yes No Und
96 44 1
138 64 2
117 56 6
75 48 5
72 49 6
83 60 8
Female
Yes No Und
140 43 1
171 65 4
152 58 9
101 51 9
102 58 10
111 67 16
Male
Yes No Und
24 5 2
18 7 1
16 7 3
12 6 4
6 8 3
4 10 4
Female
Yes No Und
21 4 1
25 6 2
20 5 1
17 5 1
14 5 1
13 5 1
White
Nonwhite
12 3 10 41 21 45 23 6 22
1 2 2 2 2 2 2 2 2
2 1 1 1 1 2 2 2 2
2 1 1 2 2 1 1 2 2
2 1 2 1 2 1 2 1 2
We can fit the log-linear model [WMD][TWM][TWD][TMD] using SAS PROC GENMOD. options ps=60 ls=72 nodate; data tension; infile ’tension.dat’; input n T W M D; proc genmod data=tension; class T W M D; model n = W*M*T T*W*M T*W*D T*M*D / link=log dist=poisson; run; The main differences between these commands and those given in Subsection 2.6.1 for logistic regression are that now “link=log” and
112
3. Three-Dimensional Tables
“dist=poisson”. These change GENMOD from fitting logistic regression to fitting log-linear models. To fit other specific models such as [TM][WM][MD] or [T][WM][D], the model statement uses T∗M W∗M M∗D or T W∗M D, respectively. The “class” command used above specifies that a variable is not acting like a predictor variable in regression but rather that it gives indices for specifying the levels of an analysis of variance type factor. Similarly, we can use GENMOD to fit the abortion data. The data file ‘abort.dat’ has five columns, the first four are indices for race, sex, age, and opinion. The last column has the counts for each cell. The SAS commands for fitting the model [RSO][RSA][ROA][SOA] are options ps=60 ls=72 nodate; data abort; infile ’abort.dat’; input R S A O N; proc genmod data=abort; class R S A O; model N = R*S*O R*S*A R*O*A S*O*A / link=log dist=poisson; run; GLIM uses commands that are similar to GENMOD. Interactions are specified with a period rather than an asterisk. GLIM begins by specifying the number of cells in the table, i.e., the “units.” $units 72$ $data r s a o n$ $factor r 2 s 2 a 6 o 3$ $dinput 6$ $yvar n$ $error poisson$ $fit r.s.o + r.s.a + r.o.a + s.o.a$ $display e$ $stop$ The “factor” command specifies that a variable is not acting like a predictor variable in regression but rather that it gives indices for specifying the levels of an analysis of variance type factor. For GLIM, the user needs to specify the number of levels for each factor. After the ‘dinput 6’ command, the DOS version of GLIM prompts the user for the name of the data file. GENMOD and GLIM use the Newton-Raphson algorithm. BMDP-4F uses iterative proportional fitting. For the model [RSO][RSA][ROA][SOA], the BMDP-4F commands are as follows: / INPUT
FILE = ’C:\LOGLIN\ABORT.DAT’. FORMAT = FREE.
3.8 Exercises
113
VARIABLES = 5. / VARIABLE NAMES = R, S, A, O, N. / TABLE INDICES = R, S, A, O. COUNT = N. / STAT ALL. / FIT MODEL = RSO, RSA, ROA, SOA. / PRINT LINE = 79. / END BMDP-4F is the most powerful program I am aware of for fitting analysis of variance type log-linear models. In addition to GENMOD, SAS has a procedure called CATMOD. CATMOD will not be discussed. (I’ll leave the reasons to your imagination.)
3.8
Exercises
Exercise 3.8.1. Complete an analysis similar to that of Example 3.4.2 for the classroom behavior data of Example 3.0.1. Exercise 3.8.2. Complete an analysis similar to that of Example 3.4.2 for the auto accident data of Example 3.2.4. Exercise 3.8.3. Radelet (1981) gives data on the relationship between race and the imposition of the death penalty. The data are given in Table 3.2. Analyze the data. TABLE 3.2. Race and the Death Penalty Defendant’s Race Black
Victim’s Race Black White
Death Penalty Yes No 6 97 11 52
White
Black White
0 19
9 132
Exercise 3.8.4. The data on graduate admissions at Berkeley given in Exercise 2.6.1 was actually collapsed over the six largest departments within the university. The possibility exists that the data may display Simpson's paradox. The full data are given in Table 3.3. Analyze the three-dimensional table and comment on Simpson's paradox relative to these data.

TABLE 3.3. Graduate Admissions at Berkeley

                 Male                     Female
Dept.    Admitted   Rejected      Admitted   Rejected
A           512        313            89         19
B           353        207            17          8
C           120        205           202        391
D           138        279           131        244
E            53        138            94        299
F            22        351            24        317

Exercise 3.8.5. Discuss Simpson's paradox in terms of the following probability inequalities.
   Pr(A|B and C) < Pr(A|not B and C),
   Pr(A|B and not C) < Pr(A|not B and not C),
and
   Pr(A|B) > Pr(A|not B).

Exercise 3.8.6. Reevaluate your analysis of the data discussed in Exercise 2.6.3 in light of Simpson's paradox. Are there other factors that need to be accounted for in a correct analysis of these data?

Exercise 3.8.7. For the data of Example 3.2.4, do the first step of the iterative proportional fitting algorithm for m̂ijk^(7) using a hand calculator. Use starting values of m̂ijk^[0] = 1. Compare the results after one step to the fully iterated estimates.

Exercise 3.8.8. Consider the model
   log(mijk) = u + u1(i) + u2(j) + u3(k) + u12(ij).
(a) Show that the maximum likelihood estimate of u3(1) − u3(2) is log(n··1) − log(n··2).
(b) Show that maximum likelihood estimation gives
   log[û12(11)û12(22)/(û12(12)û12(21))] = log(n11·) − log(n12·) − log(n21·) + log(n22·).

Exercise 3.8.9. The Mantel-Haenszel Statistic. In biological and medical applications, it is not uncommon to be confronted with a series of 2 × 2 tables that examine the same effect under different conditions. If there are K such tables, the data can be combined to form a 2 × 2 × K table. Because each 2 × 2 table examines the same effect, it is often assumed that the odds ratio for the effect is constant over tables. This is equivalent to assuming the no three-factor interaction model. To test for the existence of the effect, one tests whether the common log odds ratio is zero while adjusting for the various circumstances under which data were collected. In terms of log-linear models, this is a one degree of freedom test of conditional independence given the layer k. Prior to the development of log-linear model theory, Mantel and Haenszel (1959) proposed a statistic for testing this hypothesis. The statistic, apart from a continuity correction factor, is

   [Σk (n11k − m̂11k)]² / Σk [m̂11k m̂22k/(n··k − 1)] ,

where the m̂'s are obtained from the conditional independence model. This statistic has an asymptotic χ2(1) distribution under the conditional independence model. The Berkeley graduate admission data of Exercise 3.8.4 and Table 3.3 is a set of six 2 × 2 tables. In each table we are interested in the effect of sex on admission; the six departments constitute various conditions under which this effect is being investigated.
a) Give a justification for whether or not use of the Mantel-Haenszel statistic is appropriate for these data.
b) If appropriate, use both G2 and the Mantel-Haenszel statistic to test whether there is an effect of sex on admission.
c) Show that the denominator of the Mantel-Haenszel statistic can be written as Σk m̂12k m̂21k/(n··k − 1).

Exercise 3.8.10. Using the data of Table 3.1 fit the all main effects model, the all two-factor effects model, and the all three-factor effects model. Perform all of the tests possible among these three models. Discuss your results.

Exercise 3.8.11. With regard to Section 3, show that the m̂ijk^[3t+2]'s and the m̂ijk^[3(t+1)]'s also satisfy M(7).
Exercise 3.8.12. As can be seen from the iterative proportional fitting algorithm, the m̂'s for the model of no three-factor interaction depend only on the 3 two-dimensional marginal tables. Discuss how this fact can be used to develop a more complete analysis for the Gilby data of Exercise 2.7.3. What assumptions must be made and what techniques should be used? What problems will Standard VIII cause?

Project 3.8.13. Write a computer program to fit the model of no three-factor interaction to the Gilby data of Exercise 2.6.3. This can be done using iterative proportional fitting. Assuming that this model fits, do any submodels fit adequately?
4 Logistic Regression, Logit Models, and Logistic Discrimination
In logistic regression, there is a (binary) response of interest, and predictor variables are used to model the probability of that response. More generally, in a table of counts, primary interest is frequently centered on one factor that constitutes a response (dependent) variable. The other factors in the table are only of interest for their ability to help explain the response variable. Special kinds of models have been developed to handle these situations. In particular, rather than modeling log expected cell counts or log probabilities (as in log-linear models), when there is a response variable, various log odds related to the response variable are modeled. The special case in which the response variable has only two categories is of particular interest and lends itself to an especially nice treatment. This is because, with only two categories, there is essentially only one way to define the odds. If p1 is the probability in the first category and p2 is the probability in the second category, then the odds of getting category one are p1 /p2 . The odds of getting category two are p2 /p1 . The important point is that either of these numbers, together with the fact that p1 + p2 = 1, completely determine both p1 and p2 . So with two categories, the two choices for the odds lead to the same results. (In the last section of this chapter, we look at ratios p1 /p2 but without p1 + p2 = 1; this causes complications.) With a two-category response variable, we will examine models for log(p1 /p2 ). When these models are regression type models, they are called logistic regression models. When these models are ANOVA type models, they are often referred to as logit models. The two terms “logit” and “logistic regression,” as applied to models, are essentially two names for the
same idea. Technically, the terms logit and logistic are names for transformations. The logit transformation takes a number p between 0 and 1 and transforms it to log[p/(1 − p)]. The logistic transformation takes a number x on the real line and transforms it to ex /(1 + ex ). Note that the logit transformation and the logistic transformation are inverses of each other. In other words, the logistic transformation applied to log[p/(1 − p)] gives p and the logit transformation applied to ex /(1 + ex ) gives x. Doing an analysis of data requires both of these transformations. It is largely a matter of personal preference as to which name is associated with the model. The situation when there are more than two categories in the response variable is considerably more complicated because it is not clear which sets of odds to model. Several choices have been suggested; some of these are discussed in Section 6. As will be discussed in this chapter and shown in Chapter 11, the log odds models turn out to be equivalent to a loglinear model. It is important to remember that log odds models are for use when relationships between the nonresponse factors (explanatory variables) are not of interest. It is implicit in the definition of a logit model that no structure between the explanatory variables is taken into account. It is possible to use models for log odds that incorporate explanatory factor structure, but such models are not what are generally known as logistic regression or logit models. Goodman (1973) proposes a multistep modeling procedure for response factors. His procedure involves collapsing on the response factor and fitting a log-linear model to the marginal table of the explanatory factors. This is followed by fitting a logit model for the response factor. Taken together, these give the probability of any cell in the table as the product of the marginal probability of its explanatory categories and the conditional probability of the cell given its explanatory categories. These recursive response models may or may not be log-linear models. Asmussen and Edwards (1983) give conditions for the equivalence of log-linear models and these multistep response models. Fienberg (1980, Chapter 7) gives a good brief discussion of response models and their limitations. More recently, graphical response models have been discussed by Asmussen and Edwards (1983), Edwards and Kreiner (1983), Kiiveri, Speed, and Carlin (1984), Kiiveri and Speed (1982), and Wermuth and Lauritzen (1983). Graphical models are discussed in the next chapter. Holland (1986) discusses statistics and causal inference, as do Glymour et al. (1987). The latter authors seem to have a substantially different perspective than that presented here. Goodman’s basic procedure can be applied with more than one response factor and with responses involving more than two categories. In this chapter, attention is concentrated on models for one response factor that condition on all the explanatory factors. If more than one response factor is present, one simple approach just restricts the fitted models to log-linear models that condition on the explanatory factors.
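Returning to the logit and logistic transformations defined above, a quick numerical check that they are inverses of one another can be sketched in Python (this sketch, and its function names, are mine, not part of the original text):

import math

def logit(p):
    # logit transformation: p in (0, 1) goes to the real line
    return math.log(p / (1 - p))

def logistic(x):
    # logistic transformation: the real line goes to (0, 1)
    return math.exp(x) / (1 + math.exp(x))

# The two transformations undo each other.
for p in [0.1, 0.25, 0.5, 0.9]:
    assert abs(logistic(logit(p)) - p) < 1e-12
for x in [-3.0, 0.0, 1.5]:
    assert abs(logit(logistic(x)) - x) < 1e-12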
Section 1 examines regression models for two category responses. Sections 2, 3, and 4 discuss measuring the fit of models, logistic regression diagnostics, and variable selection, respectively. Analysis of variance type models are examined in Section 5 for responses with two categories and in Section 6 for responses with more than two categories. Section 7 examines the analysis of retrospective studies via logistic discrimination; the distinction between retrospective and prospective studies is discussed in the next subsection.
Retrospective Versus Prospective Studies

It is important to distinguish between prospective and retrospective studies. It is important because aspects of the material in this chapter do not apply to retrospective studies. For our purposes, the distinction is based on the nature of the sampling scheme. If the sampling is independent Poisson or if the categories of the response factor are in the same multinomial for every combination of the explanatory factors, the study is prospective. This is probably the most common way of thinking about data collection. For example, a prospective study of heart attack victims might take 250 people and examine each to determine whether they have had a heart attack and their levels of various explanatory factors. The explanatory factors may be such things as age, blood cholesterol, and blood pressure. In the prospective study, each combination of explanatory factors can be used to determine a population and an individual randomly falls into a response category. So, typically, there are many populations, and often each individual in the study is sampled from a different population. Prospective studies are (or can be thought of as) product-multinomials in which the multinomial categories are the categories of the response factor.
Obviously, in a random sample of 250 people from the general population, very few would have had heart attacks. An alternative sampling method is often used in the study of such rare events as heart attacks. One might sample 100 people who are known to have had heart attacks and 150 people who have not had heart attacks. Again, each individual is characterized by their levels of the explanatory factors. There are only two populations here: the heart attack victims and the subjects without heart attacks. The categories of the multinomials for the two populations are the different categories of explanatory factors. The analysis of such data involves describing the characteristics of the two groups in terms of the explanatory factors. The key fact in this second example is that the response factor categories define different multinomial populations and the multinomial categories are the different combinations of the explanatory factors. Generally, if, for all combinations of the explanatory factors, the various categories of the response factor occur in different multinomials, then the study is retrospective. In medical research, retrospective data
collection corresponds to a case-control study. Clearly, for rare events, this procedure has advantages over selecting a simple random sample. In a multinomial sample, so few of the rare events will occur as to give little power for determining their likelihood.
In the hypothetical retrospective study of heart attacks, let the index i denote a set of explanatory characteristics; let p1i be the probability of that set for the heart attack population and p2i be the probability for the population who have not had heart attacks. A parameter of interest is p1i/p2i, the ratio of the probabilities. (Note that, in general, p1i + p2i ≠ 1, so these are not odds.) The parameter addresses the question of whether the explanatory characteristics i are more or less likely among heart attack victims than among subjects who have not had heart attacks. Unfortunately, this does not address the issue of cause and effect. It simply describes characteristics of the two populations. As will be seen in Section 7, inferences about log(p̂1i/p̂2i) will be complicated by the fact that the probabilities apply to different multinomials. The asymptotic covariance of log(p̂1i/p̂2i) does not simplify like it would if the probabilities were from the same multinomial. Moreover, one does not typically have the simplification that p1i/p2i is equal to m1i/m2i; thus, inferences about log(p1i/p2i) cannot be made directly by examining log(m̂1i/m̂2i).
Prospective studies do not directly address the issue of cause and effect either, but they come closer than retrospective studies. As discussed earlier, both multinomial sampling and some forms of product-multinomial sampling generate prospective studies. An example of a product-multinomial prospective study is to independently sample people for each set of explanatory characteristics and see how many have had heart attacks. In medical research, the data given by this sampling scheme are called cohort data, and cross-sectional data are used to indicate the results of simple multinomial sampling. For cohort data, take p1i to be the probability of a heart attack in the population defined by the ith set of explanatory variables. Obviously, p2i = 1 − p1i is the probability of no heart attack in that population. The ratio p1i/p2i is the odds of having a heart attack for that population. (Recall that describing p1i/p2i as an odds is not appropriate in a retrospective study.) In a cohort study, one can argue that the ith population is a cause and that the ratio p1i/p2i is an effect. Unfortunately, populations usually involve more than the explanatory factors used to define them. Aspects of the ith population other than the values of the explanatory factors may be the true cause for p1i/p2i. We hope that random sampling within the population of people minimizes these effects. For cross-sectional data, one can use the mental device of conditioning on the number of people who fall in each explanatory category to get an independent multinomial sample for each set of explanatory characteristics. Thus, we can treat cross-sectional prospective data as if they were cohort prospective data. In fact, any prospective study can be thought of
as product-multinomial sampling with an independent sample for each set of explanatory characteristics. This results from the fact that a prospective study is defined to be one in which all of the categories of the response factor are contained in the same multinomial. Except for the last section, we restrict attention in this chapter to prospective studies. A convenience of dealing with prospective studies is that log odds defined for the response factor are the same using either probabilities or expected cell counts, i.e., log(p1i /p2i ) = log(m1i /m2i ).
4.1 Multiple Logistic Regression
This section is devoted to regression models for the log odds of a two-category response variable. The difference between this section and Section 2.6 is that here we consider the use of multiple predictor variables. The discussion will be centered around an example.

Example 4.1.1. Chapman Data.
Dixon and Massey (1983) present data on 200 men taken from the Los Angeles Heart Study conducted under the supervision of John M. Chapman, UCLA. The data consist of seven variables:

Abbreviation   Variable                     Units
Ag             Age:                         in years
S              Systolic Blood Pressure:     millimeters of mercury
D              Diastolic Blood Pressure:    millimeters of mercury
Ch             Cholesterol:                 milligrams per DL
H              Height:                      inches
W              Weight:                      pounds
CNT            Coronary incident:           1 if an incident had occurred in the
                                            previous ten years; 0 otherwise
Of the 200 cases, 26 had coronary incidents. The data are available electronically from STATLIB as well as through my web homepage: http://stat.unm.edu/˜fletcher Additional information is given in the Preface. As discussed in Section 2.6, such data can be viewed as a 200 × 2 contingency table in which the columns indicate presence or absence of a coronary incident and the rows indicate the 200 combinations of the explanatory variables Ag, S, D, Ch, H, and W associated with the men in the study. Each row is considered an independent binomial involving one trial. (If more than one person has the same combination of explanatory variables, it is
irrelevant whether they are treated as binomials with one trial or grouped together yielding a table with less than 200 rows.) The table has 200 counts and 400 cells, so the data are very sparse. As discussed in Section 2.6, when testing a logistic regression model against the saturated log-linear model (i.e., testing the logistic model for lack of fit), the asymptotic χ2 approximation is notoriously bad. The test statistics are reasonable things to look at, but formal χ2 tests are generally inappropriate because of the sparse data. Somewhat surprisingly, asymptotic χ2 approximations do work for testing one logistic regression model (a full model) against another logistic regression (a reduced model).
Let pi be the probability of a coronary incident for the ith man. We begin with the logistic regression model

   log[pi/(1 − pi)] = β0 + β1 Agi + β2 Si + β3 Di + β4 Chi + β5 Hi + β6 Wi ,   (1)

i = 1, . . . , 200. As discussed in Section 2.6 and Chapter 11, this is equivalent to a log-linear model for a two-way table in which the predictor variables are used to model the interaction, cf. Exercise 4.8.15. The model can be fitted using methods for log-linear models or the methods can be specialized for fitting logistic regression models, cf. Subsection 4.4.2 for SAS, BMDP, and GLIM commands. The actual methods for fitting logistic models will be examined in later chapters. The maximum likelihood fit of this model is given below.

Variable    Estimate    Std. Error      z
Intercept   −4.5173      7.451        −0.61
Ag           0.04590     0.02344       1.96
S            0.00686     0.02013       0.34
D           −0.00694     0.03821      −0.18
Ch           0.00631     0.00362       1.74
H           −0.07400     0.1058       −0.70
W            0.02014     0.00984       2.05

G2 = 134.9,   df = 193
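For readers working outside SAS, BMDP, or GLIM, a rough Python sketch of this fit using the statsmodels package is given below. It is only an illustration, not part of the book's computing discussion; the layout assumed for 'chapman.dat' is the one described in Subsection 4.4.2.

import pandas as pd
import statsmodels.api as sm

# chapman.dat columns: case index, Ag, S, D, Ch, H, W, Cnt (see Subsection 4.4.2)
cols = ["ID", "Ag", "S", "D", "Ch", "H", "W", "Cnt"]
chapman = pd.read_csv("chapman.dat", sep=r"\s+", header=None, names=cols)

X = sm.add_constant(chapman[["Ag", "S", "D", "Ch", "H", "W"]])
fit = sm.GLM(chapman["Cnt"], X, family=sm.families.Binomial()).fit()

print(fit.summary())   # estimates and standard errors, as in the table above
print(fit.deviance)    # G2 = 134.9 on 200 - 7 = 193 df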
The formula for G2 is as in Section 2.6. The df is the number of cases, 200, minus the number of fitted parameters, 7. Based on the z values, none of the variables really stand out. There are suggestions of age, cholesterol, and weight effects. The G2 would look good except that, as discussed earlier, there is no basis for comparing it to a standard.
Prediction follows much the same form as in Section 2.6,

   log[p̂i/(1 − p̂i)] = β̂0 + β̂1 Agi + β̂2 Si + β̂3 Di + β̂4 Chi + β̂5 Hi + β̂6 Wi .

For a 60-year-old man with blood pressure of 140 over 90, a cholesterol reading of 200, who is 69 inches tall and weighs 200 pounds, the estimated
discussed in detail in Section 3.6. A∗ is a modification of A − q for logistic regression. A − q ≡ G2 − 2(df) and is the Akaike information criterion less the number of cells (200 × 2) in the table. A∗ is a version of the Akaike information criterion defined for comparing submodels of model (1) to the full model. It is defined by

   A∗ = (G2 − 134.9) − (7 − 2p) .

Here, 134.9 is G2 for the full model (1), 7 comes from the degrees of freedom for the full model (6 explanatory variables plus an intercept), and p comes from the degrees of freedom for the submodel (p = 1 + number of explanatory variables). The information in A − q and A∗ is identical: A∗ = 258.1 + (A − q). (The value 258.1 = number of cells − G2[full model] − p[full model] = 400 − 134.9 − 7.) A∗ is listed because it is a little easier to look at and takes values similar to Cp.

Model Variables    df      G2       A − q      A∗
Ag,S,D,Ch,H,W      193    134.9    −251.1      7
Ag                 198    142.7    −253.3      4.8
W                  198    150.1    −245.9     12.2
H,W                197    146.8    −247.2     10.9
Ch                 198    146.9    −249.1      9.0
S,D                197    147.9    −246.1     12.0
Intercept          199    154.6    −243.4     14.7
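The bookkeeping behind the table is easy to automate. The following Python sketch is my own illustration of the definitions just given, not anything from the original text; it reproduces the row for the model containing only Ag.

def A_minus_q(G2, df):
    # A - q = G^2 - 2(df), the Akaike information criterion less the number of cells
    return G2 - 2 * df

def A_star(G2, p, G2_full=134.9, p_full=7):
    # A* = (G^2 - G^2[full model]) - (p[full model] - 2p), for submodels of model (1)
    return (G2 - G2_full) - (p_full - 2 * p)

# Model with Ag alone: G^2 = 142.7 on 198 df, p = 2 parameters.
print(round(A_minus_q(142.7, 198), 1))   # -253.3
print(round(A_star(142.7, 2), 1))        # 4.8, better than the full model's 7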
Of the models listed,

   log[pi/(1 − pi)] = γ0 + γ1 Agi   (2)

is the only model that is better than the full model based on the information criterion; i.e., A∗ is 4.8 for this model, less than the 7 for model (1). Asymptotically valid tests of submodels against model (1) are available. These are performed in the usual way; i.e., the differences in degrees of freedom and G2's give the appropriate values for the tests. For example, the test of model (2) versus model (1) has G2 = 142.7 − 134.9 = 7.8 with df = 198 − 193 = 5. Other tests are given below.

Tests against Model (1)
Model        df     G2
Ag            5      7.8
W             5     15.2**
H,W           4     11.9*
Ch            5     12.0*
S,D           4     13.0*
Intercept     6     19.7**
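These comparisons are routine enough to script. The sketch below is mine (not from the text); it turns a difference in G2's into a chi-squared P value using scipy.

from scipy.stats import chi2

def lr_test(G2_reduced, df_reduced, G2_full=134.9, df_full=193):
    # Difference in G^2's compared to a chi-squared distribution on the
    # difference in degrees of freedom.
    stat = G2_reduced - G2_full
    df = df_reduced - df_full
    return stat, df, chi2.sf(stat, df)

# Model (2), Ag only, against model (1): G^2 = 7.8 on 5 df, P about .17.
print(lr_test(142.7, 198))
# Intercept-only model against model (1): G^2 = 19.7 on 6 df, P about .003.
print(lr_test(154.6, 199))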
All of the test statistics are significant at the .05 level, except for that associated with model (2). This indicates that none of the models other than (2) is an adequate substitute for the full model (1). In the table above, one asterisk indicates significance at the .05 level and two asterisks denote significance at the .01 level.
Our next step is to investigate models that include Ag and some other variables. If we can find one or two variables that account for most of the value G2 = 7.8, we may have an improvement over model (2). If it takes three or more variables to explain the 7.8, model (2) will continue to be the best-looking model. [Note that χ2(.95, 3) = 7.81, so a model with three more variables than model (2) and the same fit as model (1) would still not demonstrate a significant lack of fit in model (2).] Below are fits for all models that involve Ag and either one or two other explanatory variables.

Model Variables    df      G2      A∗
Ag,S,D,Ch,H,W      193    134.9    7.0
Ag,S,D             196    141.4    7.5
Ag,S,Ch            196    139.3    5.4
Ag,S,H             196    141.9    8.0
Ag,S,W             196    138.4    4.5
Ag,D,Ch            196    139.0    5.1
Ag,D,H             196    141.4    7.5
Ag,D,W             196    138.5    4.6
Ag,Ch,H            196    139.9    6.0
Ag,Ch,W            196    135.5    1.6
Ag,H,W             196    138.1    4.2
Ag,S               197    141.9    6.0
Ag,D               197    141.4    5.5
Ag,Ch              197    139.9    4.0
Ag,H               197    142.7    6.8
Ag,W               197    138.8    2.9
Ag                 198    142.7    4.8
Based on the A∗ values, two models stand out:

   log[pi/(1 − pi)] = γ0 + γ1 Agi + γ2 Wi   (3)

with A∗ = 2.9 and

   log[pi/(1 − pi)] = η0 + η1 Agi + η2 Wi + η3 Chi   (4)

with A∗ = 1.6. The estimated parameters and standard errors for model (3) are
Variable    Parameter   Estimate    SE
Intercept   γ0          −7.513      1.706
Ag          γ1           0.06358    0.01963
W           γ2           0.01600    0.00794
For model (4), these are

Variable    Parameter   Estimate     SE
Intercept   η0          −9.255       2.061
Ag          η1           0.05300     0.02074
W           η2           0.01754     0.008243
Ch          η3           0.006517    0.003575
The coefficients for Ag and W are quite stable in the two models. The coefficients of Ag, W, and Ch are all positive, so that a small increase in age, weight, or cholesterol is associated with a small increase in the odds of having a coronary incident. Note that we are establishing association, not causation. As in regular regression, interpreting regression coefficients can be very tricky. The fact that the regression coefficients are all positive conforms with the conventional wisdom that high values for any of these factors increases one's chance of heart trouble. However, as in standard regression analysis, correlations between predictor variables can make interpretations of individual regression coefficients almost impossible.
It is interesting to note that from fitting model (1), the estimated regression coefficient for D, diastolic blood pressure, is negative. A naive interpretation would be that as diastolic blood pressure goes up, the probability of a coronary incident goes down. (If the log odds go down, the probability goes down.) This is contrary to common opinion about how these things work. Actually, this is really just an example of the fallacy of trying to interpret regression coefficients. The regression coefficients have been determined so that the fitted model explains these particular data as well as possible. As mentioned, correlations between the predictor variables can have a huge effect on the estimated regression coefficients. The sample correlation between S and D is .802, so estimated regression coefficients for these variables are unreliable. Moreover, it is not even enough just to check pairwise correlations between variables; any large partial correlations will also adversely affect interpretations. Fortunately, such correlations should not normally have an adverse effect on the predictive ability of the model; they only adversely affect attempts to interpret regression coefficients.
In Chapter 13, we will see that the regression coefficients also depend on the precise form of the logit model. Other methods for modeling the probabilities that are both reasonable and very similar to logistic regression can have very different regression coefficients while giving very similar probabilities. Finally, in this particular example, another excuse for the D coefficient β̂3
being negative is that from the z value, β3 is not significantly different from zero.
The estimated blood pressure coefficients from model (1) also suggest an interesting hypothesis. (The hypothesis would be much more interesting if the individual coefficients were significant, but we wish to demonstrate a modeling technique.) The coefficient for D is −.00694, which is approximately the negative of the coefficient for S, .00686. This suggests that perhaps the difference S − D would be just as valuable a predictor as the individual predictors S and D. We can evaluate this by fitting

   log[pi/(1 − pi)] = γ0 + γ1 Agi + γ2(Si − Di) + γ3 Chi + γ4 Hi + γ5 Wi ,

which gives G2 = 134.9 on df = 194. This model is a special case of model (1), so a test of it against model (1) has G2 = 134.9 − 134.9 = 0 with df = 194 − 193 = 1. The G2 is essentially zero, so the data are consistent with the reduced model. Of course, this reduced model was suggested by the fitted full model, so any formal test would be biased, but then one does not accept null hypotheses anyway, and the whole point of choosing this reduced model was that it seemed likely to give a G2 close to that of model (1). We note that the new variable, S − D, is still not significant; it only has a z value of .006834/.01877 = .36.
Another way to view the procedure of the previous paragraph would be as a test of H0 : β3 = −β2 in model (1). If we incorporate this hypothesis into model (1), we get

   log[pi/(1 − pi)] = β0 + β1 Agi + β2 Si + (−β2)Di + β4 Chi + β5 Hi + β6 Wi
                    = β0 + β1 Agi + β2(Si − Di) + β4 Chi + β5 Hi + β6 Wi

as displayed above. (Whether we call the parameters β's or γ's is irrelevant.)
We learned earlier that, relative to model (1), either model (3) or (4) does an adequate job of explaining the data. This conclusion was based on looking at A∗ values, but would also be obtained by doing formal tests of models. Thus, we know that age and weight are important variables in explaining coronary incidents. Moreover, cholesterol may also be an important variable. However, we have not explained most of the variability in coronary incidents. Consider a measure analogous to R2. The smallest interesting logistic regression model is log[pi/(1 − pi)] = γ0. As seen earlier, this has G2 = 154.6 on 199 degrees of freedom. The percent of variability explained by model (3) is

   R2(Ag, W) = (154.6 − 138.8)/154.6 = .10 ,
which seems pretty pathetic. In fact, the R2 from the full model is not much better:

   R2(Ag, S, D, Ch, H, W) = (154.6 − 134.9)/154.6 = .13 .
We are a very long way from fitting the data as well as the saturated model fits. In fact, if we fit a 28-parameter model including all variables, all squares of variables, and all two-factor cross-product terms, G2 is 108.8, so R2 is still only .30. Granted, we have not explained most of the variation in the data, but it was probably not reasonable to think that we could. In standard regressions, a perfect fitting model can have a low R2 . This happens when there is substantial pure error in the model. The same thing happens in logistic regression. That fact will be illustrated in the next section.
4.2 Measuring Model Fit
In regression and ANOVA, R2 is large when the pure error Var(yi) = σ2 is small. When σ2 is unknown, there is always the hope that it will be small (if we can find the correct model). In logistic regression, Var(yi) = Ni pi(1 − pi). There isn't any hope of making this universally small. You can only make it truly small by looking at uninteresting data: those with pi near 0 or 1. Cases with a realistic chance of going either way make for large variability.
We begin our examination of how well models fit by looking at the likelihood ratio and Pearson test statistics as applied to logistic regression. As before, we have I independent binomials, each consisting of one trial. Thus, we have a logistic regression with I cases and the dependent variable y is either 0 or 1. Equivalently, we have an I × 2 table with counts nij, where ni1 + ni2 = 1. Let the variable y correspond to counts in the first column of the table so that yi = ni1 and 1 − yi = ni2. If a logistic regression model for log[pi/(1 − pi)] is fitted, we obtain estimates p̂i of the pi's. For the corresponding log-linear model, p̂i = m̂i1 and (1 − p̂i) = m̂i2. The likelihood ratio and Pearson test statistics against the saturated model were given in Section 2.6.

Example 4.2.1. Suppose that in a true and very accurate model, there are 30 observations (i values) each with pi ≈ .1 ≈ p̂i and 30 with pi ≈ .9 ≈ p̂i. From the first 30, we could then expect to get about three observations with yi = 1. From (2.6.6), each of these observations has a crude standardized residual of about

   (1 − .1)/√[.1(1 − .1)] = 3 .
The other 27 observations will have residuals of

   (0 − .1)/√[.1(1 − .1)] = −.333 .

Similarly, from the second 30 observations, three would be about (0 − .9)/√[.9(1 − .9)] = −3 and 27 would be about (1 − .9)/√[.9(1 − .9)] = .333. It is disturbing that this perfect model with a perfect fit has what usually would be considered large residuals. Using the formula for X2 from Section 2.6, the Pearson statistic will be about

   X2 ≈ 6(3²) + 54(.333²) = 60 ,

which is the number of cases. No matter how accurate the model is, the Pearson statistic for these observations will still be about 60. It will never get small. A similar phenomenon holds for the likelihood ratio test statistic. Under the same circumstances as discussed above,

   G2 ≈ 2[6 log(1/.1) + 54 log(1/(1 − .1))] = 33.32 .

(Asymptotic theory does not hold for these tests, so there is no reason to expect X2 and G2 to be about the same.) Note that a constant model for these data would have p̂i ≈ .5 and G2 ≈ 2(60) log(1/.5) = 83.18, so for this essentially perfect model which is fit perfectly,

   R2 = (83.18 − 33.32)/83.18 = .599;
not very high under the circumstances. In fact, since the probabilities in this example were chosen to be quite extreme (near 0 and 1), the observations have unusually low variability, which should actually inflate R2 . The moral is simply that one should not expect to see the very high R2 ’s that one sometimes gets in standard regression. No matter how accurate the fitted model, the test statistics will not become arbitrarily small nor will R2 approach 1. The likelihood ratio statistic G2 will become small only as the intrinsic variability of the true model decreases, i.e., as all probabilities approach 0 or 1. The Pearson statistic evaluated at the true model will remain near I, the number of cases. There are two morals to all of this. First, R2 type measures can be used to measure relative goodness of fit but may be misleading if used to measure absolute goodness of fit. Models with low R2 ’s can fit great. Models with high R2 ’s can exhibit lack of fit. Second, residuals from logistic regression cannot be used without special consideration given to the 0-1 nature of the data (cf. Jennings, 1986).
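For readers who want to check the residual and Pearson calculations in Example 4.2.1, here is a small Python sketch (mine, not the author's) of the hypothetical perfect fit:

import math

# 30 cases with p about .1 (roughly 3 of them have y = 1) and
# 30 cases with p about .9 (roughly 27 of them have y = 1).
cases = [(1, 0.1)] * 3 + [(0, 0.1)] * 27 + [(1, 0.9)] * 27 + [(0, 0.9)] * 3

crude = [(y - p) / math.sqrt(p * (1 - p)) for y, p in cases]
X2 = sum(r ** 2 for r in crude)

print(round(crude[0], 3))   # 3.0, the residual for a y = 1 case with p = .1
print(round(crude[3], 3))   # -0.333, the residual for a y = 0 case with p = .1
print(round(X2, 1))         # 60.0, the number of cases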
Example 4.2.2. Using model (4.1.4), one finds that of the 200 cases in the Chapman data, 26 cases had a crude standardized residual in excess of .97. The 26 cases were precisely the 26 cases that had coronary incidents. A method for identifying unusual cases that indicates that every case with a coronary event is unusual leaves something to be desired.
4.2.1 Checking Lack of Fit
Methods of checking for lack of fit in logistic regression have been discussed by Tsiatis (1980), Landwehr, Pregibon, and Shoemaker (1984), and Fienberg and Gong (1984). Their approaches are based on clustering near replicates of the regression variables so that something akin to pure error can be identified. We present here a method of evaluation inspired by standard residual analysis. Rather than clustering near replicates, it clusters cases with similar p̂i's.
A standard method for identifying lack of fit in regression analysis is to plot the residuals against the predicted values. This plot should form a structureless horizontal band about zero (cf. Christensen, 1996b, Section 13.4). An equivalent plot would be the observations versus the predicted values. This plot should form a structureless band about the line with slope 1 and intercept 0. For logistic regression, a plot of observations versus predicted values should show predicted values near 0 having most observations equal to 0, and predicted values near 1 having most observations equal to 1; predicted values near .5 should have about equal numbers of observations that are 0s and 1s, etc. Such a plot could be difficult to interpret visually, so let's get cruder. Break the predicted values into, say, 10 intervals: [0,.1), [.1,.2), [.2,.3), . . . , [.9,1]. For each interval, find the number of cases that have p̂ in the interval and the number of those cases that have y = 1. The midpoint of the interval multiplied by the number of cases should be close to the number of cases with y = 1. The fit within each interval can be summarized by looking at components of a Pearson-like statistic

   [(cases with y = 1) − (total interval cases)(midpoint)]² / [(total interval cases)(midpoint)] .

These case values can be added to obtain a summary measure of the goodness of fit.
Example 4.2.3. For the Chapman data using model (4.1.4), no pˆi values are greater than .6. Intervals, total cases, coronary incidents, expected values (cases times midpoints), and components are listed below.
p̂ Interval    Number of Cases    Coronary Incidents    Cases × Midpoint    Components
[0,.1)              99                    5                   4.95            0.0005
[.1,.2)             60                   10                   9               0.1111
[.2,.3)             22                    2                   5.5             2.2273
[.3,.4)             10                    5                   3.5             0.6429
[.4,.5)              7                    2                   3.15            0.4198
[.5,.6)              2                    2                   1.1             0.7364
                                                                       Total = 4.138
The component for the interval [.2,.3) is much larger than the others. If there is a lack of fit in evidence, it is most likely that people with estimated probabilities of a coronary incident between .2 and .3 actually have a considerably lower chance of having a coronary. On the other hand, if the components are at all analogous to χ2 ’s, even the interval [.2,.3) is not clear evidence of lack of fit. Based on this rather questionable comparison, neither the individual value 2.2273 nor the total 4.138 are unreasonable. A possible improvement to this technique is, rather than taking the midpoint of the interval, to average the pˆ’s in the interval. For the interval [.2,.3), the average of the 22 pˆ’s is .24045 with a corresponding cell component of 2.0461. This indicates even less lack of fit.
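The interval-based check is easy to automate. The sketch below is my own illustration of the midpoint version described above; applied to the Chapman data and model (4.1.4) it should reproduce the components in the table, with a [.2,.3) component of about 2.23 and a total of about 4.14.

def binned_fit_check(p_hat, y):
    # Group cases into the intervals [0,.1), [.1,.2), ..., [.9,1] of fitted
    # probability and compare the observed successes in each interval with
    # (total interval cases)(midpoint), as in the Pearson-like components above.
    components = {}
    for k in range(10):
        lo, hi = k / 10, (k + 1) / 10
        idx = [i for i, p in enumerate(p_hat)
               if lo <= p < hi or (k == 9 and p == 1.0)]
        if not idx:
            continue
        observed = sum(y[i] for i in idx)
        expected = len(idx) * (lo + hi) / 2     # cases times midpoint
        components[(lo, hi)] = (observed - expected) ** 2 / expected
    return components, sum(components.values())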
4.3 Logistic Regression Diagnostics
We assume that the reader is familiar with diagnostics for standard regression. The diagnostics for logistic regression to be discussed are analogues of common methods used for standard regression. In standard regression, some of the usual diagnostic statistics are the residuals, standardized (studentized) residuals, standardized predicted (deleted) residuals (t residuals), Cook’s distances, and the leverages, i.e., the diagonal elements of the projection operator (“hat matrix”). A primary use of residuals is in detecting outliers. However, as we have seen, for data consisting of 0s and 1s, the detection of outliers presents some unusual problems. When there are only two outcomes, it is difficult to claim that seeing either of them constitutes an outlier. If we have too many 0s or 1s in situations where we would not expect them (e.g., too many 1s in situations that we think have a small probability of yielding a 1), then we have a problem, but the problem is best thought of as a lack of fit. Moreover, we have seen that with 0-1 data, perfectly reasonable observations can have “unusually large” residuals. Another use for residuals is in checking normality. For log-linear models, this can be thought of as checking how well the asymptotic theory holds.
Unlike ANOVA type log-linear models, for 0-1 data the residuals are not asymptotically normal, so, again, the usual residual analysis is not appropriate. All in all, the residuals (and modified residuals) do not seem very useful in and of themselves. We will concern ourselves with examining leverages and influential observations. In particular, we will examine the logistic regression analogue of Cook’s distance that was discussed in Pregibon (1981) and Johnson (1985). There are two questions frequently asked about influential observations. One is, “In what sense is this observation influential?” The question is crucial. Observations are not “influential” in a vacuum. They may be influential to the estimated regression parameters; they may be influential to the fitted probabilities. They may be influential to just about anything. When examining influential observations, one first decides on the important aspects of the model and then examines influence measures appropriate to those aspects. The author agrees with Johnson (1985) that, typically, the primary concern should be about influence on the fitted probabilities. In logistic regression, Cook’s distance is a direct influence measure relative to the fitted regression coefficients, but, as Johnson has shown, it can be viewed as an approximation to his Kullback-Leibler (K-L) divergence measure and Cook and Weisberg’s (1982) likelihood distance measure. Although the author’s inclination is toward the K-L divergence measure, the absence of readily available computer software dictates that the discussion be focused on Cook’s distance. The second frequently asked question about influential observations is, “Given some influential observations, what do you do about them?” My answer is that you should worry about them. Primarily, you should worry about whether it is more appropriate to ignore the fact that they are influential or eliminate their influence by deleting them from the data and then refitting the model. Of course, all of this is complicated by the fact that whether or not a case is influential depends on what model is being fitted. In the end, the answer to this question must depend on the data and the purpose of the analysis. Many standard logistic regression programs provide diagnostics. For example, SAS PROC LOGISTIC and BMDP-LR both provide them and they can also be obtained from GLIM. Some sample commands are given in Subsection 4.4.2. In addition, many standard regression programs routinely provide diagnostics and these can also be used to obtain logistic regression diagnostics because the Newton-Raphson method of fitting logistic regression models amounts to doing a series of weighted regressions. Standard diagnostic quantities for each case are the log odds
   log[p̂i/(1 − p̂i)] = β̂0 + β̂1 Agi + β̂2 Chi + β̂3 Wi ,
the predictive probability p̂i, the leverage âii, the large sample standard error of p̂i, which is √[p̂i(1 − p̂i)(1 − âii)], the standardized residual

   ri = (yi − p̂i) / √[p̂i(1 − p̂i)(1 − âii)] ,

the Pearson residual

   r̃i = (yi − p̂i) / √[p̂i(1 − p̂i)] ,

the square of which is the ith component of Pearson's chi-square, the deviance residual

   ±√{2[yi log(yi/p̂i) + (1 − yi) log((1 − yi)/(1 − p̂i))]}

where the sign is taken to be the same as the sign of yi − p̂i, and a version of Cook's distance for logistic regression. These formulae are for yi either 0 or 1 and, as discussed earlier, residuals are not very interesting in this case. When yi is a binomial count between 0 and Ni, the residuals can be useful for reasonably large Ni. The standardized residuals then become

   ri = (yi − Ni p̂i) / √[Ni p̂i(1 − p̂i)(1 − âii)]

with similar adjustments to other quantities, cf. Subsection 4.4.1.
It should be mentioned that, properly defined, the logistic regression version of Cook's distance for the ith case requires computation of both the estimated logistic regression coefficients and the estimated coefficients when the ith case is deleted from the data. Both sets of estimates require iterative computations and it will be desired to investigate Cook's distance for every case in the data. This can be expensive. To reduce costs, it is common practice to use estimates when the ith case is deleted that are the result of one iteration of Newton-Raphson with starting values taken as the estimates from the full data set. In other words, this involves doing only one weighted regression. These one-step procedures will be the subject of our discussion.
We now present the procedure for obtaining diagnostic values in the context of fitting the model

   log[pi/(1 − pi)] = β0 + β1 Agi + β2 Chi + β3 Wi

to the Chapman data. Specifically, we discuss how to use the diagnostic procedures in standard regression to obtain diagnostics for logistic regression. Using a logistic regression program, we get the fit

Variable    Parameter   Estimate     SE
Intercept   β0          −9.255       2.061
Ag          β1           0.05300     0.02074
Ch          β2           0.006517    0.003575
W           β3           0.01754     0.008243
Having fit the model, create a data file that contains Ag, Ch, W, y, and p̂, where y consists of the 0-1 counts yi and p̂ is a new variable that consists of the 200 values for p̂i. This data file is used as input into a regression program that allows (a) transformations of variables, (b) weighted regression, (c) computation of leverages, and (d) computation of Cook's distances. Using the transformation capability, define weights for the regression, say RWT, with RWTi = p̂i(1 − p̂i). Also, define two variables Y0 and Y with Y0i = log[p̂i/(1 − p̂i)] and Yi = Y0i + (yi − p̂i)/RWTi. See Subsection 4.4.1 for computing methods when the data are not binary. The variable Y0i can be used to help verify that things are working as they should.
Use the regression program to fit

   Y0i = β0 + β1 Agi + β2 Chi + β3 Wi + ei

with the weights RWTi. The regression coefficients from this fit should be identical to those obtained from the logistic regression program. Now fit

   Yi = β0 + β1 Agi + β2 Chi + β3 Wi + ei

with weights RWTi. This gives estimated regression coefficients that are one additional step of the Newton-Raphson algorithm beyond those obtained by the logistic regression program. In this example, the regression gives

Variable    Parameter   Estimate     SE
Intercept   β0          −9.256       2.066
Ag          β1           0.05300     0.02077
Ch          β2           0.006518    0.003578
W           β3           0.017539    0.008248

MSE = .9942
The parameter estimates are very close to the original logistic regression estimates, but need not be identical to them. (They are close enough that the logistic regression program concluded that the results had converged.) The MSE is close to one, so the reported standard errors are also very close in the regression and the logistic regression. In logistic regression, there is no variance parameter to be estimated, as there is in standard regression, so for logistic regression, anything from a standard regression that involves the MSE must have that involvement eliminated. Appropriate standard errors are the reported standard errors divided by √MSE. From this standard regression fit, we can also obtain leverages, standardized residuals, and Cook's distances. The leverages are precisely those suggested by Pregibon (1981). The standardized residuals and Cook's distances
reported by a standard regression program also involve adjustments for the MSE. For logistic regression, those adjustments must be eliminated. The reported Cook's distances from the standard regression are essentially the one-step logistic regression Cook's distances. The difference in the Cook's distances is that the reported values are the true one-step estimates divided by the MSE. In the discussion below, the reported Cook's distances have been multiplied by MSE to give the appropriate values. In any case, because the values from the program are all being divided by the same number, to make comparisons among cases the values could be used without modification. Another difference is that in standard regression, Cook's distance involves dividing by the number of regression parameters. Often in logistic regression programs, this division is not used. So in this example, to get the Cook's distances given by, say, BMDP-LR, the Cook's distances reported by the regression program have to be multiplied by 4(MSE). For what they are worth with 0-1 data, the standardized residuals reported by the regression program times √MSE are the correct standardized residuals.
The nine cases with the highest leverages are

Case   Leverage
19 .104
38 .081
41 .149
84 .079
108 .147
111 .062
116 .067
153 .090
157 .093
The two that really stand out are Cases 41 and 108. Case 41 has Ag = 40, W = 169, and Ch = 520. This is an exceptionally high cholesterol value. Of the 200 cases, only 9 have Ch values over 400 and only 3 have Ch values over 428. These are Case 116 with Ch = 453, Case 38 with Ch = 474, and Case 41. A similar phenomenon occurs with Case 108. It has Ag = 51, W = 262, and Ch = 269. The weight of 262 pounds is extremely high within the data set. Of the nine cases with high leverage, only Cases 19, 41, and 111 correspond to men that had coronary incidents.
Denote the Cook's distances as Ci's. There are 32 cases with Ci ≥ .01. These include all 26 of the individuals who had coronary incidents. Of the other six cases, four were also among the highest leverage cases and the remaining two also had reasonably high leverages. Only four cases had Ci ≥ .05. These are

Case    Ci       Leverage
41      .112     .149
86      .078     .008
126     .079     .042
192     .064     .022
All of these correspond to individuals with coronary incidents. If we compare the values 4 Ci to a χ2 (4) distribution as suggested by Johnson (1985), we can get some global idea of the amount of influence each case is having.
The conclusion is that none of the cases has much effect on the fitted model. The multiplier and df of 4 for calibrating Ci were the number of regression coefficients in the logistic regression. Of course, to compare these values to a chi-squared distribution, it is vital that the Ci's be computed properly; i.e., values from a standard regression program have to be multiplied by MSE.
Case 41 is easily the most influential, so it is of interest to examine what happens if this case is deleted from the data. For the most part, the fitted pi's are similar. The estimated coefficients with Case 41, without Case 41, and standard errors without Case 41 are given below.

Variable    Estimate with Case 41    Estimate without Case 41    SE without Case 41
Intercept        −9.255                   −8.813                      2.058
Ag                0.05300                  0.05924                    0.02138
Ch                0.006517                 0.004208                   0.003881
W                 0.01754                  0.01714                    0.008216
The estimates have changed but not dramatically. Perhaps the most striking aspect is the change in the evidence for the effect of cholesterol. With Case 41 deleted, the estimate divided by the standard error is .004208/.003881 = 1.08. With Case 41 included, this value is .006517/.003575 = 1.82. The inclusion of cholesterol in the model was questionable before; without Case 41, there seems little need for it. With Case 41 deleted, G2(Ag, Ch, W) = 132.8 on 195 degrees of freedom. G2(Ag, W) = 134.0 on 196 degrees of freedom. The difference in G2's is 134.0 − 132.8 = 1.2 with 1 degree of freedom. There is virtually no evidence for including cholesterol in the model. The estimated coefficients without Case 41 using only Ag and W are

Variable    Estimate    SE
GM          −7.756      1.745
Ag           0.06675    0.02013
W            0.01625    0.008042
Thus, we have found that almost all of the evidence for a cholesterol effect is based on the fact that one individual with a very high cholesterol level had a coronary incident. We can just drop that individual and state that for more moderate levels of cholesterol, the numerical level of cholesterol does not enhance our predictive ability. But this conclusion is already indicating that qualitatively different things happen at different cholesterol levels. Why not try to incorporate that idea into the models being considered? One could group cases based on cholesterol levels and fit different models to different groups. Rather than forming groups, perhaps cholesterol levels should be transformed before being used in the model. Whatever course
the eventual analysis takes, Case 41 has directed our attention to the role of cholesterol. We now must question whether the current forms of our models are adequate for approximating the effect of cholesterol or whether the effect of cholesterol may be an oddity caused by one individual who just happened to have a coronary incident and very high cholesterol.
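The one-step diagnostics used throughout this section can also be computed directly rather than through a standard regression package. The following Python sketch is my own illustration of the same weighted-regression idea (Pregibon-style leverages, the standardized residuals defined earlier, and a one-step Cook's distance that, like a standard regression program, divides by the number of regression parameters); it is not the book's software.

import numpy as np

def one_step_diagnostics(X, y, p_hat):
    # X: n x k design matrix including the intercept column,
    # y: 0-1 responses, p_hat: fitted probabilities from the logistic fit.
    w = p_hat * (1 - p_hat)                         # RWT_i
    Y = np.log(p_hat / (1 - p_hat)) + (y - p_hat) / w
    W = np.diag(w)
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    beta = XtWX_inv @ X.T @ W @ Y                   # one more Newton-Raphson step
    H = np.sqrt(W) @ X @ XtWX_inv @ X.T @ np.sqrt(W)
    lev = np.diag(H)                                # leverages a_ii
    r = (y - p_hat) / np.sqrt(w * (1 - lev))        # standardized residuals
    cook = r ** 2 * lev / ((1 - lev) * X.shape[1])  # one-step Cook's distances
    return beta, lev, r, cook

Comparing k times Ci (here 4Ci) to a χ2(k) distribution then gives the global calibration suggested by Johnson (1985).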
4.4 Model Selection Methods
Formal model selection methods can be based either on stepwise methods or finding best subsets of variables based on some criterion (e.g., Akaike's information). Fitting lots of models can be very expensive because each fit requires an iterative procedure. Stepwise methods are sequential, hence cheaper than best subset methods. Standard computer programs are available for doing stepwise logistic regression, e.g., BMDP-LR and SAS PROC LOGISTIC. These operate in a fashion similar to standard regression (cf. Christensen, 1996a, 1996b; Draper and Smith, 1981; Weisberg, 1985). They are also very similar to the methods discussed in Sections 6.1 and 6.3. We will not give a detailed discussion.
To the best of the author's knowledge, the only standard computer program available for doing best subset logistic regression is SAS PROC LOGISTIC. This procedure is based on doing score tests, a subject that will be discussed below. In addition, programs for doing standard best subset selection can be used with one-step estimates of logistic regression parameters to identify good candidate models. To do this, the best subset program must allow weighted regression.

Example 4.4.1. Model (4.1.1) was fitted to the Chapman data to obtain p̂i's. We then defined two variables: a weight variable RWTi = p̂i(1 − p̂i) and a dependent variable Yi = log[p̂i/(1 − p̂i)] + (yi − p̂i)/RWTi. The best subset regression program BMDP-9R was used employing the weights RWT to get best subsets of

   Yi = β0 + β1 Agi + β2 Si + β3 Di + β4 Chi + β5 Hi + β6 Wi + ei .

(Note the similarities to the procedure for getting diagnostic statistics.) The fits used in comparing various models are not fully iterated maximum likelihood fits. They involve one step of the Newton-Raphson algorithm starting at the maximum likelihood fit for model (4.1.1). Determinations
of best-fitting models are based on residual sums of squares rather than G2's. Based on the Cp statistic, the five best-fitting models are

Variables        Cp
Ag, Ch, W       1.66
Ag, W           2.88
Ag, Ch, H, W    3.13
Ag, S, Ch, W    3.49
Ag, D, Ch, W    3.59
The last three models are among the best because adding a worthless variable to the good model based on Ag, Ch, and W cannot do too much harm. The two most interesting models are precisely those identified earlier by less systematic means. In fact, in this example, the Cp statistics are very similar to the corresponding A∗ values. Of course, the Cp statistics are based on one-step fits. Below, we compare the MLEs for the model with Ag, Ch, and W to the one-step fit.

Variable    MLE           One-Step
Intercept   −9.2559       −9.21822
Ag           0.053004      0.0529624
Ch           0.0065179     0.00647380
W            0.017539      0.0174699
(Note that the MLEs differ slightly from the values given previously. The earlier values were obtained from the program GLIM. These values were obtained from BMDP-LR. It is normal for different [correct] programs to give slightly different answers.) The Cp ’s are not based on fully iterated fits, so it is probably a good idea to consider a larger number of models than one ordinarily would in standard regression. One hopes that the best fitting fully iterated models will be among the best fitting one-step models, but the relationship need not be exact. This method of obtaining best subsets using one-step approximations is very natural, so it is not surprising that it has been discovered independently several times. The earliest references of which I am aware are Nordberg (1981, 1982). As mentioned earlier, to the best of my knowledge, the only method for best subset regression that appears in a standard computer package is a method in SAS PROC LOGISTIC that gives the models with the best score tests. Score tests are arrived at in a similar fashion to the procedures discussed above. With regard to Example 4.4.1, to get the score test for dropping all of Ag, S, D, Ch, H, and W , fit the full regression model as indicated in the example but with one exception. The exception is that
RWT and Y are defined as indicated using the p̂i's, but in Example 4.4.1, the p̂i's were obtained from a maximum likelihood fit of the full model, whereas for a score test, the p̂i's are obtained from a maximum likelihood fit of the model that contains only an intercept. The score test statistic for whether the six variables can be dropped is just the sum of squares for regression in the six-variable weighted regression model. The statistic is compared to a χ2(6) distribution. One nice thing about score tests is that the p̂i's depend just on the intercept-only model, so getting the score statistics for testing any model against the intercept-only model is merely a matter of fitting a regression on that model. In other words, with the same definitions for RWTi and Yi, one can test the full model as well as models such as

   Yi = β0 + β1 Agi + β4 Chi + ei

and

   Yi = β0 + β1 Agi + β4 Chi + β6 Wi + ei

simply by fitting the regressions and evaluating the sums of squares for regression. Of course, the method presented in Example 4.4.1 is essentially the same except that the p̂i's are taken from the (presumably more accurate) full model rather than the intercept-only model (which nobody takes seriously as a model). Also, some account of model size is being taken by looking at Cp statistics. If one specified best subset selection using the R2 criterion in the regression program, the only difference in the Nordberg and score procedures for choosing the best models would be in the choice of p̂i's.
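As a rough sketch (my own, written under the construction just described, and not part of the original text) of how such a score statistic could be computed: fit the intercept-only model, build RWT and Y from it, run the weighted regression on the candidate predictors, and take the weighted regression sum of squares.

import numpy as np

def score_statistic(X1, y):
    # X1: n x q matrix of candidate predictors (no intercept column),
    # y: 0-1 responses. The p_hat's come from the intercept-only fit, p_hat = ybar.
    n = len(y)
    p_hat = np.full(n, y.mean())
    w = p_hat * (1 - p_hat)                      # RWT_i
    Y = np.log(p_hat / (1 - p_hat)) + (y - p_hat) / w
    X = np.column_stack([np.ones(n), X1])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)
    fitted = X @ beta
    Ybar = np.sum(w * Y) / np.sum(w)             # weighted mean of Y
    return np.sum(w * (fitted - Ybar) ** 2)      # weighted regression sum of squares

# The statistic is compared to a chi-squared distribution with q degrees of freedom,
# q being the number of candidate predictors.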
4.4.1 Computations for Nonbinary Data
In this section and Section 3, we have considered only the case where the counts are either 0s or 1s. In Section 2.6 and in later chapters, we consider logistic models and/or theory for data involving counts that may be greater than 1. The computing methods discussed here are easily adapted to those situations. If the ith case has Ni trials (i.e., the possible values for yi are 0, 1, . . . , Ni), then the appropriate weights are RWTi = Ni p̂i(1 − p̂i) and the dependent variable in the regressions is

   Yi = log[p̂i/(1 − p̂i)] + (yi − Ni p̂i)/RWTi .

Note that p̂i/(1 − p̂i) = Ni p̂i/(Ni − Ni p̂i). Often logistic regression computer programs will provide the values Ni p̂i rather than p̂i as diagnostics. Finally, if all of the Ni's are large, then looking
at the standardized residuals becomes reasonable. Also, when all the Ni ’s are large, tests against the saturated model can be validly compared to a chi-squared distribution.
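As a small illustration (Python again, not from the book; the helper name is mine), the working weights and dependent variable for counts yi out of Ni trials are computed exactly as in the formulas above:

import numpy as np

def working_quantities(y, N, p_hat):
    # y_i successes in N_i trials; p_hat_i fitted probabilities from the current model.
    y, N, p_hat = (np.asarray(a, dtype=float) for a in (y, N, p_hat))
    rwt = N * p_hat * (1 - p_hat)                            # RWT_i = N_i p-hat_i (1 - p-hat_i)
    Y = np.log(p_hat / (1 - p_hat)) + (y - N * p_hat) / rwt  # dependent variable for the weighted regression
    return rwt, Y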
4.4.2
Computer Commands
Below are SAS, BMDP, and GLIM commands for obtaining a logistic regression. The data are in a file 'chapman.dat' with eight columns: the case index, Ag, S, D, Ch, H, W, and Cnt. The file looks like this.
  1  44 124 80 254 70 190 0
  2  35 110 70 240 73 216 0
  3  41 114 80 279 68 178 0
  4  31 100 80 284 68 149 0
     . . . data continue . . .
199  50 128 92 264 70 176 0
200  31 105 68 193 67 141 0
We begin with SAS commands. Perhaps the simplest way to fit the logistic regression model (4.1.4) in SAS is to use PROC GENMOD. The first line controls printing. The next four lines involve defining and reading the data and creating a variable "n" that gives the total number of possible successes for each case. The remaining lines specify the model and that a logistic regression is to be performed.
options ps=60 ls=72 nodate;
data chapman;
   infile 'chapman.dat';
   input ID Ag S D Ch H W Cnt;
   n = 1;
proc genmod data=chapman;
   model Cnt/n = Ag Ch W / link=logit dist=binomial;
run;
A more powerful program for logistic regression is PROC LOGISTIC.
options ps=60 ls=72 nodate;
data chapman;
   infile 'chapman.dat';
   input ID Ag S D Ch H W Cnt;
proc logistic data=chapman descending;
   model Cnt=Ag Ch W / waldcl waldrl plcl influence iplots lackfit rsq;
   output out=chdiag predicted=phat;
run;
proc print data=chdiag;
run;
proc logistic data=chapman descending;
   model Cnt=Ag S D Ch H W / selection = score best = 3 details;
run;
This program includes two calls of PROC LOGISTIC. The first is a standard procedure for obtaining a logistic regression. The second involves model selection. On the line with "proc logistic", one specifies the data being used and the command "descending". The command "descending" is used so that the program models the probabilities of events coded as 1 rather than events coded as 0. In other words, it makes the program model the probability of a coronary incident rather than the probability of no coronary incident. Standard output includes the estimated regression coefficients, standard errors, values of z², P values, and the e^β̂k's. The model statement is straightforward, specifying the dependent variable and the predictor variables. After the / on the model line, options are specified. "waldcl" causes the program to give the confidence intervals β̂k ± 1.96 SE(β̂k); call the interval (a, b). "waldrl" causes the program to give the values e^β̂k and intervals (e^a, e^b). "plcl" gives alternative confidence intervals for the βk's based on profile likelihoods. The command "influence" causes diagnostics to be presented, basically everything discussed by Pregibon (1981). This includes leverages, Cook's distance Ci (the same version as BMDP presents), and something called Cbar, which is (1 − âii)Ci. Index plots are given by specifying "iplots". For binary data, G² does not give a valid lack of fit test; "lackfit" gives a test similar in spirit to that discussed in Section 2. "rsq" gives values for R² and Adj. R², but R² is defined differently than it is here. The "output" command creates a SAS data set containing the diagnostics, so they can then be printed or manipulated in other ways.
The second proc logistic line was set up to do model selection. It uses the "selection" option. This can be set to "forward", "backwards", "stepwise", or "score". With the score option and "best = 3", the three one-variable models with the best score statistics, the three best two-variable models, the three best three-variable models, and so on, are all presented.
For BMDP, the commands are similar. You actually run the program BMDP-LR, so no statement of this being a logistic regression procedure is needed. Again the data are specified. The variables to be used are specified along with the dependent variable and the model. Interval and categorical variables must be specified prior to specifying the model.
/ INPUT     FILE = 'CHAPMAN.DAT'.
            FORMAT = FREE.
            VARIABLES = 8.
/ VARIABLE  NAMES = Index, Ag, S, D, Ch, H, W, Cnt.
            USE = Ag TO Cnt.
/ REGRESS   DEPENDENT = Cnt.
            INTERVAL = Ag, S, D, Ch, H, W.
            MODEL = Ag, Ch, W.
            MOVE = 0, 0, 0.
            METHOD = MLR.
/ PRINT     CELLS = MODEL.
/ END
The program is actually set up to do forward, backward, or stepwise regression. The “move” command was used to make the program fit only the model desired. Diagnostics are obtained by the “cells = model” specification. Finally, for anyone who might want to use GLIM (still one of my favorites): $units 200$ $data I Ag S D Ch H W Cnt $ $dinput 6$ $calc n = 1$ $yvar Cnt$ $error binomial n$ $fit Ag+Ch+W$ $display e$ $extract %vl$ $calc ahat=%vl*%wt/%sc$ $calc r=(Cnt-%fv)/%sqrt(%sc*(1-ahat))$ $calc C=(ahat/(1-ahat))*(r**2)$ $look ahat r C$
(leverages) (std. resids) (Cook’s distances)
“units” specifies the number of cases in the regression. After “dinput 6”, DOS versions of GLIM prompt you for a file name, i.e., “chapman.dat”. The Cook’s distances are the same as those used in SAS and BMDP and 4 times those defined here. GLIM is similar in spirit to PROC GENMOD.
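The same quantities can be computed directly from any fitted binary logistic regression; the sketch below (Python, not from the book; the function name is mine) mirrors the GLIM calculations above, with the fitted probabilities playing the role of %fv and p̂i(1 − p̂i) playing the role of %sc.

import numpy as np

def logistic_case_diagnostics(y, X, p_hat):
    # X: model matrix including the intercept column; p_hat: fitted probabilities.
    y, p_hat = np.asarray(y, dtype=float), np.asarray(p_hat, dtype=float)
    X = np.asarray(X, dtype=float)
    v = p_hat * (1 - p_hat)                              # estimated binomial variances
    M = np.linalg.inv(X.T @ (X * v[:, None]))            # (X'VX)^{-1}
    ahat = np.sum((X @ M) * (X * v[:, None]), axis=1)    # leverages a-hat_ii
    r = (y - p_hat) / np.sqrt(v * (1 - ahat))            # standardized residuals
    C = (ahat / (1 - ahat)) * r**2                       # Cook's distances (the SAS/BMDP/GLIM version)
    return ahat, r, C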
4.5
ANOVA Type Logit Models
In this section, analysis of variance type models for the log odds of a two-category response variable are discussed. We begin with a standard example.
Example 4.5.1. Consider the muscle tension data of Example 3.7.1. Recall that the factors and levels are
Factor                      Abbreviation   Levels
Change in muscle tension         T         High, Low
Weight of muscle                 W         High, Low
Muscle type                      M         Type 1, Type 2
Drug                             D         Drug 1, Drug 2
and the data are
                                           Drug (k)
Tension (h)   Weight (i)   Muscle (j)   Drug 1   Drug 2
High          High         Type 1          3       21
                           Type 2         23       11
              Low          Type 1         22       32
                           Type 2          4       12
Low           High         Type 1          3       10
                           Type 2         41       21
              Low          Type 1         45       23
                           Type 2          6       22
Change in tension can be viewed as a response factor. Weight, muscle type, and drug are all explanatory variables. Thus, it is appropriate to model the log odds of having a high change in muscle tension. The three explanatory factors affect the log odds for high tension change. The most general model available is to use a model that includes all main effects and all interactions between the explanatory factors, i.e.,
log(p1ijk/p2ijk) = G + Wi + Mj + Dk + (WM)ij + (WD)ik + (MD)jk + (WMD)ijk .      (1)
As usual, this is equivalent to a model with just the highest-order interactions; in this case, log(p1ijk /p2ijk ) = (W M D)ijk . Model (1) can be fit by maximum likelihood. Reduced models can be tested. Estimates and asymptotic standard errors can be obtained. In other words, the analysis of model (1) is similar to that of an (unbalanced) standard ANOVA model or a log-linear model. Of course, the analysis of model (1) should be similar to that of a loglinear model analysis because in a profound sense (alluded to in Section 2.6
and discussed in detail in Chapter 11), model (1) is precisely the same model as the saturated log-linear model, i.e., log(mhijk ) = (τ ωµδ)hijk ,
(2)
where we have used Greek equivalents of T, W, M, and D to emphasize that the parameters in (1) and (2) are different. Note that in both models (1) and (2), there is at least one parameter on the right-hand side for every term on the left-hand side. In both models, the data are fitted perfectly. In examining the correspondence between logit models and log-linear models, it is crucial to keep in mind the fact that this is a prospective study, so p1ijk /p2ijk = m1ijk /m2ijk . Now consider a more interesting logit model than the saturated logit model (1). Consider, say, log(p1ijk /p2ijk ) = Wi + (M D)jk
(3)
where we have eliminated the redundant terms G, Mj , and Dk and assumed that the terms (W M )ij , (W D)ik , and (W M D)ijk add nothing to model (1). We wish to find the corresponding log-linear model. Model (3) is a model that explains tension change odds, so an effect, say Wi , alters the odds of high tension change. The odds cannot be altered without altering both the probability of high tension change and the probability of low tension change; thus, Wi affects both of these probabilities. In other words, the probabilities (and the expected cell counts) depend on both the tension change level T and the weight W. It follows that the logit effect Wi corresponds to a log-linear model interaction, say (τ ω)hi . Similarly, the logit effect (M D)jk corresponds to the interaction (τ µδ)hjk . As shown in Section 11.1, the log-linear model must contain (ωµδ)ijk terms, so model (3) is equivalent to log(mhijk ) = (τ ω)hi + (τ µδ)hjk + (ωµδ)ijk .
(4)
Inclusion of the terms (ωµδ)ijk is required to deal with the sampling scheme when thinking of the sampling as product-multinomial (i.e., independent binomials) for every combination of the explanatory factors. This mental device was discussed in the subsection of the introduction on retrospective versus prospective studies. In fact, these ideas extend to all logit and logistic regression models. Note that the shorthand notation used for ANOVA type log-linear models is easily adapted to ANOVA type logit models. Using this shorthand, the correspondence between logit models and log-linear models is illustrated in Table 4.1. In each case, the effects in the logit model correspond to log-linear model effects that are the interaction between T and the logit model terms. In
addition, the log-linear models always include the three-way interaction between the explanatory factors. Note that models (3) and (4) correspond to line 7 of Table 4.1. As another example, line 3 of the table indicates that the model log(p1ijk /p2ijk ) = G + Wi + Mj + Dk + (W M )ij + (M D)jk is equivalent to the model log(mhijk )
= γ + ωi + µj + δk + (ωµ)ij + (ωδ)ik + (µδ)jk + (ωµδ)ijk + τh + (τ ω)hi + (τ µ)hj + (τ δ)hk + (τ ωµ)hij + (τ ωδ)hik .
Of course, the log-linear model can be written much more simply as log(mhijk ) = (ωµδ)ijk + (τ ωµ)hij + (τ ωδ)hik .
TABLE 4.1. Correspondence Between Some Logit and Log-Linear Models
      Logit Model      Log-Linear Model
 1)   {WM}{WD}{MD}     [WMD][TWM][TWD][TMD]
 2)   {WM}{WD}         [WMD][TWM][TWD]
 3)   {WM}{MD}         [WMD][TWM][TMD]
 4)   {WD}{MD}         [WMD][TWD][TMD]
 5)   {WM}{D}          [WMD][TWM][TD]
 6)   {WD}{M}          [WMD][TWD][TM]
 7)   {MD}{W}          [WMD][TMD][TW]
 8)   {W}{M}{D}        [WMD][TW][TM][TD]
 9)   {W}{M}           [WMD][TW][TM]
10)   {W}{D}           [WMD][TW][TD]
11)   {M}{D}           [WMD][TM][TD]
Given the log-linear models, the logit models can be obtained by subtraction. Using model (4), observe that log(p1ijk /p2ijk )
= log(m1ijk /m2ijk ) = log(m1ijk ) − log(m2ijk ) = (τ ω)1i + (τ µδ)1jk + (ωµδ)ijk − (τ ω)2i − (τ µδ)2jk − (ωµδ)ijk = [(τ ω)1i − (τ ω)2i ] + [(τ µδ)1jk − (τ µδ)2jk ] = Wi + (M D)jk
where Wi ≡ [(τ ω)1i − (τ ω)2i ] and (M D)jk ≡ [(τ µδ)1jk − (τ µδ)2jk ]. Thus, model (4) implies model (3). Conversely, as will be seen in Chapter 11, the logit model (3) implies the log-linear model (4).
It is interesting to note that models other than (4) will also imply a logit structure of log(p1ijk/p2ijk) = Wi + (MD)jk. Any reduced model relative to (4) where the reduction involves only the (ωµδ)ijk terms also implies that log(p1ijk/p2ijk) = Wi + (MD)jk. For example, if log(mhijk) = (τω)hi + (τµδ)hjk + (ωµ)ij + δk, then log(p1ijk/p2ijk) = log(m1ijk) − log(m2ijk) = Wi + (MD)jk. Such models imply that the logit structure holds plus some additional conditions on the explanatory factors. The logit structure of model (3) without any other conditions is equivalent to model (4), so if one is fitting logit models, the corresponding log-linear model must contain the three-factor effects (ωµδ)ijk. It will be recalled that our original argument for including the (ωµδ)ijk effects was based on the existence of product-binomial sampling. This condition is not necessary; a logit model implies the existence of the (ωµδ)ijk terms regardless of the sampling structure, cf. Section 11.1. Of course, in the absence of product-binomial sampling, it would seem to be difficult to interpret the terms log(p1ijk/p2ijk) because we no longer have p1ijk + p2ijk = 1. Fortunately, if we condition on explanatory variables (i.e., if we condition on the marginal totals n·ijk), then for any of the standard prospective sampling schemes, the conditional sampling scheme is product-binomial and the standard interpretations can be used for the conditional distribution.
Before examining the actual analysis of the muscle tension data, we make one final comment about the logit model—log-linear model relationship. A logit model can be thought of as a model fitted to a two-factor table, where one factor is tension and the other factor consists of all combinations of weight, muscle type, and drug. The smallest interesting log-linear model is the model of independence: log(mhijk) = τh + (ωµδ)ijk. Looking at log(p1ijk/p2ijk) = log(m1ijk) − log(m2ijk), we see that this model corresponds to a model log(p1ijk/p2ijk) = τ1 − τ2 ≡ G, i.e., just fitting a grand mean. Intuitively, this would be the smallest interesting logit model. The saturated model for the two-factor table is the interaction model log(mhijk) = τh + (ωµδ)ijk + (τωµδ)hijk, which is the logit model log(p1ijk/p2ijk) = G + (WMD)ijk. The more interesting logit models correspond to modeling the interaction in this two-way table. They posit more interaction than complete independence, but less interaction than the saturated model. Note that thinking of this as a two-way table is also consistent with the idea of product-binomial sampling. The muscle tension data correspond to an 8×2 table. Each row is a distinct set of explanatory variables, indexed by ijk. The columns are the two categories of the response, indexed by h. Each row is thought of as an independent binomial, so the row totals should be fixed by inclusion of a main effect for rows, i.e., the W, M, D three-way interaction.
We now return to the data analysis. Table 4.2 gives a list of logit models, df , G2 , P values, and A−q values. The df ’s, G2 ’s, P ’s, and A−q’s were actually obtained by fitting the corresponding log-linear models. Clearly, the best fitting logit models are the models {MD}{W} and {WM}{MD}. Both involve the muscle type—drug interaction and a main effect for weight. One of the models includes the muscle type—weight interaction.
TABLE 4.2. Statistics for Logit Models
Logit Model       df     G²        P       A−q
{WM}{WD}{MD}       1    0.111    .7389   −1.889
{WM}{WD}           2    2.810    .2440   −1.190
{WM}{MD}           2    0.1195   .9417   −3.8805
{WD}{MD}           2    1.059    .5948   −2.941
{WM}{D}            3    4.669    .1966   −1.331
{WD}{M}            3    3.726    .2919   −2.274
{MD}{W}            3    1.060    .7898   −4.940
{W}{M}{D}          4    5.311    .2559   −2.689
{W}{M}             5   11.35     .0443     1.35
{W}{D}             5   12.29     .0307     2.29
{M}{D}             5    7.698    .1727   −2.302
We now take a closer look at the logit model {MD}{W}. As mentioned earlier, Table 4.2 was obtained by fitting the log-linear models corresponding to the logit model. The log-linear model corresponding to {MD}{W} is [WMD][TMD][TW]. Each logit model term becomes an interaction with the response factor T and there is an interaction between all of the explanatory factors. The estimated expected cell counts for [WMD][TMD][TW] are given in Table 4.3.
TABLE 4.3. Estimated Expected Cell Counts for the Log-Linear Model [WMD][TMD][TW]
                                           Drug (k)
Tension (h)   Weight (i)   Muscle (j)   Drug 1   Drug 2
High          High         Type 1        2.31     20.04
                           Type 2       23.75     11.90
              Low          Type 1       22.68     32.96
                           Type 2        3.26     11.10
Low           High         Type 1        3.69     10.97
                           Type 2       40.24     20.10
              Low          Type 1       44.32     22.03
                           Type 2        6.74     22.90
By taking the ratio of the high tension change estimates to the low tension change estimates, we obtain the estimated odds from the logit model. For example, as in Table 4.3, the high-tension, high-weight, type 1, drug 1 estimate is 2.308; the low-tension, high-weight, type 1, drug 1 estimate is 3.693. The ratio is 2.308/3.693 = .625. This is the logit model estimate of the odds of a high-tension change for high-weight, type 1, drug 1. The estimated odds for all cells are given in Table 4.4.
TABLE 4.4. Estimated Odds of High Tension Change for the Logit Model {MD}{W}
                           Drug
Weight   Muscle      Drug 1   Drug 2
High     Type 1       .625     1.827
         Type 2       .590      .592
Low      Type 1       .512     1.496
         Type 2       .483      .485
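The entries of Table 4.4 are simply elementwise ratios of the fits in Table 4.3. A quick check (Python, not from the book; the arrays below just transcribe Table 4.3):

import numpy as np

# Fitted counts from Table 4.3; rows are (weight, muscle) = (High, Type 1), (High, Type 2),
# (Low, Type 1), (Low, Type 2); columns are Drug 1, Drug 2.
high_tension = np.array([[2.31, 20.04], [23.75, 11.90], [22.68, 32.96], [3.26, 11.10]])
low_tension  = np.array([[3.69, 10.97], [40.24, 20.10], [44.32, 22.03], [6.74, 22.90]])

odds = high_tension / low_tension      # reproduces Table 4.4 up to rounding of the fitted counts
print(np.round(odds, 3))
print(np.round(odds[:2] / odds[2:], 2))  # high-weight vs low-weight odds: constant factor of about 1.22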
The estimated odds of having a high tension change are 1.22 times greater for high-weight muscles than for low-weight muscles. For example, in Table 4.4, .625/.512 = 1.22 but also 1.22 = .590/.483 = 1.827/1.495 = .592/.485. To put it another way, m̂11jk m̂22jk / m̂12jk m̂21jk = 1.22. This corresponds to the main effect for weight in the logit model. The odds also involve a muscle type—drug interaction. The nature of this interaction is easily established. Consider the four estimated odds for high weights, m̂11jk/m̂21jk. These are the four values at the top of Table 4.4; e.g., for muscle type 1, drug 1, this is .625. In every muscle type—drug combination other than type 1, drug 2, the estimated odds of having a high tension change are about .6. The estimated probability of having a high tension change is about .6/(1 + .6) = .375. However, for type 1, drug 2, the estimated odds are 1.827 and the estimated probability of a high tension change is 1.827/(1 + 1.827) = .646. The chance of having a high tension change is much greater for the combination muscle type 1, drug 2 than for any other muscle type—drug combination. A similar analysis holds for the low-weight odds m̂12jk/m̂22jk, but the actual values of the odds are smaller by a factor of 1.22 because of the main effect for weight. The other logit model that fits quite well is {WM}{MD}. Tables 4.5 and 4.6 contain the estimated odds of high tension change for this model. The difference between Tables 4.5 and 4.6 is that the rows of Table 4.5 have been rearranged in Table 4.6. This sounds like a trivial change, but examination of the tables shows that Table 4.6 is easier to interpret. Looking at the type 2 muscles, the high-weight odds are .919 times the low-weight odds. Also, the drug 1 odds are 1.111 times the drug 2 odds.
TABLE 4.5. Estimated Odds for the Logit Model {WM}{MD}
                           Drug
Weight   Muscle      Drug 1   Drug 2
High     Type 1       .809     2.202
         Type 2       .569      .512
Low      Type 1       .499     1.358
         Type 2       .619      .557

TABLE 4.6. Estimated Odds for the Logit Model {WM}{MD}
                           Drug
Muscle   Weight      Drug 1   Drug 2
Type 1   High         .809     2.202
         Low          .499     1.358
Type 2   High         .569      .512
         Low          .619      .557
Neither of these are really very striking differences. For muscle type 2, the odds of a high tension change are about the same regardless of weight and drug. Contrary to our previous model, they do not seem to depend much on weight, and to the extent that they do depend on weight, the odds go down rather than up for higher weights. Looking at the type 1 muscles, we see the dominant features of the previous model reproduced. The odds of high tension change are 1.622 times greater for high weights than for low weights. The odds of high tension change are 2.722 times greater for drug 2 than for drug 1. Both models indicate that for type 1 muscles, high weight increases the odds and drug 2 increases the odds. Both models indicate that for type 2 muscles, drug 2 does not substantially change the odds. The difference between the models {MD}{W} and {WM}{MD} is that {MD}{W} indicates that for type 2 muscles, high weight should increase the odds, but {WM}{MD} indicates little change for high weight and, in fact, what change there is indicates a decrease in the odds. Incidentally, the reason for changing from Table 4.5 to Table 4.6 was the nature of the logit model. The model {WM}{MD} has M in both terms, so it is easiest to interpret when fixing the level of M. For a fixed level of M, the effects of W and D are additive, although the size of those effects change with the level of M. This analysis of the data on change in muscle tension was intentionally performed at the lowest level of technical sophistication. The estimated expected cell counts were obtained by iterative proportional fitting. The
entire analysis was based on these fitted values and the associated likelihood ratio test statistics. For example, conclusions about the importance of estimates were drawn without the benefit of standard errors for those estimates. Obtaining standard errors requires more computational sophistication. In particular, it requires fitting an auxiliary regression as discussed in Sections 6.7 and 10.2. However, it is interesting to see how much can be obtained from such a small computational investment. There are two ways to fit an analysis of variance type logit (logistic) model. One way is to fit the corresponding log-linear model. The second way is to fit the logit model directly. This section has dealt exclusively with fitting the corresponding log-linear models. Section 1 deals exclusively with fitting the logistic (logit) model directly. Although Section 1 deals specifically with regression models, the procedures for a direct fit of an ANOVA model are similar. To reiterate, there are two principles that define the correspondence between logit models and log-linear models. Recall that effects in the logit model only involve the explanatory factors; e.g., logit effects for the tension change data never involve T, the response factor, only the explanatory factors. The first principle is that any effect in the logit model corresponds in the log-linear model to an interaction between the response factor and the logit effect. For example, a logit effect {MD} corresponds to a log-linear effect [TMD]. The second principle is that the log-linear model always includes the full interaction between the explanatory factors; e.g., all loglinear models include the [WMD] interaction. These principles also hold for logistic regression models. If g is an index or group of indexes that identify all levels of the predictor variables (i.e., explanatory factors), the log-linear model will have a term u(g) which is essentially the full interaction between the explanatory factors. Also, any linear logistic effect βxg becomes a log-linear interaction ηh xg where h indexes the two levels of the response factor.
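To make the last point concrete, the following sketch (Python, not from the book; the helper is purely illustrative) expands a logistic regression data set into the two-rows-per-case layout on which the corresponding log-linear model is fitted. Each group of predictor values g contributes a success row and a failure row; a factor for g supplies the u(g) term, the response indicator h supplies a response main effect, and the predictor enters only through its interaction with h.

import numpy as np

def expand_to_loglinear(y, N, x):
    # y_g successes out of N_g trials at predictor value x_g.
    rows = []
    for g, (yg, Ng, xg) in enumerate(zip(y, N, x)):
        rows.append((yg, g, 1, xg))        # h = 1 row: success count, carries eta_1 * x_g
        rows.append((Ng - yg, g, 0, 0.0))  # h = 0 row: failure count, eta_0 fixed at 0 as a side condition
    counts, g_index, h, x_by_h = map(np.array, zip(*rows))
    return counts, g_index, h, x_by_h

# A Poisson log-linear fit of counts on (g_index as a factor) + h + x_by_h reproduces the logistic
# regression of y/N on x; the g_index factor forces the case totals N_g to be fitted exactly.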
4.5.1
Computer Commands
The muscle tension data are listed in the file 'tenslr.dat' with one column for the number of high tension scores, one column for the low tension scores, and three columns of indices that specify the level of weight (high is 1), muscle type, and drug, respectively.
 3  3 1 1 1
21 10 1 1 2
23 41 1 2 1
11 21 1 2 2
22 45 2 1 1
32 23 2 1 2
 4  6 2 2 1
12 22 2 2 2
The following commands fit the model {WM}{WD}{MD} using SAS PROC GENMOD. This procedure works very much like GLIM. Note that the variable "n" is the total number of individuals with each level of weight, muscle type, and drug. As in Subsection 3.7.1, the "class" command is used to distinguish ANOVA type factors from regression predictors.
options ps=60 ls=72 nodate;
data tension;
   infile 'tenslr.dat';
   input H L W M D;
   n = H+L;
proc genmod data=tension;
   class W M D;
   model H/n = W*M W*D M*D / link=logit dist=binomial;
run;
Alternatively, the log-linear model [WMD][TWM][TWD][TMD] can be fitted as in Subsection 3.7.1. To fit other models such as {WM}{MD} or {WM}{D} using GENMOD, the model statement uses W*M M*D or W*M D, respectively.
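The same model can also be fitted from first principles by iteratively reweighted least squares; the sketch below (Python, not from the book; the ±1 effect coding is mine) uses the eight binomial counts from 'tenslr.dat', and its G² can be compared with the {WM}{WD}{MD} line of Table 4.2.

import numpy as np

# High and low counts and -1/+1 codes for W, M, D (level 1 coded +1), in the order of tenslr.dat.
H = np.array([3, 21, 23, 11, 22, 32, 4, 12], dtype=float)
L = np.array([3, 10, 41, 21, 45, 23, 6, 22], dtype=float)
W = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)
M = np.array([1, 1, -1, -1, 1, 1, -1, -1], dtype=float)
D = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)
N = H + L

# Design for {WM}{WD}{MD}: intercept, main effects, and the three two-factor interactions.
X = np.column_stack([np.ones(8), W, M, D, W * M, W * D, M * D])

beta = np.zeros(X.shape[1])
for _ in range(25):                                   # IRLS iterations
    p = 1 / (1 + np.exp(-X @ beta))
    w = N * p * (1 - p)
    z = X @ beta + (H - N * p) / w                    # working dependent variable
    beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))

p = 1 / (1 + np.exp(-X @ beta))
G2 = 2 * np.sum(H * np.log(H / (N * p)) + L * np.log(L / (N - N * p)))
print(np.round(G2, 3))                                # compare with Table 4.2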
4.6
Logit Models for a Multinomial Response
The basic method for dealing with a response variable (factor) with more than two levels is to arrange things so that only two things are compared at a time. One way of doing this is to identify pairs of levels to be compared. For example, if the response factor has R levels, comparing each level to the next level leads to modeling
log(mi/mi+1),   i = 1, . . . , R − 1,                                    (1)
or, equivalently,
log(pi/pi+1),   i = 1, . . . , R − 1.
These are the odds of getting level i relative to getting level i + 1. They can be viewed as conditional odds given that either level i or i + 1 occurs. To illustrate multinomial response models, consider the data of Example 3.7.2 presented in Table 3.1. The data involve factors of which we will treat abortion opinion as a response. The levels of abortion opinion are Yes, No, and Undecided. These indicate levels of support for legalized abortion. The
model scheme indicated by equation (1) dictates looking at a series of odds: the odds of Yes to No and the odds of No to Undecided. In this case, the nominal levels of the response can be rearranged to suit us. For example, we could choose to look at the odds of No to Yes and of Yes to Undecided. These latter odds can be viewed as the conditional odds of No to Yes for people who have a clear opinion, and the odds of Yes to Undecided for people who are not opposed. An alternative modeling scheme is for each level to be compared to a particular level; e.g., models can be formed for
log(mi/mR),   i = 1, . . . , R − 1.                                      (2)
With abortion opinions given in the order Yes, No, Undecided, these models involve the odds of Yes to Undecided and of No to Undecided. Again, one could (and in this case probably would) rearrange the order of the levels so that the level everything is compared to is a particularly interesting category. If the same form model is used for each value of i, these methods are equivalent and both are equivalent to fitting a log-linear model. For example, the models
log(mijk/mi+1,jk) = w2(j) + w3(k),   i = 1, . . . , R − 1,
and
log(mijk/mRjk) = v2(j) + v3(k),   i = 1, . . . , R − 1,
are equivalent. (Note that the w and v parameters will also depend on i.) Both of these models are equivalent to log(mijk) = u23(jk) + u12(ij) + u13(ik). Just as in two-category logit models, the interaction between all explanatory factors, u23(jk), is included in the model. The logit effects correspond in the log-linear model to interactions with the response factor. Given the log-linear model, the various logit models can be obtained by looking at differences. For example, log(mijk/mi+1,jk) = log(mijk) − log(mi+1,jk) and log(mijk/mRjk) = log(mijk) − log(mRjk) lead to parametrizations such as w2(j) = u12(ij) − u12(i+1,j) and v3(k) = u13(ik) − u13(Rk).
(Note that, as mentioned above, w2 and v3 depend on the category i that is being examined.) Fits for all of the models in (1) and (2) can be obtained by fitting one log-linear model.
Another way of reducing several response levels to binary comparisons is to pool response levels. One way to do this is to compare each level to the total of all other levels, e.g., model
log[mi / Σ_{h≠i} mh],   i = 1, . . . , R.                                (3)
These are the odds of getting category i relative to not getting level i. Fitting these models requires fitting at least R − 1 logit models. One log-linear model will not do. With the abortion opinion data, these are models for the odds of Yes to not Yes, the odds of No to not No, and the odds of Undecided to not Undecided. These models focus on one category of response and ignore the structure of all other categories.
If the response levels have a natural ordering, say from smallest to largest, then it may be appropriate to look at continuation ratios
log[mi / Σ_{h=i+1}^{R} mh],   i = 1, . . . , R − 1.                       (4)
These are the odds of getting level i relative to getting a category higher than level i. As always, we can rearrange the ordering of the response categories if it suits us. This method works very nicely for the abortion data, even though the response levels have no natural ordering. Think of the categories as being ordered as Undecided, Yes, No. Then the first model here has the odds of undecided to everything else, i.e., the odds of undecided to being decided. The second model has the odds of Yes to No, i.e., the odds of supporting legalized abortion relative to opposing it.
The odds in (4) are actually conditional odds. The probability of level i divided by the probability of a higher level is the odds of getting level i given that level i or higher is obtained. For example, the odds of Support to Oppose are actually conditional on being decided. As is seen in Exercise 4.8.14, fitting continuation ratio models for all i is equivalent to fitting a series of log-linear models.
Yet another possibility is to fit cumulative logits,
log[Σ_{h=1}^{i} mh / Σ_{h=i+1}^{R} mh],   i = 1, . . . , R − 1.
For abortion opinions ordered as Undecided, Yes, No, these models describe the odds of undecided to decided and the odds of not opposed to opposed.
Example 4.6.1. We now examine fitting models to the data on race, sex, opinions on abortion, and age from Section 3.7. In a log-linear model,
the variables are treated symmetrically. The analysis looks for relationships among any of the variables. Here, we consider opinions as a response variable. This changes the analysis in that [RSA] must be included in all models. Table 4.7 presents fits for all the models that include [RSA] and correspond to ANOVA type logit models.
TABLE 4.7. Log-Linear Models for the Abortion Opinion Data
Model                      df      G²       A−q
[RSA][RSO][ROA][SOA]       10     6.12    −13.88
[RSA][RSO][ROA]            20     7.55    −32.45
[RSA][RSO][SOA]            20    13.29    −26.71
[RSA][ROA][SOA]            12    16.62     −7.38
[RSA][RSO][OA]             30    14.43    −45.57
[RSA][ROA][SO]             22    17.79    −26.21
[RSA][SOA][RO]             22    23.09    −20.91
[RSA][RO][SO][OA]          32    24.39    −39.61
[RSA][RO][SO]              42    87.54      3.54
[RSA][RO][OA]              34    34.41    −33.59
[RSA][SO][OA]              34    39.63    −28.37
[RSA][RO]                  44    97.06      9.06
[RSA][SO]                  44   101.9      13.9
[RSA][OA]                  36    49.37    −22.63
[RSA][O]                   46   111.1      19.1
The best fitting model is clearly [RSA][RSO][OA]. This model can be used directly to fit either the models in (1),
log(mhi1k/mhi2k) = w¹RS(hi) + w¹A(k),
log(mhi2k/mhi3k) = w²RS(hi) + w²A(k),
the models in (2),
log(mhi1k/mhi3k) = v¹RS(hi) + v¹A(k),
log(mhi2k/mhi3k) = v²RS(hi) + v²A(k),
or some variation of these. As discussed earlier, the first pair of models looks at the odds of supporting legalized abortion to opposing legalized abortion and the odds of opposing legalized abortion to being undecided. The second pair of models examines the odds of supporting legalized abortion to undecided and the odds of opposing to undecided. Of these, the only odds that seem particularly interesting to the author are the odds of supporting to opposing. In the second pair of models, choosing the category “undecided” as the standard level to which other levels are compared is particularly unintuitive. The fact that undecided is the last category is no reason for it to be chosen as the standard of comparison. Either of the
other categories would be a better standard, so one of these should be used. Neither is obviously better than the other. Neither of these pairs of models is particularly appealing, so we will only continue the analysis long enough to illustrate salient points and to allow comparisons with other models to be discussed later. The fitted values for [RSA][RSO][OA] are given in Table 4.8.
TABLE 4.8. Fitted Values for [RSA][RSO][OA]
                                                  Age
Race       Sex      Opinion   18-25   26-35   36-45   46-55   56-65    65+
White      Male     Support   100.1   137.2   117.5   75.62   70.58   80.10
                    Oppose    39.73   64.23   56.17   47.33   50.99   62.55
                    Undec.     1.21    2.59    5.36    5.05    5.43    8.35
           Female   Support   138.4   172.0   152.4   101.8   101.7   110.7
                    Oppose    43.49   63.77   57.68   50.44   58.19   68.43
                    Undec.     2.16    4.18    8.96    8.76   10.08   14.86
Nonwhite   Male     Support   21.19   16.57   15.20   11.20    8.04    7.80
                    Oppose     8.54    7.88    7.38    7.11    5.90    6.18
                    Undec.     1.27    1.54    3.42    3.69    3.06    4.02
           Female   Support   21.40   26.20   19.98   16.38   13.64   12.40
                    Oppose     4.24    6.12    4.77    5.12    4.92    4.83
                    Undec.     0.36    0.68    1.25    1.50    1.44    1.77
We consider only the odds of support relative to opposed. The odds can be obtained from the fitted values. For example, the odds for young white males are 100.1/39.73 = 2.52. The full table of odds is given in Table 4.9.
TABLE 4.9. Estimated Odds of Support versus Oppose Legalized Abortion
(Based on the log-linear model [RSA][RSO][OA])
                                Age
Race       Sex      18-25  26-35  36-45  46-55  56-65   65+
White      Male      2.52   2.14   2.09   1.60   1.38   1.28
           Female    3.18   2.70   2.64   2.02   1.75   1.62
Nonwhite   Male      2.48   2.10   2.06   1.57   1.36   1.26
           Female    5.05   4.28   4.19   3.20   2.77   2.57
Note that the values from age to age vary by a constant multiple depending on the ages involved. The odds of support decrease steadily with age. The model has no inherent structure among the four race-sex categories;
however, the odds for white males and nonwhite males are surprisingly similar. Nonwhite females are most likely to support legalized abortion, white females are next, and males are least likely to support legalized abortion. Confidence intervals for log odds or log odds ratios can be found using the methods of Section 10.2 or, alternatively, the methods of Section 11.1. If we pool categories, we can look at the set of three models generated by (3) or the set of two models generated by (4). The set of three models consists of the odds of supporting, the odds of opposing, and the odds of undecided (in each case, the odds are defined relative to the union of the other categories). The two models from (4) are essentially continuation ratio models. The most interesting definition of these models is obtained by taking the odds of supporting to opposing and the odds of undecided to having an opinion. Fitting the models involves fitting log-linear models to two sets of data. Eliminating all undecideds from the data, we fit [RSA][RSO][OA] to the 2×2×2×6 table with only the opinion categories “support” and “oppose.” The estimated expected cell counts are given in Table 4.10. Note that the estimated cell counts are very similar to those obtained when undecideds were included in the data. The odds of supporting relative to opposing are given below.
Odds of Support versus Opposed
                                Age
Race       Sex      18-25  26-35  36-45  46-55  56-65   65+
White      Male      2.52   2.14   2.09   1.60   1.38   1.28
           Female    3.18   2.70   2.64   2.01   1.75   1.62
Nonwhite   Male      2.48   2.11   2.06   1.57   1.36   1.26
           Female    5.08   4.31   4.22   3.22   2.79   2.58
Except for nonwhite females, the odds of support are essentially identical to those obtained with undecideds included. The G2 for the fit without undecideds is 9.104 with 15 df . The G2 for fitting [RSA][RO][SO][OA] is 11.77 on 16 df . The difference in G2 ’s is not large, so a logit model log(mhi1k /mhi2k ) = R(h) + S(i) + A(k) may fit adequately. We now pool the support and oppose categories to get a 2×2×2×6 table in which the opinions are “support or oppose” and “undecided.” Again, the model [RSA][RSO][OA] is fitted to the data. For this model, we report only the estimated odds.
Odds of Being Decided on Abortion
                                  Age
Race       Sex      18-25   26-35   36-45   46-55   56-65    65+
White      Male    116.79   78.52   32.67   24.34   22.26   16.95
           Female   83.43   56.08   23.34   17.38   15.90   12.11
Nonwhite   Male     23.76   15.97    6.65    4.95    4.53    3.45
           Female   68.82   46.26   19.25   14.34   13.12    9.99
TABLE 4.10. Estimated Expected Cell Counts with Undecideds Eliminated
                                                  Age
Race       Sex      Opinion   18-25   26-35   36-45   46-55   56-65    65+
White      Male     Support   100.2   137.7   117.0   75.62   70.22   80.27
                    Oppose    39.78   64.35   55.98   47.38   50.78   62.73
           Female   Support   139.2   172.2   152.3   101.6   101.7   109.9
                    Oppose    43.78   63.77   57.71   50.41   58.28   68.05
Nonwhite   Male     Support   20.67   16.96   15.48   11.00    8.07    7.81
                    Oppose     8.33    8.04    7.52    7.00    5.93    6.19
           Female   Support   20.84   25.17   20.21   16.78   13.98   12.97
                    Oppose     4.11    5.84    4.79    5.22    5.02    5.03
Again, the estimated odds vary from age to age by a constant multiple. The odds decrease with age, so older people are less likely to take a position. White males are most likely to state a position. Nonwhite males are least likely to state a position. White and nonwhite females have odds of being decided that are somewhat similar. The G² for [RSA][RSO][OA] is 5.176 on 15 df. The G² for the smaller model [RSA][RO][SO][OA] is 12.71 on 16 df. The difference is very large. Although a main-effects-only logit model fits the support-opposition table quite well, to deal with the undecided category requires a race-sex interaction.
We have pretty much exhausted what can be done easily by fitting ANOVA type log-linear models using iterative proportional fitting. However, computer programs are readily available for direct fitting of logit models. We illustrate some results for modeling the odds of support relative to opposition with undecideds eliminated from the data. The model that we considered in detail was [RSA][RSO][OA]. This is equivalent to
log(mhi1k/mhi2k) = (RS)hi + Ak,                                          (5)
which models the odds of supporting legalized abortion. To fit this model directly, we need to provide a computer program with the counts for all cells indicating support, i.e., nhi1k, h = 1, 2, i = 1, 2, k = 1, . . . , 6, and the total of support and opposition for all cells
Nhik = nhi·k = nhi1k + nhi2k .
For example, n1111 = 96, n1211 = 140, n2111 = 24, n11·1 = 96 + 44 = 140, n12·1 = 140 + 43 = 183, and n21·1 = 24 + 5 = 29. In addition, for each count
and total, we need to provide the program with the corresponding indices h, i, and k. Fitting model (5) directly gives G2 = 9.104 on 15 df , exactly the results from fitting the equivalent model [RSA][RSO][OA]. The table of odds has suggested two things: (1) odds decrease as age increases and (2) the odds for males are about the same. We want to fit models that incorporate these suggestions. Of course, because the data are suggesting the models, formal tests of significance will be even less appropriate than usual, but G2 ’s still give a reasonable measure of the quality of model fit. We model the fact that odds are decreasing with age by incorporating a linear trend in ages. We do not have specific ages to associate with the age categories, so we simply use the codes k = 1, 2, . . . , 6 to indicate ages. These scores lead to fitting the model log(mhi1k /mhi2k ) = (RS)hi + γk .
(6)
The G² is 10.18 on 19 df, so the linear trend in coded ages fits very well. [Recall that model (5) has G² = 9.104 on 15 df, so a test of model (6) versus model (5) has G² = 10.18 − 9.104 = 1.08 on 19 − 15 = 4 df.] To incorporate the idea that males have the same odds of support, we recode the indices of the data. Recall that to fit model (5), we had to specify three index variables along with the numbers supporting and the totals. The indices for the (RS)hi terms are (h, i) = (1, 1), (1, 2), (2, 1), (2, 2). We could recode the problem with an index, say g = 1, 2, 3, 4, and fit the model log(mg1k/mg2k) = (RS)g + Ak and get exactly the same fit. We can choose the recoding as
(h, i)   (1,1)   (1,2)   (2,1)   (2,2)
g          1       2       3       4
Note that, together, the subscripts g and k still distinguish all of the cases for which data are provided. This recoding can now be modified, so models that treat males the same can be specified. If we want to treat males the same, then the codes for white males g = 1 and nonwhite males g = 3 must be made the same. On the other hand, we still have distinct data for white males and nonwhite males, so the fact that there are two replications on males must be accounted for. To treat males the same, recode g as (f, e) with
        wm    wf    nm    nf
g        1     2     3     4
f        1     2     1     3
e        1     1     2     1
where e is an index for replications and the codes wm, wf, nm, nf indicate white males, white females, nonwhite males, and nonwhite females,
respectively. Now fit the model log(mf e1k /mf e2k ) = (RS)f + Ak .
(7)
The two male groups are only distinguished by the subscript e, and e does not appear on the right-hand side of the model, so the two male groups will be modeled identically. In fact, to use a logistic regression program, you typically do not even need to define the index e. But whether you define it or not, it exists implicitly in the model. Model (7) is, of course, a reduced model relative to model (5). Model (7) has G2 = 9.110 on 16 df , so the comparison between models has G2 = 9.110 − 9.104 = .006 on 16 − 15 = 1 df . We have lost almost nothing by going from model (5) to model (7). Finally, we can write a model that incorporates both the trend in ages and the equality for males log(mf e1k /mf e2k ) = (RS)f + γk .
(8)
This has G² = 10.19 on 20 df. Thus, relative to model (5), we have dropped 5 df from the model, yet only increased the G² by 10.19 − 9.10 = 1.09. For the alternative parametrization, log(mf e1k/mf e2k) = µ + (RS)f + γk, the estimates and standard errors using the side condition (RS)1 = 0 are
Parameter   Estimate       SE      Est./SE
µ             1.071      .1126       9.51
(RS)1         0            —          —
(RS)2          .2344     .09265       2.53
(RS)3          .6998     .2166        3.23
γ             −.1410     .02674      −5.27
All of the terms seem important. With this side condition, the estimate of (RS)2 is actually an estimate of (RS)2 − (RS)1, so the z score 2.53 is an indication that white females have an effect on the odds of support that is different from males. Similarly, the estimate of (RS)3 is an estimate of the difference in effect of nonwhite females and males. The estimated odds of support are
                                      Age
Race-Sex            18-25   26-35   36-45   46-55   56-65    65+
Male                2.535   2.201   1.912   1.661   1.442   1.253
White female        3.204   2.783   2.417   2.099   1.823   1.583
Nonwhite female     5.103   4.432   3.850   3.343   2.904   2.522
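The entries of this table come directly from the parameter estimates: for race-sex group f and age code k, the estimated odds are exp{µ̂ + (RS)̂f + γ̂k}. A quick check (Python, not from the book):

import numpy as np

mu, gamma = 1.071, -0.1410
rs = {"male": 0.0, "white female": 0.2344, "nonwhite female": 0.6998}
k = np.arange(1, 7)                   # age codes 1, ..., 6 for 18-25 through 65+
for group, effect in rs.items():
    odds = np.exp(mu + effect + gamma * k)
    print(group, np.round(odds, 3))   # close to the table; small differences reflect rounding of the estimates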
These show the general characteristics discussed earlier. Also, they can be transformed into (conditional) probabilities of support. Probabilities are
generally easier to interpret than odds. The estimated probability that a white female between 46 and 55 years of age supports legalized abortion is 2.099/(1 + 2.099) = .677. The odds are about 2, so the probability is about twice as great that such a person will support legalized abortion rather than oppose it. Similar ideas of modeling can be applied to the odds of having made a decision on legalized abortion. Finally, a word about computing. The computations for models (6), (7), and (8) were executed using a computer program specifically designed for logit models. This was done because computer programs based on iterative proportional fitting cannot handle the corresponding log-linear models. Iterative proportional fitting only works for ANOVA type models. However, programs for fitting general log-linear models (e.g., GLIM) can handle the log-linear models that correspond to (6), (7), and (8). The models are found in the usual way. Model (6) corresponds to log(mhijk ) = (RSA)hik + (RSO)hij + γj k where we have added the highest-order interaction term not involving O and made the (RS) and γ terms depend on the opinion level j. Similarly, models (7) and (8) correspond to log(mf ejk ) = (RSA)f ek + (RSO)f j + (OA)jk and log(mf ejk ) = (RSA)f ek + (RSO)f j + γj k , respectively. In a somewhat different approach to treating response factors, Asmussen and Edwards (1983) allow the fitting of models that do not always include a term for the interactions among the explanatory factors. Instead, they argue that log-linear models are appropriate for response factors as long as the model allows for collapsing over the response factors onto the explanatory factors, cf. Section 5.3. These issues will also be discussed at the end of Section 6.8.
4.7
Logistic Discrimination and Allocation
How can you tell Swedes and Italians apart? How can you tell different species of irises apart? How can you identify people who are likely to have a heart attack or commit a crime? One approach is to collect data on individuals who are known to be in each of the populations of interest. The data can then be used to discriminate between the populations. To identify Swedes and Italians, one might collect data on height, hair color,
eye color, and skin complexion. To identify irises, one might measure petal length and width and sepal length and width. Typically, data collected on several different variables are combined to identify the likelihood that someone belongs to a particular population. In a standard discrimination– allocation problem, independent samples are taken from each population. The use of these samples to characterize the populations is referred to as discrimination. Allocation involves identifying the population of an individual for whom only the variable values are known. The factor of interest in these problems is the population, but it is not a response factor in the sense used elsewhere in this chapter. In particular, discrimination data arises from conducting a retrospective study. The reader may want to review the subsection of the chapter introduction that discusses retrospective and prospective studies. There has been extensive work done on the problems of discrimination and allocation. Introductions to the subject are contained in Anderson (1984), Christensen (1990), Hand (1981), Lachenbruch (1975), McLachlan (1992), Press (1984), and Rao (1973). The review article by Cheng and Titterington (1994) relates discriminant analysis to neural networks. Recent work on logistic discrimination includes Cox and Ferry (1991) and O’Neill (1994). Probably, the two most commonly used methods of discrimination are Fisher’s linear discriminant function and logistic regression. Fisher’s method is based on the idea that each case corresponds to a fixed population and that the variables for each case are observations from a multivariate normal distribution. The normal distributions for the populations are assumed to have different means but the same variances and covariances. The logistic regression approach (or as presented here, the log-linear model approach) treats the distribution for each population as a multinomial. Much of the theoretical work on discriminant analysis is done in a Bayesian setting and both methods lend themselves to the easy computation of posterior probabilities for a case to be in a particular population. The weakness of Fisher’s method is that the assumption of normality with equal covariances is often patently false. The case variables are often percentages, rates, or categorical variables. Even when the case variables are continuous on the entire real line, they are often obviously skewed. Frequently the variance-covariance matrices in the various populations are not even similar, much less identical. Fortunately, Fisher’s method is somewhat insensitive (robust) to many of these difficulties, cf. Lachenbruch, Sneeringeer, and Revo (1973) and Press and Wilson (1978). Fisher’s method is also easily generalized to handle unequal covariance matrices. The strength of Fisher’s method is that for normal data it is more efficient than logistic discrimination, cf. Efron (1975). Example 4.7.1. Aitchison and Dunsmore (1975, p. 212) consider 21 individuals with 1 of 3 types of Cushing’s syndrome. Cushing’s syndrome
is a medical problem associated with overproduction of cortisol by the adrenal cortex. The three types considered are related to specific problems with the adrenal gland, namely
A−adenoma
B−bilateral hyperplasia
C−carcinoma
The case variables considered are the rates at which two steroid metabolites are excreted in the urine. (These are measured in milligrams per day.) The two steroids are
TETRA – Tetrahydrocortisone
PREG – Pregnanetriol
The data are listed in Table 4.11.
TABLE 4.11. Cushing's Syndrome Data
Case   Type   TETRA    PREG        Case   Type   TETRA    PREG
  1     A       3.1    11.70        12     B      15.4     3.60
  2     A       3.0     1.30        13     B       7.7     1.60
  3     A       1.9     0.10        14     B       6.5     0.40
  4     A       3.8     0.04        15     B       5.7     0.40
  5     A       4.1     1.10        16     B      13.6     1.60
  6     A       1.9     0.40        17     C      10.2     6.40
  7     B       8.3     1.00        18     C       9.2     7.90
  8     B       3.8     0.20        19     C       9.6     3.10
  9     B       3.9     0.60        20     C      53.8     2.50
 10     B       7.8     1.20        21     C      15.8     7.60
 11     B       9.1     0.60
The data determine the 3 × 21 table
                                   Case
Type   1  2  3  4  5  6  7  8  ···  16  17  18  19  20  21
 A     1  1  1  1  1  1  0  0  ···   0   0   0   0   0   0
 B     0  0  0  0  0  0  1  1  ···   1   0   0   0   0   0
 C     0  0  0  0  0  0  0  0  ···   0   1   1   1   1   1
The case variables TETRA and PREG are used to model the interaction in this table. The case variables are highly skewed, so, following Aitchison and Dunsmore, we analyze the transformed variables T L ≡ log(TETRA) and P L ≡ log(PREG). The transformed data are plotted in Figure 4.2. Now consider the sampling scheme. For studies of this type, it is best modeled as involving independent samples from the three populations: A, B, and C. The sampling can be viewed as product-multinomial because
Thus, each column total must be at least one. If the sampling scheme were truly product-multinomial, there would be a positive probability of getting column totals equal to zero. Section 11.4 contains a more detailed discussion of these issues and a justification for treating the 3 × 21 table as product-multinomial. In the current section, we simply present the standard methodology. One of the tricky things about this is that it looks like logistic regression, except that we have more than two possibilities for the response. But treating this as a logistic regression is wrong. In a logistic regression, there are cases with predictor variables associated with them, and each case randomly and independently falls into a response category; e.g., have a coronary incident or don’t. In a logistic regression, when the responses are 0s and 1s, every case is a sample from a different population. But in this logistic discrimination, there are only three populations being sampled. The sample sizes are larger, and the values of the predictor variables are actually the results of the sampling. Because of the sampling scheme, when the samples from the various populations are of different sizes, the values mij are not directly useful in evaluating the relationship between populations and the predictor variables. For example, if we choose to sample 20,000 people from population A and only 10 from population B, the m1j ’s are not comparable to the m2j ’s. We must adjust for sample size before relating syndrome type to T L and P L. The evaluation of the relationship is based on the relative likelihoods of the three syndrome types. Thus, for any case j, our interest is in the relative sizes of p1j , p2j , and p3j . Estimates of these quantities are easily obtained from the m ˆ ij ’s. Simply take pˆij = m ˆ ij /ni· .
(1)
For a new patient of unknown syndrome type but whose values of T L and P L place him in category j, the most likely type of Cushing’s syndrome is that which has the largest value among p1j , p2j , and p3j . Clearly, we can estimate the most likely syndrome type. In practice, new patients are unlikely to fall into one of the 21 previously observed categories but the modeling procedure is flexible enough to allow allocation of individuals having any values of T L and P L. This will be discussed in detail in the subsection on allocation.
Discrimination
For each individual j, the variables (TL)j and (PL)j have been observed. We seek a model that can be used to classify observations into syndrome type. The main effects model is
log(mij) = αi + βj,   i = 1, 2, 3,   j = 1, . . . , 21.
We want to use T L and P L to help model the interaction, so fit log(mij ) = αi + βj + γ1i (T L)j + γ2i (P L)j ,
(2)
i = 1, 2, 3, j = 1, . . . , 21. This model is very similar to a log-linear version of the logit and logistic models discussed earlier. In particular, it has a separate term βj for every combination of the explanatory variables. Taking differences gives, for example,
log(m1j/m2j) = (α1 − α2) + (γ11 − γ12)(TL)j + (γ21 − γ22)(PL)j,
which can be written as
log(m1j/m2j) = α + δ1(TL)j + δ2(PL)j.
Although this looks like a logistic regression model, it has a fundamentally different interpretation. Unlike logistic regression models, it is typically the case that
log(m1j/m2j) ≠ log(p1j/p2j).
Moreover, the ratio p1j/p2j is not even an odds of type A relative to type B. Both numbers are probabilities, but they are probabilities from different populations. The correct interpretation of p1j/p2j is as a likelihood ratio, specifically the likelihood of type A relative to type B. A value pij is the likelihood within population i of observing category j. Having fitted model (2), the estimate of the log of the likelihood ratio is
log(p̂1j/p̂2j) = log[(m̂1j/n1·)/(m̂2j/n2·)] = log(m̂1j/m̂2j) − log(n1·/n2·).
It will be seen in Chapter 11 that, because interest is directed at comparing probabilities in different multinomials, asymptotic variances of estimates will be more complicated than for logistic regression. Finally, it should be noted that although odds depend on the sampling scheme, odds ratios do not. Odds ratios are handled in exactly the same way regardless of whether the sampling scheme is prospective or retrospective.
The G² for model (2) is 12.30 on 36 degrees of freedom. As in Section 2.6, although G² is a valid measure of goodness of fit, G² cannot legitimately be compared to a χ² distribution. However, we can test reduced models. The model
log(mij) = αi + βj + γ1i(TL)j
has G² = 21.34 on 38 degrees of freedom and
log(mij) = αi + βj + γ2i(PL)j
has G² = 37.23 on 38 degrees of freedom. Neither of the reduced models provides an adequate fit. (Recall that χ² tests of model comparisons like these were valid.) Table 4.12 contains estimated probabilities for the three populations. The probabilities are computed using equation (1) and model (2).
TABLE 4.12. Estimated Probabilities
              Group                              Group
Case      A      B      C        Case       A      B      C
  1    .1485  .0012  .0195        12     .0000  .0295  .1411
  2    .1644  .0014  .0000        13     .0000  .0966  .0068
  3    .1667  .0000  .0000        14     .0001  .0999  .0000
  4    .0842  .0495  .0000        15     .0009  .0995  .0000
  5    .0722  .0565  .0003        16     .0000  .0907  .0185
  6    .1667  .0000  .0000        17     .0000  .0102  .1797
  7    .0000  .0993  .0015        18     .0000  .0060  .1879
  8    .1003  .0398  .0000        19     .0000  .0634  .0733
  9    .0960  .0424  .0000        20     .0000  .0131  .1738
 10    .0000  .0987  .0025        21     .0000  .0026  .1948
 11    .0000  .0999  .0003
Table 4.13 illustrates a Bayesian analysis. For each case j, it gives the estimated posterior probability that the case belongs to each of the three syndrome types. The data consist of the observed TL and PL values in category j. Given that the syndrome type is i, the estimated probability of observing data in category j is p̂ij. Let π(i) be the prior probability that the case is of syndrome type i. Bayes theorem gives
π̂(i|Data) = p̂ij π(i) / Σ_{i=1}^{3} p̂ij π(i).
Two choices of prior probabilities are used in Table 4.13: probabilities proportional to sample sizes, i.e., π(i) = ni·/n··, and equal probabilities, π(i) = 1/3. Prior probabilities proportional to sample sizes are rarely appropriate, but they relate in simple ways to standard output, so we give them more prominence than they probably deserve. Both of the sets of posterior probabilities are easily obtained. The table of proportional probabilities is just the table of m̂ij values. This follows from two facts: first, m̂ij = ni· p̂ij and second, the model fixes the column totals, so m̂·j = 1 = n·j. To obtain the equal probabilities values, simply divide the entries in Table 4.12 by the sum of the three probabilities for each case. Cases that are misclassified by either procedure are indicated with a double asterisk in Table 4.13. Table 4.14 summarizes the classifications. With proportional prior probabilities, 16 of 21 cases are correctly allocated. With equal prior probabilities, 18 of 21 cases are correctly allocated. While Table 4.14 is useful, it ignores
TABLE 4.13. Probabilities of Classification
                    Proportional Prior           Equal Prior
                       Probabilities            Probabilities
Case     Group        A      B      C          A      B      C
  1        A         .89    .01    .10        .88    .01    .12
  2        A         .99    .01    .00        .99    .01    .00
  3        A        1.00    .00    .00       1.00    .00    .00
  4        A         .50    .50    .00        .63    .37    .00
  5**      A         .43    .57    .00        .56    .44    .00
  6        A        1.00    .00    .00       1.00    .00    .00
  7        B         .00    .99    .01        .00    .99    .01
  8**      B         .60    .40    .00        .72    .28    .00
  9**      B         .58    .42    .00        .69    .31    .00
 10        B         .00    .99    .01        .00    .97    .03
 11        B         .00   1.00    .00        .00   1.00    .00
 12**      B         .00    .29    .71        .00    .17    .83
 13        B         .00    .97    .03        .00    .93    .07
 14        B         .00   1.00    .00        .00   1.00    .00
 15        B         .01    .99    .00        .01    .99    .00
 16        B         .00    .91    .09        .00    .83    .17
 17        C         .00    .10    .90        .00    .05    .95
 18        C         .00    .06    .94        .00    .03    .97
 19**      C         .00    .63    .37        .00    .46    .54
 20        C         .00    .13    .87        .00    .07    .93
 21        C         .00    .03    .97        .00    .01    .99
the clarity of the allocations. For example, case 4 with proportional probabilities is essentially a toss-up between types A and B. That information is lost in Table 4.14. (The probability of type A is slightly greater than one-half.) Another problem with Table 4.14 is that it tends to overestimate how well the discrimination would work on other data. The data were used to form a discrimination procedure and Table 4.14 evaluates how well it works by allocating the same data. This double dipping tends to make the discrimination procedure look better than it really is. Cross-validation can be used to reduce the bias introduced; for related work, see Geisser (1977) and Gong (1986). Finally, it is of interest to note that the difference in Table 4.14 between proportional probabilities and equal probabilities is that under proportional probabilities, one additional case in each of A and C is misclassified into B. That occurs because the prior probability for B is about twice as great as the values for A and C. TABLE 4.14. Summary of Classifications
                       Proportional Prior           Equal Prior
                         Probabilities              Probabilities
                          True Group                 True Group
Allocated to Group       A    B    C                A    B    C
        A                5    2    0                6    2    0
        B                1    7    1                0    7    0
        C                0    1    4                0    1    5
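The double dipping described above can be assessed with cross-validation. The sketch below only illustrates the idea: it uses scikit-learn's multinomial logistic regression as a stand-in for the logistic discrimination fit in the text, and the TL, PL values and group labels are made-up placeholders, not the Cushing's syndrome data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    # Placeholder data: log tetrahydrocortisone (TL), log pregnanetriol (PL),
    # and a syndrome type for each case.  Not the data of Table 4.11.
    X = np.array([[1.4,  0.1], [0.9, -1.2], [2.1,  0.5], [2.5, 1.0],
                  [1.1, -0.4], [3.0,  1.2], [0.8, -2.0], [2.8, 0.9]])
    y = np.array(["A", "A", "B", "B", "A", "C", "A", "C"])

    model = LogisticRegression(max_iter=1000)

    # Resubstitution accuracy: the same cases are used to fit and to evaluate.
    resub = model.fit(X, y).score(X, y)

    # Leave-one-out cross-validated accuracy removes that optimism.
    loo = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()

    print(f"resubstitution: {resub:.2f}   leave-one-out: {loo:.2f}")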
Readers who are familiar with normal theory discrimination may be interested in the analysis of these data contained in Christensen (1990). Taking the logs of tetrahydrocortisone and pregnanetriol is important in using Fisher’s linear discrimination because the original data are clearly nonnormal. Logistic discrimination imposes no such normality requirement. Without the log transform, Fisher’s method misclassifies seven observations including five of the six in type A. Logistic discrimination on the untransformed data with proportional priors only misclassifies four observations and gets five of six correct in type A.
Allocation

If you stop and think about it, discrimination seems like a remarkably silly thing to do. Why take cases from known populations and reclassify them when the process of reclassification introduces errors? The reason discrimination is interesting is because one can use a model that discriminates between cases from known populations to predict the population of an unknown case. In our example, a new patient can be measured for TL and PL, and then diagnosed as to type of Cushing's syndrome without direct examination of the adrenal cortex. (I have no idea if this is an accurate
description of medical practice, but it illustrates the kind of thing that can be done.) We now consider the problem of allocating new cases to the populations. Model (2) includes a separate term βj for each case, so it is not clear how model (2) can be used to allocate future cases. We will begin with logit models and then work back to an allocation model. Model (2) has 30 parameters, only 9 of which are really of interest. Of these nine, only six are estimable. From (2), we can model the probability ratio of type A relative to type B,

(3)    log(p1j/p2j) = log(m1j/m2j) − log(n1·/n2·)
                    = (α1 − α2) + (γ11 − γ12)(TL)j + (γ21 − γ22)(PL)j − log(n1·/n2·) .

The log odds of A relative to C are

(4)    log(p1j/p3j) = log(m1j/m3j) − log(n1·/n3·)
                    = (α1 − α3) + (γ11 − γ13)(TL)j + (γ21 − γ23)(PL)j − log(n1·/n3·) .

Fitting model (2) gives the estimated parameters

    Par.     Est.        Par.     Est.        Par.     Est.
    α1        0.0        γ11    −16.29        γ21    −3.359
    α2     −20.06        γ12     −1.865       γ22    −3.604
    α3     −28.91        γ13      0.0         γ23     0.0

where the estimates with values of 0 are really side conditions imposed on the collection of estimates to make it unique. For a new case with values TL and PL, we plug estimates into equations (3) and (4) to get

    log(p̂1/p̂2) = 20.06 + (−16.29 + 1.865)TL + (−3.359 + 3.604)PL − log(6/10)

and

    log(p̂1/p̂3) = 28.91 − 16.29(TL) − 3.359(PL) − log(6/5) .

For example, if the new case has a tetrahydrocortisone reading of 4.1 and a pregnanetriol reading of 1.10, then log(p̂1/p̂2) = .24069 and log(p̂1/p̂3) = 5.4226. The likelihood ratios are

    p̂1/p̂2 = 1.2721    and    p̂1/p̂3 = 226.45

and by division,

    p̂2/p̂3 = 226.45/1.2721 = 178.01 .
It follows that type A is a little more likely than type B and that both are much more likely than type C. One can also obtain estimated posterior probabilities for a new case. The posterior odds are

    π̂(1|Data)/π̂(2|Data) = [p̂1 π(1)] / [p̂2 π(2)] ≡ Ô2

and

    π̂(1|Data)/π̂(3|Data) = [p̂1 π(1)] / [p̂3 π(3)] ≡ Ô3 .

Using the fact that π̂(1|Data) + π̂(2|Data) + π̂(3|Data) = 1, we can solve for π̂(i|Data), i = 1, 2, 3. In particular,

    π̂(1|Data) = [1 + 1/Ô2 + 1/Ô3]⁻¹          = Ô2Ô3 / (Ô2Ô3 + Ô3 + Ô2) ,
    π̂(2|Data) = (1/Ô2)[1 + 1/Ô2 + 1/Ô3]⁻¹   = Ô3   / (Ô2Ô3 + Ô3 + Ô2) ,
    π̂(3|Data) = (1/Ô3)[1 + 1/Ô2 + 1/Ô3]⁻¹   = Ô2   / (Ô2Ô3 + Ô3 + Ô2) .

Using TETRA = 4.10 and PREG = 1.10, the assumption π(i) = ni·/n·· , and more numerical accuracy in the parameter estimates than was reported earlier,

    π̂(1|Data) = .433    π̂(2|Data) = .565    π̂(3|Data) = .002 .

Assuming π(i) = 1/3 gives

    π̂(1|Data) = .560    π̂(2|Data) = .438    π̂(3|Data) = .002 .

Note that the values of tetrahydrocortisone and pregnanetriol used are identical to those for case 5; thus, the π̂(i|Data)'s are identical to those listed in Table 4.13 for case 5.

To use the log-linear model approach illustrated here, one needs to fit a 3 × 21 table. Typically, a data file of 63 entries is needed. Three rows of the data file are associated with each of the 21 cases. Each data entry has to be identified by case and by type. In addition, the case variables should be included in the file in such a way that all three rows for a case include the corresponding case variables, TL and PL. Model (2) is easily fitted using GLIM.

It is easy to just fit log-linear or logistic models to data such as that in Table 4.11 and get m̂ij's or p̂ij's. If you treat these values as estimated
probabilities for being in the various populations, you are doing a Bayesian analysis with prior probabilities proportional to sample sizes. This is rarely an appropriate methodology.
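The allocation computation for the new case can be reproduced numerically. The following sketch uses the rounded parameter estimates reported above (so the posterior probabilities differ slightly from the more accurate .433, .565, and .002) and takes TL and PL to be the natural logs of the tetrahydrocortisone and pregnanetriol readings, as in the example; it is an illustration, not the GLIM fit used in the text.

    import numpy as np

    # Rounded estimates from fitting model (2); the zeros are side conditions.
    alpha    = {"A":  0.0,    "B": -20.06, "C": -28.91}
    gamma_TL = {"A": -16.29,  "B": -1.865, "C":  0.0}
    gamma_PL = {"A": -3.359,  "B": -3.604, "C":  0.0}
    n        = {"A": 6, "B": 10, "C": 5}      # sample sizes n_i.

    # New case: readings 4.1 and 1.10 on the original scales.
    TL, PL = np.log(4.1), np.log(1.10)

    def log_ratio(t1, t2):
        # Equations (3) and (4): log(p-hat_t1 / p-hat_t2).
        return ((alpha[t1] - alpha[t2])
                + (gamma_TL[t1] - gamma_TL[t2]) * TL
                + (gamma_PL[t1] - gamma_PL[t2]) * PL
                - np.log(n[t1] / n[t2]))

    r_AB = np.exp(log_ratio("A", "B"))    # about 1.27
    r_AC = np.exp(log_ratio("A", "C"))    # about 226

    def posteriors(prior):
        # Put the three estimated densities on a common scale (p-hat_C = 1),
        # then apply Bayes theorem and renormalize.
        p_hat = np.array([r_AC, r_AC / r_AB, 1.0])
        pi = np.array([prior["A"], prior["B"], prior["C"]])
        w = p_hat * pi
        return dict(zip("ABC", np.round(w / w.sum(), 3)))

    total = sum(n.values())
    print(posteriors({k: v / total for k, v in n.items()}))  # near .43, .57, .00
    print(posteriors({"A": 1/3, "B": 1/3, "C": 1/3}))        # near .56, .44, .00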
4.8 Exercises
Exercise 4.8.1. The auto accident data of Example 3.2.4 was actually a subset of a four-dimensional table. The complete data are given in Table 4.15. Analyze the data treating severity of injury as a response variable. What conclusions can you reach from examining the m̂hijk's, the odds, and odds ratios?

TABLE 4.15. Automobile Accident Data
                              Small Cars (h = 1)
                              Accident Type (k)
                        Collision                  Rollover
                        Injury (j)                 Injury (j)
Driver Ejected (i)   Not Severe   Severe      Not Severe   Severe
       No                350        150            60        112
       Yes                26         23            19         80

                            Standard Cars (h = 2)
                              Accident Type (k)
                        Collision                  Rollover
                        Injury (j)                 Injury (j)
Driver Ejected (i)   Not Severe   Severe      Not Severe   Severe
       No               1878       1022           148        404
       Yes               111        161            22        265
Exercise 4.8.2. Breslow and Day (1980) present data on the occurrence of esophageal cancer in Frenchmen. Explanatory factors are age and alcohol consumption. High consumption was taken to be anything over the equivalent of one liter of wine per day. The data are given in Table 4.16. Analyze the data as a logit model. In your analysis, consider the information on ordered age categories. Exercise 4.8.3. The data in the previous experiment is a series of 2 × 2 tables collected under five different age conditions. This is the same situation as the Mantel-Haenszel setup of Exercise 3.8.9. The Mantel-Haenszel test is one of conditional independence given age. It assumes that the model of no three-factor interaction holds. Respecify the test in terms of logit models. Exercise 4.8.4. Haberman (1978) reports data from the National Opinion Research Center on attitudes toward abortion (cf. Table 4.17). The data
TABLE 4.16. Occurrence of Esophageal Cancer

              Alcohol              Cancer
   Age      Consumption        Yes        No
  25-34        High              1          9
               Low               0        106
  35-44        High              4         26
               Low               5        164
  45-54        High             25         29
               Low              21        138
  55-64        High             42         27
               Low              34        139
  65-74        High             19         18
               Low              36         88
  75+          High              5          0
               Low               8         31
were collected over 3 years. Analyze the abortion attitude data treating attitude as a response variable. Respondents were identified by their years of education and their religious group. The groups used were Catholics, Southern Protestants, and other Protestants. Southern Protestants were taken as Protestants who live in or south of Texas, Oklahoma, Arkansas, Kentucky, West Virginia, Maryland, and Delaware. Attitudes toward abortion were determined by whether the respondent thought that legal abortions should be available under three sets of circumstances. The three circumstances are (a) a strong chance exists of a serious birth defect, (b) the woman’s health is threatened, and (c) the pregnancy was the result of rape. A negative response in the table consists of negative responses to all circumstances. A positive response is three positives. A mixed response is any other pattern. Find an appropriate model for the data. Interpret the model and draw conclusions from the estimates. (Haberman also presents similar data based on three different circumstances: the child is not wanted, the family is poor, and the mother unmarried.) Exercise 4.8.5. Feigl and Zelen (1965), Cook and Weisberg (1982), and Johnson (1985) give data on survival of 33 leukemia patients as a function of their white blood cell count and the existence of a certain morphological characteristic in the cells. The characteristic is referred to as either AG positive or AG negative. The binary response is survival of at least 52 weeks beyond the time of diagnosis. The data are given in Table 4.18. Fit a logistic regression model with separate slopes and intercepts for AG positives and negatives. Examine the data for influential observations. Consider whether a log transformation of the white blood cell count is useful. Evaluate models with (a) the same slope for both AG groups, (b) the same intercept for both
TABLE 4.17. Abortion Attitudes among Caucasian Christians

                                                Attitude
Year   Religion    Years of Education    Negative   Mixed   Positive
1974   Prot.             0-8                 7         16        49
1974   Prot.             9-12               10         26       219
1974   Prot.             12+                 4         10       131
1974   Prot. S.          0-8                 1         19        30
1974   Prot. S.          9-12                5         21       106
1974   Prot. S.          12+                 2         11        87
1974   Cath.             0-8                 3          9        29
1974   Cath.             9-12               15         30       149
1974   Cath.             12+                11         18        69
1973   Prot.             0-8                 4         16        59
1973   Prot.             9-12                6         24       197
1973   Prot.             12+                 4         11       124
1973   Prot. S.          0-8                 4         16        34
1973   Prot. S.          9-12                6         29       118
1973   Prot. S.          12+                 1          4        82
1973   Cath.             0-8                 2         14        32
1973   Cath.             9-12               16         45       141
1973   Cath.             12+                 7         20        72
1972   Prot.             0-8                 9         12        48
1972   Prot.             9-12               13         43       197
1972   Prot.             12+                 4          9       139
1972   Prot. S.          0-8                 9         17        30
1972   Prot. S.          9-12                6         10        97
1972   Prot. S.          12+                 1          8        68
1972   Cath.             0-8                14         12        32
1972   Cath.             9-12               18         50       131
1972   Cath.             12+                 8         13        64
AG groups, and (c) the same slope and intercept. Examine each model for influential observations.

TABLE 4.18. Data on Leukemia Survival

Survival   Cell Count   AG        Survival   Cell Count   AG
   1          2,300     +            1          4,400     −
   1            750     +            1          3,000     −
   1          4,300     +            0          4,000     −
   1          2,600     +            0          1,500     −
   0          6,000     +            0          9,000     −
   1         10,500     +            0          5,300     −
   1         10,000     +            0         10,000     −
   0         17,000     +            0         19,000     −
   0          5,400     +            0         27,000     −
   1          7,000     +            0         28,000     −
   1          9,400     +            0         31,000     −
   0         32,000     +            0         26,000     −
   0         35,000     +            0         21,000     −
   0         52,000     +            0         79,000     −
   0        100,000     +            0        100,000     −
   0        100,000     +            0        100,000     −
   1        100,000     +
Exercise 4.8.6. Finney (1941) and Pregibon (1981) present data on the occurrence of vasoconstriction in the skin of the fingers as a function of the rate and volume of air breathed. The data are reproduced in Table 4.19. A constriction value of 1 indicates that constriction occurred. Analyze the data. Exercise 4.8.7. Mosteller and Tukey (1977) reported data on verbal test scores for sixth graders. They used a sample of 20 Mid-Atlantic and New England schools taken from The Coleman Report. The dependent variable y was the mean verbal test score for each school. The predictor variables were x1 — staff salaries per pupil, x2 — percent of sixth grade fathers employed in white collar jobs, x3 — a composite score measuring socioeconomic status, x4 — the mean score on a verbal test administered to teachers, and x5 — one-half of the sixth grade mothers’ mean number of years of schooling. Schools meet a performance standard set for them (by me) if their average verbal test score is above 37. The data are given in Table 4.20. (a) Using logistic regression on the 0-1 scores, find a good model for predicting whether schools meet the standard. (b) Using standard regression on y, find several of the best predictive models. Compare these to your logistic regression model.
TABLE 4.19. Data on Vasoconstriction

Constriction   Volume   Rate        Constriction   Volume   Rate
     1          0.825    3.7             0           2.0     0.4
     1          1.09     3.5             0           1.36    0.95
     1          2.5      1.25            0           1.35    1.35
     1          1.5      0.75            0           1.36    1.5
     1          3.2      0.8             1           1.78    1.6
     1          3.5      0.7             0           1.5     0.6
     0          0.75     0.6             1           1.5     1.8
     0          1.7      1.1             0           1.9     0.95
     0          0.75     0.9             1           0.95    1.9
     0          0.45     0.9             0           0.4     1.6
     0          0.57     0.8             1           0.75    2.7
     0          2.75     0.55            0           0.03    2.35
     0          3.0      0.6             0           1.83    1.1
     1          2.33     1.4             1           2.2     1.1
     1          3.75     0.75            1           2.0     1.2
     1          1.64     2.3             1           3.33    0.8
     1          1.6      3.2             0           1.9     0.95
     1          1.415    0.85            0           1.9     0.75
     0          1.06     1.7             1           1.625   1.3
     1          1.8      1.8
TABLE 4.20. Verbal Test Scores

Obs.    x1      x2        x3      x4      x5    Score      y
  1    3.83   28.87      7.20   26.60   6.19      1     37.01
  2    2.89   20.10    −11.71   24.40   5.17      0     26.51
  3    2.86   69.05     12.32   25.70   7.04      0     36.51
  4    2.92   65.40     14.28   25.70   7.10      1     40.70
  5    3.06   29.59      6.31   25.40   6.15      1     37.10
  6    2.07   44.82      6.16   21.60   6.41      0     33.90
  7    2.52   77.37     12.70   24.90   6.86      1     41.80
  8    2.45   24.67     −0.17   25.01   5.78      0     33.40
  9    3.13   65.01      9.85   26.60   6.51      1     41.01
 10    2.44    9.99     −0.05   28.01   5.57      1     37.20
 11    2.09   12.20    −12.86   23.51   5.62      0     23.30
 12    2.52   22.55      0.92   23.60   5.34      0     35.20
 13    2.22   14.30      4.77   24.51   5.80      0     34.90
 14    2.67   31.79     −0.96   25.80   6.19      0     33.10
 15    2.71   11.60    −16.04   25.20   5.62      0     22.70
 16    3.14   68.47     10.62   25.01   6.94      1     39.70
 17    3.54   42.64      2.66   25.01   6.33      0     31.80
 18    2.52   16.70    −10.99   24.80   6.01      0     31.70
 19    2.68   86.27     15.03   25.51   7.51      1     43.10
 20    2.37   76.73     12.77   24.51   6.96      1     41.01
Exercise 4.8.8. The Logistic Distribution. Show that F(x) = e^x/(1 + e^x) satisfies the properties of a cumulative distribution function (cdf). Any random variable with this cdf is said to have a logistic distribution.

Exercise 4.8.9. Stimulus–Response Studies. The effects of a drug or other stimulus are often studied by choosing r doses of the drug (levels of the stimulus), say x1, . . . , xr, and giving the dose xj to each of Nj subjects. The data consist of the number yj who exhibit some predetermined response. Often this response is the death of the subject, but it can be any measure of the effectiveness of the stimulus. Typically in dose-response studies, interest centers on the median effective dose, the ED(50), or if the response is death, the median lethal dose, the LD(50). The LD(50) is that dose for which the probability is 0.5 that a subject will die. Frequently, a model of the form

    log[pj/(1 − pj)] = α + β log(xj)

is fitted to such data. Assume this model holds for any and all doses. (a) Is the sampling scheme appropriate for a logistic regression? (b) How could you estimate the LD(50)? Exercises 11.8.2 and 11.8.3 give additional results on inference for the LD(50). Suppose that for each individual in the population there is a minimum dose x to which the individual will respond. (The individual is assumed to respond to all doses larger than x.) Let the random variable X be this minimum susceptibility for an individual chosen at random. (c) Give an estimate of the median of X. (d) Give an estimate of the 90th percentile of the distribution of X. (e) What is the distribution of X?

Exercise 4.8.10. Probit Analysis. An alternative to the logistic analysis of dose-response data is probit analysis. In probit analysis, the model is

    Φ⁻¹(pj) = α + β log(xj)

where Φ(·) is the cumulative distribution function of a standard normal distribution. Graph Φ(·) and the logistic cdf and compare the general shapes of the distributions. In light of the two previous exercises, give a brief summary of the similarities and differences of logit and probit analysis. For more information on probit analysis, the interested reader should consult Finney (1971).

Exercise 4.8.11. Woodward et al. (1941) report several data sets, one of which examines the relationship between exposure to chloracetic acid
and the death of mice. Ten mice were exposed at each dose level. The data are given in Table 4.21. Doses are measured in grams per kilogram of body weight. Fit the logistic regression model of Exercise 4.8.9 and estimate the LD(50). Try to determine how well the model fits the data.

TABLE 4.21. Lethality of Chloracetic Acid

    Dose     Fatalities         Dose     Fatalities
    .0794        1              .1778        4
    .1000        2              .1995        6
    .1259        1              .2239        4
    .1413        0              .2512        5
    .1500        1              .2818        5
    .1588        2              .3162        8
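For readers who want to experiment with the model of Exercise 4.8.9 on these counts, the sketch below shows one possible fit using the statsmodels package (an assumption; the text itself uses packages such as GLIM). It fits the binomial GLM with a logit link in log(dose) and reads off the dose at which the fitted probability of death is one-half.

    import numpy as np
    import statsmodels.api as sm

    # Table 4.21: dose (g/kg) and deaths out of 10 mice at each dose.
    dose   = np.array([.0794, .1000, .1259, .1413, .1500, .1588,
                       .1778, .1995, .2239, .2512, .2818, .3162])
    deaths = np.array([1, 2, 1, 0, 1, 2, 4, 6, 4, 5, 5, 8])

    # log[p/(1-p)] = alpha + beta * log(dose)
    X = sm.add_constant(np.log(dose))
    fit = sm.GLM(np.column_stack([deaths, 10 - deaths]), X,
                 family=sm.families.Binomial()).fit()
    alpha, beta = fit.params

    # The fitted probability equals 0.5 where log(dose) = -alpha/beta.
    print("estimated LD(50):", np.exp(-alpha / beta))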
Exercise 4.8.12. Consider a sample of j = 1, . . . , r independent binomials yj ∼ Bin(Nj, pj), each with a covariate xj. Suppose that for some cumulative distribution function F(·), pj = F(xj). Show that for some transformation zj = g(xj) and parameters α and β, this logistic regression model holds:

    log[pj/(1 − pj)] = α + βzj .
Exercise 4.8.13. The data of Exercise 4.8.2 are actually a retrospective study. A sample of cancer patients was compared to a sample of men drawn from the electoral lists of the department of Ille-et-Vilaine in Brittany. Reanalyze the data in light of this knowledge.

Exercise 4.8.14. Any multinomial response model can be viewed as the model for an I × J table. Assume product-multinomial sampling from J independent multinomials each with I categories. Define

    πij = pij / Σ_{h=i}^{I} phj

so that the continuation ratios introduced in Section 4.2 are

    πij / (1 − πij) = pij / Σ_{h=i+1}^{I} phj .
Let rij = n·j − Σ_{h=1}^{i−1} nhj ; show that the product-multinomial likelihood

    ∏_{j=1}^{J} [ n·j!  ∏_{i=1}^{I} ( pij^{nij} / nij! ) ]

can be written as the product of binomial likelihoods, i.e.,

    ∏_{j=1}^{J} ∏_{i=1}^{I−1} C(rij, nij) πij^{nij} (1 − πij)^{rij − nij} ,

where C(rij, nij) denotes the binomial coefficient "rij choose nij".
Using this result and maximum likelihood estimation, show that a set of continuation ratio models can be fitted simultaneously to the entire table by fitting each continuation ratio model separately. Note that the chi-square statistics for fitting each continuation ratio model can be added to get a chi-square statistic for the entire table.

Exercise 4.8.15. Give the log-linear model corresponding to (4.1.1).

Exercise 4.8.16. Analyze the trauma data that are described in Example 13.2.2.
5 Independence Relationships and Graphical Models
As mentioned in Section 3.7, all of the general principles of testing and estimation presented for three-factor tables also apply when there are additional classification factors. The main difference in working with higherdimensional tables is that things become more complicated. First, there are many more ANOVA type models to consider. For example, in a four-factor table, there are 113 ANOVA models that include all of the main effects. In five-factor tables, there are several thousand models to consider. Second, a great many of the models require iterative methods for obtaining maximum likelihood estimates. Finally, interpretation of higher-dimensional models is more difficult. In this chapter, we examine interpretations of models for four and higherdimensional tables, graphical models, conditions that allow tables to be collapsed, and a variety of graphical models known as recursive causal models.
5.1 Model Interpretations
This section provides tools for interpreting log-linear models for higherdimensional tables. The interpretations are based on independence and conditional independence. The tools are based on viewing higher-dimensional tables as three-dimensional tables. In Section 2, we present alternative methods based on exploiting the relationships between graph theory and conditional independence.
An example of a model with four factors is

    log(mhijk) = u + u1(h) + u2(i) + u3(j) + u4(k) + u12(hi) + u13(hj)
                 + u23(ij) + u123(hij) + u14(hk) + u24(ik) .
Eliminating redundant parameters gives log(mhijk ) = u123(hij) + u14(hk) + u24(ik) . The shorthand notation for this model is [123][14][24]. Our discussion of model interpretations will be based exclusively on the shorthand notation for models. Consider the model [123][124]. If we think of all combinations of factors 1 and 2 as a single factor, then we get a three-factor table with factors (12), 3, and 4. The model [(12)3][(12)4] becomes a three-dimensional model of conditional independence. Given the levels of factors 1 and 2, factor 3 is independent of factor 4. This trick of combining factors to reduce a four-factor model into a threefactor model is very useful. The model [123][14] can be considered as a three-factor model in which all combinations of factors 2 and 3 are a single factor. The model [123][14] can then be interpreted as saying that given factor 1, factor 4 is independent of factors 2 and 3. Note that the model puts no constraints on the relationship between factors 2 and 3, and that conditional probabilities involving factors 2, 3, and 4 can change with the level of factor 1, cf. Example 1.1.5. Using the principle of combining factors, it is easy to see that [123][4] indicates that factor 4 is independent of factors 1, 2, and 3, but that the relationship between factors 1, 2, and 3 is unspecified. Also, [12][34] indicates that factors 1 and 2 may be related, factors 3 and 4 may be related, but 1 and 2 are independent of 3 and 4. A second useful trick in interpreting models is looking at larger models. If a particular model is true, then any larger model is also true. If the larger model has an interpretation in terms of independence, then the smaller model admits the same interpretation. For example, consider the three-factor models [12][3] and [12][23]. The smaller model [12][3] indicates that factors 1 and 2 are independent of factor 3. In particular, factors 1 and 3 are independent given the level of factor 2. This is the interpretation of the larger model [12][23]. The interpretation of the larger model is also valid for the smaller model. Note, however, that if two models are both valid and both are interpretable, then the smaller model gives the more powerful interpretation. Often, we want to identify the smallest interpretable model that fits. Now consider the four-factor model [12][13][14]. One larger model is [123][14], so factors 2 and 3 are independent of factor 4 given factor 1.
Similarly, [12][134] and [13][124] are also larger models, so we find that given factor 1, the other three factors are all independent. Note that the structure of the model [12][13][14] makes this interpretation almost selfevident. Factor 1 is included in all three terms, so it is the variable that is fixed in the conditional probabilities. Factors 2, 3, and 4 are in separate terms, so they are independent given factor 1. Exercise 5.1. By examining the probabilities phijk , show that the three larger models imply conditional independence for factors 2, 3, and 4 in [12][13][14]. A more complicated example is [12][13][24]. One larger model is [13][124]. Thus, given factor 1, factor 3 is independent of factors 2 and 4. Another larger model is [24][123]; thus, given factor 2, factor 4 is independent of factors 1 and 3. Finally, consider the model [12][13][14][23]. The larger model [123][14] implies that 2 and 3 are independent of 4 given 1. In fact, this is the only simple interpretation associated with [12][13][14][23]. To see this, ignore factor 4. The three-factor model has the terms [12][13][23], which has no simple interpretation as a three-dimensional model. The next largest model is to replace [12][13][23] with [123]. If we do this in the four-factor model, we replace [12][13][14][23] with [123][14]. Any other model with a simple interpretation would have to be larger than [123][14]. However, because [123][14] already has a simple interpretation, we have the best explanation available. (Recall that the smaller the model, the more powerful the interpretation in terms of independence.) Table 5.1 summarizes the discussion above and also includes some additional models. Note that it is the pattern of the models that determines interpretability. Just as [12][13][14] indicates that 2, 3, and 4 are independent given 1, the model [12][23][24] indicates that factors 1, 3, and 4 are independent given factor 2. Any relabeling of the factors in Table 5.1 gives another interpretable model. It is important to remember that while models imply certain interpretations, more than one model generates the same interpretation. For example, the model [123][14] gives the interpretation that given 1, factors 2 and 3 are independent of factor 4. Conversely, the condition that given 1, factors 2 and 3 are independent of factor 4 implies that [123][14] must hold. However, the independence condition is also consistent with the smaller model [12][13][23][14]. This smaller model is equivalent to the independence condition along with the additional condition that there is no u123 interaction. Goodman (1970, 1971) and Haberman (1974a) introduced the concept of decomposable log-linear models. These models are also called multiplicative. The class of decomposable models consists of all models that have closed form maximum likelihood estimates. They also have simple interpretations in terms of independence or conditional independence. For example, all models for three factors other than [12][13][23] are decomposable. In Table
TABLE 5.1. Some Models and Their Conditional Independence Interpretations

Model                    Interpretation
[123][124]
Given 1 and 2, factors 3 and 4 are independent.
[123][14][24]*
Given 1 and 2, factors 3 and 4 are independent.
[123][14]
Given 1, factor 4 is independent of factors 2 and 3.
[12][13][14][23]*
Given 1, factor 4 is independent of factors 2 and 3.
[123][4]
Factor 4 is independent of factors 1, 2, and 3.
[12][23][34][41]
Given 2 and 4, factors 1 and 3 are independent. Given 1 and 3, factors 2 and 4 are independent.
[12][13][14]
Given 1, factors 2, 3, and 4 are all independent.
[12][13][24]
Given 1, factor 3 is independent of factors 2 and 4. Given 2, factor 4 is independent of factors 1 and 3.
[12][34]
Factors 1 and 2 are independent of factors 3 and 4.
[12][13][4]
Factor 4 is independent of factors 1, 2, and 3. Given 1, factor 2 is independent of factor 3.
[12][3][4]
Factor 3 is independent of factors 1, 2, and 4. Factor 4 is independent of factors 1, 2, and 3.
[1][2][3][4]
All factors are independent of all other factors.
These models imply their interpretations; however, the interpretations do not imply the models.
5.1, all models except the two with asterisks and [12][23][34][41] are decomposable. Note that [12][23][34][41] is not decomposable but is still characterized by its conditional independence relations. It is particularly easy to work with decomposable models because they have very simple structure. Often, results that are difficult or impossible to prove for arbitrary loglinear models can be shown for decomposable models, e.g., Bedrick (1983) and Koehler (1986). An exact characterization of decomposable models is given in the next section.
5.2 Graphical and Decomposable Models
Models that have interpretations in terms of conditional independence are known as graphical models. The terminology stems from the relationship of these models to graph theory. Berge (1973) gives a discussion of graph theory that is particularly germane but does not give statistical applications. Edwards and Kreiner (1983) give an overview of the use of graphical loglinear models. More recently, Edwards (1995) provides an introduction to the uses of graphical models in statistics, including applications other than log-linear models. More advanced recent books include Whittaker (1990) and Lauritzen (1996). Graphical models are determined by their two-factor interactions. The basic idea is that any graphical model containing all of the terms u12 , u13 , and u23 must also include u123 . To extend this, consider a graphical model that includes u12 , u13 , u23 , u24 , and u34 . The terms u12 , u13 , and u23 imply that u123 must be in the graphical model and u23 , u24 , and u34 imply that u234 must be in the model. A graphical model must contain u1234 if it includes all six of the two-factor terms that can be formed from the four factors, i.e., if it includes u12 , u13 , u14 , u23 , u24 , and u34 . Definition 5.2.1. A model is graphical if, whenever the model contains all two-factor terms generated by a higher-order interaction, the model also contains the higher-order interaction. In graph theory, the corresponding idea is that of a conformal graph. Obviously, we need to discuss both models and the two-factor effects generated by higher-order terms. The model [1234][345] is determined by the four-factor term u1234 and the three-factor term u345 . The four-factor term u1234 generates the two-factor terms u12 , u13 , u14 , u23 , u24 , and u34 . The three-factor u345 term subsumes the two-factor terms u34 , u35 , and u45 . We will refer to the four-factor term [1234] as generating [12], [13], [14], [23], [24], and [34]. Similarly, the three-factor term [345] generates the two-factor terms [34], [35], and [45]. Conversely, any graphical model that contains the two-factor effects [12], [13], [14], [23], [24], and [34] must include
[1234] (or a larger term that subsumes [1234]) and a graphical model that includes [34], [35], and [45] also includes either [345] or a larger term. Example 5.2.2. Three-Factor Models. The two-factor terms that are possible with three-factors are [12], [13], and [23]. The two-factor effects generate only one higher-order interaction, [123]. The only three-factor models that contain all of the two-factor terms are [123] and [12][13][23]. The model [123] contains all the two-factor effects and the higher-order term, so [123] is graphical. The model [12][13][23] contains all the two-factor effects generated by [123] but does not contain the higher-order term, so it is not graphical. None of the other three-factor models contain all of the two-factor interactions, so, by default, they are all graphical models. Example 5.2.3. Four-Factor Models The model [123][24] is graphical because it includes the three-factor term [123] and it does not contain all of the two-factor terms generated by any other higher-order terms. This follows because all higher-order terms other than [123] involve factor 4, so each generates at least 2 two-factor terms that involve factor 4. The model [123][24] includes only one such term, [24], so, by default, the model does not include all the two-factor terms for any higher-order interaction other than [123]. Similarly, the model [123][124] is graphical because the two-factor terms that are present only generate the three-factor interactions in the model. A model that is not graphical is [12][13][14][23]. It includes all of [12], [13], and [23], but it does not include [123]. The model [123][124][234] is not graphical because it contains all six of the possible two-factor effects but does not contain [1234]. Except for the models indicated by asterisks, all of the models in Table 5.1 are graphical models. Any log-linear model can be embedded in a graphical model. This follows immediately from the fact that the saturated model is the graphical model having all possible two-factor effects. To interpret a specific log-linear model, one seeks the smallest graphical models that contain the specific model. Example 5.2.4. The nongraphical model [12][13][14][23] is a submodel of the graphical model [123][14]. The nongraphical model retains the conditional independence interpretation, 2 and 3 independent of 4 given 1 which is appropriate for the larger graphical model; however, the nongraphical model involves additional constraints. Amazingly, graphical models can be displayed graphically. (Wonders never cease!) In this context, graphs are not directly related to Cartesian coordinates but rather to graph theory. A graph consists of vertices (nodes)
and edges. Vertices correspond to factors in log-linear models. Edges correspond to two-factor effects. Note that graphs based on two-factor effects would be useless without a convention that dictates how two-factor effects determine a log-linear model. Thus, pictures of graphical models are worthless until after graphical models have been defined. A key feature of this subject is the one-to-one correspondence between graphical log-linear models and graphs. Every model determines a graph and every graph determines a model. Example 5.2.5. Consider the model [12][23][34][41]. Factors are points on a graph (vertices) and two-factor interactions are allowable paths (edges) between points. The graph is given below. 2
t
t 3
1
t
t 4
Now consider the model [123][134]. The two-factor terms generated by [123] are [12], [23], [13] and the terms [13], [34], [14] are generated by [134]. Note that [13] is common to both sets of two-factor terms. The corresponding graph is given below. 2
t
t 3
1
t
t 4
We can also read log-linear models directly from the corresponding graph. For example, the graph 2
t @
t 3 @ @ @
1
t
@ @t 4
has the two-factor effects [12], [24], [14]; these generate the three-factor term [124]. The graph also contains the edges [23], [34], [24] that generate [234]. We have accounted for all of the edges in the graph, so the model is [124][234]. As another example, consider the following graph. 2
t @
t 3 @ @ @
1
t
@ @t 4
Again the edges [12], [24], [14] generate the three-factor term [124]. However, this graph does not have all of the edges that generate [234] because the graph does not contain [23]. The term [34] is not included in any larger term, so it must be included separately. The model is [124][34]. Finally, consider a seven-factor graph. 1 tXX 6 t2 H t @ XXXX HH XXX HX 5 @ H X H t @ H H @ HH 3 @ H @t Ht 7 4 t
The graph contains all possible edges between the vertices 1, 2, 3, and 5; thus, the graphical log-linear model includes the term [1235]. Similarly, the graph contains all possible edges between the vertices 5, 6, and 7; therefore the log-linear model includes the term [567]. Finally, the graph contains the isolated edge [34], so this term must be in the model. These three terms account for all of the edges in the graph, so the graphical log-linear model is [1235][34][567]. Note that the graph also contains all possible edges between other sets of vertices; for example, all of the edges between 1, 2, and 5 are in the graph so the model includes [125]. However, [125] has already been forced into the model by the inclusion of [1235]. The graphical log-linear model is determined by the largest sets of vertices that include all possible edges between them. The set {1, 2, 5} is unimportant because it is contained in the set {1, 2, 3, 5}. A set of vertices for which all the vertices are connected by edges is complete. Both the sets {1, 2, 5} and {1, 2, 3, 5} are complete. A maximal complete set (i.e., a complete set that is not contained in any
other complete set), is called a clique. The cliques of a graph determine the graphical log-linear model. The set {1, 2, 3, 5} is a clique, but the set {1, 2, 5} is not maximal, so it is not a clique. In the graph of the model [1235][34][567], the cliques are {1, 2, 3, 5}, {3, 4}, and {5, 6, 7}. There is an obvious correspondence between the cliques and the [·] notation defining the model. In the future, we will simply indicate the cliques as [1235], [134], and [567]. Because the cliques determine the model, the concept of a clique is of fundamental importance. Surprisingly, that importance need not be made explicit in the remainder of this discussion. However, in Wermuth’s method of model selection, the role of cliques cannot be ignored, cf. Section 6.5. Perhaps the most important reason for graphing log-linear models is that independence relations can be read directly from the graph. To do this, we need to introduce the concept of a chain. A chain is simply a sequence of edges that lead from one factor (vertex) to another factor. In the graph 1 X t X t2 H t 6 @ XXXX HH XXX HX 5 @ H X H t @ H H @ HH 3 @ H @ Ht 7 4 t t
there are a huge number of chains. For example, there is a chain from 1 to 2 to 5, a chain from 1 to 2 to 5 to 6, a chain from 1 to 2 to 5 to 7 to 6, a chain from 1 to 2 to 5 to 3 to 4, and many others. Note that a chain involves not only the end points but also all the intermediate points. In other words, there is a chain from 1 to 3 to 5 to 7, but the graph contains no chain from 1 to 3 to 7 because the graph does not include the edge [37]. We allow chains to begin and end at the same point, e.g., 3 to 1 to 5 to 3; in other words, round-trips are allowed. However, we do not allow the path to include a factor more than once. For example, we do not allow 3 to 5 to 6 to 7 to 5 to 1. Even though this path never uses the same edge twice, it does go through factor 5 twice. In a sense, there is no real loss in excluding such paths because we can still get from factor 3 to factor 1 by taking the path 3 to 5 to 1. We are just not allowing ourselves to drive around in circles. A formal definition of chains is given below. Definition 5.2.6. Let h and j be factors and let {i1 , . . . , ik } be a sequence of factors that are distinct from each other and from h and j. The sequence of edges Chj = {[hi1 ], [i1 i2 ], . . . , [ik j]} is a chain between h and j. A graph contains the chain Chj if the graph contains all of the edges included in the chain.
5.2 Graphical and Decomposable Models
187
We get a degenerate chain if we start at h, go to another vertex i, and back to h. This is the only situation in which an edge could appear twice within the sequence of edges defining a chain. Nonetheless, this chain only contains one edge. The key result on independence follows. Theorem 5.2.7. Let the sets A, B, and C denote disjoint subsets of the factors in a graphical model. The factors in A are independent of the factors in B given C if and only if every chain between a factor in A and a factor in B involves at least one factor in C. Proof.
2
See Darroch, Lauritzen, and Speed (1980).
Example 5.2.8. The four-factor model [12][13][24] is illustrated below. It is graphical, so Theorem 5.2.7 applies. Rewrite the model as [31][12][24]. By the theorem, factor 3 is independent of 2 and 4 given 1, factors 3 and 1 are independent of 4 given 2, and factors 3 and 4 are independent given 1 and 2. There are three independence conditions here and all are necessary. For example, the model that only specifies 3 independent of 2 and 4 given 1 is [31][124], not [12][13][24].
3 t
1 t
2 t
4 t
Theorem 5.2.7 also implies certain marginal independence relations. For example, factor 3 is independent of factor 2 given 1. This is a statement about the marginal distribution of factors 1, 2, and 3. Consider the model [12][3]. There are no chains connecting factors 1 and 2 with 3, so every chain that connects them involves at least one member of the empty set. Thus, 1 and 2 are independent of 3 given the factors in the empty set; i.e., 1 and 2 are independent of 3.
1 t
2 t
3 t
Marginally, we can conclude that 3 is independent of 2 and that 3 is independent of 1. In the model [123][24][456], 1 and 3 are independent of 5 and 6 given 2 and 4. Similarly, 1 and 3 are independent of 4, 5, and 6 given 2. Also, 5 and 6 are independent of 1, 2, and 3 given 4.
188
5. Independence Relationships and Graphical Models
1 H t H H 3 t
HH 2 Ht
t
5
4 t H HH H H Ht 6
Many marginal independence relationships hold for this model. For example, 1 and 3 are independent of 4 and 5 given 2. Exercise 5.2. (a) Graph the 10 graphical models in Table 5.1. (b) List 10 of the independence relationships in [12][23][34][41][25][567]. 6 t
1 t H HH HH 2 t Ht 4 H HH H H H t
5 t H H HH H Ht 7
3
The decomposable models discussed at the end of the previous section form a subset of the graphical models. They are the graphical models that have the additional condition of being chordal. The terms given in parentheses in the example below are graph theory terms. Example 5.2.9. [23], [34], [41]. We (closed chain) back from 2 to 3, from 3
Suppose that a model contains the interactions [12], can start at any of the points and travel in a cycle to that point. For example, we can travel from 1 to 2, to 4, and from 4 back to 1. 2
t
t 3
1
t
t 4
The model (graph) is chordal if every such cycle among four or more vertices has a shortcut. A shortcut is called a chord. The cycle given above has two
5.2 Graphical and Decomposable Models
189
possible shortcuts: [31] and [24]; adding either or both of these would make the model chordal. For example, if the model also includes [31], a cycle from 1 back to 1 can be shortened by traveling [12], [23], [31].
2
t
t 3
1
t
t 4
Similarly, a trip from 2 back to 2 can be shortened by traveling [23], [31], [12]. The graphical model generated by the terms [12], [23], [34], [41], and [31] is [123][134]. This decomposable model has the interpretation that given factors 1 and 3, factors 2 and 4 are independent. The maximum likelihood estimates are m ˆ hijk = nhij· nh·jk /nh·j· . The length of a chain is the number of edges in it. The closed chain [12], [23], [34], [41] has length four. The closed chain [12], [23], [31] has length three. A closed chain among four or more vertices is a closed chain of length four or more. Consider the model [12][25][53][34][41][567] given below in graphical form.
1 t
4 t
t2 H H HH
t 6
5 H H t HH HH 3 H Ht 7 t
This is not decomposable because the closed chain [12], [25], [53], [34], [41] involves five vertices and has no chords. The model requires the addition of at least two additional two-factor effects to convert it into a decomposable model. Adding one edge to the offending chain still leaves a cycle of length four without a chord. For example, adding [15] leaves the cycle [15], [53], [34], [41] without a chord. The following graph adds both [15] and [13].
190
5. Independence Relationships and Graphical Models
1 X t X t2 H t 6 @ XXXX HH XXX 5 @ HX H X H t @ H HH @ 3 H @ H @ Ht 7 4 t t
This is now the graph of a decomposable log-linear model. All cycles of length four or more have a chord. The corresponding log-linear model is [143][135][125][567]. These ideas are formalized in the following definition. Definition 5.2.10. Consider the chain Chj = {[hi1 ], [i1 i2 ], . . . , [ik j]}. The length of the chain is the number of edges (distinct elements) in Chj . A chain C0 is a reduced chain relative to Chj if C0 is a chain between h and j with length less than that of Chj and if every factor involved in C0 is also involved in Chj . Note that it is the factors (vertices) that define reduced, not the effects (edges). A closed chain is a chain between a factor h and itself with length greater than 1. In particular, the chain from h to h, Chj = {[hi], [ih]}, contains only one (distinct) edge, so it has length 1 and does not form a closed chain. Any two-factor term [ir is ] is a chord of the closed chain Chh = {[hi1 ], . . . , [ir−1 ir ], [ir ir+1 ], . . . , [is−1 is ], [is is+1 ], . . . , [ik h]} if the sequence {[hi1 ], . . . , [ir−1 ir ], [ir is ], [is is+1 ], . . . , [ik h]} is a closed reduced chain relative to Chh . It is allowable for either factor in [ir is ] to be h. A model is chordal if every closed chain of length k ≥ 4 generated by the model has a chord that is in the model. A model is decomposable if it is both graphical and chordal. By definition, a closed chain must have a length of at least 2; it follows immediately that a closed chain must have a length of at least 3. Clearly, a closed chain of length three cannot have a chord, so it is natural that the definition of chordal models involves closed chains of length 4 or more. It is possible for a model to be chordal without being graphical. Chordal models have restrictions on the two-factor terms; they place no requirements on higher-order terms. In graph theory, decomposable models correspond to acyclic hypergraphs. Example 5.2.11. The effects [12], [23], [34] define a chain from 1 to 4. The effects [12], [24] define a reduced chain from 1 to 4. The effects in the model [12][23][34][41] define a closed chain from 1 back to 1 but also from 2 back to 2, from 3 back to 3, and from 4 back to 4. To see the last of these,
5.2 Graphical and Decomposable Models
191
observe that the model contains the closed chain [41], [12], [23], [34]. The possible chords for these closed chains are [31] and [24]. To see that [24] is a chord, observe that [12], [24], [41] defines a closed reduced chain of [12], [23], [34], [41]. 2
t @
t 3 @ @ @
1
t
@ @t 4
The nongraphical model [12][23][34][41][24] is chordal because any closed chain of length four has a chord that is in the model. This model has no closed chains of length greater than four because it involves only four factors. The model [12][23][34][41][24] is not decomposable because it is not graphical. It contains all of [12], [41], and [24] but not [124]. The corresponding decomposable model is [124][234]. This model generates precisely the two-factor terms [12], [23], [34], [41], and [24], so it is both graphical and chordal. With four factors, the only graphical model that is not decomposable is [12][23][34][41]. Decomposable models have closed form estimates. We illustrate one simple case. Example 5.2.12. 3 t
Consider the graph 1 t
2 t
4 t
The corresponding model is [12][13][24]. The probability structure can be read from the model, phi·· ph·j· p·i·k , phijk = ph··· p·i·· where the terms in the numerator are determined by the terms in the model (the marginal probabilities in the numerator correspond to the margins fitted by the model) and the terms in the denominator correspond to the factors that appear in more than one term in the model. Marginal probabilities are estimated from marginal tables, e.g., pˆhi·· = nhi·· /n···· . The estimated expected counts are nhi·· nh·j· n·i·k . m ˆ hijk = n···· pˆhijk = nh··· n·i··
192
5. Independence Relationships and Graphical Models
Decomposable models are closely related to recursive causal models. Recursive causal models use ideas from directed graphs to indicate causation. As mentioned in the introduction to Chapter 4, causation is not something that can be inferred from data. Any causation must be inferred from other sources. Recursive causal models are introduced in Section 4. The interested reader can also consult the relevant literature, e.g., Wermuth and Lauritzen (1983), Kiiveri, Speed, and Carlin (1984), and the fine expository paper by Kiiveri and Speed (1982). The interplay between graph theory and statistics is a fascinating subject with implications for log-linear models, covariance selection, factor analysis, structural equation models, artificial intelligence, and database management. Reviews are given by Kiiveri and Speed (1982), Edwards (1995), Whittaker (1990), and Lauritzen (1996). For applications to log-linear models, see the references listed in the previous paragraph along with Darroch, Lauritzen, and Speed (1980). These articles cite a wealth of related work including the important contributions of Leo Goodman and Shelby Haberman.
5.3
Collapsing Tables
One important function of statistics is to summarize large batches of numbers. This is such a fundamental aspect of statistics that it is easily overlooked. For example, formal theories of statistics are generally based on the use of sufficient statistics, cf. Cox and Hinkley (1974). Intuitively, a sufficient statistic is simply a summary of the data that is sufficient for drawing valid conclusions about the data. The use of sufficient statistics is an enormous advantage both theoretically and practically. An early step in analyzing many sets of data is the construction of a table. Tables organize data in a way that makes the data more understandable. Clearly, small tables are easier to understand than large tables. For example, a 3 × 5 table is typically easier to understand than a 3 × 5 × 4 table. In this section, we establish conditions that, if satisfied, allow us to collapse a 3 × 5 × 4 table of counts into a 3 × 5 table and still draw valid conclusions. Recall that collapsing is not always possible. Simpson’s paradox is precisely the result of collapsing a table that cannot be validly collapsed. First, we discuss collapsing in three-factor tables and then extend the discussion to higher-order tables. Collapsed tables are also used in analysis of variance. The three-factor ANOVA model yijk
= µ + αi + βj + γk + (αβ)ij + (αγ)ik + (βγ)jk + eijk ,
5.3 Collapsing Tables
193
i = 1, . . . , I, j = 1, . . . , J, k = 1, . . . , K, = 1, . . . , N , is a model for analyzing the I × J × K table of means y¯ijk· . It is well known that with no three-factor (αβγ) interaction, each of the two-factor interactions can be examined by looking at the corresponding two-factor marginal table. For example, the two-factor interaction (αβ) can be investigated using the I × J table of y¯ij·· ’s. The situation with tables of counts is more complex. In general, if a loglinear model has no three-factor interaction and if all two-factor interactions exist, it is not valid to draw conclusions about two-factor interactions from the two-factor marginal tables. As discussed earlier, two-factor interactions are closely related to odds ratios. A three-factor table is said to be collapsible over factor 1 if the odds ratios in the marginal table p·jk are identical to the odds ratios for each row of the three-way table. In other words, we can collapse on rows (factor 1) if for all i, j, j , k, and k , pijk pij k p·jk p·j k = . p·j k p·jk pij k pijk
(1)
If this is true, we can draw valid inferences about the relationship between factors 2 and 3 by looking only at the marginal table. (This is clearly an easier task than examining a separate two-way table for each level of factor 1.) As shown in equation (3.2.3) in the subsection Odds Ratios and Independence Models, equation (1) holds if rows and columns are independent given layers. So, under this model, the relationship between columns and layers does not depend on rows. Similarly, the row-layer relationship can be investigated in the row-layer marginal table if rows and columns are independent given layers. Note that in order to have equation (1) hold, it is not necessary that rows and columns be independent given layers. It is easily seen that if rows and layers are independent given columns, then (1) still holds. Moreover, if both models [13][23] and [12][23] hold, then we must have [1][23], so rows are independent of both columns and layers. Obviously, in this case, collapsing over rows is allowable. The validity of collapsing over a factor is a property of the parameters pijk . Data analysis is based on the observed values nijk . It is important to realize that the MLE of the odds ratio p·jk p·j k /p·j k p·jk under either model [13][23] or [12][23] is n·jk n·j k /n·j k n·jk and that this is also the MLE from the marginal table of n·jk ’s. Thus, data analysis is identical whether working with the three-dimensional table or with the collapsed table. Collapsed tables are such a useful and intuitive tool that we have already used them in data analysis. The reader should note that collapsed tables were an integral part of both Examples 3.2.2 and 3.2.3.
194
5. Independence Relationships and Graphical Models
Our results on collapsing three-factor tables are summarized in the next theorem. Theorem 5.3.1. (a) If the model [13][23] holds, then the relationship between factors 2 and 3 can be examined in the marginal table n·jk and the relationship between factors 1 and 3 can be examined in the marginal table ni·k . (b) If either model [13][23] or [12][23] holds, then the relationship between factors 2 and 3 can be examined in the marginal table n·jk . (c) If model [1][23] holds, then the relationship between factors 2 and 3 can be examined in the marginal table n·jk . To extend collapsibility conditions to higher-order tables, use the same tricks as were used in Section 1 for interpreting higher-order models: reindexing and using larger models. Consider a table with five factors: 1, 2, 3, 4, and 5. Suppose our model is [1234][45][35]. This is contained in the model [1234][345]. By considering this as a three-factor table with factors 12, 3-4, and 5, we have that factor 1-2 and factor 5 are independent given factor 3-4. Thus, collapsing over factor 5 to examine the marginal table of factors 1, 2, 3, and 4 is valid. Also, collapsing over factors 1 and 2 gives a valid marginal table for examining factors 3, 4, and 5. It is particularly easy to read off collapsibility from a graphical model. Corollary 5.3.2. Let the sets A, B, and C denote a partition of the factors in a graphical model such that every chain between a factor in A and a factor in B involves at least one factor in C; then the relationships among the factors in A and C can be examined in the marginal table obtained by summing over the factors in B. Example 5.3.3.
The model [123][24][456] can be graphed as below.
1 H t H HH 2 H Ht 3 t
t
5
4 t H H HH H Ht 6
It follows that accurate conclusions can be drawn from the marginal tables n123··· , n1234·· , n···456 , and n·2·456 .
5.4 Recursive Causal Models
5.4
195
Recursive Causal Models
This section examines a class of graphical models that are useful for analyzing causal relationships. Causation is not something that can be established by data analysis. Establishing causation requires logical arguments that go beyond the realm of numerical manipulation. For example, a well-designed randomized experiment can be the basis for conclusions of causality, but the analysis of an observational study yields information only on correlations. When observational studies are used as a basis for causal inference, the jump from correlation to causation must be made on nonstatistical grounds. In this section, we consider a class of graphical models that have causation built into them. The discussion focuses on appropriate graphs and their interpretations. Not all of the graphical models in this class correspond to log-linear models; thus, the new class is distinct from the graphical models considered in Section 2. For the models considered here, the numerical process of estimation is exceedingly simple. The graphical models considered in this section are recursive causal models. Unlike most of the methods considered in Chapter 4, recursive causal models allow for multiple response factors. With multiple response factors, a given factor can serve as both a response, relative to some causal factors, and as a cause for other response factors. The term “recursive” indicates that response factors are not allowed to serve, even indirectly, as causes of themselves. We begin with a discussion of models that involve only one response factor. In particular, we consider the abortion opinion data discussed in Sections 3.7 and 4.6. Example 5.4.1. Abortion Opinion Data. The factors involved in the abortion opinion data are race R, sex S, opinion O, and age A. In Chapter 6, one of the better models found for these data is [RSO][OA]. This is a graphical model. S r
rO
R r
rA
The model [RSO][OA] indicates that Race and Sex are independent of Age given Opinion. While this may explain the data well, it is difficult to imagine a social process that could cause independence between such tangible characteristics as Race, Sex, and Age given something as ephemeral as
196
5. Independence Relationships and Graphical Models
Opinion. In particular, it violates Asmussen and Edwards’ (1983) criteria for response models (see the ends of Sections 4.6 and 6.8). In Chapter 4, we have argued that when analyzing a response, one should condition on all explanatory factors. With Opinion taken as a response, any log-linear model should include the interaction term [RSA] for the explanatory factors. In Section 4.6, we found that [RSA][RSO][OA] was a reasonable model. This model is not graphical. For example, the threefactor terms in the model, [RSA] and [RSO], imply the existence of [SA] and [SO] interactions. Taken together with the [OA] term, the model includes all of the two-factor terms included in [SAO]. By definition, if all these two-factor effects are included, a graphical model must also include [SAO]. Thus, [RSA][RSO][OA] is not graphical. Note that based on the model [RSA][RSO][OA], any logit model for Opinion has an effect (RS) for Race-Sex interaction and a main effect A for age. In the discussion below, we present a recursive causal model that incorporates the same effects. The difference is that the recursive causal model is not a log-linear model but a conjunction of log-linear models. While [RSO][OA] is a graphical model, it is not a recursive causal model for Opinion. Given below is the graph of a recursive causal model in which Opinion is the response, Race, Sex, and Age are direct causes of Opinion, and there is a joint effect for Race and Sex. S r
-r O 6
R r
rA
This is very similar to the graph for [RSO][OA]; however, some of the edges have been replaced by arrows. Arrows are called directed edges. Edges without arrowheads are undirected edges. Directed edges point to response factors; O is the only response factor. In this graph, the directed edges originate at the explanatory factors. The factors R, S, and A are each called a direct cause of O because there is a directed edge from each of R, S, and A to O. The undirected edge between R and S is unchanged from the graph of [RSO][OA]; the edge represents an interaction between R and S. Age involves no undirected edges. With no loss of generality, the probability model corresponding to these four factors can be written as Pr(R = h, S = i, O = j, A = k) = Pr(O = j|R = h, S = i, A = k)Pr(R = h, S = i, A = k).
Here, the probability is written as the product of a conditional probability of the response factor given its direct causes, Pr(O = j|R = h, S = i, A = k), and another term, Pr(R = h, S = i, A = k), that involves only the explanatory factors. Each of these terms is to be modeled with a log-linear model. The term Pr(O = j|R = h, S = i, A = k) is a conditional probability; so, as discussed earlier, the corresponding log-linear model should include an [RSA] term. The log-linear model used is the graphical model that incorporates the explanatory factor edges for [RSA] and changes directed edges involving the four factors to undirected edges. In the following graph, the directed edges are retained to emphasize that there are two steps involved.

[Graph: R, S, and A joined by undirected edges (the [RSA] term), with directed edges from R, S, and A into O.]
Changing directed edges to undirected edges, the graphical log-linear model is clearly the saturated model, cf. Section 2. The maximum likelihood estimate of Pr(R = h, S = i, O = j, A = k) is $n_{hijk}/n_{\cdot\cdot\cdot\cdot}$ and, thus, the maximum likelihood estimate of Pr(O = j|R = h, S = i, A = k) ≡ $p_{hijk}/p_{hi\cdot k}$ is
$$\frac{\hat p_{hijk}}{\hat p_{hi\cdot k}} = \frac{n_{hijk}}{n_{hi\cdot k}}.$$
The probability model for the explanatory factor term Pr(R = h, S = i, A = k) is also a log-linear model determined by the graph of the recursive causal model. The log-linear model is the graphical model obtained by dropping the response factor and the directed edges. It is given below.

[Graph: nodes R, S, and A with a single undirected edge R–S.]
The log-linear model for this graph is [RS][A]. It determines a marginal distribution for the explanatory factors. The model is that Pr(R = h, S = i, A = k) = Pr(R = h, S = i)Pr(A = k).
or, equivalently, $p_{hi\cdot k} = p_{hi\cdot\cdot}\,p_{\cdot\cdot\cdot k}$. The maximum likelihood estimates are
$$\hat p_{hi\cdot k} = \frac{n_{hi\cdot\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}.$$
Combining the two sets of results gives
Pr(R = h, S = i, O = j, A = k) = Pr(O = j|R = h, S = i, A = k)Pr(R = h, S = i)Pr(A = k)
or, equivalently,
$$p_{hijk} = \frac{p_{hijk}}{p_{hi\cdot k}}\,p_{hi\cdot\cdot}\,p_{\cdot\cdot\cdot k}.$$
This probability model appears to be a saturated log-linear model but is not. Taking logs gives $\log(p_{hijk})$ as the sum of four additive terms, one of which involves all four indices. This would seem to be a saturated log-linear model. However, a two-stage modeling procedure was used, so the simple-minded approach is not appropriate. Using the maximum likelihood estimates from each stage gives
$$\hat p_{hijk} = \frac{n_{hijk}}{n_{hi\cdot k}}\,\frac{n_{hi\cdot\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}$$
and estimated expected cell counts
$$\hat m_{hijk} = n_{\cdot\cdot\cdot\cdot}\,\frac{n_{hijk}}{n_{hi\cdot k}}\,\frac{n_{hi\cdot\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}.$$
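For readers who want to verify such fits numerically, the estimate above is just a product of marginal ratios. The following is a minimal sketch, not code from the text; the function name and the NumPy representation are ours, and the counts are assumed to sit in a 4-way array indexed (h, i, j, k) for (R, S, O, A) with no empty margins.

```python
import numpy as np

def fitted_counts_example_541(n):
    """Fitted counts for the recursive causal model of Example 5.4.1:
    Pr(O | R, S, A) from the saturated model times Pr(R, S) Pr(A)
    from the exogenous graph [RS][A].  n is indexed (R, S, O, A)."""
    N = n.sum()                                     # n_....
    n_hi_k = n.sum(axis=2, keepdims=True)           # n_hi.k
    n_hi = n.sum(axis=(2, 3), keepdims=True)        # n_hi..
    n_k = n.sum(axis=(0, 1, 2), keepdims=True)      # n_...k
    return N * (n / n_hi_k) * (n_hi / N) * (n_k / N)
```

As the text notes next, these fitted counts still depend on the individual $n_{hijk}$, so no data reduction has taken place.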
It is interesting to note that there is no data reduction involved in the estimation. An important, if often underemphasized, element of statistical modeling is that large amounts of data are reduced to manageable form by the use of sufficient statistics. This is certainly true for ANOVA type log-linear models where maximum likelihood estimates are completely determined by various marginal tables. The m ˆ hijk ’s given above require knowledge of the nhijk ’s, so no data reduction has occurred. This is not always true; data reduction does occur for some recursive causal models. We now consider recursive causal graphs with four factors, of which two are responses. Example 5.4.2. Two Response Factors. Assume multinomial sampling for a table with four factors. Consider the recursive causal graph given below.
[Graph: undirected edge 1–2; directed edges 2 → 3 and 3 → 4.]
Factors 1 and 2 are purely explanatory and interact. Factor 3 has one direct cause, which is factor 2. Factor 4 has one direct cause, factor 3. In general, the probability model for four factors can be written in three terms:
Pr(F1 = h, F2 = i, F3 = j, F4 = k) = Pr(F1 = h, F2 = i) × Pr(F3 = j|F1 = h, F2 = i)Pr(F4 = k|F1 = h, F2 = i, F3 = j).
Based on the graph, we write the recursive causal probability model as
Pr(F1 = h, F2 = i, F3 = j, F4 = k) = Pr(F1 = h, F2 = i)Pr(F3 = j|F2 = i)Pr(F4 = k|F3 = j).
The first term, Pr(F1 = h, F2 = i), involves only the purely explanatory factors. The second term, Pr(F3 = j|F2 = i), involves the distribution for factor 3 given its direct cause, the purely explanatory factor 2. Note that factor 1 is an indirect cause of 3 because of its relationship with factor 2; however, only the direct cause F2 is involved in the probability model. The third term, Pr(F4 = k|F3 = j), involves the distribution of F4 given its direct cause F3. Again, estimation of probabilities is simple. The probability Pr(F1 = h, F2 = i) is estimated using the graphical log-linear model obtained by dropping the response factors 3 and 4 and all directed edges. In other words, it is estimated from the saturated model [12] for the marginal table involving only the first two factors. The term Pr(F3 = j|F2 = i) is estimated using the saturated model for the 2, 3 marginal table. The last term, Pr(F4 = k|F3 = j), is estimated from the saturated model for the 3, 4 marginal table. Thus,
$$\hat p_{hijk} = \hat p_{hi\cdot\cdot}\,\frac{\hat p_{\cdot ij\cdot}}{\hat p_{\cdot i\cdot\cdot}}\,\frac{\hat p_{\cdot\cdot jk}}{\hat p_{\cdot\cdot j\cdot}} = \frac{n_{hi\cdot\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot ij\cdot}}{n_{\cdot i\cdot\cdot}}\,\frac{n_{\cdot\cdot jk}}{n_{\cdot\cdot j\cdot}}.$$
Of course, the estimated expected cell counts are simply $\hat m_{hijk} = n_{\cdot\cdot\cdot\cdot}\,\hat p_{hijk}$.
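The same computation can be organized around a small helper that extracts a saturated conditional probability from any marginal table. This is only a sketch under our own conventions (a NumPy array indexed (F1, F2, F3, F4) and hypothetical function names), not code from the text.

```python
import numpy as np

def conditional_mle(n, response, causes):
    """MLE of Pr(F_response | causes) from the saturated model on the
    marginal table for {response} plus its causes, broadcast to the full table."""
    keep = set(causes) | {response}
    drop = tuple(a for a in range(n.ndim) if a not in keep)
    marg = n.sum(axis=drop, keepdims=True)              # table for {F_i} and D_i
    return marg / marg.sum(axis=response, keepdims=True)

def fitted_probs_example_542(n):
    """p-hat for the first graph of Example 5.4.2: [12] exogenous, 2 -> 3, 3 -> 4."""
    p12 = n.sum(axis=(2, 3), keepdims=True) / n.sum()   # saturated model [12]
    return p12 * conditional_mle(n, 2, (1,)) * conditional_mle(n, 3, (2,))
```

Multiplying the result by n.sum() gives the estimated expected cell counts of the display above.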
The graphical model can be made more interesting by inserting a direct cause between 1 and 4.

[Graph: undirected edge 1–2; directed edges 2 → 3, 3 → 4, and 1 → 4.]
The recursive causal probability model is now
Pr(F1 = h, F2 = i, F3 = j, F4 = k) = Pr(F1 = h, F2 = i)Pr(F3 = j|F2 = i)Pr(F4 = k|F1 = h, F3 = j).
Again, the first term, Pr(F1 = h, F2 = i), involves only the purely explanatory factors. The second term, Pr(F3 = j|F2 = i), involves the distribution for factor 3 given its direct cause. The difference between this model and the previous one is that the third term, Pr(F4 = k|F1 = h, F3 = j), now involves the distribution of F4 given both of its direct causes F1 and F3. Estimation is based on saturated models for appropriate marginal tables and yields
$$\hat p_{hijk} = \hat p_{hi\cdot\cdot}\,\frac{\hat p_{\cdot ij\cdot}}{\hat p_{\cdot i\cdot\cdot}}\,\frac{\hat p_{h\cdot jk}}{\hat p_{h\cdot j\cdot}} = \frac{n_{hi\cdot\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot ij\cdot}}{n_{\cdot i\cdot\cdot}}\,\frac{n_{h\cdot jk}}{n_{h\cdot j\cdot}}.$$
Note that a saturated model is used for the explanatory factors only because the graph indicates use of a saturated model. Unlike response factors, the model for explanatory factors is not required to be a saturated model for the appropriate marginal table. Consider one final graph.

[Graph: undirected edge 1–2; directed edges 1 → 3, 2 → 3, 1 → 4, and 3 → 4.]
The probability model is Pr(F1 = h, F2 = i, F3 = j, F4 = k) = Pr(F1 = h, F2 = i) × Pr(F3 = j|F1 = h, F2 = i)Pr(F4 = k|F1 = h, F3 = j).
Again, the first term, Pr(F1 = h, F2 = i), involves only the purely explanatory factors. The second term, Pr(F3 = j|F1 = h, F2 = i), involves the distribution for factor 3 given both of its direct causes. The third term, Pr(F4 = k|F1 = h, F3 = j), also conditions only on direct causes. Estimation is based on appropriate marginal tables,
$$\hat p_{hijk} = \frac{n_{hi\cdot\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{hij\cdot}}{n_{hi\cdot\cdot}}\,\frac{n_{h\cdot jk}}{n_{h\cdot j\cdot}}$$
with estimated expected cell counts $\hat m_{hijk} = n_{\cdot\cdot\cdot\cdot}\,\hat p_{hijk}$. In general, a causal graph for a set of factors C includes a set M of purely explanatory factors, also called external or exogenous factors, and a set C − M of response factors, also called internal or endogenous factors. The exogenous factors have an undirected graph associated with them. Each endogenous factor is the end point for one or more directed edges. The directed edges can originate at either exogenous or endogenous factors. A causal graph is recursive if no endogenous factor is a cause of itself; in other words, if there are no directed pathways that lead from an endogenous factor back to itself. For example, the causal graph

[Graph: factor 2 is exogenous; directed edges among factors 1, 3, and 4 form the cycle 3 → 4, 4 → 1, 1 → 3.]
is not recursive. Factor 2 is exogenous. The other three factors are endogenous. There is a directed path from 3 to 4 to 1 and back to 3, so 3 is a cause of itself and the graph is not recursive. In this example, all of the endogenous factors are causes of themselves. Generally, for a factor Fi ∈ C − M, its direct causes are the factors from which a directed edge goes to Fi. Let the set of all such factors be Di. The probability for a recursive causal model is
$$\Pr(F_i = f_i : F_i \in C) = \Pr(F_i = f_i : F_i \in M)\prod_{F_i \in C-M}\Pr(F_i = f_i \mid D_i).$$
The term Pr (Fi = fi : Fi ∈ M ) depends on the marginal graph for M , i.e., the graph that drops all endogenous factors and directed edges. Estimates of Pr (Fi = fi : Fi ∈ M ) are maximum likelihood estimates from the corresponding graphical log-linear model. Estimation of a term of the form Pr (Fi = fi |Di ) is based on the saturated model for the marginal table with factors in {Fi } ∪ Di .
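Checking that a proposed graph is actually recursive, that is, that no directed path leads from an endogenous factor back to itself, is a simple graph computation. Here is a small, self-contained sketch (our own code, with hypothetical names) that represents the graph by the map from each endogenous factor to its set of direct causes Di.

```python
def is_recursive(direct_causes):
    """True when the directed part of the causal graph has no cycles.

    direct_causes maps each endogenous factor to the set of factors with a
    directed edge into it; exogenous factors simply do not appear as keys."""
    def reaches_self(start):
        # Walk backward along directed edges; reaching `start` again means a cycle.
        stack, seen = list(direct_causes.get(start, ())), set()
        while stack:
            f = stack.pop()
            if f == start:
                return True
            if f not in seen:
                seen.add(f)
                stack.extend(direct_causes.get(f, ()))
        return False

    return not any(reaches_self(f) for f in direct_causes)

# The cyclic graph discussed above (3 -> 4 -> 1 -> 3, with 2 exogenous):
print(is_recursive({3: {1}, 4: {3}, 1: {4}}))   # False
# The first graph of Example 5.4.2:
print(is_recursive({3: {2}, 4: {3}}))           # True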
Putting arrowheads on edges and defining a new method for stating probability models has nothing fundamental to do with causation. These probability models can be used even when the causation suggested by the graph exists only in the head of the data analyst. Moreover, if enough models with nonsensical causations are fit, one that fits well may be found. Obviously, a well-fitting model does not establish that the causal patterns in the model are true. However, if the graph is a reasonable statement of a causal process, a well-fitting probability model adds credence to the hypothesized causal process. Evaluating how well recursive causal models fit is discussed later in this section. A useful and interesting concept in recursive causal models is that of configuration >. It allows one to relate recursive causal models to decomposable log-linear models, cf. Section 2. Three factors A, B, and C are in configuration > if C is caused by both A and B, but there is neither a directed nor an undirected edge between A and B. This is illustrated below.

[Graph: directed edges A → C and B → C, with no edge between A and B.]
Recursive causal models are typically not log-linear, but there is a substantial intersection between the classes of models. The key result is that the recursive causal probability model is identical to the probability model determined by a decomposable log-linear model if and only if the recursive causal graph contains no factors in configuration > and the graph restricted to the exogenous factors is decomposable, cf. Wermuth and Lauritzen (1983). Example 5.4.3. Consider the recursive causal graph given below. This is similar to one used in Example 5.4.1; however, the factor R has been eliminated as a direct cause for O.

[Graph: directed edges S → O and A → O; undirected edge R–S.]
The factors S, A, O are in configuration >; thus, the recursive causal graph is not equivalent to a decomposable log-linear model. By connecting the nodes for S and A, the configuration > can be eliminated.

[Graph: directed edges S → O and A → O; undirected edges R–S and S–A.]
The probabilities for this recursive causal graph are the probabilities for the model [RS][SA] in the exogenous marginal table, obtained by collapsing over the response factor O, times the conditional probabilities for Pr(O = j|S = i, A = k) from the saturated model collapsing over R. This gives
$$p_{hijk} = \frac{p_{hi\cdot\cdot}\,p_{\cdot i\cdot k}}{p_{\cdot i\cdot\cdot}}\,\frac{p_{\cdot ijk}}{p_{\cdot i\cdot k}} = \frac{p_{hi\cdot\cdot}\,p_{\cdot ijk}}{p_{\cdot i\cdot\cdot}}.$$
Note that these are exactly the same probabilities as determined by the decomposable log-linear model [RS][SOA] defined by the underlying undirected graph

[Graph: undirected edges R–S, S–O, S–A, and O–A.]
In the underlying undirected graph, directed edges are changed to undirected edges. The probability models are the same, so independence relationships are the same. For the decomposable model, the independence relationship is that R is independent of O and A given S. This holds for both the log-linear model and the recursive causal model. The probability models associated with decomposable log-linear models are identical to probability models for recursive causal models that (1) have no configurations > and (2) have a decomposable exogenous factor graph. The exogenous factor graph is the graph with all response factors and directed edges eliminated. It follows that decomposable models can be thought of as a subset both of graphical log-linear models and of recursive
causal models. A recursive causal model graph with a decomposable exogenous factor graph and no configurations > can be transformed into a decomposable log-linear model graph simply by changing directed edges to undirected edges. See Wermuth and Lauritzen (1983) and Kiiveri, Speed, and Carlin (1984) for the validity of these statements. Exercise 5.3. In Example 5.4.1, it was stated that the model [RSO][OA] is not a recursive causal model for Opinion. However, [RSO][OA] is decomposable, so it corresponds to some recursive causal model. Explain why [RSO][OA] is not a recursive causal model for Opinion, and by changing the endogenous factor, give a recursive causal model that does correspond to [RSO][OA]. We now present a conditional independence result that holds for general recursive causal models. Any endogenous (response) factor Fi is independent of the factors Fj for which it is not a direct or indirect cause, given Di, the direct causes of Fi. Independence among exogenous factors is determined by the exogenous factor graph. Example 5.4.4. In the model associated with the following graph, factor 4 is independent of the factors for which it is not a cause, i.e., factors 2 and 3, given factor 1, which is the direct cause of 4.

[Graph: directed edges into 3 from its direct causes 1 and 2, a directed edge 1 → 4, and an edge joining 1 and 2.]
Another independence relation that can be read from the graph is that 3 is independent of 4 given 1 and 2, the direct causes of 3. Note that the graph contains no configurations >, so the independence relations are the same as in the underlying undirected graph. The decomposable model is [41][123] which has 4 independent of 2 and 3 given 1. The decomposable model also implies that 3 is independent of 4 given 1 and 2. Wermuth and Lauritzen (1983) and Kiiveri, Speed, and Carlin (1984) examine the validity of conditional independence statements for recursive causal models. Another natural application of recursive causal graphs is in the analysis of structural equation models. Interpretations and conditional independence results hold as for the analysis of discrete data; see Kiiveri and Speed (1982).
Birch (1963), Goodman (1973), and Fienberg (1980) have examined methods of model selection that apply to recursive causal models. The methods are illustrated through an example. Example 5.4.5. Suppose muscle tension data similar to that of Examples 3.7.1 and 4.5.1 has been collected. The factors involved are T, the change in muscle tension, W, the weight of the muscle, M, the muscle type, and D, the drug administered. Each factor is at two levels. For ease of exposition, we will treat the sampling as multinomial. The graph below indicates a possible recursive causal scheme.

[Graph: directed edges W → T, M → T, D → T, and M → W; undirected edge M–D.]
Change in muscle tension T is hypothesized to have all of W, M, and D as direct causes. Muscle weight W has muscle type M as a direct cause. The purely explanatory factors are M and D; they are allowed to interact with each other. Write Pr(T = h, W = i, M = j, D = k) = $p_{hijk}$. Note that the indexing has changed from previous examples. The first factor is now at the upper right of the graph rather than the lower left and the order is counterclockwise rather than clockwise. These changes are important for verifying the maximum likelihood estimates presented. The expected cell counts for the recursive causal model are
$$\hat m_{hijk} = n_{\cdot\cdot\cdot\cdot}\,\frac{n_{\cdot\cdot jk}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot ij\cdot}}{n_{\cdot\cdot j\cdot}}\,\frac{n_{hijk}}{n_{\cdot ijk}}.$$
The graph given above has one configuration > involving W, D, and T, so the probability model is not a decomposable log-linear model. The lack of fit of the model can be tested in the usual way. Some algebra shows that
$$G^2 = 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{hijk}}{\hat m_{hijk}}\right) = 2\sum_{ijk} n_{\cdot ijk}\log\left(\frac{n_{\cdot ijk}}{n_{\cdot\cdot jk}\,n_{\cdot ij\cdot}/n_{\cdot\cdot j\cdot}}\right).$$
This is precisely the lack of fit statistic for testing the log-linear model [WM][MD] against the saturated model in the marginal table for W, M, and D. With each factor at two levels, it follows that the statistic has 2
degrees of freedom. In fact, the log-linear model [WM][MD] is consistent with the only conditional independence result available from the graph; the response factor W is independent of D given M, the direct cause of W. The relationship of the lack of fit test with the log-linear model [WM][MD] can be seen through the probability modeling procedure. Recall that the basis of the modeling procedure is that one can always write
Pr(T = h, W = i, M = j, D = k) = Pr(M = j, D = k) × Pr(W = i|M = j, D = k)Pr(T = h|W = i, M = j, D = k)
and, based on the graph, the recursive causal probability model is
Pr(T = h, W = i, M = j, D = k) = Pr(M = j, D = k) × Pr(W = i|M = j)Pr(T = h|W = i, M = j, D = k).
The only real modeling being done is replacing Pr(W = i|M = j, D = k) with Pr(W = i|M = j). The factor T is extraneous. The perfectly general statement
Pr(W = i, M = j, D = k) = Pr(M = j, D = k)Pr(W = i|M = j, D = k)
has been replaced with
Pr(W = i, M = j, D = k) = Pr(M = j, D = k)Pr(W = i|M = j) = Pr(M = j)Pr(D = k|M = j)Pr(W = i|M = j).
This is just the model for conditional independence of W and D given M. It is not surprising that the test statistic involves only the aspect of the model that is not always true. There is no particular reason to believe in a relationship between the type of drug used and the muscle type. It is also questionable whether muscle type really has an effect on weight. The graph that incorporates these ideas involves dropping one directed edge and the undirected edge. It is given below.

[Graph: directed edges W → T, M → T, and D → T; no edges among W, M, and D.]
Note that by dropping M as a direct cause of W, W has been transformed into an exogenous factor. The estimated expected cell counts are
$$\hat m_{hijk} = n_{\cdot\cdot\cdot\cdot}\,\frac{n_{\cdot i\cdot\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot\cdot j\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{hijk}}{n_{\cdot ijk}}.$$
This model can be checked for general lack of fit as above. It is not difficult to see that the test is identical to that for complete independence [W][M][D] in the marginal table collapsing over T. The likelihood ratio chi-squared has 4 degrees of freedom. This new model was obtained from the previous one by dropping edges in the previous graph; thus, this model is a reduced model relative to the previous one. It follows that a likelihood ratio test can be performed for comparing the two models. Not surprisingly, the test simplifies to that of [W][M][D] versus [WM][MD]. Another possible model eliminates the effect of muscle weight W on tension.

[Graph: directed edges M → W, M → T, and D → T; undirected edge M–D.]
The maximum likelihood estimates are
$$\hat m_{hijk} = n_{\cdot\cdot\cdot\cdot}\,\frac{n_{\cdot\cdot jk}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot ij\cdot}}{n_{\cdot\cdot j\cdot}}\,\frac{n_{h\cdot jk}}{n_{\cdot\cdot jk}}.$$
The graph contains no configurations >, so the probability model and thus the maximum likelihood estimates are the same as for the decomposable log-linear model [WM][MTD] determined by the underlying undirected graph. Consider one final model.

[Graph: directed edges M → W, M → T, and D → T; no edge between M and D.]
This incorporates independence of M and D from the exogenous factor graph and it has one conditional independence relation involving responses: W independent of T and D given M. These two relationships imply that the pair W and M are independent of D. The maximum likelihood estimates are
$$\hat m_{hijk} = n_{\cdot\cdot\cdot\cdot}\,\frac{n_{\cdot\cdot j\cdot}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}\,\frac{n_{\cdot ij\cdot}}{n_{\cdot\cdot j\cdot}}\,\frac{n_{h\cdot jk}}{n_{\cdot\cdot jk}}.$$
The likelihood ratio test statistic for this model can be separated into the sum of three terms. The first term is the statistic for testing [M][D] versus [MD]. The second term is the statistic for testing [WM][MD] versus [WMD]. The last term is for testing [TMD][WMD] versus [TWMD]. The following series of equalities establishes the result.
$$\begin{aligned}
G^2 &= 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{hijk}}{\hat m_{hijk}}\right)\\
&= 2\sum_{hijk} n_{hijk}\left[\log(n_{hijk}) - \log(\hat m_{hijk})\right]\\
&= 2\sum_{hijk} n_{hijk}\log(n_{hijk})
  - 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{\cdot\cdot j\cdot}\,n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}\right)
  - 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{\cdot ij\cdot}}{n_{\cdot\cdot j\cdot}}\right)
  - 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{h\cdot jk}}{n_{\cdot\cdot jk}}\right)\\
&= 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{hijk}}{n_{\cdot ijk}}\,\frac{n_{\cdot ijk}}{n_{\cdot\cdot jk}}\,n_{\cdot\cdot jk}\right)
  - 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{\cdot\cdot j\cdot}\,n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}\right)
  - 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{\cdot ij\cdot}}{n_{\cdot\cdot j\cdot}}\right)
  - 2\sum_{hijk} n_{hijk}\log\left(\frac{n_{h\cdot jk}}{n_{\cdot\cdot jk}}\right)\\
&= 2\sum_{hijk} n_{hijk}\left[\log(n_{\cdot\cdot jk}) - \log\left(\frac{n_{\cdot\cdot j\cdot}\,n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}\right)\right]
  + 2\sum_{hijk} n_{hijk}\left[\log\left(\frac{n_{\cdot ijk}}{n_{\cdot\cdot jk}}\right) - \log\left(\frac{n_{\cdot ij\cdot}}{n_{\cdot\cdot j\cdot}}\right)\right]
  + 2\sum_{hijk} n_{hijk}\left[\log\left(\frac{n_{hijk}}{n_{\cdot ijk}}\right) - \log\left(\frac{n_{h\cdot jk}}{n_{\cdot\cdot jk}}\right)\right]\\
&= 2\sum_{hijk} n_{hijk}\left[\log(n_{\cdot\cdot jk}) - \log\left(\frac{n_{\cdot\cdot j\cdot}\,n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}\right)\right]
  + 2\sum_{hijk} n_{hijk}\left[\log(n_{\cdot ijk}) - \log\left(\frac{n_{\cdot ij\cdot}\,n_{\cdot\cdot jk}}{n_{\cdot\cdot j\cdot}}\right)\right]
  + 2\sum_{hijk} n_{hijk}\left[\log(n_{hijk}) - \log\left(\frac{n_{h\cdot jk}\,n_{\cdot ijk}}{n_{\cdot\cdot jk}}\right)\right]\\
&= 2\sum_{jk} n_{\cdot\cdot jk}\left[\log(n_{\cdot\cdot jk}) - \log\left(\frac{n_{\cdot\cdot j\cdot}\,n_{\cdot\cdot\cdot k}}{n_{\cdot\cdot\cdot\cdot}}\right)\right]
  + 2\sum_{ijk} n_{\cdot ijk}\left[\log(n_{\cdot ijk}) - \log\left(\frac{n_{\cdot ij\cdot}\,n_{\cdot\cdot jk}}{n_{\cdot\cdot j\cdot}}\right)\right]
  + 2\sum_{hijk} n_{hijk}\left[\log(n_{hijk}) - \log\left(\frac{n_{h\cdot jk}\,n_{\cdot ijk}}{n_{\cdot\cdot jk}}\right)\right].
\end{aligned}$$
The three terms in the last equality are precisely the three test statistics that were claimed. The existence of such breakdowns for G2 is quite general.
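As a numerical companion to Example 5.4.5, the fitted counts and the two-degree-of-freedom lack-of-fit statistic for the first model can be computed directly from marginal totals. The following is a minimal sketch under our own conventions (function name ours; counts assumed to sit in a 2 × 2 × 2 × 2 NumPy array indexed (h, i, j, k) = (T, W, M, D) with no zero cells):

```python
import numpy as np

def fit_example_545(n):
    """Fitted counts and G^2 for the first recursive causal model of
    Example 5.4.5: [MD] exogenous, M -> W, and (W, M, D) -> T."""
    N = n.sum()
    n__jk = n.sum(axis=(0, 1), keepdims=True)       # n_..jk
    n_ij_ = n.sum(axis=(0, 3), keepdims=True)       # n_.ij.
    n__j_ = n.sum(axis=(0, 1, 3), keepdims=True)    # n_..j.
    n_ijk = n.sum(axis=0, keepdims=True)            # n_.ijk
    m_hat = N * (n__jk / N) * (n_ij_ / n__j_) * (n / n_ijk)
    g2 = 2 * np.sum(n * np.log(n / m_hat))          # 2 degrees of freedom here
    return m_hat, g2
```

The same pattern, with different margins, gives the fitted counts for the other graphs considered in the example.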
5.5 Exercises
Exercise 5.5.1. Using the methods of Section 5.1, discuss the independence relationships for all of the models given below.
(a) [123][24][456]
(b) [12][13][23][24][456]
(c) [123][124][456]
(d) [123][24][456][15]
(e) [123][24][456][15][36]
Exercise 5.5.2. In the saturated log-linear model for a four-dimensional table, let u34 = 0 and let all of the corresponding higher-order terms also be zero, e.g., u134 = 0.
(a) Based on this model, find a formula for $\hat m_{hijk}$ without using graphical methods.
(b) Use graphical methods to find $\hat m_{hijk}$.
Exercise 5.5.3. The vertices for a five-factor model are given below. Connect the dots to give a graphical representation of the model [123][135][34][24]. Use the illustration to show that [123][135][34][24][25] is not a graphical model.
[Figure: five unconnected vertices labeled 1, 2, 3, 4, and 5.]
Exercise 5.5.4. Which of the models given below are graphical? Graph them. Which of these are decomposable? Discuss the independence relationships for all of the models. For each model, what marginal tables will provide valid inferences?
(a) [123][24][456]
(b) [12][13][23][24][456]
(c) [123][124][456]
(d) [123][24][456][15]
(e) [123][24][456][15][36]
Exercise 5.5.5. Consider all of the graphs in Example 5.4.2. Classify each as equivalent or not equivalent to a decomposable log-linear model. For those that are equivalent, prove the equivalence of the probability models.
6 Model Selection Methods and Model Evaluation
This chapter examines methods of selecting models for high-dimensional tables. The model selection methods considered are the stepwise methods, e.g., forward selection and backward elimination, a modified backward elimination method from Aitkin (1978, 1979) that controls the experimentwise error rate, and a backward elimination method from Wermuth (1976) that is restricted to decomposable models. In addition, we discuss the use of the model selection criteria presented in Section 3.6. Of course, it would be foolish to choose a model simply because some model selection procedure presents it to you as a good model. Other considerations such as model interpretability and the consistency of the data with model assumptions may dictate choosing some other model. It is always wise to use model selection methods to produce several apparently good models that can be investigated further. In line with this approach, the analysis of residuals and influential observations is also discussed in this chapter. Modeling is a useful process both for prediction of future observables and for describing the relationships between factors. Large models always reproduce the data on which they were fitted better than smaller models. The saturated model always provides a perfect fit of the data. However, smaller models have more powerful interpretations and are often better predictive tools than large models. Often, our goal is to find the smallest model that fits the data. In model fitting, there are two approaches to specifying models: the descriptive approach and the causative approach. The descriptive approach simply describes the relationships that are observed. For example, in a three-way table, one might find that given a young child’s educational sta-
tus, her father’s educational status and her mother’s educational status are independent. This can be used to describe the data, but it would be foolish to suggest that the child’s status in any way determines her parents’ status. In analyzing these factors, it makes sense to consider the parents’ status as fixed and to determine its effect on the child’s status. However, a statistical relationship between parents’ status and child’s status does not imply causation. Causation cannot be inferred on statistical grounds; it must be inferred from the subject matter. For these reasons, we will concentrate on descriptive modeling. Nonetheless, causation has important implications for statistical modeling. Causation is closely related to the existence of response factors; the analysis of response factors is treated in Chapter 4, Section 5.4, and is also discussed in Chapter 11.
6.1 Stepwise Procedures for Model Selection
Stepwise procedures assume an initial model and then use rules for adding or deleting terms to arrive at a final model. Stepwise procedures are categorized in three ways: forward selection, in which terms are added to an initial small model; backward elimination, in which terms are removed from an initial large model; and composite methods, in which terms can either be added to or removed from the initial model. Methods for choosing initial models and examples will be considered in the subsequent two sections. Because of the huge number of terms available in high-dimensional models, more effort is expended in selecting an initial model than is commonly used in regression analysis. Moreover, because ANOVA type models use parameters that are not uniquely defined, stepwise procedures must be adjusted so that nonsensical models are not considered. Often, at any given point, stepwise procedures are applied only to examine terms that involve the same number of factors. For example, sometimes in examining three-factor interactions, two-factor interactions are ignored and any three-factor interactions that are implied by the existence of higher-order interactions are forced into the model. (In this case, higher-order interactions are those that involve four or more factors.) We begin by considering this particular procedure. Later, an improved method (implemented in BMDP) will be discussed. Stepwise procedures are sequential in that they assume a current model and look to add or delete terms one at a time to that model. When considering s-factor terms, the basic forward selection rule is
FS: (a) Add the s-factor term not already in the model that has the most significant test statistic.
    (b) Continue adding terms until no term achieves a predetermined minimum level of significance.
The basic backward elimination rule is
BE: (a) Delete the s-factor term with the least significant test statistic among s-factor terms that are not forced into the model.
    (b) Continue until all terms maintain a predetermined minimum level of significance.
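In outline, rule BE amounts to a loop that repeatedly refits candidate reductions. The sketch below is our own illustration, not the book's procedure or any package's implementation; it assumes the user supplies a fit(model) routine returning (G2, df) for a model represented as a set of terms, and it only considers terms whose removal leaves a legal hierarchical model.

```python
from scipy.stats import chi2

def backward_eliminate(current, candidates, fit, alpha=0.05):
    """One pass of rule BE over the s-factor terms in `candidates`."""
    while True:
        g2_cur, df_cur = fit(current)
        worst_term, worst_p = None, alpha
        for term in candidates & current:
            g2_red, df_red = fit(current - {term})           # drop one term
            p = chi2.sf(g2_red - g2_cur, df_red - df_cur)     # test the drop
            if p > worst_p:                                   # least significant so far
                worst_term, worst_p = term, p
        if worst_term is None:         # every remaining term maintains significance
            return current
        current = current - {worst_term}
```

Forward selection (rule FS) is the mirror image: at each step, add the candidate term with the smallest P value as long as that P value is below α.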
The backward elimination procedure is based on comparing models and does not consider whether the reduced models fit relative to the saturated model. It is possible that a model may fit globally and that dropping an s-factor term may be acceptable but that the new smaller model may not fit globally. One might want to modify the procedure so that it stops before eliminating any effect that will cause the saturated model test to be rejected. Note that a term can be forced into the model either by the sampling scheme or by the presence in the model of a higher-order term that implies the existence of the term in question. For example, in the model [1235][234], when considering three-factor terms for elimination, all of [123], [125], [135], and [235] are forced into the model by having [1235] in the model. The only three-factor term eligible for elimination is [234]. The composite method alternates between applying the forward selection rule and the backward elimination rule. For forward selection, the test statistics are the statistics for testing the current model against the larger models in which one additional term has been added. For backward elimination, the test statistics are the statistics for testing the current model against the reduced models in which one term has been eliminated. "Significance" of test statistics is measured by their P values. A test statistic fails to achieve a predetermined minimum level of significance, say α, if P > α and maintains that level of significance if P < α. The level α is often taken as .10, .05, or .01. As an alternative to considering only s-factor terms, one can consider adding or deleting either simple or multiple effects. Adding a simple effect consists of adding an effect that does not imply the simultaneous addition of any other effects. For example, if we have four factors, say R, S, O, and A, and the model is [RSO][SA], the only simple effects that could be added are [RA] and [OA]. To see this, note two things. First, all other two-factor terms are already in the model, so these are the only two-factor terms that can be added. Second, adding any three-factor term, e.g., [RSA], also implies the addition of a new two-factor term. Therefore, adding any three-factor term implies the addition of more than one effect and thus is not the addition of a simple effect.
If we consider the deletion of simple effects from [RSO][SA], the only possible deletions are the [RSO] and [SA] terms. Deleting the [RSO] term leaves the model [RS][RO][SO][SA]. Deleting the SA effect leaves [RSO][A]. Addition of a multiple effect involves incorporating a new factor into some effect that already exists in the model. For the model [RSO][SA], possible multiple effects are constructed by adding A to [RSO] (giving [RSOA][SA]), by adding R to [SA] (giving [RSO][RSA]), and by adding O to [SA] (giving [RSO][OSA]). In addition, the term [A] is implicitly in the model, so the terms [RA] and [OA] can be added, giving [RSO][SA][RA] and [RSO][SA][OA], respectively. Also, the term [RO] is implicitly in the model, so the term [ROA] can be added, yielding the model [RSO][SA][ROA]. In forward selection, considering either addition of simple effects or addition of multiple effects is appropriate. Because addition of multiple effects involves consideration of a wider variety of additional effects, addition of multiple effects is generally preferred. In backward elimination, deletion of simple effects is the appropriate procedure. As defined here, deleting multiple effects is the same procedure as deleting simple effects. For the model [RSO][SA], deletion of any factor from [RSO] leaves all of the implicit terms [RS], [SO], and [RO] unaffected. Thus, deletion of multiple effects allows consideration of only the models [RS][SO][RO][SA] and [RSO][A]. These are the same models considered in the deletion of simple effects. There is a key difference between adding and deleting multiple effects. Adding multiple effects to [RSO][SA] allows addition of interesting nonsimple effects like [ROA] because [RO] is implicitly in the model. Deleting a factor from an implicit term such as [RO] has absolutely no effect on the model [RSO][SA]. Other definitions of what it means to delete a multiple effect are possible, but the ones I am acquainted with can give stupid results. To be safe, when using computer software, one should always specify deletion of simple effects. One reasonable approach to backward elimination in, say, a five-factor model is to eliminate first the five-factor effect if possible, then any unnecessary four-factor effects. When eliminating three-factor effects, restrict attention only to those three-factor effects not forced into the model by the included four-factor effects. Similarly, only consider for elimination two-factor effects that are not forced in by the included three- and four-factor effects. Also, any effects forced into the model by the sampling scheme should never be considered for elimination. As is well known from regression analysis, the main virtue of stepwise methods is that they are fast and cheap. Their virtue is directly related to their fault. They are fast and cheap because the procedures put severe limits on the number of models that are considered. Because only a limited number of models are examined, the procedures can easily miss the best models. Stepwise methods do not give the best model based on any overall criteria of model fit (cf. Section 6); in fact, they can give models that contain none of the terms that are in the best models. Forward selection
is a notoriously bad method of variable selection because it starts from an inadequate model and there is no guarantee that it will ever arrive at an adequate model. Backward elimination should give an adequate model if the initial model is adequate, but the only way to ensure an adequate initial model is to use the saturated model. Combined methods improve on forward selection simply because they allow consideration of more models. However, combined methods do not ensure finding the best models either. A nontechnical problem with stepwise procedures is that they give a unique “best” answer. Typically, no uniquely correct model exists. If stepwise methods are to be used, it is wise to use several variations and therefore arrive at several candidate models. These models should be evaluated on their interpretability and their consistency with model assumptions to arrive at one or more final models.
6.2 Initial Models for Selection Methods
In this section, we discuss a variety of methods for arriving at an appropriate initial model from which to begin the search for a well-fitting model. The examples in this section deal only with initial model selection. An example incorporating various stepwise procedures is given in Section 3. This section examines three approaches to picking an initial model: all s-factor effects models, models based on tests of marginal and partial association, and models based on testing each term in the saturated model last.
6.2.1 All s-Factor Effects
The simplest way to choose an initial model is to take one that consists of all effects of a particular level s. For example, the initial model can be the model of all main effects, or all two-factor effects, or all three-factor effects, and so on. The initial model can be chosen as either the smallest of these models that fits the data or the largest of these models that does not fit the data. In particular, for a four-factor table, one can test the models
(a) [1][2][3][4]
(b) [12][13][14][23][24][34]
(c) [123][124][234][134]
(d) [1234]
against each other to determine the smallest model that fits the data. Suppose it is model (b). We can then consider eliminating terms from model (b). Another approach is to look at the largest model that does not fit the data. That would be model (a). We can then consider selecting terms
to add to model (a). Elimination and selection can be performed either by ad hoc methods or by using the formal rules for backward elimination and forward selection. Note that if we begin with model (b) [model (a)], it is very tempting to restrict attention to deleting (adding) only two-factor terms. Considering only two-factor terms is a substantial reduction in work as compared to investigating all levels of terms, but this simplification runs the risk of both missing some important terms and leaving in some unimportant terms. Finally, it should be noted that if a combined stepwise procedure is to be used, either of the initial models is appropriate. However, different initial models may give different results. Example 6.2.1. Reconsider the data of Examples 3.7.1 and 4.5.1 on the relationship between two drugs and muscle tension. For each mouse, a muscle was identified and its tension was measured. A randomly chosen drug was given to the mouse and the muscle tension was measured again. The muscle was then tested to identify which type of muscle it was. The weight of the muscle was also measured. Factors and levels are tabulated below.

Factor                     Abbreviation   Levels
Change in Muscle Tension   T              High, Low
Weight of Muscle           W              High, Low
Muscle                     M              Type 1, Type 2
Drug                       D              Drug 1, Drug 2
The sampling is product multinomial with the total count for each muscle type fixed. The data are

Tension   Weight   Muscle    Drug 1   Drug 2
High      High     Type 1        3       21
                   Type 2       23       11
          Low      Type 1       22       32
                   Type 2        4       12
Low       High     Type 1        3       10
                   Type 2       41       21
          Low      Type 1       45       23
                   Type 2        6       22
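The lack-of-fit statistics reported below, and similar G2 values throughout this chapter, can be reproduced by iterative proportional fitting. The sketch below is our own illustration, not output from SAS, BMDP, or GLIM; it assumes the counts are stored in a NumPy array n with one axis per factor, here ordered (T, W, M, D).

```python
import numpy as np

def ipf(n, margins, tol=1e-8, max_iter=500):
    """Fitted counts for the hierarchical log-linear model whose generating
    class is `margins`, e.g. [(0, 1), (2, 3)] for [TW][MD]."""
    m = np.full(n.shape, n.sum() / n.size)            # start from a constant table
    for _ in range(max_iter):
        m_old = m.copy()
        for marg in margins:
            other = tuple(a for a in range(n.ndim) if a not in marg)
            m = m * n.sum(axis=other, keepdims=True) / m.sum(axis=other, keepdims=True)
        if np.max(np.abs(m - m_old)) < tol:
            break
    return m

def g2(n, m):
    """Likelihood ratio statistic G^2 = 2 sum n log(n / m), skipping empty cells."""
    pos = n > 0
    return 2 * np.sum(n[pos] * np.log(n[pos] / m[pos]))

# e.g., the all two-factor model for the muscle tension data:
# m = ipf(n, [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]);  g2(n, m)   # about 47.67
```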
The test statistics are given below. Clearly, the only model that fits the data is the model of all three-factor interactions.
Model                        df     G2      P
[TWM][TWD][TMD][WMD]          1     0.11    .74
[TW][TM][WM][TD][WD][MD]      5     47.67   .00
[T][W][M][D]                 11     127.4   .00

6.2.2 Examining Each Term Individually
An intuitively appealing method of selecting an initial model is to examine each term in the saturated model and include only those terms that are important. The question arises as to how one decides which terms are important. One reasonable approach is testing whether the terms are nonzero. One can then include the terms that are significantly different from zero and drop the rest. Unfortunately, the saturated model is overparametrized to the point that any terms except the u1234 ’s can be dropped without affecting the model. The problem is in determining how to test whether the terms are zero. There are many possibilities. For example, to test whether u123(hij) is important in a four-dimensional table, we can test [12][13][234][134] versus [123][234][134] or we can test [234] versus [123][234] or any of a very large number of other model comparisons. The problem is to decide on which tests to examine. Two methods have been proposed: testing each term last and the method suggested by Brown (1976) in which tests of marginal and partial association are performed for each term. These methods are examined in the next two subsections.
6.2.3 Tests of Marginal and Partial Association
Brown (1976) proposed looking at two tests for each term in the saturated model: a test of marginal association and a test of partial association. These tests can be used in a variety of ways to choose an initial model. To test a particular term for marginal association, collapse over any factors not included in the term. The test of marginal association is based on the marginal table and consists of testing the model that involves only the term in question against the largest submodel that does not include the term. (Main effects are not tested for marginal association.) Example 6.2.2. The test of marginal association for the u1234(hijk) ’s is the test of [123][124][134][234] versus [1234]. The test of marginal association for the u123(hij) ’s is the test of [12][23][13] versus [123]. The test of marginal association for the u24(ik) ’s is the test of [2][4] versus [24]. The test for partial association depends on the number of factors involved in the term. If the term involves s factors, the test of partial association is a test of the model with all s-factor (interaction) terms against the reduced
model in which the term in question is dropped out. Thus, in a test of partial association, all other effects are fixed at a certain level of interaction. Example 6.2.3. In a four-dimensional table, the test of partial association for the u123(hij)'s is the test of [124][134][234] versus [123][124][134][234]. The test of partial association for the u24(ik)'s is the test of [12][13][14][23][34] versus [12][13][14][23][24][34]. In a four-dimensional table, the test of partial association for u1234 is identical to the test of marginal association. Note that the degrees of freedom for the tests are the degrees of freedom for dropping the term in question; they are typically the same in the two tests. There are a number of ways of choosing an initial model using Brown's tests:
(a) include all terms with significant marginal tests,
(b) include all terms with significant partial tests,
(c) include all terms for which either the marginal or partial test is significant,
(d) include all terms for which both the marginal and partial tests are significant.
Method (d) always gives the smallest model. Method (c) always gives the largest model. Method (d) can be used to determine an initial model for forward selection. Method (c) determines a model that might be used with backward elimination. Any of the four methods would give an appropriate initial model for combined stepwise selection. An obvious ad hoc model selection approach is to restrict attention to models that are between the small model of method (d) and the large model of method (c). Perhaps the main fault with this method is that important terms could have been missed in model (c). Example 6.2.4. Brown's tests for the muscle tension data are presented in Table 6.1. The WMD term is clearly significant as is the WM term. In addition, several terms involving the change in muscle tension appear to be important, e.g., T, TM, TD, and possibly TMD. Using significance levels of α = .01 and α = .10, the four initial models suggested by Brown's tests are

Method   α = .01          α = .10
(a):     [WMD][TMD]       [WMD][TMD][TWM]
(b):     [WMD][TD][TM]    [WMD][TMD]
(c):     [WMD][TMD]       [WMD][TMD][TWM]
(d):     [WMD][TD]        [WMD][TMD].

6.2.4 Testing Each Term Last
The basis of this method is testing whether each term can be dropped from the saturated model without a significant loss of explanatory power.
TABLE 6.1. Brown's Tests for the Muscle Tension Data

         Partial Association     Marginal Association
Effect     G2        P             G2        P
T          6.04      .01           —         —
W          3.55      .06           —         —
M          1.18      .28           —         —
D          0.08      .78           —         —
TW         2.35      .13           0.06      .80
TM         6.81      .01           5.27      .02
WM        63.66      .00          62.25      .00
TD         6.02      .01           6.37      .01
WD         0.65      .42           1.12      .29
MD         0.17      .68           1.40      .24
TWM        1.00      .32           2.63      .10
TWD        0.01      .93           0.04      .85
TMD        2.86      .09           6.01      .01
WMD       35.65      .00          40.49      .00
TWMD       0.14      .70           0.14      .70
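For a single term, both of Brown's tests are easy to compute. The sketch below is our own code and reuses the ipf and g2 helpers sketched in the example of Section 6.2.1; it computes the partial and marginal association statistics for the WM term, assuming the muscle tension counts are in a 2 × 2 × 2 × 2 array n ordered (T, W, M, D) with no empty cells.

```python
import numpy as np

def brown_tests_wm(n):
    """Partial and marginal association G^2 for the WM term (cf. Table 6.1)."""
    # Partial: all two-factor terms versus the same model with [WM] dropped.
    full = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    reduced = [t for t in full if t != (1, 2)]
    partial = g2(n, ipf(n, reduced)) - g2(n, ipf(n, full))
    # Marginal: [W][M] versus [WM] in the two-way W-by-M marginal table.
    wm = n.sum(axis=(0, 3))
    expected = np.outer(wm.sum(axis=1), wm.sum(axis=0)) / wm.sum()
    marginal = 2 * np.sum(wm * np.log(wm / expected))
    return partial, marginal   # Table 6.1 reports 63.66 and 62.25
```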
The problem with this method is that it requires a reparametrization of the model. For example, the model log(mhijk ) = u24(ik) + u1234(hijk) is a saturated model, but if we drop the u24(ik) ’s, we get log(mhijk ) = u1234(hijk) which is still a saturated model. Dropping the u24(ik) ’s does not change the model. To test every term against the saturated model requires a regression parametrization in which dropping any term really reduces the model. We begin with a simple example that assumes familiarity with estimation for analysis of variance under the “usual” constraints. After the example, we deal with the question of reparametrization. The discussion of reparametrization involves a more sophisticated use of linear model ideas than has been used thus far in the book. Example 6.2.5. Consider again the muscle-tension data of Example 6.2.1. This involves four factors each at two levels. We begin by examining a similar normal theory ANOVA model yhijk
= µ + αh + βi + γj + ηk + (αβ)hi + (αγ)hj + (αη)hk + (βγ)ij + (βη)ik + (γη)jk + (αβγ)hij + (αβη)hik + (αγη)hjk + (βγη)ijk + (αβγη)hijk + ehijk .
With two levels in each factor, every interaction has one degree of freedom and corresponds to a contrast. For example, under the “usual” side conditions, the (αβ) interaction contrast is (αβ)11 − (αβ)12 − (αβ)21 + (αβ)22 .
The estimate of the contrast is $\bar y_{11\cdot\cdot} - \bar y_{12\cdot\cdot} - \bar y_{21\cdot\cdot} + \bar y_{22\cdot\cdot}$. Log-linear model estimation is analogous to the ANOVA procedure. We are dealing with a saturated model log(mhijk) = u + u1(h) + · · · + u234(ijk) + u1234(hijk), so $\hat m_{hijk} = n_{hijk}$ for all h, i, j, and k. Define new parameters λ corresponding to each interaction contrast. For example, the u12 interaction corresponds to a contrast
4λ12 = u12(11) − u12(12) − u12(21) + u12(22)
or, equivalently,
16λ12 = 4[u12(11) − u12(12) − u12(21) + u12(22)].
This particular definition of the λ's relates to two things that will be examined later. One is ease of computation of the estimates; the other is a useful reparametrization of the saturated model. Let $w_{hijk} = \log(n_{hijk})$. The estimated contrast is
$$16\hat\lambda_{12} = 4\left[\bar w_{11\cdot\cdot} - \bar w_{12\cdot\cdot} - \bar w_{21\cdot\cdot} + \bar w_{22\cdot\cdot}\right] = \left[w_{11\cdot\cdot} - w_{12\cdot\cdot} - w_{21\cdot\cdot} + w_{22\cdot\cdot}\right].$$
Applying the usual side conditions, u12(h·) = u12(·i) = 0, leads to the parameter estimates
$$\frac{\hat\lambda_{12}}{4} = \hat u_{12(11)} = -\hat u_{12(12)} = -\hat u_{12(21)} = \hat u_{12(22)}.$$
Obviously, if you know one of the $\hat u_{12(hi)}$'s, you know them all. It is simpler to focus on $\hat\lambda_{12}$. The estimates of all the $\hat\lambda$'s are 1/16th of the sum of 8 $w_{hijk}$'s minus the sum of the remaining 8 $w_{hijk}$'s, so all $\hat\lambda$'s have the same asymptotic standard error
$$SE(\hat\lambda) = \frac{1}{16}\left[\sum_{hijk}\frac{1}{n_{hijk}}\right]^{1/2}.$$
The standard error depends on having a saturated model and is a generalization of the result for log odds ratios given earlier. Details are given in Section 10.2. If we do an analysis of variance on the $w_{hijk}$'s, the sums of squares for the various terms equal $16\hat\lambda^2$. Table 6.2 shows an analysis of variance.
TABLE 6.2. Analysis of Variance on log(nhijk) for the Muscle Tension Data

Source   df    SS
T         1    .2208
W         1    .3652
M         1    .0000
D         1    .9238
TW        1    .0522
TM        1    .4202
WM        1    5.631
TD        1    .1441
WD        1    .0080
MD        1    .2167
TWM       1    .1123
TWD       1    .0018
TMD       1    .2645
WMD       1    3.286
TWMD      1    .01188
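The sums of squares in Table 6.2 and the |16λ̂| column of Table 6.3 below can be reproduced directly from the cell counts. The following is a rough sketch of that computation (our own code, not from the text; it assumes the counts are in a 2 × 2 × 2 × 2 NumPy array n ordered (T, W, M, D) with no empty cells):

```python
import numpy as np
from itertools import product

def lambda_contrasts(n):
    """Estimates of 16*lambda for every effect in the 2x2x2x2 saturated table,
    using the contrast definition of Example 6.2.5, together with the common
    standard error SE(16*lambda) = [sum 1/n_hijk]^(1/2)."""
    w = np.log(n)                                    # w_hijk = log(n_hijk)
    se = np.sqrt((1.0 / n).sum())
    effects = {}
    for signs in product([0, 1], repeat=4):          # which factors enter the contrast
        if sum(signs) == 0:
            continue
        coef = np.ones(n.shape)                      # +1/-1 contrast coefficients
        for axis, s in enumerate(signs):
            if s:
                idx = [slice(None)] * 4
                idx[axis] = 1                        # second level of this factor
                coef[tuple(idx)] *= -1
        name = "".join(f for f, s in zip("TWMD", signs) if s)
        effects[name] = (coef * w).sum()             # 16*lambda-hat for this effect
    return effects, se
```

Each returned value is 16λ̂ for the named effect; its square divided by 16 gives the sum of squares in Table 6.2, and its absolute value divided by se gives the |z| statistics of Table 6.3.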
Table 6.3 gives values of $|16\hat\lambda|$ and $|z| = |\hat\lambda/SE(\hat\lambda)| = |16\hat\lambda/SE(16\hat\lambda)|$. The z values can be used to test λ = 0. The estimates in Table 6.3 were obtained from Table 6.2. For example, the source T has a sum of squares of .2208, so $|16\hat\lambda_T| = [(16).2208]^{1/2}$. The standard error is $SE(16\hat\lambda_T) = \left[\sum_{hijk}(1/n_{hijk})\right]^{1/2}$. The main reason for using estimates of 16λ rather than λ is that the 16λ estimates are more comparable to another reparametrization of the saturated model that will be used later. The important terms in Table 6.3 are λWMD, λWM, and perhaps λD. In other words, the main effect for D, the WM interaction, and the WMD interaction are the important terms in the model. By our rule for including lower-order terms, the inclusion of λWMD implies the model [WMD] which automatically includes both [WM] and [D]. It is interesting to note that the factor T does not appear in any important terms. As mentioned at the beginning of the subsection, testing each term last requires that the saturated model be reparametrized into a regression model. A method is needed for relating the reparametrized results back to the original parametrization. This is most easily done when each factor is at only two levels. If each factor is at only two levels, there is one degree of freedom for each u term. Still, there are an infinite number of possible parametrizations. It is necessary to (arbitrarily) choose one.

TABLE 6.3. Muscle Tension Data: Estimates and Test Statistics for Model (2)

λ        |16λ̂|     |z|
T        1.880     1.44
W        2.417     1.85
M        0.002     0.00
D        3.846     2.94
TW       0.914     0.70
TM       2.593     1.98
WM       9.492     7.26
TD       1.518     1.16
WD       0.358     0.27
MD       1.862     1.42
TWM      1.340     1.03
TWD      0.172     0.13
TMD      2.057     1.57
WMD      7.251     5.55
TWMD     0.436     0.33
SE(16λ̂) = 1.307.

Example 6.2.6. Consider a 2 × 2 × 2 × 2 table. The model
$$\begin{aligned}
\log(m_{hijk}) ={}& u + u_{1(h)} + u_{2(i)} + u_{3(j)} + u_{4(k)}\\
& + u_{12(hi)} + u_{13(hj)} + u_{14(hk)} + u_{23(ij)} + u_{24(ik)} + u_{34(jk)}\\
& + u_{123(hij)} + u_{124(hik)} + u_{134(hjk)} + u_{234(ijk)} + u_{1234(hijk)} \qquad (1)
\end{aligned}$$
can be reparametrized as
$$\begin{aligned}
\log(m_{hijk}) ={}& \lambda + (-1)^{h-1}\lambda_1 + (-1)^{i-1}\lambda_2 + (-1)^{j-1}\lambda_3 + (-1)^{k-1}\lambda_4\\
& + (-1)^{h+i-2}\lambda_{12} + (-1)^{h+j-2}\lambda_{13} + (-1)^{h+k-2}\lambda_{14} + (-1)^{i+j-2}\lambda_{23} + (-1)^{i+k-2}\lambda_{24} + (-1)^{j+k-2}\lambda_{34}\\
& + (-1)^{h+i+j-3}\lambda_{123} + (-1)^{h+i+k-3}\lambda_{124} + (-1)^{h+j+k-3}\lambda_{134} + (-1)^{i+j+k-3}\lambda_{234}\\
& + (-1)^{h+i+j+k-4}\lambda_{1234}. \qquad (2)
\end{aligned}$$
This parametrization gives the same estimates as using the "usual" side conditions, i.e., 0 = u1(·) = u2(·) = u3(·) = u4(·) = u12(·i) = u12(h·) = · · · = u1234(·ijk) = u1234(h·jk) = u1234(hi·k) = u1234(hij·). Model (2) is given in matrix form in Example 10.4.1. Other sets of side conditions correspond to other reparametrizations. For example, another frequently used set of side conditions is 0 = u1(1) = u2(1) = u3(1) = u4(1) = u12(1i) = u12(h1) = · · · = u1234(1ijk) = u1234(h1jk) = u1234(hi1k) = u1234(hij1). Here, all u terms are set equal to zero for which any of h, i, j, or k is 1. If we let δab = 1 when a = b and 0 otherwise where a and b are any symbols, these side conditions correspond to the
reparametrized model
$$\begin{aligned}
\log(m_{hijk}) ={}& \gamma + \delta_{h2}\gamma_1 + \delta_{i2}\gamma_2 + \delta_{j2}\gamma_3 + \delta_{k2}\gamma_4\\
& + \delta_{(h,i)(2,2)}\gamma_{12} + \delta_{(h,j)(2,2)}\gamma_{13} + \delta_{(h,k)(2,2)}\gamma_{14} + \delta_{(i,j)(2,2)}\gamma_{23} + \delta_{(i,k)(2,2)}\gamma_{24} + \delta_{(j,k)(2,2)}\gamma_{34}\\
& + \delta_{(h,i,j)(2,2,2)}\gamma_{123} + \delta_{(h,i,k)(2,2,2)}\gamma_{124} + \delta_{(h,j,k)(2,2,2)}\gamma_{134} + \delta_{(i,j,k)(2,2,2)}\gamma_{234}\\
& + \delta_{(h,i,j,k)(2,2,2,2)}\gamma_{1234}. \qquad (3)
\end{aligned}$$
Except for λ1234 and γ1234, these parametrizations are not equivalent and can lead to different conclusions about which terms should be in a model. As mentioned earlier, a primary difficulty in testing each term last is in relating the tests for the reparametrized model to tests for ANOVA type models. In the special case where each factor has two levels (categories), the relationship is simple, because each term in the ANOVA type models has one degree of freedom, just as each test in the reparametrized model has one degree of freedom. If a particular term has a large test statistic, the corresponding main effect or interaction is included in the model. For example, if we reject H0: λ12 = 0 (or H0: γ12 = 0), then our ANOVA model includes u12(hi). This implies that the ANOVA model will include (at least implicitly) u1(h) and u2(i) regardless of whether λ1 (γ1) and λ2 (γ2) are significantly different from zero. Note that because λ12 and γ12 are not equivalent, the results of this procedure depend on the parametrization chosen. In fact, identifying important effects by testing all effects last does not provide a good end model. It provides an initial model from which some method of exploration (e.g., forward selection or combined stepwise) can be used to determine a final model. Example 6.2.7. Consider again the muscle tension data of Example 6.2.1. The λ values defined in Example 6.2.5 using the usual side conditions are exactly the same as the λ values defined in model (2). For example, 4λ12 = u12(11) − u12(12) − u12(21) + u12(22). We have already obtained estimates of the λ's, standard errors, and |z| scores. Similarly, if we use the parametrization of model (3) and the related side conditions, we find, for example, that γ12 = u12(11) − u12(12) − u12(21) + u12(22); however, 4λ12 ≠ γ12. Some computer programs, e.g., GLIM, routinely provide estimates and standard errors for the parametrization of model (3). The results along with |z| values are reported in Table 6.4. There are now
at least six interesting terms: γWMD, γMD, γWM, γD, γM, and γW. The |z| value for γWD is also quite large. Again using the rule of including lower-order terms, the inclusion of γWMD implies the model [WMD], which, in turn, implies the inclusion of all of the other interesting terms. Once again, factor T does not appear. The results from these two parametrizations are reasonably consistent for this data set, but it is not difficult to see how the analysis could go awry. Consider the terms for TWM and TMD. Using model (2), we get $|z(\hat\lambda_{TWM})| = 1.03$ and $|z(\hat\lambda_{TMD})| = 1.57$, so, although neither is significant, the TMD term seems considerably more important than the TWM term. However, in model (3), $|z(\hat\gamma_{TWM})| = 0.80$ and $|z(\hat\gamma_{TMD})| = 0.80$. In model (3), the TMD and TWM terms seem to be of equal importance. The problem is that the parameters are model dependent. Because they are from different models, 8λTWM ≠ γTWM. As a result, the test statistics are testing different things. The only exception to this is that 16λTWMD = γTWMD. The relationships between parameters in models (1), (2), and (3) are complex and, in our view, often not worth pursuing. The simplest way to avoid the complexities of alternative parametrizations is to deal directly with model (1) and its submodels. In our view, the method of testing each term last can give some rough ideas about the analysis, but usually should not be considered to give anything more than rough ideas. Again, we note a rather curious phenomenon in this example. The experiment was conducted to investigate changes in muscle tension. Neither parametrization shows the significance of any effect involving T, the change in tension. It is theoretically possible that none of the other factors relate to change in muscle tension, but in most studies of this type, the investigator conducts the experiment because he or she knows that there are relationships between the other factors and change in tension. As we saw from the tests of partial and marginal association, such relationships exist. The method employed has simply failed to find them. Two final comments on the choice of an initial model. Any effects that are forced into the model to deal with the sampling scheme should be included in any initial model and never deleted in any model selection method. Also, one is rarely interested in models that are smaller than the model of complete independence. It is common practice to include at least the main effect for every factor in an initial model.
TABLE 6.4. Muscle Tension Data — Model (3): Estimates, Standard Errors, and Test Statistics

γ        γ̂          SE       |z|
T        −0.000      .8165    0.00
W         1.992      .6154    3.24
M         2.037      .6138    3.32
D         1.946      .6172    3.15
TW        0.716      .8569    0.84
TM        0.578      .8570    0.67
WM       −3.742      .8199    4.56
TD       −0.742      .9024    0.82
WD       −1.571      .6765    2.32
MD       −2.684      .7179    3.74
TWM      −0.888      1.104    0.80
TWD      −0.304      .9781    0.31
TMD       0.810      1.010    0.80
WMD       3.407      .9620    3.54
TWMD      0.436      1.307    0.33

6.3 Example of Stepwise Methods

We now give detailed examples of forward selection and backward elimination. Reconsider the data of Example 3.7.2 in which there are four factors defining a 2 × 2 × 3 × 6 table. Recall that the factors are

Factor    Abbreviation   Levels
Race      R              White, Nonwhite
Sex       S              Male, Female
Opinion   O              Yes = Supports Legalized Abortion
                         No = Opposed to Legalized Abortion
                         Und = Undecided
Age       A              18-25, 26-35, 36-45, 46-55, 56-65, 66+ years
The data are repeated in Table 6.5. We begin by fitting the all three-factor model, the all two-factor model, and the complete independence (all one-factor) model.

Model                        df     G2
[RSO][RSA][ROA][SOA]         10     6.12
[RS][RO][RA][SO][SA][OA]     37     26.09
[R][S][O][A]                 62     121.47
Clearly, both the all three-factor and the all two-factor models fit the data relative to the saturated model. Comparing the all three-factor and all two-factor models gives
G2 = 26.09 − 6.12 = 19.97,
df = 37 − 10 = 27,
so there is no reason to reject the all two-factor model. The model of complete independence does not fit.
TABLE 6.5. Abortion Opinion Data

                                            Age
Race       Sex      Opinion   18-25   26-35   36-45   46-55   56-65   66+
White      Male     Yes          96     138     117      75      72    83
                    No           44      64      56      48      49    60
                    Und           1       2       6       5       6     8
           Female   Yes         140     171     152     101     102   111
                    No           43      65      58      51      58    67
                    Und           1       4       9       9      10    16
Nonwhite   Male     Yes          24      18      16      12       6     4
                    No            5       7       7       6       8    10
                    Und           2       1       3       4       3     4
           Female   Yes          21      25      20      17      14    13
                    No            4       6       5       5       5     5
                    Und           1       2       1       1       1     1
6.3.1 Forward Selection
First, we consider forward selection using the model of complete independence as our initial model. As a criterion for adding terms, we add the most significant term as long as the significance level is below .10. For those of us who are not wild about significance tests, some comments based directly on the likelihood ratio test statistics are also included. It should be remembered that the formal method of forward selection is based on the significance levels. Three methods of forward selection have been discussed. These involve adding two-factor terms, adding simple effects, and adding multiple effects. Starting with the model of complete independence, the first step in all three methods is the same. We require the following fits:

Model           Added Term   df     G2
[RS][O][A]      RS           61     119.45
[RO][S][A]      RO           60     107.48
[RA][S][O]      RA           57     115.46
[SO][R][A]      SO           60     112.28
[SA][R][O]      SA           57     120.75
[OA][R][S]      OA           52     59.78
[R][S][O][A]                 62     121.47
All of the models with two-factor terms are compared to the model of complete independence; e.g., to test the RS term, G2 = G2([R][S][O][A]) − G2([RS][O][A]) = 121.47 − 119.45 = 2.02. The degrees of freedom are 62 − 61 = 1. The results of the tests are summarized below.

Term   df     G2      P
RS      1     2.02    .1557
RO      2     13.99   .0009
RA      5     6.01    .3055
SO      2     9.19    .0101
SA      5     0.72    .9820
OA     10     61.69   .0000
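Each of these comparisons is just a chi-squared tail probability for the difference in G2 on the difference in degrees of freedom. A brief sketch (our own code, using SciPy; the function name is hypothetical):

```python
from scipy.stats import chi2

def compare(g2_reduced, df_reduced, g2_full, df_full):
    """P value for testing a reduced model against a larger model."""
    g2_diff = g2_reduced - g2_full
    df_diff = df_reduced - df_full
    return g2_diff, df_diff, chi2.sf(g2_diff, df_diff)

# Testing the RS term: [R][S][O][A] against [RS][O][A]
print(compare(121.47, 62, 119.45, 61))   # (2.02, 1, P about .16)
```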
The term OA has the smallest P value, so [OA] is added to the model of complete independence. With the new model [R][S][OA], the second step of adding either two-factor effects or simple effects again remains the same. At this point, adding any three-factor term would imply adding more than one effect, so simple effects are only two-factor effects. Consideration of the addition of multiple effects leads to a different second step. To add either a two-factor effect or a simple effect requires the following fits:

Model          Added Term   df     G2
[RS][OA]       RS           51     57.76
[RO][OA][S]    RO           50     45.79
[RA][OA][S]    RA           47     53.77
[SO][OA][R]    SO           50     50.59
[SA][OA][R]    SA           47     59.06
[R][S][OA]                  52     59.78
Again, all the models with an additional two-factor term are compared to the model [R][S][OA].

    Term   df     G2       P
    RS      1    2.02   .1557
    RO      2   13.99   .0009
    RA      5    6.01   .3055
    SO      2    9.19   .0101
    SA      5    0.72   .9820
The term RO is added to the model, giving a base model of [RO][OA][S]. If addition of multiple effects to [R][S][OA] is considered, two more models must be evaluated. Adding an additional factor to the OA term leads to the models [R][SOA] and [S][ROA]. These models have the fits
    Model        Added Term   df     G2
    [R][SOA]     SOA          35   47.91
    [S][ROA]     ROA          35   32.31
    [R][S][OA]   —            52   59.78
They are tested against [R][S][OA].

    Term   df     G2       P
    SOA    17   11.87   .8082
    ROA    17   27.47   .0516
These P values are larger than the P value for RO, so the term RO is still added to [R][S][OA], giving the new model [RO][OA][S]. In this example, addition of two-factor effects and addition of simple effects turn out to be identical procedures. We now follow this procedure to its conclusion. After establishing the end model, we will indicate how these procedures would have differed if our model selection procedure had been modified slightly. Finally, we will follow the method of addition of multiple effects to its conclusion.

After the second step of forward selection, the simple effects and two-factor effects procedures had arrived at a base model of [RO][OA][S]. The only simple effects that can be added are two-factor effects. There are four possible effects that can be added. The necessary model fits are

    Model             Added Term   df     G2
    [RS][RO][OA]      RS           49   43.77
    [RA][RO][OA][S]   RA           45   38.82
    [SO][RO][OA]      SO           48   36.60
    [SA][RO][OA]      SA           45   45.07
    [RO][OA][S]       —            50   45.79
This leads to the following differences:

    Term   df     G2       P
    RS      1    2.02   .1557
    RA      5    6.97   .2228
    SO      2    9.19   .0101
    SA      5    0.72   .9820
Thus, [SO] is added to the model. This turns out to be the last step at which anything is added to the model. The next step examines
    Model              Added Term   df     G2
    [RS][SO][RO][OA]   RS           47   34.25
    [RA][SO][RO][OA]   RA           43   29.63
    [SA][SO][RO][OA]   SA           43   35.33
    [SO][RO][OA]       —            48   36.60
and

    Term   df     G2       P
    RS      1    2.35   .1252
    RA      5    6.97   .2228
    SA      5    1.27   .9382
At this stage, all of the P values are in excess of .10, so no new term is added and the final model is [SO][RO][OA].

To see how adding two-factor effects can differ from adding simple effects, suppose that our criterion for stepping is having P values less than .15. With this criterion, the term RS would be added to the model, giving a new model of [RS][SO][RO][OA]. If we restrict ourselves to adding two-factor effects, the next step involves adding RA or SA. If we allow addition of simple effects, the three-factor term RSO could also be added. This is the first time that a simple effect is a three-factor effect because [RS][SO][RO][OA] is the first model that contains all three of the two-factor effects that correspond to a three-factor effect. In particular, with [RS], [SO], and [RO] in the model, adding [RSO] is adding a simple effect.

Recall that the rationale for starting our search with the model of complete independence was based in part on the fact that the all two-factor model gave an adequate fit. This provides a rationale for considering only the addition of two-factor terms. Unfortunately, the test of the all two-factor model against the all three-factor model can have very little power for identifying individual three-factor terms that are important. It is not safe to ignore all three-factor terms based on this one test. Thus, it is dangerous to consider adding only two-factor effects. By considering addition of simple effects, we at least admit the possibility of examining important three-factor effects.

We now return to examining forward selection with the addition of multiple effects. Recall that after the second step, we had arrived at a base model of [RO][OA][S]. The multiple effects that can be added involve adding one new factor to a term already in the model, so these are the remaining two-factor effects (which are also simple effects) plus RSO, ROA, and SOA. Given below are statistics for fitting each model plus the differences in df's and G2's between the various models and [RO][OA][S]. Tests are based on these differences. We use a cutoff of α = .05. (For this example, the first two steps do not change when α = .05 is used instead of α = .10.)
    Model             Added Term   df     G2      Differences
                                                  df     G2       P
    [OR][OA][SA]      SA           45   45.07     5    0.72   .9820
    [RA][OR][OA][S]   RA           45   38.82     5    6.97   .2228
    [RO][OA][OS]      SO           48   36.60     2    9.19   .0101
    [RO][OA][SR]      RS           49   43.77     1    2.02   .1557
    [RO][SOA]         SOA          33   33.92    17   11.87   .8082
    [ROA][S]          ROA          35   32.31    15   13.48   .5654
    [RSO][OA]         RSO          45   24.77     5   21.02   .0008
    [RO][OA][S]       —            50   45.79     —      —       —
For testing against [RO][OA][S], the model with the smallest P value is [RSO][OA], with P = .0008. The P value is less than .05, so we take [RSO][OA] as our working model. Multiple effects that can be added to this are SA, RA, SOA, ROA, RSA, and RSOA. The statistics are given below.

    Model              Added Term   df     G2      Differences
                                                   df     G2       P
    [RSOA]             RSOA          0    0.00    45   24.77   .9938
    [RSO][OA][SA]      SA           40   23.50     5    1.27   .9382
    [RSO][OA][RA]      RA           40   17.79     5    6.97   .2228
    [RSO][SOA]         SOA          30   22.09    15    2.68   .9998
    [RSO][ROA]         ROA          30   11.29    15   13.48   .5655
    [RSO][OA][RSA]     RSA          30   14.43    15   10.34   .7980
    [RSO][OA]          —            45   24.77     —      —       —
For testing against the model [RSO][OA], every model has a P value greater than .05, so no new terms are added. The final model is [RSO][OA]. Because the addition of multiple effects leads to considering more models than the addition of simple effects, the author prefers the multiple effect option if you insist on doing forward selection.
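The forward selection procedure is mechanical enough to sketch in a few lines of code. In the Python sketch below, fit_loglin and candidate_terms are hypothetical helpers rather than functions from any particular package: any routine that fits a hierarchical log-linear model and returns its G2 and degrees of freedom, and any routine that lists the terms eligible for addition (two-factor effects, simple effects, or multiple effects), would serve.

from scipy.stats import chi2

def forward_select(base_terms, candidate_terms, fit_loglin, cutoff=0.10):
    """Add the most significant term while its P value is below the cutoff."""
    current = list(base_terms)
    G2_cur, df_cur = fit_loglin(current)
    while True:
        best = None
        for term in candidate_terms(current):
            G2_new, df_new = fit_loglin(current + [term])
            # Test the added term by differencing the G2's and the df's.
            p = chi2.sf(G2_cur - G2_new, df_cur - df_new)
            if best is None or p < best[0]:
                best = (p, term, G2_new, df_new)
        if best is None or best[0] >= cutoff:
            return current
        _, term, G2_cur, df_cur = best
        current = current + [term]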
6.3.2 Backward Elimination
We now consider applying backward elimination to the initial model containing all two-factor terms. We will use a cutoff value of α = .05. Given below are statistics for the model of all two-factor terms and the six models in which one of the two-factor terms has been dropped. (At this stage, the simple effects are precisely the two-factor effects.) The df and G2 for testing each model against the saturated model are given. With this information, each reduced model can be tested against the all two-factor model by taking differences in the df ’s and G2 ’s. For each of the reduced models, the differences are listed along with the P value for the test.
    Model                       Deleted Term   df     G2      Differences
                                                              df     G2       P
    [AS][AR][AO][OS][OR][SR]    —              37   26.09     —      —       —
    [AS][AR][OS][OR][SR]        AO             47   89.24    10   63.15   .0000
    [AR][AO][OS][OR][SR]        AS             42   27.28     5    1.19   .9461
    [AS][AO][OS][OR][SR]        AR             42   32.98     5    6.89   .2289
    [AS][AR][AO][OR][SR]        OS             39   36.12     2   10.03   .0067
    [AS][AR][AO][OS][SR]        OR             39   41.33     2   15.24   .0005
    [AS][AR][AO][OS][OR]        SR             38   28.36     1    2.27   .1319
Deleting the AS term gives the largest P value, so we choose the reduced model [AR][AO][OS][OR][SR]. Once again, the simple effects are the two-factor effects. The necessary statistics are

    Model                    Deleted Term   df     G2      Differences
                                                           df     G2       P
    [AR][AO][OS][OR][SR]     —              42   27.28     —      —       —
    [AR][OS][OR][SR]         AO             52   89.93    10   62.65   .0000
    [AO][OS][OR][SR]         AR             47   34.25     5    6.97   .2228
    [AR][AO][OR][SR]         OS             44   36.80     2    9.52   .0086
    [AR][AO][OS][SR]         OR             44   42.53     2   15.26   .0005
    [AR][AO][OS][OR]         SR             43   29.63     1    2.35   .1252
Deleting the AR term gives the largest P value, so the new model is [AO][OS][OR][SR]. Simple effects are still two-factor effects, so the necessary statistics are

    Model               Deleted Term   df     G2      Differences
                                                      df     G2       P
    [AO][OS][OR][SR]    —              47   34.25     —      —       —
    [A][OS][OR][SR]     AO             57   95.94    10   61.69   .0000
    [AO][OR][SR]        OS             49   43.77     2    9.52   .0085
    [AO][OS][SR]        OR             49   48.57     2   14.32   .0008
    [AO][OS][OR]        SR             48   36.60     1    2.35   .1252
We now delete SR and use the base model [AO][OS][OR]. The statistics are

    Model           Deleted Term   df     G2      Differences
                                                  df     G2       P
    [AO][OS][OR]    —              48   36.60     —      —       —
    [A][OS][OR]     AO             58   98.29    10   61.69   .0000
    [AO][OR][S]     OS             50   45.79     2    9.19   .0101
    [AO][OS][R]     OR             50   50.59     2   13.99   .0009
None of the P values is greater than .05, so we stop deleting terms and go with the model [AO][OS][OR]. Note that this model has the nice interpretation that, given people's opinions, race, sex, and age are independent.
In this example, simple effects were always two-factor effects. If our cutoff level had been α = .01, this would not have happened. With a cutoff of .01, the term OS can be dropped from [AO][OS][OR], yielding the model [AO][OR][S]. Deleting two-factor terms leads us to consider the reduced models [A][OR][S] (eliminating AO) and [AO][R][S] (eliminating OR). If we consider dropping simple effects, we would also consider the reduced model [AO][OR] (eliminating S). However, as mentioned earlier, it is typically not a good idea to drop main effects. If the initial model was the model of all three-factor effects, the difference between dropping three-factor effects and dropping simple effects could be substantial. Dropping simple effects is a more general procedure and seems more reasonable to the author.
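Backward elimination can be sketched in the same spirit as the forward selection sketch given earlier. Again, fit_loglin and droppable_terms are hypothetical helpers, not functions from any particular package; droppable_terms should respect whatever restrictions are imposed (e.g., not dropping main effects or effects fixed by the sampling design).

from scipy.stats import chi2

def backward_eliminate(initial_terms, droppable_terms, fit_loglin, cutoff=0.05):
    """Drop the term whose removal has the largest P value while it exceeds the cutoff."""
    current = list(initial_terms)
    G2_cur, df_cur = fit_loglin(current)
    while True:
        worst = None
        for term in droppable_terms(current):
            reduced = [t for t in current if t != term]
            G2_red, df_red = fit_loglin(reduced)
            p = chi2.sf(G2_red - G2_cur, df_red - df_cur)
            if worst is None or p > worst[0]:
                worst = (p, reduced, G2_red, df_red)
        if worst is None or worst[0] <= cutoff:
            return current
        _, current, G2_cur, df_cur = worst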
6.3.3 Comparison of Stepwise Methods
Forward selection with α = .10 and addition of simple effects leads to the model [RO][SO][OA]. It is easily seen that α = .05 would lead to the same model. Forward selection with multiple effects and α = .05 leads to [RSO][OA]. Backward elimination of simple effects from the all two-factor model with α = .05 leads to [RO][SO][OA]. Frankly, we are lucky to have two methods give the same model. There is no reason that this needs to happen.

Which is a better model? One is a special case of the other, so the test statistic is 36.60 − 24.77 = 11.83 with 48 − 45 = 3 degrees of freedom. This suggests quite strongly that [RSO][OA] is the better model. Note that it will not always be possible to test the results of different procedures because the models may not be comparable.

Stepwise methods are very sensitive to the cutoff values used. They are also very sensitive to the initial model. For backward elimination, we started with the model of all two-factor effects. The importance of the single three-factor effect RSO was washed out in testing the all two-factor model against the all three-factor model. Hence, it was decided to go with the two-factor model. If we had considered Brown's measures of partial and marginal association, we would have been better off (at least for these data). Brown's measures are given in Table 6.6. The term [RSO] stands out as a clearly important effect. In fact, even without using a stepwise procedure, Brown's tests clearly suggest the model [RSO][OA], but that is a function of these particular data.

In this section, we have used several variations on stepwise regression. This is not just a pedagogical device. If stepwise methods are to be used in spite of their well-known weaknesses, it is important to use several variations. This allows the data to indicate several candidate models. These models should be compared to see how well they fit the model assumptions. They should also be compared for interpretability.
TABLE 6.6. Brown's Measures of Association

    Effect   df   Partial G2      P    Marginal G2      P
    R         1     1552.90     .00        —            —
    S         1       25.21     .00        —            —
    O         2     1532.82     .00        —            —
    A         5       55.14     .00        —            —
    RS        1        2.27     .13       2.02        .16
    RO        2       15.24     .00      13.99        .00
    SO        2       10.03     .01       9.19        .01
    RA        5        6.89     .23       6.01        .31
    SA        5        1.19     .95       0.72        .98
    OA       10       63.15     .00      61.69        .00
    RSO       2       10.51     .01       9.48        .01
    RSA       5        2.55     .77       1.70        .89
    ROA      10        7.17     .71       6.51        .77
    SOA      10        1.43    1.00       1.41       1.00
    RSOA     10        6.12     .81       6.12        .81

6.3.4 Computer Commands
BMDP-4F performs these stepwise procedures and gives initial models, including measures of partial and marginal association. Commands for backward elimination of simple effects are

/ INPUT      FILE = 'C:\LOGLIN\ABORT.DAT'.
             FORMAT = FREE.
             VARIABLES = 5.
/ VARIABLE   NAMES = R, S, A, O, N.
/ TABLE      INDEX = R, S, A, O.
             COUNT = N.
/ STAT       ALL.
/ FIT        MODEL = AS, AR, AO, OS, OR, SR.
             ASSOCIATION = 4.
             DELETE = SIMPLE.
             STEP = 10.
             PROB = .05.
/ PRINT      LINE = 79.
/ END
The “step” command specifies how many steps are allowed in the procedure. The “prob” command specifies the probability for stopping the procedure. With this program, both the P value for the individual term and the P value for testing the model against the saturated model must be less than the specified probability for the procedure to stop.
6.4 Aitkin's Method of Backward Selection
Aitkin (1978, 1979) suggests a model selection method that is closely related to the all s-factor effects method described in Section 2. After using backward selection to pick an all s-factor model, Aitkin's method provides for testing every model intermediate between the all s-factor model and the all s − 1-factor model. The procedure also incorporates ideas on simultaneous testing that control the overall error rate for all tests performed.

Aitkin begins by testing the all s − 1-factor model against the all s-factor model at a level, say, γ_s. (The choice of γ_s will be discussed later.) This is actually a test of whether the s-factor effects are needed in the model. Except for the choice of γ_s, this is exactly what was done in the subsection of Section 2 on All s-Factor Effects.

To describe the procedure precisely, we need some additional notation. Let G2_s be the likelihood ratio test statistic and let d_s be the degrees of freedom for testing the all s-factor model against the saturated model. To test the need for s-factor effects, we reject the null hypothesis of no s-factor effects if

    G2_{s−1} − G2_s > χ2(1 − γ_s, d_{s−1} − d_s).

This is a test for the adequacy of the all s − 1-factor model. Aitkin then identifies the smallest value of s for which the all s-factor effects model adequately fits the data. This model has s as the largest value such that G2_{s−1} − G2_s > χ2(1 − γ_s, d_{s−1} − d_s).

Example 6.4.1. Consider the muscle tension data of Example 6.2.1. This is a four-factor table. As given in Example 6.2.1, the all s-factor models are

    s    Model                         d_s    G2_s
    4    [TWMD]                          0       0
    3    [TWM][TWD][TMD][WMD]            1    0.11
    2    [TW][TM][WM][TD][WD][MD]        5   47.67
    1    [T][W][M][D]                   11   127.4
For reasons to be considered later, suppose γ4 = .05, γ3 = .185, and γ2 = .265; then Aitkin's tests are

    s − 1 versus s    G2_{s−1} − G2_s           χ2(1 − γ_s, d_{s−1} − d_s)
    3 versus 4        0.11 − 0 = 0.11           χ2(.95, 1) = 3.841
    2 versus 3        47.67 − 0.11 = 47.56      χ2(.815, 4) = 6.178
    1 versus 2        127.4 − 47.67 = 79.7      χ2(.735, 6) = 7.638
The largest value of s for which G2_{s−1} − G2_s > χ2(1 − γ_s, d_{s−1} − d_s) is s = 3. The model [TWM][TWD][TMD][WMD] fits the data according to Aitkin's criteria.
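The three tests in Example 6.4.1 are easy to reproduce. The Python sketch below (assuming scipy is available) computes the chi-squared cutoffs χ2(1 − γ_s, d_{s−1} − d_s) from the values given above.

from scipy.stats import chi2

d  = {4: 0, 3: 1, 2: 5, 1: 11}                 # d_s: df for the all s-factor model
G2 = {4: 0.0, 3: 0.11, 2: 47.67, 1: 127.4}     # G2_s from the table above
gamma = {4: .05, 3: .185, 2: .265}

for s in (4, 3, 2):
    stat = G2[s - 1] - G2[s]
    crit = chi2.ppf(1 - gamma[s], d[s - 1] - d[s])
    print(s - 1, "versus", s, round(stat, 2), ">", round(crit, 3), "?", stat > crit)
# The largest s with stat > crit is s = 3, so the all three-factor model is retained.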
Before discussing Aitkin's method in general, we examine its application to the race, sex, opinion, age data.

Example 6.4.2. For the race, sex, opinion, age data of Section 3, take γ_s = .10 for all s. Recall from the previous section that the model of all two-factor effects fits well. Because of this, the procedure will never consider three-factor effects. In particular, it will never consider the model [RSO][OA], which fits very well. The model of all two-factor effects has 37 degrees of freedom and G2 = 26.09 for testing against the saturated model. The model of complete independence has 62 degrees of freedom for testing against the saturated model. In Aitkin's method, a model, say X, with all main effects and some two-factor effects is deemed inadequate if

    G2_X − 26.09 > χ2(.90, 62 − 37) = 34.38

or, equivalently, if

    G2_X > 34.38 + 26.09 = 60.47.

Any model that is not inadequate is adequate. A minimal adequate model is an adequate model that has no submodel that is deemed adequate. Our primary interest is in identifying minimal adequate models. Among models with two-factor effects, the most informative models will be small models that fit and large models that do not fit. If a small model fits, any larger model also fits. If a large model does not fit, then any smaller model does not fit.

We begin by looking for small models that fit adequately. In particular, consider the first step of forward selection from the complete independence model.

    Model          Added Effect      G2
    [R][S][OA]     AO              59.78
    [R][SA][O]     SA             120.75
    [RA][S][O]     RA             115.46
    [R][SO][A]     SO             112.28
    [RO][S][A]     RO             107.48
    [RS][O][A]     RS             119.45
We are in luck! One of these models, [R][S][OA], has G2 < 60.47, so it is deemed adequate. Thus, we have an extremely small model that is adequate and any larger model must also be adequate. The model [R][S][OA] is clearly a minimal adequate model. We have also established that any other minimal adequate models must have at least 2 two-factor effects (we have already checked all models with only 1 two-factor effect) and none of the two-factor effects can be OA (otherwise it will have [R][S][OA] as a submodel).
We now look for relatively large models that do not fit. Typically, a good approach is to look at the first step of backward elimination from the all two-factor effects model and hope to find some that do not fit adequately. For these data, however, we just established that any model larger than [R][S][OA] fits. In the first step of backward elimination, one two-factor effect is dropped out. Unless the two-factor effect dropped out is OA, we already know that the model will fit. The only model we need consider is [RS][RO][RA][SO][SA]; this model has a G2 of 89.24. Once again, we are in luck. The value 89.24 is greater than the critical value 60.47, so [RS][RO][RA][SO][SA] is deemed inadequate. Thus, any model that does not include OA must be inadequate. Combining our two results, we have established that the only minimal adequate model is [R][S][OA].

We now consider what would occur if γ2 was somewhat larger. Aitkin has suggested a method of choosing the γ_s's that will be discussed later. It begins with a level, say α = .05, and for t = 4 factors with s = 2, Aitkin's choice is γ2 = 1 − (1 − α)^C(4,2) = .265, where C(4, 2) = 6 is the number of combinations of four things taken two at a time. Upon establishing that χ2(.735, 25) = 28.97 where 25 = 62 − 37, a model X consisting of two-factor effects is deemed inadequate if

    G2_X > 28.97 + 26.09 = 55.06.

The model [R][S][OA] has G2 = 59.78, so it is no longer deemed adequate. However, it is very close to meeting the adequacy criterion. It makes sense to consider models that include [OA] and another two-factor effect. In particular, this is precisely the second step in forward selection starting with complete independence and adding simple effects. The models considered and their G2's are

    Model           G2
    [R][SA][OA]   59.06
    [RA][S][OA]   53.77
    [R][SO][OA]   50.59
    [RO][S][OA]   45.79
    [RS][OA]      57.76
The models [R][SA][OA] and [RS][OA] have G2 > 55.06, so they are deemed inadequate. The models [RA][S][OA], [R][SO][OA], and [RO][S][OA] are adequate and no smaller models are adequate, so the models [RA][S][OA], [R][SO][OA], and [RO][S][OA] are minimally adequate models. Working from the all two-factor model down, the model of all two-factor effects except [OA] is inadequate (G2 = 89.24), so all adequate models contain [OA]. The inadequate models, [R][SA][OA] and [RS][OA], need to be considered as to the additional terms needed to make them adequate. If we add RA,
SO, or RO, then we have made them larger than one of our minimally adequate models. The only two-factor effects that can generate additional minimally adequate models are SA and RS. If we add the appropriate effect to each model, we get [RS][SA][OA] in both cases. The G2 for this model is 57.04. Because G2 is greater than the critical value 55.06, [RS][SA][OA] is considered inadequate. Therefore, the only minimally adequate models are [RA][S][OA], [R][SO][OA], and [RO][S][OA]. All of these have simple interpretations in terms of conditional independence. Aitkin’s method applied to these data gives smaller models than any of the stepwise methods considered. Unfortunately, it missed the important [RSO] interaction.
General Discussion

We now present a general discussion of Aitkin's method. The method begins with a model of all s-factor effects that adequately fits the data, while the model with only the s − 1-factor effects does not fit the data. The crux of the method is in identifying intermediate models that also give an adequate fit. If X is a model that contains all s − 1-factor effects and some but not all of the s-factor effects, Aitkin tests the adequacy of X by rejecting adequacy of fit if

    G2_X − G2_s > χ2(1 − γ_s, d_{s−1} − d_s).

Here, G2_X is the likelihood ratio test statistic for testing model X against the saturated model. Note that the same criterion for rejection, χ2(1 − γ_s, d_{s−1} − d_s), is used for any such model X. Also, if the all s − 1-factor model is indeed an adequate fit, then because G2_X − G2_s < G2_{s−1} − G2_s, the probability of a false rejection for any and all such models X is less than the probability of a false rejection of the all s − 1-factor model.

The fact that the criterion of rejection does not depend on the particular model X leads to two important observations. If X is an adequate model, then any larger model, say W, must also be deemed adequate because G2_X − G2_s ≥ G2_W − G2_s. Similarly, if X is inadequate, then any smaller model W must also be deemed inadequate because G2_X − G2_s ≤ G2_W − G2_s. These facts together with Aitkin's restriction to only considering s-factor terms make it practical to examine all models that involve s-factor terms.

For the data of Example 6.2.1, Aitkin's method seeks to examine every model that is intermediate between the all three-factor model [TWM][TWD][TMD][WMD] and the all two-factor model [TW][TM][WM][TD][WD][MD]. For example, in testing the intermediate model [TWM][TWD][TMD], the value of G2_{[TWM][TWD][TMD]} − G2_3 = 35.65 is larger than χ2(.815, 4) = 6.178, so the model [TWM][TWD][TMD] is not considered adequate. (The test statistic was obtained from Table 6.1 and .815 = 1 − γ3 from earlier in the section.) Note that the χ2 value uses the 4 degrees of freedom
associated with testing the all two-factor model against the all three-factor model. Similarly, any other model intermediate between the all two-factor and all three-factor models is tested against the all three-factor model using χ2(.815, 4). If the all two-factor model is adequate, the probability of a false rejection when testing the all two- and all three-factor models is γ3 = .185. If the all two-factor model is adequate, then any intermediate model is adequate. Similarly, if an intermediate model is inadequate, then the all two-factor model must be inadequate. Because tests for intermediate models use the same χ2 value but have smaller G2 values than the all 2 versus all 3 test, a false rejection occurs for an intermediate model if and only if a false rejection occurs for the all two-factor model. (This is similar in spirit to Scheffé's method of multiple comparisons.)

Aitkin defines a subset of s-factor effects as a minimal adequate subset if no proper subset defines an adequate model. (The model is the model of all s − 1-factor effects plus the subset of s-factor effects.) Typically, there will be several such models. Each of these models may be reduced further by testing any smaller-order (e.g., s − 1) terms that are not forced into the model. These tests use the criterion of rejection appropriate for that order. [For an s − 1-factor term, compare the test statistic to χ2(1 − γ_{s−1}, d_{s−2} − d_{s−1}).] The fact that relatively few lower-order terms will not be forced into the model makes it practical to carry out this procedure.

The end result of Aitkin's model selection method is a collection of minimal adequate models. These models should be compared for interpretability. They should also be compared to see how well they fit the model assumptions. In fact, they can even be compared using the Adjusted R2 or the Akaike information criteria discussed in Section 3.6.

Probably the main fault of Aitkin's method is the backward elimination in its first step. The first step is to decide on an adequate all s-factor model. For example, in a five-factor table, there are 10 three-factor terms. If 1 of the three-factor terms is substantial and the other 9 are not, then a test for all 10 terms will have little power to establish the need for this single three-factor term. We saw an example of this phenomenon earlier with the abortion opinion data. If it is important to pick up individual high-order interactions, Aitkin's method will be problematic. On the other hand, Aitkin's method may provide a good starting point to which an examination of higher-order interactions can be added.

Finally, we discuss the choice of γ_s's. Aitkin suggests choosing the γ_s's so that there is a probability no greater than, say, γ of rejecting the main-effects-only model (i.e., complete independence) when main-effects-only is adequate. Suppose there are t factors in the table. When complete independence is true, the various tests for s-order interactions are asymptotically independent. Thus, asymptotically, the probability of not rejecting any test is the product of the probabilities for the individual tests, and the γ_s's
should be chosen to satisfy

    1 − γ = ∏_{s=2}^{t} (1 − γ_s).
(Note that we do not consider testing main effects, i.e., first-order "interactions.") Particular values of the γ_s's can be chosen by analogy with a balanced t-factor analysis of variance. In a t-factor ANOVA, the number of s-factor effects is C(t, s). In a balanced ANOVA (with known variance), each of these tests might be conducted at some common level α. Applying this idea to log-linear models, the probability of not rejecting any of the s-factor tests (assuming complete independence holds and tests are independent) is

    1 − γ_s = (1 − α)^C(t,s).

This determines a specific value for γ_s. The corresponding value of γ can be found using the binomial theorem. Because 2^t = Σ_{s=0}^{t} C(t, s), we find that

    γ = 1 − ∏_{s=2}^{t} (1 − γ_s)
      = 1 − ∏_{s=2}^{t} (1 − α)^C(t,s)
      = 1 − (1 − α)^(2^t − t − 1).
Aitkin (1979) suggests that it is reasonable to pick an α level that yields a γ between .25 and .5. In Example 6.4.1, t = 4 and the γ_s values were chosen to satisfy 1 − γ_s = (1 − .05)^C(4,s), so γ4 = .05, γ3 = .185, and γ2 = .265. These yield

    γ = 1 − {(1 − .05)(1 − .185)(1 − .265)} = .431,
which is in Aitkin’s suggested range. In the discussion of Aitkin’s (1978) paper, D.R. Cox suggests that the emphasis on simultaneous testing in Aitkin’s method is excessive. A compromise approach that seems intuitively appealing to the current author is, for some value α, to choose α = γ2 = · · · = γt .
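The γ_s's and the overall level γ are simple functions of α and t. A short Python sketch of the computation for Example 6.4.1 (values as given in the text) is

from math import comb

def aitkin_gammas(t, alpha):
    """Per-order levels gamma_s and the overall level gamma for a t-factor table."""
    gammas = {s: 1 - (1 - alpha) ** comb(t, s) for s in range(2, t + 1)}
    overall = 1 - (1 - alpha) ** (2 ** t - t - 1)
    return gammas, overall

gammas, gamma = aitkin_gammas(4, .05)
print({s: round(g, 3) for s, g in gammas.items()})  # {2: 0.265, 3: 0.185, 4: 0.05}
print(round(gamma, 3))                               # 0.431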
6.5 Model Selection Among Decomposable and Graphical Models
Wermuth (1976) has proposed a backward elimination technique that is restricted to the decomposable models discussed in Section 5.2. She focuses on identifying pairs of factors that can be viewed as conditionally independent. Wermuth's method is most easily understood in terms of graph theory and our general discussion of the method will center on that. However, we begin our discussion with an example that does not use graphical terms or motivations.

Example 6.5.1. Consider again the race, sex, opinion, age data with a backward elimination cutoff of α = .05. The initial model is the saturated model and we consider all factor pairs for possible conditional independence. This leads to the following lack of fit tests:

    Factor Pair   Model          df     G2       P
    RS            [ROA][SOA]     18   20.45   .3080
    RO            [RSA][SOA]     24   38.22   .0329
    RA            [RSO][SOA]     30   22.09   .8506
    SO            [RSA][ROA]     24   27.91   .2639
    SA            [RSO][ROA]     30   11.29   .9989
    OA            [RSO][RSA]     28   78.06   .0000
Note that for factor pair RS, the corresponding model [ROA][SOA] has R and S independent given O and A. Similar interpretations hold for the other pairs and models. The largest P value among the tests is for [RSO][ROA]. The P value is greater than .05, so we take as the model [RSO][ROA]. The conditional independence relation is that S and A are independent given R and O. We have incorporated into the model the conditional independence of factors S and A. At the next step, we consider models that incorporate conditional independence between another pair of factors. In particular, we consider models that are reduced relative to [RSO][ROA] and incorporate an additional conditional independence. The one exception is that the factor pair RO is in both terms of the model [RSO][ROA], so it cannot be considered. As will be seen later in this section, incorporating a conditional independence between the pair RO would lead to a model that is not decomposable. For the pair OA, conditional independence is introduced by reducing the term [ROA] into [RO][RA]. The resulting model is [RSO][RO][RA]. The term [RO] is redundant because it is implied by [RSO]; thus, the reduced model is [RSO][RA]. A similar analysis holds for all other factor pairs except RO. The second step of the model selection method requires the following models and statistics:
    Factor Pair   Model          df     G2      Differences
                                                df     G2       P
    —             [RSO][ROA]     30   11.29     —      —       —
    OA            [RA][RSO]      38   80.45     8   69.16   .0000
    RA            [OA][RSO]      45   24.77    15   13.48   .5657
    OS            [RS][ROA]      34   30.29     4   19.00   .0011
    RS            [SO][ROA]      33   23.12     3   11.81   .0083
The tests presented as differences are tests of the given models against the model [RSO][ROA]. The largest P value is again greater than .05 and belongs to [OA][RSO], so this model is used for the next step. The model [OA][RSO] incorporates the conditional independence of R and A in addition to the conditional independence of S and A obtained from the first step of the selection procedure. The model [OA][RSO] has R and S independent of A given O.

In the next step, we begin with the model [OA][RSO]. The pairs SA and RA have already been identified for conditional independence. There are no pairs that are contained in both terms of the model, so there are no pairs that cannot be considered for possible conditional independence. All pairs other than SA and RA are considered for the possibility that they are conditionally independent. To incorporate conditional independence between O and A into the model [OA][RSO], break the term [OA] into [O][A] and use the model [O][A][RSO], which is equivalent to [A][RSO]. To incorporate conditional independence between O and S into [OA][RSO], break the term [RSO] into [RO][RS] and use the model [OA][RO][RS]. Similar models are used for the factor pairs RS and RO. The necessary tests are given below.

    Factor Pair   Model            df     G2      Differences
                                                   df     G2       P
    —             [OA][RSO]        45   24.77      —      —       —
    OA            [A][RSO]         55   86.45     10   61.68   .0000
    OS            [OA][RS][RO]     47   43.77      2   19.00   .0000
    RS            [OA][SO][RO]     48   36.60      3   11.83   .0082
    RO            [OA][SO][RS]     49   48.57      4   23.80   .0001
None of the P values is greater than .05, so we stop with the model [OA][RSO]. It is interesting to note that this happens to be the same model as achieved by forward selection of multiple effects. Note also that if α ≤ .0082, we would be led to consider R and S as conditionally independent and thus use the model [RO][SO][OA]. This is the other model achieved by stepwise regression. In turn, [RO][SO][OA] would lead to taking S and O as conditionally independent and thus using the model [RO][S][OA]. These results can be seen from the following displays.
    Factor Pair   Model            df     G2      Differences
                                                   df     G2       P
    —             [OA][SO][RO]     48   36.60      —      —       —
    SO            [RO][OA][S]      50   45.79      2    9.19   .0101
    OA            [RO][SO][A]      58   98.29     10   61.69   .0000
    RO            [OA][SO][R]      50   50.59      2   13.99   .0009

    Factor Pair   Model          df     G2      Differences
                                                 df     G2       P
    —             [RO][OA][S]    50   45.79      —      —       —
    OA            [RO][S][A]     60   107.5     10   61.7    .0000
    RO            [OA][R][S]     52   59.78      2   13.99   .0009
The model [RO][S][OA] is one of the minimally adequate models arrived at in our second application of Aitkin’s method. A key feature of Wermuth’s method is that a pair of factors contained in more than one term in the model cannot be considered for conditional independence. Edwards and Havranek (1985) suggest dropping this requirement. Doing so changes the method from a search among decomposable models to a search among graphical models.
General Discussion

We now present a general discussion of Wermuth's method using graph-theoretic ideas. Recall that graphical models are completely determined by their two-factor effects. The procedure begins with the graphical model that includes all two-factor effects, i.e., the saturated model. Every two-factor effect is considered for deletion. The two-factor effect that generates the largest P value is deleted if the P value exceeds some cutoff point α. Whichever effect is dropped, it determines two subsets of factors in which there are effects between every pair of factors. Recall that a subset with all possible two-factor effects is called complete and that if a complete subset is not strictly contained in any other complete subset, it is maximal. A maximal complete subset is a clique.

Wermuth's method starts out with the clique based on all factors. Dropping one two-factor effect generates two cliques that each contain all but one factor. Proceeding inductively, at any stage in Wermuth's method there are two or more cliques available. Two-factor effects that are part of more than one clique are not considered for elimination. It will be seen that the graphical model obtained by eliminating such effects is not decomposable. Among all other two-factor effects, the one with the largest P value is eliminated provided that the P value exceeds α.
Example 6.5.2. With five factors the initial model is [12345]. This can also be thought of as the initial clique.
All two-factor terms are considered for elimination. There are C(5, 2) = 10 of these. If, for example, [12] is dropped, it is easily seen from the graph that there are two new cliques, [1345] and [2345]. Thus, the graphical model is [1345][2345]. Another way of establishing the graphical model is to examine the nine remaining two-factor terms: [13], [14], [15], [34], [35], [45], [23], [24], [25]. Of these nine, the first six terms generate [1345] and the last six terms generate [2345]; thus, the model is [1345][2345]. The P value for dropping [12] is the P value for testing the model [1345][2345] versus [12345].

Having deleted [12], the second stage begins by considering the cliques of the model [1345][2345]. The cliques are [1345] and [2345]. The two-factor effects [34], [35], and [45] are contained in both cliques, so these are not considered for elimination. Among the other two-factor terms, the one with the largest P value is eliminated provided the P value exceeds α. In considering whether to drop, say, [13], the test compares the graphical model with both [12] and [13] eliminated to the graphical model in which only [12] is eliminated. As discussed above, when [12] is eliminated, the model is [1345][2345]. If [13] is also eliminated, the clique [1345] breaks up into two complete subsets [145] and [345].
To see this, observe that [1345] is generated by [13], [14], [15], [34], [35], [45]. If [13] is dropped, the complete subsets are based on [14], [15], [45], and [34], [35], [45]. These generate [145] and [345], respectively. Note that [345] is not a clique because it is not maximal; it is contained in the complete subset [2345]. With both [12] and [13] eliminated, the cliques are [145] and
[2345], so the model is [145][2345]. The test for eliminating [13] is the test of [145][2345] versus [1345][2345].

If [12] and [13] have been eliminated in the first two stages, the model is [145][2345]. The only two-factor effect contained in more than one clique is [45]. All other effects are considered for elimination. Suppose [35] is dropped. The resulting model is [145][234][245]. Now both [45] and [24] are in more than one clique, so neither can be eliminated. The process of testing and dropping two-factor terms continues until there are no tests with a P value greater than α.
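The graph-theoretic bookkeeping in this example can be sketched with the networkx package (assumed available): finding the cliques determined by the remaining two-factor terms, and checking decomposability (chordality). The last two lines anticipate the discussion that follows, in which dropping [24] from [145][234][245] destroys decomposability.

import networkx as nx

# The model after dropping [12]: all other two-factor terms remain.
G = nx.Graph([(1, 3), (1, 4), (1, 5), (3, 4), (3, 5), (4, 5),
              (2, 3), (2, 4), (2, 5)])
print(sorted(map(sorted, nx.find_cliques(G))))   # [[1, 3, 4, 5], [2, 3, 4, 5]]

# The model [145][234][245] reached after also dropping [13] and [35].
H = nx.Graph([(1, 4), (1, 5), (4, 5), (2, 3), (2, 4), (3, 4), (2, 5)])
print(nx.is_chordal(H))    # True: [145][234][245] is decomposable
H.remove_edge(2, 4)        # [24] lies in more than one clique
print(nx.is_chordal(H))    # False: the closed chain [34][45][52][23] now has no chord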
To see that dropping an effect that is in more than one clique destroys decomposability, consider dropping [24] from the model [145][234][245].
From the two cliques involving [24], form the closed chain [34][45][52][23]. This chain of length 4 has one chord, [24]. If [24] is dropped, the model is no longer chordal, hence no longer decomposable. Whenever an effect contained in more than one clique is dropped, this construction of a closed chain of length four with no chords will work. Simply construct a closed chain out of the two vertices in the common effect and two other distinct vertices, one from each clique. Conversely, if a model is decomposable and dropping a two-factor term makes it nondecomposable, then that two-factor term must be in more than one clique. For a model to become nondecomposable, the term eliminated must be the only chord in a closed chain of length four or more. If a closed chain of length four has only one chord, the chord must be in two complete subsets and thus in two cliques. In particular, if a decomposable model contains the closed chain [12][23][34][41] and contains only the one chord [24], then the model includes the complete sets [124] and [234], so [24] is contained in two different complete sets.
Each of the two complete sets must be contained in some clique and these cliques must be distinct. If the sets were in the same clique, then that clique would also have to include the chord [13]. By assumption, this cannot occur. In fact, this argument for closed chains of length four is sufficient for all cases because when a model becomes nondecomposable, the term eliminated must be the only chord in a closed chain of length four. In a decomposable model, any closed chain of length greater than four must have more than one chord because if the length is five or more and there is only one chord, there is a reduced closed chain of length at least four without a chord.

Clearly, Wermuth's method can be generalized to graphical models by removing the restriction that two-factor effects in more than one term are not considered for elimination. At each stage, all two-factor effects that have not been previously deleted can be considered for elimination. The corresponding models are the graphical models determined by the two-factor effects that have not been eliminated. This procedure was apparently first suggested by Edwards and Kreiner (1983). Model selection among graphical models is also discussed by Havranek (1984) and Edwards and Havranek (1985).

With four factors, the difference between Edwards and Kreiner's method and Wermuth's method occurs only at the second stage. This is due to the fact that there is only one graphical but nondecomposable model. As applied to the abortion opinion data, the term [SA] is dropped at the first stage, so at the second stage, Wermuth's method does not allow [RO] to be dropped. The method of Edwards and Kreiner has no such restriction. For models with more than four factors, the difference between the graphical method and the decomposable method can be substantial.

Restricting model search to graphical models seems like a very promising compromise between searching in the very large class of all ANOVA type models and searching in the very restrictive class of decomposable models. However, the difficulty of searching among graphical models should not be underestimated. Good (1975) has shown that for a 10-factor table there are almost 3.5 million graphical models. With these methods as with all others, it is important to obtain several candidate models and to evaluate the models on grounds other than the
values of their test statistics. Also, effects included because of the sampling design cannot be eliminated.
6.6 Use of Model Selection Criteria
The best approach to model selection for log-linear models would be to search through all models and choose, for closer examination, those with high values of Adj. R2, low values of AIC, or extreme values of some other model selection criterion, cf. Section 3.6. Such a procedure would require enormous amounts of computation. One possible way to reduce computations would be to base an initial search on models fitted by weighted least squares (cf. Sections 4.4 and 10.6) rather than models fitted by maximum likelihood.

At the moment, the author's best suggestion is to use a variety of model-fitting methods with a variety of critical (cutoff) values, and for stepwise methods, use a variety of initial models. By using several methods, we hope to find a wide range of possible models. These models can then be evaluated relative to each other with the help of the model selection criteria. Often, there is no need to decide on one particular model; a small number of alternative models may be more informative. If it is necessary to decide on only one model, the model with the lowest AIC (or highest Adj. R2) may not be the best choice. Other considerations such as interpretability and consistency with assumptions may dictate choosing a model with a low AIC but not necessarily the lowest.

Example 6.6.2. The evidence presented so far in the series of examples on the race, sex, opinion, age data strongly suggests to the author that the best model is [RSO][OA]. A formal comparison of all of the models arrived at by the various methods suggests the same conclusion.

    Model            df     G2      A − q     R2    Adj. R2
    [RSO][OA]        45   24.77   −65.23    .80     .72
    [RO][SO][OA]     48   36.60   −59.40    .70     .61
    [R][S][OA]       52   59.78   −44.22    .51     .41
    [RA][S][OA]      47   53.77   −40.23    .56     .42
    [R][SO][OA]      50   50.59   −49.41    .58     .48
    [RO][S][OA]      50   45.79   −54.21    .62     .53
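The criteria in this table can be reproduced with a short sketch. The forms used below are an assumption inferred to match the tabled values (A − q = G2 − 2(df), with R2 and the adjusted R2 computed relative to the complete independence model [R][S][O][A], which has G2 = 121.47 on 62 df); they should be checked against the definitions in Section 3.6.

G2_0, df_0 = 121.47, 62   # complete independence model [R][S][O][A]
models = {"[RSO][OA]": (45, 24.77), "[RO][SO][OA]": (48, 36.60),
          "[R][S][OA]": (52, 59.78), "[RA][S][OA]": (47, 53.77),
          "[R][SO][OA]": (50, 50.59), "[RO][S][OA]": (50, 45.79)}

for name, (df, G2) in models.items():
    A_minus_q = G2 - 2 * df                       # assumed form of A - q
    R2 = 1 - G2 / G2_0                            # assumed form of R2
    adj_R2 = 1 - (G2 / df) / (G2_0 / df_0)        # assumed form of Adj. R2
    print(f"{name:15s} {A_minus_q:8.2f} {R2:5.2f} {adj_R2:5.2f}")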
In a less clear-cut situation, it would be wise to consider many more stepwise procedures than have been illustrated. One worrisome aspect is that very little consideration has been given to three-factor effects other than RSO. The reader can check that none of the other three-factor effects substantially improves the model.
We have decided on one particularly good candidate model: [RSO][OA]. However, the analysis of the data does not end with finding an appropriate ANOVA type model; that is just an important first step. The model indicates that combinations of race and sex are independent of age given opinions about legalized abortions. Thus, we can collapse over some factors to study interrelationships in marginal tables. We can collapse over ages to study the relationships among race, sex, and opinion. We can collapse over race and sex to study the relationship between age and opinion. Cell counts for the collapsed tables need to be examined to study the nature of the relationships. These aspects of the analysis are discussed further in Section 8. It is also necessary to evaluate whether the model really fits the data. To this end, Section 7 contains information on residual analysis and influential observations. Finally, the interpretability of the model should be examined. This model has a very nice interpretation with race and sex independent of age given opinion. Does the interpretation make any sense? As we will see in Section 8, interpretability is probably this model’s weakest point. It can be argued that the appropriate analysis of these data involves explaining opinions on the basis of race, sex, and age. In that case, the methods of Chapter 4 should be used on these data.
6.7 Residuals and Influential Observations
In standard regression analysis, it is common practice to use residuals to check whether assumptions made in the model are valid and to detect the presence of observations that are unusually influential on the fit of the model. Three statistics that are commonly used for these purposes are the leverages, the standardized residuals, and Cook's distances. As seen in Chapter 4, residuals are not of much use when dealing with binary (0-1) data. However, in tables with reasonably large counts, residuals can be useful.

In a regression model Y = Xβ + e, the leverages are the diagonal elements of the projection matrix X(X'X)^{-1}X'. The matrix X is determined by the design of the data, i.e., the values of the predictor variables in the regression. The leverage measures how far the variables of a particular case are from the average of all of the cases. (The projection matrix is often called the "hat" matrix because it changes Y into Ŷ, i.e., Ŷ = X(X'X)^{-1}X'Y. Personally, I find this name distasteful. However, given the liberties I am about to take in naming residuals, I am probably not in a position to make a fuss.)

For log-linear models, there is an analogue of the leverage that depends both on the design and the probability of getting observations in a particular cell. Because the probabilities are unknown, they must be estimated; hence, we use estimated leverages. The estimated leverage of the ith case
is denoted â_ii. Leverages are discussed in detail in Chapter 10.

The log-linear analogue of a standardized residual is

    r_i = (n_i − m̂_i) / √[m̂_i(1 − â_ii)].

For a correct model, a large sample approximation for the distribution of r_i is r_i ∼ N(0, 1). Note that these are very similar to the Pearson residuals discussed earlier:

    r̃_i = (n_i − m̂_i) / √m̂_i ;

they differ only in that the standardized residuals involve the leverages. Actually, the residuals are the values n_i − m̂_i, the difference between the observed and predicted values. As discussed in Section 2.1, these need to be standardized in some way. The Pearson residuals use a crude standardization, dividing by √m̂_i. In Section 10.7, the Pearson residuals are referred to as the crude standardized residuals to distinguish them from the standardized residuals that involve a more sophisticated (and proper) standardization. This terminology was chosen by analogy with regression analysis. Unfortunately, it differs from the terminology used by many authors on log-linear models. Often, the Pearson (crude standardized) residuals are referred to as simply the standardized residuals, and the values defined here as the standardized residuals are referred to as the adjusted residuals.

Finally, Cook's distance for the ith case is a measure of the influence the ith case has on the fit of the model. In the context of fitting log-linear models, we drop each cell from the table, fit the remaining cells, and then estimate an expected cell count for the dropped cell. This is done without reference to any marginal totals that may be fixed by design; hence, it is most appropriate for Poisson sampling. If the model has p degrees of freedom, the analogue of Cook's distance can be written

    C_i = (1/p) Σ_{all cells r} 2 m̂_r log(m̂_r / m̂_r(i)),

where m̂_r(i) is the estimate of the rth cell when the ith cell has been deleted (cf. Section 10.7). For Poisson sampling, these values can be calibrated by comparing them to a (1/p)χ2(p) distribution. If C_i > χ2(.5, p)/p, cell i has a substantial influence. For multinomial or product-multinomial sampling, the degrees of freedom should be reduced by the number of independent multinomials. Because computation of all the m̂_r(i)'s would require separate iterative procedures and be very expensive, it is suggested that a one-step estimate be used. Starting with the values m̂_r, drop a cell and, rather than
fully iterating, use just one step of the Newton-Raphson algorithm to obtain the m̂_r(i)'s, cf. Section 10.7. This definition of the Cook's distances has weaknesses; however, the situation is analogous to that of the Pearson residuals. The definition of the Pearson residuals as standardized residuals is weak, but the Pearson residuals are easy to compute and they contain valuable information. As will be seen below, using standard computer software, the standardized residuals are now easy to compute, so there is little reason to use the Pearson residuals. Similarly, using standard computer software, Cook's distances, as defined here, are easy to compute. Moreover, the author feels that they contain valuable information. Until something better becomes readily available, the author suggests examining these Cook's distances. Anderson (1992) discusses diagnostics for categorical data analysis and Thomas and Cook (1989, 1990) discuss influence for generalized linear models (which include log-linear models, cf. Chapter 9).
6.7.1 Computations
We assume that the reader is capable of fitting an ANOVA model using a regression program. Our log-linear models are ANOVA type models. Good computer programs for doing regression generally provide leverages, standardized residuals, and Cook's distances. Also, they allow for computing weighted regressions.

After fitting the log-linear model, retain the counts for each cell, n_i, and the fitted values for each cell, m̂_i. Now use the regression program to refit the ANOVA model, but use weighted regression with

    weight_i = m̂_i

and a dependent variable

    Y_i = log(m̂_i) + (n_i − m̂_i)/m̂_i .

The leverages given by the program will be the â_ii's. The standardized residuals reported will be r_i/√MSE, where √MSE is the estimate of the standard deviation from the regression. Simply multiply the reported standardized residuals by √MSE to obtain the correct values. The reported values of Cook's distances are C_i/MSE, where C_i is computed using a one-step fit; the reported values need to be multiplied by MSE. For the purpose of comparing the relative magnitudes of the r_i's or C_i's, the multiplication is irrelevant. For comparing standardized residuals to a N(0, 1) distribution or Cook's distances to a χ2 distribution, the multipliers are important. It should also be noted that for large samples and a correct model, the MSE approaches a χ2 distribution divided by its degrees of freedom. The large sample expected value for the MSE is 1.
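The weighted regression trick above translates directly into matrix computations. The Python sketch below (assuming numpy is available) takes a design matrix X for the ANOVA-type log-linear model, the observed counts n, and the fitted values mhat, all of which the reader must supply as numpy arrays, and returns the leverages and standardized residuals; the one-step Cook's distances require refitting with a cell deleted (Section 10.7) and are not shown.

import numpy as np

def loglin_diagnostics(X, n, mhat):
    """Leverages a_ii and standardized residuals r_i for a fitted log-linear model,
    computed via weighted least squares with weights mhat."""
    W = np.diag(mhat)
    # Leverages are the diagonal of the weighted projection ("hat") matrix
    # W^(1/2) X (X' W X)^(-1) X' W^(1/2).
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    A = np.sqrt(W) @ X @ XtWX_inv @ X.T @ np.sqrt(W)
    a = np.diag(A)
    # Standardized residuals r_i = (n_i - mhat_i) / sqrt(mhat_i (1 - a_ii)).
    r = (n - mhat) / np.sqrt(mhat * (1.0 - a))
    return a, r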
FIGURE 6.1. Leverage–Index Plot
Example 6.7.1. We now examine the leverages, standardized residuals, and Cook's distances for the abortion opinion data. In particular, we consider the fit of the model [RSO][OA]. The fitted values are given in Section 8 as Table 6.7. The leverages are plotted against index values in Figure 6.1. The index values are just the values 1, 2, ..., 72 assigned to the cells. The G2 for the model [RSO][OA] has 45 degrees of freedom. There are 72 cells, so there are 72 − 45 = 27 degrees of freedom for the model. The sum of the 72 leverages must add up to 27. The average leverage is 27/72 = .375. The largest leverage is about .64, which is less than twice the average. None of the leverages seems excessively large. Leverages are rarely very large in balanced ANOVA type models.

Figures 6.2 and 6.3 contain a box plot and an index plot of the standardized residuals, respectively. The box plot identifies one very large residual and four other large residuals. In the index plot, only two residuals really stand out. There were 24 nonwhite males between 18 and 25 years of age who support legalized abortion; the estimated value from the model is only 14.52. This cell has a leverage of .222, a standardized residual of 2.82, and a Cook's distance of .085. The other large standardized residual is from the cell for nonwhite males above 65 who support legalized abortion. The observed value in this cell is 4, the fitted value is 10.90, the leverage is .181, the standardized residual is −2.31, and Cook's distance is .044. Considering that there are 72 cells, these values are not remarkably large. In fact, what is remarkable is that most of the standardized residuals are so tightly packed around zero.

FIGURE 6.2. Standardized Residual–Box Plot
FIGURE 6.3. Standardized Residual–Index Plot

Figure 6.4 contains a normal probability plot of the standardized residuals. If the asymptotic theory is valid, the plot should be approximately linear. It is not. Again, the problem seems to be that there are too many cells fitted too well by the model.
FIGURE 6.4. Standardized Residual–Rankit Plot
The MSE for the regression from which the residuals were obtained was .571. Under the sampling schemes considered, the MSE is a lack of fit test statistic. For large samples and a correct model, the MSE should be near 1. For incorrect models, it should be greater than 1. Again, the model fits better than we have any right to expect.

When examining leverages directly, we look for large individual leverages because they indicate that a cell is unlike the other cells. Large leverages are leverages near 1. Leverages also play a role in standardized residuals. Even if none of the leverages is large, they can be important. In particular, the range of leverages in this example is from .115367 to .632986. The factors 1/√(1 − â_ii) are used to change Pearson residuals into standardized residuals. These factors range from 1.063 to 1.651, so a Pearson residual with the largest leverage would have a standardized residual more than 1.5 times larger than the same Pearson residual would have with the smallest leverage. This is not an inconsequential difference. Pearson residuals and standardized residuals can lead to very different conclusions.

Figure 6.5 contains a box plot of the Cook distances. Figure 6.6 contains an index plot of the Cook distances. Most cells have very little influence on the fit. The plots identify a number of relatively influential cases, but none of the distances, as compared to the calibrating values of (1/27)χ2(27), are substantial.
FIGURE 6.5. Cook's Distance–Box Plot
FIGURE 6.6. Cook's Distance–Index Plot
For example, χ2(.5, 27)/27 = 26.3/27 = .974; none of the distances are anywhere near .974. The largest Cook's distance is the .085 for young nonwhite males who support legalized abortion.
6.7.2 Computing Commands
Below are commands that give the diagnostics provided by BMDP-4F.

/ INPUT      FILE = 'C:\LOGLIN\ABORT.DAT'.
             FORMAT = FREE.
             VARIABLES = 5.
/ VARIABLE   NAMES = R, S, A, O, N.
/ TABLE      INDEX = R, S, A, O.
             COUNT = N.
/ STAT       ALL.
/ PRINT      LINE = 79.
             LAMBDA.
             BETA.
             VAR.
             STAN.
             CHISQ.
             LRCHI.
/ FIT        MODEL = RSO, OA.
/ END
Diagnostics are also available from GLIM. The procedure for fitting log-linear models was illustrated in Subsection 3.7.1. The commands for diagnostics are exactly as in Subsection 4.4.2.
6.8 Drawing Conclusions
We have discussed model interpretation, model selection, and model validation. We have selected a model [RSO][OA] that is reasonably small, fits well, and has a simple interpretation. No cells seem to be unduly influential and no cells have outrageously bad fits. The model indicates that, given Opinion, Age is independent of Race and Sex. As discussed in the introduction to this chapter, the model is a description of the data and can be used to predict behavior in a similarly conducted study. It is not a statement about causation. In fact, it makes little sense to imagine that opinions about abortion cause the relative frequencies of Race and Sex to be independent of the frequencies of Age. By itself, the model tells us nothing about the relationships among Race, Sex, and Opinion or about the relationship between Age and Opinion.

Table 6.7 contains the estimated expected cell counts under the model [RSO][OA]. It is a complicated table, but much could be learned from studying it. Fortunately, there are easier ways to get at this information; we can collapse factors and study marginal tables.

As discussed in Section 5.3, the Race-Sex-Opinion relationships can be examined by collapsing over Age and looking at the Race-Sex-Opinion marginal table. This is given in Table 6.8. We see that whites are more likely to be in the survey than nonwhites. White females are a bit more likely to appear than white males. Among nonwhites, males and females are about equally likely. Ignoring the undecideds, the rate of support for legalized abortion is about the same for white and nonwhite males. It is higher for nonwhite females than white females. It is higher for white females than for white males. All of these things can be examined using odds ratios similar to Section 4.6. Formal inference requires standard errors as discussed in Section 10.2.
TABLE 6.7. Estimated Cell Counts under [RSO][OA]

                                                 Age
Race       Sex      Opinion    18-25   26-35   36-45   46-55   56-65     65+
White      Male     Support    105.5   132.1   114.5   76.94   72.81   79.19
                    Oppose     41.87   61.93   54.95   47.98   52.34   61.93
                    Undec.      1.39    2.50    5.27    5.27    5.54    8.04
           Female   Support    141.0   176.7   153.1   102.9   97.38   105.9
                    Oppose     44.61   65.98   58.55   51.11   55.76   65.98
                    Undec.      2.43    4.37    9.22    9.22    9.70   14.07
Nonwhite   Male     Support    14.52   18.19   15.76   10.59   10.03   10.90
                    Oppose      5.61    8.30    7.36    6.43    7.01    8.30
                    Undec.      0.84    1.52    3.20    3.20    3.37    4.88
           Female   Support    19.97   25.01   21.67   14.57   13.79   14.99
                    Oppose      3.91    5.79    5.14    4.48    4.89    5.79
                    Undec.      0.35    0.62    1.32    1.32    1.39    2.01
TABLE 6.8. Race, Sex, Opinion Marginal Table

                               Opinion
Race       Sex        Support   Oppose   Undec.   Totals
White      Male           581      321       28      930
           Female         777      342       49     1168
Nonwhite   Male            80       43       17      140
           Female         110       30        7      147
Totals                   1548      736      101     2385
TABLE 6.9. Opinion, Age Marginal Table

                                   Age
Opinion    18-25   26-35   36-45   46-55   56-65    65+   Totals
Support      281     352     305     205     194    211     1548
Oppose        96     142     126     110     120    142      736
Undec.         5       9      19      19      20     29      101
Totals       382     503     450     334     334    382     2385
To examine the relationship between Opinion and Age, we can collapse over Race and Sex. The marginal totals are given in Table 6.9. We see that some age groups are more likely to respond. There is more support than opposition in each age group. Undecideds increase with age. Also, the amount of support seems to decrease with age. I find myself not really caring about the relative incidences of Race and Sex, but rather am interested in the relative support for legalized abortion among the different groups. This amounts to treating Opinion as a response variable and Race and Sex as explanatory variables; i.e., Race and Sex can be imagined to determine Opinion. In my experience, most contingency tables have one or more factors of particular interest that can be considered as response factors. (Of course, that is only in my experience.) Specific methods for analyzing tables with response factors were examined in Chapter 4. If O were to be treated as a response and R, S, A as factors explaining that response, Asmussen and Edwards (1983) argue that, of the models listed in Example 6.6.2, only [RA][S][OA] is appropriate. Recall from Section 4.6 their contention that log-linear models are appropriate for response factors only if the model allows for collapsing over the response factors onto the explanatory factors, cf. Section 5.3. For example, the model [RSO][OA] can be collapsed over Race and Sex or collapsed over Age but not over the response factor opinion. Therefore, [RSO][OA] is not a reasonable loglinear model for the response factor O. It is illogical for Race and Sex to be independent of Age given the factor Opinion which is supposed to be a response. The response cannot generate independence between explanatory factors! On the other hand, models such as [RA][S][OA] are reasonable. Recall that [RA][S][OA] is one of the minimally adequate models found by Aitkin’s method. However, you should also recall that Aitkin’s method totally missed the important [RSO] interaction.
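The odds comparisons described for Table 6.8 are easy to reproduce; the short sketch below is my own illustration (not from the text), computing the odds of supporting legalized abortion, ignoring undecideds, for each race-sex group and one odds ratio.

# Minimal sketch (not from the book): odds of support vs. oppose from
# Table 6.8, and the odds ratio comparing nonwhite females to white females.
support = {("White", "Male"): 581, ("White", "Female"): 777,
           ("Nonwhite", "Male"): 80, ("Nonwhite", "Female"): 110}
oppose = {("White", "Male"): 321, ("White", "Female"): 342,
          ("Nonwhite", "Male"): 43, ("Nonwhite", "Female"): 30}

odds = {group: support[group] / oppose[group] for group in support}
for group, value in odds.items():
    print(group, round(value, 2))

print(round(odds[("Nonwhite", "Female")] / odds[("White", "Female")], 2))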
6.9 Exercises
Exercise 6.9.1. Reanalyze the auto accident data of Example 4.8.1 without treating any of the factors as a response factor.

Exercise 6.9.2. Reanalyze the abortion attitude data of Exercise 4.8.4 without treating any of the factors as a response.

Exercise 6.9.3. Using our discussion of graphical models and collapsibility in Chapter 5, argue that when the true model is [123][24][456],
the test of marginal association for the u123(ijk) ’s does not ignore any vital information. Is the same conclusion appropriate when the true model is [12][13][23][24][456]? What if the true model is [123][124][456], [123][24][456][15], or [123][24][456][15][36]?
7 Models for Factors with Quantitative Levels
Just as in analysis of variance, if the levels of some factors are associated with quantitative values, these values can be used in the analysis. For example, in a two-factor ANOVA where factors are two kinds of fertilizer and levels are different quantitative amounts of fertilizers, an ANOVA would often examine linear and higher-order contrasts in the main effects and polynomial contrasts in the interaction (e.g., the linear-by-linear contrast). In the analysis of categorical data, it is relatively rare that a factor has truly quantitative values associated with its levels. Often categories are ordered, but are not intrinsically quantitative. This is referred to as having ordinal factor levels or simply as having ordinal data. For example, socioeconomic status is often categorized as low, medium, and high. Surely, it would be advantageous to incorporate information about ordering into the analysis. The problem is in finding an appropriate method. The most commonly used method is to assign scores to the three levels of the factor. These scores can be any known numbers, say, x1 , x2 , and x3 . In fact, the most common method is to take x1 = 1, x2 = 2, and x3 = 3. The analysis then proceeds as if the scores are true quantitative levels. Alternatively, a factor might be income and the levels of the factor could be income intervals, say, less than $20,000, $20,000-$40,000, and more than $40,000. To have quantitative levels, we need one number associated with each level. Such numbers simply do not exist. As a practical matter, we could use the midpoints of the intervals as quantitative levels (scores). We can then develop models based on these approximate scores. Of course, if actual incomes are available for each individual, it would be more suitable to use the incomes in an appropriate regression analysis, cf. Section 4.1.
However, in practice it is not uncommon to encounter factors that have been created by categorizing continuous variables. Continuous variables that have been categorized present some unique difficulties when assigning scores. With categories that are intervals, using the midpoints of the intervals is simple and appealing. In the income example above, the midpoint scores $10,000 and $30,000 may work reasonably well as quantitative levels for the first two income intervals. However there is no natural quantitative level to use for the category “more than $40,000.” Any analysis is dependent on the score that is chosen to represent the third category. Moreover, using midpoints may not be very efficient. Suppose it was known that most people in the less than $20,000 interval had incomes near $20,000. It would be better to use a score that was near $20,000 rather than using the midpoint $10,000. In practice, the method of determining a score will often depend on additional sources of information. If $10,000 is not an appropriate score, there must be additional information leading to that conclusion. Use the same additional information to arrive at an alternative score. In this chapter, we examine models that incorporate the quantitative nature of factor levels. When factor levels are not truly quantitative, the appropriateness of such models will be directly related to the appropriateness of the scores being used. We assume that the quantitative levels (i.e., scores) are known and consider linear models for the log of the expected cell counts (i.e., log-linear models). An alternative approach is to consider the scores as parameters and to estimate the scores. If the scores are parameters, then the models considered are no longer linear models for the log of the expected cell counts. Such models are discussed in Section 3.
7.1 Models for Two-Factor Tables
Consider a 3 × 4 table with quantitative levels x_1, x_2, x_3 and w_1, w_2, w_3, w_4. If the observations in the table were 12 normally distributed values y_ij, an analysis of variance would be appropriate. The model with no interaction is

    y_ij = u + u_1(i) + u_2(j) + e_ij.                                   (1)

In this model, the 2 degrees of freedom for the main effect of Factor 1 can be broken into a linear contrast and a quadratic contrast. The three degrees of freedom for Factor 2 can be broken into a linear contrast, a quadratic contrast, and a cubic contrast. Equivalently, we can rewrite model (1) as a regression model

    y_ij = u + β_1 x_i + β_2 x_i² + η_1 w_j + η_2 w_j² + η_3 w_j³ + e_ij.    (2)

Now consider the full interaction model

    y_ij = u + u_1(i) + u_2(j) + u_12(ij) + e_ij.                        (3)
Note that with only one observation per cell, the terms u_12(ij) are hopelessly confounded with the errors e_ij. Since our goal is only to draw analogies between analysis of variance and log-linear models, we need not concern ourselves with this confounding. To explore the interaction in model (3), we can consider contrasts in the interactions. Using the quantitative factor levels leads to considering things like the linear-by-linear, linear-by-quadratic, and quadratic-by-cubic interaction contrasts. In total, there are (3 − 1)(4 − 1) = 6 of these linearly independent interaction contrasts. Alternatively, we can rewrite model (3) using regression terms in place of the interactions,

    y_ij = u + u_1(i) + u_2(j) + γ_11 x_i w_j + γ_12 x_i w_j² + γ_13 x_i w_j³
           + γ_21 x_i² w_j + γ_22 x_i² w_j² + γ_23 x_i² w_j³ + e_ij.

This suggests a variety of partial interaction models that can be considered. The simplest of these models is

    y_ij = u + u_1(i) + u_2(j) + γ_11 x_i w_j + e_ij.

This is the model that provides for main effects in each factor, but models the interaction as consisting entirely of linear-by-linear interaction.
7.1.1 Log-Linear Models with Two Quantitative Factors
Exactly the same procedures are used when the data consist of counts. Consider an I × J table with quantitative levels x_1, ..., x_I and w_1, ..., w_J. The model of independence is

    log m_ij = u + u_1(i) + u_2(j).

The model of full interaction (the saturated model) is

    log m_ij = u + u_1(i) + u_2(j) + u_12(ij).

We can structure the interaction by considering a linear-by-linear association model

    log m_ij = u + u_1(i) + u_2(j) + γ x_i w_j.                          (4)

The maximum likelihood estimates m̂_ij must satisfy

    m̂_i· = n_i·,   i = 1, ..., I,
    m̂_·j = n_·j,   j = 1, ..., J,

and

    Σ_ij m̂_ij x_i w_j = Σ_ij n_ij x_i w_j.
(This is easily seen from the results of Chapter 10.) Model (4) can be tested against the saturated model using either G² or X². The reduced model of independence can be tested against model (4) using either G², X², or γ̂/SE(γ̂).

Often, model (4) is written in an equivalent form. If observations in all IJ cells are possible, model (4) is equivalent to

    log(m_ij) = u + u_1(i) + u_2(j) + γ(x_i − x̄·)(w_j − w̄·)             (5)

where x̄· = (1/I) Σ_{i=1}^I x_i and w̄· = (1/J) Σ_{j=1}^J w_j.

Frequently, the factor levels are equally spaced. This means that for some constants c and d, x_{i+1} − x_i = c, i = 1, ..., I−1, and w_{j+1} − w_j = d, j = 1, ..., J−1. This special case turns out to be equivalent to taking x_i = i, i = 1, ..., I, and w_j = j, j = 1, ..., J. Model (4) can be rewritten as

    log m_ij = u + u_1(i) + u_2(j) + γ(i)(j).                            (6)

For equally spaced levels, model (6) is a reparametrization of model (4). The parameter γ in (6) is only identical to the parameter γ in (4) when the original scores are x_i = i and w_j = j. In particular, γ in model (6) is equivalent to γdc in model (4).

The scores x_i = i and w_j = j are used frequently when the factor levels are ordinal. Model (4), when applied with scores that are equally spaced, is called the uniform association model. The name is apt because under this model, the odds ratios for consecutive table entries are identical. In particular,

    m_ij m_{i+1 j+1} / (m_{i j+1} m_{i+1 j}) = e^{γdc},   i = 1, ..., I−1,  j = 1, ..., J−1.

To see this, note that

    log[m_ij m_{i+1 j+1} / (m_{i j+1} m_{i+1 j})]
        = log m_ij − log m_{i j+1} − log m_{i+1 j} + log m_{i+1 j+1}
        = u + u_1(i) + u_2(j) + γ x_i w_j
          − u − u_1(i) − u_2(j+1) − γ x_i w_{j+1}
          − u − u_1(i+1) − u_2(j) − γ x_{i+1} w_j
          + u + u_1(i+1) + u_2(j+1) + γ x_{i+1} w_{j+1}
        = γ[x_i w_j − x_i w_{j+1} − x_{i+1} w_j + x_{i+1} w_{j+1}]
        = γ[x_i(w_j − w_{j+1}) − x_{i+1}(w_j − w_{j+1})]
        = γ[x_i(−d) − x_{i+1}(−d)]
        = γd[x_{i+1} − x_i]
        = γdc.

If x_i = i and w_j = j, then d = 1 and c = 1, so consecutive log odds ratios equal γ.
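The constant-odds-ratio property is easy to verify numerically. The sketch below is my own illustration, not an example from the text; the table dimensions, main effects, and γ are arbitrary choices.

# Minimal sketch (hypothetical parameter values): under the uniform
# association model with integer scores, every consecutive 2x2 odds ratio
# equals exp(gamma).
import numpy as np

I, J, gamma = 3, 4, 0.25
u1 = np.array([0.0, 0.3, -0.2])            # arbitrary row effects
u2 = np.array([0.0, 0.1, 0.4, -0.3])       # arbitrary column effects
i = np.arange(1, I + 1)[:, None]           # scores x_i = i
j = np.arange(1, J + 1)[None, :]           # scores w_j = j

m = np.exp(u1[:, None] + u2[None, :] + gamma * i * j)

for a in range(I - 1):
    for b in range(J - 1):
        odds_ratio = m[a, b] * m[a + 1, b + 1] / (m[a, b + 1] * m[a + 1, b])
        assert np.isclose(odds_ratio, np.exp(gamma))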
The case of equal spacings arises very often, so we will always refer to model (4) and its equivalent, model (5), as the model of uniform association. If factor levels are not equally spaced, this terminology is meaningful in the sense that γ is a measure of association that applies uniformly, but any particular odds ratio must also be adjusted for differences in factor levels.

Finally, note that there is much more flexibility available than merely considering the independence model, the uniform association model, and the saturated model. The saturated model is equivalent to

    log m_ij = u + u_1(i) + u_2(j)
               + γ_{1,1} x_i w_j + γ_{1,2} x_i w_j² + ··· + γ_{1,J−1} x_i w_j^{J−1}
               + γ_{2,1} x_i² w_j + γ_{2,2} x_i² w_j² + ··· + γ_{2,J−1} x_i² w_j^{J−1}
               ⋮
               + γ_{I−1,1} x_i^{I−1} w_j + γ_{I−1,2} x_i^{I−1} w_j² + ··· + γ_{I−1,J−1} x_i^{I−1} w_j^{J−1}.
A wide variety of submodels can be fitted. For example, we could consider a second-order interaction model, say

    log m_ij = u + u_1(i) + u_2(j) + γ_{1,1} x_i w_j + γ_{1,2} x_i w_j² + γ_{2,1} x_i² w_j + γ_{2,2} x_i² w_j².

This model is larger than the uniform association model (5), but smaller than the saturated model. The maximum likelihood estimates for this model satisfy the equations

    m̂_i· = n_i·,   i = 1, ..., I,
    m̂_·j = n_·j,   j = 1, ..., J,
    Σ_ij x_i w_j m̂_ij = Σ_ij x_i w_j n_ij,
    Σ_ij x_i w_j² m̂_ij = Σ_ij x_i w_j² n_ij,
    Σ_ij x_i² w_j m̂_ij = Σ_ij x_i² w_j n_ij,
    Σ_ij x_i² w_j² m̂_ij = Σ_ij x_i² w_j² n_ij.

7.1.2 Models with One Quantitative Factor
Suppose that only the first factor in an I × J table has quantitative levels. Denote these levels as x_1, ..., x_I. We still want to consider models that are more general (larger) than the model of independence, but smaller than the saturated model. A frequently used model in this situation is the column effects model

    log m_ij = u + u_1(i) + u_2(j) + τ_j x_i.                            (7)
This model implies that there is a linear effect on log m_ij from the rows of the table, but that the slope (τ) of this linear effect changes from column to column. In the case of I populations and J = 2 responses, model (7) is also a simple linear logistic regression model, cf. Section 2.6.

Although model (7) is appropriate when only one factor is quantitative, it can also be used when both factors are quantitative. Note that model (4) is a reduced model relative to model (7) in which the additional structure τ_j = γ w_j is imposed. Thus, the uniform association model assumes that the slopes change linearly with the columns. Model (7) is more general in that it allows arbitrary changes in the slopes. In particular, model (7) is equivalent to the model

    log(m_ij) = u + u_1(i) + u_2(j) + γ_11 x_i w_j + γ_12 x_i w_j² + ··· + γ_{1,J−1} x_i w_j^{J−1}.

Maximum likelihood estimates for model (7) must satisfy

    m̂_i· = n_i·,   i = 1, ..., I,
    m̂_·j = n_·j,   j = 1, ..., J,
    Σ_{i=1}^I m̂_ij x_i = Σ_{i=1}^I n_ij x_i,   j = 1, ..., J.

Testing is performed in the usual way. More generally, we can consider any submodel of the saturated model. The saturated model can be reparametrized as

    log m_ij = u + u_1(i) + u_2(j) + τ_{1,j} x_i + τ_{2,j} x_i² + ··· + τ_{I−1,j} x_i^{I−1}.

For J = 2, this is the (I − 1)-degree polynomial logistic regression model

    log(m_i1 / m_i2) = β_0 + β_1 x_i + β_2 x_i² + ··· + β_{I−1} x_i^{I−1}.

Of course, if the second factor in the table is quantitative rather than the first factor, we can write the saturated model as

    log m_ij = u + u_1(i) + u_2(j) + η_{i,1} w_j + ··· + η_{i,J−1} w_j^{J−1}

and consider reduced models. The row effects model

    log m_ij = u + u_1(i) + u_2(j) + η_i w_j

is probably the most frequently used of these. Although it seems to be done infrequently, there is no mathematical reason not to fit models using regression in place of main effects. For example, a reduced model relative to the uniform association model is

    log m_ij = u + β x_i + η w_j + γ x_i w_j.
This model also implies uniform association (in terms of the odds ratios when levels are equally spaced), but imposes additional constraints.

Example 7.1.1. A sample of men between the ages of 40 and 59 was taken from the city of Framingham, Massachusetts. The men were cross-classified by their serum cholesterol and systolic blood pressure. We restrict attention to a subsample that did not develop coronary heart disease during a 6-year follow-up period. The data are given below.

                                Blood Pressure (in mm Hg)
Cholesterol (in mg/100 cc)    <127   127-146   147-166   167+   Totals
<200                           117       121        47     22      307
200-219                         85        98        43     20      246
220-259                        119       209        68     43      439
≥260                            67        99        46     33      245
Totals                         388       527       204    118     1237
Consider four models:

Abbreviation    Model
[C][P][C1]      log(m_ij) = u + u_C(i) + u_P(j) + C_1i (j)
[C][P][P1]      log(m_ij) = u + u_C(i) + u_P(j) + P_1j (i)
[C][P][γ]       log(m_ij) = u + u_C(i) + u_P(j) + γ(i)(j)
[C][P]          log(m_ij) = u + u_C(i) + u_P(j)

These are the row effects, column effects, uniform association, and independence models, respectively. The fits for the models relative to the saturated model are

Model         df       G²      A − q
[C][P][C1]     6    7.404     −4.596
[C][P][P1]     6    5.534     −6.466
[C][P][γ]      8    7.429     −8.571
[C][P]         9    20.38       2.38
The best fitting model is log(m_ij) = u + u_C(i) + u_P(j) + γ(i)(j). Using the side conditions u_C(1) = u_P(1) = 0, the parameter estimates and standard errors are

Parameter    Estimate    Standard Error
u              4.614          .0699
u_C(1)         0              —
u_C(2)        −0.4253         .1015
u_C(3)        −0.0589         .1363
u_C(4)        −0.8645         .1985
u_P(1)         0              —
u_P(2)         0.0516         .0965
u_P(3)        −1.164          .1698
u_P(4)        −1.991          .2522
γ              0.1044         .0293
The estimated cell counts are

          Estimated Cell Counts: Uniform Association
                         Blood Pressure
Cholesterol    <127   127-146   147-166   167+
<200          112.0     131.0      43.1   20.9
200-219        81.3     105.4      38.5   20.8
220-259       130.1     187.4      76.0   45.5
≥260           64.5     103.2      46.4   30.8

These are obtained from the uniform association model, so the odds ratios for consecutive table entries are identical. For example, the odds of blood pressure < 127 relative to blood pressure 127-146 for men with cholesterol < 200 are 1.11 times the similar odds for men with cholesterol of 200-219; up to roundoff error,

    (112.0/131.0) / (81.3/105.4) = 112.0(105.4) / [81.3(131.0)] = e^.1044 = 1.11

where .1044 = γ̂. Similarly, the odds of blood pressure 127-146 relative to blood pressure 147-166 for men with cholesterol < 200 are 1.11 times the odds for men with cholesterol of 200-219:

    131.0(38.5) / [105.4(43.1)] = e^.1044 = 1.11.

Also, the odds of blood pressure < 127 relative to blood pressure 127-146 for men with cholesterol 200-219 are 1.11 times the odds for men with cholesterol of 220-259:

    81.3(187.4) / [130.1(105.4)] = e^.1044 = 1.11.

For consecutive categories, the odds of lower blood pressure are 1.11 times greater with lower blood cholesterol than with higher blood cholesterol.
The asymptotic 95% confidence interval for γ has end points .1044 ± 1.96(.0293). The interval is (.047, .162). The corresponding interval for the odds ratio is (e^.047, e^.162) or (1.05, 1.18). Thus, for consecutive categories, we are 95% confident that the odds of lower blood pressure are between 1.05 and 1.18 times greater with lower cholesterol than with higher cholesterol.

Of course, we can also compare nonconsecutive categories. For categories that are one step away from consecutive, the odds of lower blood pressure are 1.23 = e^{2(.1044)} times greater with lower cholesterol than with higher cholesterol. For example, the odds of having blood pressure < 127 compared to having blood pressure of 147-166 with cholesterol < 200 are 1.23 = e^{2(.1044)} times those for cholesterol 200-219. To check this, observe that

    112.0(38.5) / [81.3(43.1)] = 1.23.

Similarly, the odds of having blood pressure < 127 compared to having blood pressure of 127-146 with cholesterol < 200 are 1.23 times those for cholesterol 220-259. Extending this leads to observing that the odds of having blood pressure < 127 compared to having blood pressure of 167+ with cholesterol < 200 are 2.559 = e^{9(.1044)} times those for cholesterol ≥ 260.

It is of interest to compare the estimated cell counts obtained under uniform association with the estimated cell counts under independence. The estimated cell counts under independence are

            Estimated Cell Counts: Independence
                         Blood Pressure
Cholesterol    <127   127-146   147-166   167+
<200           96.3     130.8      50.6   29.3
200-219        77.2     104.8      40.6   23.7
220-259       137.7     187.0      72.4   41.9
≥260          76.85     104.4      40.4   23.4

With γ > 0, the uniform association model increases the estimated cell counts (relative to independence) for cells with (a) high cholesterol and high blood pressure and (b) low cholesterol and low blood pressure. Also, the uniform association model decreases the estimated cell counts for cells with (a) high cholesterol and low blood pressure and (b) low cholesterol and high blood pressure.
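A fit of the uniform association model to these data can be sketched in software. The code below is my own illustration, not from the text; the long-format layout, the integer scores 1-4 for both factors, and the use of the statsmodels GLM routine are assumptions, and the printed estimate should be close to (though not necessarily identical to) the γ̂ = .1044 reported above.

# Minimal sketch (assumptions noted above): Poisson maximum likelihood fit
# of log(m_ij) = u + u_C(i) + u_P(j) + gamma*(i)(j) for the cholesterol data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

counts = np.array([[117, 121, 47, 22],
                   [ 85,  98, 43, 20],
                   [119, 209, 68, 43],
                   [ 67,  99, 46, 33]])

rows = [{"chol": i + 1, "bp": j + 1, "n": counts[i, j]}
        for i in range(4) for j in range(4)]
df = pd.DataFrame(rows)
df["ij"] = df["chol"] * df["bp"]     # linear-by-linear score

fit = smf.glm("n ~ C(chol) + C(bp) + ij", data=df,
              family=sm.families.Poisson()).fit()
print(fit.params["ij"])              # estimate of gamma (about .10)
print(fit.deviance)                  # G^2 relative to the saturated model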
7.2 Higher-Dimensional Tables
The same basic methods used to incorporate quantitative levels into twofactor models can also be used in higher dimensions. For example, consider
an I × J × K table with quantitative levels x_1, ..., x_I, w_1, ..., w_J, and v_1, ..., v_K. Assuming no three-factor interaction, there are three types of models that are particularly useful. The homogeneous uniform association model is a model in which, given a level for any factor, the remaining two factors display a uniform association. This model is

    log(m_ijk) = u + u_1(i) + u_2(j) + u_3(k) + β_1 x_i w_j + β_2 x_i v_k + β_3 w_j v_k.

Note that if x_i = i and w_j = j,

    log[m_ijk m_{i+1 j+1 k} / (m_{i+1 j k} m_{i j+1 k})]
        = u + u_1(i) + u_2(j) + u_3(k) + β_1 x_i w_j + β_2 x_i v_k + β_3 w_j v_k
          + u + u_1(i+1) + u_2(j+1) + u_3(k) + β_1 x_{i+1} w_{j+1} + β_2 x_{i+1} v_k + β_3 w_{j+1} v_k
          − u − u_1(i+1) − u_2(j) − u_3(k) − β_1 x_{i+1} w_j − β_2 x_{i+1} v_k − β_3 w_j v_k
          − u − u_1(i) − u_2(j+1) − u_3(k) − β_1 x_i w_{j+1} − β_2 x_i v_k − β_3 w_{j+1} v_k
        = β_1 (x_i w_j − x_{i+1} w_j − x_i w_{j+1} + x_{i+1} w_{j+1})
        = β_1.

Thus, for any level of k, the log odds ratio for consecutive table entries equals β_1. Similarly, for j fixed, consecutive log odds ratios equal β_2; and for i fixed, consecutive log odds ratios equal β_3.

If Factor 3 does not have quantitative levels or if we merely wish to ignore the quantitative nature of the levels of Factor 3, we can write

    log(m_ijk) = u + u_1(i) + u_2(j) + u_3(k) + τ_1k x_i + τ_2k w_j + β x_i w_j.

For each level of k, this model gives uniform associations. For a fixed level of i or a fixed level of j, odds ratios need not display uniform association. If neither Factor 2 nor Factor 3 has quantitative levels, or if we wish to ignore their quantitative nature, we can use the model

    log m_ijk = u + u_1(i) + u_2(j) + u_3(k) + u_23(jk) + τ_2j x_i + τ_3k x_i.

As before, models can be generalized by including powers of the x_i, w_j, and v_k scores. We can also model the three-factor interaction. The models

    log m_ijk = u + u_1(i) + u_2(j) + u_3(k) + u_12(ij) + u_13(ik) + u_23(jk) + β x_i w_j v_k

and

    log m_ijk = u + u_1(i) + u_2(j) + u_3(k) + β_1 w_j v_k + β_2 x_i v_k + β_3 x_i w_j + γ x_i w_j v_k

deal with the three-factor interaction while modeling two-factor interactions in alternative ways. Both of these models could be described as heterogeneous uniform association models.
Example 7.2.1. In Chapter 6, we found that for the race, sex, opinion, age data, the model [RSO][OA] fits well. The ages are quantitative levels. We consider whether using the quantitative nature of this factor leads to a more succinct model. The age categories are 18-25, 26-35, 36-45, 46-55, 56-65, and 66+. For lack of a better idea, the category scores were taken as 1, 2, 3, 4, 5, and 6. Since the first and last categories are different from the other four, the use of the scores 1 and 6 is particularly open to question. The following models were considered:

Abbreviation          Model
[RSO][OA]             log(m_hijk) = u_RSO(hij) + u_OA(jk)
[RSO][A][O1]          log(m_hijk) = u_RSO(hij) + u_A(k) + O_1j k
[RSO][A][O1][O2]      log(m_hijk) = u_RSO(hij) + u_A(k) + O_1j k + O_2j k²

Both of the new models are reduced models relative to [RSO][OA]. ([RSO][OA] is equivalent to log(m_hijk) = u_RSO(hij) + u_A(k) + O_1j k + O_2j k² + O_3j k³ + O_4j k⁴ + O_5j k⁵.) To compare models, we need the following statistics:

Model                 df      G²
[RSO][OA]             45    24.77
[RSO][A][O1][O2]      51    26.99
[RSO][A][O1]          53    29.33
Comparing [RSO][A][O1 ] versus [RSO][OA] gives G2 = 29.33−24.77 = 4.56 with degrees of freedom 53 − 45 = 8. The G2 value is not significant. Similarly, [RSO][A][O1 ][O2 ] is an adequate model relative to [RSO][OA]. The test for [O2 ] has G2 = 29.33 − 27.99 = 1.34 on 2 df , which is not significant. The model with only [O1 ] fits the data well. A primary difficulty with using quantitative factors is the necessity of assigning the factor scores. One way to avoid this problem is to estimate the factor scores. Methods for doing this are discussed in Section 3.
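The reduced-versus-full comparison just described is an ordinary likelihood ratio test. The sketch below is my own illustration, not from the text; it simply reproduces the arithmetic for the [RSO][A][O1] versus [RSO][OA] comparison and attaches a chi-squared p-value.

# Minimal sketch (not from the book): likelihood ratio comparison of the
# reduced model [RSO][A][O1] against [RSO][OA] using the values above.
from scipy.stats import chi2

g2_full, df_full = 24.77, 45          # [RSO][OA]
g2_reduced, df_reduced = 29.33, 53    # [RSO][A][O1]

g2_diff = g2_reduced - g2_full        # 4.56
df_diff = df_reduced - df_full        # 8
p_value = chi2.sf(g2_diff, df_diff)   # large p-value: reduced model is adequate
print(g2_diff, df_diff, round(p_value, 3))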
7.2.1 Computing Commands
Models with quantitative factors can be fit easily using several computer packages, e.g., SPLUS, GLIM, GENSTAT, and SAS PROC GENMOD. For example, the model

    log(m_hijk) = u_RSO(hij) + u_A(k) + O_1j k + O_2j k²

can be fitted using SAS PROC GENMOD as given below.

options ps=60 ls=72 nodate;
data abort;
   infile 'abort.dat';
   input R S A O N;
   A1 = A;
   A2 = A * A;
proc genmod data=abort;
   class R S O A;
   model N = R*S*O A O*A1 O*A2 / link=log dist=poisson;
run;

The key difference here from analysis of variance type models is that the age scores A1 and A2 were not specified as grouping (class) variables in the "class" command. The terms O_1j k and O_2j k² are really interactions between the factor O and the predictor variables age and age squared. In the model statement, they are simply specified as interactions.
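For readers without SAS, a rough analog of this fit can be sketched in Python. The data frame construction, the file layout read from abort.dat, and the use of statsmodels are my own assumptions for illustration, not part of the text.

# Minimal sketch (assumptions noted above): the same log-linear model with
# R, S, O, A treated as factors and the age score and its square as numeric
# covariates interacted with O.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("abort.dat", sep=r"\s+", names=["R", "S", "A", "O", "N"])
df["A1"] = df["A"]            # age score
df["A2"] = df["A"] ** 2       # age score squared

model = smf.glm("N ~ C(R)*C(S)*C(O) + C(A) + C(O):A1 + C(O):A2",
                data=df, family=sm.families.Poisson())
print(model.fit().summary())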
7.3 Unknown Factor Scores
The next step in generalizing the models of Sections 1 and 2 is to allow the factor scores to be unknown. In place of model (7.1.4), we assume the model

    log(m_ij) = µ + α_i + β_j + γ ν_i ω_j                                (1)

where ν_i, i = 1, ..., I, and ω_j, j = 1, ..., J, are unknown parameters with

    Σ_{i=1}^I ν_i² = 1 = Σ_{j=1}^J ω_j²                                  (2)
and α· = β· = ν· = ω· = 0. The side conditions are imposed because the model is no longer log-linear. In general, for nonlinear models, the exact parametrization can be important in determining the properties of the model. The side conditions are necessary to have a well-defined parametrization. In particular, without condition (2), the parameter γ would not be well defined.

Model (1) is not log-linear, so the theoretical results used to justify fitting log-linear models do not apply. A separate theoretical development is required. Moreover, computer programs specifically developed for fitting log-linear models by maximum likelihood cannot be used to obtain the maximum likelihood fit of the model. Chuang (1983) used iteratively reweighted nonlinear least squares to obtain maximum likelihood estimates for models with unknown factor scores.

In addition to model (1), it is of interest to examine reduced models. In particular, the submodel of (1) with column main effects that are linear in the unknown factor scores is

    log(m_ij) = µ + α_i + λ ω_j + γ ν_i ω_j                              (3)
where α· = ν· = ω· = 0 and Σ_{i=1}^I ν_i² = Σ_{j=1}^J ω_j² = 1.
Model (3) can also be written as

    log(m_ij) = µ + α_i + ν_i ω_j                                        (4)

with α· = ω· = 0 and Σ_{j=1}^J ω_j² = 1.
Here, ν_i is equivalent to λ + γν_i in model (3), which is why the conditions ν· = 0 and Σ_{i=1}^I ν_i² = 1 are dropped. We can take model (4) one step further and write it as

    log(m_ij) = α_0i + α_1i ω_j

where ω· = 0 and Σ_{j=1}^J ω_j² = 1.
The new parametrization is related to model (4) by α_0i ≡ µ + α_i and α_1i = ν_i. This version of the linear column effects model has the nice interpretation of fitting separate lines in the unknown factor scores for each level of i.

Similarly, if row effects are linear in the factor scores, the appropriate model is

    log(m_ij) = µ + λν_i + β_j + γν_i ω_j

with β· = ν· = ω· = 0 and Σ_{i=1}^I ν_i² = 1 = Σ_{j=1}^J ω_j².
This is equivalent to

    log(m_ij) = µ + β_j + ν_i ω_j

and also to the separate lines model

    log(m_ij) = β_0j + β_1j ν_i                                          (5)

where ν· = 0 and Σ_{i=1}^I ν_i² = 1 in both models and β· = 0 in model (5).
If both main effects are linear, the model is

    log(m_ij) = µ + λ_1 ν_i + λ_2 ω_j + γν_i ω_j

with ν· = ω· = 0 and Σ_{i=1}^I ν_i² = Σ_{j=1}^J ω_j² = 1.
An equivalent model is

    log(m_ij) = µ + ν_i + ω_j + γν_i ω_j                                 (6)
where ν· = ω· = 0. The relationship between the models is based on ν_i being equivalent to λ_1 ν_i, ω_j being equivalent to λ_2 ω_j, and γ being equivalent to γ/λ_1 λ_2.

When the factor categories are known to be ordered, a corresponding order can be imposed on the estimated factor scores. For example, models (1), (4), (5), and (6) can be fitted subject to the condition that ν_1 ≤ ν_2 ≤ ··· ≤ ν_I. Similar orderings can be imposed on the column scores. Unfortunately, such constraints cause complications in the numerical procedures required to fit the models. In practice, it seems to be more common to fit the models without imposing order conditions on the scores. One can then verify whether the data are consistent with the a priori ordering.

Model (1) was first proposed by Fienberg (1968). Later, Goodman (1979, 1981), Anderson (1980), and Chuang (1983) extended the use of estimated factor scores. Johnson and Graybill (1972) proposed a model similar to (1) for standard analysis of variance. They built on earlier results in analysis of variance that are also of interest for log-linear models. Tukey (1949) proposed a 1 degree of freedom test for nonadditivity in a two-way analysis of variance. Mandel (1961, 1971) extended Tukey's results by considering tests for more general models and presented a justification of the models based on unknown factor scores. Mandel's models differ from those considered by Johnson and Graybill in that they use the row and column effects in place of the unknown factor scores. These models and their justification apply equally well to log-linear models. Mandel's models are

    log(m_ij) = µ + α_i + β_j + δ_i β_j,     α· = β· = 0,                (7)

and

    log(m_ij) = µ + α_i + β_j + φ_j α_i,     α· = β· = 0.                (8)

The Tukey model is

    log(m_ij) = µ + α_i + β_j + γα_i β_j,    α· = β· = 0.                (9)
7. Models for Factors with Quantitative Levels
It is easily seen that model (7) is equivalent to the linear row effects model (4). By equating βj = ωj and δi = νi − 1 and substituting into (4), we see that model (4) can be written as model (7). Conversely, if model (7) holds, write ωj = βj βj2 βj2 νi = (1 + δi )
and
to see that model (4) holds. Similarly, models (5) and (8) are equivalent and models (6) and (9) are identical. This establishes the justification for examining models (7), (8), and (9). They are equivalent to models with interesting interpretations in terms of underlying unknown factor scores. If these models are appropriate and necessary, the maximum likelihood fits should be obtained. Maximum likelihood estimation and (generalized) likelihood ratio tests involving any of the models for unknown factor scores require specialized methods for fitting the log-nonlinear models. Standard programs for fitting log-linear models are not appropriate. However, by analogy with the two-stage fitting procedure commonly used in analysis of variance, a simple method can be derived for evaluating whether these models are necessary. All of the models considered contain the model of complete independence log(mij ) = µ + αi + βj
(10)
as a submodel. This may not be obvious in models (4), (5), and (6), but it is in their equivalent versions (7), (8), and (9). Models (7), (8), and (9) can be tested against model (10) in a very simple way. First, write τij = µ + αi + βj and note that if model (10) is true, (7) is equivalent to log(mij ) = µ + αi + βj + δi τij ,
(11)
log(mij ) = µ + αi + βj + φj τij ,
(12)
(8) is equivalent to
and (9) is equivalent to log(mij ) = µ + αi + βj + γ(τij )2 .
(13)
7.3 Unknown Factor Scores
273
Models (11), (12), and (13) would be log-linear if the τij ’s were known. Fit model (10) by maximum likelihood to obtain τˆij . Substitute τˆij for τij in models (11), (12), and (13) so that they are log-linear in the other parameters. Fit the models based on τˆij using standard log-linear methods and test them against model (10) in the usual way. If model (10) is true, these tests have asymptotic chi-squared distributions. For linear models, the validity of tests based on this two-stage fitting procedure was established by Milliken and Graybill (1970) and Rao (1965). For log-linear models, a corresponding result is given by Christensen and Utts (1992). These models and methods can also be applied to higher-dimensional tables and to logit models. Example 7.4.1 gives the details for fitting a logit model. Example 7.3.1. Wing (1962), Haberman (1974b), and Fienberg (1980) have considered data on the relationship between length of hospitalization and frequency of visits for 132 long-term schizophrenic patients. Length of hospitalization was categorized as over 2 years but under 10, (2, 10), over 10 years but under 20, (10, 20), and over 20 years, 20+. Frequency of visits were regular, irregular (no home visits, hospital visits less than once a month), and never. The data are given in Table 7.1. TABLE 7.1. Schizophrenic Data Visitation Frequency (i) Regular Irregular Never
Length of Hospitalization (j) in years (2, 10) (10, 20) 20+ 43 6 9
16 11 18
3 10 16
Fitting model (10) in the usual way and using the two-stage fitting procedure described above for the other models yields the lack of fit statistics given below. Model (10) (11) (12) (13)
df 4 2 2 3
Two-Stage G2 38.35 1.21 11.18 14.76
Except for model (10), these G2 ’s are not likelihood ratio lack of fit statistics for the models. However, the G2 for model (10) can be subtracted from the other G2 ’s to obtain valid asymptotic χ2 tests for model (10) versus the other models. The test statistics are as follows:
274
7. Models for Factors with Quantitative Levels
Model (11) (12) (13)
df 2 2 1
Two-Stage G2 37.14 27.17 23.59
All of the models fit better than (10), especially model (11). One set of parameter estimates are given below for the two-stage fit of model (11). Recall that parameter estimates are not uniquely defined. Parameter µ α1 α2 α3 β1 β2 β3 δ1 δ2 δ3
Estimate −12.91 0.000 14.65 15.23 0.000 0.4727 0.4977 5.034 0.05197 0.000
Observe that δˆ1 > δˆ2 > δˆ3 = 0 with δˆ1 much larger than the others. To ˆ we need to examine their draw conclusions about the meaning of the δ’s, multipliers in model (11), the τˆij ’s. The τˆij ’s are
j 1 2 3
1 3.305 2.473 2.939
i 2 3.051 2.220 2.685
3 2.612 1.780 2.246
ˆ as we move In each row, the τˆij ’s are decreasing. With non-negative δ’s, to the right in each row, the fitted counts decrease. For patients who are visited irregularly or never, the model indicates little decrease over time due to the interaction because the δˆ values are near zero. However, δˆ is large for row 1, so, according to the fitted model, the number of patients who have regular visits decreases dramatically over time.
Exercise 7.1. Show that models (7) and (11) are equivalent. Show that models (9) and (13) are equivalent.
7.4 Logit Models
7.4
275
Logit Models
Results analogous to Section 3 apply to models for the log odds. The example in this section involves a logit model with factors that are assumed to have unknown quantitative category scores. Example 7.4.1. Rosenberg (1962) presents data on the relationships among Religion, Father’s Educational Level, and Self-Esteem. The data are given in Table 7.2. Self-Esteem is considered the response.
TABLE 7.2. Data of Rosenberg (1962)

                              Father's Educational Level
              Self-     8th or   Some    HS      Some    Coll    Post
Religion      Esteem    less     HS      Grad    Coll    Grad    Coll
Catholic      High       245     330     388     100      77      51
              Low        115     152     153      40      37      19
Jewish        High        28      89     102      67      87      62
              Low         11      37      35      18      12      13
Protestant    High       125     234     233     109     197      90
              Low         68      91     173      47      82      32
Chuang (1983) presents a maximum likelihood analysis of Rosenberg’s data that includes logit versions of Mandel’s models (7.3.7) and (7.3.8) and the Tukey model (7.3.9). The expected cell counts are mijk , where i denotes religion, j denotes educational level, and k denotes self-esteem. A logit model imposes structure on τij ≡ log(mij1 /mij2 ) . The logit versions of Mandel’s models are τij = µ + αi + βj + δi βj
(1)
τij = µ + αi + βj + φj αi .
(2)
and The logit version of the Tukey model is τij = µ + αi + βj + γαi βj .
(3)
τXij = µ + αi + βj .
(4)
The additive model is
276
7. Models for Factors with Quantitative Levels
In Section 3, the models (7.3.11), (7.3.12), and (7.3.13) were used to simplify the two-stage fitting process. Their logit model analogues are τij τij
= µ + αi + βj + δi τXij ,
(5)
= µ + αi + βj + φj τXij ,
(6)
and τij = µ + αi + βj + γ(τXij )2 .
(7)
As in Section 3, these models are equivalent to models (1), (2), and (3), respectively. To fit (5), (6), and (7) using maximum likelihood requires the use of something other than a standard log-linear model or logistic model computer program. Chuang (1983) suggests modifying a nonlinear least squares program. The alternative two-stage procedure discussed in Section 3 is easily implemented with standard software. Begin by fitting model (4) to obtain the τˆXij ’s. The only nonlinear aspect to models (5), (6), and (7) is the presence of the τXij parameters, so if these parameters were known, there would be no difficulty in obtaining fits for the models. In the two-stage procedure, the estimates τˆXij are substituted for the parameters. The resulting linearized models are fitted in the usual way with standard software. Test statistics for comparing the models to model (4) are also computed in the usual way. Table 7.3 presents the results of fitting models (5), (6), and (7) by both maximum likelihood and the two-stage procedure. The G2 values reported in Table 7.3 for the two-stage fits are not directly applicable because they are statistics for testing the models against the saturated model. The theoretical justification given in Christensen and Utts (1992) applies only to testing models (5), (6), and (7) against the additive model (4). Although the G2 ’s reported in Table 7.3 do not have a sound theoretical basis as test statistics, they are sometimes a valuable data analytic tool. This is not unreasonable because they are one-to-one functions of test statistics that have a sound basis. TABLE 7.3. Model Fits Model (5) (6) (7) (4)
df 8 5 9 10
MLE G2 12.76 12.07 25.58 26.39
Two-Stage G2 16.34 13.25 26.34 —
The results of testing the models with nonadditivity against the additive model are given in Table 7.4. In this example, the two-stage tests appear to be a little less powerful than the generalized likelihood ratio tests, but
7.5 Exercises
277
the qualitative conclusions about the best-fitting model are identical for the two methods. Model (5), i.e., model (1), appears to be the best-fitting model. Recall from Section 3 that this is the model that, for each religion, fits a line in the unknown factor scores associated with Father’s Educational Level. TABLE 7.4. Tests of Nonadditivity Model
df
MLE G2
Two-Stage G2
(5) (6) (7)
2 5 1
13.63 14.32 0.81
10.05 13.14 0.05
If one of the log-nonlinear models is to be used in further work, an exact maximum likelihood fit of the model should be used. Nonetheless, the two-stage fitting procedure provides a simple yet valid diagnostic tool for checking whether Mandel’s models and the Tukey model require more investigation.
7.5
Exercises
Exercise 7.5.1. Reanalyze the Intelligence versus Clothing table of Exercise 2.6.3 using the methods of Section 1. Note that this ignores the potentially complicating factor Standard and the complex sampling scheme. Exercise 7.5.2.
For the model
log(mij ) = u + u1(i) + u2(j) + γ1 i j + γ2 i2 j , find the log odds ratio log[mij mi+1 j+1 mi+1 j mi j+1 ] in terms of the model parameters. Exercise 7.5.3. Duncan, Schuman, and Duncan (1973) and Duncan and McRae (1979) present data on evaluations made in 1959 and 1971 of the performance of radio and TV networks. The data are given in Table 7.5. Use the methods of Section 2 to analyze these data. Exercise 7.5.4. log odds ratios
Assuming the use of consecutive integer scores, find the log[mijk mi+1 j+1 k mi+1 jk mi j+1 k ]
278
7. Models for Factors with Quantitative Levels TABLE 7.5. Radio and Television Network Performance. Year 1971 1959
and
Respondent’s Race White Black White Black
Performance of Networks Poor Fair Good 158 24 54 4
636 144 253 23
600 224 325 81
log[mijk mi+1 j k+1 mi+1 jk mi j k+1 ]
in terms of the model parameters for the two heterogeneous uniform association models of Section 2. Exercise 7.5.5. Use the methods of Section 3 to analyze the Intelligence – Clothing table of Exercise 2.6.3.
8 Fixed and Random Zeros
Not infrequently, one encounters a table in which a number of the cell counts are 0s. These cells sometimes cause problems when fitting log-linear models. Recall that in our discussion of multiple logistic regression in Section 4.1, we had a 200×2 table that was riddled with zero counts. Except for the fact that some asymptotic results did not hold, the zeros caused no problems. Cells with zero counts merely have the potential to cause problems. Cells with zero counts are classified in two ways: fixed and random. Fixed zeros are cells in which it is impossible to observe counts. Such cells must always be zero. Random zeros are cells that happen to have zero counts, but where it is possible to have positive counts. Tables with fixed zeros are called incomplete tables.
8.1 Fixed Zeros
Example 8.1.1. Brunswick (1971) reports data on the health concerns of teenagers. These data have also been examined by Grizzle and Williams (1972) and Fienberg (1980). The data are given in Table 8.1. The two zeros in the table are fixed. It is physiologically impossible for males (of whatever age) to have menstrual difficulties. (Technically, I suppose the real issue here is whether males worry about menstrual difficulties. I suppose somewhere, sometime, some teenage male has worried about them, but one must admit that these are very nearly fixed zeros. We will treat them as such.)
TABLE 8.1. Health Concerns of Teenagers

                                    Health Concerns (k)
Sex (i)    Age (j)    Sex, Reproduction    Menstrual Problems    How Healthy I Am    Nothing
Male       12-15              4                     0                    42             57
           16-17              2                     0                     7             20
Female     12-15              9                     4                    19             71
           16-17              7                     8                    10             31
The solution to dealing with fixed zeros is to throw them away. They are cells that do not really exist. One simply ignores those cells and fits a model to the cells that do exist. Thus, the model is fitted to an incomplete table. In fact, it is impossible to include fixed zeros in a log-linear model. A loglinear model specifies a value of log(mi ) for every cell i. It is implicit that log(mi ) is defined. If a cell is a fixed zero, the probability of an observation occurring in that cell is zero, so the expectation, mi , must also be zero. In such cases, log(mi ) is undefined. To fit log-linear models, one has to throw fixed zeros away. If the Newton-Raphson algorithm of Chapter 10 is used to fit log-linear models, throwing out fixed zeros is no problem. The algorithm can easily handle the fact that not all combinations of the factor categories are considered in the model. When using iterative proportional fitting, the situation is slightly more complex. The algorithm is based on having all combinations of the factor categories defined. Fortunately, there is a simple way around this problem. Recall that it is standard to start iterative proportional fitting with all initial cell estimates equal to 1 and that if the initial values satisfy the constraints of the model, the subsequent iterations also satisfy the constraints of the model. With all initial values of 1, the initial values satisfy the constraints of any interesting ANOVA model. To deal with fixed zeros, take the initial values of the corresponding cells to be 0. It is easily seen that if the initial value is 0, then all subsequent values will also be 0. (We use the definition that 0/0 = 0.) Moreover, the constraints on the model with fixed zero cells eliminated are identical to the constraints on the complete model when the fixed zero cells are required to have fitted values of 0. Although one can get the correct fitted values using iterative proportional fitting, the user often must provide the correct degrees of freedom. These are determined by the model with the fixed zero cells eliminated. Example 8.1.2. We illustrate the computation of degrees of freedom using the data of Example 8.1.1. Consider the model log(mijk ) = M + Si + Aj + Hk + (SA)ij + (SH)ik + (AH)jk .
(1)
Clearly, the grand mean M can exist and we have data available for each sex, age, and health concern. Computing these degrees of freedom as usual, we have

Term    df
M        1
S        1
A        1
H        3

We also have data available for every combination of sex and age, and every combination of age and health concern. Again, computing degrees of freedom in the usual way, we get

Term    df
SA       1
AH       3
We do not have data available for every combination of sex and health concern. We have no data for males with menstrual difficulties. In the 2 × 4 table of sex and health concerns, we have only 7 cells instead of 8. The degrees of freedom for this table must be 7. To fit this table perfectly, we would use the parameters M , Si , Hk , and (SH)ik . The total number of degrees of freedom for these parameters must be 7. Because M has 1 degree of freedom, S has 1, and H has 3, that leaves only 2 degrees of freedom available for SH. Thus, model (1) has 1 + 1 + 1 + 3 + 1 + 2 + 3 = 12 degrees of freedom. The saturated model has 1 degree of freedom for each cell. There are nominally 16 cells, but 2 are fixed zeros, so the table has only 14 cells. For testing model (1) against the saturated model, the degrees of freedom are 14 − 12 = 2. These techniques are easily applied to all models for the health concern data. In testing [SA][H] against the saturated model, we have dropped the SH and AH terms. The degrees of freedom for these terms are put into the test degrees of freedom. Model (1) has 2 degrees of freedom for the test, SH has 2 degrees of freedom, and AH has 3 degrees of freedom, so the test of [SA][H] has 2 + 2 + 3 = 7 degrees of freedom. Alternatively, we can do the calculation by noting that the saturated model has 14 df , while the [SA][H] model has terms and degrees of freedom M (1), S(1), A(1), SA(1), H(3) for a total of 7 degrees of freedom. The test has 14 − 7 = 7 degrees of freedom. The fits for these data are summarized below.
Model            df       G²
[SA][SH][AH]      2     2.03
[SH][AH]          3     4.86
[SA][AH]          4    13.45
[SA][SH]          5     9.43
[SA][H]           7    22.03
[SH][A]           6    15.64
[AH][S]           5    17.46
[S][A][H]         8    28.24
The best fitting model is either [SA][SH][AH] or [SH][AH], depending on whether health concerns are treated as a response variable or not. One final note on fixed zeros. Because fixed zeros are really cells that do not exist, the existence of fixed zeros has no effect on the validity of large sample results. If the counts in the other cells are large, asymptotic results hold.
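The device described earlier for iterative proportional fitting, starting the fixed-zero cells at 0 so that they stay at 0, is easy to see in a small two-way example. The sketch below, including the 3 × 3 counts and the independence-type (quasi-independence) fit, is my own illustration rather than an example from the text.

# Minimal sketch (hypothetical 3x3 table): IPF for row and column margins
# with one structural zero.  Starting that cell at 0 keeps it at 0 forever.
import numpy as np

n = np.array([[12., 5., 0.],   # the (0, 2) cell is treated as a fixed zero
              [ 7., 9., 4.],
              [ 3., 6., 8.]])
m = np.ones_like(n)
m[0, 2] = 0.0                  # initial value 0 for the fixed-zero cell

for _ in range(200):           # iterative proportional fitting
    m *= n.sum(axis=1, keepdims=True) / m.sum(axis=1, keepdims=True)
    m *= n.sum(axis=0, keepdims=True) / m.sum(axis=0, keepdims=True)

print(np.round(m, 2))          # m[0, 2] is still exactly 0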
8.2 Partitioning Polytomous Variables
We now investigate a method that uses incomplete tables to examine category effects. Example 8.2.1. Duncan (1975) presents data on the earth-shattering question, “Who should shovel the snow from sidewalks?” The data are given in Table 8.2. Mothers were asked whether boys, girls, or both should do the shoveling. Mothers never responded that girls alone should do the shoveling, so the data are presented with only two categories. In addition, there are two explanatory factors, the mother’s religion (R), Protestant, Catholic, Jewish, or other, and the year (Y) in which the question was asked. (Having lived 36 years in Minnesota and Montana, I am aware that a key factor has been left out. Father does the vast majority of the shoveling.) We will treat mothers’ opinions on shoveling (S) as a response variable and consider only log-linear models that correspond to logit models. The point of this example is to illustrate how to use tables with fixed zeros to answer questions about parameters in log-linear models. Fitting the standard models gives Model [RY][RS][YS] [RY][RS] [RY][YS] [RY][S]
df 3 4 6 7
G2 0.4 21.5 11.2 31.7
TABLE 8.2. Mothers' Opinions on Who Should Shovel Snow

                            Shoveling (k)
Religion (i)   Year (j)     Boy     Both
Protestant     1953         104       42
               1971         165      142
Catholic       1953          65       44
               1971         100      130
Jewish         1953           4        3
               1971           5        6
Other          1953          13        6
               1971          32       23
Clearly, the best fitting model is [RY][RS][YS]. We can write the model as log(mijk ) = (RY )ij + Sk + (RS)ik + (Y S)jk ,
(1)
i = 1, 2, 3, 4 , j = 1, 2 , k = 1, 2. Because S is a response variable, the (RY )ij ’s must be in the model. The important terms are Sk , (RS)ik , and (Y S)jk . In a logit model, Sk corresponds to the grand mean; not very interesting. The terms (Y S)jk correspond to main effects in years with 1 degree of freedom. The terms (RS)ik correspond to main effects in religion with 3 degrees of freedom. Further analysis must examine the nature of the three degrees of freedom in (RS). We are really asking about relationships among the religion categories. One way to proceed was illustrated in Section 4.6. We could incorporate constraints on the religions. For instance, we could treat all Protestants and Jews alike, while allowing Catholics and Others to have separate effects on the shoveling response. Such a procedure would involve recoding the indices. We could recode the index i into a new pair of indices g and h as follows: i (g, h)
1 (1,1)
2 (2,1)
3 (1,2)
4 (3,1)
where g indicates religion with no difference between Protestants and Jews, while h is simply used to tell Protestants and Jews apart. We can rewrite model (1) as log(mghjk ) = (RY )ghj + Sk + (RS)ghk + (Y S)jk .
(2)
To eliminate differences between Protestants and Jews in (RS), we drop the h, giving log(mghjk ) = (RY )ghj + Sk + (RS)gk + (Y S)jk .
(3)
Both models (2) and (3) are actually models for incomplete tables. The possible values for g are 1, 2, 3. For h, they are 1, 2. For j and k, they are also 1 and 2. This suggests the existence of a 3 × 2 × 2 × 2 table. However, not all combinations of the indices are possible. Only when g is 1 can h be 2. Any cell (g, 2, j, k) in the 3 × 2 × 2 × 2 table with g = 2 or 3 is a fixed zero. It is obvious that quite a few questions can be addressed by creative reindexing. Duncan (1975) presented a particular pattern that is very flexible. Transform the index i into (r, s, t, u) where the correspondence is as follows: i (r, s, t, u)
1 (1,2,2,2)
2 (2,1,2,2)
3 (2,2,1,2)
4 (2,2,2,1)
Each 4-tuple has three 2s and one 1. If the 1 is in the third place, then the 4-tuple corresponds to i = 3, etc. The index r is 1 for Protestant and 2 otherwise. The index s is 1 for Catholic and 2 otherwise. Similarly, t = 1 indicates Jewish and u = 1 indicates Other. Including the year and shoveling factors, we now have a 2 × 2 × 2 × 2 × 2 × 2 table with many fixed zeros because mothers have only one religion. Using a natural identification, denote the factors corresponding to r, s, t, and u as P, C, J, and O. Model (1) can now be rewritten as log(mrstujk ) = (P CJOY )rstuj + Sk + (P CJOS)rstuk + (Y S)jk . Moreover, we can denote this as [PCJOY][PCJOS][YS]. Note that the four factors P, C, J, O taken together are equivalent to the old factor R. Now consider the model [PCJOY][YS][CS]. Recall that [PCJOY] is included because S is a response factor. The term [YS] seemed important in our original analysis, so it is retained. The new model replaces the terms (RS)ik = (P CJOS)rstuk in model (1) with the terms (CS)sk . In particular, the model is log(mrstujk ) = (P CJOY )rstuj + Sk + (CS)sk + (Y S)jk . The logit model effect of the four different religions is being replaced with one effect that distinguishes Catholics from all other religions. Earlier, we considered a model that treated Protestants and Jews the same, but allowed separate effects for Catholics and Others. In Duncan’s setup, this is the model [PCJOY][YS][COS]. In the model log(mrstujk ) = (P CJOY )rstuj + Sk + (COS)tuk + (Y S)jk , the effect of religion on mothers’ shoveling opinions is taken up by the (COS)tuk terms. Only three such terms exist: (COS)12k , an effect for Catholics; (COS)21k , an effect for others; and (COS)22k . The (COS)22k term does not distinguish between Protestants and Jews.
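Duncan's recoding is just a set of religion indicators, which is easy to build in code. The sketch below is my own illustration of the (r, s, t, u) pattern described above, not code from the text.

# Minimal sketch (not from the book): Duncan's (r, s, t, u) recoding is four
# religion indicators; a term such as [CS] then uses only the Catholic
# indicator in place of the full religion-by-opinion interaction.
religions = ["Protestant", "Catholic", "Jewish", "Other"]

def recode(religion):
    # 1 marks the mother's religion, 2 marks "everyone else", as in the text.
    return tuple(1 if religion == r else 2 for r in religions)

for rel in religions:
    print(rel, recode(rel))
# Protestant (1, 2, 2, 2), Catholic (2, 1, 2, 2),
# Jewish (2, 2, 1, 2), Other (2, 2, 2, 1)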
Moreover, because the 2 × 2 × 2 × 2 × 2 × 2 table is so very incomplete, there is another way to do exactly the same thing. The model [PCJOY][YS][CS][OS] is equivalent to [PCJOY][YS][COS]. The model for [PCJOY][YS][CS][OS] is log(mrstujk ) = (P CJOY )rstuj + Sk + (CS)sk + (OS)uk + (Y S)jk . Catholics get the effects (CS)1k + (OS)2k . Others get the distinct effects (CS)2k + (OS)1k . Protestants and Jews both get (CS)2k + (OS)2k . In a complete table, the 15 interactions between S and the religion factors P, C, J, and O would each have 1 degree of freedom. In fact, there are only 3 degrees of freedom for (RS) = (P CJOS). Thus, there are a lot of redundant models. In fact, any term that includes three of P, C, J, and O is equivalent to any other term that includes three, and these are all equivalent to the four-factor term. This follows from the fact that any three of the indices r, s, t, and u completely determine the cell; e.g., if r = 2, s = 2, t = 2, then we must have u = 1. Because all terms with three of the factors are the same, models such as [PCJOY][YS][PCJS] and [PCJOY][YS][CJOS] are equivalent to each other and to [PCJOY][YS][PCJOS]. Although Duncan’s method of reindexing is flexible, it is not a panacea. There are interesting questions that cannot be addressed using Duncan’s method. For example, if we wanted a model that treated Protestant and Jews the same and also treated Catholics and Others the same, we could not arrive at such a model using Duncan’s method. Now let’s see where Duncan’s method gets us with the shoveling data. Some models and fits are given below: Model [PCJOY][PCJOS][YS] = [RY][RS][YS] [PCJOY][PS][YS] [PCJOY][CS][YS] [PCJOY][JS][YS] [PCJOY][OS][YS] [PCJOY][YS] = [RY][YS]
df 3 5 5 5 5 6
G2 0.4 4.8 1.4 10.9 9.8 11.2
The models that involve [JS] and [OS] do not fit very well. The model with [JS] only distinguishes between Jews and non-Jews. The relatively poor fit indicates that Protestants, Catholics, and Others do not act the same. Similarly, the relatively poor fit of the model with [OS] indicates that Protestants, Catholics, and Jews do not act the same. Both the models with [PS] and [CS] fit reasonably well. The model with [CS] fits especially well. This model suggests that Protestants, Jews, and Others act the same, but Catholic mothers have different opinions about who should shovel snow. The model with [PS] indicates that Catholics, Jews, and Others act the same, but Protestant mothers have different opinions. There are so few Jews and Others that it is not surprising that lumping
them with either large category does not substantially hurt the fit. What we really know is that Protestants mothers have different attitudes than Catholic mothers and that if we want to lump Jewish and Other mothers with one of those categories, they seem to fit better with the Protestants than the Catholics. However, we know almost nothing about Jewish mothers and little about Other mothers. From looking at Table 8.2, it is clear that the Catholic mothers are much more egalitarian about shoveling than the Protestants or the Others. We could go further with this analysis by considering models [RY][YS] [CS][PS], [RY][YS][CS][JS], and [RY][YS][CS][OS], but considering what little difference there is between [RY][YS][CS] and [RY][YS][RS], there seems little point in pursuing the analysis further. Note that I have gotten lazy and started writing [RY] for [PCJOY].
8.3 Random Zeros
Random zeros are cells that have positive probability of occurring, but do not occur in the sample at hand. If a large enough sample was taken, these cells should eventually contain positive counts. Random zeros present three problems. First, they suggest that asymptotic results are invalid. If the sample was large enough for asymptotic approximations, these cells ought not be zero. The discussions of small samples in Section 2.4 and conditional inference in Section 3.5 are relevant to this problem. Second, when a table includes random zeros, maximum likelihood estimates of the parameters may not exist. Finally, there is a practical problem in that some computer programs will give "MLEs" even when they do not exist. In this section, we concern ourselves primarily with problems related to the nonexistence of MLEs.
Example 8.3.1. Consider a 4 × 2 × 3 × 3 table on the results of arthroscopic knee surgery. The four factors and their categories are listed as follows:

Factor Label    Factor Description     Categories
Type            Type of injury         Twist, Direct, Both, No injury
Sex             Sex of patient         Male, Female
Age             Age of patient         11-30, 31-50, 51-91
Result          Outcome of surgery     Excellent, Good, Fair-Poor
The data are given in Table 8.3. First, note that one cannot fit a saturated log-linear model. For a saturated model, n_hijk = m̂_hijk. Because the model is log-linear, log(m̂_hijk) must be defined. However, for some cells, n_hijk = 0, so either n_hijk ≠ m̂_hijk or log(m̂_hijk) is not defined.
TABLE 8.3. Data on Arthroscopic Knee Surgery

                                            Result (k)
Type (h)     Sex (i)    Age (j)      Ex     Good    F-P
Twist        Male       11-30        21       11      4
                        31-50        32       20      5
                        51-91        20       12      5
             Female     11-30         3        1      0
                        31-50         6        5      2
                        51-91         6        3      1
Direct       Male       11-30         3        2      2
                        31-50         2        4      4
                        51-91         0        0      0
             Female     11-30         0        1      1
                        31-50         0        0      0
                        51-91         1        2      3
Both         Male       11-30         7        1      1
                        31-50        11        6      2
                        51-91         0        4      6
             Female     11-30         1        0      0
                        31-50         1        1      1
                        51-91         2        4      1
No Injury    Male       11-30         0        0      0
                        31-50         1        2      1
                        51-91         3        3      0
             Female     11-30         1        0      0
                        31-50         1        2      0
                        51-91         1        6      8
We can extend this argument to other models. If we consider models that include the uT SA(hij) terms (i.e., models that include [TSA]), the ˆ hij· for all h, i, and j. However, MLEs must satisfy the condition nhij· = m ˆ 213· = 0, etc. n213· = n222· = n411· = 0. If MLEs exist, then m But, because we have a log-linear model, each term m ˆ 213k must be strictly positive. Summing over k, m ˆ 213· must also be strictly positive. Because m ˆ 213· = 0, we have a contradiction. The MLEs must not exist. Therefore, we cannot fit any log-linear models that include [TSA]. Similarly, n4·13 = n4·12 = 0, so MLEs do not exist for models that include [TAR]. All other marginal totals are positive, so models with [TSA] or [TAR] are our primary source of concern. To some extent, these problems can be evaded. Suppose we wish to fit a model with [TSA]. If we think of TSA as one composite factor, we can think of the problem as being one of fitting a (4 × 2 × 3) × 3 table, i.e., a 24 × 3 table. In this table, three of the “rows” have zero totals. If we drop these three rows from the table, we can fit the remaining 21 × 3 table. Now, suppose we fit the model [TSA][TR][AR]. We have 21 degrees of freedom in the model for fitting [TSA]. Normally, this would be 24 degrees of freedom for fitting a grand mean, main effects T , S, A; two-factor effects (T S), (T A), (SA); and the three-factor effect (T SA). However, because 3 rows are being dropped, we have only 21 degrees of freedom. For fitting [TR], we have an additional 8 degrees of freedom. Adding [TR] involves adding the main effect R with 2 degrees of freedom and the two-factor effect (T R) with 6 degrees of freedom. Finally, adding [AR] is equivalent to adding the twofactor effect (AR) with 4 degrees of freedom. The model has 21 +8+4 = 33 degrees of freedom. The table has 21 × 3 = 63 degrees of freedom, so the test of [TSA][TR][AR] has 63 − 33 = 30 degrees of freedom. (Incidentally, G2 = 34.72, so this is a very good model.) We now consider the problem of determining the degrees of freedom for testing the more complex model, [TSA][TAR]. Recall that there are marginal totals of 0 associated with both of these terms. Of the 24 marginal totals associated with [TSA] (obtained by summing over R), three are 0. This leads us to fitting the (24 − 3) × 3, TSA by R table considered above. Of the 36 marginal totals associated with [TAR] (obtained by summing over S), two are 0. This would normally lead us to fitting the (36 − 2) × 2 TAR by S table. In fact, what we need to do is fit the intersection of these tables. The 21 × 3 table involves dropping the cells (h, i, j, k) with values (2,1,3,1), (2,1,3,2), (2,1,3,3), (2,2,2,1), (2,2,2,2), (2,2,2,3), (4,1,1,1), (4,1,1,2), and (4,1,1,3). The 34 × 2 table drops the cells (4,1,1,3), (4,2,1,3), (4,1,1,2), and (4,2,1,2). Note that both tables drop the cells (4,1,1,2) and (4,1,1,3), so a total of 11 cells are being dropped from the 4 × 2 × 3 × 3 table. We are left with a table that has 72 − 11 = 61 cells. We now compute the degrees of freedom for the model [TSA][TAR]. As before, fitting [TSA] alone accounts for 21 degrees of freedom. Determining the degrees of freedom for adding [TAR] is somewhat more complex. Nor-
mally, [TAR] alone would involve 36 degrees of freedom broken down as follows: (grand mean) : (1), T : (3), A : (2), R : (2), (T A) : (6), (T R) : (6), (AR) : (4), and (T AR) : (12). With the marginal zeros, there are only 34 degrees of freedom. The 2 degrees of freedom come out of the (T AR) interaction, so the correct degrees of freedom are (T AR) : (10). Adding [TAR] to a model with [TSA] involves adding the effects R, (T R), (AR), and (T AR). The degrees of freedom for [TSA][TAR] are 21 + 2 + 6 + 4 + 10 = 43. The lack of fit test for [TSA][TAR] has 61 − 43 = 18 degrees of freedom; G2 is 21.90. The basic approach to dealing with models that have random zeros is to identify all cells that imply that MLEs do not exist. In other words, identify all cells for which the maximum likelihood constraints would imply that m ˆ = 0. Such cells are dropped from the model and MLEs are found for the remaining cells. We are simply treating cells with “m ˆ = 0” as fixed zeros. The degrees of freedom for the table are the number of cells in the full table minus the number of “fixed” zeros. The degrees of freedom for the model are the usual degrees of freedom for the model minus the number of degrees of freedom lost because there is “no information” available on some parameters. From Example 8.3.1, [TSA] usually involves 24 degrees of freedom related to the 4 × 3 × 3 TSA marginal table. The table has n213· = n222· = n411· = 0. The nine cells involved in these three marginal totals are being treated like fixed zeros, so those nine cells “do not exist.” There is no information available on 3 of the 24 cells of the TSA marginal table. Thus, 3 degrees of freedom are lost to the model because there is no information available. We now consider one more example to set the ideas and illustrate some additional details. Example 8.3.2. The data in Table 8.4 were adapted from Lee (1980). The table involves four factors related to the survival of patients with stages 3 and 4 melanoma: Gender, Remission, Immunity, and Survival. Remission has three categories: still in remission, relapsed, never in remission. Immunity has three categories that were derived from results on six skin tests. One test score of at least 10 indicates good immunity. No test scores of at least 10 indicates no immunity. If more than half of the test scores are unknown and those that are known are less than 10, the immunity is taken as unknown. Finally, to get more interesting marginal zeros, Lee’s count in cell (2,1,3,2) has been changed from 1 to 0. Denote the factors Gender, Remission, Immunity, and Survival as G, R, I, and S, respectively. Consider fitting the model [GRI][GRS][GIS][RIS]. The likelihood equations are, for all h, i, j, and k, nhij· nhi·k
n_hij· = m̂_hij· ,
n_hi·k = m̂_hi·k ,
n_h·jk = m̂_h·jk ,
TABLE 8.4. Melanoma Data

                                                  Survival (k)
Gender (h)   Remission (i)   Immunity (j)       Dead    Alive
Male         Remission       No Immunity           2        0
                             Immunity              4        1
                             Unknown               0        0
             Relapsed        No Immunity           0        1
                             Immunity              1       10
                             Unknown               3        3
             None            No Immunity           3        1
                             Immunity             10        5
                             Unknown               8        2
Female       Remission       No Immunity           2        0
                             Immunity              3        4
                             Unknown               0        0
             Relapsed        No Immunity           0        0
                             Immunity              0       10
                             Unknown               0        4
             None            No Immunity           2        0
                             Immunity              6        3
                             Unknown               3        8
n_·ijk = m̂_·ijk .
Many of these marginal tables have zero totals. The RIS marginal table has four zeros: 0 = n·131 = n·132 = n·112 = n·211 . These involve the eight cells with counts of 0: (1,1,3,1), (2,1,3,1), (1,1,3,2), (2,1,3,2), (1,1,1,2), (2,1,1,2), (1,2,1,1), and (2,2,1,1). The GRI table has three zeros: 0 = n113· = n213· = n221· . These involve six cases: (1,1,3,1), (1,1,3,2), (2,1,3,1), (2,1,3,2), (2,2,1,1), and (2,2,1,2). The GRS table has n22·1 = 0. This total involves the cases (2,2,1,1), (2,2,2,1), and (2,2,3,1). Finally, the GIS table has n2·12 = 0, so the cells (2,1,1,2), (2,2,1,2), and (2,3,1,2) are all 0. Having listed all of the cells that would have to have m ˆ = 0, we see that there are only 12 distinct cells. (These 12 happen to be all of the cells in the entire table with counts of 0, but that fact is irrelevant.) The full table is a 2 × 3 × 3 × 2 table having 36 cells. If we drop the 12 cells with m ˆ = 0, we have 24 cells remaining in the table. Thus, the table has 24 degrees of freedom. We now consider the degrees of freedom for the model. The RIS table has four zero totals. The corresponding cells are dropped from the table, so rather than being a 2 × 18 table, the G by RSI table is a 2 × 14 table. Thus, fitting [RIS] gives the model 14 degrees of freedom. We can also compute the degrees of freedom for fitting [RIS] term by term. To compute the degrees of freedom term by term, we need to note that the RI marginal table has n·13· = 0. Thus, this 3 × 3 table has one dropped cell and only 8 degrees of freedom. The degrees of freedom are allocated: (grand mean) : (1), R : (2), I : (2), and (RI) : (3) rather than the standard value 4. Moving up to the RIS 3 × 3 × 2 table, we have 14 nonempty cells. Because the (RI) interaction has only 3 degrees of freedom, the 14 degrees of freedom correspond to: (grand mean) : (1), R : (2), I : (2), S : (1), (RI) : (3), (RS) : (2), (IS) : (2), which leaves (RIS) : (1) rather than the standard value of 4. Thus, with this pattern of random zeros, the four empty cells cause a reduction of 1 degree of freedom in the (RI) interaction and 3 degrees of freedom in the (RIS) interaction. Note that the only two-factor marginal table that contains a zero is the RI table. Thus, any other reductions in degrees of freedom must occur in three-factor interaction terms. A similar analysis holds for the GRI marginal table. This 2 × 3 × 3 table has three zeros, so three cells are dropped. There are 18 − 3 = 15 degrees of freedom. The degrees of freedom are allocated: (grand mean) : (1), G : (1), R : (2), I : (2), (GR) : (2), (GI) : (2); once again (RI) : (3) (rather than 4) which leaves us with (GRI) : (2) (rather than 4). The model [RIS][GRI] has 14 degrees of freedom for [RIS]. We then add the terms G, (GR), (GI), and (GRI) with 1 + 2 + 2 + 2 = 7 degrees of freedom. Thus, [RIS][GRI] has 14 + 7 = 21 degrees of freedom.
The 2 × 3 × 2 GRS table has one zero, hence 11 degrees of freedom. They are allocated: (grand mean) : (1), G : (1), R : (2), S : (1), (GR) : (2), (GS) : (1), (RS) : (2), which leaves (GRS) : (1). Adding [GRS] to [RIS][GRI] involves adding the terms (GS) and (GRS) with 1 + 1 = 2 degrees of freedom, so [RIS][GRI][GRS] has 21 + 2 = 23 degrees of freedom. Finally, the 2×3×2 GIS table has one cell dropped for 11 degrees of freedom. The 1 degree of freedom lost is taken from the (GIS) interaction, so (GIS) has 1 degree of freedom. The model [RIS][GRI][GRS][GIS] involves adding (GIS) to the model [RIS][GRI][GRS]. The degrees of freedom are 23 + 1 = 24. Recall that the degrees of freedom for the table are 24. With 24 degrees of freedom for the model, we get a perfect fit. Having dropped out the cells that require m ˆ = 0, the model [RIS][GRI][GRS][GIS] is a saturated model. In order to implement this approach to dealing with random zeros, we must be able to identify all cells for which the likelihood equations imply that m ˆ = 0. As in Examples 8.3.1 and 8.3.2, it is frequently easy to identify some cells that have m ˆ = 0. Often, all of the cells with m ˆ = 0 are easily identified. Cells are easy to identify if they correspond to marginal totals that are zero. Unfortunately, sometimes all of the marginal totals can be positive, but cells with m ˆ = 0 still exist. Example 8.3.3. Consider a 2 × 2 × 2 table with n111 = n222 = 0 and nijk > 0 for all other cells. If we fit the model [12][13][23], the likelihood equations are nij· ni·k
n_ij· = m̂_ij· ,
n_i·k = m̂_i·k ,
and
n_·jk = m̂_·jk .
Because n111 and n222 are the only 0s, all of the marginal totals listed above are positive. Looking at the marginal totals indicates no cause for concern. Nonetheless, these equations imply that m̂111 = m̂222 = 0.
To see this, first note that we must have n··· = m̂··· . Writing out all of the likelihood equations involving n111 and n222 gives
n111 + n112 = m̂111 + m̂112 ,
n111 + n121 = m̂111 + m̂121 ,
n111 + n211 = m̂111 + m̂211 ,
n221 + n222 = m̂221 + m̂222 ,
n212 + n222 = m̂212 + m̂222 ,
n122 + n222 = m̂122 + m̂222 .
Adding these six equations together, we get
2(n111 + n222) + n··· = 2(m̂111 + m̂222) + m̂··· .
Because n··· = m̂··· , we have n111 + n222 = m̂111 + m̂222 . With n111 = n222 = 0, we need both m̂111 = 0 and m̂222 = 0. These two cells would be dropped from the 2 × 2 × 2 table. Although the author is unaware of any general method of identifying situations like that in Example 8.3.3, the process of fitting models can give hints as to whether such cells have been overlooked. In particular, if some estimated cell counts seem to be converging to zero or if the iterative estimated cell counts fail to converge, it would be wise to consider the possibility that this is due to cells that need to be dropped because of the pattern of random zeros. Finally, it is interesting to look at the test of [TSA][TR][AR] versus [TSA][TAR] that can be obtained from Example 8.3.1. The test has G2 = 34.72 − 21.90 = 12.82 with df = 30 − 18 = 12. In this example, the degrees of freedom for the test happen to correspond to the usual degrees of freedom for the (TAR) interaction with T at four levels, A at three levels, and R at three levels. However, the 12 degrees of freedom are actually arrived at quite differently. As established earlier, (TAR) has 10 degrees of freedom. The other 2 degrees of freedom in the test come from the fact that [TSA][TR][AR] is fit to a 63-cell table rather than the 61-cell table of [TSA][TAR]. So the 12 degrees of freedom come from 10 degrees of freedom for (TAR) and 2 degrees of freedom for new cells. This discussion also points out that there are some technical difficulties involved in testing [TSA][TR][AR] versus [TSA][TAR]. We are testing a 63-cell table against a 61-cell table. We have not discussed this sort of thing previously. Obviously, this requires a mathematical theory that embeds both of these within the 72-cell 4 × 2 × 3 × 3 table allowing for fixed zero cells, log-linear models on the nonzero cells, and reduced models in which fixed zeros are allowed to become unfixed. Moreover, asymptotic theory will be of limited value because all of these problems are being caused by small sample sizes.
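The behavior described in Example 8.3.3 is easy to demonstrate numerically. The following sketch (in Python with numpy; the six positive counts are made up purely for illustration and are not data from the text) runs iterative proportional fitting for [12][13][23] on a 2 × 2 × 2 table with n111 = n222 = 0 and shows the fitted values for those two cells being driven toward zero.

import numpy as np

# 2 x 2 x 2 table with random zeros in cells (1,1,1) and (2,2,2);
# the other six counts are hypothetical positive values.
n = np.array([[[0., 3.], [2., 4.]],
              [[5., 1.], [6., 0.]]])

m = np.ones_like(n)  # starting values for iterative proportional fitting
for _ in range(500):
    m *= (n.sum(axis=2) / m.sum(axis=2))[:, :, None]  # match [12] margins
    m *= (n.sum(axis=1) / m.sum(axis=1))[:, None, :]  # match [13] margins
    m *= (n.sum(axis=0) / m.sum(axis=0))[None, :, :]  # match [23] margins

print(m[0, 0, 0], m[1, 1, 1])  # both fitted counts head toward zero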
8.4 Exercises
Exercise 8.4.1. Brown (1980) presents data that are reproduced in Table 8.5, on a cross-classification of 53 prostate cancer patients. The factors are acid phosphatase level in the blood serum, age, stage, grade, x-ray, and nodal involvement. Acid level and age are categorized as high or low. Stage is an indication of size and location of the tumor; a positive value is more serious. The grade and x-ray indicate whether biopsy and x-ray tests are positive for cancer. The final factor is whether the lymph nodes are involved. Analyze the data using iterative proportional fitting and ANOVA type models.
TABLE 8.5. Nodal Involvement in Prostate Cancer Grade X−ray Involvement Low − Low + High − High + Age Stage
Grade X−ray Involvement Low − Low + High − High + Age Stage
Yes 0 0 0 1
Low Acid − + No Yes 1 0 0 1 2 0 0 0
Yes 1 1 0 2
High Acid − + No Yes 1 0 0 0 0 1 0 1
− No 4 0 3 2
− No 5 1 3 2
+ − No 1 1 1 4
+ Yes 1 0 0 0
No 0 0 0 0
Yes 0 1 0 0
+ − No 0 1 0 0
+ Yes 1 2 0 0
No 0 1 0 0
Yes 0 5 1 1
Exercise 8.4.2. Extend your analysis of the Berkeley graduate admissions data (cf. Exercise 3.8.4) by incorporating the method of partitioning polytomous factors from Example 8.1.2. Exercise 8.4.3. Partitioning Two-Way Tables. Lancaster (1949) and Irwin (1949) present a method of partitioning tables that was used in Exercise 2.7.4. We now establish the validity of this method. Consider a two-dimensional I × (J + K − 1) table. The partitioning method tests for independence in two subtables. One table is a reduced I × K table consisting of the last K columns or the full table. The other table is an I × J table that uses the first J − 1 columns of the full table and also includes a column into which the last K columns of the full table have been collapsed. Write the data with three subscripts as nijk , i = 1, . . . , I, j = 1, . . . , J, and k = 1, . . . , Lj , where " Lj =
1 if j ≠ J, and Lj = K if j = J.
Consider the models log(mijk ) = αi + βjk ,
(1)
log(mijk ) = βij + βjk ,
(2)
log(mijk ) = γijk .
(3)
and
Model (3) is the saturated model so (3)
m ˆ ijk = nijk . Model (1) is the model of independence in the I × J + K − 1 table, so (1) m ˆ ijk = ni·· n·jk n··· . (a) Show that the maximum likelihood estimates for model (2) are " nijk if j = J (2) m ˆ ijk = niJ· n·Jk n·J· if j = J. (b) Show that both G2 and X 2 are the same for testing model (2) against model (3) as for testing the reduced table for independence. (c) Show that both G2 and X 2 are the same for testing model (1) against model (2) as for testing the collapsed table for independence. (d) Extend (b) and (c) by showing that all power divergence statistics are the same, cf. Exercise 2.7.8. Exercise 8.4.4. The Bradley-Terry Model. “Let’s suppose, for a moment, that you have just been married and that you are given a choice of having, during your entire lifetime, either x or y children. Which would you choose?” Imrey, Johnson, and Koch (1976) report results from asking this question of 44 Caucasian women from North Carolina who were under 30 and married to their first husband. Women were asked to respond for pairs of numbers x and y between 0 and 6 with x < y. The data are summarized in Table 8.6. The most basic form for such experiments is to ask each woman to respond for all possible pairs of numbers. If this was done, there is a considerable amount of missing data. For example, in comparing 0 children with 1 child there are only 17+2 = 19 responses rather than 44. TABLE 8.6. Family Size Preference Alternative Choice
0
0 1 2 3 4 5 6
— 2 1 3 1 1 2
Preferred Number of Children 1 2 3 4 5 6 17 — 0 1 10 11 13
22 19 — 7 12 18 20
22 13 11 — 13 15 22
15 10 11 6 — 17 14
26 9 6 2 4 — 12
25 11 6 6 0 11 —
This data collection technique is called the method of paired comparisons. It is often used for such things as taste tests. Subjects find it easier to
distinguish a preference between two brands of cola than to rank their preferences among half a dozen. David (1988) provides a good survey of the literature on the analysis of preference data along with notes on the history of the subject. One particular model for preference data assumes that each item has a probability πi of being preferred. Thus, in a paired comparison, the conditional probability that i is preferred to j is πi /(πi + πj ). There are many ways to arrive at this model; the one given above is simple but restrictive. In other developments, the parameters πi need not add up to one, but it is no loss of generality to impose that condition. Bradley and Terry (1952) rediscovered the model and popularized it. The Bradley-Terry model was put into a log-linear model framework by Fienberg and Larntz (1976). For I items being compared, their framework consists of fitting the incomplete I(I − 1)/2 × I table in which the rows consist of all pairs of items and the columns consist of the preferred item. A test of the model log(mij ) = u + u1(i) + u2(j) is a test of whether the Bradley-Terry model holds. (a) Rewrite Table 8.6 in the Fienberg-Larntz form. (b) Test whether the Bradley-Terry model fits. (c) Show that under the log-linear main effects model the odds of preferring item j to item j is the ratio of a non-negative number depending on j and a non-negative number depending on j . Show that this is equivalent to the Bradley-Terry model. (d) Estimate the probabilities πi .
9 Generalized Linear Models
Generalized linear models are a class of models that generalize the linear models used for regression and analysis of variance. They allow for more general mean structures and more general distributions than regression and analysis of variance. Generalized linear models were first suggested by Nelder and Wedderburn (1972). An extensive treatment is given by McCullagh and Nelder (1989). Generalized linear models include logistic regression as a special case. Another special case, Poisson regression, provides the same analysis for count data as log-linear models. The discussion here involves more distribution theory than has been required elsewhere in this book; in particular, it makes extensive use of the exponential family of distributions and the gamma distribution. Information on these distributions can be obtained from many sources, e.g., Cox and Hinkley (1974). Section 1 presents the family of distributions used in generalized linear model theory. Estimation of the linear parameters is dealt with in Section 2. Model fitting and estimation of dispersion are examined in Section 3; both of these topics involve a version of the likelihood ratio test statistic called the deviance. Section 4 contains a summary and discussion. We begin with a brief review of some ideas from regression and analysis of variance and three examples of generalized linear models. Regression and analysis of variance are fundamental tools in statistics. A multiple regression model is yi = β1 xi1 + · · · + βp xip + ei , i = 1, . . . , n, where E(ei ) = 0 and, typically, xi1 = 1, i = 1, . . . , n, so that β1 is an intercept. The xij ’s are all assumed to be known predictor
variables; the βj ’s are fixed unknown parameters. A one-way analysis of variance can be written as yij
= µ + αi + eij = µ · (1) + α1 δ1i + · · · + αt δti + eij
where i = 1, . . . , t, j = 1, . . . , ni , E(eij ) = 0, and δhi is 1 if h = i and 0 otherwise. In the analysis of variance, µ and the αi ’s are fixed unknown parameters, while the multiplier 1 for µ and the δhi ’s play roles analogous to the xij ’s in regression. The key aspect of both regression and ANOVA is that they involve observations whose expected value is a linear combination of known predictor variables. In regression, E(yi ) = β1 xi1 + · · · + βp xip and in analysis of variance E(yij ) = µ · (1) + α1 δ1i + · · · + αt δti . If the observations y in regression and ANOVA have the same variance and are uncorrelated, regression and ANOVA provide the best estimates of (estimable) parameters among all estimators that are unbiased linear functions of the observations. If we go a step further and assume that the observations have independent normal distributions with the same variance, then the usual estimates are the best among all unbiased functions of the observations. In both of these statements, “best” means that the estimates have minimum variance. Under the assumption of independent normal distributions with the same variance, the usual estimates are also maximum likelihood estimates. The current chapter is concerned with finding maximum likelihood estimates for a more general set of models. The models are less restrictive in that they allow more general forms of linearity and distributions other than the normal. We now set some matrix notation. Both regression and analysis of variance are linear models, cf. Christensen (1996b). Let yi be an observation. It is of no significance how we subscript the observations. Any convenient method of subscripting is acceptable whether it be one subscript as in regression, two subscripts as in one-way analysis of variance, or three subscripts as in two-way ANOVA with replications. Let xi = (xi1 , . . . , xip ) be a 1×p row vector of predictor variables. Let β = (β1 , . . . , βp ) be a p×1 column vector of unknown parameters. A typical normal theory linear model assumes yi ∼ N (xi β, σ 2 ) where i = 1, . . . , n and the yi ’s are independent. For pedagogical reasons, it is advantageous to write yi ∼ N (mi , σ 2 ),
mi = xi β.
We begin our discussion of generalized linear models by considering three examples. In each case, we take y1 , . . . , yn independent. The models are specified by their distributions and mean structure. For normal data, yi ∼ N (mi , σ 2 ),
mi = xi β.
E(yi ) = mi ,
This is the model for analysis of variance and regression. For Poisson data, yi ∼ Pois(mi ),
E(yi ) = mi ,
log(mi ) = xi β.
This is just a log-linear model for a table containing n cells where the count in each cell has a Poisson distribution with parameter mi . It is important to note that n is the number of cells and not the observation vector as it is elsewhere in this book. As we have mentioned before and as is shown in Chapter 12, under very weak conditions the analysis of a contingency table under Poisson sampling is the same as the analysis under multinomial sampling. It is interesting to note that the framework for generalized linear models assumes independent observations, so it does not apply directly to multinomial sampling or to general product-multinomial sampling. It is the equivalence of the maximum likelihood analyses under the Poisson, multinomial, and product-multinomial sampling schemes that makes generalized linear models a useful tool for contingency tables. For binomial sampling, we take yi to be the proportion of successes, so Ni yi ∼ Bin(Ni , pi ), log
E(yi) = pi ≡ mi ,    log[mi / (1 − mi)] = log[pi / (1 − pi)] = xi β .
Note that Ni is a known quantity and not a parameter. The model is simply a logistic regression or logit model. The data consist of proportions obtained from independent binomial random variables and the mean structure is a linear model in the log odds. Alternative models for binomial regression will be mentioned later. Except in this chapter, yi for binomial regression is always taken to mean the number of successes, rather than the proportion of successes. The change is made to fit the binomial distribution into the family of generalized linear models.
9.1
Distributions for Generalized Linear Models
The normal, Poisson, and binomial distributions considered above are members of the exponential family of distributions. A random variable y has
a distribution in the exponential family if it has a probability density or mass function that can be written as
f(y|θ) = R(θ) exp[ Σj=1..v qj(θ) tj(y) ] h(y)
where θ = (θ1 , . . . , θs ) is a vector of parameters. If v = 1, the family is referred to as a one-parameter exponential family; the one parameter can be taken as q1 (θ). The theory of generalized linear models requires the distribution of y to be in a subclass of the one-parameter exponential family. The density or mass function must have the form # " w [θy − r(θ)] h(φ, y, w) (1) f (y|θ, φ; w) = exp φ where θ, φ, and w are scalars. By assumption, w is a fixed known number. The role of φ in this function is curious; it is treated as an unknown constant but not as a parameter. With φ constant, the distribution (1) is in the oneparameter exponential family; just take R(θ) = exp [−wr(θ)/φ], q(θ) = wθ/φ, t(y) = y, and h(y) = h(φ, y, w). In practice, φ is often an unknown parameter. As such, the distribution need not be in the exponential family relative to the two parameters θ and φ because the function h(φ, y, w) need not satisfy the conditions of a two-parameter exponential family. The value φ is simply a positive number that is convenient for defining various special cases. The particular form of f (y|θ, φ; w) in (1) is chosen so that the maximum likelihood estimate of θ does not depend on φ. This will be discussed in more detail in Section 2. For the family of distributions (1), the expected value of y depends on θ but not on φ. For any distribution, ' 1 = f (y|θ, φ; w) dy where it is understood that integration is always replaced by summation when y has a discrete distribution. Taking the derivative with respect to θ on both sides gives ' 0 = f˙(y|θ, φ; w) dy (2) where f˙ is the derivative of f with respect to θ and f satisfies conditions so that the derivative can be taken under the integral sign. From the exact form of f (y|θ, φ; w) in (1), it is easily seen that (2) is ' w (y − r(θ)) ˙ f (y|θ, φ; w) dy 0= φ
where r(θ) ˙ is the derivative dr(θ)/dθ. It follows that E(y) ≡ m = r(θ). ˙ Typically, r˙ is an invertible function, so θ is also a function of the mean, say θ = r˙ −1 (m). Linear structure for distributions of the form (1) is most naturally specified by (3) θ = x β where, as usual, β is a vector of unknown parameters and x is fixed and known. Note that with θ = r˙ −1 (m), the linear structure x β in equation (3) is also a function of the mean. In fact, the analysis of generalized linear models can be carried through when the linear structure is a more general function of the mean, g(m) = x β, as long as it is possible to write θ = g∗ (x β) for some function g∗ (·). A generalized linear model consists of independent observations yi , i = 1, . . . , n, with yi ∼ f (yi |θi , φ, wi ),
E(yi ) ≡ mi ,
g(mi ) = xi β.
If g(mi ) = θi , the model is a canonical generalized linear model. In other words, a canonical model has g(·) ≡ r˙ −1 (·). Names have been given to the various components of generalized linear models. The linear structure x β is called the linear predictor. The function g(·) that specifies the relationship g(m) = x β between the mean and the linear predictor is called the link function. If g(m) = θ, the function g(·) is called the canonical link function. The density f (y|θ, φ; w) is often called the error function and the parameter φ is often called the dispersion parameter. In Section 3, we will need to know the variance of y when y has a density of the form (1). Taking the second derivative with respect to θ on both sides of ' 1 = f (y|θ, φ; w) dy and assuming that derivatives can be taken under the integral gives ' 0 = f¨(y|θ, φ; w) dy ! ' d (y − r(θ)) ˙ f (y|θ, φ; w) w dy = φ dθ ' w2 2 = f (y|θ, φ; w) dy (y − r(θ)) ˙ φ2 ' w −¨ r(θ)f (y|θ, φ; w) dy + φ
where two dots indicate a second derivative. It follows immediately that ! r(θ)w/φ] and, thus, 0 = Var(y)w2 /φ2 − [¨ Var(y) = r¨(θ)φ/w. The function r¨(θ) is often written as a function of m, V (m) ≡ r¨ r˙ −1 (m) = r¨(θ). V (m) is generally referred to as the variance function. We now review how the general distribution theory applies to the three examples given earlier: normal, Poisson, and binomial sampling. If y ∼ N (m, σ 2 ), the density for y real is (y − m)2 1 f (y|m; σ 2 ) = √ exp − 2σ 2 2π σ my 1 −m2 −y 2 /2σ 2 √ = exp e exp . 2σ 2 σ2 2π σ To see that the density has the form of equation (1), identify θ = m, w = 1, √ 2 2 φ = σ 2 , r(θ) = m2 /2, and h(φ, y, w) = 1/ 2πσ e−y /2σ . The expected value of y is m, so the canonical linear structure is θ = m = x β. The canonical link leads to a standard linear model. For y ∼ Pois(m), the probability mass function on y = 0, 1, 2, . . . is f (y|m)
= m^y e^(−m) / y! = exp(−m) exp(y log(m)) (1/y!) .
Identify θ = log(m), w = 1, φ ≡ 1, r(θ) = m, and h(φ, y, w) = (1/y!). It is well known that for a Pois(m) distribution, the expected value and variance are both m. To see this from the general distribution theory, observe that the mean is r(θ) ˙ and with w = 1 and φ ≡ 1, the variance is r¨(θ). From θ = log(m) and r(θ) = m, it follows that r(θ) = eθ and thus r(θ) ˙ = eθ and θ ˙ = r¨(θ). The expected r¨(θ) = e . Again, using θ = log(m) gives m = r(θ) value of y is m, so the canonical linear structure is θ = log(m) = x β. The canonical link leads to a standard log-linear model for Poisson data. For N y ∼ Bin(N, p) with N known, the mass function on N y = 0, . . . , N is N f (y|p) = pN y (1 − p)N −N y Ny N y N p N = (1 − p) Ny 1−p p N N = (1 − p) exp N y log . 1−p Ny
p , w = N , φ ≡ 1, r(θ) = − log(1 − p), and h(φ, y, w) = Identify θ = log 1−p N . The expected value of y is m ≡ p, so the canonical linear structure Ny p m is θ = log 1−p = log 1−m = x β. The canonical link leads to a standard logistic (logit) model. (See Exercise 9.5.1.) In general, the inverse of any cumulative distribution function (cdf) F (·) makes a reasonable link function for binomial data, i.e., g(p) = F −1 (p). F (u) = eu /(1 + eu ) is the cdf of the logistic distribution and defines logistic regression. Probit regression is the procedure based on taking F (u) = Φ(u) where Φ(u) is the cdf of a standard normal distribution. A third example is complementary log-log regression which uses F (u) = 1 − exp [1 − exp(eu )]. As a last example, consider the gamma distribution. The gamma distribution is defined by the probability density function f (y|α, λ) =
= λ^α e^(−λy) y^(α−1) / Γ(α)
for y > 0. The density depends on two parameters, α and λ. The expected value of a gamma distribution is E(y) ≡ m =
α / λ
and the variance is Var(y) = α / λ² . To indicate that y has a gamma distribution, write
y ∼ Gamma(α, λ). Special cases of the gamma distribution include exponential distributions with mean 1/λ, i.e., Gamma(1, λ) and χ2 (n) distributions, Gamma( n2 , 12 )’s. The gamma density can be rewritten as α α α λ −λ y α−1 . exp α y f (y|α, λ) = α α Γ(α) To see the gamma density in the form of equation (1), identify θ = −λ/α, w = 1, φ = 1/α, r(θ) = − log(λ/α), and h(φ, y, w) = αα y α−1 /Γ(α). The expected value of y is m = α/λ = −1/θ, so the canonical linear structure is θ = −1/m = x β. Note that the distribution is only defined for y > 0; thus, for any gamma distribution, m > 0. It follows that when using the canonical link, restrictions must be placed on the parameter vector β to ensure that the expected value is positive. (See Exercise 9.5.2.) The canonical generalized linear model for n independent observations with gamma distributions is yi ∼ Gamma(α, λi ),
E(yi) = mi = α / λi ,    −1/mi = xi β
where β is restricted so that −xi β > 0 for all i. Gamma distribution regression is useful for modeling situations in which the coefficient of variation is constant. The coefficient of variation is Var(yi ) α/λ2i 1 = =√ . E(yi ) α/λi α When the data appear to have a constant coefficient of variation, using the gamma distribution is an alternative to doing a standard linear model analysis on the logs of the data, cf. Christensen (1996b, Section 13.7). Note that the constant coefficient of variation does not depend on the choice of link function. Often, noncanonical links such as the identity, m = x β, and the log, log(m) = x β, are used with gamma distributed data, cf. McCullagh and Nelder (1989). The identity link requires restrictions on β; the log link does not. When the coefficient of variation is small, the log link analysis is very similar to the linear model analysis on the logs of the data. The log link is probably the most commonly used for gamma regression. Exponential regression is the special case with φ = 1 and (usually) a log link.
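The gamma mean and variance formulas used in this section are easy to check by simulation. A minimal sketch in Python with numpy; the values of α and λ are arbitrary and chosen only for illustration.

import numpy as np

# Check E(y) = alpha/lambda and Var(y) = alpha/lambda^2 for the gamma
# distribution in the rate parameterization used in this section.
rng = np.random.default_rng(1)
alpha, lam = 2.5, 4.0
y = rng.gamma(shape=alpha, scale=1.0 / lam, size=200_000)
print(y.mean(), alpha / lam)      # both close to alpha/lambda
print(y.var(), alpha / lam ** 2)  # both close to alpha/lambda^2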
9.2
Estimation of Linear Parameters
In their most natural form, generalized linear models assume n independent observations with yi ∼ f (yi |θi , φ; wi ) and
θi = xi β
for some vector of parameters β = (β1, . . . , βp)′. The density f(yi|θi, φ; wi) is defined by equation (9.1.1). The linear structure given above uses the canonical link function. More generally, the linear structure can be defined by g(mi) = xi β. For this extension, assume θi = ṙ⁻¹(mi) and θi = ṙ⁻¹(g⁻¹(xi β)) ≡ g∗(xi β). The likelihood function for the generalized linear model with canonical link function is
L(β; φ) = Πi=1..n f(yi | θi, φ; wi) = Πi=1..n f(yi | xi β, φ; wi).
Using equation (9.1.1), the log-likelihood is
ℓ(β; φ) ≡ log[L(β; φ)] = Σi=1..n log[f(yi | xi β, φ; wi)]
= Σi=1..n { wi [yi xi β − r(xi β)] / φ + log[h(φ, yi, wi)] } .   (1)
With φ fixed, the maximum likelihood estimate of β is obtained by solving for β in the likelihood equations
∂ℓ(β; φ)/∂βj = 0,   (2)
j = 1, . . . , p. It is a simple matter to see that taking the partial derivatives ∂(β; φ)/∂βj leads to likelihood equations of the form Qj (β) =0 φ for some functions Qj (·), j = 1, . . . , p. Obviously, the solution βˆ to such likelihood equations does not depend on the value of φ. Thus, βˆ is the maximum likelihood estimate for any value of φ; i.e., it is the maximum likelihood estimate regardless of the true value of φ. Essentially, the same analysis holds when a linear structure g(mi ) = xi β is assumed. With θi = g∗ (xi β), simply use g∗ (xi β) in place of xi β in equation (1). The only problem is that the partial derivatives become slightly more difficult to find. Maximum likelihood estimates are invariant under transformations of the parameters. In other words, given a maximum likelihood estimate for a parameter, any function of the maximum likelihood estimate is the maximum likelihood estimate for the corresponding function of the parameter. For a discussion of this property see Cox and Hinkley (1974, p. 287). Given ˆ we immediately obtain an a maximum likelihood estimate for β, say β, estimate of the expected value mi , namely ˆ m ˆ i = g −1 (xi β), an estimate of the linear predictor g(mi ), namely ˆ g(m ˆ i ) = xi β, and an estimate of θi , namely ˆ θˆi = g∗ (xi β).
Solving the likelihood equations (2) is typically accomplished by using the Newton-Raphson algorithm. For generalized linear models, this reduces to performing a series of weighted least squares regressions and is known as iteratively reweighted least squares. Sections 10.5 and 11.3 give details for the special cases of log-linear modeling and logistic regression. ˆ Under suitable conditions, the estimate βˆ and smooth functions of β, −1 ˆ e.g., m ˆ i = g (xi β), have asymptotic multivariate normal distributions. Moreover, an estimate of the asymptotic covariance matrix of βˆ is easily obtained from the iteratively reweighted least squares algorithm. Under suitable conditions, this estimate is consistent and also yields estimated ˆ Given the estiasymptotic covariance matrices for smooth functions of β. mates and the estimated asymptotic covariance matrices, standard normal theory methods for tests and confidence regions can be applied to yield asymptotic statistical inferences. This brief discussion of estimation has not addressed several important points. To perform the differentiations, the xi β’s need to define a regression so that the βj ’s are well defined. The partial derivatives need to be derived and shown to be of the form Qj (β)/φ, cf. Exercise 9.5.3. A solution to the likelihood equations must be shown to give the maximum of the loglikelihood. Exact conditions for the asymptotic results need to be stated; the necessary conditions may differ for different generalized linear models. For example, in regression analysis, one typically thinks about having the number of observations n go to infinity; however, for contingency table data, the number of cells in the table is n and is typically considered fixed, while the number of counts within the cells is assumed to get large. For more information on many of these issues see McCullagh and Nelder (1989).
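For a concrete illustration of the algorithm, the sketch below implements iteratively reweighted least squares for a Poisson regression with the canonical log link. It is written in Python with numpy; the simulated data and the two-column model matrix are invented for the example and are not from the text.

import numpy as np

# Iteratively reweighted least squares for a Poisson (log-linear) GLM
# with canonical log link: mu_i = exp(x_i beta).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])  # model matrix
y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1]))             # simulated counts

beta = np.zeros(X.shape[1])
for _ in range(25):
    eta = X @ beta              # linear predictor
    mu = np.exp(eta)            # fitted means (inverse link)
    W = mu                      # weights; Var(y_i) = mu_i for the Poisson
    z = eta + (y - mu) / mu     # working dependent variable
    XtW = X.T * W               # X'W
    beta = np.linalg.solve(XtW @ X, XtW @ z)  # weighted least squares step

print(beta)  # converges to the maximum likelihood estimate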
9.3
Estimation of Dispersion and Model Fitting
Generalized linear model theory focuses on the estimation of linear parameters. The general theory seems to be less well developed for the purposes of model fitting and dispersion estimation. The basic statistics used in model fitting and estimating functions of the dispersion parameter φ are the deviance and a generalization of the Pearson test statistic. There are patterns common to the use of these statistics, but specifics vary from case to case. Our discussion focuses on two general asymptotic approaches. In one approach, the number of observations n is allowed to go to infinity. This approach is appropriate for many linear model and logistic regression problems. The second approach fixes n and uses asymptotics based on other aspects of the model. This approach is appropriate for many contingency table problems. We begin by defining the statistics. The standardized deviance is simply the asymptotic form of the likelihood ratio statistic for testing a generalized linear model against the cor-
responding saturated model. Remember that in the likelihood analysis of a generalized linear model, the dispersion parameter φ is treated as fixed. A saturated model is simply one in which the number of parameters is so large that the data are fit perfectly. In particular, the model yi ∼ f (yi |θi , φ; wi ), with no restrictions on the θi ’s, is saturated because there are as many parameters θi as there are observations. The maximum likelihood estimates have m ˆ i = yi . This is easily established from the likelihood equations for the θi ’s. Substituting θi for xi β in (9.2.1) and taking partial derivatives with respect to the θi ’s gives yi = r( ˙ θˆi ) = m ˆ i as a solution to the equations. The estimates of the θi ’s for the saturated model are determined by the estimates of the mi ’s. The parameters β and m = (m1 , . . . , mn ) are assumed to be interchangeable, so write (m; φ) ≡ (β; φ). Also, write y = (y1 , . . . , yn ) . The standardized deviance is two times the difference between the maximum of the log-likelihood under the saturated model and the maximum of the log-likelihood under the specified generalized linear model, i.e., ˆ φ) = 2 [(y; φ) − (m; ˆ φ)] . D∗ (m; Here, y is used in (y; φ) because y is the maximum likelihood estimate of m for the saturated model. From inspection of (9.2.1), it is easily seen that the standardized deviance can be written as ˆ φ) = D∗ (m;
D∗(m̂; φ) = D(m̂) / φ
for a function D(m) ˆ that does not depend on φ. Define the function D(m) ˆ to be the deviance of the generalized linear model. Recall that in many important special cases, φ = 1. For normal theory linear models, D(m) ˆ is the sum of squares error. As mentioned before, the likelihood analysis treats φ as fixed and ignores the fact that the dispersion φ is a parameter. The standardized deviance ˆ φ) is only the likelihood ratio test statistic when φ is known. When D∗ (m; ˆ φ) is not even a statistic because it depends on an φ is unknown, D∗ (m; unknown parameter. D(m), ˆ on the other hand, does not depend on φ, so it is a statistic. Another statistic used to evaluate models and estimate dispersion is the generalized Pearson statistic. The Pearson statistic is defined as X2 =
Σi=1..n wi (yi − m̂i)² / V(m̂i)
9. Generalized Linear Models
where V (·) is the variance function defined in Section 1. See Exercise 9.5.6. The Pearson statistic can be used for consistent estimation of φ for large n. The variance of yi is V (mi )φ/wi so, clearly, 2 wi (yi − mi ) = φ. E V (mi ) By Chebyshev’s Weak Law of Large Numbers (cf. Rao, 1973, p. 112), if n 4 1 wi2 E(yi − mi ) →0 n2 i=1 V (mi )2
as n → ∞, then
1 wi (yi − mi ) P → φ. n i=1 V (mi ) n
2
P
It follows that if V (·) is a continuous function and m ˆ i → mi for all i, X2 P →φ n−p where p is the number of parameters in β and we are assuming that the n × p model matrix x1 .. X= . xn
has rank(X) = p not depending on n. Typically, we take X2 φˆ = . n−p ˆ Continuous functions of the dispersion, say d(φ), are estimated with d(φ). If the Pearson estimate is consistent, continuous functions of it are also consistent estimates. Under some large sample conditions with fixed n, for all values of φ the standardized Pearson statistic has the asymptotic distribution X2 ∼ χ2 (n − p). φ In these cases, standard methods of variance estimation for normal data can be applied to give asymptotic confidence intervals and tests for φ, cf. Christensen (1996a, Sec. 2.6). Using properties of the χ2 distribution, the asymptotic distribution also leads to the approximation . E X 2 = φ · (n − p).
Once again, an obvious estimate of φ is X2 . φˆP = n−p Similarly, under certain conditions with n fixed, the standardized deviance D∗ (m; ˆ φ) has the asymptotic distribution D∗ (m; ˆ φ) ∼ χ2 (n − p) for all values of φ. By definition, D(m) ˆ = φD∗ (m; ˆ φ), so, asymptotically, D(m) ˆ ∼ φχ2 (n − p). Again, when the asymptotic distribution is valid, standard methods of variance estimation for normal data can be applied to give asymptotic confidence intervals and tests for φ. An obvious point estimate of φ is D(m) ˆ . φˆD = n−p Unfortunately, the deviance-based estimate is frequently inconsistent when n → ∞. Even in the simplest binomial case, yi ∼ Bin(1, p), this estimate is not consistent. For this case, φ ≡ 1, but for large samples, it is easily seen that D P → −2 [p log(p) + (1 − p) log(1 − p)] n−1 which is not typically equal to 1. Deviances and Pearson statistics can also be used to evaluate the adequacy of generalized linear models. If null distributions are available for the statistics, tests of the adequacy of models can be performed. These can be unconditional, exact conditional, approximate conditional, or asymptotic distributions. If null distributions are not available, the statistics can be used in an exploratory fashion to give rough ideas of model adequacy. If the data follow a true one-parameter exponential family distribution, the dispersion parameter φ is identically constant and, without loss of generality, we can take φ ≡ 1. If the model is correct and the Pearson statistic gives a consistent estimate of φ, X2 P → 1. n−p If X 2 /(n−p) is substantially larger than 1, it is an indication that the model is incorrect. With φ ≡ 1, the standardized deviance equals the deviance.
If the deviance estimate of φ is consistent, the deviance can also be used to evaluate lack of fit. When n is fixed, if the deviance has an asymptotic χ2 distribution, the deviance is a lack of fit test statistic that can be used in a formal asymptotic test of the generalized linear model against the saturated model. Similarly, if X 2 has an asymptotic χ2 distribution, the Pearson statistic can be used in a lack of fit test. To test a model g(mi ) = xi β against a reduced model, say g(mi ) = x0i γ in a one-parameter family, simply compare the difference in the deviances to an appropriate χ2 distribution. In particular, the asymptotic generalized likelihood ratio test rejects the adequacy of the reduced model at the α level if ˆ > χ2 (1 − α, p − p0 ). D(m ˆ 0 ) − D(m) ˆ 0 ) are the maximum likelihood estimate of m and the Here, m ˆ 0 and D(m deviance under the reduced model. The model is a reduced model in the sense that X0 = XB for some matrix B where x01 .. X0 = . x0n
and rank(X0 ) = p0 . Such reduced model tests tend to be asymptotically valid under weaker conditions than general lack of fit tests. In particular, the tests are often valid under both asymptotic approaches discussed here. Less formally, if (D(m ˆ 0 ) − D(m))/(p ˆ − p0 ) is a credible estimate of φ ≡ 1 under the reduced model, it makes sense to reject the reduced model whenever the estimate is much larger than 1. Both Poisson regression and logistic regression fit into this one-parameter framework. For Poisson regression, the deviance is G2 and often can be used for lack of fit tests. In both Poisson and logistic regression, the asymptotic χ2 approximation for the test of a model against a reduced model is often valid. However, we have seen that the lack of fit statistic for logistic regression is typically not asymptotically χ2 . Care must be used in applying the asymptotic results given above. The specifics of each situation must be considered. For generalized linear models with a nontrivial dispersion parameter, we can only test reduced models against larger models. An appealing asymptotic test is to reject the adequacy of the reduced model at the α level if ˆ (p − p0 ) D(m ˆ 0 ) − D(m) > F (1 − α, p − p0 , n − p). D(m) ˆ (n − p) This relies not only on D(m ˆ 0 ) − D(m) ˆ φ and D(m)/φ ˆ being asymptotically χ2 but also on them being asymptotically independent. As discussed
earlier, the χ2 approximation to the distribution of D(m)/φ ˆ frequently requires asymptotics based on fixed n. For normal theory models, this is the usual F test. If(a) n is large, (b) ˆ (n − p) is a consistent estimate of φ, and D(m) (c) D(m ˆ 0 ) − D(m) ˆ φ is asymptotically χ2 , we get the asymptotic null distribution D(m ˆ 0 ) − D(m) ˆ (p − p0 ) χ2 (p − p0 ) ∼ p − p0 D(m) ˆ (n − p) 2 with a corresponding test. If X (n − p) is a consistent estimate of φ, the Pearson estimate can be used in the denominator of the asymptotic test. This is of particular importance when D( m) ˆ (n − p) is not consistent but 2 ˆ 0) − X (n − p) is. Again, a less formal evaluation can be made if D(m D(m) ˆ (p − p0 ) is a plausible estimate of φ under the reduced model. The reduced model is called in question when the test statistic is much larger than 1. Note that as n − p approaches infinity, the F (p − p0 , n − p) distribution approaches a χ2 (p − p0 )/(p − p0 ). Even though the appropriate asymptotic distribution for large n is a rescaled χ2 , for data analytic purposes it may not be unreasonable to use F tables instead. Normal linear models and gamma distribution regression both fit into the nontrivial dispersion parameter framework. As always, appropriate conditions must be met for the asymptotic results to be valid. For normal theory linear models, the deviance and Pearson statistic both equal the error sum of squares and the F distribution is exact. It does not rely on any asymptotic arguments.
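For the one-parameter Poisson case (φ ≡ 1, wi ≡ 1), the deviance and generalized Pearson statistic described in this section take a simple form. The sketch below (Python with numpy; the counts and fitted means are made-up inputs, not data from the text) computes both, along with the estimate X²/(n − p) discussed above.

import numpy as np

def poisson_deviance(y, mu):
    # D(muhat) = 2 * sum[ y log(y/mu) - (y - mu) ], with 0 log 0 taken as 0
    ratio = np.where(y > 0, y / mu, 1.0)  # so y*log(ratio) is 0 when y = 0
    return 2.0 * np.sum(y * np.log(ratio) - (y - mu))

def pearson_X2(y, mu):
    # sum of (y_i - mu_i)^2 / V(mu_i), with V(m) = m for the Poisson
    return np.sum((y - mu) ** 2 / mu)

# hypothetical observed counts and fitted means from some p-parameter model
y = np.array([4., 0., 7., 2., 5., 1.])
mu = np.array([3.1, 0.8, 6.2, 2.4, 4.9, 1.6])
n, p = len(y), 2
print(poisson_deviance(y, mu), pearson_X2(y, mu), pearson_X2(y, mu) / (n - p))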
9.4
Summary and Discussion
Without a doubt, iteratively reweighted least squares for generalized linear models is a remarkably useful computing device. A review of the use of iterative generalized least squares in statistical estimation is given by del Pino (1989). Iteratively reweighted least squares is a special case of iterative generalized least squares. Generalized linear models are designed to treat independent observations that have a distribution in the one-parameter exponential family. They also provide maximum likelihood estimates of appropriate functions of the β parameters when each observation has a distribution in a particular family of two-parameter distributions. This two-parameter family is chosen so that a trick commonly used in estimation for normal theory linear models works for the entire family. The trick is that maximum likelihood estimates of β can be found easily for any value of φ. These estimates do not depend on φ; therefore, the estimates must be maximum likelihood even when φ is an unknown parameter. Given the maximum likelihood estimates of β,
finding the maximum likelihood estimate of φ requires solving one equation: ˆ φ)/dφ = 0. This method is used by Christensen (1996b, Section 2.4) d(β; to find maximum likelihood estimates for normal theory linear models. Maximum likelihood estimation of φ seems preferable to the essentially ad hoc methods that are illustrated above. Of the examples that we have considered, Poisson regression and logistic regression are generalized linear models for one-parameter exponential families. Poisson sampling seems to be relatively uncommon for contingency tables; the standard sampling schemes are multinomial and productmultinomial. However, under very mild conditions, maximum likelihood estimates for Poisson sampling are also maximum likelihood estimates for the other sampling schemes. Of course, logistic regression can also be viewed as a special case of product-multinomial log-linear modeling. In regard to Poisson sampling, Santner and Duffy (1989, Problem 3.3) present an interesting data set. The data, originally given in Quine (1975), are on the number of absences of 113 Australian school children. The data are categorized using four factors: age at three levels, sex, cultural background (aboriginal, white), and learning ability (slow, average). The number of absences for different children might be considered as observations on independent Poisson random variables. Note that the number of cross-classifications from the four factors is 3 × 2 × 2 × 2 = 24, but there are 113 Poisson observations. The analysis of such data would be analogous to a four-factor analysis of variance with unequal numbers of replications on the various treatments. Moreover, Santner and Duffy (1989, p. 135) suggest that the data suffer from overdispersion, i.e., φ > 1. It is interesting to note that an observed value of X 2 /(n − p) much larger than 1 can indicate either lack of fit or overdispersion. See McCullagh and Nelder (1989, Sections 4.5, 5.5) for discussion of overdispersion. The other two examples considered in this chapter, normal theory linear models and gamma distribution regression, involve the two-parameter family of distributions that was used in the basic theory. Generalized linear model methods can be used to analyze other useful models; see McCullagh and Nelder (1989) for a broad range of applications. While generalized linear models are a useful idea and provide an excellent computing device, care must be taken in their application. For log-linear models, Poisson sampling does not always lead to the same analysis as multinomial and product-multinomial sampling. The distinctions as well as the similarities must be kept in mind. The validity of asymptotic distributions must also be examined carefully. As seen in Section 11.2, a careful analysis of asymptotic issues can be quite complicated. Some extensions of the basic theory such as overdispersion, e.g., allowing φ to be a nondegenerate parameter in binomial and Poisson sampling, and quasi-likelihood methods have been proposed. Such extensions are widely accepted as providing valuable data analytic tools; however, many people have difficulty in understanding the theoretical basis for them.
9.5
Exercises
Exercise 9.5.1. Show that for the binomial model of Section 1, r(θ) = ˙ = p and r¨(θ) = p(1 − p). log(1 + eθ ) and that r(θ) Exercise 9.5.2. For the gamma model of Section 1, use the definition of θ and r(θ) to show that the mean and variance are as given. Exercise 9.5.3. Show that the likelihood equations (9.2.2) have the form Qj (β)/φ = 0, j = 1, . . . , p. Exercise 9.5.4. Show that if f (yi |θ, φ; w) from (9.1.1) isthe common n density of independent observations yi , i = 1, . . . , n, then i=1 yi has a density f (yi |θ∗ , φ∗ , w∗ ) for some θ∗ , φ∗ , and w∗ . Exercise 9.5.5. Let Y = (y1 , . . . yn ) . Show that a generalized linear model with canonical link has X Y as a sufficient statistic. Exercise 9.5.6. Using the definitions of this chapter, find the Pearson statistic (defined in Section 3) for Poisson and binomial regression in terms of the yi ’s and mi ’s. Show that these are identical to the Pearson statistics defined in Chapter 2.
10 The Matrix Approach to Log-Linear Models
Analysis of variance and regression analysis are both branches of linear model theory. Regression analysis and linear model theory are usually taught using matrices. It is less common to teach analysis of variance with matrices. Although standard log-linear model theory is analogous to analysis of variance, the basic results are more easily stated in matrix notation. It is assumed that the reader is familiar with the basics of using matrices. We begin with some simple examples of writing log-linear models with matrices. Example 10.0.1.
Consider a 3 × 4 table. The log-linear model
log(mij ) = u + u1(i) + u2(j) , can be written in matrix form as 1 1 log(m11 ) log(m12 ) 1 1 log(m13 ) 1 1 log(m14 ) 1 1 log(m21 ) 1 0 log(m22 ) 1 0 = log(m23 ) 1 0 log(m24 ) 1 0 log(m31 ) 1 0 log(m32 ) 1 0 1 0 log(m33 ) 1 0 log(m34 )
0 0 0 0 1 1 1 1 0 0 0 0
i = 1, . . . 3 , j = 1, . . . , 4 , 0 0 0 0 0 0 0 0 1 1 1 1
1 0 0 0 1 0 0 0 1 0 0 0
0 1 0 0 0 1 0 0 0 1 0 0
0 0 1 0 0 0 1 0 0 0 1 0
0 0 0 1 0 0 0 1 0 0 0 1
u
u1(1) u1(2) u1(3) . u2(1) u2(2) u2(3) u2(4)
(1)
The log-linear model log(mijk ) = u + u1(i) + u2(j) + u12(ij)
(2)
can be written in matrix form as
u
u1(1) u1(2) u1(3) u log(m11 ) 2(1) u2(2) log(m ) 12 log(m13 ) u2(3) log(m ) u 14 2(4) log(m21 ) u12(11) log(m22 ) u log(m ) = X u12(12) 23 12(13) log(m24 ) u12(14) log(m31 ) u12(21) log(m ) u 32 12(22) u12(23) log(m33 ) u12(24) log(m34 ) u 12(31) u12(32) u12(33) u12(34)
where 1 1 1 1 1 1 1 1 1 0 1 0 X= 1 0 1 0 1 0 1 0 1 1
0 0
0 0 0 0 1 1 1 1 0 0 0 0
0 0 0 0 0 0 0 0 1 1 1 1
1 0 0 0 1 0 0 0 1 0 0 0
0 1 0 0 0 1 0 0 0 1 0 0
0 0 1 0 0 0 1 0 0 0 1 0
0 0 0 1 0 0 0 1 0 0 0 1
1 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0
The uniform association model log mij = u + u1(i) + u2(j) + γxi wj
0 0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 . 0 0 0 0 0 1
316
10. The Matrix Approach to Log-Linear Models
can be written log(m11 ) 1 1 log(m12 ) 1 1 log(m13 ) 1 1 log(m14 ) 1 1 log(m21 ) 1 0 log(m22 ) 1 0 = log(m23 ) 1 0 log(m24 ) 1 0 log(m31 ) 1 0 log(m32 ) 1 0 1 0 log(m33 ) 1 0 log(m34 )
0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 0 1 0 1 0 1 0 1
1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0
0 0 1 0 0 0 1 0 0 0 1 0
0 0 0 1 0 0 0 1 0 0 0 1
x1 w1 x1 w2 x1 w3 x1 w4 x2 w1 x2 w2 x2 w3 x2 w4 x3 w1 x3 w2 x3 w3 x3 w4
u
u1(1) u1(2) u1(3) u2(1) . u2(2) u2(3) u2(4) γ
A matrix with only one column will be referred to as a vector. Let x = (x1 , . . . , xq ) be a vector. Define log(x) = (log(x1 ), log(x2 ), . . . , log(xq )) . Consider a table with any number of dimensions that has q cells in it. For a 3 × 4 table, q = 12. For an I × J × K table, q = IJK. The expected cell counts are denoted by the vector m = (m1 , . . . , mq ) . A log-linear model is a model log(m) = Xβ where log(m) is a q × 1 vector of unknown parameters, X is a q × p matrix with known values (often X consists entirely of 0s and 1s), and β is a p × 1 vector of unknown parameters. In Example 10.0.1, the log-linear model (1) has an X matrix with 12 rows and 8 columns that consists entirely of 0s and 1s. The β vector was the 8 × 1 matrix (u, u1(1) , u1(2) , u1(3) , u2(1) , u2(2) , u2(3) , u2(4) ) . For model (2), the X matrix has 12 rows and 20 columns. The β vector is a 20 × 1 matrix that contains u, the u1(i) ’s, the u2(j) ’s, and the u12(ij) ’s.
Example 10.0.2.
Consider a 2 × 3 × 2 table. The model log(mijk ) = u + u1(i) + u2(j) + u3(k)
10. The Matrix Approach to Log-Linear Models
can be written log(m111 ) 1 1 log(m112 ) 1 log(m121 ) 1 log(m122 ) log(m131 ) 1 1 log(m132 ) = log(m211 ) 1 log(m212 ) 1 log(m221 ) 1 log(m222 ) 1 1 log(m231 ) 1 log(m232 )
log(m)
1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 1 1
=
1 1 0 0 0 0 1 1 0 0 0 0
0 0 1 1 0 0 0 0 1 1 0 0
0 0 0 0 1 1 0 0 0 0 1 1
1 0 1 0 1 0 1 0 1 0 1 0
0 1 0 1 0 1 0 1 0 1 0 1
u
317
u1(1) u1(2) u2(1) u2(2) u2(3) u3(1) u3(2)
X
β.
The model log(mijk ) = u + u1(i) + u2(j) + u3(k) + u23(jk) can be written log(m111 ) log(m112 ) log(m121 ) log(m ) 122 log(m131 ) log(m132 ) log(m ) 211 log(m212 ) log(m221 ) log(m ) 222 log(m231 ) log(m232 )
log(m)
=
1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 0 1 0 1 1
=
Exercise 10.1. log(m) = Xβ.
0 0
0 0 0 0 0 0 1 1 1 1 1 1
1 1 0 0 0 0 1 1 0 0 0 0
0 0 1 1 0 0 0 0 1 1 0 0
0 0 0 0 1 1 0 0 0 0 1 1
1 0 1 0 1 0 1 0 1 0 1 0
0 1 0 1 0 1 0 1 0 1 0 1
X
1 0 0 0 0 0 1 0 0 0 0 0
0 1 0 0 0 0 0 1 0 0 0 0
0 0 1 0 0 0 0 0 1 0 0 0
0 0 0 1 0 0 0 0 0 1 0 0
u 0 0 u1(1) 0 0 u1(2) 0 0 u2(1) 0 0 u2(2) u2(3) 1 0 0 1 u3(1) 0 0 u3(2) 0 0 u23(11) 0 0 u23(12) 0 0 u23(21) 1 0 u 23(22) 0
1
u23(31) u23(32)
β.
For a 3 × 4 table, write model (7.1.7) in the form
Exercise 10.2. For a 2 × 3 × 2 table, write the models [2][13], [13][23], [12][23][13], and [123] in the form log(m) = Xβ.
318
10. The Matrix Approach to Log-Linear Models
One advantage of establishing results for a general log-linear model log(m) = Xβ is the flexibility of the model. Results apply to ANOVA type models for any number of dimensions. The X matrix can be any known matrix, so models that incorporate known scores for ordered categories or use predictor variables to model interactions are also special cases of the general log-linear model. In this chapter, we present a summary of some basic results in maximum likelihood theory for log-linear models. Most of the results are presented more rigorously in Chapter 12. In Sections 1 and 2, results are presented for multinomial sampling. In Section 3, the extension to product-multinomial sampling is discussed. Section 4 discusses drawing inferences about model parameters. Section 5 examines the Newton-Raphson alternative to iterative proportional fitting for finding MLEs. Section 6 discusses the GSK method of fitting log-linear models. Section 7 considers residual analysis.
10.1
Maximum Likelihood Theory for Multinomial Sampling
Suppose we have a table with q cells, observations n = (n1 , . . . , nq ) , and the log-linear model log(m) = Xβ holds. Under multinomial sampling, the likelihood function is q
n· ! L(p) = q
i=1 ni ! i=1
pni i
where p = (p1 , . . . , pq ) . Equivalently, we can write this as a function of m because mi = n· pi and n· is the known sample size. In terms of the mi ’s, the likelihood becomes n· ! L(m) = q
q
i=1 ni ! i=1
(mi /n· )ni .
Estimation Maximum likelihood estimates (MLEs) are values m ˆ i that maximize L(m) subject to the constraints of our model. There are two constraints on the model: One is the log-linear structure log(m) = Xβ
for some
β
(1)
and q the other relates to the fact that with multinomial sampling, 1 = i=1 pi . This second condition is equivalent to n· = m· . Let J be a q × 1 vector consisting entirely of 1s. The condition n· = m· can be written as n J = m J .
(2)
10.1 Maximum Likelihood Theory for Multinomial Sampling
319
Rather than maximizing L(m), it is simpler to maximize the log of L(m), log L(m) = log(n· !) −
q
log(ni !) +
i=1
q
ni log(mi ) −
i=1
q
ni log(n· ) .
i=1
q The only term that involves the mi ’s is i=1 ni log(mi ) = n log(m), so it is enough to maximize (m) ≡ n log(m) . The MLE, m, ˆ is the value that maximizes (m) subject to conditions (1) and (2). In other words, m ˆ must have the properties that log(m) ˆ = X βˆ
for some βˆ ,
(3)
n J = m ˆ J
(4)
˜ J, then and if m ˜ is any other vector with log(m) ˜ = X β˜ and n J = m (m) ˜ ≤ (m) ˆ . It turns out that for a broad class of possible X matrices, the maximization can be performed without imposing condition (2). As will be discussed below, this occurs because the maximum of (m) subject only to condition (1) automatically satisfies condition (2). A standard method for finding the maximum of (m) subject to condition (1) is by setting appropriate partial derivatives equal to zero. It can be shown that the partial derivatives are zero at the point m ˆ that satisfies n X = m ˆ X,
(5)
cf. Chapter 12. Moreover, by considering the matrix of second partial derivatives, it can be shown that if (m) achieves its maximum, subject to the constraint log(m) = Xβ, then it will be at the unique value m ˆ that satisfies conditions (3) and (5). In other words, any value m ˆ that satisfies the (marginal) constraints (5) and the model (3) is the maximum likelihood estimate of m provided a maximum exists. This point was made repeatedly in Section 3.2. If X is chosen appropriately, then any m ˆ that satisfies conditions (3) and (5) automatically satisfies condition (2), i.e., satisfies (4). Before examining this claim, we introduce a very useful concept in log-linear model theory, the column space of X. The column space of X is defined to be the set C(X) = {µ|µ = Xβ
for some
β} .
Thus, C(X) consists of all of the possible values for log(m) that satisfy the log-linear model. Earlier, we discussed the fact that the models log(mij ) = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik)
(6)
320
10. The Matrix Approach to Log-Linear Models
and log(mij ) = u12(ij) + u13(ik)
(7)
are equivalent. If we write the model in equation (6) as log(m) = X1 β1 and the model in equation (7) as log(m) = X2 β2 , it is not difficult to show that C(X1 ) = C(X2 ). In other words, the possible values for log(m) are identical in models (6) and (7). That is why the models are equivalent. We now return to the claim that if m ˆ satisfies conditions (3) and (5), then for an appropriate X matrix, condition (4) is also satisfied. Suppose that J ∈ C(X). In other words, for some vector b, J = Xb. If m ˆ satisfies condition (5), then it follows that ˆ Xb = m ˆ J; n J = n Xb = m hence, condition (4) is satisfied. Thus, if J ∈ C(X) and if the MLE of m exists, then we can find the MLE by finding m ˆ that satisfies conditions (3) and (5). We will not give a detailed discussion concerning when MLEs exist; for such a discussion, see Haberman (1974a). However, we will mention one result. It is an immediate consequence of Theorem 12.2.1 that if ni > 0 for all i = 1, . . . , q, then the MLEs exist. The condition imposed above on X, i.e., J ∈ C(X), is not an onerous condition. It simply means that the model has a parameter u (with no subscripts) or that the model is equivalent to a model that contains a u term. The condition J ∈ C(X) is also necessary for the asymptotic results discussed in Section 2 and Chapter 12. We will henceforth always assume that J ∈ C(X). One final point: The MLE of m does not really depend on X, it depends on C(X). Any two parametrizations log(m) = X1 β1 and log(m) = X2 β2 with C(X1 ) = C(X2 ) have exactly the same MLE of m. Example 10.1.1. One version of the three-dimensional saturated model is log(mijk ) = u123(ijk) . If this is written in matrix form, X = Iq where Iq is the q × q identity matrix and q = IJK. The conditions for MLEs become log(m) ˆ = Iq βˆ = βˆ for some and
ˆ Iq . n Iq = m
βˆ
10.1 Maximum Likelihood Theory for Multinomial Sampling
321
Clearly, m ˆ = n satisfies the second of these equations and βˆ = log(n) satisfies the first equation. Thus, the MLE of m is m ˆ = n in a threedimensional saturated model. In fact, this argument is valid for any saturated model. The idea of a saturated model is that there are enough parameters to explain the data perfectly. This translates to the idea that C(X) = C(Iq ) = Rq . Example 10.1.2.
Consider a three-dimensional table and the model
log mijk = u + u1(i) + u2(j) + u3(k) + u23(jk) . If one writes out the matrix X (cf. Example 10.0.2), it is easily seen that ˆ ··· , ni·· = m ˆ i·· , n·j· = m ˆ ·j· , the condition n X = m X is precisely n··· = m ˆ ··k , n·jk = m ˆ ·jk . Several of these conditions are redundant. It is n··k = m ˆ i·· and n·jk = m ˆ ·jk . The other relationships follow sufficient to have ni·· = m from these two. If the model is reparametrized as log(mijk ) = u1(i) + u23(jk) , then the condition n X = m X for the new X matrix gives only the condiˆ i·· and n·jk = m ˆ ·jk . The MLEs of the mijk ’s are precisely the tions ni·· = m ˆ i·· and n·jk = m ˆ ·jk and can be written as values m ˆ ijk that satisfy ni·· = m ˆ1(i) + u ˆ23(jk) log(m ˆ ijk ) = u for some values u ˆ1(i) and u ˆ23(jk) . It is easily seen that if m ˆ ijk = ni·· n·jk /n··· , these conditions are satisfied. Exercise 10.3. (a) For a 3 × 4 table, find the conditions that the MLEs must satisfy in model (7.1.4). (b) Repeat (a) for model (7.1.7).
Testing Hypotheses One of the tests that is allowed for three-way tables is the test of [1][2][3] versus [1][23]. If these models are written as log(m) = X0 β0 and log(m) = Xβ, respectively, the fact that [1][23] is a larger model is reflected in the fact that C(X0 ) ⊂ C(X). Recall that for a 2 × 3 × 2 table, X0 and X where presented in Example 10.0.2. In general, if we assume that log(m) = Xβ holds and that C(X0 ) ⊂ C(X), we can test the hypothesis H0 : log(m) = X0 β0 against the hypothesis HA : (H0 is not true) .
322
10. The Matrix Approach to Log-Linear Models
The likelihood ratio test statistic is ˆ 0 ) − log L(m)] ˆ G2 = −2[log L(m where m ˆ 0 is the MLE of m under the assumption that H0 is true and m ˆ is the MLE under the “unrestricted” model. However, in this case, the “unrestricted” model is that log(m) = Xβ. It is easily seen that G2
= −2[(m ˆ 0 ) − (m)] ˆ ˆ 0 ) − n log(m)] ˆ = −2[n log(m = 2n [log(m) ˆ − log(m ˆ 0 )] q ni log(m ˆ i /m ˆ 0i ) = 2 i=1
ˆ q ) is the MLE of m for log(m) = Xβ and where again m ˆ = (m ˆ 1, . . . , m ˆ 01 , . . . , m ˆ 0q ) is the MLE of m under the restriction that log(m) = m ˆ 0 = (m X0 β0 . In fact, G2 can be written as G2 = 2
q
m ˆ i log(m ˆ i /m ˆ 0i ),
i=1
which is our usual form. To see this equivalence, note that log(m) ˆ = X βˆ for ˆ ˆ 0 ) = X0 βˆ0 = X γˆ some β, and because C(X0 ) ⊂ C(X), we can write log(m ˆ X = n X. By substitution, for some βˆ0 and γˆ . Recall that m G2 = 2n [log(m) ˆ − log(m ˆ 0 )] = =
2n [X βˆ − X γˆ ] 2n X[βˆ − γˆ ]
= 2m ˆ X[βˆ − γˆ ] = 2m ˆ [log(m) ˆ − log(m ˆ 0 )] q m ˆ i log(m ˆ i /m ˆ 0i ) . = 2 i=1
10.2
Asymptotic Results
This section presents a few of the primary asymptotic results for log-linear models under multinomial sampling and mentions some applications of those results. More precise versions of these results are available in Chapter 12. We begin by setting some notation. Let x be a q × 1 vector. D(x) is used to denote the q × q diagonal matrix D(x) = [dij ]
where dii = xi , dij = 0,
i = j .
10.2 Asymptotic Results
323
One diagonal matrix is used often and has a special notation: D ≡ D(p) . Recall that J is a q × 1 vector of 1s. Define A = X(X DX)−1 X D and
Az = J(J DJ)−1 J D
where it is assumed (but not really necessary) that a parametrization log(m) = Xβ has been chosen so that the inverse of X DX exists. Note that because D(m) = n· D, D can be replaced by D(m) in A and Az without changing the resulting matrices. Note that A and Az depend on the unknown parameters p. We can estimate A and Az simply by estimating p. Rather than frequently writing log(m), let µ ≡ log(m) . If m ˆ is the MLE of m, µ ˆ = log(m) ˆ is the MLE of the µ. This follows from the invariance of maximum likelihood estimates; for any parameter θ and ˆ the MLE of a function of θ, say f (θ), is the corresponding function MLE θ, ˆ cf. Cox and Hinkley (1974, p. 287). of the MLE, f (θ), The key asymptotic results about MLEs are given in the following subsections. Throughout, let N ≡ n· .
Estimation We begin with results about the large sample distribution of the maximum likelihood estimates. Theorem 10.2.1. Let µ = Xβ be a log-linear model for a table with q cells. Let n be the result of a multinomial sample of N observations: (a) For N sufficiently large, µ ˆ − µ has the approximate distribution N (0, [A − Az ]D−1 (m)). P
(b) As N gets large, µ ˆ − µ converges (in probability) to zero; i.e., µ ˆ−µ → 0. (c) For N sufficiently large, m ˆ − m has the approximate distribution N (0, D(m)[A − Az ]). P
ˆ → p. (d) As N gets large, m/N ˆ converges (in probability) to p, i.e., N −1 m
324
10. The Matrix Approach to Log-Linear Models
Technically, (a) and (c) deal with convergence in distribution and are similar in spirit to the Central Limit Theorem. In Chapter 11, we will have 1 L µ − µ) → N (0, [A − Az ]D−1 ) and occasion to write such results as (a) N 2 (ˆ L L (c) N −1/2 (m−m) ˆ → N (0, D[A−Az ]). The symbol → indicates convergence in distribution. The L comes from the fact that a distribution is sometimes referred to as a distributional law or simply as a law. One interesting aspect of Theorem 10.2.1 is that, although µ ˆ−µ converges to zero, µ ˆ by itself does not converge to anything. As N gets large, µ = log(m) = log(N p) also gets large. Although the difference µ ˆ − µ gets small, we cannot say that µ ˆ converges to µ because µ changes with N . Corollary 10.2.2. If the inverse of (X DX) exists and βˆ satisfies ˆ then βˆ − β converges (in probability) to zero. µ ˆ = X β, Consider the problem of drawing asymptotic inferences about a particular cell. The parameters of interest are pi , mi , and µi . We will start from the premise that estimates of the mi ’s are available. These can be obtained from iterative proportional fitting as discussed in Section 3.3 or from use of the Newton-Raphson algorithm as discussed later in Section 5. Recall that ˆ q ) , m ˆ = (m ˆ 1, . . . , m
ˆ q )) , µ ˆ = log(m) ˆ = (log(m ˆ 1 ), . . . , log(m and pˆ =
1 m ˆ = (m ˆ 1 /N, . . . , m ˆ q /N ) . N
To use Theorem 10.2.1, we need one key result. If Y is a q × 1 vector with a multivariate normal distribution, i.e., Y ∼ N (ξ, Σ) , and if ρ is a q × 1 vector, then the scalar random variable ρ Y has a (univariate) normal distribution. In particular, ρ Y ∼ N (ρ ξ, ρ Σρ) . Let ei = (0, · · · , 0, 1, 0, · · · , 0) where the 1 is in the ith place. It follows that µ ˆi − µi m ˆ i − mi
= ei (ˆ µ − µ) , = ei (m ˆ − m) ,
and p − p) . pˆi − pi = ei (ˆ
10.2 Asymptotic Results
325
Applying Theorem 10.2.1, for N large we get the approximations µ ˆi − µi ∼ N 0, ei [A − Az ]D−1 (m)ei , m ˆ i − mi ∼ N (0, ei D(m)[A − Az ]ei ) ,
and pˆi − pi ∼ N
0, ei
1 D(m)[A − Az ]ei N2
.
In order to use these results, we need to be able to find or at least estimate the variances. We begin with ei [A − Az ]D−1 (m)ei . This value is the ith diagonal element of A D−1 (m) minus the ith diagonal element of Az D−1 (m). To find ei AD−1 (m)ei , note that 1 ei , D−1 (m)ei = mi so ei AD−1 (m)ei =
1 e Aei . mi i
The value ei Aei is just aii , the ith diagonal element of A. This is precisely the leverage of the ith case. Leverages were introduced in Section 6.7 and methods for estimating them were given. The maximum likelihood estimate of ei AD−1 (m)ei = aii /mi is ˆi. a ˆii /m ei Az D−1 (m)ei
The computation of is even simpler. For multinomial sampling, Az ≡ J(J DJ)−1 J D = JJ D . This follows because J DJ = p· = 1. Moreover, Az D−1 (m)
= JJ DD−1 (m) 1 = JJ D(m)D−1 (m) N 1 JJ , = N
so ei Az D−1 (m)ei =
1 . N
Combining results, we see that ei [A − Az ]D−1 (m)ei =
aii 1 − ; mi N
326
10. The Matrix Approach to Log-Linear Models
thus,
aii 1 − µ ˆi − µi ∼ N 0, mi N
.
Estimating the variance leads to the approximation a ˆii 1 ∼ N (0, 1) . (ˆ µi − µi ) − m ˆi N Large sample confidence intervals for µi follow immediately, e.g., a 95% confidence interval has the end points a ˆii 1 . − µ ˆi ± 1.96 m ˆi N The α = .10 large sample test of H0 : µi = µi0 versus HA : µi = µi0 rejects when aii 1 > 1.645 . |ˆ µi − µi0 | − mi N Similar arguments lead to the asymptotic results Var(m ˆ i ) = mi aii − m2i /N and Var(ˆ pi ) = pi aii /N − p2i /N . Estimating the variances yields to the large sample distributions and
m ˆ i − mi m ˆ ia ˆii − m ˆ 2i /N pˆi − pi
pˆi (ˆ aii − pˆi )/N
∼ N (0, 1)
∼ N (0, 1) .
Given the distributions, inferential procedures follow in the usual way. Just as in regression analysis, leverages fall between zero and one and the sum of all of the leverages is precisely the degrees of freedom for the model, i.e., the rank of X. The first of these facts implies that an upper bound on the variance can always be obtained by taking aii = 1. This is convenient because when iterative proportional fitting has been used, finding a ˆii requires the computation of an auxiliary regression analysis. Assuming aii = 1 can be highly conservative because the true aii value may be much less than one. The second fact gives some idea of the extent of overestimation using aii = 1. If the table has q = 24 cells and the model has 12 degrees of freedom, the average size of the aii ’s is 12/24 = 12 . Thus, the variance terms based on aii = 1 tend to be about twice as large as they are using the estimates a ˆii .
10.2 Asymptotic Results
327
It is interesting to note that using the upper bound aii = 1 is equivalent to computing the variance under the saturated model. In the saturated model, we can take X = I. This implies that A = I(IDI)−1 ID = I . Thus, for all i under the saturated model, aii = 1. Clearly, the use of reduced models serves to reduce the variance of estimated cell parameters. Example 10.2.3. In the abortion opinion data of Chapter 3 with the model [RSO][OA] (cf. Table 6.7), the cell for nonwhite males between 18 ˆii = .222. and 25 years of age who support abortionhas m ˆ i = 14.52 and a 2 /2385 = is 14.52(.222) − (14.52) The asymptotic standard error for m ˆ i √ 3.2234 − .0884 = 1.77. An asymptotic 95% confidence interval for mi has end points 14.52 ± 1.96(1.77) . The interval is (11.05, 17.99). Similar computations lead to a 95% confidence interval for µi with end points 2.68 ± 1.96(.123) and a 90% confidence interval for pi with end points .0061 ± 1.645(.000742) .
Besides the parameters for individual cells, the parameters of primary interest are contrasts in the µi ’s. Contrasts in the µi ’s correspond to vectors ρ in which the elements of ρ add up to zero, i.e., ρ J = 0. The simplest such contrasts are log odds, but log odds ratios, the log of ratios of odds ratios, and so on, are also contrasts in the µi ’s. All of these correspond to ˆ i ’s, there functions ρ µ in which ρ has a very simple structure. Given the m ˆ = ρ log(m). ˆ The problem is in computing is no problem in computing ρ µ the variance. Finding variances for estimated contrasts is more complicated than finding them for estimates of cell parameters because contrasts involve the covariances between the estimated cell parameters. However, the fact that we are dealing with contrasts leads to one simplification based on ρ J = 0. ˆ) Var(ρ µ
= ρ (A − Az )D−1 (m)ρ = ρ AD−1 (m)ρ − ρ Az D−1 (m)ρ 1 = ρ X(X D(m)X)−1 X ρ − ρ JJ ρ N = ρ X(X D(m)X)−1 X ρ .
328
10. The Matrix Approach to Log-Linear Models
Computation of the variance requires fitting the model using the NewtonRaphson algorithm, cf. Section 5. Newton-Raphson can either be used exclusively or, if the initial fit was performed using iterative proportional fitting, the auxiliary regression model of Section 6.7 can be used. Recall that the auxiliary model requires that an ANOVA type model be reparametrized as a regression model. This is so that appropriate matrix inverses can be taken. If traditional ANOVA type models are used, a simple way to generate a regression model is to drop all u terms involving index values of 1. Example 10.2.4. In Example 3.2.4, we examined data on automobile injuries. We found that the model of no three-factor interaction, µijk = log(mijk ) = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) + u23(jk) , fit the data very well. Below are given the data and the estimated expected cell counts based on the model. nijk (m ˆ ijk ) Injury (j) Driver Ejected (i)
No Yes
Accident Type (k) Collision Rollover Not Severe Severe Not Severe Severe 350 (350.49) 150 (149.51) 60 (59.51) 112 (112.49) 26 ( 25.51) 23 ( 23.49) 19 (19.49) 80 ( 79.51)
The regression parametrization based of i, j, or k equal 1 is 1 0 log(350.49) µ ˆ111 ˆ121 log(149.51) 1 0 µ ˆ112 log( 59.51) 1 0 µ ˆ122 log(112.49) 1 0 µ = = ˆ211 log( 25.51) 1 1 µ ˆ221 log( 23.49) 1 1 µ 1 1 log( 19.49) µ ˆ212 1 1 log( 79.51) µ ˆ222
on dropping u terms in which any 0 0 1 1 0 0 1 1
0 1 0 1 0 1 0 1
0 0 0 0 0 0 1 1
0 0 0 0 0 1 0 1
0 γˆ 0 γˆ 0 1(2) γˆ3(2) 1 γˆ . 0 2(2) γˆ13(22) 0 γˆ12(22) 0 γˆ23(22) 1
Because there are 8 cells and 8 − 1 terms in the model, there is a very simple form to the matrix necessary for obtaining asymptotic variances: −1 X X D(m)X ˆ
X =
1 −1 −1 1 −1 −1 −1 ˆ − (5.52816)D (m) ˆ ˆ D (m) [1, −1, −1, 1, −1, 1, 1, −1] D (m). −1 1 1 −1
(1)
10.2 Asymptotic Results
329
The equality can be verified by direct computation of both sides. This approach requires a good matrix manipulation computer package for computing the left-hand side. The equality can also be verified by hand using orthogonality, projection operators, and the fact that rank(X) = 7 = q − 1. This approach requires facility with vector space concepts, cf. Christensen (1996b, p. 276). In Example 3.2.4, for both collisions and rollovers, we were interested in the ratio of the odds of a nonsevere injury when the driver was not ejected relative to the odds of a nonsevere injury when the driver was ejected. We proceed to find a 95% confidence interval for log(m11k m22k /m12k m21k ) . Recall that, based on the model of no three-factor interaction, this log odds ratio does not depend on k. The estimate of the log odds ratio is .77 = log(2.16) . This can be arrived at in either of two ways. For k = 1, define the vector ρ1 = (1, −1, 0, 0, −1, 1, 0, 0) so that ˆ = ρ1 log(m) ˆ ρ1 µ = log(m ˆ 111 m ˆ 221 /m ˆ 121 m ˆ 211 ) =
log [(350.49)(23.49)/(149.51)(25.51)]
=
log(2.16) .
Otherwise, for k = 2, let ρ2 = (0, 0, 1, −1, 0, 0, −1, 1) so that ˆ = log(m ˆ 112 m ˆ 222 /m ˆ 122 m ˆ 212 ) ρ2 µ = log [(59.51)(79.51)/(112.49)(19.49)] =
log(2.16) .
The estimated variance is −1
ˆ ρj X (X D(m)X)
X ρj = .045.
This can be computed directly using matrix manipulations, or it can be computed by hand using equation (1), or it can be computed from the reported standard error of γˆ12(22) using the auxiliary regression (which will be reviewed in the next example). The 95% confidence interval for the log odds ratio with k fixed has the end points √ .77 ± 1.96 .045 and is the interval (.35, 1.19). If we exponentiate the end points, we get a 95% confidence interval for the odds ratio of (1.4, 3.3) .
330
10. The Matrix Approach to Log-Linear Models
Thus, the evidence indicates that the odds of a nonsevere injury when the driver is not ejected are between, roughly, one and a half to three times the odds of a nonsevere injury when the driver is ejected.
Example 10.2.5.
Consider a 2 × 3 × 2 table and the model
µijk = u + u1(i) + u2(j) + u3(k) + u23(jk) . The design matrix X for this model is given in Example 10.0.2. Suppose we are interested in the log odds log(m1jk /m2jk ) = µ1jk − µ2jk = u1(1) − u1(2) . Note that in this model, the odds are the same for any values of j and k. Using the notation in Example 10.0.2, write ρ = (1, 0, 0, 0, 0, 0, −1, 0, 0, 0, 0, 0) so that ρ µ = µ111 − µ211 = u1(1) − u1(2) . The estimate is ρ µ ˆ = log(m ˆ 111 ) − log(m ˆ 211 ) = log(m ˆ 111 /m ˆ 211 ), but this estimate does not depend on the last two subscripts. For any j and k, ˆ = log(m ˆ 1jk /m ˆ 2jk ) . ρ µ The difficult part of the analysis is in finding the variance. The variance is most easily computed by setting the problem up as a regression analysis. Write µ111 1 0 0 0 0 0 0 1 0 0 0 1 0 0 µ112 1 0 1 0 0 0 0 µ121 γ0 1 0 1 0 1 1 0 µ122 γ µ131 1 0 0 1 0 0 0 1(2) γ2(2) µ132 1 0 0 1 1 0 1 γ = µ211 1 1 0 0 0 0 0 2(3) γ3(2) µ212 1 1 0 0 1 0 0 γ23(22) µ221 1 1 1 0 0 0 0 γ23(32) µ222 1 1 1 0 1 1 0 1 1 0 1 0 0 0 µ231 1 1 0 1 1 0 1 µ232 µ = W γ. The new design matrix was arrived at by eliminating every column of the old design matrix that corresponded to a u term involving i = 1, j = 1, or k = 1. The estimates of the γ’s are the estimates of the u’s subject to the
10.2 Asymptotic Results
331
side conditions 0 = u1(1) = u2(1) = u3(1) = u23(1k) = u23(j1) for all j and k, and " ρ W γ = −γ1(2) ρ µ = µ111 − µ211 = ρ Xβ = u1(1) − u1(2) . Let K
ˆ = ρ [Aˆ − Az ]D−1 (m)ρ −1 ˆ Xρ = ρ X(X D(m)X) −1 = ρ W (W D(m)W ˆ ) W ρ
so that, asymptotically,
ˆ) = K . Var(ρ µ
The value K is easily obtained by performing an auxiliary regression, as discussed in Section 6.7. In particular, fitting Y = Wγ + e with weights m ˆ i and dependent variable ˆ i ) + (ni − m ˆ i )/m ˆ i, yi = log(m the regression program will report SE(ˆ γ1(2) ) =
√
MSE K .
√ Dividing by MSE gives the correct asymptotic standard error. Almost any good regression program allows the user to print out the matrix −1 ˆ ) . Cov(ˆ γ )/MSE = (W D(m)W This is the key to obtaining asymptotic variances for log-linear models. Consider the log odds ratio log(mi21 mi32 /mi22 mi31 )
= µi21 − µi22 − µi31 + µi32 = u23(21) − u23(22) − u23(31) + u23(32) .
This log odds ratio does not depend on the value of i. Picking i = 1 for convenience, let ρ = (0, 0, 1, −1, −1, 1, 0, 0, 0, 0, 0, 0), so
ρ µ = ρ Xβ = u23(21) − u23(22) − u23(31) + u23(32) .
In the µ = W γ parametrization, this becomes ρ µ = ρ W γ = γ23(32) − γ23(22) .
332
10. The Matrix Approach to Log-Linear Models
There are two ways to arrive at this result. First, one can substitute the appropriate functions of the γ’s in place of the µijk ’s. This leads to ρ µ = µ121 − µ122 − µ131 + µ132 = [γ0 + γ2(2) ] − [γ0 + γ2(2) + γ3(2) + γ23(22) ] − [γ0 + γ2(3) ] + [γ0 + γ2(3) + γ3(2) + γ23(32) ] = γ23(32) − γ23(22) . Second, one can notice that ρ W = (0, 0, 0, 0, 0, −1, 1), so that ρ µ = ρ W γ = γ23(32) − γ23(22) . If we write λ = ρ W, then the estimated large sample variance is −1 ˆ Xρ ρ X(X D(m)X)
= ρ W (W D(m)W ˆ )−1 W ρ = λ (W D(m)W ˆ )−1 λ
ˆ )−1 . which is easily computed if the regression program provides (W D(m)W Variances for other estimated log odds ratios are computed in a similar manner. Because of the model, any log odds ratio with either j or k fixed, e.g., log(m1j1 m2j2 /m1j2 m2j1 ), is zero by assumption. Estimates of log odds in the j or k indices, e.g., log(mij1 /mij2 ), can also be estimated and large sample variances computed. However, because of the existence of the u23 interaction, the log odds will depend on the value of j. These issues are considered in more detail in the next example. Example 10.2.6. Consider again the data on classroom behavior used in Examples 3.0.1 and 3.2.2. The data and estimated expected cell counts for the model in which behavior is independent of risk and adversity are given below. nijk (m ˆ ijk )
Low
Risk (j) Non. Classroom Behavior (i)
Dev.
N 16 (14.02) 1 (2.98)
R 7 (6.60) 1 (1.40)
Adversity (k) Medium N 15 (14.85) 3 (3.15)
R 34 (34.64) 8 (7.36)
High N 5 (4.95) 1 (1.05)
R 3 (4.95) 3 (1.05)
10.2 Asymptotic Results
333
The ANOVA type model is µijk = u + u1(i) + u2(j) + u3(k) + u23(jk) . Except for the fact that this is a 2 × 2 × 3 table instead of a 2 × 3 × 2 table, the model is exactly as in the previous example. Dropping all u terms with i, j, or k equal to 1 leads to 1 0 0 0 0 0 0 log(14.02) µ ˆ111 ˆ log(6.60) 1 0 1 0 0 0 0 µ 121 ˆ log(14.85) 1 0 0 1 0 0 0 µ γˆ 112 ˆ122 log(34.64) 1 0 1 1 0 1 0 µ γˆ ˆ log(4.95) 1 0 0 0 1 0 0 1(2) µ γˆ2(2) 113 ˆ log(4.95) 1 0 1 0 1 0 1 µ γˆ = 123 = ˆ211 log(2.98) 1 1 0 0 0 0 0 3(2) µ γˆ3(3) ˆ log(1.40) 1 1 1 0 0 0 0 µ γˆ23(22) 221 ˆ212 log(3.15) 1 1 0 1 0 0 0 µ γˆ23(23) ˆ222 log(7.36) 1 1 1 1 0 1 0 µ 1 1 0 0 1 0 0 log(1.05) µ ˆ213 1 1 1 0 1 0 1 log(1.05) µ ˆ223 or, alternatively, µ ˆ = W γˆ . In particular, any weighted or unweighted regression analysis provides γˆ γˆ1(2) γˆ2(2) γˆ3(2) γˆ3(3)
= 2.640 , = −1.548 , = −0.754 , = 0.057 , = −1.041 ,
γˆ23(22)
=
1.601 ,
γˆ23(23)
=
0.754 .
(Actually, these are based on more significant digits for the m ˆ ijk ’s than were reported above.) The matrix of asymptotic variances and covariances for the γˆ ’s is obtained from doing the appropriate auxiliary regression. It is γˆ γˆ1(2) γˆ2(2) γˆ3(2) γˆ3(3) γˆ23(22) γˆ23(23)
γˆ .0610 −.0125 −.0588 −.0588 −.0588 −.0588 −.0588
γˆ2(2) γˆ3(2) γˆ3(3) γˆ23(22) γˆ23(23) γˆ1(2) −.0125 −.0588 −.0588 −.0588 −.0588 −.0588 .0713 .0000 .0000 .0000 .0000 .0000 .0000 .1838 .0588 .0588 −.1838 −.1838 . .0000 .0588 .1144 .0588 −.1144 −.0588 .0000 .0588 .0588 .2255 −.0588 −.2255 .0000 −.1838 −.1144 −.0588 .2632 .1838 .0000 −.1838 −.0588 −.2255 .1838 .5172
334
10. The Matrix Approach to Log-Linear Models
This matrix is the basis for all subsequent variance estimates. The estimate of the log odds of nondeviant behavior is ˆ 2jk ) = µ ˆ1jk − µ ˆ2jk log(m ˆ 1jk /m = u ˆ1(1) − u ˆ2(1) = −ˆ γ1(2) = 1.548 . The equivalence between the u parametrization and the γ parametrization is √ easily obtained by inspection of W γ. The asymptotic standard error is .0713, so a 90% confidence interval has end points √ 1.548 ± 1.645 .0713 . The odds of having a home situation that is not at risk depend on the adversity level. The log odds satisfy ˆ i2k ) log(m ˆ i1k /m
= u ˆ2(1) + u ˆ23(1k) − u ˆ2(2) − u ˆ23(2k) γ2(2) , k=1 −ˆ −ˆ γ2(2) − γˆ23(22) , k = 2 = −ˆ γ2(2) − γˆ23(23) , k = 3.
The estimated value when k = 2 is .754 − 1.601 = −.847 . With γ2(2) ) + 2Cov(ˆ γ2(2) , γˆ23(22) ) + Var(ˆ γ23(22) ), Var(−ˆ γ2(2) − γˆ23(22) ) = Var(ˆ the asymptotic estimated variance is .1838 − 2(.1838) + .2632 = .0794 . The interesting odds ratios involve the changes in the odds as k changes. log(mi11 mi22 /mi21 mi12 ) = u23(11) − u23(21) − u23(12) + u23(22) = γ23(22) , log(mi11 mi23 /mi21 mi13 ) = γ23(23) , log(mi12 mi23 /mi22 mi13 ) = γ23(23) − γ23(22) . The last of these has an estimate of .754 − 1.601 = −.847
10.2 Asymptotic Results
335
and an estimated asymptotic variance of .5172 − 2(.1838) + .2632 = .4128 . A 95% confidence interval for the log odds ratio is (−.2.106, .412). Transforming to the original scale gives an interval for the odds ratio of (.12, 1.5). The odds of being not at risk for medium-adversity schools is between .12 and 1.5 times those for high-adversity schools. The large interval is related to the very small numbers available at high risk. The result does not depend on the classroom behavior. As a matter of fact, the need for including the u23 terms in the model is driven by the fact that γˆ23(22) = 1.601 with an asymptotic standard error of √ .2632 = .513 . Thus, there is clear evidence that the odds of being not at risk are higher for low-adversity schools than for high-adversity schools. In fact, the odds are roughly between 2 and 13 times larger with 95% confidence. As we have seen, there is a problem with the output from standard regression software. Using a regression parametrization µ = W γ, the key matrix to be obtained is ˆ −1 (m) ˆ = W (W D(m)W ˆ )−1 W AD which does not depend on the choice of W . Unfortunately, most regression ˆ −1 (m); software does not report AD ˆ it only reports ˆ )−1 . (W D(m)W (The fact that there are good reasons for doing this makes it no less unforˆ −1 (m), ˆ tunate for our purposes.) If the software allows computation of AD then the simple structure of the ρ vectors allows simple computation of the ˆ )−1 W ρ. If the software does not allow estimated variance ρ W (W D(m)W −1 ˆ (m), ˆ then it is necessary to compute the vector direct computation of AD λ = ρ W . In other words, the simple function ρ µ must be reparametrized into λ γ.
336
10. The Matrix Approach to Log-Linear Models
Given λ and (W D(m)W ˆ )−1 , the variance λ (W D(m)W ˆ )−1 λ is easily computed. The problem is in identifying λ, i.e., identifying the function of γ that is equivalent to ρ µ. Even though the interesting functions ρ µ are simple, the functions of γ get progressively more complex as the model, (i.e., the matrix W ) gets more complex. For example, the asymptotic variance of a log odds in terms of model parameters gets progressively more complicated as the model involves more higher-order interactions, even though the log odds is an extremely simple function of µ. With a good matrix manipulation package, keeping track of the parameters can be accomplished numerically.
Asymptotic Variances for Saturated Models In Chapter 2, an asymptotic standard error was presented for estimated log odds ratios. The standard error is a consequence of applying Theorem 10.2.1a to a saturated model. Generally, standard errors for contrasts in the µi ’s are easily obtained for saturated models. Recall that for a saturated model, µ ˆ = log(n). Applying Theorem 10.2.1a, log(n) − µ is approximately N (0, [A − Az ]D−1 (m)). We wish to characterize [A − Az ]D−1 (m). The model is saturated, i.e., A = I(IDI)−1 ID = I, so AD−1 (m) = D−1 (m). For multinomial sampling (regardless of the loglinear model), 1 Az D−1 (m) = JJ . N Thus, for a saturated model with a large multinomial sample, we have the approximation 1 −1 log(n) − µ ∼ N 0, D (m) − JJ . N Let ρ = (ρ1 , . . . , ρq ) be a vector with ρ J = 0, i.e., ρ· = 0, so ρ µ is a contrast in the µi ’s. The large sample distribution of ρ log(n) is ρ log(n) − ρ µ ∼ N (0, ρ [D−1 (m) − (1/n· )JJ ]ρ) . With ρ J = 0, we have ρ log(n) − ρ µ ∼ N (0, ρ D−1 (m)ρ) or, equivalently,
ρ log(n) − ρ µ ∼ N (0, 1) . ρ D−1 (m)ρ
10.2 Asymptotic Results
337
For this distribution to be useful in drawinginferences about ρ µ, an estimate of the unknown standard deviation ρ D−1 (m)ρ must be incorporated. By Theorem 10.2.1d, the vector n/N converges to the vector p, so ρ D−1 (m)ρ/ρ D−1 (n)ρ = ρ D−1 (p)p ρ D−1 (n/N )ρ converges to 1 and ρ D−1 (m)ρ ρ D−1 (n)ρ converges to 1. Hence, for large samples, ρ log(n) − ρ µ ρ log(n) − ρ µ ρ D−1 (m)ρ = ∼ N (0, 1) . ρ D−1 (n)ρ ρ D−1 (m)ρ ρ D−1 (n)ρ This result can be very useful, especially for examining odds ratios. Example 10.2.7. For the 2×2×2 table of Example 3.2.4 concerning auto injuries, we were interested in whether the odds ratios p111 p221 /p121 p211 and p112 p222 /p122 p212 were equal. Because m11k m22k p11k p22k = , p12k p21k m12k m21k the log odds ratios are m11k m22k = µ11k − µ12k − µ21k + µ22k . log m12k m21k The odds ratios are equal if and only if the contrast in the µijk ’s (µ111 − µ121 − µ211 + µ221 ) − (µ112 − µ122 − µ212 + µ222 ) equals zero. [Note that if µ = (µ111 , µ112 , µ121 , µ122 , µ211 , µ212 , µ221 , µ222 ) , then ρ = (1, −1, −1, 1, −1, 1, 1, −1).] The estimated odds ratios were pˆ111 pˆ221 /ˆ p121 pˆ211
= =
pˆ112 pˆ222 /ˆ p122 pˆ212
= 60(80)/19(112) = 2.256 .
350(23)/26(150) 2.064
and
The estimate of the contrast is log(2.064) − log(2.256) = −0.089 . The standard error for the estimate is 1 1 1 1 1 1 1 1 −1 + + + + + + + ρ D (n)ρ = 350 23 26 150 60 80 19 112 = .4268 .
338
10. The Matrix Approach to Log-Linear Models
We can now test the hypothesis that the contrast is zero. The test statistic is −.089/.4268 = −.21. For an α level two-sided test, | − .21| is compared to z(1 − α/2). The hypothesis that the contrasts are equal is not rejected for any reasonable size of α. A 95% confidence interval for the contrast has limits −.089 ± (1.96)(.4268) . The test based on the asymptotic standard error is an alternative to the likelihood ratio and Pearson chi-squared tests for no three-factor interaction. To examine an individual cell, the term Az D−1 (m) must be accounted for in the covariance matrix. It is easily seen that for large samples, the appropriate distribution for pˆijk is
pˆijk − pijk ∼ N (0, 1) . pˆijk (1 − pˆijk )/N
Similar results hold for m ˆ ijk and µ ˆijk .
Testing Models Consider the problem of testing a model µ = X0 β0 against a larger model µ = Xβ. In particular, assume that µ = Xβ is valid and examine the test of H0 : µ = X0 β0 for some β0 versus HA : µ = X0 β0
for any β0 ,
ˆ be the where C(X0 ) ⊂ C(X), i.e., X0 = XB for some matrix B. Let m MLE of m under the assumption that µ = Xβ and let m ˆ 0 be the MLE of m under the assumption that µ = X0 β0 . The likelihood ratio test statistic is q m ˆ i log(m ˆ i /m ˆ 0i ) . G2 = 2 i=1
The Pearson test statistic is X2 =
q
(m ˆi −m ˆ 0i )2 /m ˆ 0i .
i=1
The main asymptotic results for testing hypotheses are given in the following theorem. Theorem 10.2.8.
Let r = rank(X) and r0 = rank(X0 ).
10.3 Product-Multinomial Sampling
339
(a) If H0 is true and N ≡ n· is large, the following distributions are approximately valid: G2 ∼ χ2 (r − r0 ) and X 2 ∼ χ2 (r − r0 ) . Moreover, P
G2 − X 2 → 0 . (b) If H0 is not true, then both G2 and X 2 get arbitrarily large as the sample size increases. It is interesting to note that the degrees of freedom for the test are rank(X) − rank(X0 ). This is the reason that degrees of freedom are computed exactly as in analysis of variance. In both cases, it is simply the linear structure of the model that determines the degrees of freedom.
10.3
Product-Multinomial Sampling
With a few minor changes, all of the results of Sections 1 and 2 hold for product-multinomial sampling. Suppose that we have t multinomial populations instead of just one. We can write the observations as nij , i = 1, . . . , t, j = 1, . . . , si , where t si is the number of categories in the ith multinomial. (Note that q = i=1 si .) The probabilities and expected cell counts can be written similarly as pij and mij , respectively. In place of the condition from multinomial sampling that all the probabilities in the table add to 1, cf. equation (10.1.2), we now have pi· = 1,
i = 1, . . . , t,
and because mij = ni· pij , we have mi· = ni· ,
i = 1, . . . , t .
Write the vectors n = (n11 , n12 , . . . , ntst ) and m = (m11 , m12 , . . . , mtst ) . Let Z be a q × t matrix of indicator variables for the t samples. Specifically, each column of Z corresponds to a different multinomial. A particular column of Z, say the ith column, has ones in the rows corresponding to ni1 , . . . , nisi and zeros in all other rows. (Note that if nij = µi + eij was a one-way ANOVA, Z would be the design matrix for the linear model.) With this definition of Z, the condition mi· = ni· , i = 1, . . . , t, becomes n Z = m Z . Suppose now that we have the log-linear model log(m) = µ = Xβ. It can be shown that maximizing the log-likelihood under product multinomial sampling is equivalent to maximizing (m) = n log(m), cf. Chapter 12.
340
10. The Matrix Approach to Log-Linear Models
The MLE of m must maximize (m) subject to the conditions
and
log(m) = Xβ
(1)
n Z = m Z .
(2)
Just as in Section 1, if m ˆ maximizes (m) subject only to condition (1), then m ˆ must satisfy ˆ X . (3) n X = m In order to get condition (2) satisfied, we restrict our attention to models log(m) = Xβ in which C(Z) ⊂ C(X). For such models, Z = XB for some matrix B; hence, (3) implies that ˆ XB = m ˆ Z . n Z = n XB = m The assumption that C(Z) ⊂ C(X) is not difficult to deal with. For an I × J table in which rows are independent multinomial samples, the condition C(Z) ⊂ C(X) is the requirement that every log-linear model include (the equivalent of) u1(i) terms for rows. In an I × J × K table in which there is an independent multinomial sample for each combination of row and layer, the condition C(Z) ⊂ C(X) is the requirement that every log-linear model include u13(ik) terms or their equivalent. Note that, for example, the models log(mijk ) = u13(ik) + u123(ijk) and log(mijk ) = u123(ijk) are equivalent models, so in spite of the fact that log(mijk ) = u123(ijk) does not contain u13(ik) terms, it does contain the equivalent of u13(ik) terms. Under product-multinomial sampling, the asymptotic results of Section 2 change very little. The matrix D(p) is no longer of interest. Instead, define m∗ = (m∗11 , . . . , m∗tst ) where m∗ij = ni· pij /n·· . Redefine D = D(m∗ ) . The matrix A is defined as before except that the new version of D is used. Also, redefine Az as Az = Z(Z DZ)−1 Z D . For asymptotic results, let N = n·· get large and let ni· /n·· remain fixed for each i. Write Ni = ni· = mi· . Before restating the asymptotic results, note that multinomial sampling is just a special case of product-multinomial sampling. In particular, it has t = 1, Z = J (J is a q × 1 vector of 1s), N = n· = n1· = n·· , and m∗ = p. Theorem 10.3.1.
For multinomial or product-multinomial sampling,
10.3 Product-Multinomial Sampling
341
the following distributions are approximately valid when N1 , . . . , Nt are large: (a) µ ˆ − µ ∼ N (0, (A − Az )D−1 (m)), (b) m ˆ − m ∼ N (0, D(m)(A − Az )). In addition, P
(c) µ ˆ − µ → 0, P
(d) N −1 m ˆ → m∗ . Estimation for product-multinomial sampling is similar to that for multinomial sampling. Theorem 10.3.1 looks identical to Theorem 10.2.1. The difference is that Az stands for something different. Again, the only problem is in computing asymptotic variances. Under product-multinomial samˆ is ρ [A − Az ]D−1 (m)ρ. Note that pling, the variance of ρ µ Z D(m)Z −1
(Z D(m)Z) and Az D
−1
= D(N1 , · · · , Nt ) , = D
1 1 ,···, N1 Nt
1 1 (m) = ZD ,···, N1 Nt
,
Z .
The variance of ρ µ ˆ is ρ AD−1 (m)ρ − ρ ZD
1 1 ,···, N1 Nt
Z ρ .
The second term can be computed exactly. The first term must be estimated and, even then, requires a computer to evaluate. For example, taking ρ = eij = (0, · · · , 0, 1, 0, · · · , 0) with the 1 in the column corresponding to the ij cell, Theorem 10.3.1 yields aij,ij 1 µ − µ) ∼ N 0, − µ ˆij − µij = eij (ˆ mij Ni where aij,ij is the diagonal element of A corresponding to the ij cell. Similarly, Var(m ˆ ij − mij ) = mij aij,ij − m2ij /Ni and with pij = mij /Ni , Var(ˆ pij − pij )
= pij aij,ij /Ni − p2ij /Ni = pij (aij,ij − pij )/Ni .
342
10. The Matrix Approach to Log-Linear Models
If ρ µ is a log odds or a log odds ratio that happens to be computed entirely within a particular multinomial, then ρ Z = 0 and ρ [A − Az ]D−1 (m)ρ = ρ AD−1 (m)ρ . This is computed exactly as in Section 2. Unfortunately, it again requires a computer to evaluate. For testing hypotheses, the large sample results appropriate for productmultinomial sampling are given in the following theorem. Theorem 10.3.2. Assume µ = Xβ and let X0 be a matrix with C(Z) ⊂ C(X0 ) ⊂ C(X). Let rank(X) = r and rank(X0 ) = r0 . For testing H0 : µ = X0 β0 for some β0 versus HA : µ = X0 β0 for any β0 , under multinomial or product-multinomial sampling, if N is large, then the following approximate distributions hold: (a) if H0 is true, G2 ∼ χ2 (r − r0 ), (b) if H0 is true, X 2 ∼ χ2 (r − r0 ), also, P
(c) if H0 is true, G2 − X 2 → 0, (d) if H0 is false, G2 and X 2 tend to infinity as N gets large. Note that by (c), if H0 is true, the difference between G2 and X 2 can be used as an indication of how good the large sample approximation is. If H0 is not true, then G2 and X 2 need not be equivalent in large samples.
10.4
Inference for Model Parameters
Thus far, we have been primarily concerned with estimation of m and µ. It may be of interest to estimate the parameter vector β in the log-linear model µ = Xβ. Estimates of β are obtained as in analysis of variance and regression, except that instead of performing operations on the data (y values), the operations are performed on µ ˆ. Suppose that rank(X) = p so that µ = Xβ is a regression model. βˆ satisfies ˆ µ ˆ = X β, so
(X X)−1 X µ ˆ = (X X)−1 X X βˆ = βˆ .
The MLE of βˆ is obtained by performing a regression on µ ˆ. (In fact, any ˆ weighted regression will give the same β.)
10.4 Inference for Model Parameters
343
Essentially the same argument holds for ANOVA type models. If one imposes side conditions on the parameters (something the author is loathe to do), then estimates of the parameters in ANOVA models are available. For example, in the model log(mij ) = u + u1(i) + u2(j) + u12(ij) if the side conditions u1(·) = u2(·) = u12(i·) = u12(·j) = 0 are imposed and if we denote ˆij , then wij = µ u ˆ u ˆ1(i) u ˆ2(j) u ˆ12(ij)
= w ¯·· , = w ¯i· − w ¯·· , = w ¯·j − w ¯·· , = wij − w ¯i· + w ¯·j + w ¯·· .
Again, these are precisely the estimates obtained by doing an ANOVA on µ ˆ. Tests and confidence intervals for functions ρ Xβ can be obtained from the asymptotic distribution
ˆ − ρ Xβ ρ µ ρ (A − Az )D−1 (n)ρ
∼ N (0, 1) .
For example, limits an asymptotic 95% confidence interval for ρ Xβ has ρµ ˆ ± 1.96 ρ (A − Az )D−1 (n)ρ and an α = .05 test of H0 : ρ Xβ = 0 versus HA : ρ Xβ = 0 rejects if
ˆ ρ µ > 1.96 ρ (A − Az )D−1 (n)ρ or if
ˆ ρ µ ρ (A − Az )D−1 (n)ρ
< −1.96 .
Example 10.4.1. In this and the previous three sections, a lot of machinery has been developed for analyzing log-linear models. In this example, we apply the matrix approach to the analysis of model (6.2.2) in Example 6.2.6. Our analysis also employs the data from Example 6.2.1 as summarized in Example 6.2.5. In matrix form, model (6.2.2) can be written as
344
10. The Matrix Approach to Log-Linear Models
λ log(m1111 ) λ1 λ2 log(m1112 ) log(m1121 ) λ3 λ4 log(m1122 ) log(m λ12 1211 ) log(m1212 ) λ13 λ14 log(m1221 ) =X log(m λ23 1222 ) log(m2111 ) λ24 λ34 log(m2112 ) log(m λ123 2121 ) log(m2122 ) λ124 λ134 log(m2221 )
log(m2222 )
λ234 λ1234
where X =
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 −1 −1 −1 −1 −1 −1 −1 −1
1 1 1 1 −1 −1 −1 −1 1 1 1 1 −1 −1 −1 −1
1 1 −1 −1 1 1 −1 −1 1 1 −1 −1 1 1 −1 −1
1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 1 −1
1 1 1 1 −1 −1 −1 −1 −1 −1 −1 −1 1 1 1 1
1 1 −1 −1 1 1 −1 −1 −1 −1 1 1 −1 −1 1 1
1 −1 1 −1 1 −1 1 −1 −1 1 −1 1 −1 1 −1 1
1 1 −1 −1 −1 −1 1 1 1 1 −1 −1 −1 −1 1 1
1 −1 1 −1 −1 1 −1 1 1 −1 1 −1 −1 1 −1 1
1 −1 −1 1 1 −1 −1 1 1 −1 −1 1 1 −1 −1 1
1 1 −1 −1 −1 −1 1 1 −1 −1 1 1 1 1 −1 −1
1 −1 1 −1 −1 1 −1 1 −1 1 −1 1 1 −1 1 −1
1 −1 −1 1 1 −1 −1 1 −1 1 1 −1 −1 1 1 −1
1 −1 −1 1 −1 1 1 −1 1 −1 −1 1 −1 1 1 −1
1 −1 −1 1 −1 1 1 −1 . −1 1 1 −1 1 −1 −1 1
The columns of the design matrix X can be identified as X0 , . . . , X1234 with the subscript of X identical to the subscript of the corresponding λ term. (X0 corresponds to λ.) Note that, say, X12 can be obtained by multiplying together the elements of X1 and X2 . Similarly, the elements of X134 can be obtained by multiplying together the elements of X1 , X3 , and X4 . In fact, any column with more than one subscript can be obtained by multiplying together the appropriate columns with one subscript. Another important fact is that any two columns, for example X12 and X134 , have the property that X12 X134 = 0. Any column, say X12 , 1 X12 Xβ = λ12 . The estimate of has X12 X12 = 16 = q, so we have 16 1 ˆ = (1/16)(ˆ µ11·· − µ ˆ12·· − µ ˆ21·· + µ ˆ22·· ) = λ12 is 16 X12 X βˆ = (1/16)X12 µ 4(w ¯11·· − w ¯12·· − w ¯21·· + w ¯22·· ) where whijk = log(nhijk ) because the model is saturated. 1 2 1 X12 µ ˆ is 16 X12 [D−1 (m) − Az D−1 (m)]X12 for large The variance of 16 samples. If the parameter λ12 is not forced into the model to deal with Az D−1 (m)X12 = 0, so the asympproduct-multinomial sampling, then X12
10.5 Methods for Finding Maximum Likelihood Estimates
345
totic variance is 2 2 1 1 1 X12 D−1 (m)X12 = q q mhijk hijk
ˆ 12 is where q = 16. The estimated asymptotic variance of λ 2 1 ˆ 12 ) = 1 - λ . Var( q nhijk hijk
In fact, the same asymptotic variance applies to all of the λ terms that are not forced into the model. Using the asymptotic distribution
1 q2
ˆ 12 − λ12 λ hijk
1
∼ N (0, 1),
nhijk
ˆ 12 = 0 is based on comparing the test statistic a test of H0 : λ
1 q2
ˆ 12 − 0 λ hijk
1 nhijk
to a N (0, 1) distribution. Using the numbers in Example 6.2.5, we see that ˆ T W | = 0.914/16 = .0571 , |λ the standard error is 1.307/16 = .0817 , and the test statistic is
.0571 − 0 = 0.70 , .0817 just as reported in Example 6.2.5. There is very little evidence that λT W = 0. Similarly, an asymptotic 95% confidence interval for λT W has end points .0571 ± 1.96(.0817) .
10.5
Methods for Finding Maximum Likelihood Estimates
In general, some sort of iterative technique is necessary to find MLEs for loglinear models. The two commonly used methods are iteratively reweighted least squares and iterative proportional fitting. Iterative proportional fitting was discussed in Section 3.3. It works only for ANOVA type models. Fitting of general log-linear models is usually performed using iteratively reweighted least squares.
346
10. The Matrix Approach to Log-Linear Models
Iteratively Reweighted Least Squares Maximum likelihood estimates for log-linear models can be found by performing a sequence of weighted linear regressions. This method is an application of the Newton-Raphson algorithm. Given a vector function f (β), Newton-Raphson is a method for finding a solution to f (β) = 0. It begins with an initial guess of β, say β0 . NewtonRaphson then defines a sequence of β’s, say β1 , β2 , . . ., that converge to ˆ = 0. The sequence is defined recursively; a value βˆ that satisfies f (β) we begin with an initial value β0 and define βt+1 given the value of βt . Specifically, let df (β) be the matrix of partial derivatives of the vectorvalued function f (β). By Taylor’s theorem, if βt and βt+1 are close to each other and δt = βt+1 − βt , then the approximate equality . f (βt+1 ) = f (βt ) + [df (βt )]δt holds. We are seeking a zero of f (β) so Newton-Raphson sets 0 = f (βt ) + [df (βt )]δt so that
δt = −[df (βt )]−1 f (βt ) .
With δt = βt+1 − βt , we have βt+1 = βt + δt . Consider a log-linear model µ = Xβ where X is a q × p matrix with rank(X) = p. Note that any log-linear model can be reparametrized so that rank(X) = p. The MLE of m will be the same regardless of the parametrization. We wish to find the maximum of the function (m). In particular, this can be done by setting appropriate partial derivatives of (m) equal to zero. The Newton-Raphson method can be used to find the zero of the partial derivative vector. Before applying the Newton-Raphson method, we set some notation. If x = (x1 , . . . , xq ) , write ex = (ex1 , . . . , exq ) . With log(m) = Xβ, m is a function of β. Write log(m(β)) = Xβ and m(β) = eXβ . In applying ˆ = 0 where Newton-Raphson, we find βˆ with f (β) f (β) = d(eXβ ) and d(eXβ ) is the matrix of partial derivatives of (eXβ ) with respect to ˆ the vector β. It follows that m ˆ = eX β will maximize (m) subject to the ˆ constraint that log(m) ˆ = X βˆ for some β. It is shown in Chapter 12 that f (βt ) = X (n − m(βt )) and that df (βt ) = −X D(m(βt ))X; thus, δt = [X D(m(βt ))X]−1 X (n − m(βt ))
10.6 Regression Analysis of Categorical Data
and
347
βt+1 = βt + [X D(m(βt ))X]−1 X (n − m(βt )) .
The value βt+1 can be obtained from βt simply by doing a weighted regression analysis. Let Y ≡ Xβt + [D(m(βt ))]−1 (n − m(βt )) .
(1)
If we fit the regression model Y = Xβ + e , E(e) = 0 , Cov(e) = [D(m(βt ))]−1 , the estimate of β is βt+1
= [X D(m(βt ))X]−1 X D(m(βt ))Y = βt + δt .
The matrix D(m(βt )) is diagonal, so this is the simplest form of weighted regression and can be performed on most standard regression programs. The weights are simply the individual values of the vector m(βt ). This method of finding MLEs, because it consists of a series of weighted regressions in which the weights continually change, is called iteratively reweighted least squares. The method does not depend on any particular choice of X except for the condition that rank(X) = p. Any log-linear model can be reparametrized so that X has full column rank, i.e., rank(X) = p, so the method is perfectly general.
10.6
Regression Analysis of Categorical Data
In this section, we present an alternative to maximum likelihood, namely the weighted least squares method of fitting log-linear models. This method was introduced by Grizzle, Starmer, and Koch (1969). It consists of fitting a linear model (regression model) to the logs of the counts while also using the counts as weights. We begin by explaining and illustrating the method. Mathematical justifications are given at the end of the section. Recall that for a saturated model, m ˆ = n is the MLE. For large samples, Theorem 10.3.1 applies and, because A = I, we have the approximation log(n) ∼ N (µ, D−1 (m) − Az D−1 (m)) . If we assume a log-linear model µ = Xβ , then
log(n) ∼ N (Xβ, D−1 (m) − Az D−1 (m)) ,
348
10. The Matrix Approach to Log-Linear Models
which can be rewritten as log(n) = Xβ + e,
e ∼ N (0, D−1 (m) − Az D−1 (m)) .
(1)
This is just a linear model, but it has an unusual covariance matrix for the errors. Most commonly in regression analysis, it is assumed that Cov(e) = σ 2 I. Courses on applied regression analysis often deal with weighted least squares, where Cov(e) = σ 2 D(w) and w is some q × 1 vector of known constants. This covariance structure can be handled very easily. In particular, most computer programs for doing regression analysis can handle this form of weighted regression. Unfortunately, the covariance matrix for model (1) is more complicated. As will be discussed later in the subsection on Mathematical Justifications, estimates in model (1) are precisely the same as estimates in log(n) = Xβ + e ,
e ∼ N (0, D−1 (m)) .
(2)
This is much closer to the standard form of σ 2 D(w). There are two differences. One is that in model (2) there is no variance σ 2 to be estimated; we know that σ 2 = 1. The second difference is that w is supposed to be known but m is not known. This problem is evaded by estimating m from the saturated model. Thus, the regression method is to fit the model log(n) = Xβ + e ,
e ∼ N (0, D−1 (n)) .
(3)
This procedure has essentially the same asymptotic properties as maximum likelihood estimation. Although we are using model (3) as a device for fitting the log-linear model, our real model is model (1). Model (3) gives a valid estimate for β, but it cannot be used for the entire analysis. Fortunately, when considering the most interesting parameters in β, model (3) can be used to construct asymptotically valid tests and confidence intervals. In particular, this works for parameters that are not forced into the model to account for the sampling scheme. Remember, we assume that C(Z) ⊂ C(X) where Z is the matrix of indicators for the product-multinomial samples, cf. Section 3. Any log-linear model can be reparametrized so that X = [Z, X1 ] and β = [α , β1 ]. The parameter vector α consists of parameters that are forced into the model to account for the sampling scheme. For drawing inferences about β1 , model (3) gives valid tests and confidence intervals. Because σ 2 = 1, when drawing inferences about model (3) one uses tests based on the normal distribution and the chi-square distribution rather than the t distribution and the F distribution. When performing chi-square tests, the test statistic is the numerator sum of squares from the usual F statistic with the appropriate number of degrees of freedom. Again, inferences must be restricted to parameters that are not forced into the model. Example 10.6.1. Drug Comparisons. The hypothetical data presented below has been analyzed in Koch, Imrey,
10.6 Regression Analysis of Categorical Data
349
Freeman, and Tolley (1976). They also mention other references. Three drugs A, B, and C were given to each of 46 subjects. The response of each subject to each drug was noted as favorable (F) or unfavorable (U). Assume a multinomial sampling scheme. The data are
                        Drug A
    Drug B    Drug C     F      U
      F         F        6      2
                U       16      4
      U         F        2      6
                U        4      6
First, consider fitting the log-linear model [AB][C] by fitting the corresponding linear model

    [ log(6)  ]   [ 1   1   1   1   1 ]
    [ log(16) ]   [ 1   1   1   1  -1 ]   [  µ   ]
    [ log(2)  ]   [ 1   1  -1  -1   1 ]   [  α   ]
    [ log(4)  ] = [ 1   1  -1  -1  -1 ]   [  β   ]  + e        (4)
    [ log(2)  ]   [ 1  -1   1  -1   1 ]   [ (αβ) ]
    [ log(4)  ]   [ 1  -1   1  -1  -1 ]   [  γ   ]
    [ log(6)  ]   [ 1  -1  -1   1   1 ]
    [ log(6)  ]   [ 1  -1  -1   1  -1 ]

using the weights (6,16,2,4,2,4,6,6). The parameters α, β, (αβ), and γ can be described as main effects for drugs A and B, the A × B interaction, and the drug C main effect. A regression program gave the following results for fitting this model:

    Regression Output
                 Coefficient    Std Error        t
    µ               1.5662        .1335       11.73
    α                .1436        .1300        1.11
    β                .1436        .1300        1.11
    αβ               .5128        .1294        3.96
    γ               −.3055        .1201       −2.54

    Sum of squared errors (SSE) = 1.7348
    Degrees of freedom error (dfE) = 3
    Mean squared error (MSE) = .5783
As discussed above, the regression program acts as if there is a scale parameter σ that needs to be estimated. For log-linear models, the scale parameter is one, so the regression output must be modified to remove the adjustments for scale. This consists of dividing the regression standard errors by (MSE)^{1/2} and multiplying the t values by (MSE)^{1/2}. Doing this gives
    GSK Estimates
    Coefficient    Estimate    Std Error        z
    µ                1.5662        —            —
    α                 .1436       .1710        .844
    β                 .1436       .1710        .844
    αβ                .5128       .1702       3.011
    γ                −.3055       .1579      −1.932
The z values can be compared to the standard normal distribution for an asymptotic test of whether the coefficients are zero. The fact that no standard error is reported for µ is due to the fact that µ is forced into the model by the multinomial sampling. SSE is not used in the standard errors of coefficients, but it is used for testing different models. For example, fitting the model [A][B], i.e.,

    [ log(6)  ]   [ 1   1   1 ]
    [ log(16) ]   [ 1   1   1 ]
    [ log(2)  ]   [ 1   1  -1 ]   [ µ ]
    [ log(4)  ] = [ 1   1  -1 ]   [ α ]  + e        (5)
    [ log(2)  ]   [ 1  -1   1 ]   [ β ]
    [ log(4)  ]   [ 1  -1   1 ]
    [ log(6)  ]   [ 1  -1  -1 ]
    [ log(6)  ]   [ 1  -1  -1 ]

with the counts as weights, gives SSE[model (5)] = 14.0173 with 5 degrees of freedom. To test model (5) against model (4) [i.e., to test H0 : αβ = γ = 0], compare the difference in the error sums of squares, 14.0173 − 1.7348 = 12.2825, to a chi-square distribution with 5 − 3 = 2 degrees of freedom. Of course, to test the significance of one parameter, either the chi-square or the normal test can be used. The tests are identical. Saturated models present some different features. Saturated models must fit perfectly, so for such a model, the SSE is zero. The fact that the saturated model has SSE = 0 causes a problem in finding standard errors and z values for regression coefficients. The regression output will try to use a scale parameter of zero, so the regression standard errors will all be reported as zero and the t values will be reported as infinite. It also follows that for any model other than the saturated model, the SSE reported in the regression output provides a direct test of lack of fit, i.e., a test of the model against the saturated model, when compared to a chi-square distribution with dfE degrees of freedom. For example, in the model [AB][C], comparing 1.7348 to a χ²(3) provides a test for lack of fit. Finally, many regression programs give additional output on the sums of squares for the different coefficients, such as Sum of Squares explained by
each variable in the order they are entered into the model. For model (4), this is

    Source                 df        SS
    Due to Regression       4    18.3901
      α                     1     4.4353
      β                     1     1.6723
      αβ                    1     8.5381
      γ                     1     3.7444
The test of H0 : γ = 0 can be performed by comparing 3.7444 to a chisquared distribution with 1 degree of freedom. The test of H0 : γ = αβ = 0 can be performed by comparing 8.5381 + 3.7444 = 12.2825 to a chi square with 2 degrees of freedom. Both of these tests are equivalent to previously discussed versions of the tests. Three things should be noted about the estimation technique of Grizzle, Starmer, and Koch (GSK). First, the method consists of performing one step of the Newton-Raphson algorithm. If the initial guess in the NewtonRaphson equation (10.5.1) is taken as Xβ0 = log(n), then one iteration gives the GSK estimate. Second, the GSK method of estimation depends on an asymptotic result. It is only asymptotically that model (1) is valid. Maximum likelihood, on the other hand, is a valid method of estimation for any sample size. Similarly, likelihood ratio statistics are reasonable statistics on which to base tests for any sample size. With maximum likelihood, all procedures will be based on sufficient statistics. Only the distributions depend on large samples. Finally, the GSK method has trouble with observations that are zero. Taking the log of zero is usually a problem. Koch et al. (1976) propose a compromise between maximum likelihood and weighted least squares. Suppose we wish to fit some model that cannot be conveniently fitted by iterative proportional fitting, say log(mijk ) = u + u1(i) + u2(j) + u3(k) + u12(ij) + γi k .
(6)
If software is available for iterative proportional fitting but not for iteratively reweighted least squares, perform the maximum likelihood fit of a slightly larger ANOVA type model, say log(mijk ) = u12(ij) + u13(ik) . The estimated vector m ˆ obtained from this can be used in place of n in the GSK procedure. The analysis follows the standard GSK methods. The compromise essentially provides GSK with better starting values.
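To make the GSK computations concrete, the following sketch (Python with NumPy, not part of the original text) fits model (4) to the drug data by weighted least squares with weights n and takes standard errors from (X'D(n)X)^{-1} with the scale fixed at one. Up to rounding, it should reproduce the GSK estimates and the SSE reported above; the entry computed for µ is not a valid standard error under multinomial sampling.

    import numpy as np

    # Cell counts in the order (A,B,C) = (F,F,F),(F,F,U),(F,U,F),...,(U,U,U)
    n = np.array([6, 16, 2, 4, 2, 4, 6, 6], dtype=float)
    y = np.log(n)

    # Design matrix for [AB][C]: intercept, A, B, AB, C in +-1 coding
    A = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)
    B = np.array([1, 1, -1, -1, 1, 1, -1, -1], dtype=float)
    C = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)
    X = np.column_stack([np.ones(8), A, B, A * B, C])

    W = np.diag(n)                              # weights D(n)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)   # GSK estimates
    cov = np.linalg.inv(XtWX)                   # scale fixed at 1; no MSE adjustment
    se = np.sqrt(np.diag(cov))
    z = beta / se

    sse = np.sum(n * (y - X @ beta) ** 2)       # compare to chi-square(dfE) for lack of fit
    print(np.round(beta, 4), np.round(se, 4), np.round(z, 3), round(sse, 4))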
Mathematical Justifications There are two things that require justification. First, that estimates are the same in models (1) and (2) and, second, that estimates of functions of β1 have the same variance in models (1) and (2). Why do models (1) and (2) have the same estimates? Note that Az D−1 (m)
= Z(Z D(m)Z)−1 Z D(m)D−1 (m) = Z(Z D(m)Z)−1 Z .
Thus, the covariance matrix in model (1) is D−1 (m) − Z(Z D(m)Z)−1 Z . The covariance matrix in model (2) can be written D−1 (m) = [D−1 (m) − Z(Z D(m)Z)−1 Z ] + Z(Z D(m)Z)−1 Z . Because C(Z) ⊂ C(X), this is precisely the condition needed to apply Theorem 10.1.3 in Christensen (1996b). The theorem implies that best linear unbiased estimates in models (1) and (2) are identical. The idea behind Christensen’s theorem is that because C(Z) ⊂ C(X), for some (non-negative definite) matrix B, the covariance matrix of (1) can be written D−1 (m) − XBX . Model (2) is equivalent to model (1) but with an additional independent error term added in. In particular, model (2) is equivalent to log(n) = Xβ + (e + e0 ) , e ∼ N (0, D−1 (m) − Z(Z D(m)Z)−1 Z ) , e0 ∼ N (0, Z(Z D(m)Z)−1 Z ) ,
(7)
where e and e0 are independent. The covariance matrix for the entire error is Cov(e + e0 ) = D−1 (m) − Z(Z D(m)Z)−1 Z + Z(Z D(m)Z)−1 Z = D−1 (m) . The trick involves the covariance matrix associated with e0 . With probability one, e0 ∈ C(Z(Z D(m)Z)−1 Z ) ⊂ C(Z) ⊂ C(X), cf. Christensen (1996b, Lemma 1.3.5). Because e0 ∈ C(X), we are adding error that we cannot distinguish from the mean Xβ. The only thing an unbiased estimate can do to such error is ignore it. Thus, the estimates with the additional error e0 and the estimates without the additional error are identical. We now examine the fact that estimates of estimable functions of β1 have the same variance in models (1) and (2).
Estimates are the same in models (1) and (2), so using model (2) and standard linear model results, we have µ ˆ = X βˆ = A log(n) where A = X(X D(m)X)−1 X D(m). A is a projection operator onto C(X); this means that AX = X, so, in particular, AA = A and, because C(Z) ⊂ C(X), AZ = Z and AAz = Az . Again, write µ = Xβ = Zα + X1 β1 . Consider an estimable function of β1 , say λ β1 . For this to be estimable, by definition we must have λ β1 = ρ µ = ρ Zα + ρ X1 β1 for some q × 1 vector ρ. In particular, we must have ρ Z = 0 and ρ X1 = λ . Now consider the asymptotic variance of λ βˆ1 under model (1), Var(λ βˆ1 )
    = Var(ρ'µ̂) = Var(ρ'A log(n))
    = ρ'A[D^{-1}(m) − A_z D^{-1}(m)]A'ρ
    = ρ'A D^{-1}(m)A'ρ − ρ'A A_z D^{-1}(m)A'ρ
    = ρ'A D^{-1}(m)A'ρ .
The last equality follows from the fact that ρ AAz = ρ Az = ρ Z(Z D(m)Z)−1 Z D(m) = 0 because ρ Z = 0. The variance of λ βˆ1 under model (2) is Var(λ βˆ1 )
= Var(ρ A log(n)) = ρ A[D−1 (m)]A ρ,
so the variances are the same under models (1) and (2). Thus, for estimable functions of β_1, the standard error reported from fitting model (2) is identical to the true standard error which is computed using model (1). For estimating functions that involve α, model (2) cannot be used to obtain standard errors. In practice, neither model (1) nor (2) can be used because the covariance matrices involve the unknown parameter vector m. Model (3) substitutes the estimate n for m in the covariance matrix of (2). The same substitution in model (1) gives the most proper usable form for the GSK analysis. As above, the two models give the same estimate and the same standard errors for estimates of β_1. For drawing inferences about λ'β_1, use the approximate distribution

    (λ'β̂_1 − λ'β_1) / √( ρ'D^{-1}(n)ρ )  ~  N(0, 1) .
In particular, a large sample 95% confidence interval for λ'β_1 has end points

    λ'β̂_1 ± 1.96 √( ρ'D^{-1}(n)ρ ) .

An α level test of H0 : λ'β_1 = 0 rejects H0 when

    |λ'β̂_1| / √( ρ'D^{-1}(n)ρ )  >  z(1 − α/2) .

In summary, model (3) can be fitted very simply because it has a diagonal covariance matrix. Model (3) gives valid estimates of β. Model (3) yields valid estimates of the variance for parameters that are not forced into the model to deal with the sampling scheme. However, there are constraints on the forced parameters due to the sampling scheme that do not appear in model (3). These constraints reduce the variability to which the forced parameters are subject. Thus, instead of a covariance matrix D^{-1}(n), the appropriate covariance matrix has a term subtracted from D^{-1}(n) to reduce certain aspects of the variability.
10.7
Residual Analysis and Outliers
Residuals are used in regression analysis to check normality, look for serial correlation, examine possible lack of fit, look for heteroscedasticity of variances, identify outliers, and generally to examine whether the assumptions of the regression model appear to be appropriate. Addressing many of these issues is somewhat less appropriate in analysis of variance. For example, appropriate tests for lack of fit are readily available and the question of serial correlation comes up less frequently. For log-linear models, we will be interested in residuals primarily for identifying outliers and checking approximate normality. We define residuals by analogy with regression analysis. In a regression model Y = Xβ + e, (1) the residuals eˆ = (ˆ e1 , . . . , eˆn ) are defined as the difference between the observations and their estimated expected values. Symbolically, eˆ = Y − X βˆ = (I − H)Y, where βˆ = (X X)−1 X Y is the least squares estimate of β and H = X(X X)−1 X . If e ∼ N (0, σ 2 I) , then eˆ ∼ N (0, σ 2 (I − H)) .
(2)
In most applications of residual analysis, the residuals are standardized before they are used. For example, in checking for outliers, we are checking for residuals that have unusually large absolute values. How large does a residual have to be before it is large enough to cause concern? If we standardize residuals so that they have a variance of about one, then we have a handle on what it means to have a large residual. There are two methods of standardizing residuals that have been commonly used. One method is a crude standardization

    r̃_i = ê_i/σ̂ ,

where σ̂² is the mean squared error from fitting model (1). Recall that the object of standardization is to make the variance of the residual about 1. Clearly, we should be dividing the residual by an estimate of its standard deviation. The standard deviation of ê_i is σ√(1 − h_ii), where h_ii is the ith diagonal element of H. The problem with the crude standardized residuals is that they ignore H. Instead of using the correct distribution (2), crude standardized residuals behave as if ê ~ N(0, σ²I). This is the correct distribution for e = Y − Xβ, but it ignores the fact that β is estimated in ê = Y − Xβ̂. In other words, the crude standardized residuals give just that: a very crude standardization. The only advantage to the crude standardized residuals is that they do not require the computation of the h_ii values. The second method of standardizing residuals consists simply of doing it right. The standard deviation of ê_i is σ√(1 − h_ii), so define the standardized residual as

    r_i = ê_i / ( σ̂√(1 − h_ii) ) .

We now argue similarly for log-linear models. Again define the residuals as the difference between the observations and their estimated expected values. Symbolically, the residuals are

    ê_i = n_i − m̂_i .        (3)
The need for standardizing these residuals is so glaring that it is almost unheard of to define residuals as in (3). To repeat an intuitive argument ˆ i = 2, then given earlier, suppose we have a cell in which ni = 7 and m eˆi = 7 − 2 = 5, which is not a very good fit. Now, suppose ni = 107 ˆ i seems to fit ni quite well. With our and m ˆ i = 102. Again, eˆi = 5, but m standard sampling schemes, variability tends to be large when the numbers ˆ i are large. For example, under multinomial sampling, for each i, ni and m Var(ni ) = N pi (1 − pi ) = mi (N − mi )/N . Unless pi is very close to zero or one, the variance is large when ni , and implicitly N , are large. In order to standardize the residuals, we need a relationship similar to (2) for log-linear models. This relationship is essentially that, for large samples, the approximate distribution is n−m ˆ ∼ N (0, D(m)(I − A)) .
A more formal statement is given in the following theorem in which we make explicit the dependence of n and m on the sample size N.

Theorem 10.7.1.   For multinomial or product-multinomial sampling, if the log-linear model µ = Xβ holds, then as N → ∞,

    N^{-1/2}(n_N − m̂_N) →_L N(0, D(I − A))
where D = D(m∗ ), m∗ is defined as in Section 3, A = X(X DX)−1 X D, and N is n· for multinomial sampling and n·· for product-multinomial sampling. Proof.
An argument similar to that in the proof of Lemma 12.3.3 gives

    N^{1/2}[ N^{-1}m̂_N − N^{-1}m_N − DAD^{-1}N^{-1}(n_N − m_N) ] →_P 0 .

[This is obtained by doing a Taylor expansion of the function m̂(·) rather than µ̂(·).] Adding and subtracting N^{-1/2}n_N and multiplying by −1 gives

    N^{1/2}[ N^{-1}(n_N − m̂_N) − N^{-1}(n_N − m_N) + DAD^{-1}N^{-1}(n_N − m_N) ]
        = [ N^{-1/2}(n_N − m̂_N) − (I − DAD^{-1})N^{-1/2}(n_N − m_N) ] →_P 0 .

It follows that N^{-1/2}(n_N − m̂_N) and (I − DAD^{-1})N^{-1/2}(n_N − m_N) have the same asymptotic distribution. By Theorem 12.3.1,

    N^{-1/2}(n_N − m_N) →_L N(0, D − DA_z).

Some algebra shows that

    (I − DAD^{-1})N^{-1/2}(n_N − m_N) →_L N(0, D(I − A)).
□

For large samples, Theorem 10.7.1 gives the approximation

    n − m̂ ~ N(0, D(m̂)(I − A(m̂)))

where A(m̂) = X(X'D(m̂)X)^{-1}X'D(m̂). We are now in a position to define both standardized residuals and crude standardized residuals. Crude standardized residuals are defined by ignoring the fact that m is estimated or, in other words, by ignoring the matrix A(m̂). Thus,

    r̃_i = (n_i − m̂_i)/√m̂_i .
In discussions of residuals for contingency tables, these values are often called the residuals or the standardized residuals. In previous chapters,
these were referred to as the Pearson residuals. As in linear models, the primary advantage of the crude standardized residuals is that they do not require the computation of the diagonal elements of a complicated matrix depending on X. Note also that the sum of the squared crude residuals is precisely the Pearson test statistic for lack of fit. The standardized residuals are defined as

    r_i = (n_i − m̂_i) / √( m̂_i(1 − â_ii) )

where â_ii is the ith diagonal element of the square matrix A(m̂). In some discussions of residuals, â_ii is defined to be the ith diagonal element of D(√m̂)X(X'D(m̂)X)^{-1}X'D(√m̂). It is easily seen that the diagonal elements of these matrices are the same. These standardized residuals are also known as adjusted residuals. The term "adjusted" was introduced to distinguish these from the crude standardized residuals because the crude standardized residuals are often referred to as standardized residuals. Given the standardized residuals, we can check for normality. Although maximum likelihood estimates and tests based on the likelihood ratio test statistics make sense with any sample size, we have discussed particular confidence intervals and tests that assume the validity of asymptotic distribution theory. We would like to know if this assumption is reasonable. One way to check is to see whether the standardized residuals really seem to be normally distributed. As in regression analysis, we can check this assumption by doing a normal (rankit) plot or a Shapiro-Francia test, cf. Christensen (1996a, Section 2.4). Note that the validity of these procedures depends on having a valid log-linear model. Another way to check the validity of the asymptotic distributions is by comparing G² and X². If the asymptotic approximations are good and the model is true, then G² and X² should be about equal. If the asymptotic distributions do not seem to be valid, we have a problem. One possible solution is simply to accept the fact that significance levels and confidence coefficients given by asymptotics are very crude. If our assumed sampling schemes are appropriate, the point estimates and test statistics are reasonable, but without valid distributions only crude conclusions can be made. The conditional approaches discussed in Section 3.5 or Bayesian methods similar to Chapter 13 can also be used here. Finally, another possibility is to try to incorporate a more realistic sampling scheme than the simple multinomial and product-multinomial schemes considered here. The other primary use of standardized residuals is in identifying outliers. Standardized residuals are asymptotically distributed as N(0, 1), so we can test whether residuals really have mean zero. Typically, we would be interested in the standardized residuals with largest absolute values. This is equivalent to testing all residuals, so a multiple comparison method would be appropriate. The Bonferroni method is easy to apply (cf. Christensen,
1996a, Sections 6.2, 7.9 or Christensen, 1996b, Section 5.3). We declare that a case is an outlier if |ri | > z(1 − α/2q) where z(η) is the ηth percentile of a standard normal distribution, α is the size of the test, and q is the number of cells in the table. An alternative method for identifying outliers is based on exploratory data analysis. This method does not rely on asymptotic distributions. Treating the standardized residuals as a sample, compute the quartiles and the interquartile range (IQR). Any case with a residual more than (1.5)IQR from the nearest quartile is considered an outlier. Rather than using an ad hoc test for outliers based on standardized residuals, we can construct a likelihood ratio test. Suppose our model is µ = Xβ .
(4)
Without loss of generality, consider testing whether the observation in the qth cell, nq , is an outlier. An outlier in the qth cell can be modeled by fitting a separate parameter to the cell. Let vq = (0, . . . , 0, 1) and consider the model (5) µ = Xβ + vq γ . The likelihood ratio test of this model against the reduced model (4) is a test of whether the qth cell is an outlier. Typically, we would want to examine each cell for being an outlier; thus, we need q likelihood ratio test statistics. Again, applying the Bonferroni method for multiple comparisons would be appropriate. Computing each of the q likelihood ratio test statistics requires an iterative procedure for obtaining estimates in models like model (5). In computing estimates of m, µ, and β in model (5), we can use estimates from model (4) as starting values. To reduce costs, we might stop after just one step of the iterative procedure. For these one-step procedures, closed forms for the likelihood ratio test can be obtained. Unfortunately, there are several possible approaches to deriving one-step approximations and it is not clear which, if any of them, work well. The remainder of this section is devoted to deriving a one-step approximation to Cook’s distance.
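Before turning to that derivation, here is a minimal sketch (Python with NumPy/SciPy, not part of the original text) of how the crude and adjusted residuals of this section and the Bonferroni outlier cutoff can be computed from the counts n, the fitted values m̂ from whatever log-linear model has been fitted, and the model matrix X; the function names are illustrative.

    import numpy as np
    from scipy.stats import norm

    def residuals(n, mhat, X):
        # A(mhat) = X (X' D(mhat) X)^{-1} X' D(mhat); only its diagonal is needed
        D = np.diag(mhat)
        A = X @ np.linalg.solve(X.T @ D @ X, X.T @ D)
        a = np.diag(A)
        crude = (n - mhat) / np.sqrt(mhat)                 # Pearson residuals
        adjusted = (n - mhat) / np.sqrt(mhat * (1.0 - a))  # standardized (adjusted) residuals
        return crude, adjusted, a

    def bonferroni_cutoff(q, alpha=0.05):
        # declare cell i an outlier if |r_i| exceeds z(1 - alpha/(2q))
        return norm.ppf(1.0 - alpha / (2.0 * q))

Note that the sum of the squared crude residuals returned here is the Pearson lack-of-fit statistic X².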
Derivation of Cook's Distance

Rewrite model (5) as

    µ_[q] = log(m_[q]) = Xβ + v_q γ ,

where µ_[q] and m_[q] are used to distinguish the parameters in model (5), where cell q may be an outlier, from the parameters in model (4). The MLE of m_[q] must satisfy

    [X, v_q]' n = [X, v_q]' m̂_[q]

where log(m̂_[q]) ∈ C(X, v_q). In the discussion below, a subscript (q) indicates that the row corresponding to case q has been deleted from a matrix or vector, so

    X = [ X_(q) ]
        [ x_q'  ] .

Notice that C(X, v_q) = C( [ X_(q)  0 ] ) .
                           [  0     1 ]

Thus, m̂_[q] satisfies

    [ X_(q)  0 ]' n = [ X_(q)  0 ]' m̂_[q]
    [  0     1 ]      [  0     1 ]

or, equivalently, writing m̂_[q] = (m̂_[q](q)', m̂_[q]q)',

    X_(q)' n_(q) = X_(q)' m̂_[q](q)    and    n_q = m̂_[q]q .

Thus, m̂_[q](q) can be obtained by fitting the model, say µ_(q) = X_(q)β_(q), in which cell q has been deleted and β_(q) denotes the new parameter vector that applies to this model. In particular, µ̂_[q](q) = µ̂_(q).
(βˆ − βˆ(q) ) X D(m)X( ˆ βˆ − βˆ(q) ) . p
This is the same measure as used in Section 6.7, but is written in a different form. Note that X D(m)X ˆ is the inverse of the estimated asymptotic covariance matrix for βˆ under model (4) with Poisson sampling. A one-step version of Cook’s distance is Cq1 (X D(m)X, ˆ p)
=
1 1 ) X D(m)X( ˆ βˆ − βˆ(q) ) (βˆ − βˆ(q)
p
1 is a one-step approximation to βˆ(q) . where βˆ(q)
Using the Newton-Raphson method with a starting value of βˆ and a result similar to Proposition 13.5.1 in Christensen (1996b) on the inverse of a sum of matrices, the one-step estimate is 1 βˆ(q)
=
βˆ + [X(q) D(m ˆ (q) )X(q) ]−1 X(q) [n(q) − m ˆ (q) ]
=
βˆ + [X D(m)X ˆ −m ˆ q xq xq ]−1 [X (n − m) ˆ − xq (nq − m ˆ q )] ˆ −m ˆ q xq xq ]−1 [−xq (nq − m ˆ q )] βˆ + [X D(m)X
=
−1 ˆ + βˆ − (X D(m)X)
=
m ˆq −1 −1 (X D(m)X) ˆ xq xq (X D(m)X) ˆ 1−a ˆqq
360
10. The Matrix Approach to Log-Linear Models
=
× [xq (nq − mq )] 1 −1 βˆ − (X D(m)X) ˆ xq (nq − m ˆ q) . 1−a ˆqq
The equation
ˆq nq − m 1 −1 βˆ(q) = βˆ − (X D(m)X) ˆ xq 1−a ˆqq
leads to a computational formula for the one-step version of Cook’s distance. The one-step version can be written as Cq1 (X D(m)X, ˆ p) = =
1 1 ) X D(m)X( ˆ βˆ − βˆ(q) ) (βˆ − βˆ(q)
p a ˆqq (nq − m ˆ q )2 a ˆqq 1 1 = rq2 . 2 p (1 − a ˆqq ) m ˆq p 1−a ˆqq
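A small helper for the computational formula just derived (a sketch, not from the original text; r is the vector of adjusted residuals and a the corresponding leverages â_qq from the previous code sketch, with p the rank of X):

    # one-step Cook's distances: C_q = (a_qq / (1 - a_qq)) * r_q**2 / p
    def one_step_cooks_distance(r, a, p):
        return (a / (1.0 - a)) * r**2 / p

Both r and a may be NumPy arrays, so the distances for all cells are produced in one call.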
As mentioned just prior to Subsection 6.7.1, this definition of Cook’s distance has weaknesses. Primarily, it does not take into account the marginal constraints imposed by multinomial or product-multinomial sampling, so it is most appropriate for Poisson sampling. In general, the implications of deleting a cell in a multinomial distribution are hard to grasp. Anderson (1992) has a valuable idea. For multinomial sampling, rather than merely deleting cell i, he proposes looking at the probabilities than an observation occurs in the other cells conditional on the observation not appearing in cell i. He then develops a version of Cook’s distance that can be written in terms of the standardized residuals. Unfortunately, Anderson (1992) uses standardized residuals that seem to conflict with Theorem 10.7.1; i.e., he ˆ i ). The probseems to have a different asymptotic variance for N −1/2 (ni − m lem appears to be that he does not use the term Az in Theorem 10.3.1b. [Of course, the really fun thing about this is that the reader gets to wonder who is making the mistake. Anderson’s standardized residuals are based on Rao (1973, p. 394). Theorem 10.7.1 is, I believe, identical to a result in Haberman (1974a).] In any case, the reported results should be used with care. In other work, Thomas and Cook (1989, 1990) discuss influence for generalized linear models (which include log-linear models, cf. Chapter 9).
10.8
Exercises
Exercise 10.8.1. Show that for a 3 × 3 table with n11 = n13 , n31 = n33 , ˆ for the independence model are also the m’s ˆ for and n·1 = n·3 that the m’s the uniform association model. Exercise 10.8.2. Waite (1911) reports data on classifications of general intelligence made for students from a secondary school in London. Classifications were made after two different school terms and each student
was classified by two instructors. Classifications were based on Pearson’s criteria as explained in Exercise 2.6.3. The data are given in Table 10.1. The entry for Term 1, row C, column D of 55 indicates that there were 55 people who were classified as C by one instructor and D by the other. We want to develop interesting log-linear models for such data. For the moment, consider only Term 1. A model of interest is that the two teachers have the same marginal distribution of assigning various classifications and that they assign them independently. Write the marginal probabilities for the categories C, D, E, F, and G as p1 , p2 , p3 , p4 , and p5 , respectively. What are the table probabilities pij in terms of the marginal probabilities? Take logs of the pij ’s to identify a log-linear model. Write the log-linear model log(mij ) = αi + βj for these data in matrix form. Incorporate the restrictions αi = βi to get a model log(m) = Xγ and fit the model to the data of Table 10.1. What conclusions can you reach about the data? TABLE 10.1. Intelligence Classifications
                 Term 1
          C     D     E     F     G
    C    13    55    29     1     0
    D         123   326    24     0
    E               421   253    17
    F                     107    31
    G                             5

                 Term 2
          C     D     E     F     G
    C    17    51    17     1     0
    D         129   479    46     5
    E               700   343    28
    F                     109    72
    G                            21
Exercise 10.8.3. Use the saturated model and Theorem 10.2.1 to find the large sample distribution of a multinomial sample in terms of its category probabilities. Exercise 10.8.4. The Delta Method. Let vN be a sequence of q × 1 random vectors and suppose that √ L N (vN − θ) → N 0, Σ(θ) . Suppose that F (·) is a differentiable function taking q vectors into r vectors. Let dF be the r × q matrix of partial derivatives of F . Then √ L N F (vN ) − F (θ) → N 0, dF Σ(θ) dF .
For technical reasons, it is advantageous to assume that dF and Σ(θ) are also continuous. For mathematical details, see Bishop, Fienberg, and Holland (1975, Section 14.6.3). Assuming the saturated model for a 2×2 table, use Theorem 10.2.1c and the delta method to find an asymptotic standard error and asymptotic confidence intervals for the odds ratio. Show that the intervals do not change if based on Theorem 10.3.1b. Apply this method to the data of Example 2.1.1 to get a 95% interval. How does this interval compare to the interval given at the end of the subsection on The Odds Ratio in Section 2.1? Exercise 10.8.5. that if
Use the delta method of the previous exercise to show √
L N (vN − θ) → N 0, Σ(θ) ,
then for any r × q matrix A, √
L N (AvN − Aθ) → N 0, AΣ(θ)A .
√ √ Show that if Cov( N vN ) = Σ(θ), then Cov( N AvN ) = AΣ(θ)A . Exercise 10.8.6. Testing Marginal Homogeneity in a Square Table. For an I × I table, the hypothesis of marginal homogeneity is H0 : pi· = p·i , i = 1, . . . , I. A natural statistic for testing this hypothesis is the vector d = (n1· − n·1 , . . . , nI· − n·I ) Clearly, E(di ) = pi· − p·i . Use the results of Exercise 1.6.5 to show that Var(di ) = N (pi· + p·i − 2pii ) − (pi· − p·i )2
!
and Cov(dh , di ) = N [(phi + pih ) + (ph· − p·h )(pi· − p·i )] . Use the previous exercise to find the large sample distribution of d. Show that Pr(J d = 0) = 1. Show that the asymptotic covariance matrix of d has rank I − 1 and thus is not invertible. It is well known that if Y ∼ N (0, V ) with V an s × s nonsingular matrix, then Y V −1 Y ∼ χ2 (s). Use this fact along with the asymptotic distribution of d to obtain a test of the hypothesis of marginal homogeneity. Apply the test of marginal homogeneity to the data of Exercise 2.6.10.
11 The Matrix Approach to Logit Models
In this chapter, we again discuss logistic regression and logit models, but here we use the matrix approach of Chapter 10. Section 1 discusses the equivalence of logit models and log-linear models. This equivalence is used to arrive at results on estimation and testing. Because the data in a typical logistic regression correspond to very sparse data in a contingency table, the asymptotic results of Section 10.2 are not appropriate. Section 6 presents results from Haberman (1977) that are appropriate for logistic regression models. Section 2 discusses model selection criteria for logistic regression. Direct fitting of logit models is considered in Section 3. The appropriate maximum likelihood equations and Newton-Raphson procedure are given. Section 4 indicates how the weighted least squares model-fitting procedure is applied to logit models. Models appropriate for response variables with more than two categories are examined in Section 5. Finally, Section 7 considers the discrimination problem.
11.1
Estimation and Testing for Logistic Models
In general, if the dependent variable has only two categories, regardless of the number of predictor variables, the table can be considered as a two-dimensional table with two columns (one column for each category of the dependent variable). In this structure, all predictor variables are being pooled into the rows of the table. For t distinct sets of predictor variables,
we can write the 2t × 1 vector of observations as n = (n11 , n21 , . . . , nt1 , n12 , . . . , nt2 ) with similar notations for p, m, and µ. A logistic model is a linear model for the values log(pi1 /pi2 ). Note that we are modeling the log odds of category 1 compared to category 2. These are the log odds of observing category 1 given that the observation falls in row i. If each row constitutes an independent binomial, then we are simply modeling the log odds for the various binomials. Logistic models are nothing more than log-linear models. All of the results of Chapter 10 apply to logistic models. We now consider the exact nature of this equivalence for prospective studies. Let η = log(p11 /p12 ), log(p21 /p22 ), . . . , log(pt1 /pt2 ) be the vector of log odds. A linear logistic model is a model η = Xβ, where X is a t × k matrix. Define L = [It , −It ] and note that for prospective studies, log(pi1 /pi2 ) = log(mi1 /mi2 ) = log(mi1 ) − log(mi2 ), so η = L µ . Thus, the logistic model can be written as L µ = Xβ . Now define a log-linear model µ = X∗ ξ where
    X∗ = [ I_t   X ]
         [ I_t   0 ]

and

    ξ = [ γ ]
        [ β ] .
It is easily seen that if µ = X∗ ξ, then L µ = Xβ. It is only moderately more difficult to see (cf. Section 12.4 and Christensen, 1996b, Section 3.3) that {µ|L µ = Xβ} = C(X∗ ) . Thus, the restriction on µ imposed by the logistic model η = Xβ is precisely the same as the log-linear model µ = X∗ ξ. In other words, the logistic model η = Xβ is identical to the log-linear model µ = X∗ ξ. Unfortunately, there can be problems with the asymptotic results of Chapter 10 when applied to logistic models. The asymptotic results of Section 10.2 are based on the assumption of a fixed number of cells q in the table. This number is q = 2t. It is assumed that the sample size in each cell gets large. Often, logistic models are used in situations that are more similar to regression than analysis of variance. In such cases, any additional
observations obtained typically correspond to new rows of the design matrix X. This implies the addition of a new row to the t × 2 table; thus, the assumption of a fixed number of cells is invalidated. These issues are dealt with in Section 6. In practice, the data are fixed and neither the sample sizes nor the number of cells increases. Both remain constant. If the number of observations in each cell is reasonably large, the usual asymptotic theory should work adequately. If the number of cells is large relative to the number of observations, new asymptotic results are required as a basis for statistical inference. Frequently, when the dependent variable has two categories, the data are collected so that for each unique set of predictor variables (i.e., for each row of the t × 2 table), the counts are independent and have a binomial distribution. As in Section 10.4, the existence of product-multinomial sampling (product-binomial, in this instance) restricts us to a special form for the design matrix of the log-linear model. In particular, it is required that C(Z) ⊂ C(X∗ ) where Z is a matrix of indicators for the rows of the t × 2 table. With µ = (µ11 , . . . , µt1 , µ12 , . . . , µt2 ) , I Z= t . It
Since

    X∗ = [ I_t   X ]
         [ I_t   0 ] ,

it is clearly the case that C(Z) ⊂ C(X∗). It follows that hypothesizing a logistic model automatically makes the model appropriate for product-binomial sampling. Moreover, when fitting logistic models, it suffices to imagine fitting a log-linear model to a table with product-binomial sampling. This mental device of imagining product-binomial sampling assures that the structure of the log-linear model implies the existence of a valid logistic model. To see this, note that a quite arbitrary model appropriate for product-binomial sampling can be written with the design matrix

    [ I_t   X_1 ]
    [ I_t   X_2 ] .

However,

    C( [ I_t   X_1 ] )  =  C( [ I_t   X_1 − X_2 ] ) ,
       [ I_t   X_2 ]          [ I_t       0     ]

where the second matrix has the form given earlier for logistic models.
Estimation We now consider the problem of estimation in a logistic model η = Xβ. Recall that X is t × k and that η and β are t and k vectors, respectively.
In particular, consider estimation of a linear function ρ η, where ρ is an arbitrary t × 1 vector. Note that ρ η = ρ Xβ, so these functions can be considered as functions of β. If rank(X) = k and ei is a t vector of 0s with a 1 in the ith position, then choosing ρ = ei (X X)−1 X gives ρ Xβ = βi , where β = (β1 , . . . , βk ) . Write X XL = , 0 where XL is 2t × k and 0 is a t × k matrix of zeros. As above, η = Xβ is equivalent to It X γ γ µ = X∗ ξ = = [Z, XL ] β β It 0 = Zγ + XL β , where β is identical in the logistic and log-linear models. Using invariance of the MLEs, the MLE of ρ η comes directly from the MLE of µ. Because η = L µ, ˆ. ρ ηˆ = ρ L µ In terms of estimating β, we get ρ X βˆ ≡ ρ ηˆ ˆ = ρ L µ ˆ = ρ L (Z γˆ + XL β) = ρ L XL βˆ = ρ X βˆ . Thus, ρ Xβ can be estimated in either the logistic model or the log-linear model and the estimates are identical. To form tests and confidence intervals for ρ Xβ, we need a distribution ˆ Asymptotically, for any vector ρ∗ , for ρ X β.
    (ρ∗'µ̂ − ρ∗'X∗ξ) / √( ρ∗'(A − A_z)D^{-1}(m)ρ∗ )  ~  N(0, 1)        (1)
and

    A − A_z = X∗(X∗'D(m)X∗)^{-1}X∗'D(m) − Z(Z'D(m)Z)^{-1}Z'D(m) ,

so

    (A − A_z)D^{-1}(m) = X∗(X∗'D(m)X∗)^{-1}X∗' − Z(Z'D(m)Z)^{-1}Z' .
For estimating ρ'Xβ in a logistic model, let ρ∗ = Lρ, so ρ∗'µ̂ = ρ'η̂ and ρ∗'X∗ξ = ρ'Xβ. To apply (1), we need to find ρ∗'(A − A_z)D^{-1}(m)ρ∗. In the appendix to this section, it is shown that

    ρ∗'(A − A_z)D^{-1}(m)ρ∗ = ρ'X[X'D(b)X]^{-1}X'ρ ,        (2)

where b = (b_1, . . . , b_t)' and

    b_i = m_i1 m_i2/(m_i1 + m_i2) .

Taking b̂_i = m̂_i1 m̂_i2/n_i· and b̂ = (b̂_1, . . . , b̂_t)' gives, asymptotically,

    (ρ'η̂ − ρ'Xβ) / √( ρ'X[X'D(b̂)X]^{-1}X'ρ )  ~  N(0, 1) .

Tests and confidence intervals follow in the usual way. In particular, using ρ = e_i = (0, · · · , 0, 1, 0, · · · , 0)', for large samples

    ( log(p̂_i1/p̂_i2) − log(p_i1/p_i2) ) / √( â_ii/b̂_i )  ~  N(0, 1) ,        (3)
where â_ii is the leverage as found in Section 4.3 as modified by Subsection 4.4.1. The asymptotic variance for log odds ratios can also be computed. Except for the difference in the weights b_i, the value

    Var(ρ'η̂) = ρ'X(X'D(b)X)^{-1}X'ρ

looks like that used in Section 10.2. Methods for evaluating variances are similar. Using the delta method of Exercise 10.8.4, the logistic transform, (3), and writing N_i = n_i1 + n_i2, asymptotic inferences for p_ij are based on

    (p̂_ij − p_ij) / √( p̂_ij(1 − p̂_ij)â_ii/N_i )  ~  N(0, 1) ,

cf. Exercise 11.8.5.
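As an illustration (a hedged sketch, not part of the original text), the asymptotic variance formula above can be evaluated directly once the logit model has been fitted; the function names are hypothetical, and m1hat, m2hat are the fitted cell means in the two response columns.

    import numpy as np

    def logit_covariance(X, m1hat, m2hat):
        # b_i-hat = m_i1-hat * m_i2-hat / N_i, with N_i = m_i1-hat + m_i2-hat
        bhat = m1hat * m2hat / (m1hat + m2hat)
        return np.linalg.inv(X.T @ np.diag(bhat) @ X)   # asymptotic covariance of beta-hat

    def se_rho_eta(rho, X, cov_beta):
        # standard error of rho' eta-hat = rho' X beta-hat
        v = X.T @ rho
        return float(np.sqrt(v @ cov_beta @ v))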
Testing Hypotheses Assume a logistic model η = Xβ and consider the problem of testing a reduced model η0 = X0 β0 against η = Xβ, where C(X0 ) ⊂ C(X). This test can be performed by testing log-linear models. The full model corresponds to µ = X∗ ξ. We can write the reduced model as µ0 = X∗0 ξ0 ,
where X∗0 = [Z, X_L0] and

    X_L0 = [ X_0 ]
           [  0  ] .
The degrees of freedom for the chi-square test is rank(X∗) − rank(X∗0) = rank(X) − rank(X0). The likelihood ratio test statistic is

    G² = 2 Σ_{i=1}^{t} Σ_{j=1}^{2}  m̂_ij log( m̂_ij / m̂_0ij ) .
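A minimal sketch of this test (Python with NumPy/SciPy, not part of the original text; the fitted cell means would come from fitting the full and reduced logit models, and the function name is illustrative):

    import numpy as np
    from scipy.stats import chi2

    def lr_test_logit_models(mhat_full, mhat_reduced, X, X0):
        # G^2 = 2 * sum over all cells of mhat_full * log(mhat_full / mhat_reduced)
        G2 = 2.0 * np.sum(mhat_full * np.log(mhat_full / mhat_reduced))
        df = np.linalg.matrix_rank(X) - np.linalg.matrix_rank(X0)
        return G2, df, chi2.sf(G2, df)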
Appendix

To simplify notation somewhat, let D_m ≡ D(m). In this appendix, we wish to show equation (2), i.e., for ρ∗ = Lρ,

    ρ∗'X∗[X∗'D_m X∗]^{-1}X∗'ρ∗ − ρ∗'Z[Z'D_m Z]^{-1}Z'ρ∗ = ρ'X[X'D(b)X]^{-1}X'ρ .

The algebra necessary for this demonstration is quite nasty. We break it up into several parts.

Lemma 11.1.1.   ρ∗'Z[Z'D_m Z]^{-1}Z'ρ∗ = 0.

Proof.   ρ∗'Z = ρ'L'Z but L'Z = 0.   □

Now, all we have to deal with is ρ∗'X∗[X∗'D_m X∗]^{-1}X∗'ρ∗. We will rewrite this using a perpendicular projection operator (cf. Christensen, 1996b, Appendix B) and then use a property of perpendicular projection operators to derive the result. Let

    D_m^{1/2} = D(√m_11, . . . , √m_t2)

so that

    ρ∗'X∗[X∗'D_m X∗]^{-1}X∗'ρ∗ = ρ'L'X∗[X∗'D_m X∗]^{-1}X∗'Lρ
        = ρ'L'D_m^{-1/2} [ D_m^{1/2}X∗[X∗'D_m X∗]^{-1}X∗'D_m^{1/2} ] D_m^{-1/2}Lρ
        = ρ'L'D_m^{-1/2} P D_m^{-1/2}Lρ ,        (4)
where P = D_m^{1/2}X∗[X∗'D_m X∗]^{-1}X∗'D_m^{1/2}. The matrix P is the perpendicular projection operator onto

    C(D_m^{1/2}X∗) = C(D_m^{1/2}Z, D_m^{1/2}X_L).

We need one property of the perpendicular projection operator (cf. Christensen, 1996b, Section 9.2), namely

    P = M_1 + M_2 ,

where

    M_1 = D_m^{1/2}Z[Z'D_m Z]^{-1}Z'D_m^{1/2}        (5)

and

    M_2 = (I − M_1)D_m^{1/2}X_L [X_L'D_m^{1/2}(I − M_1)D_m^{1/2}X_L]^{-1} X_L'D_m^{1/2}(I − M_1).        (6)

From (4), we see that

    ρ∗'X∗[X∗'D_m X∗]^{-1}X∗'ρ∗ = ρ'L'D_m^{-1/2}M_1 D_m^{-1/2}Lρ + ρ'L'D_m^{-1/2}M_2 D_m^{-1/2}Lρ .

The first term on the right-hand side vanishes.

Lemma 11.1.2.   ρ'L'D_m^{-1/2}M_1 D_m^{-1/2}Lρ = 0.

Proof.   Note that 0 = L'Z = L'D_m^{-1/2}[D_m^{1/2}Z]. Using formula (5) for M_1, we see that 0 = L'D_m^{-1/2}M_1.   □

By Lemmas 11.1.1 and 11.1.2, we have reduced the demonstration of equation (2) to showing

    ρ'L'D_m^{-1/2}M_2 D_m^{-1/2}Lρ = ρ'X[X'D(b)X]^{-1}X'ρ .        (7)

Again, we break the demonstration into parts.

Lemma 11.1.3.   ρ'L'D_m^{-1/2}(I − M_1)D_m^{1/2}X_L = ρ'X.

Proof.   As in the proof of Lemma 11.1.2, L'D_m^{-1/2}M_1 = 0. Thus,

    ρ'L'D_m^{-1/2}(I − M_1)D_m^{1/2}X_L = ρ'L'D_m^{-1/2}D_m^{1/2}X_L = ρ'L'X_L = ρ'X .   □
Define m_1 = (m_11, . . . , m_t1)' and m_2 = (m_12, . . . , m_t2)'.

Lemma 11.1.4.   X_L'D_m Z = X'D(m_1).

Proof.

    X_L'D_m Z = [X', 0'] [ D(m_1)    0     ] [ I_t ]  = X'D(m_1) .   □
                         [   0     D(m_2)  ] [ I_t ]

A similar argument yields

Lemma 11.1.5.   X_L'D_m X_L = X'D(m_1)X.

We need two additional results.

Lemma 11.1.6.   [Z'D_m Z] = D(m_1 + m_2).

Proof.

    Z'D_m Z = [I_t, I_t] [ D(m_1)    0     ] [ I_t ]  = D(m_1) + D(m_2) = D(m_1 + m_2).   □
                         [   0     D(m_2)  ] [ I_t ]

Lemma 11.1.7.   X_L'D_m^{1/2}(I − M_1)D_m^{1/2}X_L = X'D(b)X.

Proof.   Using Lemmas 11.1.4, 11.1.5, and 11.1.6 gives

    X_L'D_m^{1/2}(I − M_1)D_m^{1/2}X_L = X_L'D_m X_L − X_L'D_m Z[Z'D_m Z]^{-1}Z'D_m X_L
        = X'D(m_1)X − X'D(m_1)D^{-1}(m_1 + m_2)D(m_1)X
        = X'[D(m_1) − D(m_1)D^{-1}(m_1 + m_2)D(m_1)]X
        = X'D(b)X .   □

We can now obtain equation (7). Using (6), Lemmas 11.1.3 and 11.1.7 give

    ρ'L'D_m^{-1/2}M_2 D_m^{-1/2}Lρ = ρ'X[X'D(b)X]^{-1}X'ρ

and we are done.
11.2
Model Selection Criteria for Logistic Regression
The purpose of this section is to show that not only do the model selection criteria of Section 3.6 apply to logistic regression, but that they have interpretations similar to those in normal theory regression. For a logistic regression model η = Xβ or, equivalently, I X α µ= t , β It 0 write the likelihood ratio test statistic for testing the model against the saturated model as G2 (X). Let J be a t × 1 matrix of one’s. A natural definition of R2 for logit models gives precisely the same definition as for log-linear models. In defining R2 for logistic regression, the smallest interesting model is typically the model that contains only the intercept η = Jγ . The corresponding log-linear model is I J ξ µ= t γ It 0
(1)
which is equivalent to

    µ = X_0 δ ≡ [ I_t   J   0 ] [ α  ]
                [ I_t   0   J ] [ γ_1 ]  .        (2)
                                [ γ_2 ]
These are equivalent because the µ vectors that can be obtained from the models are identical. Any µ from model (1) can be obtained from model (2) by taking α = ξ, γ1 = γ, and γ2 = 0. Conversely, any µ from model (2) can be obtained from model (1) by taking ξ = α + Jγ2 and γ = γ1 − γ2 . This is really just a demonstration that C(X0 ) is identical to the column space of the model matrix in (1). If we think of logistic regression as the analysis of a t × 2 table, model (2) is log(mij ) = αi + γj which is the model of independence. So comparing the logistic model η = Xβ to the logistic intercept model η = Jγ is the same as comparing the log-linear equivalent of η = Xβ to the independence model. The G2 (X) statistic is the same whether considering logit models or their log-linear equivalents. If k = rank(X), the degrees of freedom are the number of cells in the table minus the rank of the log-linear model design matrix, 2t−(t+k) = t−k. Note that this can also be viewed as the number
of cases (number of independent binomials) minus the rank of the logistic model design matrix, just like the degrees of freedom error in a normal theory model. The independence model is equivalent to fitting an intercept in logistic regression, so G2 (X0 ) has degrees of freedom 2t − t − 1 = t − 1. Note that this is the number of cases minus 1 for the intercept, just like the error for fitting only an intercept in normal theory regression. The log-linear model definition R2 =
    [ G²(X_0) − G²(X) ] / G²(X_0)
(cf. Section 3.6) makes perfect sense when applied to logistic regression models. The definition of Adj R2 when applied to logit models gives Adj R2 = 1 −
    [ G²(X)/(t − k) ] / [ G²(X_0)/(t − 1) ] .
Finally, Akaike's information criterion suggests picking X to minimize

    A_X = G²(X) − [ (2t) − 2(t + k) ] = G²(X) + 2k .

Because t is fixed, minimizing A_X is equivalent to minimizing

    A_X − q ≡ A_X − 2t = G²(X) − 2[t − k] .

As illustrated in Section 4.1, it is common to report A∗, the information relative to a full model.
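These criteria are simple functions of the two G² statistics. A small helper (a sketch, not part of the original text; the function name is illustrative) might be:

    def logistic_model_criteria(G2_X, G2_X0, t, k):
        # t = number of binomial cases, k = rank of the logistic model matrix X
        R2 = (G2_X0 - G2_X) / G2_X0
        adj_R2 = 1.0 - (G2_X / (t - k)) / (G2_X0 / (t - 1))
        A_minus_q = G2_X - 2.0 * (t - k)     # Akaike's criterion in the A_X - q form
        return R2, adj_R2, A_minus_q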
11.3
Likelihood Equations and Newton-Raphson
When dealing with logit models, some simplification occurs in the likelihood equations and the Newton-Raphson algorithm. Write the log-linear model version of the logit model η = Xβ as It X γ µ= , (1) β It 0 where X is a t × k matrix. Write mj = (m1j , . . . , mtj ) and nj = (n1j , . . . , ntj ) for j = 1, 2, so m = (m1 , m2 ) and n = (n1 , n2 ). Also write N = (N1 , . . . , Nt ) , where Ni = ni1 + ni2 .
As in Chapter 10, the likelihood equations are It It (n − m) = 0 . X 0 Noting that n2 = N − n1 , we get the equations n 1 − m1 It It =0 X 0 N − n 1 − m2 or
(n1 − m1 ) + (N − n1 − m2 ) = 0, X (n1 − m1 )
which simplifies to
N − (m1 + m2 ) = 0. X (n1 − m1 )
(2)
We are seeking values γˆ and βˆ that give solutions to equation (2). We ˆ Write now show that the value of γˆ can be determined by the value of β. x1 .. X= . xt
and γˆ = (ˆ γ1 , . . . , γˆt ) ; then, m ˆ i1 m ˆ i2
= =
ˆ , exp[ˆ γi + xi β] exp[ˆ γi ] .
(3)
ˆ 1i + m ˆ 2i and Because γˆ and βˆ provide a solution to (2), Ni = m Ni
ˆ + exp[ˆ exp[ˆ γi + xi β] γi ] γ ˆi ˆ = e [1 + exp(xi β)] , =
so ˆ eγˆi = Ni /[1 + exp(xi β)]
(4)
and ˆ . γˆi = log(Ni /[1 + exp(xi β)]) This completes the demonstration. ˆ the likelihood equations can be reduced to Because γˆ is a function of β, the bottom half of (2), which is solely a function of β. The top half of (2) is satisfied for any βˆ by taking γˆ as indicated above. To write the bottom
half of (2) as a function of β, we need only write m1 as a function of β. From equations (3) and (4) mi1
= eγi exi β = (Ni /[1 + exi β ])exi β = Ni
(5)
xi β
e . 1 + exi β
We can use the Newton-Raphson algorithm to find a solution to the likelihood equations X (n1 − m1 ) = 0 . The highlights of using Newton-Raphson are given below. More detailed discussions are given in Chapter 10 and Section 12.4. To apply NewtonRaphson, we need the derivative of X (n1 − m1 (β)) with respect to β, i.e., the matrix of partial derivatives with respect to the βj ’s. Using the chain rule, this is just −X times the matrix of partial derivatives of m1 (β) with respect to β. Note that m1 (β)
=
[m11 (β), . . . , mt1 (β)] . / = N1 ex1 β /(1 + ex1 β ), . . . , Nt ext β /(1 + ext β ) .
The partial derivative of m1i (β) with respect to βj is ∂mi1 (β) ∂βj
= Ni xij exi β (1 + exi β )−1
+ Ni exi β (−1)(1 + exi β )−2 xij exi β 2 xi β xi β e e . − = Ni xij (1 + exi β ) 1 + exi β
(6)
Recalling equation (5), define pi ≡ mi1 Ni = exi β (1 + exi β ) .
(7)
For product-binomial sampling, pi is the probability of an observation in the ith row occurring in the first column of the table. Substituting pi into (6), the partial derivative is ∂mi1 (β) ∂βj
= Ni xij (pi − p2i ) = Ni pi (1 − pi )xij .
The matrix of partial derivatives can be written as N1 p1 (1 − p1 )x11 · · · N1 p1 (1 − p1 )x1k .. .. dm1 (β) = . . Nt pt (1 − pt )xt1 · · · = D(Ni pi (1 − pi ))X .
Nt pt (1 − pt )xtk
The matrix of partial derivatives for X (n − m(β)) is then X dm1 (β) = X D(Ni pi (1 − pi ))X . Note that Ni pi (1 − pi ) = bi from (11.1.2). We can now apply the Newton-Raphson algorithm. Given a current estimate βs , the next estimate is βs+1 = βs + δs , where
δs = [X D(Ni pi (1 − pi ))X]−1 X (n1 − m1 (βs ))
and pi in Ni pi (1 − pi ) is actually pi = pi (βs ) as defined by equation (7). As with log-linear models, the estimates can be found by doing a series of weighted regressions. Let yi = xi βs + [ni1 − mi1 (βs )]/Ni pi (1 − pi ), so that Y = Xβs + D(Ni pi (1 − pi ))−1 (n1 − m1 (βs )) . If the weighted regression model Y = Xβ + e , E(e) = 0 , Cov(e) = D−1 (Ni pi (1 − pi )) is fitted, then the estimate of β is βs+1
    = [X'D(N_i p_i(1 − p_i))X]^{-1} X'D(N_i p_i(1 − p_i))Y = β_s + δ_s .
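A minimal sketch of this iteration in Python with NumPy (not part of the original text; the starting value, tolerance, and iteration cap are illustrative choices):

    import numpy as np

    def logit_newton_raphson(X, n1, N, beta0=None, tol=1e-8, max_iter=50):
        # Fit the logit model eta = X beta by Newton-Raphson (equivalently, a
        # sequence of weighted regressions), following equations (5)-(7).
        t, k = X.shape
        beta = np.zeros(k) if beta0 is None else beta0.copy()
        for _ in range(max_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))      # p_i = exp(x_i'b)/(1 + exp(x_i'b))
            m1 = N * p                               # fitted means for the first column
            w = N * p * (1.0 - p)                    # weights N_i p_i (1 - p_i)
            score = X.T @ (n1 - m1)                  # X'(n_1 - m_1(beta))
            delta = np.linalg.solve(X.T @ (w[:, None] * X), score)
            beta = beta + delta
            if np.max(np.abs(delta)) < tol:
                break
        return beta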
11.4
Weighted Least Squares for Logit Models
The methods introduced by Grizzle, Starmer, and Koch (1969) are actually quite general and can be applied to logit models as well as log-linear models, cf. Section 10.6. As before, asymptotic properties of the GSK method are often the same as maximum likelihood, but the small sample justification is less compelling. Moreover, for some small samples, the GSK method cannot be used at all unless ad hoc modifications to the data are introduced. As with log-linear models, the GSK method amounts to performing one step of the Newton-Raphson algorithm. For a logit model η = Xβ ,
where η
= [µ11 − µ21 , . . . , µ1t − µ2t ] = [log(p11 /p21 ), . . . , log(p1t /p2t )] .
The model Y = Xβ + e , E(e) = 0 , Cov(e) = D−1 (Ni pˆi (1 − pˆi )) is fitted where pˆi ≡ pˆi1 = ni1 /Ni and
(1)
Y = [y1 , . . . , yt ]
with pi /(1 − pˆi )] . yi = log[ˆ The justification for the GSK procedure is asymptotic. The estimates obtained are asymptotically optimal if the Ni ’s are all large. As the justification for the GSK procedure is based on its large sample optimality, there is no apparent reason to use GSK for small samples. Moreover, it is not clear that the sort of asymptotic arguments which involve large numbers of small samples can be extended to the GSK approach. Perhaps the most obvious difficulty with the GSK approach is that if pˆi is either zero or one, yi is not defined. When Ni = 1, yi is always undefined. Of course, the corresponding weight from the inverse of the covariance matrix is also zero, so one could argue that GSK simply ignores such cases. But this could result in ignoring great quantities of data. For the Chapman data of Section 4.1, all of the data would be ignored. An ad hoc correction for this problem has been proposed, which is simply to substitute for any value pˆi that is 0 or 1, the values pˆi ± i , where the substitution forces pˆi to be between 0 and 1 and i is some small number. It is frequently suggested that any values nij = 0 be replaced by nij = 0.5. It may be noted that the pˆi ’s given in (1) are also the natural starting values for the Newton-Raphson algorithm and that small samples also require that pˆi ’s of 0 or 1 be adjusted before they can be used. The key difference is that the justification for MLEs does not depend on properties of the starting values. Any starting values that lead to MLEs are perfectly acceptable. The GSK method depends crucially on the initial estimates. The details of a GSK logit model analysis will not be given because, for uniformly large samples, they are exactly analogous to the log-linear model analysis given in Section 10.6. For large Ni ’s, the SSE can be used to give a chi-square test for lack of fit. The degrees of freedom for the test are the degrees of freedom for error. Models can be compared by comparing sums of squares for error. Reported standard errors must be corrected for the root mean square error; t statistics must be corrected for the root mean square error and compared to the standard normal distribution. The only difference is that a valid standard error exists for the intercept.
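A sketch of the GSK logit fit just described (Python with NumPy, not part of the original text; replacing a zero count by 0.5 is the ad hoc correction mentioned above, and the function name is illustrative):

    import numpy as np

    def gsk_logit_fit(X, n1, N, adjust_zeros=True):
        n1 = n1.astype(float)
        if adjust_zeros:
            n1 = np.where(n1 == 0, 0.5, n1)
            n1 = np.where(n1 == N, N - 0.5, n1)
        phat = n1 / N
        y = np.log(phat / (1.0 - phat))              # empirical logits
        w = N * phat * (1.0 - phat)                  # weights N_i phat_i (1 - phat_i)
        XtWX = X.T @ (w[:, None] * X)
        beta = np.linalg.solve(XtWX, X.T @ (w * y))
        cov = np.linalg.inv(XtWX)                    # scale fixed at 1; no MSE adjustment
        return beta, np.sqrt(np.diag(cov))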
11.5
Multinomial Response Models
An integral point in the definition of logistic regression models is that the response variable has only two categories. The model posits a linear mean structure for log(mi1 /mi2 ). As discussed in the previous chapter, if the response variable has more than two categories, it is by no means clear how to extend the logistic regression model to deal with the additional categories. It may then be somewhat surprising to find that it is clear how to extend the log-linear model version of the logistic regression model to more than two categories. We will discuss the appropriate log-linear model for a three-category response model. Extensions to responses with more than three categories follow the same pattern. Before beginning with three-category responses, we reconsider the nature of the log-linear model for a two-category response. The model is I X α µ= . (1) I 0 β This model lacks symmetry in the categories. The model matrix has an X submatrix corresponding to the first category of each response, but for the second category, the submatrix is zero. A model that treats the first and second categories on the same basis is the model α I X 0 (2) γ1 . µ= I 0 X γ2 The important thing to note about model (2) is that it is equivalent to model (1). Any vector µ from model (1) can be obtained from (2) and vice versa. To see this, note that I X I X 0 C =C . I 0 I 0 X Any two models with the same column space are equivalent models. Model (2) is a reparametrization of (1). Given model (2), there is an obvious generalization to a three-category response. Simply take α I X 0 0 γ (3) µ = I 0 X 0 1 γ2 I 0 0 X γ3 where µ = (µ11 , . . . , µt1 , µ12 , . . . , µt2 , µ13 , . . . , µt3 ) . As model (2) is equivalent to model (1), it is easily seen that model (3) is equivalent to α I X 0 µ = I 0 X β1 . β2 I 0 0
Writing
x1 X = ...
xt
gives model (3) as
log(mij ) = µij = αi + xi γj
(4)
where i = 1, . . . , t and j = 1, 2, 3. With this notation, it becomes a simple matter to consider models such as log(mij /mi j+1 ) = xi (γj − γj+1 ) , or
log(mij /miJ ) = xi (γj − γJ ) ,
j = 1, . . . , J − 1, j = 1, . . . , J − 1,
as were discussed in Chapter 4. Note that if X contains a column of ones, model (4) includes a term for column effects; thus, both the row and column margins of the t × 3 table are fixed.
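As a sketch (not part of the original text), the log-linear model matrix in (3) can be assembled directly from a logistic model matrix X; the function name and the argument J for the number of response categories are illustrative.

    import numpy as np

    def multinomial_response_design(X, J=3):
        # Model matrix [I X 0 ... 0; I 0 X ... 0; ...; I 0 ... 0 X] for a
        # J-category response, as in model (3) with J = 3.
        t, k = X.shape
        I = np.eye(t)
        Z = np.zeros((t, k))
        rows = []
        for j in range(J):
            blocks = [I] + [X if jj == j else Z for jj in range(J)]
            rows.append(np.hstack(blocks))
        return np.vstack(rows)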
11.6
Asymptotic Results
We now consider asymptotic results for log-linear models contained in Haberman (1977). The results are quite general. In particular, we will show that they give the standard asymptotics for log-linear models and we will show how they apply to logistic regression. In Section 11.1, no explicit discussion was given concerning the nature of the asymptotic theory used to justify the results on estimation and testing. If the expected count in each cell approaches infinity, the usual asymptotic theory applies. Unfortunately, in regression settings, this is often not appropriate. For regression problems, a more reasonable approach is to allow additional observations to have distinct values of the predictor variables rather than having them occur at values of the predictor variables that have already occurred. These additional observations with new predictors constitute new cells in the table, so the table itself is getting larger. We need to think in terms of the convergence of a sequence of models. Haberman’s results do this. They apply when fitting log-linear models to many situations in which there are few observations relative to the number of cells in the table. In particular, they satisfy our need for asymptotic results appropriate to logistic regression. Frequently, they do not apply when testing a log-linear model against the saturated model with data from a large sparse multinomial distribution. For asymptotic results that apply to this case, see Koehler (1986). Other results on large sparse multinomials are contained in Koehler and Larntz (1980), Simonoff (1983, 1985, 1986), and Zelterman (1987). The mathematics in this section are the most sophisticated in the book (with the possible exception of Chapter 12).
Before discussing Haberman’s asymptotic results, we set notation for a fixed sample size problem. Consider a log-linear model µ = Xβ
(1)
where µ = log(m). To deal with the sampling scheme, assume that C(Z) ⊂ C(X). We are interested in the asymptotic distribution of those estimable functions of β that do not depend on the parameters that are forced into the model to deal with the sampling scheme. In other words, we are interested in functions γ µ = γ Xβ for which γ Z = 0. Moreover, to avoid trivial cases, ˆ. The asymptotic we assume that γ X = 0. The MLE of µ is denoted µ ˆ will be related to the function variance of γ µ σ 2 (γ µ ˆ) = γ A(m)D−1 (m)γ where A(m) = X(X D(m)X)−1 X D(m). This function can be estimated by σ ˆ 2 (γ µ ˆ) = γ A(m)D ˆ −1 (m)γ ˆ . Also of interest are the asymptotic distributions of the Pearson test statistic X 2 and the likelihood ratio test statistic G2 for testing model (1) against a reduced model µ = Wδ (2) where C(Z) ⊂ C(W ) ⊂ C(X). Let r = rank(X) − rank(Z) and s = rank(W ) − rank(Z). The asymptotic results require one more concept. Let Az = Z(Z D(m)Z)−1 Z D(m) and let N (Az ) be the null space of Az , i.e., N (Az ) = {x|Az x = 0}. If we write a vector in C(X) as x = (x1 , . . . , xq ) , define d to be 0 1 d = sup |xi | x D(m)x : x ∈ N (Az ) ∩ C(X) .
(3)
To get asymptotic results, consider a sequence of log-linear models indexed by t. Thus, the log-linear models are µt = Xt βt where µt = log(mt ) and C(Zt ) ⊂ C(Xt ). Our estimable functions of interest ˆt . Similarly, are γt µt , where γt Zt = 0. The MLE of γt µt is γt µ ˆt ) = γt At (mt )D−1 (mt )γt σ 2 (γt µ
and
ˆt ) = γt At (m ˆ t )D−1 (m ˆ t )γt . σ ˆ 2 (γt µ
The reduced model of interest in testing models is µt = Wt δt with C(Zt ) ⊂ C(Wt ) ⊂ C(Xt ). The ranks are rt = rank(Xt )−rank(Zt ) and st = rank(Wt ) − rank(Zt ). Note that rt is also the rank of N (Azt ) ∩ C(Xt ), cf. Proposition 11.6.5. Finally, 0 1 dt = sup |xi | x D(mt )x : x ∈ N (Azt ) ∩ C(Xt ) . Obviously, the standard asymptotic results will not hold for all sequences of log-linear models. There must be some restrictions on the models. The restriction involves dt . Theorem 11.6.1. If rt dt → 0 as t → ∞, then L (a) [γt µ ˆt − γt µt ] σ 2 (γt µ ˆt ) → N (0, 1), P
ˆt )/σ 2 (γt µ ˆt ) → 1. (b) σ ˆ 2 (γt µ If rt dt → 0 as t → ∞, then L [γt µ ˆt − γt µt ] σ ˆ 2 (γt µ ˆt ) → N (0, 1) .
Corollary 11.6.2.
Let G2 and X 2 be the likelihood ratio and Pearson test statistics for testing model (2) against model (1). Theorem 11.6.3. If rt dt → 0 and rt − st → f as t → ∞ and if µt ∈ C(Wt ) for t ≥ 0, then L
(a) G2t → χ2 (f ), L
(b) Xt2 → χ2 (f ), P
(c) G2t − Xt2 → 0. These results imply the usual asymptotic results, cf. Chapter 12. In the usual results, replace the index t by the sample size N . For all N , we have ZN = Z, WN = W , XN = X, and γN = γ. Moreover, rN = r,
rN − sN = r − s, and mN = N m∗ , where m∗ is defined as in Section 10.3. With these adjustments, Theorems 1 and 3 give the standard results. For the theorems to apply, we need rN dN → 0. Because rN is fixed, this is simply the condition that dN → 0. To see that dN → 0, use the CauchySchwartz inequality. Let ei = (0, . . . , 0, 1, 0, . . . , 0) where the 1 is in the ith place. Thus, for any vector x, xi = ei x. As N → ∞, dN → 0.
Proposition 11.6.4. Proof.
By Cauchy-Schwartz, for x ∈ C(X) (ei x)2
√ √ = ([ei D(1/ mN )][D( mN )x])2 ≤ (ei D(1/mN )ei )(x D(mN )x) .
By the definition of dN , d2N
≤ sup{(ei x)2 /x D(mN )x : x ∈ C(X)} ≤ max ei D(1/mN )ei i
= N −1 max ei D(1/m∗ )ei . i
Because maxi ei D(1/m∗ )ei is a fixed positive constant, d2N → 0; thus, dN → 0. 2 A primary use of these theorems is in their application to logistic regression models. In logistic regression, the model is η = Xβ, where η = (log(m11 /m21 ), . . . , log(m1t /m2t )) . The equivalent log-linear model is α I X α . µ= ≡ [Z, XL ] β I 0 β As discussed in Section 1, rather than letting the mij ’s get large (i.e., taking additional observations in existing cells), it is more realistic to let the number of cells get large while retaining the same basic structure in the logistic regression model. In other words, as the sample size increases, we add new rows to the design matrix while the expected counts in each cell are allowed to remain small. Setting the notation for this case, µt is a 2t × 1 matrix, It Xt αt µt = (4) It 0 βt where It is a t × t identity matrix, 0 is a t × p matrix of zeros, and Xt is a t × p design matrix for the logistic regression model. We assume that for
382
11. The Matrix Approach to Logit Models
t large enough, the matrix Xt has rank(Xt ) = p. For testing (4) against a reduced model, write the reduced model as It X0t αt (5) µt = It 0 ηt where C(X0t ) ⊂ C(Xt ) and rank(X0t ) = p0 . Note that the full model is a log-linear model where the rank of the design matrix is t + p and rt = (t + p) − t = p. Similarly, for the reduced model, st = (t + p0 ) − t = p0 . As a practical matter, we never really have a sequence of models. We have one model. Thus, t is fixed (it is the number of binomial populations in the logistic regression). To apply Theorem 11.6.1, we need rt dt → 0; thus, if rd is small for our model, we use the approximation γµ ˆ − γµ ∼ N (0, 1) . σ ˆ 2 (γ µ ˆ) To apply Theorem 11.6.3, we need rt dt → 0 and (rt − st ) → f . In our current setup, rt − st = p − p0 , which is fixed; so, again, if rd is small and the reduced model is adequate, we use the approximations G2 ∼ χ2 (p − p0 ) and X 2 ∼ χ2 (p − p0 ) . It remains for us to get a handle on what conditions are necessary to have rt dt → 0. Before doing that, we comment on why lack of fit tests often work poorly for logistic regression. The standard test for lack of fit of a logistic regression model is to test a model against the saturated log-linear model. To simplify notation, we will use model (5) as the model to be tested, but now, instead of testing model (5) against a model with similar structure like model (4), we test it against the saturated model. The saturated model can be written I I αt . µt = t t It 0 βt The difference between the saturated model and model (4) is that Xt is replaced by It . Whereas rank(Xt ) = p, we now have rank(It ) = t. With the saturated model as the full model, we have rt = t. For Theorem 11.6.3 to apply, we need rt dt → 0 and, for some value f , rt − st → f . First, rt −st = t−p0 → ∞, so there is no appropriate number of degrees of freedom for the asymptotic test. More importantly, because the condition rt dt → 0 is the condition used to ensure that the model behaves well asymptotically and because the saturated model has rt dt = tdt , for the saturated model to behave well asymptotically it must have dt → 0 very rapidly. Under standard sampling schemes, this does not happen and Theorem 11.6.3 does
11.6 Asymptotic Results
383
not apply. This is not to say that the lack of fit test will always work poorly. If the expected counts in each cell are large, we do not need to appeal to the sequence of models argument and, thus, the usual asymptotic results give an approximate χ2 distribution for the lack of fit test. However, if expected cell counts are not large, there is no reason to believe that the asymptotic lack of fit test will work well and experience indicates that it does not work well. A condition under which rt dt → 0 for a sequence of logistic regression models is that the logistic regression leverages aii all approach zero while the bi ’s do not, cf. equation (11.1.2). The remainder of this section is devoted to mathematical details associated with this demonstration. It requires a facility with linear models comparable to that developed in Christensen (1996b). The primary result is finding an explicit form for (A − Az )D−1 (m). With rt = p for all t, it suffices to show that dt → 0. To do this, we will characterize d for an arbitrary logistic regression model. The expected cell counts are m = (m11 , m21 , . . . , mt1 , m12 , . . . , mt2 ) . Write mj = (m1j , . . . , mtj ) for j = 1, 2 so that m = (m1 , m2 ). For a logistic regression model, write I X W ≡ X∗ = , I 0 so the symbol W is playing the role generally reserved for X in a log-linear model. Write A = W (W D(m)W )−1 W D(m). For logistic regression, the value of d is defined as the sup of a function of w over all w’s in N (Az ) ∩ C(W ). First, we need to characterize N (Az ) ∩ C(W ). N (Az ) ∩ C(W ) = C(A − Az ).
Proposition 11.6.5.
Proof. A is a projection operator onto C(W ). In particular, for w ∈ C(W ), Aw = w and AA = A. Similarly, Az is a projection operator onto C(Z). Moreover, AAz = Az . If w ∈ N (Az ) ∩ C(W ), then (A − Az )w = Aw − Az w = Aw = w; thus, w ∈ C(A − Az ). If w ∈ C(A − Az ), clearly w ∈ C(W ). We need to show that w ∈ N (Az ). The matrix (A − Az ) is a projection operator, so (A − Az )w = w and, thus, 2 w = (A − Az )w = Aw − Az w = w − Az w, so Az w = 0. We now examine the behavior of d. For i = 1, . . . , 2t, let ei = (0, . . . , 0, 1, 0, . . . , 0) where the 1 is in the ith place. By (3), 1 0 d = sup max |ei x| x D(m)x : x ∈ C(A − Az ) . i
384
11. The Matrix Approach to Logit Models
Because x ∈ C(A − Az ) can be written as x = (A − Az )b for some b, write 0 1 d = sup max |ei (A − Az )b| b (A − Az ) D(m)(A − Az )b : all b . i
Note that |ei (A − Az )b| = |ei (A − Az )(A − Az )b| √ √ = |[ei (A − Az )D−1 ( m)][D( m)(A − Az )b]| ei (A − Az )D−1 (m)(A − Az ) ei b (A − Az ) D(m)(A − Az )b ≤ where the inequality is just the Cauchy-Schwartz inequality. It follows that d ≤ max i
Proposition 11.6.6.
ei (A − Az )D−1 (m)(A − Az ) ei .
(A − Az )D−1 (m)(A − Az ) = (A − Az )D−1 (m).
Proof. Using the definitions of A and Az , the fact that (A − Az )Z = 0, and that AZ = Z, so Z A = Z , we find (A − Az )D−1 (m)(A − Az ) = (A − Az )D−1 (m)D(m)[W (W D(m)W )−1 W − Z(Z D(m)Z)−1 Z ] = (A − Az )[W (W D(m)W )−1 W − Z(Z D(m)Z)−1 Z ] = (A − Az )[W (W D(m)W )−1 W ] = A[W (W D(m)W )−1 W ] − Az [W (W D(m)W )−1 W ] = [W (W D(m)W )−1 W ] − Z(Z D(m)Z)−1 Z D(m)[W (W D(m)W )−1 W ] = AD−1 (m) − Z(Z D(m)Z)−1 Z A = AD−1 (m) − Z(Z D(m)Z)−1 Z = AD−1 (m) − Az D−1 (m) = (A − Az )D−1 (m).
2
Using Proposition 11.6.6, we now have d ≤ max i
ei (A − Az )D−1 (m)ei .
(6)
11.6 Asymptotic Results
385
So far, we have not used the logistic regression structure of the log-linear model. We now use the special structure of logistic regression to further characterize inequality (6). Proposition 11.6.7. D(m1 )D−1 (m1 + m2 ) D(m2 )D−1 (m1 + m2 ) Az = . D(m1 )D−1 (m1 + m2 ) D(m2 )D−1 (m1 + m2 ) Proof. This follows immediately from Lemma 11.1.6, the definitions of Z and Az , and the fact that D(m1 ) 0 D(m) = . 2 0 D(m2 ) To simplify notation, for vectors v = (v1 , . . . , vq ) and u = (u1 , . . . , uq ) , let vu = (v1 u1 , . . . , vq uq ) and v/u = (v1 /u1 , . . . , vq /uq ) . As in (11.1.2), b ≡ m1 m2 /(m1 + m2 ). Proposition 11.6.8. A − Az D(m2 /(m1 + m2 ))X = [X D(b)X]−1 [X D(b) , −X D(b)] . −D(m1 /(m1 + m2 ))X √ Proof. The perpendicular projection operator onto C(D( m)W ) is √ √ −1 D( m)AD ( m)√≡ Mw and the √ perpendicular projection operator onto √ C (D( m)Z) is D( m)Az D−1 ( m) ≡ Mz . It follows that √ √ √ √ Mw − Mz = D( m)AD−1 ( m) − D( m)Az D−1 ( m) √ √ = D( m)(A − Az )D−1 ( m) . √ √ If we can find Mw − Mz , then A − Az = D−1 ( m)(Mw − Mz )D( m). The matrix Mw − Mz is the perpendicular projection operator onto √ X C (I − Mz )D( m) , 0 cf. Christensen (1996b, Sections 9.1, 9.2). √ X (I − Mz )D( m) 0 √ √ D( m1 )X D(m1 m1 /(m1 + m2 ))X √ = − 0 D(m1 m2 /(m1 + m2 ))X √ D( m1 m2 /(m1 + m2 ))X √ = . −D( m2 m1 /(m1 + m2 ))X
386
11. The Matrix Approach to Logit Models
Multiplying out to get the perpendicular projection operator and simplifying gives the result. 2 Finally, the main result is Proposition 11.6.9. (A − Az )D
−1
D(m2 /(m1 + m2 ))X (m) = [X D(b)X]−1 −D(m1 /(m1 + m2 ))X × [X D(m2 /(m1 + m2 )) , −X D(m1 /(m1 + m2 ))]
Proof.
Multiply A − Az by D−1 (m).
2
We can now examine the exact nature of ei (A − Az )D−1 (m)ei . Write x 1
x2 X= .. ; . xt then for i = 1, . . . , t, let j = i and ei (A − Az )D−1 (m)ei = [mj2 /(mj1 + mj2 )]2 xj [X D(b)X]−1 xj , for i = t + 1, . . . , 2t, let j = i − t and ei (A − Az )D−1 (m)ei = [mj1 /(mj1 + mj2 )]2 xj [X D(b)X]−1 xj . If these terms approach zero for a sequence of logistic regression models, then inequality (6) implies that Theorems 11.6.1 and 11.6.3 hold. In practice, if these terms are small for all i, then Theorems 11.6.1 and 11.6.3 should provide reasonable approximate distributions. The key (cf. Exercise 11.1) is that the X matrix needs to have the property that xi [X D(b)X]−1 xi is small for all i. This condition can also be related to linear model theory. If the elements of m1 and m2 are bounded above zero, then the terms xi [X D(b)X]−1 xi will converge to zero if and only if the terms xi [X X]−1 xi converge to zero. The condition that the terms xi [X X]−1 xi all converge to zero is known as Huber’s condition. This is the condition assumed by Arnold (1981) to show that the usual distributions hold asymptotically for linear models with non-normal independent errors. Similarly, if Huber’s condition holds
11.7 Discrimination, Allocation, and Retrospective Data
387
for a logistic regression model, then the usual asymptotic results for logistic regression hold. Note that Huber’s condition is sufficient to imply that asymptotic results hold; it is not a necessary condition. Exercise 11.1. Show for logistic regression that dt → 0 as the maximum leverage goes to zero.
11.7
Discrimination, Allocation, and Retrospective Data
The Chapman data of Section 4.1 are the result of a prospective study; a large number of people were sampled and they were classified by whether they had experienced a coronary incident and by their values on the variables age, systolic blood pressure, diastolic blood pressure, cholesterol, height, and weight. Only 26 of the 200 people had coronary incidents, so most of the information in the data is about people who did not have coronaries. Retrospective studies are commonly used to examine events that are relatively rare, like coronary incidents. They address the problem of having a sample that contains few observations on the rare event. Consider a response variable with three levels: no incident, mild coronary incident, and severe coronary incident. One might take a sample of 125 people with no incidents, a sample of 40 people with mild incidents, and a sample of 35 people with severe coronary incidents. Thus, the sample sizes in the rare event categories are fixed by design. Once again, the case variables age, systolic blood pressure, diastolic blood pressure, cholesterol, height, and weight can be measured for each of the 200 individuals. When the response categories are also the populations sampled, it is easier to get substantial numbers of observations in each response category. Prospective and retrospective studies were discussed earlier at the beginning of Chapter 4. While all of the case variables discussed above are really continuous, there are only a finite number of values that one could actually measure for any of the variables. For example, one often measures age in integer values of years and height in integer values of inches. Moreover, there are upper bounds on these values. Similar limitations based on the accuracy of the measuring instruments exist for all continuous variables. Thus, there are only a finite number of combinations of the case variables that can be considered. Call this very large but finite number S. The retrospective study described above yields a 3×S table in which each of the three rows is an independent multinomial sample. We wish to model these multinomials so that we can explain the data and, perhaps more importantly, predict the population into which a new case would fall when only the information on the case variables is available. The modeling problem can be thought of
388
11. The Matrix Approach to Logit Models
as discriminating among the three populations. The prediction problem is one of allocating new cases to the appropriate population. Consider the allocation problem in more detail. Write the case variables as a vector xi = (Agi , Si , Di , Chi , Hi , Wi ) , i = 1, . . . , S. The value phi is the probability under population h of being in the category determined by xi . Here, h = 1, 2, 3 and i = 1, . . . , S. Write f (xi |h) = phi . The function f (xi |h) is just the discrete density (probability mass function) of population h. The value of h is a parameter. Given a new case with known case variables x in one of the S observable categories, we can view f (x|h) as a likelihood function. The maximum likelihood allocation rule assigns the new case x to the population h that has the largest value for the likelihood. The only problem with this procedure is that the probabilities f (x|h) are not known. They have to be estimated from the data. S is typically extremely large, so there are typically many more parameters (probabilities) to be estimated than observations with which to estimate them. Some sort of additional assumptions must be made in order to proceed. One way to proceed is to abandon the fact that the observed data are discrete and assume a continuous density f (x|h) as a model for the observations. If the underlying but unobservable case variables are continuous, this is extremely reasonable. In fact, it is so reasonable that people often overlook the fact that it is an approximation to what is properly a discrete distribution for the observations. The problem is not in approximating a discrete distribution with a continuous distribution but in finding a continuous distribution that provides an appropriate model. Traditionally, the case variables have been modeled with a multivariate normal distribution. For multivariate normals, estimating the mean vector and the covariance matrix for each population leads to natural estimates for the f (x|h)’s. This approach to discrimination and allocation is originally due to R. A. Fisher (1936). More recently, nonparametric density estimation has been used to model the distributions, cf. Seber (1984, Section 6.5). Rather than invoking continuity, another way to proceed is to cut down the number of categories to a manageable size. Rather than using all S categories, one can restrict attention to the x values that were actually observed. In our hypothetical example, if all the xi ’s are distinct, this restriction yields a 3 × 200 table. The xi ’s are frequently distinct when any of the case variables are continuous, but if they are not distinct, it simply reduces the number of columns in the table. We will assume that the xi ’s are distinct. The original sampling scheme for the 3×S table was product-multinomial in the rows and the standard method of analysis, as illustrated in Section 4.7, also treats the 3 × 200 table as product-multinomial in the rows. Alas, restricting the table invalidates this product-multinomial sampling scheme for the reduced table. For example, under product-multinomial sampling,
11.7 Discrimination, Allocation, and Retrospective Data
389
there is a positive probability of getting zeros for all three of the counts in a given column. We are using only the xi ’s that are actually observed, so every column must have at least one observation in it. There are two ways to analyze the data in the 3×200 table. The standard method illustrated in Section 4.7 is both a partial likelihood analysis and an extended likelihood analysis. Alternatively, one can perform a conditional likelihood analysis to obtain information on some parameters. Partial likelihood analysis depends on having a likelihood that can be factored into the product of two terms. One term, the partial likelihood, must involve all of the parameters of interest and only those parameters. The second term involves only nuisance parameters. Without loss of generality, assume that the actual observations occur in the first 200 categories. For each population h, let ph = (ph1 , . . . , phS ) be the probability vector and let Nh ≡ nh· be the sample size. The likelihood is L(p1 , p2 , p3 )
=
3 S
pnhihi
h=1 i=1
% =
200 3
&% pnhihi
h=1 i=1
% =
3 200
&
S 3
& pnhihi
h=1 i=201
pnhihi
h=1 i=1
where the last equality holds because for i > 200, nhi = 0 and any loglinear model has phi > 0 for all h and i. With all the zero counts, the likelihood cannot be maximized subject to the condition of positive cell probabilities. However, this function is also the partial likelihood involving only the parameters phi , h = 1, 2, 3, i = 1, . . . , 200. As such, it can be maximized. Technically, write % L(p1 , p2 , p3 ) =
200 3
& pnhihi
· Γ(phi : h = 1, 2, 3; i = 201, . . . , S)
h=1 i=1
where Γ(phi : h = 1, 2, 3; i = 201, . . . , S) ≡ 1. We have factorized the likelihood appropriately, so the partial likelihood for phi : h = 1, 2, 3, i = 1, . . . , 200 is 200 3 h=1 i=1
pnhihi .
390
11. The Matrix Approach to Logit Models
Obviously, the log of the partial likelihood is 200 3
nhi log(phi ) .
h=1 i=1
We now incorporate a log-linear model into the analysis. Assume a typical multinomial response model for the parameters mhi = Nh phi that consists of
log(mhi ) = αi + xi γh
(1)
for all h and i, cf. model (11.5.4). Dropping a constant that depends only on the Nh ’s, the log-likelihood becomes (γ1 , γ2 , γ3 , αi , i = 1, . . . , S) =
S 3
nhi log(mhi )
h=1 i=1
=
S 3
nhi [αi + xi γh ]
h=1 i=1
=
200 3
nhi [αi + xi γh ]
h=1 i=1
where, again, the last equality follows from the fact that nhi = 0 for i = 201, . . . , S. In the 3 × S table, the row totals are fixed by the productmultinomial sampling scheme. The standard analysis of the 3 × 200 table also treats the rows as product-multinomials, so the row totals are fixed. Fixing the row totals requires the inclusion of main effects for rows in the log-linear model. These can be incorporated into the xi γh terms. Write xi = (1, xi2 , . . . , xip ), where the case variables are xi2 , . . . , xip . Then with γh = (γh1 , . . . , γhp ) , the intercept parameters γh1 are the row main effects. For a partial likelihood analysis, observe that the log-likelihood is the sum of two terms, one of which depends on the parameters of interest γ1 , γ2 , γ3 , αi , i = 1, . . . , 200, and another, which in this case is identically equal to zero, that depends only on αi , i = 201, . . . , S. (The second function is identically zero, so it depends on anything we want it to depend on.) A partial likelihood analysis then obtains estimates of γ1 , γ2 , γ3 , αi , i = 1, . . . , 200, by maximizing the term that involves only those parameters. Of course, the only difference between the log partial likelihood and the log-likelihood is that the log partial likelihood is considered as a function of fewer parameters. In particular, the log partial likelihood is exactly the same as the log-likelihood for the 3 × 200 table under product-multinomial sampling. Thus, the MLEs from the standard analysis are maximum partial likelihood estimates for the full 3 × S table.
11.7 Discrimination, Allocation, and Retrospective Data
391
Another productive way to use the log-likelihood is to consider extended maximum likelihood estimates, cf. Haberman (1974a, p. 402). An estimate m ˆ is an extended maximum likelihood estimate if the log-likelihood (m) converges to its supremum as m converges to m. ˆ In this setup, the usual MLEs from the 3 × 200 table are extended MLEs. The log-likelihood function only depends on γ1 , γ2 , γ3 , αi , i = 1, . . . , 200, so the reduced table MLEs together with pˆhi = 0 for h = 1, 2, 3, i = 201, . . . , S maximize the full table log-likelihood function subject to the constraints pˆh· = 1. The only problem is that log-linear models do not allow pˆhi = 0 for any h, i. Allowing extended MLEs removes the problem. Whether the justification is partial likelihood or extended likelihood, we arrive at an analysis based on the MLEs for the 3 × 200 table with productmultinomial sampling in the rows. Details of the analysis are given in the next subsection. The conditional likelihood analysis simply defines the likelihood in terms of the conditional distribution of the 3×200 table given that these were the only 200 columns of the 3 × S table that were observed. The conditional likelihood of the 3 × 200 table is 3 200 200 3 pnhihi prhihi (2) r
h=1 i=1
h=1 i=1
where the sum is over all 3 × S tables of counts r = (r11 , . . . , r3S ) with rh· = nh· = Nh , r·i = n·i = 1, r·i = 0,
h = 1, 2, 3 , i = 1, . . . , 200 ,
i = 201, . . . , S .
It is not difficult to see that the conditional likelihood does not depend on the αi ’s or the intercept terms γh1 , cf. Exercise 11.8.4. Thus, any inferences that require estimates of these quantities cannot be made using the conditional likelihood approach. In particular, it will be seen in the next subsection that allocation of observations depends on the vectors γh , including the components γh1 . Thus, the conditional likelihood approach cannot be used for allocation. The key early paper on logistic discrimination was written by Anderson (1972). Anderson and Blair (1982) clarified several aspects of the theory and introduced another basis for analysis: penalized maximum likelihood. Some other relevant works are Farewell (1979), Prentice and Pyke (1979), and Breslow and Day (1980).
The Partial Likelihood Analysis We have a vector of allocation variables xi = (xi1 , . . . , xip ) that are observed on each of t individuals. Thus, far in the section, we have always
392
11. The Matrix Approach to Logit Models
used t = 200, but the conclusions hold for any value of t. In addition to observing the xi ’s, we know to which of the three populations each individual belongs. Our goal is to use the information on these t individuals in order to allocate future individuals into an appropriate population. To do this, we set up a model similar to the standard multinomial response model. Let x1 .. X= . . xt
Our data are n = (n11 , . . . , n1t , n21 , . . . , n2t , n31 , . . . , n3t ) where nhi = 1 if the ith case belongs to population h and nhi = 0 otherwise. For a prospective multinomial response model, the 3 × t table either is or can be considered to be the result of taking t independent trinomial samples and can be analyzed by standard logistic regression. With the retrospective sampling appropriate for discrimination problems, the sampling scheme is the result of taking three independent multinomial samples each with S categories where S is large and unknown. As discussed above, for the purpose of estimation the samples can be treated as three independent multinomials with t categories. The standard prospective approach assumes that column totals of the 3 × t table are fixed by the sampling scheme, whereas the retrospective (discrimination) approach assumes that the row totals are fixed by the sampling scheme. With a 3 × t table in which the row totals are fixed, indicator variables must be included in the model to ensure that the estimated row totals equal the observed row totals. This is accomplished by requiring that the X matrix include a column of 1s (or its equivalent). In other words, for logistic models, the sampling scheme requires that models for discrimination data include intercepts. It is a common practice to include an intercept in multinomial response models, so the fact that an intercept is required for retrospective data is easily overlooked. The log-linear model log(mhi ) = αi + xi γh for the 3 × t table is written in matrix form as α It X 0 0 γ log(m) = µ = It 0 X 0 1 . γ2 It 0 0 X γ3 This model is of exactly the same form as a standard multinomial response model and is fit in exactly the same way. The difference is in the interpretation of the underlying probabilities. In prospective sampling, the multinomial expected cell counts are mhi = n·i phi where p·i = 1. For retrospective
11.7 Discrimination, Allocation, and Retrospective Data
393
sampling, mhi = nh· phi = Nh phi where ph· = 1 . In particular,
log(phi ) = αi + xi γh − log(nh· ) .
The MLE of log(phi ), under the device of treating the sampling as productmultinomial in the rows of the 3×t table, is both the maximum partial likelihood estimate and an extended maximum likelihood estimate of log(phi ) in the full 3 × S table. The maximum likelihood allocation method applied to the observed data assigns case i to the population h with log(phi ) = maxk {log(pki )}. The estimated maximum likelihood allocation method assigns i to the population h with log(ˆ phi ) = max{log(ˆ pki )}. (3) k
Note that equation (3) is equivalent to αi + xi γˆk − log(nk· )} α ˆ i + xi γˆh − log(nh· ) = max {ˆ k
which is equivalent to xi γˆh − log(nh· ) = max {xi γˆk − log(nk· )} . k
(4)
Equation (4) does not depend on the α ˆ i ’s; thus, the allocation procedure depends on the individual only through the value of xi . Equation (4) can also be used as the basis for classifying new cases from an unknown population into one of the three possible populations. If the new case has observation vector x, the estimated maximum likelihood allocation rule is to classify the new case into population h if x γˆh − log(nh· ) = max {x γˆk − log(nk· )} . k
This result depends on the fact that the allocation rule is really a function of the likelihood ratios of the various populations. The likelihood ratios depend only on x, the γ’s, and the log(nh· )’s. All of these are either known or can be estimated. For a given value of x, the corresponding value of α does not enter into the analysis. Moreover, as illustrated in Section 4.7, if the likelihood ratios can be estimated, the posterior probabilities can also be estimated. To evaluate how well the model discriminates between populations, check to see how often the cases in the data are allocated to the correct population. In other words, when case i is really in population h, see how often
394
11. The Matrix Approach to Logit Models
the probability that case i comes from population h is larger than the probabilities that case i comes from any of the other populations. Because the evaluation is carried out on the same data that generated the estimates of the phi ’s, the results of the evaluation will be biased in favor of the discrimination method; i.e., the method will look better than it really is. See Section 4.7 for more discussion of this problem.
11.8
Exercises
Exercise 11.8.1. Analyze the data of Exercise 8.4.1 as a logistic regression with nodal involvement as the response. Include the investigation of higher-order interactions in your analysis. The original investigator was particularly interested in whether acid was a valuable predictor of nodal involvement. Exercise 11.8.2. Asymptotic Inference for the LD(50). In Exercise 4.8.9, models and methods for estimating the LD(50) were discussed. Use the delta method of Exercise 10.8.4 to obtain an asymptotic standard error for the LD(50). Using the data of Exercise 4.8.11, give a 99% confidence interval for the LD(50). Exercise 11.8.3. Fieller’s Method for the LD(50). Fieller’s method is an alternative to the delta method for obtaining an asymptotic confidence interval for the LD(50), cf. Exercise 11.8.2. Fieller’s method is thought to be less sensitive to the high correlation that is typiˆ From standard results, one can obtain the cally present between α ˆ and β. ˆ thus, for any estimated asymptotic variance and covariance for α ˆ and β; ˆ is fixed but unknown value w, an asymptotic standard error for α ˆ + βw readily available as a function of w. Denote this standard error by σ ˆ (w). If α + βw is some known value Q, a 99% confidence region for w can be obtained from ˆ −Q (ˆ α + βw) .99 = Pr −2.5758 ≤ ≤ 2.5758 σ ˆ (w) ˆ − Q)2 − 2.57582 σ = Pr (ˆ α + βw ˆ 2 (w) ≤ 0 . The 99% confidence region consists of all values of w that satisfy ˆ − Q)2 − 2.57582 σ (ˆ α + βw ˆ 2 (w) ≤ 0. Show how to use the quadratic formula to find the end points of the region. Under what conditions is the confidence region an interval? What other possibilities exist? How does this result apply to estimating the LD(50)?
11.8 Exercises
395
Using the data of Exercise 4.6.11, give a 99% confidence interval for the LD(50). Exercise 11.8.4. Show that the conditional likelihood given by equation (11.7.1) and display (11.7.2) does not depend on the αi ’s or the intercept terms γh1 . Here, xi = (1, xi2 , . . . , xik ) and γ = (γh1 , . . . , γhk ) . Exercise 11.8.5. Use the delta method of Exercise 10.8.4, the logistic transform, and (11.1.3) to show that pˆij − pij ∼ N (0, 1) , pˆij (1 − pˆij )ˆ aii /Ni where Ni = ni1 + ni2 .
12 Maximum Likelihood Theory for Log-Linear Models
This chapter presents the basic theoretical results of fitting log-linear models by maximum likelihood. The level of mathematical sophistication is considerably higher than in the rest of the book. The presentation assumes knowledge of advanced calculus, mathematical statistics, and large sample theory. Although the results in this chapter are proven in a different manner than for regular linear models, the results themselves are quite similar in nature. The common linear structure of the two techniques leads to the well-known analogies between them. A familiarity with log-linear models at the level of, say Fienberg (1980), is assumed. In order to simplify proofs, the o(·), O(·), op (·), Op (·) notations have been used extensively. See Bishop, Fienberg, and Holland (1975) for a detailed discussion of these. Section 1 introduces notation and recalls some analytic results. Section 2 discusses finite sample properties of maximum likelihood estimators. Section 3 treats asymptotic results. Section 4 examines how the theory applies to weighted least squares, obtaining variance estimates, and logit and multinomial response models. Section 5 contains two proofs that are more involved than the rest of the chapter.
12.1
Notation
All vectors are considered as column vectors unless otherwise stated.
12.2 Fixed Sample Size Properties
397
We will frequently apply the same real valued function to each element of a vector. Let x be a vector in Rs and f a function from R to R, then the function from Rs to Rs that maps x elementwise is denoted f (x) = f (x1 ), . . . , f (xs ) . The most common choice of f will be the log function; thus, log(x) = (log x1 , . . . , log xs ) . A diagonal matrix with the values of the vector x on the diagonal will be written D(x). As usual, an s × 1 vector of ones is written Js with the subscript dropped when the dimension is clear. Proofs using the op (·), Op (·) notation are usually divided into an analytic argument and a stochastic argument. To reduce the length of the discussion, these arguments have frequently been run together. Therefore, if f (x) = o(g(x)) and XN is a sequence of random variables, we write f (XN ) = o(g(XN )). Two frequently used properties are (a) o(Op (N −α )) = op (N −α ) for any α > 0 and (b) o(op (1)) = op (1). For example, if g(XN ) = op (1), we have f (XN ) = o(g(XN )) = o(op (1)) = op (1). If F is a function from Rs into Rt with F (x) = (f1 (x), . . . , ft (x)) , then the derivative of F at c is the t × s matrix of partial derivatives, dF (c) = [∂fi /∂xj |x=c ]. If g maps Rs into R, then dg(c) is a 1×s row vector. The second derivative matrix of g at c is ! d2g(c) = d[(dg(x)) |x=c ] = ∂ 2g/∂xi ∂xj |x=c , which is an s × s matrix. Taylor’s theorem can be written ! g(x) = g(c) + dg(c)(x − c) + (x − c) d2g(c) (x − c)/2 + o x − c2 , where x−c2 = (x−c) (x−c). Critical points are points c, where dg(c) = 0. The chain rule can be written as a matrix product: If f : Rs → Rt and g : Rt → Ru , then d(g ◦ f )(c) = dg(f (c))df (c).
12.2
Fixed Sample Size Properties
Consider a table of counts with q cells. The observations are denoted n = (n1 , . . . , nq ) . The ni ’s are assumed to be the result of Poisson, multinomial, or product-multinomial sampling. Let E(n) = m and let X be a known q×p matrix. The log-linear model is log(m) ≡ µ = Xb for some vector b. The log-linear model is simply the requirement that µ ∈ C(X). Unless otherwise indicated, X will be assumed to have rank p.
398
12. Maximum Likelihood Theory for Log-Linear Models
If the observations ni in the cells are independent Poisson(mi ) random variables, the likelihood function is q
! e−mi mni i/ni ! .
(1)
i=1
The log-likelihood is p (n, µ)
= = =
q i=1 q i=1 q
[−mi + ni log mi − log(ni !)] [−eµi + ni µi − log(ni !)] −eµi + n µ −
i=1
q
(2)
log(ni !).
i=1
If the log-linear model holds, µ = Xb, so p (n, µ) = n Xb −
q i=1
eµi −
q
log(ni !).
i=1
Since the distribution of n is in the exponential family, X n is a complete sufficient statistic. If the observations come from a product-multinomial sampling scheme, certain of the margins are fixed. Assume that there are r independent multinomials. (If r = 1, the sampling scheme is a simple multinomial.) Partition {1, . . . , q} into r sets Q1 , . . . , Qr , each set containing the indices for one of the multinomials. For i = 1, . . . , q, j = 1, . . . , r, let xj be a vector with ith row, xij = δQj (i) where δQj (i) is one if i ∈ Qj and zero otherwise. Thus, xj is a column of dummy variables indicating the jth population. By the sampling scheme, n xj is fixed for j = 1, . . . , r. In particular, n xj = m xj = Nj , the sample size for the jth multinomial. It will be convenient to combine the vectors xj into a matrix, say X0 = [x1 , . . . , xr ]. With product-multinomial sampling, there are two restrictions on the parameters: (a) µ ∈ C(X), and (b) n X0 = m X0 . Estimates of the parameters also need to satisfy these conditions. If we assume that C(X0 ) ⊂ C(X), we will see that the MLE of m, based only on condition (a), will automatically satisfy condition (b). We will assume throughout that C(X0 ) ⊂ C(X). For Poisson sampling, X0 can be taken as a matrix of zeros. We also need the assumption that Jq ∈ C(X). For product-multinomial sampling, Jq ∈ C(X0 ), so this is not a new restriction. For Poisson sampling, we are requiring that an overall mean (or its equivalent) be fitted. Recall that the probability of an occurrence in the ith cell under productmultinomial sampling is mi /Nj , where i ∈ Qj , so the likelihood function
12.2 Fixed Sample Size Properties
is r k=1
mi ni Nk ! . ni ! Nk
399
i∈Qk
(3)
i∈Qk
Let m (n, µ) be the log of (3). For m (n, µ) to be a log-likelihood, µ must have the property that n X0 = m X0 , where m = eµ . In fact, m (n, µ) is only defined for such µ. In particular, the maximum likelihood estimate of µ, under product-multinomial sampling, must be a value of µ for which m (n, µ) is defined. We will expand the domain of m (n, µ) to include all real vectors µ. For a log-linear model µ ∈ C(X) with C(X0 ) ⊂ C(X), we will find the maximum of m (n, µ) without reference to the condition n X0 = m X0 . We will then observe that the value of µ that maximizes m (n, µ) also satisfies the condition n X0 = m X0 . This value of µ must be the maximum likelihood estimate. m We now proceed to expand the domain of (n, µ). Since i∈Qk ni = i∈Qk mi = Nk , (3) can be rewritten as r
Nk !eNk N −Nk k
r
mni i e−mi/ni !
i∈Qk
k=1
or
Nk !eNk Nk−Nk
q
e−mi mni i/ni !
.
(4)
i=1
k=1
The second term in (4) is exactly (1). The first term depends only on the Nk ’s. If we write a(N1 , . . . , Nr ) as the log of the first term we can write the log-likelihood for product-multinomial sampling as m (n, µ) = a(N1 , . . . , Nr ) + p (n, µ). Since m (n, µ) is defined only for values of µ satisfying n X0 = m X0 , this relationship holds only for such values of µ. However, the relationship can be used to define the function m (n, µ) for all values of µ, because p (n, µ) is defined for all values of µ. Since the difference of p (n, µ) and m (n, µ) does not depend on µ, the maximums of the two functions, with respect to µ, will occur at the same place. Rather than using either p (n, µ) or m (n, µ), remove the term in (2) that does not depend on µ to define (n, µ) = n µ −
q
eµi = n µ − J m.
(5)
i=1
For any of the sampling schemes considered, it will be enough to find MLEs by maximizing (n, µ).
400
12. Maximum Likelihood Theory for Log-Linear Models
If the log-linear model µ ∈ C(X) holds, we need to maximize (n, µ) subject to the condition that µ ∈ C(X). Since X is of full rank, µ can be written uniquely as µ = Xb. We need to find the unconstrained maximum of fn (b) ≡ (n, µ). Taking derivatives with respect to b (and µ) dfn (b) = [d(n, µ)] dµ(b), q d(n, µ) = d n µ − eµi = n − (eµ1 , . . . , eµq ) = n − m ,
i=1
and dµ(b) = d(Xb) = X. Substituting, we get
dfn (b) = (n − m) X.
(6)
It should be recalled that m is a function of b (m = m(b)). Critical points are found by setting the partial derivative matrix, dfn (b), equal to zero. As will be seen below, ˆb, the MLE of b, must occur at a critical point, so ˆb must satisfy X m(ˆb) = X n.
(7)
In particular, since C(X0 ) ⊂ C(X), we have X0 m(ˆb) = X0 n. As indicated above, if C(X0 ) ⊂ C(X) the additional restriction on the MLEs from product-multinomial sampling is automatically satisfied. By considering d2fn (b), we can investigate the nature of the critical points. d2fn (b) = d(dfn (b) ) = d(X n − X m(b)) = −X dm(b). Write
x1 X = ... = [xij ] ,
xq
then
m(b) = (m1 , . . . , mq ) = ex1 b , . . . , exq b .
Therefore,
x11 ex1 b .. dm(b) = .
xq1 exq b
· · · x1p ex1 b .. .
· · · xqp exq b
(8)
12.2 Fixed Sample Size Properties
x1 ex1 b x1 m1 = ... = ...
401
xq exq b
(9)
xq mq
= D(m)X, where m = m(b). Substitution into (8) gives d2fn (b) = −X D(m(b))X.
(10)
In a log-linear model, mi is always positive because mi = eµi ; therefore, d2fn (b) is negative definite and fn (b) is strictly convex. If fn (b) takes on its maximum, it must be at a critical point and the maximum is unique. The maximum is the MLE ˆb = ˆb(n). ˆb uniquely determines MLEs µ ˆ=µ ˆ(n) = X ˆb and m ˆ = m(n) ˆ = m(ˆ µ) = m(ˆb). A simple condition exists that ensures that fn (b) takes on its maximum. Theorem 12.2.1. If there exists ξ ⊥ C(X) such that ni + ξi > 0 for i = 1, . . . , q, then (n, µ) = fn (b) attains its maximum. Proof.
(n, µ) = n µ −
q
eµi .
i=1
Let ξ ⊥ C(X), then for µ ∈ C(X), (n, µ) = g(µ) where g(µ)
=
(n + ξ) µ −
q
eµi
i=1
=
q
(ni + ξi )µi − eµi .
i=1
With all the ni + ξi ’s positive, as any µi → −∞, g(µ) → −∞. Similarly, if any µi → ∞, g(µ) → −∞. This establishes that the convex function g(µ) must take on its maximum. Moreover, g(µ) must take on its maximum for µ ∈ C(X) because C(X) is a closed set. 2 In particular, if all the ni ’s are positive, the MLE of µ exists. Henceforth, we assume that the (unique) MLE of µ exists. In finding MLEs of µ and m, we relied on a particular parameterization µ = Xb. Any alternative parameterization µ = Zγ where C(Z) = C(X) is equally valid. Regardless of the parameterization, we are maximizing (n, µ) subject to the constraint that µ ∈ C(X). Since we found a unique MLE µ ˆ, it must be valid for any parameterization. Similarly, m ˆ does not depend on the parameterization. In particular, Z need not be of full column rank.
402
12. Maximum Likelihood Theory for Log-Linear Models
When Z is not of full rank, γ is not estimable. The problem of estimability can be handled as it is for standard linear models. For testing hypotheses, the likelihood ratio test statistic (LRTS) is often used. To test H0 : µ = µ0 versus HA : µ = µ0
where µ0 ∈ C(X),
ˆ)]. For testing the LRTS is −2[(n, µ0 ) − (n, µ / C(X1 ) H0 : µ ∈ C(X1 ) versus HA : µ ∈
where C(X1 ) ⊂ C(X),
(11)
ˆ)], where µ ˆ1 is the MLE of µ for the logthe LRTS is −2[(n, µ ˆ1 ) − (n, µ linear model µ ∈ C(X1 ). [The reduced model is also assumed to have C(X0 ) ⊂ C(X1 ) and J ∈ C(X1 ).] In both cases, the LRTS simplifies considerably. We investigate the LRTS for hypothesis (11). From (5), (n, µ ˆ1 ) − (n, µ ˆ) = n (ˆ µ1 − µ ˆ ) − J (m ˆ 1 − m). ˆ ˆ 1 − m) ˆ = J n − J n = 0. Since J ∈ C(X1 ) ⊂ C(X), from (7) we get J (m Substitution gives ˆ) = n (ˆ µ1 − µ ˆ). (n, µ ˆ1 ) − (n, µ ˆ ∈ C(X), (7) also gives Since µ ˆ1 , µ µ1 − µ ˆ) = n M (ˆ µ1 − µ ˆ) = m ˆ M (ˆ µ1 − µ ˆ) = m ˆ (ˆ µ1 − µ ˆ), n (ˆ where M = X(X X)−1 X is the perpendicular projection matrix onto C(X). Thus, the LRTS is
µ1 − µ ˆ)] = 2 − 2[m ˆ (ˆ
q
m ˆ i log(m ˆ i /m ˆ 1i ).
(12)
i=1
Contingency tables with structural zeros (mi = 0) frequently occur. Under the sampling schemes considered here, mi = 0 implies that Pr(ni = 0) = 1; therefore, without loss of generality, such cells can simply be dropped from the model and the theory does not change.
12.3
Asymptotic Properties
In establishing asymptotic properties, we let N go to infinity, where N is the total sample size in product-multinomial sampling and the sum of the expected values for the q cells in Poisson sampling. For product-multinomial sampling, we assume that the probabilities in each cell remain constant and that the individual sample sizes for each population remain in a fixed proportion; i.e., Nj /N remains constant. For Poisson sampling, we assume that the ratio of any pair of expected values remains constant.
12.3 Asymptotic Properties
403
Terminology appropriate for product-multinomial sampling will be used, but the results also hold for Poisson sampling. N will be referred to as the sample size. nN is a sample of size N with E(nN ) = mN . By our assumptions, mN = N m∗ for some vector m∗ . For multinomial sampling, m∗ is the vector of probabilities. For product-multinomial sampling, m∗ is the vector of normalized probabilities. The probabilities q are normalized by the relative sizes of the populations so that J m∗ = i=1 m∗i = 1. In particular, if i is a cell in the jth multinomial population, m∗i is pi (Nj /N ) where pi is the probability of getting an observation from the jth population in the ith cell. For Poisson sampling, m∗ is the vector of expected values divided by N , thus mN = N m∗ . Note that for all sampling schemes, J mN = N , hence J m∗ = 1. For a log-linear model to be valid, we need additional restrictions on mN . In particular, writing µN = log(mN ), we need µN ∈ C(X). Since mN = N m∗ , we have µN = log(mN ) = (log N )J + log(m∗ ). Since J is assumed to be in C(X), it is sufficient to require log(m∗ ) ∈ C(X). Henceforth, we make the assumption that for some b∗ , log(m∗ ) = Xb∗ . Define µ∗ = Xb∗ ; then, µN = µ∗ + (log N )J. Some special matrices will be used frequently in the sequel. Let D = D(m∗ ) and A = X(X DX)−1 X D. Let A0 and A1 be defined as A, with X0 and X1 replacing X. Note that A is a projection matrix onto C(X). (In fact, A is the perpendicular projection matrix for the inner product determined by D.) A has the same form as a matrix giving best linear unbiased estimates√in a regular linear model with covariance matrix D−1 . Taking D1/2 = D( m∗ ) we see that P = D1/2 AD−1/2 is the perpendicular projection matrix onto C(D1/2 X) (with the usual inner product). We first consider properties of nN for large samples. These results are well known but necessary for the rest of the development. Theorem 12.3.1. (1) N −1 nN → m∗ a.s., P
(2) N −1 nN → m∗ , L
(3) N −1/2 (nN − mN ) → N(0, D[I − A0 ]), and (4) N −1 (nN − mN ) = Op N −1/2 . Justification. For product-multinomial sampling, (1) is a direct application of the strong law of large numbers applied to the elements of nN . For Poisson sampling, (1) follows from Chebyshev’s inequality and the Borel-Cantelli Lemma. Part (2) follows from (1). Part (4) follows from (3). It remains only to show (3).
404
12. Maximum Likelihood Theory for Log-Linear Models
(a) Poisson sampling. For Poisson sampling, X0 = 0, so A0 = 0. Since the nN i ’s are independent, it suffices to show the N −1/2 (nN i − mN i ) is asymptotically N (0, m∗i ). We use a moment generating function argument. The moment generating function of a random variable w is ϕw (u) = E(euw ). These behave similarly to characteristic functions, but moment generating functions do not exist for some random variables. The moment generating function we need is ϕN −1/2 (nN i −mN i ) (u) = ϕnN i (N −1/2 u) exp −N −1/2 mN i u , where ϕnN i (u) = exp[−mN i (1 − eu )]. Using a Taylor’s expansion, eau = 1 + au + a2 u2/2 + a3 u ˜3/6, for some u ˜ ∈ [0, u], we can write log ϕN −1/2 (nN i −mN i ) (u) . / −1/2 = −mN i 1 − eN u − N −1/2 mN i u / . = −mN i 1 − 1 + N −1/2 u + N −1 u2/2 + N −3/2 u ˜3/6 − N −1/2 mN i u = N −1 mN i u2/2 + N −3/2 mN i u ˜3/6 = m∗i u2 /2 + N −1/2 m∗i u ˜3 /6. As N → ∞, so
log ϕN −1/2 (nN i −mN i ) (u) → m∗i u2/2, N −1/2 (nN i − mN i ) → N (0, m∗i ). L
(b) Multinomial Sampling. For multinomial sampling, X0 = J, and A0 = JJ D, so D [I − A0 ] = D − DJJ D. Part (3) is then the standard large sample result for multinomials, which is an immediate consequence of the multivariate central limit theorem. (c) Product-Multinomial Sampling. The different multinomial populations are asymptotically normal and they are independent by assumption. It remains only to establish that D [I − A0 ] is the correct block diagonal covariance matrix. Write nN so that all observations in the first multinomial population are listed first, the second population is listed second, etc. Considering the implied structure of X0 , it is easily seen −1 = N D∗ , where D∗ = Diag N1−1 , . . . , Nr−1 , and that that (X0 DX0 ) D [I − A0 ] = D − N DX0 D∗ X0 D. It is easily seen that D − N DX0 D∗ X0 D is precisely the block diagonal matrix needed. 2
12.3 Asymptotic Properties
405
The main result used in finding asymptotic properties of maximum likelihood estimates is a relationship between the MLE µ ˆN and the observations nN . Lemma 12.3.2. Proof.
P µN − µN ) − AD−1 N −1/2 (nN − mN ) → 0. N 1/2 (ˆ 2
See Section 5.
Since the asymptotic distribution of N −1/2 (nN − mN ) is known, the µN − µN ) and, by a Taylemma gives the asymptotic distribution of N 1/2 (ˆ ˆ N − mN ). lor’s expansion, the asymptotic distribution of N −1/2 (m Theorem 12.3.3.
L µN − µN ) → N 0, [A − A0 ] D−1 . (1) N 1/2 (ˆ P
(2) µ ˆN − µN → 0. L
(3) N −1/2 (m ˆ N − mN ) → N(0, D [A − A0 ]) . P
(4) N −1 m ˆ N → m∗ .
Proof. (1) Theorem 12.3.1 and Lemma 12.3.2 imply that L N 1/2 (ˆ µN − µN ) → N 0, AD−1 [D(I − A0 )]D−1 A . It is easily seen that AD−1 [D(I − A0 )]D−1A
= AD−1A − AA0 D−1A = AD−1 − A0 D−1 (A − A0 )D−1 .
=
µN − µN ) = Op (1), so µ ˆN − µN = Op N −1/2 = op (1). (2) From (1), N 1/2 (ˆ
(3) Recall that exp(y) = (ey1 , . . . , eyq ) . Taylor’s theorem gives = exp(x) + [d exp(x)] (y − x) + o(y − x) = exp(x) + D exp(x) (y − x) + o(y − x). (1) ˆN −µN = op (1), Let y = µ ˆN − 12 log N J and x = µN − 12 log N J. Since µ rearranging equation (1) and evaluating exp(y) and exp(x) gives . / µN − µN ) = o op (1) = op (1). N −1/2 m ˆ N − N −1/2 mN − D N −1/2 mN (ˆ exp(y)
406
12. Maximum Likelihood Theory for Log-Linear Models
Since mN = N m∗ , we have ˆ N − mN ) − [D] N 1/2 (ˆ µN − µN ) = op (1). N −1/2 (m Applying part (1) of the theorem, we see that L ˆ N − mN ) → N 0, D(A − A0 )D−1 D , N −1/2 (m but D(A − A0 )D−1 D = D(A − A0 ). (4) N −1/2 ˆ N − mN ) = Op (1) by part (3), so N −1 (m ˆ N − mN ) = (m −1/2 = op (1). Now part (4) follows by observing that N −1 mN = m∗ . Op N 2 P
It is of interest to note that Theorem 12.3.3 implies µ ˆN −(log N )J → µ∗ , ∗ so µ ˆN is not consistent for µ . The following corollary will be useful in examining the likelihood ratio test. Corollary 12.3.4.
P ˆbN − bN = Op N −1/2 , therefore ˆbN − bN → 0.
ˆN − µN/ = Proof. From Theorem 12.3.3, X ˆbN − XbN .= µ −1/2 . It follows that ˆbN − bN = (X X)−1 X X ˆbN − XbN = Op N 2 (X X)−1 X Op N −1/2 = Op N −1/2 . We now consider the problem of testing H0 : µN = µ0N
versus
HA : µN = µ0N .
(2)
As the sample size N gets larger, mN gets larger and so does µN . Using a fixed value for the hypothesis, say H0 : µN = µ0 , is not appropriate. µ0N must be in C(X) and compatible with having a sample size of N , i.e., J m0N = N . In accordance with our other assumptions, we consider only the case where µ0N = (log N )J + µ∗0 with µ∗0 ∈ C(X), and J m∗0 = 1. m0N , b0N , m∗0 , and b∗0 are defined in the usual way. Note that (2) is equivalent to H0 : m∗ = m∗0
versus
HA : m∗ = m∗0 ,
so one can think of the hypothesis as being on the (normalized) vector of probabilities. The asymptotic distribution theory for the more interesting hypothesis H0 : µN ∈ C(X1 )
versus
HA : µN ∈ / C(X1 ),
(3)
where C(X0 ) ⊂ C(X1 ) ⊂ C(X), can be handled quite simply after dealing with (2).
12.3 Asymptotic Properties
407
We want to show that the likelihood ratio test statistic (LRTS), ˆN )], has an asymptotic χ2 distribution. −2[(nN , µ0N ) − (nN , µ Theorem 12.3.5. χ2 (p − r).
L
If µN = µ0N , then −2[(nN , µ0N ) − (nN , µ ˆN )] →
Proof. The proof is in four parts. The first three find statistics asymptotically equivalent to the LRTS. The last one establishes the distribution of the LRTS. (a) Let fn (b) = (n, µ(b)). A Taylor’s expansion of fn (b) about ˜b gives . / 1 fn (b) = fn (˜b) + [dfn (˜b)](b − ˜b) + (b − ˜b) d2fn (˜b) (b − ˜b) 2 2 ˜ + o b − b . Substituting for dfn (˜b) and d2fn (˜b) as found in (12.2.6) and (12.2.10), we get . / fn (b) − fn (˜b) − n − m(˜b) X(b − ˜b) . / 1 + (b − ˜b) X D m(˜b) X (b − ˜b) = o b − ˜b2 . (4) 2 Apply equation (4) with n = nN , ˜b = ˆbN , and b = bN . From Corollary 12.3.4, ˆbN − bN = op (1), so
ˆN ) − (n − m ˆ N ) X(bN − ˆbN ) (nN , µN ) − (nN , µ 1 ˆ N )X](bN − ˆbN ) = o op (1) . (5) + (bN − ˆbN ) [X D(m 2 By (12.2.7), (nN − m ˆ N ) X = 0. After multiplying by −2, (5) becomes − 2[(nN , µN ) − (nN , µ ˆN )] − (µN − µ ˆN ) D(m ˆ N )(µN − µ ˆN ) = op (1). (6) The quadratic form in (6) can be rewritten as ˆN ) D N −1 m ˆ N N 1/2 (µN − µ ˆN ). N 1/2 (µN − µ
(7) L
(b) For random variables YN and ZN , it is well known that if YN → Y P P and ZN → 0, then YN ZN → 0. Repeated application of this gives the L P P result: if YN → Y and ZN → Z, then YN ZN YN − Y N → 0 where YN N ZY is a q vector and ZN is a q × q matrix. Let ZN = D N −1 m ˆ N in (7). Since P
ˆ N → m∗ , N −1 m
ˆN ) D N −1 m ˆ N N 1/2 (µN − µ ˆN ) N 1/2 (µN − µ ˆN ) DN 1/2 (µN − µ ˆN ) → 0. − N 1/2 (µN − µ P
408
(c)
12. Maximum Likelihood Theory for Log-Linear Models
Applying Lemma 12.3.2 gives
ˆN ) DN 1/2 (µN − µ ˆN ) N 1/2 (µN − µ − N −1/2 (nN − mN ) D−1 A DAD−1 N −1/2 (nN − mN ) → 0. (8) P
It is easily seen that D−1 A DAD−1 = AD−1 = D−1/2 P D−1/2 . √ Recall that, D−1/2 = D 1 m∗ and P is the perpendicular projection matrix onto C(D−1/2 X). Rewrite the second quadratic form in (8) as .
/ . / D−1/2 N −1/2 (nN − mN ) P D−1/2 N −1/2 (nN − mN ) .
(9)
L
(d) From Theorem 12.3.1, D−1/2 N −1/2 (nN − mN ) → Y , where Y ∼ 1/2 −1/2 . As in part (c), it is easy to see that D1/2 [I − N 0, D [I − A0 ]D A0 ]D−1/2 = I − P0 , where P0 is the perpendicular projection matrix onto C(D−1/2 X0 ). The quadratic form (9) converges in distribution to Y P Y . By Theorem 1.3.6 in Christensen (1996b), Y P Y will have a χ2 distribution with tr[P (I − P0 )] degrees of freedom if (I − P0 )P (I − P0 )P (I − P0 ) = (I − P0 )P (I − P0 ).
(10)
Since C(P0 ) ⊂ C(P ), we have P P0 = P0 . Simplifying gives both sides of (10) as (P − P0 ). The theorem follows by observing that under H0 , µN = µ0N , so the asymptotic distribution of (9) is the same as that of the LRTS, and that 2 tr[P (I − P0 )] = tr[P − P0 ] = tr(P ) − tr(P0 ) = p − r. We would like to show that the likelihood ratio test is consistent. That P is, if µ0N = µN , then −2[(nN , µ0N ) − (nN , µ ˆN )] → ∞. Consistency is an immediate result of the following theorem. Theorem 12.3.6. ˆN )] → −2[(m∗ , µ∗0 ) − (m∗ , µ∗ )] . −2N −1 [(nN , µ0N ) − (nN , µ P
If µ0N − µN ≡ µ∗0 − µ∗ = 0, the right-hand side is strictly positive. Proof. ˆN )] (11) −2N −1 [(nN , µ0N ) − (nN , µ −1 −1 = −2N [(nN , µ0N ) − (nN , µN )] − 2N [(nN , µN ) − (nN , µ ˆN )] .
12.3 Asymptotic Properties
409
Consider the second term of (11). From (6), ˆN )] −2N −1 [(nN , µN ) − (nN , µ −1 − N (µN − µ ˆN ) D(m ˆ N )(µN − µ ˆN ) = op N −1 = op (1). P
P
ˆN ) → 0 and N −1 m ˆ N → m∗ , we have N −1 (µN − Since (µN − µ P P ˆ N )(µN − µ ˆN ) → 0. Therefore, −2N −1 [(nN , µN ) − (nN , µ ˆN )] → µ ˆN ) D(m 0. Now consider the first term on the right of (11). By definition, −2N −1 [(nN , µ0N ) − (nN , µN )] = −2N −1 [nN µ0N − J m0N ] + 2N −1 [nN µN − J mN ] = −2N −1 [nN (µ0N − µN ) − J (m0N − mN )] . P
Since µ0N − µN = µ∗0 − µ∗ , N −1 m0N = m∗0 , N −1 mN = m∗ , and N −1 nN → m∗ , we have −2N −1 [(nN , µ0N ) − (nN , µN )] → −2[m∗ (µ∗0 − µ∗ ) − J (m∗0 − m∗ )] = −2[(m∗ , µ∗0 ) − (m∗ , µ∗ )] . P
As discussed in Lemma 12.5.3, µ∗ is the unique MLE of µ when the data ˆ(m∗ )], so (m∗ , µ∗ ) is the unique maximum of (m∗ , µ). are m∗ [i.e. µ∗ = µ ∗ ∗ 2 If µ0 = µ , −2[(m∗ , µ∗0 ) − (m∗ , µ∗ )] > 0. We now consider the problem of testing (3). L
ˆ1N ) − (nN , µ ˆN )] → Theorem 12.3.7. If µN ∈ C(X1 ), then −2[(nN , µ ˆ1N is the MLE of µN under the model µN ∈ C(X1 ) and χ2 (p − p1 ), where µ r(X1 ) = p1 . Proof. −2[(nN , µ ˆ1N ) − (nN , µ ˆN )] = −2[(nN , µN ) − (nN , µ ˆN )] + 2[(nN , µN ) − (nN , µ ˆ1N )] . Since C(X1 ) ⊂ C(X), the proof of Theorem 12.3.5 applies to both terms on the right-hand side. In particular, (9) can be applied as is and also with X1 substituted for X. If P1 is the perpendicular projection matrix onto C(D−1/2 X1 ), then P1 = P P1 = P1 P , so the LRTS is asymptotically ! ! equivalent to D−1/2 N −1/2 (nN − mN ) (P − P1 ) D−1/2 N −1/2 (nN − mN ) . Since (P − P1 )(I − P0 )(P − P1 ) = (P − P1 ), verification of the conditions of Theorem 1.3.6 is trivial. The LRTS has a χ2 distribution with r(P − P1 ) = 2 p − p1 degrees of freedom.
410
12. Maximum Likelihood Theory for Log-Linear Models
Theorem 12.3.8 establishes the consistency of the likelihood ratio test for hypothesis (3). Theorem 12.3.8. ˆ1N ) − (nN , µ ˆN )] → −2[(m∗ , µ ˆ1 (m∗ )) − (m∗ , µ∗ )] . −2N −1 [(nN , µ P
µN ∈ / C(X1 ) if and only if the right-hand side is positive. Proof. The proof involves several arguments from the proof of Lemma 12.3.2, so it is deferred until Section 5. 2 Finally, we establish the asymptotic equivalence under H0 of the LRTS and the Pearson test statistic (PTS). The Pearson test statistic is ˆ 1N ) D−1 (m ˆ 1N )(m ˆN −m ˆ 1N ) (m ˆN −m
(12)
for the hypothesis (3). If µN ∈ C(X1 ), then
Theorem 12.3.9.
ˆ1N ) − (nN , µ ˆ 1N ) D−1 (m −2[(nN , µ ˆN )] − (m ˆN −m ˆ 1N )(m ˆN −m ˆ 1N ) → 0. P
Proof.
The PTS can be written elementwise as q
2 ˆ 1N i . (m ˆ Ni − m ˆ 1N i ) m
(13)
i=1
The elementwise form for the LRTS is found in (12.2.12). Note that (13) is equivalent to N
q
N −1 (m ˆ Ni − m ˆ 1N i )
!2 −1 N m ˆ 1N i
i=1
and the LRTS is 2N
q
! N −1 m ˆ N i log N −1 m ˆ N i − log N −1 m ˆ 1N i .
(14)
i=1
ˆ N , N −1 m ˆ 1N ). Taking a secondTo simplify notation, let (x, y) = (N −1 m ∗ ∗ order expansion of (14) about (m , m ) gives 2N
q i=1
xi [log xi − log yi ]
12.3 Asymptotic Properties
= 2N
q
411
m∗i [log m∗i − log m∗i ]
i=1
+ 2N
q
[log xi − log yi + xi (1/xi )] |(x,y)=(m∗ ,m∗ ) (xi − m∗i )
i=1
+ 2N
q
[−xi (1/yi )] |(x,y)=(m∗ ,m∗ ) (yi − m∗i )
i=1
+N
q
! x−1 |x=m∗ (xi − m∗i )2 i
i=1 q
+ 2N
! −yi−1 |y=m∗ (xi − m∗i )(yi − m∗i )
i=1
+N
q
! xi yi−2 |(x,y)=(m∗ ,m∗ ) (yi − m∗i )2
i=1
+ N o x − m∗ 2 + y − m∗ 2 . This easily simplifies to 2N
q
xi [log xi − log yi ]
i=1
= 2N
q
(xi − yi )
i=1 q
+N
(1/m∗i ) (xi − m∗i )2 − 2(xi − m∗i )(yi − m∗i ) + (yi − m∗i )2
!
i=1
+ N o x − m∗ 2 + y − m∗ 2 q 2 (xi − yi ) m∗i + N o x − m∗ 2 + y − m∗ 2 . = N i=1
The last equality is because J ∈ C(X1 ) ⊂ C(X), so that by (12.2.7), N J x = J nN = N J y and J (x − y) = 0. In our regular notation, 2
q i=1
q ! 2 ˆ Ni m ˆ 1N i = m ˆ N i log m (m ˆ Ni − m ˆ 1N i ) mN i i=1
+ N o N −1 (m ˆ N − mN ) 2 + N −1 (m ˆ 1N − mN ) 2 .
Investigating the argument of o(·), ˆ N − mN ) 2 + N −1 (m ˆ 1N − mN ) 2 N −1 (m /2 . /2 . = Op N −1/2 + Op N −1/2 = Op N −1 .
412
12. Maximum Likelihood Theory for Log-Linear Models
Since o(Op (N −1 )) = op (N −1 ) and N op (N −1 ) = op (1), we have the LRTS asymptotically equivalent to q
2 (m ˆ Ni − m ˆ 1N i ) mN i
i=1
= (m ˆN −m ˆ 1N ) D−1 (mN )(m ˆN −m ˆ 1N ) −1/2 −1 ∗ −1/2 (m ˆN −m ˆ 1N ) D (m )N (m ˆN −m ˆ 1N ). = N
(15)
P ˆ 1N → D−1 (m∗ ), so (15) is asymptotically equivaUnder H0 , D−1 N −1 m lent to (12) the PTS. 2 A similar argument holds to show that the LRTS and the PTS are asymptotically equivalent under H0 for testing the hypothesis (2). Results similar to Theorems 12.3.6 and 12.3.8 also exist for the PTS. For a more extensive discussion of the asymptotic properties of log-linear models, see Haberman (1974a).
12.4 Applications
a) Weighted Least Squares. We now consider two methods of obtaining estimates for log-linear models. The first is the Newton-Raphson technique, which is an iterative method for obtaining the maximum likelihood estimates. The Newton-Raphson method can be performed by doing a series of weighted least squares regression analyses. The second method is an approximate technique based on the asymptotic results that we have derived. It is a noniterative weighted least squares regression approach.

The Newton-Raphson technique is an iterative procedure for finding where a function equals the zero vector. Let $g$ be a function mapping $\mathbf{R}^p$ into $\mathbf{R}^p$. We wish to find $b_*$ such that $g(b_*) = 0$. Let $b_0$ be an initial guess for $b_*$. Newton-Raphson defines (recursively) a sequence $b_t$ that converges to $b_*$. By Taylor's theorem, if $b_{t+1}$ is near $b_t$, we have the approximate equality $g(b_{t+1}) = g(b_t) + [dg(b_t)]\delta_t$, where $\delta_t = b_{t+1} - b_t$. The Newton-Raphson technique assumes that $b_t$ is known, sets $g(b_{t+1}) = 0$, and seeks to find $b_{t+1}$, i.e., $0 = g(b_t) + [dg(b_t)]\delta_t$, so
$$\delta_t = -[dg(b_t)]^{-1}g(b_t) \qquad\text{and}\qquad b_{t+1} = b_t + \delta_t. \tag{1}$$
For finding maximum likelihood estimates, we wish to find where the derivative of $f_n(b) \equiv \ell(n, Xb)$ is zero. With $g(b) \equiv [df_n(b)]'$ and substituting (12.2.6) and (12.2.10) into (1), we get
$$\delta_t = \left[X'D\!\left(m(b_t)\right)X\right]^{-1}X'\left[n - m(b_t)\right].$$
The sequence $b_t$ converges to a critical point of $\ell(n, Xb)$ which, as we have seen, must be the MLE of $b$ under fairly weak conditions.

A weighted least squares computer program can be used to execute the Newton-Raphson procedure. Fit the model $Y = Xb + e$, $E(e) = 0$, $\mathrm{Cov}(e) = D^{-1}\!\left(m(b_t)\right)$, where $Y$ is taken as
$$Y = Xb_t + D\!\left(m(b_t)\right)^{-1}\left[n - m(b_t)\right] = \log(m_t) + D(m_t)^{-1}(n - m_t).$$
Let $b_{t+1}$ denote the estimate of $b$ obtained from this procedure. Clearly,
$$b_{t+1} = \left[X'D\!\left(m(b_t)\right)X\right]^{-1}X'D(m_t)Y = b_t + \left[X'D\!\left(m(b_t)\right)X\right]^{-1}X'\left[n - m(b_t)\right],$$
which is the Newton-Raphson value for $b_{t+1}$.
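In code, the iteration is short. The following minimal sketch (not from the text) implements the update $\delta_t = [X'D(m(b_t))X]^{-1}X'[n - m(b_t)]$ for a log-linear model; the design matrix and counts are hypothetical.

import numpy as np

def loglinear_mle(X, n, tol=1e-10, max_iter=100):
    # Newton-Raphson for log(m) = X b:  b_{t+1} = b_t + [X' D(m_t) X]^{-1} X' (n - m_t)
    X = np.asarray(X, dtype=float)
    n = np.asarray(n, dtype=float)
    b = np.zeros(X.shape[1])            # starting value b_0
    for _ in range(max_iter):
        m = np.exp(X @ b)               # fitted cell means m(b_t)
        delta = np.linalg.solve(X.T @ (m[:, None] * X), X.T @ (n - m))
        b = b + delta
        if np.max(np.abs(delta)) < tol:
            break
    return b, np.exp(X @ b)

# Hypothetical 2 x 2 table fit under independence (main effects only):
X = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0]])
n = np.array([20, 10, 30, 40])
b_hat, m_hat = loglinear_mle(X, n)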
The second method is based on the asymptotic results of Theorem 12.3.3 and the fact, shown in Theorem 10.1.3 in Christensen (1996b), that best linear unbiased estimates (BLUEs) for the linear model $Y = X\beta + e$, $E(e) = 0$, $\mathrm{Cov}(e) = V$, are the same as those for the model $Y = X\beta + e$, $E(e) = 0$, $\mathrm{Cov}(e) = V + XUX'$, where $U$ is any non-negative definite matrix. The saturated log-linear model, $\mu \in \mathbf{R}^q$, always fits the data. For the saturated model, $\hat\mu = \log(n)$ and $A = I$. If $N$ is large, Theorem 12.3.3 gives the asymptotic relation
$$\log(n) - \mu \sim N\!\left(0,\ (I - A_0)D^{-1}(m)\right). \tag{2}$$
It will be convenient to rewrite the term $A_0 D^{-1}(m)$. It is easily seen that
$$A_0 D^{-1}(m) = X_0\left(X_0'D(m)X_0\right)^{-1}X_0'.$$
Now consider the term $X_0'D(m)X_0$. The matrix $X_0$ has the same structure as the design matrix for a one-way ANOVA model, say $y_{ij} = \mu_i + e_{ij}$. Exploiting the simple form of $X_0$ and the fact that $X_0'n = X_0'm = (N_1, \ldots, N_r)'$, it is easily seen that $X_0'D(m)X_0 = D(X_0'n)$.
We now have
$$A_0 D^{-1}(m) = X_0 D^{-1}(X_0'n)X_0',$$
and the asymptotic distribution of $\log(n)$ is
$$\log(n) - \mu \sim N\!\left(0,\ D^{-1}(m) - X_0 D^{-1}(X_0'n)X_0'\right). \tag{3}$$
Imposing the linear constraint $\mu = Xb$, the asymptotic distribution (3) leads to fitting the linear model
$$\log(n) = Xb + e, \qquad E(e) = 0, \qquad \mathrm{Cov}(e) = D^{-1}(m) - X_0 D^{-1}(X_0'n)X_0'. \tag{4}$$
By Theorem 10.1.3, the BLUEs in model (4) are the same as those in
$$\log(n) = Xb + e, \qquad E(e) = 0, \qquad \mathrm{Cov}(e) = D^{-1}(m). \tag{5}$$
Of course, $m$ is unknown, so (5) cannot be used directly. Estimating $m$ with $n$ gives the model
$$\log(n) = Xb + e, \qquad E(e) = 0, \qquad \mathrm{Cov}(e) = D^{-1}(n). \tag{6}$$
Note that $n$ is the MLE of $m$ under the saturated model. One virtue of model (6) is that it can be fit with any regression program that does weighted regression; a short illustration follows.
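Concretely (a sketch, not from the text), the weighted regression in model (6) is generalized least squares with weight matrix $D(n)$, so $\hat b = [X'D(n)X]^{-1}X'D(n)\log(n)$; the table below is hypothetical.

import numpy as np

def wls_loglinear(X, n):
    # Noniterative weighted least squares fit of log(n) = X b with Cov(e) = D^{-1}(n),
    # i.e., weighted regression of log(n) on X with weights n.
    X = np.asarray(X, dtype=float)
    n = np.asarray(n, dtype=float)
    W = n
    XtWX = X.T @ (W[:, None] * X)
    XtWy = X.T @ (W * np.log(n))
    return np.linalg.solve(XtWX, XtWy)

# Hypothetical 2 x 2 independence model (same design as the earlier sketch):
X = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0]])
n = np.array([20, 10, 30, 40])
b_wls = wls_loglinear(X, n)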
Besides the rationale just given, there are two other justifications for using this approximate procedure. First, if we take $m_0 = n$ in the Newton-Raphson algorithm, then the first step of the algorithm is precisely fitting model (6). Second, for product-multinomial data, fitting model (6) is the same procedure as that proposed by Grizzle, Starmer, and Koch (1969). In their paper, they consider modeling the vector of probabilities $p = (p_1, \ldots, p_q)'$. Their method specifies a generalized linear model $F(p) = Xb$, where $F$ is a quite general function from $\mathbf{R}^q$ into $\mathbf{R}^q$. In particular, one can choose $F(p) = \log(m)$. With this choice of $F$, their estimation procedure amounts to a weighted least squares analysis with the covariance matrix $\mathrm{Cov}(e) = D^{-1}(n) - X_0 D^{-1}(X_0'n)X_0'$. Because $C(X_0) \subset C(X)$, the best linear unbiased estimates under this covariance matrix are the same as those using model (6).

b) Asymptotic Variances Under Saturated Models. The relation (2) is the basis for a number of asymptotic variance formulas commonly used with saturated models. For a saturated model, $\mathrm{Cov}(\hat\mu) = D^{-1}(m) - A_0 D^{-1}(m)$. Suppose that we write a saturated model $\mu = Xb$, where $X = [X_0, X_1]$ and $b' = [b_0', b_1']$. The parameter $b_0$ is forced into the model to deal with the product-multinomial sampling. [Recall that to deal with product-multinomial sampling, we always assume that $C(X_0) \subset C(X)$.] For a linear function $\rho'\mu$ where $\rho \perp X_0$, one gets $\rho'\mu = \rho'X_0 b_0 + \rho'X_1 b_1 = \rho'X_1 b_1$ and $\mathrm{Var}(\rho'\hat\mu) = \rho'D^{-1}(m)\rho$ because $\rho'A_0 D^{-1}(m)\rho = 0$. The maximum likelihood estimate of $\rho'D^{-1}(m)\rho$ is $\rho'D^{-1}(n)\rho$. In other words, for estimable functions of the parameters that are not forced into the model (i.e., functions involving only $b_1$), the estimate of the variance of $\rho'\hat\mu = \rho'X_1\hat b_1$ is $\rho'D^{-1}(n)\rho$.

Example 12.4.1. Consider now a $2 \times 3$ table. One log-odds ratio is $\mu_{11} - \mu_{12} - \mu_{21} + \mu_{22}$, with estimate $\log(n_{11}n_{22}/n_{12}n_{21})$. The estimated variance is then $n_{11}^{-1} + n_{12}^{-1} + n_{21}^{-1} + n_{22}^{-1}$.

c) Logit and Multinomial Response Models. Suppose that the sampling scheme is product-multinomial where each multinomial has exactly two categories. Without loss of generality, we can write $n = (n_{11}, \ldots, n_{r1}, n_{12}, \ldots, n_{r2})'$, where the pairs $(n_{i1}, n_{i2})$ are the multinomial outcomes. (This two-subscript notation will be used for all vectors discussed.) Logits are defined by $\log(m_{i1}/m_{i2}) = \mu_{i1} - \mu_{i2}$. Let $\eta = (\mu_{11} - \mu_{12}, \ldots, \mu_{r1} - \mu_{r2})'$ be the vector of logits. A logit model is a model $\eta = Z\beta_*$ for some $\beta_*$, where $Z$ is an $r \times p$ matrix. We wish to show that the logit model defines a log-linear model for $\mu$. Let $I_r$ be an $r \times r$ identity matrix, and let $L' = [I_r, -I_r]$, so that $\eta = L'\mu$. Then the restriction placed on $\mu$ is that $\mu \in \mathcal{M}$, where $\mathcal{M} = \{\mu \mid L'\mu = Z\beta_* \text{ for some } \beta_*\}$. Because of the product-multinomial sampling, we have $X_0 = [I_r, I_r]'$. Now let
$$X_* = \begin{bmatrix} Z \\ 0_{rp} \end{bmatrix},$$
where $0_{rp}$ is an $r \times p$ matrix of zeros. It is easily seen that $\mathcal{M} = \{\mu \mid L'\mu = L'X_*\beta_* \text{ for some } \beta_*\}$. Arguing as in Section 3.3 of Christensen (1996b), $\mathcal{M}$ is a vector space, so the logit model has defined a log-linear model.

Example 12.4.2. For a $3 \times 2$ table, a linear logit model is defined by $\mu_{i1} - \mu_{i2} = \gamma_0 + \gamma_1 t_i$, $i = 1, 2, 3$. The equation $L'\mu = L'X_*\beta_*$ becomes
$$
\begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & 0 & 0 & -1 \end{bmatrix}
\begin{bmatrix} \mu_{11} \\ \mu_{21} \\ \mu_{31} \\ \mu_{12} \\ \mu_{22} \\ \mu_{32} \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & 0 & 0 & -1 \end{bmatrix}
\begin{bmatrix} 1 & t_1 \\ 1 & t_2 \\ 1 & t_3 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \gamma_0 \\ \gamma_1 \end{bmatrix}.
$$
Continuing as in Section 3.3, M can be rewritten as M = {µ|µ = µ0 + µ1 , where µ0 ⊥ C(L) and µ1 ∈ C(X∗ )}. Thus, M is the space spanned by the columns of X∗ and any spanning set for the orthogonal complement of C(L). In particular, X0 is a matrix with C(X0 ) = C(L)⊥ . We can write the log-linear model as µ = X0 β0 + X∗ β∗ .
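To make the construction concrete, the following sketch (not from the text) builds $L$, $X_0$, and $X_*$ for the $3 \times 2$ table of Example 12.4.2 and assembles the log-linear design matrix $[X_0, X_*]$; the covariate values $t_i$ are hypothetical.

import numpy as np

r = 3
t = np.array([1.0, 2.0, 3.0])           # hypothetical covariate values t_i
I = np.eye(r)

L = np.vstack([I, -I])                   # L' mu gives the r logits mu_i1 - mu_i2
X0 = np.vstack([I, I])                   # columns forced in by product-multinomial sampling
Z = np.column_stack([np.ones(r), t])     # linear logit model eta = Z beta_*
X_star = np.vstack([Z, np.zeros((r, Z.shape[1]))])

X = np.hstack([X0, X_star])              # log-linear model mu = X0 b0 + X_star beta_*
print(L.T @ X_star)                      # equals Z, so L' mu = Z beta_* on this space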
It was assumed at the beginning of the discussion that the sampling was product-multinomial with two categories in each multinomial. Normally, we consider only log-linear models $\mu = Xb$ such that $C(X_0) \subset C(X)$. In the current case, we have defined the logit model, i.e., $\mu \in \mathcal{M}$, $\mathcal{M} = \{\mu \mid L'\mu = L'X_*\beta_* \text{ for some } \beta_*\}$. We have then shown that $\mathcal{M} = C(X_0, X_*)$, so that a logit model must satisfy the condition $C(X_0) \subset \mathcal{M}$. It is interesting to note that even if the two-category product-multinomial sampling scheme had not been assumed, the logit model would still correspond to a log-linear model consistent with that sampling scheme.

Example 12.4.3. Write the log-linear version of the logit model as $\mu = X\beta$, where $X = [X_0, X_*]$ and $\beta' = [\beta_0', \beta_*']$. Because the MLEs satisfy $X'n = X'\hat m$, we have $X_0'n = X_0'\hat m$, i.e., $n_{i\cdot} = \hat m_{i\cdot}$ for $i = 1, \ldots, r$. The log-linear model must be of the form $\log(m_{ij}) = u_{1(i)} + \cdots$.

The notation we have used is quite general, but it lends itself best to two-dimensional tables. Consider a four-dimensional log-linear model with a logit model in the last variable. If the observations are $n_{hijk}$, we have done nothing but substitute the three subscripts $hij$ for the one subscript $k$. The argument presented here implies that the MLEs must satisfy $n_{hij\cdot} = \hat m_{hij\cdot}$ and the log-linear model must be of the form $\log(m_{hijk}) = u_{123(hij)} + \cdots$.

Consider now the problem of estimating $\rho_1'\eta$; we can write $\rho = L\rho_1$ so that $\rho_1'\eta = \rho_1'L'\mu = \rho'\mu$. Because of the particular structures of $L$ and $X_*$ and the fact that $C(L) \perp C(X_0)$, the estimate of $\rho_1'\eta$ is
$$\rho_1'\hat\eta = \rho'\hat\mu = \rho_1'L'(X_0\hat\beta_0 + X_*\hat\beta_*) = \rho_1'L'X_*\hat\beta_* = \rho_1'Z\hat\beta_*.$$
The estimates in the logit model come directly from the log-linear model and all the asymptotic distribution results continue to apply. In particular, the estimate of $\eta = Z\beta_*$ is $\hat\eta = Z\hat\beta_*$, where $\hat\beta_*$ is estimated from the log-linear model.

Consider a logit model $\eta = Z\beta_*$ and a corresponding log-linear model $\mu = X\beta$, where $X = [X_0, X_*]$ and $\beta' = [\beta_0', \beta_*']$. We wish to be able to test the adequacy of a reduced logit model, say $\eta = Z_1\gamma_*$, where $C(Z_1) \subset C(Z)$. If the log-linear model corresponding to $\eta = Z_1\gamma_*$, say $\mu = X_1\gamma$, has $C(X_1) \subset C(X)$, then the test can proceed immediately from log-linear model theory. If $Z_1$ is an $r \times p_1$ matrix, we can write
$$X_{1*} = \begin{bmatrix} Z_1 \\ 0_{rp_1} \end{bmatrix}$$
and $X_1 = [X_0, X_{1*}]$. Clearly, if $C(Z_1) \subset C(Z)$, we have $C(X_{1*}) \subset C(X_*)$ and $C(X_1) \subset C(X)$, so the test can proceed. The hypothesis that a logit model $\eta = Z_1\gamma_*$ fits the data relative to a general log-linear model $\mu = X\beta$ is equivalent to hypothesizing,
for $X_{1*}$ with $C(X_{1*}) \subset C(X)$, that $\mu \in \mathcal{M}$, where
$$\mathcal{M} = \{\mu \mid \mu \in C(X) \text{ and } L'\mu = L'X_{1*}\gamma \text{ for some } \gamma\}.$$
We can rewrite $\mathcal{M}$ as
$$\mathcal{M} = \{\mu \mid \mu = \mu_0 + \mu_1, \text{ where } \mu_1 \in C(X_{1*}),\ \mu_0 \in C(X),\ \text{and } \mu_0 \perp C(L)\}.$$
Thus, $\mathcal{M}$ is the space spanned by the columns of $X_{1*}$ and any spanning set for the subspace of $C(X)$ orthogonal to $C(L)$. The usual test for lack of fit of a logit model is $H_0: \mu \in \mathcal{M}$ versus $H_A: \mu \in \mathbf{R}^q$, i.e., $C(X) = \mathbf{R}^q$.

Many types of multinomial response models can be written as log-linear models using the method outlined here. An exception is continuation ratio models. They do not correspond to a single log-linear model.

d) Estimation of Parameters. Estimation of parameters in log-linear models is very similar to that in standard linear models. A standard linear model $Y = X\beta + e$, $E(e) = 0$, implies that $E(Y) = X\beta$. The least squares estimate of $X\beta$ is $\hat Y = MY$. The least squares estimate of $\rho'X\beta$ is $\rho'\hat Y = \rho'MY$. Similarly, in a log-linear model we have $\log(m) \equiv \mu = Xb$. Computer programs often give the MLE of $m$, i.e., $\hat m$. From this, one can obtain $\hat\mu = \log(\hat m)$. Because $\hat\mu \in C(X)$, the MLE of $\rho'Xb$ is $\rho'\hat\mu = \rho'M\hat\mu$. The key to finding the estimate of an estimable function $\lambda'\beta$ or $\lambda'b$ is in obtaining $M\rho$ so that $\lambda' = \rho'X = \rho'MX$. Given $M\rho$, estimates in the standard linear model can be obtained from $Y$ and estimates in a log-linear model can be obtained from $\hat\mu$. Finding such a vector $M\rho$ depends only on $\lambda$ and $X$. It does not depend on whether a linear or a log-linear model is being fitted. Christensen (1996b) discusses, in great detail, how to find estimates of estimable functions for standard linear models. The procedure amounts to finding $M\rho$. Precisely the same vectors $M\rho$ work for log-linear models. In other words, if one knows how to use $Y$ to estimate something in a standard linear model, exactly the same technique applied to $\hat\mu$ will give the estimate in a log-linear model.

Example 12.4.4. Consider a two-dimensional table with parameterization
$$\mu_{ij} = \gamma + \alpha_i + \beta_j + (\alpha\beta)_{ij}.$$
In discussions of log-linear models, this model would commonly be written as µij = u + u1(i) + u2(j) + u12(ij) ,
but it is the same model with either parameterization. Estimates follow just as in a two-way ANOVA. To simplify this as much as possible, let $z_{ij} = \hat\mu_{ij}$ and assume the "usual" side conditions; then
$$\hat\gamma = \bar z_{\cdot\cdot}, \qquad \hat\alpha_i = \bar z_{i\cdot} - \bar z_{\cdot\cdot}, \qquad \hat\beta_j = \bar z_{\cdot j} - \bar z_{\cdot\cdot}, \qquad \widehat{(\alpha\beta)}_{ij} = z_{ij} - \bar z_{i\cdot} - \bar z_{\cdot j} + \bar z_{\cdot\cdot}.$$
It seems very reasonable (to me at any rate) to restrict estimation to estimable functions of $b$. In that case, the choice of side conditions is of no importance. Tests and confidence intervals for $\rho'Xb$ can be based on Theorem 12.3.3. A large sample approximation is
$$\frac{\rho'\hat\mu - \rho'Xb}{\sqrt{\rho'(A - A_0)D^{-1}(m)\rho}} \sim N(0, 1).$$
Of course, $AD^{-1}(m)$ has to be estimated in order to get a standard error. [$A_0 D^{-1}(m)$ does not depend on unknown parameters.] As indicated in application (b), variances are easy to find in the saturated model; unfortunately, the estimable functions of $b$ are generally quite complicated in the saturated model. If one is willing to use side conditions, the side conditions can sometimes give the illusion that the estimable functions are not complicated.
12.5 Proofs of Lemma 12.3.2 and Theorem 12.3.8
Two results from advanced calculus are needed. Recall that if $F: \mathbf{R}^q \times \mathbf{R}^p \to \mathbf{R}^p$, then $dF(x,y)$ is a $p$ by $q + p$ matrix. Partition $dF(x,y)$ into a $p \times q$ matrix, say $d_xF = [\partial F_i/\partial x_j]$, and a $p \times p$ matrix, $d_yF = [\partial F_i/\partial y_j]$.

The Implicit Function Theorem. If $F: \mathbf{R}^q \times \mathbf{R}^p \to \mathbf{R}^p$, $F(a,c) = 0$, $F(a,y)$ is differentiable, and $d_yF$ is nonsingular at $y = c$, then $F(x,y) = 0$ determines $y$ uniquely as a function of $x$ in a neighborhood of $(a,c)$. This unique function, say $\xi(x)$, is differentiable and satisfies $\xi(a) = c$ and $F(x,\xi(x)) = 0$ for $x$ in a neighborhood of $a$.

Proof. See Bartle (1964). □

Corollary 12.5.1. $d\xi(x) = -[d_yF]^{-1}[d_xF]$ where $y = \xi(x)$.

Proof. See Bartle (1964). □
Lemma 12.5.2. If $a$ is a scalar and $n$ is a $q$ vector of counts, then
(1) $\hat m(an) = a\hat m(n)$;
(2) $\hat\mu(an) = [\log(a)]J + \hat\mu(n)$.

Proof. $\hat m(an)$ is the unique solution of $[an - m]'X = 0$ with $\log(\hat m(an)) \in C(X)$. We will show that $a\hat m(n)$ is also a solution with $\log(a\hat m(n)) \in C(X)$, so $\hat m(an) = a\hat m(n)$. Recall that $\hat m(n)$ is the unique solution of $[n - m]'X = 0$ with $\log(\hat m(n)) \in C(X)$. Clearly, if $[n - \hat m(n)]'X = 0$, then $[an - a\hat m(n)]'X = 0$, and $\log(a\hat m(n)) = [\log(a)]J + \log(\hat m(n)) \in C(X)$ because both $J$ and $\log(\hat m(n))$ are in $C(X)$. Taking logs gives $\hat\mu(an) = [\log(a)]J + \hat\mu(n)$. □

Lemma 12.5.3. $\hat\mu(m^*) = \mu^*$ and $\hat m(m^*) = m^*$.

Proof. By definition, $m^* = m(b_*)$, so $b_*$ is a solution of $[m^* - m(b)]'X = 0$. Since $\hat\mu(m^*)$ is unique, we must have $\hat\mu(m^*) = Xb_* = \mu^*$. Then $\hat m(m^*) = \exp[\hat\mu(m^*)] = \exp[\mu^*] = m^*$. □

Lemma 12.3.2.
$$N^{1/2}(\hat\mu_N - \mu_N) - AD^{-1}N^{-1/2}(n_N - m_N) \xrightarrow{\;P\;} 0.$$
Proof. The MLE $\hat\mu_N$ is defined by $\hat\mu_N = X\hat b_N$, where $\hat b_N$ is a function of $n_N$ which is defined implicitly as the solution to $df_{n_N}(b) = [n_N - m(b)]'X = 0$. The proof follows from investigating the properties of the Taylor's expansion
$$\hat\mu(n) = \hat\mu(n_0) + d\hat\mu(n_0)(n - n_0) + o(\|n - n_0\|). \tag{1}$$
The expansion is applied with $n = N^{-1}n_N$ and $n_0 = N^{-1}m_N = m^*$. Rewriting (1) gives
$$\hat\mu(N^{-1}n_N) - \hat\mu(m^*) - d\hat\mu(m^*)(N^{-1}n_N - m^*) = o(\|N^{-1}n_N - m^*\|). \tag{2}$$
We examine the terms $\hat\mu(N^{-1}n_N) - \hat\mu(m^*)$ and $d\hat\mu(m^*)$ separately.

(a) We show that for any observations vector $n_N$,
$$\hat\mu(N^{-1}n_N) - \hat\mu(m^*) = \hat\mu(n_N) - \mu_N.$$
By Lemmas 12.5.2 and 12.5.3,
$$\hat\mu(N^{-1}n_N) - \hat\mu(m^*) = [\log N^{-1}]J + \hat\mu(n_N) - \mu^*.$$
Since $\mu_N = [\log N]J + \mu^*$, we have the result.

(b) We characterize the $q \times q$ matrix $d\hat\mu(m^*)$. Here $\hat\mu(n) = X\hat b(n)$, so $d\hat\mu(n) = X[d\hat b(n)]$, with $\hat b(n)$ defined implicitly as a zero of $F(n,b) = X'[n - m(b)]$. For any fixed vector $b_0$, let $n_0 = m(b_0)$. Then $F(n_0, b_0) = 0$, so by the Implicit Function Theorem, there exists $\hat b(n)$ such that if $n$ is close to $n_0$, $F(n, \hat b(n)) = 0$ and (from Corollary 12.5.1) $d\hat b(n) = -[d_{\hat b}F]^{-1}[d_nF]$. To find $d\hat b(n)$, we need $dF(n,b) = [X', -X'dm(b)]$. From (12.2.9), $dm(b) = D(m(b))X$, so $dF(n,b) = [X', -X'D(m(b))X]$,
$$d\hat b(n) = [X'D(\hat m)X]^{-1}X',$$
and $d\hat\mu(n) = X[X'D(m(\hat b))X]^{-1}X'$. In particular, $d\hat\mu(n_0)$ is always defined. We need $d\hat\mu(m^*)$. From Lemma 12.5.3, we have that $F(m^*, \hat b(m^*)) = 0$, so $d\hat\mu(m^*)$ is defined and $d\hat\mu(m^*) = X[X'D(\hat m(m^*))X]^{-1}X'$. Again, from Lemma 12.5.3, $\hat m(m^*) = m^*$, so $D(\hat m(m^*)) = D(m^*) = D$ and $d\hat\mu(m^*) = X[X'DX]^{-1}X' = AD^{-1}$.

(c) Using $N^{-1}n_N - m^* = O_p(N^{-1/2})$ and the results of (a) and (b) in (2) gives
$$\hat\mu(n_N) - \mu_N - AD^{-1}N^{-1}(n_N - m_N) = o\!\left(O_p\!\left(N^{-1/2}\right)\right) = o_p\!\left(N^{-1/2}\right).$$
Multiplying through by $N^{1/2}$ gives
$$N^{1/2}(\hat\mu_N - \mu_N) - AD^{-1}N^{-1/2}(n_N - m_N) = o_p(1). \qquad\Box$$
Theorem 12.3.8.
$$-2N^{-1}[\ell(n_N,\hat\mu_{1N}) - \ell(n_N,\hat\mu_N)] \xrightarrow{\;P\;} -2[\ell(m^*,\hat\mu_1(m^*)) - \ell(m^*,\mu^*)].$$
Moreover, $\mu_N \notin C(X_1)$ if and only if the right-hand side is positive.

Proof.
$$-2N^{-1}[\ell(n_N,\hat\mu_{1N}) - \ell(n_N,\hat\mu_N)] = -2N^{-1}[\ell(n_N,\hat\mu_{1N}) - \ell(n_N,\mu_N)] + 2N^{-1}[\ell(n_N,\hat\mu_N) - \ell(n_N,\mu_N)].$$
As in Theorem 12.3.6,
$$2N^{-1}[\ell(n_N,\hat\mu_N) - \ell(n_N,\mu_N)] \xrightarrow{\;P\;} 0,$$
so we need only investigate the behavior of
$$-2N^{-1}[\ell(n_N,\hat\mu_{1N}) - \ell(n_N,\mu_N)] = -2N^{-1}\left[n_N'(\hat\mu_{1N} - \mu_N) - J'(\hat m_{1N} - m_N)\right].$$
From Theorem 12.3.1, $N^{-1}n_N \xrightarrow{\;P\;} m^*$. As in the proof of Lemma 12.3.2, $\hat\mu_{1N} - \mu_N = \hat\mu_1(N^{-1}n_N) - \mu^*$ and $N^{-1}(\hat m_{1N} - m_N) = \hat m_1(N^{-1}n_N) - m^*$. By the continuity of $\hat m_1(\cdot)$ and $\hat\mu_1(\cdot)$ (ensured by the Implicit Function Theorem), $\hat m_1(N^{-1}n_N) \xrightarrow{\;P\;} \hat m_1(m^*)$ and $\hat\mu_1(N^{-1}n_N) \xrightarrow{\;P\;} \hat\mu_1(m^*)$, so
$$-2N^{-1}[\ell(n_N,\hat\mu_{1N}) - \ell(n_N,\mu_N)] \xrightarrow{\;P\;} -2\left[m^{*\prime}(\hat\mu_1(m^*) - \mu^*) - J'(\hat m_1(m^*) - m^*)\right] = -2[\ell(m^*,\hat\mu_1(m^*)) - \ell(m^*,\mu^*)].$$
Since $\hat\mu(m^*) = \mu^*$, $\ell(m^*,\mu^*)$ is the unique maximum of $\ell(m^*,\mu)$ for $\mu \in C(X)$. Since $\hat\mu_1(m^*)$ is in $C(X)$, if $\hat\mu_1(m^*) \ne \mu^*$,
$$-2[\ell(m^*,\hat\mu_1(m^*)) - \ell(m^*,\mu^*)] > 0.$$
This occurs whenever $\mu^* \notin C(X_1)$ because $\hat\mu_1(m^*) \in C(X_1)$. Finally, $\mu^* \notin C(X_1)$ if and only if $\mu_N \notin C(X_1)$. □
13 Bayesian Binomial Regression
Standard methods for analyzing binomial regression data rely on asymptotic inferences. Bayesian methods performed using simple computations apply for any sample size. We discuss Bayesian inferences for binomial regression with an emphasis on inferences for the probability of “success.” Furthermore, we illustrate diagnostic tools, perform model selection among non-nested models, and examine the sensitivity of the Bayesian methods. This chapter is closely related to Bedrick, Christensen, and Johnson (1997) and to earlier drafts of that article. Section 1 introduces Bayesian binomial regression. Section 2 discusses standard Bayesian inference procedures with an emphasis on the predictive distribution. Section 3 presents Bayesian diagnostics including influence measures, global model checking methods, and a procedure for selection of the appropriate link function. Section 4 discusses computations.
13.1 Introduction
The purpose of this chapter is to illustrate the simplicity of a fully Bayesian approach to binomial regression models. Historically, it has been difficult to specify realistic prior information on regression coefficients in nonlinear models, and computations for inference and diagnostics were difficult due to intractable integrations. These difficulties no longer exist. We illustrate a fairly complete analysis for two data sets using methods that are simple and easy to apply. In particular, we discuss a method for specifying the prior distribution that focuses on binomial probabilities, rather than esoteric regression coefficients. For computations, we focus on Monte Carlo methods because of their flexibility and their ease of implementation. We show how Monte Carlo sampling is used for prediction, making inferences on regression coefficients and probabilities, diagnostics, model checking, link selection, and sensitivity analysis of the prior.

Leonard (1972) first discussed Bayesian hierarchical models for binomial data. Zellner and Rossi (1984) gave an overview of Bayesian methods for binomial regression models. Johnson and Geisser (1985) and Johnson (1985) introduced general Bayesian predictive and estimative case deletion diagnostics that apply to binomial regression. We integrate these ideas along with Box's (1980) work on model checking to provide a variety of tools appropriate for analyzing binomial response data.

Consider regression data $(y_i, x_i)$, $i = 1, \ldots, n$, where the $x_i$'s are known $k$ vectors of covariates and the $y_i$'s are independent binomial random variables with $N_i$ trials. The probability of success $p$ for any single trial $y$ with covariate $x$ is $F(x'\beta)$, i.e.,
$$F(x'\beta) \equiv p \equiv \Pr(y = 1 \mid x, \beta).$$
Here, the vector $\beta$ is an unknown $k$ vector of regression coefficients. Although the function $F(\cdot)$ could be an arbitrary cdf, we consider logistic, probit, and complementary log-log regression models, in which $F(x'\beta)$ is modeled as one of
$$F(x'\beta) = \begin{cases} e^{x'\beta}\big/\!\left(1 + e^{x'\beta}\right) & \text{Logistic} \\ \Phi(x'\beta) & \text{Probit} \\ 1 - \exp\!\left(-e^{x'\beta}\right) & \text{Complementary log-log.} \end{cases}$$
Here, $\Phi(u)$ is the cdf of a standard normal distribution. The success probability $p$ is related to $\beta$ through $F^{-1}(p) = x'\beta$, which is the link function from Chapter 9. For the logistic, probit, and complementary log-log models, $F^{-1}(p) = \log\{p/(1-p)\}$, $\Phi^{-1}(p)$, and $\log\{-\log(1-p)\}$, respectively.

The likelihood function for the complete data $Y = (y_1, \ldots, y_n)'$ is
$$L(\beta|Y) \equiv \prod_{i=1}^{n} L(\beta|y_i) \equiv \prod_{i=1}^{n} \binom{N_i}{y_i}[F(x_i'\beta)]^{y_i}[1 - F(x_i'\beta)]^{N_i - y_i}. \tag{1}$$
For a prior distribution on $\beta$, say $\pi(\beta)$, obtaining posterior and predictive distributions requires computing the posterior of $\beta$,
$$\pi(\beta|Y) = \frac{L(\beta|Y)\pi(\beta)}{\int L(\beta|Y)\pi(\beta)\,d\beta}.$$
Most interesting aspects of a Bayesian analysis can be obtained from various integrals involving this posterior density. Integrals involving $\pi(\beta|Y)$ are intractable, so we must use approximations. Monte Carlo methods yield a discrete approximation to the posterior distribution that takes values $\beta^r$ with probability $\tilde q_r$, $r = 1, \ldots, t$. Methods for obtaining a discrete approximation are discussed in Section 4. Given a function $h(\beta)$, the posterior expectation $E\{h(\beta) \mid Y\}$ is approximated by
$$\int h(\beta)\pi(\beta|Y)\,d\beta \doteq \sum_{r=1}^{t} h(\beta^r)\tilde q_r. \tag{2}$$
Typically, the Strong Law of Large Numbers implies that the error in the approximation converges almost surely to zero as the simulation sample size $t$ increases.
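For example (a sketch, not from the text), once a weighted sample $\{(\beta^r, \tilde q_r)\}$ is available, any approximation of the form (2) is just a weighted average; the sampled values and weights below are hypothetical.

import numpy as np

def posterior_expectation(h, beta_samples, q_tilde):
    # E{h(beta) | Y}  is approximately  sum_r h(beta^r) q~_r, with the q~_r summing to 1
    return sum(q * h(b) for b, q in zip(beta_samples, q_tilde))

# e.g., the posterior mean of beta_1 and the probability that beta_1 < 0:
betas = [np.array([12.0, -0.20]), np.array([15.0, -0.25]), np.array([10.0, -0.15])]
w = np.array([0.5, 0.3, 0.2])
mean_b1 = posterior_expectation(lambda b: b[1], betas, w)
prob_neg = posterior_expectation(lambda b: float(b[1] < 0.0), betas, w)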
13.2 Bayesian Inference

13.2.1 Specifying the Prior and Approximating the Posterior
Bayesian inference requires the specification of a prior distribution π(β). In the past, several methods of specifying priors for binomial regression problems have been used. The standard approach has been to assume either a normal distribution for β or the “noninformative” diffuse prior π(β) = 1. These are convenient in large sample situations where the posterior on β is approximately normal. See Zellner and Rossi (1984) for relevant discussion. Another type of prior focuses on the assessment of “success” probabilities for various choices of covariate values, rather than on the assessment of regression coefficients. Example 13.2.1. Consider a simple situation with k = 2. Imagine that we are recruiting statistics students into a graduate program. We will attempt to recruit from two populations: domestic students (i = 1) and international students (i = 2). If N1 domestic students apply and N2 international students apply, assuming independence of students we successfully recruit y1 ∼ Bin(N1 , p1 ) domestic students and y2 ∼ Bin(N2 , p2 ) international students. We can write a one-way ANOVA logit model log{pi /(1 − pi )} = µ + αi , i = 1, 2. This model is overparameterized, so we impose the side condition α1 = 0 to make the model a logistic regression. We now have log{p1 /(1 − p1 )} = µ,
log{p2 /(1 − p2 )} = µ + α2 .
The graduate advisor has specified prior distributions p1 ∼ Beta(4, 4) and p2 ∼ Beta(4, 1), reflecting (in part) the beliefs that about 80% = E(p2 ) = 4/(4 + 1) of the international students and half, [4/(4 + 4)], of the domestic students will be successfully recruited. The prior specification includes the assumption that p1 and p2 are independent. Having placed a joint distribution on p1 and p2 , it is a calculus problem to determine the corresponding
distribution on the "regression" parameters $\mu$ and $\alpha_2$. We discuss the exact procedure later. While we assumed that the distributions of $p_1$ and $p_2$ were independent, the approach can, in theory, be carried out with any joint distribution for $p_1$ and $p_2$. The problem is not in doing the calculus, but in specifying a realistic joint distribution when the independence assumption is not appropriate. The independence assumption is a key part of the procedure. With $p_1$ and $p_2$ independent, if we were told the value of $p_1$, we should not be inclined to revise our thinking about $p_2$. That certainly seems reasonable if we are told that $p_1$ is something near its expected value .5. It seems less reasonable if we are told, say, that $p_1 \ge .95$. Knowing that $p_1 \ge .95$ would probably make us want to revise our distribution of $p_2$ to make larger values more probable. However, .95 is 2.7 prior standard deviations above the prior mean for $p_1$, so this event is extremely unlikely under the prior specification. If $p_1 \ge .95$ is more likely than the original prior specification allows, the entire prior should be recalibrated, at which point the independence assumption may be called in question. However, if, after reflection, those situations that might cause concern about the independence assumption are thought unlikely, then we believe the independence assumption is reasonable. Lack of independence can also occur if the international students were thought to be very similar to the domestic students regardless of the behavior of the domestic students. In this case, knowing $\tilde p_1$ is highly informative about $\tilde p_2$ and our prior is not appropriate.

The main idea in Example 13.2.1 was to specify prior distributions for $p_1$ and $p_2$ rather than on the regression parameters $\mu$ and $\alpha_2$. We do this because $p_1$ and $p_2$ have natural interpretations. In a simple logistic regression, $\log\{p/(1-p)\} = \beta_0 + \beta_1\tau$, there are again only two regression parameters ($\beta_0$ and $\beta_1$), but there is no obvious choice for probabilities $p_1$ and $p_2$ at which to specify the prior. In such cases, we must pick two values, say $\tilde\tau_1$ and $\tilde\tau_2$, and specify prior distributions for $\tilde p_1$, the probability of success when $\tau = \tilde\tau_1$, and $\tilde p_2$, the probability of success when $\tau = \tilde\tau_2$.

Example 13.2.2. O-Ring Data. Consider fitting a simple regression model on temperature to the data in Table 2.1. Let $p_i$ be the probability that any O-ring fails in case $i$ and model this as $F^{-1}(p_i) = \beta_0 + \beta_1\tau_i = x_i'\beta$, where $\tau_i$ is the temperature. Our prior is defined by giving independent distributions to the probabilities of O-ring failure at temperatures $\tilde\tau_1 = 55$ and $\tilde\tau_2 = 75$ degrees Fahrenheit. Write
$$\beta_0 + \beta_1\tilde\tau_i = [1, \tilde\tau_i]\begin{bmatrix}\beta_0 \\ \beta_1\end{bmatrix} = \tilde x_i'\beta$$
and define $\tilde p_1$ and $\tilde p_2$ by $\tilde p_i = F(\tilde x_i'\beta)$. The $\tilde\tau_i$'s should be chosen in the expected range of the observed temperatures but far enough apart so that information about the corresponding probabilities can be reasonably assumed independent. The selected temperatures should also be amenable to expert opinion. Our priors on $\tilde p_1$ and $\tilde p_2$ are Beta(1, .577) and Beta(.577, 1), respectively. The prior on $\tilde p_1$ was chosen because it has a "J" shape and gives $\Pr[\tilde p_1 > 1/2] = 2/3$. The prior on $\tilde p_2$ has a "J" shape and gives $\Pr[\tilde p_2 < 1/2] = 2/3$. The prior on $\beta = (\beta_0, \beta_1)'$ is determined using the change-of-variables method. Under the logistic model, the prior on $\beta$ is a data augmentation prior (DAP) in the sense that it has the same functional form as the likelihood function, i.e.,
$$\pi(\beta) \propto \prod_{i=1}^{2}[F(\tilde x_i'\beta)]^{\tilde y_i}[1 - F(\tilde x_i'\beta)]^{\tilde N_i - \tilde y_i},$$
where $\tilde N_1 = \tilde N_2 = 1.577$, $\tilde y_1 = 1$, and $\tilde y_2 = .577$. With this DAP, the prior on $\tilde p_1$ can be thought of as one prior O-ring failure out of 1.577 trials at $\tilde\tau_1 = 55$, and for $\tilde p_2$, it can be thought of as .577 prior O-ring failures out of 1.577 trials at $\tilde\tau_2 = 75$. The weight attached to the prior is equivalent to $\tilde N_1 + \tilde N_2$ "prior" observations, about 3. The posterior density for $\beta$ also has the same functional form as the likelihood, i.e.,
$$\pi(\beta|Y) \propto \prod_{i=1}^{n}[F(x_i'\beta)]^{y_i}[1 - F(x_i'\beta)]^{N_i - y_i}\prod_{i=1}^{2}[F(\tilde x_i'\beta)]^{\tilde y_i}[1 - F(\tilde x_i'\beta)]^{\tilde N_i - \tilde y_i}.$$
Many standard computer programs, e.g., GLIM and SPLUS, can be used to find the posterior mode $\beta_M$ and an asymptotic dispersion matrix $\Sigma(\beta_M)$ for the posterior. To compute the mode, simply augment the observed data with a prior "binomial" observation at 55 degrees consisting of 1.577 trials and 1 observed O-ring failure and include a prior observation at 75 degrees with 1.577 trials and .577 O-ring failures. The posterior mode of $\beta$ is the maximum likelihood estimate (MLE) from the augmented data. The asymptotic covariance matrix computed from the augmented data is the asymptotic dispersion matrix for the posterior. These quantities are of interest in themselves and can also be used to create a good discrete approximation to the posterior.

Figures 13.1 and 13.2 give contour plots of the prior and posterior distributions on $\beta$, respectively. Note the high correlation between $\beta_0$ and $\beta_1$ in both the prior and the posterior. The posterior exhibits appreciable skewness, with longer tails in the direction of small slopes and large intercepts. The high correlation between $\beta_0$ and $\beta_1$ is largely eliminated if we standardize the temperature to have mean zero, i.e., if we change the model to $\mathrm{logit}(p_i) = \beta_0 + \beta_1(\tau_i - \bar\tau_\cdot)$. For some problems, this may be preferable to ease the computational burden.
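The augmented-data computation is easy to code directly. The following minimal sketch (not from the text) runs Newton-Raphson on the augmented binomial log-likelihood to get $\beta_M$ and $\Sigma(\beta_M)$; the observed data shown are hypothetical placeholders, while the two appended "prior" observations are those described above.

import numpy as np

def logistic_mode(X, y, N, tol=1e-10, max_iter=200):
    # Newton-Raphson for binomial logistic regression:
    # score X'(y - N p), information X' D(N p (1 - p)) X.
    b = np.zeros(X.shape[1])
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ b)))
        W = N * p * (1.0 - p)
        delta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - N * p))
        b = b + delta
        if np.max(np.abs(delta)) < tol:
            break
    cov = np.linalg.inv(X.T @ (W[:, None] * X))   # asymptotic dispersion Sigma(beta_M)
    return b, cov

# Hypothetical observed data (temperature, failures, trials are placeholders) ...
temp = np.array([53.0, 57.0, 63.0, 70.0, 75.0, 81.0])
y = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
N = np.ones_like(y)
# ... augmented with the two prior "observations" from the DAP:
temp_aug = np.concatenate([temp, [55.0, 75.0]])
y_aug = np.concatenate([y, [1.0, 0.577]])
N_aug = np.concatenate([N, [1.577, 1.577]])
X_aug = np.column_stack([np.ones_like(temp_aug), temp_aug])
b_mode, Sigma = logistic_mode(X_aug, y_aug, N_aug)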
As there were no computational difficulties with these data, and as prediction and model validation are independent of the regression parameterization, we consider only the original version of the model.

In general, we derive the prior on $\beta$ for a model with $k$ regression parameters from a prior elicited on success probabilities $\tilde p_i$ at $k$ suitably selected predictor vectors $\tilde x_i$. We place independent Beta($\tilde y_i$, $\tilde N_i - \tilde y_i$) priors on the $\tilde p_i$, regardless of the choice of the link function. For an arbitrary link function, the induced prior on $\beta$ has the form
$$\pi(\beta) \propto \prod_{i=1}^{k}[F(\tilde x_i'\beta)]^{\tilde y_i - 1}[1 - F(\tilde x_i'\beta)]^{\tilde N_i - \tilde y_i - 1}f(\tilde x_i'\beta),$$
where $f(\cdot)$ is the first derivative of the function $F(\cdot)$. In the case of logistic regression,
$$\pi(\beta) \propto \prod_{i=1}^{k}[F(\tilde x_i'\beta)]^{\tilde y_i}[1 - F(\tilde x_i'\beta)]^{\tilde N_i - \tilde y_i}, \tag{1}$$
which has the same form as the likelihood function. Therefore, (1) is a data augmentation prior (DAP), so named because the likelihood times the prior has the form of a likelihood with additional "prior" data $(\tilde y_i, \tilde N_i)$, $i = 1, \ldots, k$. In other words, for the logistic model we can think of the parameters of the prior distribution as a prior sample size $\tilde N_i$ and a prior number of successes $\tilde y_i$ corresponding to the vector of predictors $\tilde x_i$. Incidentally, this procedure can also be executed with priors for the $\tilde p_i$'s other than betas. In fact, with different link functions, different distributions on the $\tilde p_i$'s lead to different DAPs. (Note that the likelihood depends on the link function, so DAPs depend on the link function.) We now consider our primary example.

Example 13.2.3. Trauma Data. We analyze data on a randomly selected subset of 300 patients admitted to the University of New Mexico Trauma Center between the years 1991 and 1994. For each patient, we have their injury severity score (ISS), their revised trauma score (RTS), their AGE, the type of injuries (TI), that is, whether they were blunt ($TI = 0$), e.g., the result of a car crash, or penetrating ($TI = 1$), e.g., gunshot wounds, and the dependent variable, whether the patient eventually survived the injuries. The ISS is an overall index of a patient's injuries based on the approximately 1300 injuries catalogued in the Abbreviated Injury Scale. The ISS can take on values from 0 for a patient with no injuries to 75 for a patient with severe injuries in three or more body areas. The RTS is an index of physiologic injury and is constructed as a weighted average of an incoming patient's systolic blood pressure, respiratory rate, and Glasgow Coma Scale. The RTS takes on values from 0 for a patient with no vital signs to 7.84 for a patient with
normal vital signs. The data are available electronically from STATLIB as well as from my web homepage: http://stat.unm.edu/~fletcher. Additional information is given in the Preface.

Figure 13.3 gives side-by-side boxplots comparing the 278 survivors and 22 fatalities on RTS, ISS, and AGE. Seventeen of the 225 patients with blunt injuries died. Five of the 75 patients with penetrating injuries died.

The data were provided by Dr. Turner Osler, a trauma surgeon at the University of Vermont and former head of the Burn Unit at the University of New Mexico Trauma Center. Dr. Osler proposed a logistic regression model to estimate the probability of a patient's death using an intercept and predictors ISS, RTS, patient's AGE (used as a surrogate for physiologic reserve), TI, and an interaction between AGE and TI. Similar logistic models are used by trauma centers throughout the United States. Dr. Osler's expert opinions formed the basis for our prior distribution.

To induce a proper prior distribution on the $k = 6$ dimensional vector $\beta$, we require a joint distribution on death probabilities for 6 sets of conditions $\tilde x_i = (1, ISS_i, RTS_i, AGE_i, TI_i, AGE_i \times TI_i)$. Based on discussions with our expert and two-dimensional plots of the data, we defined a $2^4$ factorial design having ISS at levels 25 and 41, RTS at levels 3.34 and 7.84, AGE at levels 10 and 60, and TI at levels 0 and 1. The idea was to pick values of the variables that were relatively extreme within the data but still had substantial probabilities for both success and failure. The prior conditions were chosen as a 1/4 replicate of this $2^4$ with two center points. However, the center points were taken to be values that could actually exist; none of ISS, RTS, and TI are truly continuous variables. In fact, TI is a binary variable, so one "center point" was taken with $TI = 0$ and the other with $TI = 1$. Bedrick, Christensen, and Johnson (1996) (henceforth referred to as BCJ) recommend calculating the condition number of the matrix $\tilde X = (\tilde x_1, \ldots, \tilde x_k)$ to ascertain that the chosen $\tilde x_i$'s are not too close or too far apart. See Belsley (1991) for discussion of condition numbers. Beta priors were found to be suitable for the $\tilde p_i$'s with parameters given in Table 13.1. Figure 13.4 gives plots of the priors on the $\tilde p_i$'s as well as the posteriors. The priors are generally consistent with the posteriors. Relative to the amount of data, the priors are not overwhelming, being the equivalent of $57.5 = \sum_{i=1}^{6}\tilde N_i$ observations compared to 300 data points. (The posterior densities were obtained by sampling from the discrete approximate posterior and smoothing the samples.)

Our initial discussion with Dr. Osler involved eliciting 1st, 50th, and 99th percentiles for each $\tilde p_i$. These actually overspecify a beta distribution. We wrote a computer program to find the beta distributions that most nearly satisfied the specifications, plotted these distributions, and validated them with our expert.
TABLE 13.1. Trauma Data: Prior Specification

      Design for Prior x̃i                                  Beta(ỹi, Ñi − ỹi)
 i   Intercept   ISS    RTS    AGE   TI   AGE×TI             ỹi     Ñi − ỹi
 1       1        25    7.84    60    0       0              1.1      8.5
 2       1        25    3.34    10    0       0              3.0     11.0
 3       1        41    3.34    60    1      60              5.9      1.7
 4       1        41    7.84    10    1      10              1.3     12.0
 5       1        33    5.74    35    0       0              1.1      4.9
 6       1        33    5.74    35    1      35              1.5      5.5
The first probability p˜1 corresponds to an individual that “has good physiology, is ‘not bad hurt,’ does not have a lot of reserve,” and for whom there is “added uncertainty due to age.” The Beta(1.1, 8.5) suitably reflects Dr. Osler’s uncertainty about p˜1 . The median of his prior is around .09. The second type of individual “has bad physiology, is very ill, but is young and resilient and is not so bad hurt.” The prior for p˜2 is Beta(3, 11) with median around .20. Incidentally, “bad physiology” and “very ill” apparently refer to bad RTS scores, while how badly hurt one is relates to ISS. The third individual has “bad physiology, a pretty bad injury, and there is much more uncertainty here due to the age factor.” The prior is Beta(5.9, 1.7) with median around .8. Prior individual four “is young, resilient, and has a big injury.” The prior is Beta(1.3, 12) with a median of around .07. Dr. Osler had more difficulty with the 5th and 6th types of individuals because their conditions were both less extreme and more related than those already considered. The priors for p˜5 and p˜6 are Beta(1.1,4.9) with approximate median .15, and Beta(1.5, 5.5) with approximate median .19, respectively. The assumption of independence seems reasonable with the possible exception of p˜5 and p˜6 . If our expert were told that p˜5 = .3, he would definitely want to revise his probability for p˜6 upward. This is because he is fairly confident that the difference between these two probabilities, p˜6 − p˜5 , is positive but reasonably small, while he is less certain about the magnitude of the probabilities themselves. Having p˜6 − p˜5 small but positive is reflecting his perception that penetrating injuries are worse than blunt ones but not a lot worse. Because of our concern about possible lack of independence for the two values p˜6 and p˜5 only, we also considered a prior in which the information about p˜6 was left out of the specification. This results in a partially informative prior (see BCJ, Sec. 4.1, for a full discussion) which is an improper DAP using five prior observations instead of the six required for a proper DAP. We found that all statistical inferences were essentially the same for the two priors, so we have presented results only for the full prior. Finally, it should be pointed out that the process of coming up with a prior is very much a collaboration between the expert and the statisticians.
The judgement and expertise of both are needed. It is also quite a bit of work for everyone involved.

Bedrick, Christensen, and Johnson (1996, 1997) give further details on this approach to specifying priors for regression problems, including discussions of priors with order restrictions on the $\tilde p_i$'s and partial prior information. As mentioned above, a particularly useful form of partial prior information is specifying $k' < k$ values $\tilde p_i$. The genesis of this approach lies with Tsutakawa (1975), Tsutakawa and Lin (1986), and Grieve (1988), who considered independent prior distributions on two probabilities of "success" in simple linear binomial regression problems. Tsutakawa and Lin (1986) argued that eliciting information about success probabilities should be much easier than eliciting information about regression coefficients, a position with which we heartily agree. This is clearly true if one entertains the possibility of two or more models, such as logistic regression versus probit regression. The regression coefficients for these two models require separate elicitations, whereas if one has elicited a prior for probabilities, it is straightforward to induce the requisite prior on $\beta$ for either model. BCJ extended the Tsutakawa approach to generalized linear models (GLMs) with multiple covariates. For a hypothetical observation $\tilde y_i$ with covariate vector $\tilde x_i$, BCJ specify a prior on the mean value $E(\tilde y_i|\tilde x_i)$. This is done for $k$ locations $\tilde x_i$, $i = 1, \ldots, k$, where $k$ is the common dimension of the $\tilde x_i$'s. The prior on the regression coefficient vector $\beta$ is induced by transforming the distribution on the $E(\tilde y_i|\tilde x_i)$'s into a distribution on $\beta$. BCJ call such priors conditional means priors (CMPs) and elaborate on the approach in considerable detail. The conditional means provide parameters that are more intuitive than regression coefficients and thus easier to specify prior information for. BCJ also make connections between CMPs and DAPs. (Note that to make the GLM approach apply to binomial regression, just as in Chapter 9, the $y_i$'s have to be defined as binomial proportions rather than our usual binomial counts.)

A key feature in this approach is assuming prior independence of the $E(\tilde y_i|\tilde x_i)$'s. This assumption might be unreasonable if the $\tilde x_i$'s are "too close" together (cf. Grieve, 1988). There are also technical difficulties if they are "too far apart." BCJ (Sec. 5) examined these issues in detail.
13.2.2 Predictive Probabilities
The predictive probability of success in one new trial $y$ with covariate $x$ is
$$\Pr(y = 1|Y, x) = E[F(x'\beta)|Y, x] = \int F(x'\beta)\pi(\beta|Y)\,d\beta. \tag{2}$$
form
$$\Pr(a < \beta_j \le b \mid Y) = \int I_{(a,b]}(\beta_j)\pi(\beta|Y)\,d\beta \doteq \sum_{r=1}^{t} I_{(a,b]}(\beta_j^r)\tilde q_r,$$
where I(a,b] (βj ) is 1 if a < βj ≤ b and 0 otherwise. Example 13.2.2 continued. Table 13.2 presents posterior means, standard deviations, and percentiles of β0 and β1 for the O-ring data. Using our prior, Pr(β1 < 0|Y ) > .99, which suggests the slope is not zero. Figure 13.8 gives the Bayesian marginal posterior density for β1 in the O-ring data. As before, this was actually generated by smoothing 5000 samples from the approximate posterior distribution, i.e., using Rubin’s (1987) SIR algorithm. TABLE 13.2. Posterior Marginal Distribution: O-Rings
                           Full Data                Case 18 Deleted
                          β0        β1              β0        β1
 β̂i = E(βi|Y)           12.97    −.2018           16.92    −.2648
 Std. Dev.(βi|Y)          5.75     .0847            7.09     .1056
 5%                       4.56    −.355             6.85    −.459
 25%                      9.04    −.251            11.98    −.324
 50%                     12.44    −.194            16.13    −.252
 75%                     16.20    −.144            20.86    −.191
 95%                     23.38    −.077            29.96    −.114
Example 13.2.3 continued. Table 13.3 presents posterior means, standard deviations, and percentiles for the βj ’s from the trauma data along with the maximum likelihood estimates, and asymptotic standard errors as well as posterior summaries obtained from the diffuse prior π(β) = 1. In addition, the informative prior gives Pr(β1 > 0|Y ) > .99, which suggests that the coefficient of ISS is not zero. Recall that low values of RTS are bad for the patient, so the tendency of the RTS coefficients to be negative is reasonable. Central 90% posterior intervals for the βj ’s are about 3/4’s as wide using the informative prior as with the diffuse prior.
13.2.4 Inference for LDα
With the O-ring data, it is of interest to estimate the temperature at which the chance of O-ring failure is, say 50%, or some other prespecified amount α. This percentile is often called the LDα in bioassay problems (LD for “lethal dose”), and satisfies LDα = {F −1 (α) − β0 }/β1 . The LDα is a function of the vector β, so its approximate posterior distribution is easily
TABLE 13.3. Fitted Trauma Model

Posterior summaries based on the informative prior:
 Variable     Estimate   Std. Error     5%      95%
 Intercept     −1.79       1.10       −3.54     .02
 ISS             .07        .02         .03     .10
 RTS            −.60        .14        −.82    −.37
 AGE             .05        .01         .03     .07
 TI             1.10       1.06        −.66    2.87
 AGE × TI       −.02        .03        −.06     .03

Posterior summaries based on the diffuse prior:
 Variable     Estimate   Std. Error     5%      95%
 Intercept     −2.81       1.60       −5.34    −.18
 ISS             .09        .03         .05     .13
 RTS            −.59        .17        −.86    −.32
 AGE             .06        .02         .03     .09
 TI             1.46       1.36        −.79    3.69
 AGE × TI       −.01        .03        −.07     .05

Maximum likelihood:
 Variable     Estimate   Std. Error
 Intercept     −2.73       1.62
 ISS             .08        .03
 RTS            −.55        .17
 AGE             .05        .01
 TI             1.34       1.33
 AGE × TI       −.01        .03
α .90 .75 .50 .25 .10
13.3
Full Data Percentiles 5% 50% 95% 30.2 43.4 55.9 65.1 70.3
52.9 58.5 64.2 69.8 75.4
60.4 64.0 68.5 76.4 88.3
α .90 .75 .50 .25 .10
Case 18 Deleted Percentiles 5% 50% 95% 39.8 48.9 57.5 64.1 68.3
55.1 59.4 63.8 68.1 72.4
61.2 64.0 67.5 73.0 80.9
Diagnostics
In this section, we examine a variety of influence diagnostics based on deleting cases. We also explore Box’s (1980) method of model checking. Finally,
13.3 Diagnostics
441
we consider the choice of an appropriate link function and an associated case deletion diagnostic.
13.3.1
Case Deletion Influence Measures
Case deletion diagnostics were pioneered by Cook (1977), Belsley, Kuh and Welsch (1980), and Pregibon (1981). Johnson and Geisser (1982, 1983, 1985) introduced Bayesian predictive and estimative case deletion diagnostics for the linear model and Johnson (1985) introduced diagnostics for the estimation of probabilities in logistic regression. Here we present the Johnson-Geisser influence measures for this nonlinear Bayesian setting. Our purpose is to detect those cases that, upon deletion from the data, noticeably affect inferences. For example, if the predictive probability of O-ring failure were to change radically upon deletion of a single case, it is incumbent upon us to report and quantify that fact. It may or may not be appropriate to delete such cases in a final analysis. The effect of case deletion on the posterior of β is easily formulated. Recalling (13.1.1), the likelihood for β based on all the data except yi is L(β|Y(i) ) =
L(β|Y ) L(β|yi )
where Y(i) denotes the data Y with yi deleted. It follows that π(β|Y(i) ) = 3
L(β|Y(i) )π(β) π(β|Y )/L(β|yi ) =3 . L(β|Y(i) )π(β)dβ π(β|Y )/L(β|yi )dβ
(1)
If we renormalize the probability weights in our discrete approximation, q˜r /L(β r |yi ) , ˜k /L(β k |yi ) k=1 q
q˜r(i) = t
then the distribution taking values β r with probability q˜r(i) gives a discrete approximation to the posterior (1). Expectations with respect to π(β|Y(i) ) are evaluated using this approximate distribution.
Estimative Influence Kullback-Leibler (KL) divergences can be used as in Johnson and Geisser (1985) and Pettit and Smith (1985) to measure the discrepancy between full and reduced data posteriors. The KL divergence with respect to the posterior density with the ith case deleted is defined as ' π(β|Y(i) ) β π(β|Y(i) )dβ ≥ 0. D1i ≡ log π(β|Y )
442
13. Bayesian Binomial Regression
β A large value of D1i indicates that deletion of case i results in a different posterior for β than if it were retained, possibly resulting in different inferences for β. β . The predictive probaWe now present a computational formula for D1i bility that a future binomial observation y with covariate vector xi equals the observed yi value, given Y(i) , can be expressed in two equivalent ways:
' Pr(y = yi |Y(i) , xi ) =
L(β|yi )π(β|Y(i) )dβ =
L(β|yi )π(β|Y(i) ) . π(β|Y )
(2)
To see this, note that from (1), L(β|yi )π(β|Y(i) ) 1 =3 . π(β|Y ) π(β|Y )/L(β|yi )dβ Also, from (1), ' L(β|yi )π(β|Y(i) )dβ = 3 L(β|yi )π(β|Y )/L(β|yi )dβ 1 3 =3 . π(β|Y )/L(β|yi )dβ π(β|Y )/L(β|yi )dβ Equation (2) gives
Pr(y = yi |Y(i) , xi ) = log π(β|Y(i) )dβ L(β|yi ) ' = log Pr(y = yi |Y(i) , xi ) − log L(β|yi )π(β|Y(i) )dβ & % t t . r L(β |yi )˜ qr(i) − log L(β r |yi )˜ qr(i) . = log '
β D1i
r=1
r=1
The KL divergence with respect to the posterior based on all observations is defined as ' π(β|Y ) β D2i ≡ log π(β|Y )dβ. π(β|Y(i) ) Using equation (2), β D2i
% t & t . r r = log L(β |yi )˜ qr − log L(β |yi )˜ qr(i) . r=1
r=1
The symmetric divergence is defined to be the sum of the divergences for β β + D2i . the deleted and full posteriors, Diβ ≡ D1i
13.3 Diagnostics
443
Predictive Influence The predictive distribution for a single trial is Bernoulli, i.e., takes on the values 0 and 1. The symmetric KL divergence is used to measure the discrepancy between full and reduced data predictive distributions. The symmetric KL divergence between two Bernoulli distributions with probabilities p and q reduces to p(1 − q) . J(p, q) ≡ (p − q) log (1 − p)q As in Johnson (1985), we define a symmetric predictive divergence diagnostic for predicting new observations at the original data locations when case i is deleted: Dip
n ≡ J Pr(y = 1|Y, xj ), Pr(y = 1|Y(i) , xj ) . j=1
Here, Pr(y = 1|Y, x) is the predictive probability of success from all the data as defined in (13.2.2), and Pr(y = 1|Y(i) , x) is the predictive probability of success based on all the data except case i: ' Pr(y = 1|Y(i) , x) =
. F (x β)π(β|Y(i) )dβ = F (x β r )˜ qr(i) . t
r=1
The symmetric predictive divergence diagnostic for predicting observations at an arbitrary set of locations, say xfj , j = 1, . . . , r, is Dif ≡
r J Pr(y = 1|Y, xfj ), Pr(y = 1|Y(i) , xfj ) . j=1
A large value of Dip or Dif indicates that deletion of case i results in different predictive probabilities than if it were retained, possibly resulting in different inferences or decisions. Example 13.3.1. O-Ring Data. Figure 13.9 gives index plots of Dip and Dif for the O-ring data. The new locations used in defining Dif were xfj = 31, 33, 35, . . . , 51. The plots for β β D1i and D2i were similar to the plot of Dip , so they are not included. Case 18, which corresponds to the flight where O-rings failed at the highest launch temperature, consistently stands out. Note that the values of Dif are larger for cases with low temperatures. This occurs because the predictions being made are also at low temperatures. The estimative measures and the predictive measure Dip are qualitatively similar for these data, although they need not be in general.
446
13.3.2
13. Bayesian Binomial Regression
Model Checking
We consider two methods for model checking. The first is a global model check due to Box (1980). This involves finding the probability that a new vector Y∗ has a marginal probability smaller than that of the vector Y that we actually observed, i.e., Pr [p(Y∗ ) ≤ p(Y )] , '
where p(Y ) =
L(β|Y )π(β)dβ.
This is essentially a P value, so small values are of significance. For the O-ring data, this value is approximated as .58. The probability is large, so there is no indication of a substantial problem with the model. If the improper diffuse prior π(β) = 1 is used, the required marginal distribution of the data may not exist. Another model check considers the criterion for one element of the Y vector at a time, i.e., Pr [p(yi∗ ) ≤ p(yi )] . This can be viewed as a Bayesian outlier check because we are assessing whether each observation is unusual relative to the model. For the O-ring data, all of these values are 1 except the two identical cases 13 and 14 that give .43 and case 18 that gives .37. This diagnostic gives no indication of substantial problems with the model. The model checking computations were performed by sampling from the prior distribution. We sample pairs (˜ p1 , p˜2 ) and solve the equations pi ) = β0 + β1 x ˜i , i = 1, 2, to obtain samples of β0 and β1 . Sampling F −1 (˜ the pairs (˜ p1 , p˜2 ) is easy with our prior because the p˜i ’s have independent r , r = 1, . . . , v, from the prior, beta distributions. Given a sample β# . 1 r L(β# |Y ). p(Y ) = v r=1 v
For an individual component, . 1 r p(yi ) = L(β# |yi ). v r=1 v
Computing Pr[p(Y∗ ) ≤ p(Y )] for a new vector Y∗ requires an additional r , r = 1, . . . , v, generate new independent round of sampling. For each β# r )), respectively. random variables yir∗ , i = 1, . . . , n, that are Bin(Ni , F (xi β# The yir∗ ’s form vectors Yr∗ for which we can compute p(Yr∗ ) as above. Pr[p(Y∗ ) ≤ p(Y )] is approximated by the proportion of p(Yr∗ )’s that are no greater than p(Y ). Computation of Pr[p(yi∗ ) ≤ p(yi )] is similar.
13.3 Diagnostics
447
Rubin (1988) advocated Bayesian model checks using predictive rather than marginal distributions. On the O-ring data, Rubin’s analogues of the global and local model checks lead to identical conclusions. Chaloner and Brant (1988) check for outliers using the posterior of β. Similar methods also apply to the trauma data.
13.3.3
Link Selection
We now allow the Bayesian paradigm to indicate which of the three link function models is most appropriate for the data: logistic (M1 ), probit (M2 ), or complementary log-log (M3 ). Bayes factors for comparing models Mj and Mk are numbers BFjk such that / P (M ) P (Mj |Y ) . j = BFjk . P (Mk |Y ) P (Mk ) The Bayes factor is the multiplier that changes the prior odds for the models into the posterior odds. It is a simple application of Bayes’s theorem to show that p(Y |Mj ) . BFjk = p(Y |Mk ) p(Y |M1 ) was computed previously as p(Y ); it is the marginal probability of obtaining Y from the logistic model. Computing p(Y |M ) for an alternative model M involves integrating the corresponding likelihood function with respect to the induced prior on β for that model. As in the logistic case, p(Y |M ) is estimated using samples generated from the prior on the p˜j ’s. Example 13.3.1 continued. O-Ring Data. For the O-ring data, the Bayes factors under our prior are BF21 = 1.086, BF31 = 1.403, and, thus, BF32 = BF31 /BF21 = 1.403/1.086 = 1.292. None of these values is large enough to suggest a serious preference for one of the three models. In particular, if the prior odds for the probit versus logit models are 1, the posterior odds are merely 1.086. Example 13.3.2 continued. Trauma Data. For the trauma data, the Bayes factors under our prior are BF21 = 1.05, BF13 = 20.72, and, thus, BF23 = BF21 /BF31 = BF21 BF13 = 1.05(20.72) = 21.83 . There is a suggestion against the complementary log-log model, but there is little to choose from between the logistic and probit models. If the prior odds for the probit versus logit models are 1, the posterior odds are merely 1.05. (These numbers were based on an importance sample of 10,000 observations. Based on only 2000 observations, we got BF21 = 1.97, BF13
13.4 Posterior Computations and Sample Size Calculation
449
from the new posteriors might be needed. When changes in the prior are not dramatic, renormalization of the original Monte Carlo weights might be sufficient. For example, the posterior based on a prior π ∗ (β) is approximated by the discrete distribution taking values β r with probabilities q˜r∗ , where qr /π(β r ) π ∗ (β r )˜ q˜r∗ = t . ∗ k q /π(β k ) k k=1 π (β )˜ Example 13.3.1 continued. O-Ring Data. We used two additional priors to evaluate the sensitivity of our analysis. Each of the priors is a product of independent beta distributions placed at p1 ∼ Beta(.9,.1) and p˜2 ∼ Beta(.1,.9)] τ˜1 = 55 and τ˜2 = 75 degrees. Prior II [˜ places a prior (mean) probability of .9 for O-ring failure at 55 degrees and prior probability of .1 for O-ring failure at 75 degrees, while making the beliefs equivalent to one prior observation. Prior III placed Jeffrey’s “noninformative” Beta(.5,.5) priors on the p˜’s. The posteriors using the original prior and Prior III were similar, whereas the posterior using Prior II was similar to the posterior obtained from the original prior after omitting case 18. Given the small effect of case 18 on our original analysis, we felt that our posterior analysis was not overly sensitive to these changes in the prior. Example 13.3.2 continued. Trauma Data. To examine sensitivity to the prior specifications, we considered case deletions of the “prior observations.” In Figure 13.12 we present plots of p(y = 1|Y, xj , Y˜ ) − p(y = 1|Y, xj , Y˜(i) ), where each is a predictive probability of success but based on different prior information. Here, the data are the same and the priors involve case deletion. In Figure 13.10, the data involve case deletion but the priors are the same. Note that Y˜(i) represents partial prior information in the sense of BCJ.
13.4
Posterior Computations and Sample Size Calculation
In recent years, Bayesian analysis has been performed by using numerical integrations (Naylor and Smith, 1982; Smith et al., 1985), by using the analytic Laplace approximation (Leonard,1982; Tierney and Kadane, 1986; Kass et al., 1988), and by using Monte Carlo methods (Zellner and Rossi, 1984; Gelfand and Smith, 1990; Dellaportas and Smith, 1993). See Gelman et al. (1995, Chaps. 9-11) for a nice summary of these methods. We prefer Monte Carlo methods to Laplace approximations in regression problems because when performing many predictions, only a single Monte Carlo sample is necessary to perform all predictions, while the Laplace method
13.4 Posterior Computations and Sample Size Calculation
451
requires a separate analytic approximation for each prediction. We prefer Monte Carlo methods to numerical integration because of their potential to deal with high-dimensional problems. Monte Carlo methods provide a discrete approximation to the posterior distribution. We discuss a variant of importance sampling that is especially simple when used with a DAP. In importance sampling, one chooses a density function g(β) that is similar in shape to the known kernel of the posterior L(β|Y )π(β) with tails that do not decay more rapidly than the tails of the posterior. Then sample β 1 , . . . , β t from the distribution with density g(β). For r = 1, . . . , t, compute the weights qr = q(β r ) = and q˜r = qr
L(β r |Y )π(β r ) g(β r ) t
(1)
qk .
k=1
The discrete approximation to the posterior distribution takes values β r with probability q˜r . Under fairly weak assumptions (cf. Geweke, 1989), the approximation in (13.1.2) has a large sample normal distribution with estimated variance σ ˆh2 =
t
{h(β r ) − θ¯h }2 q˜r
(2)
r=1
where θ̄_h is the approximation from (13.1.2). The variance of θ̄_h depends critically on the tails of g(β) through the weight function q(β) of (1). Geweke (1989) concluded that to attain high efficiency across a variety of functions, q(β) should be reasonably constant with small tails. If the tails of the importance function were allowed to decrease much more rapidly than the tails of the posterior density, the normalized weights q̃_r could be dominated by individual importance samples in the tail of the approximate posterior. This needlessly inflates the variance of θ̄_h. Similar difficulties can arise with any renormalization of the weights for dealing with case deletions or different priors.

A natural choice for the importance density g(β) is a multivariate Student's t density with v degrees of freedom, with location equal to the posterior mode β_M, and dispersion proportional to Σ(β_M), the asymptotic posterior covariance matrix evaluated at the mode. The approximate N(β_M, Σ(β_M)) posterior density is an alternative possibility, but the thin tails of the normal often cause problems; see Zellner and Rossi (1984). Johnson (1987) gives simple algorithms for generating the multivariate normal and t(v) distributions.

Prior to selecting the importance sampling density g(β), plot the kernel of π(β|Y) along the asymptotic principal component directions and choose
the degrees of freedom v so that the tails of g(β) are at least as heavy as those of π(β|Y). Specifically, with Σ(β_M) = TT', where the columns of T are orthogonal, plot g(β_M + δT e_i) as a function of δ in each of the unit directions e_i, i = 1, ..., k, and similarly for the kernel of the posterior. (e_i is a vector of 0s except for a 1 in the ith place.) In cases of extreme asymmetry along these directions, we recommend sampling from split-t distributions. The split-t distributions allow for different tail heights in each direction, in addition to asymmetry about the mode; see Geweke (1989) for details.

Figure 13.13 gives a plot of the posterior kernel and the normal, t(6), and split-t(6) densities in the direction of the first principal component for the O-ring data. Each function was normalized to have a maximum value of 1. The plot of the posterior reflects the skewness seen in Figure 13.2. The normal density is inferior to the t(6) density as an importance function because the normal underestimates the posterior upper tail in this direction. The weights q(β) in this direction at 3, 4, and 4.5 standard deviations above zero are 5, 40, and 150 times greater than the weight at zero for the normal density. For the t(6) density, this ratio is below 3. The corresponding plot along the second principal component was similar, with the exception that the posterior is skewed to the left. The split-t(6) density has heavier tails than the posterior in each direction and reproduces the shape in the center of the posterior. We concluded that the split-t(6) is best among the three importance functions, with the t(6) a close second. Heavier tails on the t distribution could have been obtained by reducing the degrees of freedom, but this was unnecessary. The posterior summaries based on both t(6) and split-t(6) sampling were obtained; they were similar.

For the O-ring data, we decided on the importance sample size by first generating a pilot study of 500 samples. Prediction was a primary goal. We decided that the estimates for the probability of O-ring failure F(x'β) and success 1 − F(x'β) at the 23 observed lift-off temperatures must be accurate. The maximum coefficient of variation across estimates under our prior was 4.4%. To reduce this to a target value of 2%, the sample size needed to be increased by a factor of (2.2)² = 4.84, to approximately 2500. We decided to sample 5000 observations. The estimated maximum coefficient of variation for the parameters of interest based on 5000 samples was 1.4%. Similar methods were used for the trauma data, with a pilot study of 2500 samples and a total sample of 10,000 from a split-t(6).

We noted earlier that β_M and Σ(β_M) are easily computed using standard software when the prior is a DAP. An interesting special case is the improper prior π(β) = 1, where β_M is the MLE β̂_ml based on the original data and Σ(β_M)⁻¹ is the observed Fisher information evaluated at β̂_ml. For non-DAP priors, the posterior mode β_M must be computed using specialized software for numerical maximization. Typically, β_M is the solution to S(β) = 0, where S(β) is the vector of partial derivatives of the log of the posterior kernel, i.e., log{L(β|Y)π(β)}. The inverse of minus one times
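A compact sketch may help make the importance sampling recipe of equations (1) and (2) concrete. Everything specific below is an illustrative assumption: the data are simulated rather than the O-ring data, the prior is taken to be flat so that the posterior kernel is just the likelihood, and the BFGS inverse Hessian is used as a crude stand-in for Σ(β_M). The t(6) proposal centered at the posterior mode follows the recommendation above; the split-t refinement and the principal component plots are omitted.

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# Simulated data standing in for the O-ring data (x = temperature, y = 0/1 failure).
x = rng.uniform(53.0, 81.0, size=40)
X = np.column_stack([np.ones_like(x), x])
y = rng.binomial(1, stats.logistic.cdf(5.0 - 0.08 * x))

def log_post_kernel(beta):
    # log L(beta|Y) + log pi(beta) for logistic regression with a flat prior pi(beta) = 1.
    eta = X @ beta
    return float(np.sum(y * eta - np.logaddexp(0.0, eta)))

# Posterior mode; the BFGS inverse Hessian serves as a rough stand-in for Sigma(beta_M).
fit = optimize.minimize(lambda b: -log_post_kernel(b), x0=np.zeros(2))
beta_M, Sigma_M = fit.x, fit.hess_inv

# Importance sample from a multivariate t(6) located at the mode.
proposal = stats.multivariate_t(loc=beta_M, shape=Sigma_M, df=6)
draws = proposal.rvs(size=5000, random_state=rng)
log_q = np.array([log_post_kernel(b) for b in draws]) - proposal.logpdf(draws)  # eq. (1), log scale
q_tilde = np.exp(log_q - log_q.max())
q_tilde /= q_tilde.sum()                                   # normalized weights q~_r

# Estimate h(beta) = P(failure at 31 degrees) and the variance estimate of eq. (2).
h = stats.logistic.cdf(draws @ np.array([1.0, 31.0]))
theta_h = np.sum(h * q_tilde)
sigma2_h = np.sum((h - theta_h) ** 2 * q_tilde)
print(theta_h, sigma2_h)

# Pilot-based sample size: the coefficient of variation shrinks roughly like 1/sqrt(t),
# so cutting a pilot CV of 4.4% to a 2% target multiplies t by (4.4/2)^2 = 4.84.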
The recent books by Carlin and Louis (1996), Gelman et al. (1995), and Gilks, Richardson and Spiegelhalter (1996) examine a variety of complex modeling problems that can be handled easily using Bayesian methods. Standard references for Bayesian prediction are Aitchison and Dunsmore (1975) and Geisser (1993).
Appendix: Tables
A.1 The Greek Alphabet

TABLE .1. The Greek alphabet

Capital  Small   Name       Capital  Small  Name
A        α       alpha      N        ν      nu
B        β       beta       Ξ        ξ      xi
Γ        γ       gamma      O        o      omicron
∆        δ, ∂    delta      Π        π      pi
E        ϵ, ε    epsilon    P        ρ      rho
Z        ζ       zeta       Σ        σ      sigma
H        η       eta        T        τ      tau
Θ        θ       theta      Υ        υ      upsilon
I        ι       iota       Φ        φ      phi
K        κ       kappa      X        χ      chi
Λ        λ       lambda     Ψ        ψ      psi
M        µ       mu         Ω        ω      omega
A.2 Tables of the χ² Distribution

TABLE .2. Percentiles of the χ² Distribution. Each row below gives one percentile level; the entries in a row correspond, in order, to df = 1, 2, ..., 35.

df     1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
0.80   1.64 3.22 4.64 5.99 7.29 8.56 9.80 11.03 12.24 13.44 14.63 15.81 16.99 18.15 19.31 20.47 21.61 22.76 23.90 25.04 26.17 27.30 28.43 29.55 30.67 31.79 32.91 34.03 35.14 36.25 37.36 38.47 39.57 40.68 41.78
0.90   2.71 4.61 6.25 7.78 9.24 10.65 12.02 13.36 14.68 15.99 17.27 18.55 19.81 21.06 22.31 23.54 24.77 25.99 27.20 28.41 29.61 30.81 32.01 33.20 34.38 35.56 36.74 37.92 39.09 40.26 41.42 42.59 43.75 44.90 46.06
0.95   3.84 5.99 7.81 9.49 11.07 12.59 14.07 15.51 16.92 18.31 19.67 21.03 22.36 23.69 25.00 26.30 27.59 28.87 30.14 31.41 32.67 33.92 35.17 36.41 37.65 38.89 40.11 41.34 42.56 43.77 44.99 46.19 47.40 48.60 49.80
0.975  5.02 7.38 9.35 11.14 12.83 14.45 16.01 17.53 19.02 20.48 21.92 23.34 24.74 26.12 27.49 28.85 30.19 31.53 32.85 34.17 35.48 36.78 38.08 39.36 40.65 41.92 43.19 44.46 45.72 46.98 48.23 49.48 50.73 51.97 53.20
0.98   5.41 7.82 9.84 11.67 13.39 15.03 16.62 18.17 19.68 21.16 22.62 24.05 25.47 26.87 28.26 29.63 30.99 32.35 33.69 35.02 36.34 37.66 38.97 40.27 41.57 42.86 44.14 45.42 46.69 47.96 49.23 50.49 51.74 52.99 54.24
0.99   6.63 9.21 11.35 13.28 15.09 16.81 18.47 20.09 21.67 23.21 24.73 26.22 27.69 29.14 30.58 32.00 33.41 34.81 36.19 37.57 38.93 40.29 41.64 42.98 44.31 45.64 46.96 48.28 49.59 50.89 52.19 53.49 54.77 56.06 57.34
0.995  7.88 10.60 12.84 14.86 16.75 18.55 20.28 21.95 23.59 25.19 26.76 28.30 29.82 31.32 32.80 34.27 35.72 37.16 38.58 40.00 41.40 42.80 44.18 45.56 46.93 48.29 49.65 50.99 52.34 53.67 55.00 56.33 57.65 58.96 60.27
0.999  10.83 13.82 16.27 18.47 20.51 22.46 24.32 26.13 27.88 29.59 31.26 32.91 34.53 36.12 37.70 39.25 40.79 42.31 43.82 45.31 46.80 48.27 49.73 51.18 52.62 54.05 55.48 56.89 58.30 59.70 61.10 62.49 63.87 65.25 66.62

TABLE .3. Percentiles of the χ² Distribution. Each row below gives one percentile level; the entries in a row correspond, in order, to df = 36, 37, ..., 60, 70, 80, 90, 100, 110, 120, 150, 200, 250, 300, 350, 400.

df     36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 70 80 90 100 110 120 150 200 250 300 350 400
0.80   42.88 43.98 45.08 46.17 47.27 48.36 49.46 50.55 51.64 52.73 53.82 54.91 55.99 57.08 58.16 59.25 60.33 61.41 62.50 63.58 64.66 65.74 66.82 67.90 68.97 79.72 90.41 101.05 111.67 122.25 132.81 164.35 216.61 268.60 320.40 372.05 423.59
0.90   47.21 48.36 49.51 50.66 51.81 52.95 54.09 55.23 56.37 57.51 58.64 59.77 60.91 62.04 63.17 64.29 65.42 66.55 67.67 68.80 69.92 71.04 72.16 73.28 74.40 85.53 96.58 107.57 118.50 129.39 140.23 172.58 226.02 279.05 331.79 384.31 436.65
0.95   51.00 52.19 53.38 54.57 55.76 56.94 58.12 59.30 60.48 61.66 62.83 64.00 65.17 66.34 67.51 68.67 69.83 70.99 72.15 73.31 74.47 75.62 76.78 77.93 79.08 90.53 101.88 113.15 124.34 135.48 146.57 179.58 234.00 287.88 341.39 394.62 447.63
0.975  54.44 55.67 56.90 58.12 59.34 60.56 61.78 62.99 64.20 65.41 66.62 67.82 69.02 70.22 71.42 72.62 73.81 75.00 76.19 77.38 78.57 79.75 80.93 82.12 83.30 95.02 106.63 118.13 129.56 140.92 152.21 185.80 241.06 295.69 349.87 403.72 457.31
0.98   55.49 56.73 57.97 59.20 60.44 61.67 62.89 64.11 65.34 66.55 67.77 68.99 70.20 71.41 72.61 73.82 75.02 76.22 77.42 78.62 79.81 81.01 82.20 83.39 84.58 96.39 108.07 119.65 131.14 142.56 153.92 187.67 243.19 298.05 352.42 406.45 460.20
0.99   58.62 59.89 61.16 62.43 63.69 64.95 66.21 67.46 68.71 69.96 71.20 72.44 73.68 74.92 76.15 77.39 78.62 79.84 81.07 82.29 83.51 84.73 85.95 87.17 88.38 100.42 112.33 124.11 135.81 147.42 158.95 193.20 249.45 304.95 359.90 414.47 468.71
0.995  61.58 62.89 64.18 65.48 66.77 68.05 69.34 70.62 71.89 73.17 74.44 75.70 76.97 78.23 79.49 80.75 82.00 83.25 84.50 85.75 87.00 88.24 89.47 90.72 91.96 104.21 116.32 128.30 140.18 151.95 163.65 198.35 255.28 311.37 366.83 421.89 476.57
0.999  67.99 69.35 70.71 72.06 73.41 74.75 76.09 77.42 78.75 80.08 81.40 82.72 84.03 85.35 86.66 87.97 89.27 90.57 91.88 93.17 94.47 95.75 97.03 98.34 99.62 112.31 124.84 137.19 149.48 161.59 173.62 209.22 267.62 324.93 381.34 437.43 492.99
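The percentiles in these tables can be reproduced with any routine for χ² quantiles; for instance, a quick illustrative check with scipy:

from scipy import stats

# Spot-check a few table entries: the 100p-th percentile of a chi-squared
# distribution with df degrees of freedom, rounded to two decimals.
for df, p in [(1, 0.95), (10, 0.99), (60, 0.90), (20, 0.995)]:
    print(df, p, round(stats.chi2.ppf(p, df), 2))
# Prints 3.84, 23.21, 74.4, and 40.0, agreeing with the tabled 3.84, 23.21, 74.40, 40.00.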
References
Agresti, Alan (1984). Analysis of Ordinal Categorical Data. New York: John Wiley and Sons. Agresti, Alan (1990). Categorical Data Analysis. New York: John Wiley and Sons. Agresti, Alan (1992). A survey of exact inference for contingency tables, with discussion. Statistical Science, 7, 131-153. Agresti, Alan, Wackerly, Dennis, and Boyett, J.P. (1979). Exact conditional tests for cross-classifications: Approximation of attained significance levels. Psychometrika, 44, 75-88. Aitchison, J. and Dunsmore, I.R. (1975). Statistical Prediction Analysis. Cambridge, MA: Cambridge University Press. Aitkin, Murray (1978). The analysis of unbalanced cross-classifications. Journal of the Royal Statistical Society, Series B, 141, 195-223. Aitkin, Murray (1979). A simultaneous test procedure for contingency tables. Applied Statistics, 28, 233-242. Akaike, Hirotugu (1973). Information theory and an extension of the maximum likelihood principle. In Proceedings of the 2nd International Symposium on Information, edited by B.N. Petrov and F. Czaki. Budapest: Akademiai Kiado.
Anderson, E.B. (1980). Discrete Statistical Models with Social Science Application. Amsterdam: North-Holland. Anderson, E.B. (1991). The Statistical Analysis of Categorical Data, Second Edition. Berlin: Springer-Verlag Anderson, Erling B. (1992). Diagnostics in categorical data analysis. Journal of the Royal Statistical Society, Series B, 54, 781-791. Anderson, J.A. (1972). Separate sample logistic discrimination. Biometrika, 59, 19-35. Anderson, J.A. and Blair, V. (1982). Penalized maximum likelihood estimation in logistic regression and discrimination. Biometrika, 69, 123-136. Anderson, T.W. (1984). An Introduction to Multivariate Statistical Analysis, Second Edition. New York: John Wiley and Sons. Arnold, Steven F. (1981). The Theory of Linear Models and Multivariate Analysis. New York: John Wiley and Sons. Asmussen, S. and Edwards, D.G. (1983). Collapsibility and response variables in contingency tables. Biometrika, 70, 567-578. Balmer, D.W. (1988). Recursive enumeration of r × c tables for exact likelihood evaluation. Applied Statistics, 37, 290-301. Bartle, Robert G. (1964). The Elements of Real Analysis. New York: John Wiley and Sons. Bartlett, M.S. (1935). Contingency table interactions. Journal of the Royal Statistical Society, Supplement, 2, 248-252. Bedrick, Edward J. (1983). Adjusted chi–squared tests for cross-classified tables of survey data. Biometrika, 70, 591-595. Bedrick, E. J., Christensen, R., and Johnson, W. (1996). A new perspective on priors for generalized linear models. Journal of the American Statistical Association, 91, 1450-1460. Bedrick, E. J., Christensen, R., and Johnson, W. (1997). Bayesian binomial regression. The American Statistician, in press. Bedrick, Edward J. and Hill, Joe R. (1990). Outlier tests for logistic regression: a conditional approach. Biometrika, 77, 815-827. Belsley, D.A. (1991). Collinearity Diagnostics: Collinearity and Weak Data in Regression. New York: John Wiley and Sons. Belsley, D.A., Kuh, E., and Welsch, R.E. (1980). Regression Diagnostics. New York: John Wiley and Sons.
Benedetti, Jacqueline K. and Brown, Morton B. (1978). Strategies for the selection of log-linear models. Biometrics, 34, 680-686. Berge, C. (1973). Graphs and Hypergraphs. Amsterdam: North-Holland. Berger, James O. (1985). Statistical Decision Theory and Bayesian Analysis, Second Edition. New York: Springer-Verlag Berger, James O. and Wolpert, Robert (1984). The Likelihood Principle. Hayward, CA: Institute of Mathematical Statistics Monograph Series. Berkson, Joseph (1978). In dispraise of the exact test. Journal of Statistical Planning and Inference, 2, 27-42. Bickel, P.J., Hammel, E.A., and O’Conner, J.W. (1975). Sex bias in graduate admissions: data from Berkeley. Science, 187, 398-404. Binder, D.A., Gratton, M., Hidiroglou, M.A., Kumar, S., and Rao, J.N.K. (1984). Analysis of categorical data from surveys with complex designs: Some Canadian experiences. Survey Methodology, 10, 141-156. Birch, M.W. (1963). Maximum likelihood in three-way contingency tables. Journal of the Royal Statistical Society, Series B, 25, 220-233. Bishop, Yvonne M.M., Fienberg, Steven E., and Holland, Paul W. (1975). Discrete Multivariate Analysis. Cambridge, MA: MIT Press. Bortkiewicz, L. von (1898). Das Gesetz der Kleinen Zahlen. Leipzig: Teubner. Box, G.E.P. (1980). Sampling and Bayes’ inference in scientific modelling and robustness. Journal of the Royal Statistical Society, Series A, 143, 383-404. Boyett, James M. (1979). Random R × C tables with given row and column totals. Applied Statistics, 29, 329-332. Bradley, R.A. and Terry, M.E. (1952). The rank analysis of incomplete block designs. I. The method of paired comparisons. Biometrics, 39, 324345. Breslow, N.E. and Day, N.E. (1980). Statistical Methods in Cancer Research. Lyon: International Agency for Research on Cancer. Brier, Stephen S. (1980). Analysis of contingency tables under cluster sampling. Biometrika, 67, 591-596. Brown, Byron Wm., Jr. (1980). Prediction analyses for binary data. In Biostatistics Casebook, edited by R.G. Miller, Jr. et al. New York: John Wiley and Sons.
Brown, Morton B. (1976). Screening effects in multidimensional contingency tables. Applied Statistics, 25, 37-46. Brunswick, A.F. (1971). Adolescent health, sex, and fertility. American Journal of Public Health, 61, 711-720. Carlin, B. P. and Louis, T. A. (1996). Bayes and Empirical Bayes Methods for Data Analysis, New York: Chapman and Hall. Casella, G. and George, E.I. (1992). Explaining the Gibbs sampler. The American Statistician, 46, 167-174. Chaloner, K. and Brant, R. (1988). A Bayesian approach to outlier detection and residual analysis. Biometrika, 75, 651-660. Cheng, Bing and Titterington, D.M. (1994). Neural networks: A review from a statistical perspective, with discussion. Statistical Science, 9, 2-30. Christensen, Ronald (1987). Plane Answers to Complex Questions: The Theory of Linear Models, First Edition. New York: Springer-Verlag. Christensen, Ronald (1990). Linear Models for Multivariate, Time Series and Spatial Data. New York: Springer-Verlag. Christensen, Ronald (1996a). Analysis of Variance, Design, and Regression: Applied Statistical Methods. London: Chapman and Hall. Christensen, Ronald (1996b). Plane Answers to Complex Questions: The Theory of Linear Models, Second Edition. New York: Springer-Verlag. Christensen, Ronald and Utts, Jessica (1992). Testing for nonadditivity in log-linear and logit models. Journal of Statistical Planning and Inference, 33, 333-343. Chuang, Christy (1983). Multiplicative-interaction logit models for I × J × 2 three-way tables. Communications in Statistics, Theory and Methods, 12, 2871-2885. Clayton, Murray K., Geisser, Seymour, and Jennings, Dennis E. (1986). A comparison of several model selection procedures. In Bayesian Inference and Decision Techniques, edited by P. Goel and A. Zellner. Amsterdam: North-Holland Cook, R.D. (1977). Detection of influential observations in linear regression. Technometrics, 19 15-18. Cook, R.D. and Weisberg, S. (1982). Residuals and Influence in Regression. New York: Chapman and Hall.
Cox, D.R. and Hinkley, D.V. (1974). Theoretical Statistics. London: Chapman and Hall. Cox, D.R. and Snell, E.J.(1989). Analysis of Binary Data, Second Edition. London: Chapman and Hall. Cram´er, Harald (1946). Mathematical Methods of Statistics. Princeton, NJ: Princeton University Press. Cressie, N. and Read, T.R.C. (1984). Multinomial goodness-of-fit tests. Journal of the Royal Statistical Society, Series B, 46, 440-464. Dalal, S.R., Fowlkes, E.B., and Hoadley, B. (1989) Risk analysis of the space shuttle: Pre-Challenger prediction of failure. Journal of the American Statistical Association, 84, 945-957. Darroch, J.N., Lauritzen, S.L., and Speed, T.P. (1980). Markov fields and log-linear interaction models for contingency tables. Annals of Statistics, 8, 522-539. David, Herbert A. (1988). The Method of Paired Comparisons, Second Edition. New York: Oxford University Press. Davison, A.C. (1988). Approximate conditional inference in generalized linear models. Journal of the Royal Statistical Society, Series B, 50, 445461. del Pino, Guido (1989). The unifying role of iterative generalized least squares in statistical algorithms. Statistical Science, 4, 394-403. Dellaportas, P. and Smith, A.F.M. (1993). Bayesian inference for generalized linear models and proportional hazards via Gibbs sampling. Applied Statistics, 42, 443-459. Deming, W. Edwards and Stephan, Frederick F. (1940). On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. Annals of Mathematical Statistics, 11, 427444. Dixon, Wilfrid J. and Massey, Frank J., Jr. (1983). Introduction to Statistical Analysis. New York: McGraw-Hill. Draper, Norman and Smith, Harry (1981). Applied Regression Analysis, Second Edition. New York: John Wiley and Sons. Duncan, Otis Dudley (1975). Partitioning polytomous variables in multiway contingency analysis. Social Science Research, 4, 167-182.
Duncan, O.D. and McRae, J.A., Jr. (1979). Multiway contingency analysis with a scaled response or factor. In Sociological Methodology. San Francisco: Jossey-Bass. Duncan, O.D., Schuman, H., and Duncan, B. (1973). Social Change in a Metropolitan Community. New York: Russell Sage Foundation. Edwards, David (1995). Introduction to Graphical Modeling. Berlin: Springer-Verlag. Edwards, David and Havranek, Tomas (1985). A fast procedure for model search in multidimensional contingency tables. Biometrika, 72, 339-351. Edwards, David and Kreiner, Sven (1983). The analysis of contingency tables by graphical models. Biometrika, 70, 553-565. Efron, B. (1975). The efficiency of logistic regression compared to normal discriminant analysis. Journal of the American Statistical Association, 70, 892-898. Everitt, B.S. (1977). The Analysis of Contingency Tables. London: Chapman and Hall. Farewell, V.T. (1979). Some results on the estimation of logistic models based on retrospective data. Biometrika, 66, 27-32. Fay, Robert E. (1985). A jackknifed chi–squared test for complex samples. Journal of the American Statistical Association, 80, 148-157. Feigl, P. and Zelen, M. (1965). Estimation of exponential probabilities with concomitant information. Biometrics, 21, 826-838. Fienberg, S.E. (1968). The Estimation of Cell Probabilities in Two-Way Contingency Tables. Ph. D. Thesis, Department of Statistics, Harvard University. Fienberg, Stephen E. (1979). The use of chi–squared statistics for categorical data problems. Journal of the Royal Statistical Society, Series B, 41, 54-64. Fienberg, Stephen E. (1980). The Analysis of Cross-Classified Categorical Data, Second Edition. Cambridge, MA: MIT Press. Fienberg, Stephen E. and Gong, Gail D. (1984). Comment on a paper by Landwehr et al. Journal of the American Statistical Association, 79, 72-77. Fienberg, Stephen E. and Larntz, K. (1976). Loglinear representation for paired and multiple comparisons models. Biometrika, 63, 245-254.
Finney, D.J. (1941). The estimation from individual records of the relationship between dose and quantal response. Biometrika, 34, 320-334. Finney, D.J. (1971). Probit Analysis, Third Edition. Cambridge: Cambridge University Press. Fisher, Ronald A. (1925). Statistical Methods for Research Workers. New York: Hafner Press. Fisher, Ronald A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, 179-188. Freedman, D., Pisani, R., and Purves, R. (1978). Statistics. New York: W. W. Norton. Freeman, M.F. and Tukey, J.W. (1950). Transformations related to the angular and the square root. Annals of Statistics, 21, 607-611. Geisser, Seymour (1977). The inferential use of predictive distributions. In Foundations of Statistical Inference, edited by V.P. Godambe and D.A. Sprott. Toronto: Holt, Rinehart and Winston. Geisser, S. (1982). Aspects of the predictive and estimative approaches to the determination of probabilities. Biometrics Supplement: Current Topics in Biostatistics and Epidemiology, 38, 75-93. Geisser, S. (1993). Predictive Inference: An Introduction. New York: Chapman and Hall. Gelfand, A.E. and Smith, A.F.M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398-409. Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (1995). Bayesian Data Analysis, New York: Chapman and Hall. Geweke, J. (1989). Bayesian Inference in econometric models using Monte Carlo integration. Econometrica, 57, 1317-1339. Gilby, Walter H. (1911). On the significance of the teacher’s appreciation of general intelligence. Biometrika, VII, 79-93. Gilks, W. R., Richardson, S., and Spiegelhalter, D., eds. (1996). Practical Markov Chain Monte Carlo, New York: Chapman and Hall. Glymour, Clark, Scheines, Richard, Spirtes, Peter, and Kelly, Kevin (1987). Discovering Causal Structure: Artificial Intelligence, Philosophy of Science, and Statistical Modeling. New York: Academic.
Gong, Gail (1986). Cross-validation, the jackknife, and the bootstrap: Excess error estimation in forward logistic regression. Journal of the American Statistical Association, 81, 108-113. Good, I.J. (1975). The number of hypotheses of independence for a random vector or a multidimensional contingency table, and the Bell numbers. Iranian Journal of Science and Technology, 4, 77-83. Goodman, L.A. (1970). The multivariate analysis of qualitative data: Interaction among multiple classifications. Journal of the American Statistical Association, 65, 226-256. Goodman, L.A. (1971). Partitioning of chi-square, analysis of marginal contingency tables, and estimation of expected frequencies in multidimensional tables. Journal of the American Statistical Association, 66, 339-344. Goodman, L.A. (1973). The analysis of contingency tables when some variables are posterior to others: A modified path analysis approach. Biometrika, 60, 179-192. Goodman, L.A. (1979). Simple models for the analysis of association in cross-classifications having ordered categories. Journal of the American Statistical Association, 74, 537-552. Goodman, L.A. (1981). Association models and canonical correlation in the analysis of cross-classifications having ordered categories. Journal of the American Statistical Association, 76, 320-334. Grieve, A.P. (1988). A Bayesian approach to the analysis of LD50 experiments. In Bayesian Statistics 3, edited by J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith. Oxford University Press, Oxford, 617-630. Grizzle, James E., Starmer, C. Frank, and Koch, Gary G. (1969). Analysis of categorical data by linear models. Biometrics, 25, 489-504. Grizzle, James E. and Williams, O. Dale (1972). Log linear models and tests of independence for contingency tables. Biometrics, 28, 137-156. Gross, A.J. (1984). A note on ”chi–squared tests with survey data.” Journal of the Royal Statistical Society, Series B, 46, 270-272. Haberman, Shelby J. (1974a). The Analysis of Frequency Data. Chicago: University of Chicago Press. Haberman, Shelby J. (1974b). Loglinear models for frequency tables with ordered classifications. Biometrics, 30, 589-600.
Haberman, Shelby J. (1977). Log-linear models and frequency tables with small expected cell counts. Annals of Statistics, 5, 1148-1169. Haberman, Shelby J. (1978). The Analysis of Qualitative Data, Volume 1. New York: Academic Press. Haberman, Shelby J. (1979). The Analysis of Qualitative Data, Volume 2. New York: Academic Press. Hand, D.J. (1981). Discrimination and Classification. New York: John Wiley and Sons. Havranek, Tomas (1984). A procedure for model search in multidimensional contingency tables. Biometrics, 40, 95-100. Hirji, K.F., Mehta, C.R., and Patel, N.R. (1987). Computing distributions for exact logistic regression. Journal of the American Statistical Association, 82, 1110-1117. Holland, Paul W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81, 945-960 Holt, D., Scott, A.J., and Ewings, P.D. (1980). Chi-squared tests with survey data. Journal of the Royal Statistical Society, Series A, 143, 302320. Hosmer, David W. and Lemeshow, Stanley (1989). Applied Logistic Regression. New York: John Wiley and Sons. Imrey, P.B., Johnson, W.D., and Koch, G.G. (1976). An incomplete contingency table approach to paired comparison experiments. Journal of the American Statistical Association, 71, 614-623. Irwin, J.O. (1949). A note on the subdivision of χ2 into components. Biometrika, 36, 130-134. Jennings, Dennis E. (1986). Outliers and residual distributions in logistic regression. Journal of the American Statistical Association, 81, 987-990. Johnson, D.E. and Graybill, F.A. (1972). An analysis of a two-way model with interaction and no replication. Journal of the American Statistical Association, 67, 862-868. Johnson, M.E. (1987). Multivariate Statistical Simulation. New York: John Wiley and Sons. Johnson, W. (1985). Influence measures for logistic regression: Another point of view. Biometrika, 72, 59-65.
Johnson, W. and Geisser, S. (1982). Assessing the predictive influence of observations. In Statistics and Probability: Essays in Honor of C.R. Rao, edited by G. Kallianpur, P.B. Krishnaiah, and J.K. Ghosh. Amsterdam: North-Holland. Johnson, W. and Geisser, S. (1983). A predictive view of the detection and characterization of influential observations in regression. Journal of the American Statistical Association, 78, 137-144. Johnson, W. and Geisser, S. (1985). Estimative influence measures for the multivariate general linear model. Journal of the Statistical Planning and Inference, 11, 33-56 Kass, R., Tierney, L., and Kadane, J. (1988). Asymptotics in Bayesian computation. In Bayesian Statistics 3, edited by J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A.F.M. Smith. Oxford: Oxford University Press. Kempthorne, Oscar (1979). On dispraise of the exact test: reactions. Journal of Statistical Planning and Inference, 3, 199-213. Kihlberg, J.K., Narragon, E.A., and Campbell, B.J. (1964). Automobile crash injury in relation to car size. Cornell Aero. Lab. Report No. VJ1823-Rll. Kiiveri, H. and Speed, T.P. (1982). Structural analysis of multivariate data: A review. In Sociological Methodology, edited by S. Leinhart. San Francisco: Jossey–Bass. Kiiveri, H., Speed, T.P., and Carlin, J.B. (1984). Recursive causal models. Journal of the Australian Mathematics Society, 36, 30-52. Kish (1965). Survey Sampling. New York: John Wiley and Sons. Koch, G.G., Freeman, D.H., and Freeman, J.L. (1975). Strategies in the multivariate analysis of data from complex surveys. International Statistical Review, 43, 59-78. Koch, Gary G., Imrey, Peter B., Freeman, Daniel H., Jr., and Tolley, H. Dennis (1976). The asymptotic covariance structure of estimated parameters from contingency table log-linear models. Proceedings of the 9th International Biometric Conference, vol. 1, 317-336. Koehler, Kenneth J. (1986). Goodness-of-fit tests for log-linear models in sparse contingency tables. Journal of the American Statistical Association, 81, 483-493. Koehler, Kenneth J. and Larntz, Kinley (1980). An assessment of several asymptotic approximations for sparse multinomials. Journal of the American Statistical Association, 75, 336-344.
Kreiner, Sven (1987). Analysis of multidimensional contingency tables by exact conditional tests: Techniques and strategies. Scandinavian Journal of Statistics, 14, 97-112. Lachenbruch, P.A. (1975). Discriminate Analysis. New York: Hafner Press. Lachenbruch, P.A., Sneeringeer, C., and Revo, L.T. (1973). Robustness of the linear and quadratic discriminant function to certain types of nonnormality. Communications in Statistics, 1, 39-57. Lancaster, H.O. (1949). The derivation and partition of χ2 in certain discrete distributions. Biometrika, 36, 117-129. Landwehr, James M., Pregibon, Daryl, and Shoemaker, Anne C. (1984). Graphical methods for assessing logistic regression models. Journal of the American Statistical Association, 79, 61-71. Larntz, Kinley (1978). Small sample comparisons of exact levels for chisquare goodness-of-fit statistics. Journal of the American Statistical Association, 73, 253-263. Lauritzen, Steffen L. (1996). Graphical Models. Oxford: Oxford University Press. Lavine, M. (1991). Problems in extrapolation illustrated with pace shuttle o-ring data (with discussion). Journal of the American Statistical Association, 86, 919-921. Lazerwitz, Bernard (1961). A comparison of major United States religious groups. Journal of the American Statistical Association, 56, 568579. Lee, Elisa T. (1980). Statistical Methods for Survival Data Analysis. Belmont, CA: Lifetime Learning Publications. Lehmann, E.L. (1986). Testing Statistical Hypotheses, Second Edition. New York: John Wiley and Sons. Leonard, T. (1972). Bayesian methods for binomial data. Biometrika, 59, 581-589. Leonard, T. (1982). Comment on a paper by Lejeune and Faulkenberry. Journal of the American Statistical Association, 77, 657-658. Lindgren, B.W. (1993). Statistical Theory, Fourth Edition. New York: Macmillan.
McCullagh, P. (1986). The conditional distribution of goodness-of-fit statistics for discrete data. Journal of the American Statistical Association, 81, 104-107. McCullagh, P. and Nelder, J.A. (1989). Generalized Linear Models, Second Edition. London: Chapman and Hall. McLachlan, G. (1992). Discriminant Analysis and Statistical Pattern Recognition. New York: John Wiley and Sons. McNemar, Q. (1947). Note on the sampling error of differences between correlated proportions or percentages. Psychometrika, 12, 153-157. Mandel, J. (1961). Nonadditivity in two-way analysis of variance. Journal of the American Statistical Association, 56, 878-888. Mandel, J. (1971). A new analysis of variance model for nonadditive data. Technometrics, 13, 1-18. Mantel, N. and Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22, 719-748. Martz, H. F. and Zimmer, W. J. (1992). The risk of catastrophic failure of the solid rocket boosters on the space shuttle. The American Statistician, 46, 42-47. Mehta, C.R. (1994). The exact analysis of contingency tables in medical research. Statistical Methods in Medical Research, 3, 135-156. Mehta, C.R. and Patel, N.R. (1980). A network algorithm for the exact treatment of the 2 × k contingency table. Communications in Statistics– Part B, 9, 649-664. Mehta, C.R. and Patel, N.R. (1983). A network algorithm for performing Fisher’s exact test in r × c contingency tables. Journal of the American Statistical Association, 78, 427-434. Mehta, C.R., Patel, N.R., and Gray, R. (1983). Computing an exact confidence interval for the common odds ratio in several 2×2 contingency tables. Journal of the American Statistical Association, 80, 969-973. Mehta, C.R., Patel, N.R., and Tsiatis, A.A. (1984). Exact significance testing to establish treatment equivalence with ordered categorical data. Biometrics, 40, 819-825. Meyer, Michael M. (1982). Transforming contingency tables. Annals of Statistics, 10, 1172-1181.
Milliken, G.A. and Graybill, F.A. (1970). Extensions of the general linear hypothesis model. Journal of the American Statistical Association, 65, 797-807. Mosteller, Frederick, and Tukey, John W. (1977). Data Analysis and Regression. Reading, MA: Addison-Wesley. Naylor, J.C. and Smith, A.F.M. (1982). Applications of a method for the efficient computation of posterior distributions. Applied Statistics, 31, 214-225. Nelder, J.A. and Wedderburn, R.W.M. (1972). Generalized linear models. Journal of the Royal Statistical Society, Series A, 135, 370-384. Nordberg, Lennart (1981). Stepwise selection of explanatory variables in the binary logit model. Scandinavian Journal of Statistics, 8, 17-26. Nordberg, Lennart (1982). On variable selection in generalized linear and related regression models. Communications in Statistics, A, 11, 24272449. O’Neill, Terence J. (1994). The bias of estimating equations with application to the error rate in logistic discrimination. Journal of the American Statistical Association, 89, 1492-1498. Pagano, M. and Taylor-Halvorson, K. (1981). An algorithm for finding the exact significance levels of r × c contingency tables. Journal of the American Statistical Association, 76, 931-934. Patefield, W.M. (1981). An efficient method of generating random R × C tables with given row and column totals. Applied Statistics, 30, 91-97. Pettit, L.I. and Smith, A.F.M. (1985). Outliers and influential observations in linear models. Bayesian Statistics, 2, Edited by J.M. Bernardo. Amsterdam: North-Holland, 473-494. Plackett, R.L. (1981). The Analysis of Categorical Data, Second Edition. London: Griffin. Pregibon, Daryl (1981). Logistic regression diagnostics. Annals of Statistics, 9, 705-724. Prentice, R.L. and Pyke, R. (1979). Logistic disease incidence models and case-control studies. Biometrika, 66, 403-411. Press, S. James (1984). Applied Multivariate Analysis, Second Edition. New York: Holt, Rinehart and Winston.
Press, S. James and Wilson, Sandra (1978). Choosing between logistic regression and discriminant analysis. Journal of the American Statistical Association, 73, 699-705. Quine, S. (1975). Achievement Orientation of Aboriginal and White Australian Adolescents. Ph. D. Thesis, Australian National University. Radelet, M. (1981). Racial characteristics and the imposition of the death penalty. American Sociological Review, 46, 918-927. Rao, C. Radhakrishna (1965). Linear Statistical Inference and Its Applications. New York: John Wiley and Sons. Rao, C. Radhakrishna (1973). Linear Statistical Inference and Its Applications, Second Edition. New York: John Wiley and Sons. Rao, J.N.K. and Scott, A.J. (1981). The analysis of categorical data from complex survey samples: Chi-squared tests for goodness-of-fit and independence in two-way tables. Journal of the American Statistical Association, 76, 221-230. Rao, J.N.K. and Scott, A.J. (1984). On chi-squared tests for multiway contingency tables with cell proportions estimated from survey data. Annals of Statistics, 12, 46-60. Rao, J.N.K. and Scott, A.J. (1987). On simple adjustments to chi-squared tests with sample survey data. Annals of Statistics, 15, 385-397. Read, Timothy R.C. and Cressie, Noel A.C. (1988). Goodness-of-Fit Statistics for Discrete Multivariate Data. New York: Springer-Verlag. Reiss, I.L., Banwart, A., and Foreman, H. (1975). Premarital contraceptive usage: A study and some theoretical explorations. Journal of Marriage and the Family, 37, 619-630. Rosenberg, M. (1962). Test factor standardization as method interpretation. Social Forces, 41(October), 53-61. Rubin, D.B. (1987). The SIR algorithm – A discussion of Tanner and Wong’s: The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82, 543-546. Rubin, D.B. (1988). Bayesianly justifiable and relevant frequency calculations for the applied statistician. Annals of Statistics, 12, 1151-1172. Santner, Thomas J. and Duffy, Diane E. (1989). The Statistical Analysis of Discrete Data. New York: Springer-Verlag. Schwarz, Gideon (1978). Estimating the dimension of a model. Annals of Statistics, 16, 461-464.
Seber, G.A.F. (1984). Multivariate Observations. New York: John Wiley and Sons. Simonoff, Jeffrey S. (1983). A penalty function approach to smoothing large sparse multinomials contingency tables. Annals of Statistics, 11, 208-218. Simonoff, Jeffrey S. (1985). An improved goodness-of-fit statistic for sparse multinomials. Journal of the American Statistical Association, 80, 671-677. Simonoff, Jeffrey S. (1986). Jackknifing and bootstrapping goodness-offit statistics in sparse multinomials. Journal of the American Statistical Association, 81, 1005-1012. Skinner, C.J., Holt, D., and Smith, T.M.F. (1989). Analysis of Complex Surveys. New York: John Wiley and Sons. Smith, A.F.M. and Gelfand, A.E. (1992). Bayesian statistics without tears: A sampling-resampling perspective. The American Statistician, 46, 84-88. Smith, A.F.M., Skene, A.M., Shaw, J.E.H., Naylor, J.C., and Dransfield, M. (1985). The implementation of the Bayesian paradigm. Communications in Statistics – Theory and Methods, 14, 1079-1102. Thomas, D. Roland and Rao, J.N.K. (1987). Small-sample comparisons of level and power for simple goodness-of-fit statistics under cluster sampling. Journal of the American Statistical Association, 82, 630-636. Thomas, William and Cook, R. Dennis (1989). Assessing influence on regression coefficients in generalized linear models. Biometrika, 76, 741749. Thomas, William and Cook, R. Dennis (1990). Assessing influence on predictions for generalized linear models. Technometrics, 32, 59-65. Tierney, L. and Kadane, J. (1986). Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81, 82-86. Tsiatis, Anasatasios A. (1980). A note on a goodness-of-fit test for the logistic regression model. Biometrika, 67, 250-251. Tsutakawa, R.K. (1975). Bayesian inference for bioassay. Technical Report No. 52. Department of Statistics, Univ. of Missouri – Columbia. Tsutakawa, R.K. and Lin, H.Y. (1986). Bayesian estimation of item response curves. Psychometrika, 51, 251-267.
Tukey, J.W. (1949). One degree of freedom for nonadditivity. Biometrics, 5, 232-242. Waite, H. (1911). The teacher’s estimation of the general intelligence of school children. Biometrika, VII, 79-93. Weisberg, Sanford (1975). Inference for tables of counts: Contingency tables. Unpublished Stat 5021 Handout. School of Statistics, University of Minnesota, Minneapolis. Weisberg, Sanford (1985). Applied Linear Regression, Second Edition. New York: John Wiley and Sons. Wermuth, Nanny (1976). Model search among multiplicative models. Biometrics, 32, 253-263. Wermuth, Nanny and Lauritzen, Steffen L. (1983). Graphical and recursive models for contingency tables. Biometrika, 70, 537-552. Whittaker, Joe (1990). Graphical Models in Applied Multivariate Statistics. New York: John Wiley and Sons. Wing, J.K. (1962). Institutionalism in mental hospitals. British Journal of Social and Clinical Psychology, 1, 38-51. Woodward, G. Lange, S.W., Nelson, K.W., and Calvert, H.O. (1941). The acute oral toxicity of acetic, chloracetic, dichloracetic and trichloracetic acids. Journal of Industrial Hygiene and Toxicology, 23, 78-81. Yule, G.U. (1900). On the association of attributes in statistics: With illustration from the material of the childhood society, etc. Philosophical Transactions of the Royal Society, Series A, 194, 257-319. Zellner, A. and Rossi, P.E. (1984). Bayesian analysis of dichotomous quantal response models. Journal of Econometrics, 25, 365-393. Zelterman, Daniel (1987). Goodness-of-fit statistics for large sparse multinomial distributions. Journal of the American Statistical Association, 82, 624-629.
Author Index
Agresti, A., 1, 68, 103 Aitchison, J., 160, 454 Aitkin, M., 211, 234, 239 Akaike, H., 104, 106 Anderson, E.B., 1, 249, 271, 360 Anderson, J.A., 391 Anderson, T.W., 160 Arnold, S.F., 386 Asmussen, S., 117, 159, 196, 256
Boyett, J.M., 103 Bradley, R.A., 296 Brant, R., 447 Breslow, N.E., 170, 391 Brier, S.S., 103 Brown, B.W. Jr., 293 Brown, M.B., 217 Brunswick, A.F., 279
Balmer, D.W., 102 Banwart, A., 9 Bartle, R.G., 419 Bartlett, M.S., 83 Bedrick, E.J., 102, 104, 182, 422, 430, 434 Belsley, D.A., 441 Berge, C., 182 Berger, J.O., 46, 453 Berkson, J., 103 Bickel, P.J., 61 Binder, D.A., 104 Birch, M.W., 205 Bishop, Y.M.M., ix, 1, 66, 362, 396 Blair, V., 391 Bortkiewicz, L. von, 18 Box, G.E.P., 423, 440, 446
Calvert, H.O., 175 Campbell, B.J., 83 Carlin, B.P., 454 Carlin, J.B., 117, 192, 204, 205, 449, 454 Casella, G., 453 Chaloner, K., 447 Cheng, B., 160 Christensen, R., viii, x, 1, 23, 64, 108, 129, 136, 160, 167, 273, 276, 298, 304, 308, 312, 329, 352, 357, 358, 360, 364, 383, 385, 413, 415, 417, 422, 430, 434 Chuang, C., 269, 271, 275, 276 Clayton, M.K., 104, 107 Cook, R.D., 131, 171, 249, 360, 441
Cox, D.R., 1, 43, 192, 239, 297, 305, 323 Cox, T., 160 Cram´er, H., 62 Cressie, N., 1, 66 Dalal, S.R., 54 Darroch, J.N., 187, 192 David, H.A., 296 Davison, A.C., 103 Day, N.E., 170, 391 del Pino, G., 311 Dellaportas, P., 449, 453 Deming, W.E., 87 Dixon, W.J., 120 Dransfield, M., 449 Draper, N., 136 Duffy D.E., ix, 1, 46, 312 Duncan, B., 277 Duncan, O.D., 277, 282, 284 Dunsmore, I.R., 160, 454 Edwards, D.G., 117, 159, 182, 192, 196, 242, 245, 250 Efron, B., 160 Everitt, B.S., ix, 1, 69 Ewings, P.D., 104 Farewell, V.T., 391 Fay, R.E., 104 Feigl, P., 171 Ferry, G., 160 Fienberg, S.E., ix, 1, 9, 45, 66, 83, 103, 117, 129, 205, 271, 273, 279, 296, 362, 396 Finney, D.J., 173, 175 Fisher, R.A., 18, 20, 388 Foreman, H., 9 Fowlkes, E.B., 54 Freedman, D., 61 Freeman, D.H. Jr., 103, 349, 351 Freeman, J.L., 103 Freeman, M.F., 66 Geisser, S., 104, 107, 167, 423, 435, 441, 454 Gelfand, A.E., 449, 453 Gelman, A., 449, 454 George, E.I., 453
Geweke, J., 451 Gilby, W.H., 62 Gilks, W.R., 454 Glymour, C., 117 Gong, G., 129, 167 Good, I.J., 245 Goodman, L.A., 117, 182, 192, 205, 271 Gratton, M., 104 Gray, R., 102 Graybill, F.A., 271, 273 Grieve, A.P., 434 Grizzle, J.E., x, 279, 347, 375, 414 Gross, A.J., 104 Haberman, S.J., ix, 1, 102, 170, 182, 192, 273, 320, 360, 363, 378, 391, 412 Haenszel, W., 115 Hammel, E.A., 61 Hand, D.J., 160 Havranek, T., 242, 245 Hidiroglou, M.A., 104 Hill, J.R., 102 Hinkley, D.V., 43, 192, 297, 305, 323 Hirji, K.F., 102 Hoadley, B., 54 Holland, P.W., ix, 1, 66, 117, 362, 396 Holt, D., 103, 104 Hosmer, D.W., 1 Imrey, P.B., 295, 348, 351 Irwin, J.O., 64, 294 Jennings, D.E., 104, 107, 128 Johnson, D.E., 271 Johnson, M.E., 451 Johnson, W., 131, 171, 422, 423, 430, 434, 441, 443 Johnson, W.D., 295 Kadane, J., 449 Kass, R., 449 Kelly, K., 117 Kempthorne, O., 103 Kihlberg, J.K., 83 Kiiveri, H., 117, 192, 204, 205
Patefield, W.M., 103 Patel, N.R., 102 Pettit, L.I., 441 Pisani, R., 61 Plackett, R.L., 1, 102 Pregibon, D., 129, 131, 133, 140, 173, 441 Prentice, R.L., 391 Press, S.J., 160 Purves, R., 61 Pyke, R., 391 Quine, S., 312 Radelet, M., 113 Rao, C.R., 160, 273, 308, 360 Rao, J.N.K., 104 Read, T.R.C., 1, 66 Reiss, I.L., 9 Revo, L.T., 160 Richardson, S., 454 Rosenberg, M., 275 Rossi, P.E., 423, 424, 449, 451 Rubin, D.B., 438, 447, 449, 454
O’Conner, J.W., 61 O’Neill, T.J., 160
Santner, T.J., ix, 1, 46, 312 Scheines, R., 117 Schuman, H., 277 Schwarz, G., 104 Scott, A.J., 103, 104 Seber, G.A.F., 388 Shaw, J.E.H., 449 Shoemaker, A.C., 129 Simonoff, J.S., 378 Skene, A.M., 449 Skinner, C.J., 104 Smith, A.F.M., 441, 449, 453 Smith, H., 136 Smith, T.M.F., 104 Sneeringeer, C., 160 Snell, E.J., 1 Speed, T.P., 117, 187, 192, 204, 205 Spiegelhalter, D., 454 Spirtes, P., 117 Starmer, C.F., x, 347, 375, 414 Stephan, F.F., 87 Stern, H.S., 449, 454
Pagano, M., 102
Taylor-Halvorson, K., 102
McCullagh, P., 102, 297, 304, 306, 312 McLachlan, G., 160 McNemar, Q., 67 McRae, J.A. Jr., 277 Mandel, J., 271 Mantel, N., 115 Martz, H.F., 54 Massey, F.J. Jr., 120 Mehta, C.R., 102, 103 Meyer, M.M., 87 Milliken, G.A., 273 Mosteller, F., 173 Narragon, E.A., 83 Naylor, J.C., 449 Nelder, J.A., 297, 304, 306, 312 Nelson, K.W., 175 Nordberg, L., 137
Terry, M.E., 296 Thomas, D.R., 104 Thomas, W., 249, 360 Tierney, L., 449 Titterington, D.M., 160 Tolley, H.D., 349, 351 Tsiatis, A.A., 102, 129 Tsutakawa, R.K., 434 Tukey, J.W., 66, 173, 271 Utts, J., 273, 276 Wackerly, D., 103 Waite, H., 62, 360 Wedderburn, R.W.M., 297 Weisberg, S., 21, 131, 136, 171 Welsch, R.E., 441 Wermuth, N., 117, 192, 203, 204, 205, 211, 240
Whittaker, J., 182, 192 Williams, O.D., 279 Wilson, S., 160 Wing, J.K., 273 Wolpert, R., 46 Woodward, G., 175 Yule, G.U., 66 Zelen, M., 171 Zellner, A., 423, 424, 449, 451 Zelterman, D., 378 Zimmer, W.J., 54
Subject Index
Abortion opinion data, 110, 152, 195, 225, 268 acyclic hypergraphs, 191 adjusted R2 , 104, 372 adjusted residuals, 248, 357 AIC, 106 Aitkin’s method, 234, 256 Akaike’s information criterion, 106, 122, 136, 137, 372 allocation, 159, 167, 388 analysis of variance, 47, 92, 298 artificial intelligence, 192 association, heterogeneous uniform, 267 homogeneous uniform, 267 linear by linear, 260 marginal, 217 measures of, 65, 68 partial, 217 uniform, 261 asymptotic, 48 conditional distributions, 102 consistency, 323, 341, 380, 405, 406 distributions for MLEs, 323, 341, 380, 405
distributions for tests, 336, 342, 380, 407, 409 auxiliary regression model, 133, 136, 249 Backward elimination, 212, 230 Bartlett’s model, 83 Bayes factors, 447 Bayes theorem, 165, 423 Bayesian methods, 46, 422 binomial distribution, 13, 22, 57, 120, 299, 302, 363, 365, 415, 423 Bonferroni adjustment, 357 box plots, 250, 430 Bradley–Terry model, 295 Canonical link function, 301 case-control data, 119 Cauchy–Schwartz inequality, 381 causal graphs, 201 chain, 186, 187 closed, 188, 189, 190 length, 189 reduced, 190 Chapman data, 120 chord, 189
chordal graphs, 188 classification, 159, 167, 387 clique, 186, 242 closed chain, 188, 189, 190 closed form estimates, 180, 189, 191 cluster sampling, 103 cohort data, 119 collapsing tables, 192 graphical models, 194 column effects model, 262 column space, 319 complementary log-log regression, 423 complete independence, 72 complete set of vertices, 186, 242 complete sufficient statistic, 398 complex sampling, 103 conditional independence, 79, 178, 187, 204 conditional likelihood, 389, 391, 395 conditional probability, 5 conditional tests, 102 configuration >, 202 conformal graphs, 182 continuation ratios, 152, 176 Cook’s distance, 248, 358 computations, 249 one-step approximation, 359 correlated data, 67 correlation, 12 covariance, 12 covariance selection, 192 cross-product ratio; see odds ratios, cross-sectional data, 119 cross-validation, 167 crude standardized residuals, 248, 356 cumulative logits, 152 Data base management, 192 decomposable models, 180, 188, 190, 191, 202, 240 degrees of freedom, adjustments for fixed zeros, 280 adjustments for random zeros, 288 for general models, 339, 342 for incomplete tables, 280 for three dimensional tables, 95
for two dimensional tables, 35, 45 delta method, 361, 395 Deming–Steffan algorithm, 87 dependent variables, 37, 54, 116 deviance, 297, 306 direct causes, 202 directed edges, 197 discrete random variable, 11 discrimination, 163, 388 dispersion parameter, 301 distribution, 11 ED(50), 175, 394 edges, 184 empty cells, 279 endogenous factors, 201 enumeration, 102 error function, 301 estimated expected cell counts, 43, 73, 76, 80, 87 estimation, 92, 304, 306, 318, 323, 365, 417 exogenous factors, 201 expected value, 11, 22 explanatory factors, 37, 116 exploratory data analysis, 358 exponential family, 299 extended likelihood, 393 Factor analysis, 192 factor scores, known, 157, 259, 267 unknown, 269, 275 Fieller’s method, 394 Fisher’s exact test, 64, 102 fitting, iterative proportional, 87 Newton–Raphson, 87, 306, 328, 345, 372, 412 fixed zeros, 279 forward selection, 212, 226 Framingham study, 264 Freeman–Tukey residuals, 66 G2 , 45, 95, 96, 321 gamma distribution, 303 gamma regression, 304 generalized likelihood ratio, 42, 45 generalized linear model, 297, 301
Subject Index generalized Pearson statistic, 307 graphical models, 182, 194 GSK method, 246, 347, 375, 414 Heterogeneous uniform association, 267 hierarchical models, 48, 90 homogeneity, marginal, 68, 362 of proportions, 34 homogeneous uniform association, 267 Huber’s condition, 386 hypergeometric sampling, 102 Importance sampling, 451 incomplete tables, 279 independence, 6, 75 complete, 72 conditional, 79 in graphical models, 187 in recursive causal models, 204 index plots, 250 indirect estimates, 83, 87 initial models, all s factor effects, 215 testing each term last, 218 interaction contrasts, 51, 93 interaction plot, 51, 54 iterative proportional fitting, 87 iteratively reweighted least squares, 87, 306, 328, 345, 346, 372, 412 iteratively reweighted nonlinear least squares, 269, 276 J, 318 Kullback-Leibler divergence, 131, 441 Lancaster–Irwin partitioning, 64, 294 large sparse multinomials, 378 LDα , 438 LD(50), 175, 394 Fieller’s method, 394 length of a chain, 190 leverage, 132, 247, 357
481
likelihood, conditional, 389, 391, 395 function, 42, 58, 72, 318, 398, 399, 423 equations, 305, 319, 372, 400 extended, 389, 391 partial, 389, 391 principle, 46 likelihood ratio statistic, asymptotic distribution, 339, 342, 380, 407 compared with Pearson statistic, 45, 46 definition, 45 differences of, 96 linear predictor, 301 link function, 301, 423, 447 logistic discrimination, 159, 387 logistic distribution, 175, 423 logistic regression, 54, 116, 423 diagnostics, 130, 440 lack of fit, 127 model selection, 122, 136 logistic transformation, 117 logit models, 116, 141, 150, 415 logit transformation, 117 McNemar’s test, 67 Mallow’s Cp , 104, 107, 137 Mandel’s models, 271, 275 Mantel–Haenszel statistic, 114, 170 marginal association, 217 marginal constraints, 73, 76, 81, 83, 260, 262, 319, 400 marginal homogeneity, 68, 362 marginal probabilities, 5 maximal complete sets, 186, 242 maximum likelihood allocation, 393 maximum likelihood estimation, 43 definition, 43 existence, 320, 401 general models, 318 invariance, 43, 323 quantitative factor models, 260, 262 three-way tables, 73, 75, 80, 87 two-way tables, 43, 44 MCMC, 454 measures of association, 65, 68
MLE, 43 model checking, 446 model interpretations, 178 model parameters, 92, 417 inference for, 342 model selection, Aitkin’s method, 234 best subset, 246 criteria, 104, 122, 246, 371 initial models, 215 stepwise methods, 212, 224 Wermuth’s method, 186, 240 Monte Carlo methods, 449, 453 multinomial distribution, 14, 22 multinomial response models, 150, 377, 415 multiple effects, 214 Newton–Raphson algorithm, 87, 306, 328, 345, 346, 372, 412 nodes, 183 nonlinear least squares, 276 normal plot, 251, 357 null space, 379 Odds, 2 odds ratios, 2, 5, 29 as contrasts, 51 association measures based on, 68 asymptotic variance, 30 definition, 2, 5 equality of, 83, 85 relation to independence, 32, 39, 85, relation to u terms, 90 ordered categories, 258 ordinal factor levels, 258 outliers, 130, 247, 354 overdispersion, 312 overparameterization, 49, 91, 109 Paired comparisons, 295 partial association, 217 partial likelihood, 391, 393 partitioning, Lancaster–Irwin, 64, 294 likelihood ratio, 79, 96, 208 polytomous factors, 282
Pearson residuals, 29, 248, 357 Pearson test statistic, 21, 27, 31, 35 asymptotic distribution, 339, 342, 380, 410 compared with G2 , 45, 46, 339, 342, 380, 410 definition, 95, 96 generalized, 307 Poisson distribution, 18, 299, 302, 312, 398 power divergence statistics, 66 predictive probabilities, 434 prior distribution, 423, 424 probit regression, 423 probit transformation, 175, 423 product multinomial, 16, 99, 339, 398 prospective studies, 38, 118, 387 Q, 65, 68 quantitative factor levels, 157, 258 quasi-likelihood, 312 R2 , 104, 126, 372 random variable, 11 random zeros, 286 rankit plot, 251, 357 recursive causal models, 192, 195 regression, 47, 56, 297 residuals, 132, 248, 354 adjusted, 248, 357 asymptotic distribution, 356 box plots, 250 computations, 249 crude standardized, 248, 356 index plots, 250 normal plots, 251, 357 Pearson, 29, 248, 357 standardized, 132, 248, 357 response factors, 37, 54, 116 retrospective studies, 38, 118, 387 row effects model, 263 Saddlepoint methods, 102 sampling models, binomial, 13, 22, 57, 120, 299, 302, 363, 365, 415, 423 cluster, 103 complex, 103
underlying undirected graph, 203 uniform association, 261 heterogeneous, 267 homogeneous, 267 unknown factor scores, 269, 275 Variance, 12, 22 variance function, 302 vertices, 183, 184 Weighted least squares, 246, 347, 375, 412 Wermuth’s method, 186, 240 Yule’s Q, 65, 68 Zeros, fixed, 279 random, 286