Student Workbook with Solutions
Heecheon You
Pohang University of Science and Technology
Applied Statistics and Probability for Engineers, Third Edition
Douglas C. Montgomery Arizona State University
George C. Runger Arizona State University
JOHN WILEY & SONS, INC.
Cover Image: Norm Christiansen
To order books or for customer service, call 1-800-CALL-WILEY (225-5945).
Copyright © 2003 by John Wiley & Sons, Inc. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, E-Mail: [email protected].
ISBN 0-471-42682-2
Printed in the United States of America.
Printed and bound by Courier Kendallville, Inc.
This book is dedicated to: Jiyeon, Sophia, Jaechang, Sookhyun, Young, and Isik
Preface

Objectives of the Book
As a supplement to Applied Statistics and Probability for Engineers (3rd edition) by Drs. Douglas C. Montgomery and George C. Runger (abbreviated as MR hereinafter), this book is developed to serve as an effective, learner-friendly guide for students who wish to establish a solid understanding of probability and statistics. First, this study guide outlines learning goals and presents detailed information accordingly; this goal-oriented presentation is intended to help the reader see the destination before learning and check his/her achievement after learning. Second, this book helps the reader build both scientific knowledge and computer application skills in probability and statistics. Third, this book provides concise explanations, step-by-step algorithms, comparison tables, graphic charts, daily-life examples, and real-world engineering problems, which are designed to help the reader efficiently learn the concepts and applications of probability and statistics. Lastly, this book covers the essential concepts and techniques of probability and statistics; thus, the reader who is interested in the advanced topics indicated as CD only in MR should refer to the MR CD.
Features of the Book
The reader will enjoy the following learner-friendly features of the book:
Learning Goals: Each section starts with a list of learning goals which states the topics covered and the competency levels targeted.
Organized, Concise Explanation: According to the learning goals defined, topics are presented with concise, clear explanations.
Learning Aids: Tables, charts, and algorithms are provided wherever applicable to help the reader form integrated mental pictures by contrasting or visualizing concepts and techniques.
Easy-to-Understand Examples: For topics requiring problem-solving practice, daily-life examples (which are readily understandable) are used to help the reader better see the application process of statistical techniques.
Engineering-Context Exercises: Together with daily-life examples, engineering problems (selected from exercises in the MR textbook; the original exercise numbers are indicated) are used to have the reader explore the application of statistical techniques to real-world engineering domains.
Complete Solutions of Exercises: At the end of each chapter, complete solutions of exercises are provided so that the reader can check his/her own answers with ease.
Software Demonstration: To help students learn the use of Excel and Minitab (student version) in solving probability and statistics problems, step-by-step directions and screen snapshots are provided for examples as applicable.
Organization of the Book
This book presents various topics of probability and statistics with the structure displayed in the figure on the next page. Chapters 2 through 5 describe probability and probability distributions. After explaining fundamental concepts, rules, and theorems of probability, this book presents various discrete and continuous probability distributions of single and multiple random variables. Then, Chapters 6 through 15 explain descriptive and inferential statistics. Descriptive statistics deals with measures and charts to describe collected data, whereas inferential statistics covers categorical data analysis, statistical inference on single/two samples, regression analysis, design of experiments, and nonparametric analysis.
[Figure: Organization of the book by chapter, including multiple discrete and multiple continuous probability distributions (Ch. 5) and the statistics chapters (Ch. 11 through Ch. 14 among them).]
Acknowledgements I would like to express my considerable gratitude to Jenny Welter for her sincere support, trust, and encouragement in developing this book. I also wish to thank Drs. Douglas C. Montgomery and George C. Runger at Arizona State University for their pleasant partnership and helpful advice in producing this book. Finally, I greatly appreciate the insightful feedback and useful suggestions from the professors and students who reviewed the various parts of the manuscript: Dr. Yasser M. Dessouky at San Jose State University; Major Brian M. Layton at U.S. Military Academy; Dr. Arunkumar Pennathur at University of Texas at El Paso; Dr. Manuel D. Rossetti at University of Arkansas; Dr. Bruce Schmeiser at Purdue University; Dr. Lora Zimmer at Arizona State University; Christi Barb, Shea Kupper, Maria Fernanda Funez-Mejia, Tram Huynh, Usman Javaid, Thuan Mai, Prabaharan Veluswamy, Cheewei Yip, and Ronda Young at Wichita State University. Without their invaluable contributions, this book may not have been completed. Peace be with them all. Heecheon You
Contents

Chapter 1 The Role of Statistics in Engineering
1-1 The Engineering Method and Statistical Thinking
1-2 Collecting Engineering Data
1-3 Mechanistic and Empirical Models
1-4 Probability and Probability Models

Chapter 2 Probability
2-1 Sample Spaces and Events
2-2 Interpretations of Probability
2-3 Addition Rules
2-4 Conditional Probability
2-5 Multiplication and Total Probability Rules
2-6 Independence
2-7 Bayes' Theorem
2-8 Random Variables
Answers to Exercises

Chapter 3 Discrete Random Variables and Probability Distributions
3-1 Discrete Random Variables
3-2 Probability Distributions and Probability Mass Functions
3-3 Cumulative Distribution Functions
3-4 Mean and Variance of a Discrete Random Variable
3-5 Discrete Uniform Distribution
3-6 Binomial Distribution
3-7 Geometric and Negative Binomial Distributions
3-8 Hypergeometric Distribution
3-9 Poisson Distribution
Summary of Discrete Probability Distributions
EXCEL Applications
Answers to Exercises

Chapter 4 Continuous Random Variables and Probability Distributions
4-1 Continuous Random Variables
4-2 Probability Distributions and Probability Density Functions
4-3 Cumulative Distribution Functions
4-4 Mean and Variance of a Continuous Random Variable
4-5 Continuous Uniform Distribution
4-6 Normal Distribution
4-7 Normal Approximation to the Binomial and Poisson Distributions
4-9 Exponential Distribution
4-10 Erlang and Gamma Distributions
4-11 Weibull Distribution
4-12 Lognormal Distribution
Summary of Continuous Probability Distributions
EXCEL Applications
Answers to Exercises

Chapter 5 Joint Probability Distributions
5-1 Two Discrete Random Variables
5-2 Multiple Discrete Random Variables
5-3 Two Continuous Random Variables
5-4 Multiple Continuous Random Variables
5-5 Covariance and Correlation
5-6 Bivariate Normal Distribution
5-7 Linear Combinations of Random Variables
5-9 Moment Generating Functions
5-10 Chebyshev's Inequality
Answers to Exercises

Chapter 6 Random Sampling and Data Description
6-1 Data Summary and Display
6-2 Random Sampling
6-3 Stem-and-Leaf Diagrams
6-4 Frequency Distributions and Histograms
6-5 Box Plots
6-6 Time Sequence Plots
6-7 Probability Plots
MINITAB Applications
Answers to Exercises

Chapter 7 Point Estimation of Parameters
7-1 Introduction
7-2 General Concepts of Point Estimation
7-3 Methods of Point Estimation
7-4 Sampling Distributions
7-5 Sampling Distributions of Means
Answers to Exercises

Chapter 8 Statistical Intervals for a Single Sample
8-1 Introduction
8-2 Confidence Interval on the Mean of a Normal Distribution, Variance Known
8-3 Confidence Interval on the Mean of a Normal Distribution, Variance Unknown
8-4 Confidence Interval on the Variance and Standard Deviation of a Normal Population
8-5 A Large-Sample Confidence Interval for a Population Proportion
8-6 A Prediction Interval for a Future Observation
8-7 Tolerance Intervals for a Normal Distribution
Summary of Statistical Intervals for a Single Sample
MINITAB Applications
Answers to Exercises

Chapter 9 Tests of Hypotheses for a Single Sample
9-1 Hypothesis Testing
9-2 Tests on the Mean of a Normal Distribution, Variance Known
9-3 Tests on the Mean of a Normal Distribution, Variance Unknown
9-4 Hypothesis Tests on the Variance and Standard Deviation of a Normal Population
9-5 Tests on a Population Proportion
9-7 Testing for Goodness of Fit
9-8 Contingency Table Tests
MINITAB Applications
Answers to Exercises

Chapter 10 Statistical Inference for Two Samples
10-2 Inference for a Difference in Means of Two Normal Distributions, Variances Known
10-3 Inference for a Difference in Means of Two Normal Distributions, Variances Unknown
10-4 Paired t-Test
10-5 Inference on the Variances of Two Normal Populations
10-6 Inference on Two Population Proportions
MINITAB Applications
Answers to Exercises

Chapter 11 Simple Linear Regression and Correlation
11-1 Empirical Models
11-2 Simple Linear Regression
11-3 Properties of the Least Squares Estimators
11-4 Some Comments on Uses of Regression
11-5 Hypothesis Tests in Simple Linear Regression
11-6 Confidence Intervals
11-7 Prediction of New Observations
11-8 Adequacy of the Regression Model
11-9 Transformations to a Straight Line
11-11 Correlation
MINITAB Applications
Answers to Exercises

Chapter 12 Multiple Linear Regression
12-1 Multiple Linear Regression Model
12-2 Hypothesis Tests in Multiple Linear Regression
12-3 Confidence Intervals in Multiple Linear Regression
12-4 Prediction of New Observations
12-5 Model Adequacy Checking
12-6 Aspects of Multiple Regression Modeling
MINITAB Applications
Answers to Exercises

Chapter 13 Design and Analysis of Single-Factor Experiments: The Analysis of Variance
13-1 Designing Engineering Experiments
13-2 The Completely Randomized Single-Factor Experiment
13-3 The Random Effects Model
13-4 Randomized Complete Block Design
MINITAB Applications
Answers to Exercises

Chapter 14 Design of Experiments with Several Factors
14-1 Introduction
14-3 Factorial Experiments
14-4 Two-Factor Factorial Experiments
14-5 General Factorial Experiments
14-7 2^k Factorial Designs
14-8 Blocking and Confounding in the 2^k Design
14-9 Fractional Replication of the 2^k Design
MINITAB Applications
Answers to Exercises

Chapter 15 Nonparametric Statistics
15-1 Introduction
15-2 Sign Test
15-3 Wilcoxon Signed-Rank Test
15-4 Wilcoxon Rank-Sum Test
15-5 Nonparametric Methods in the Analysis of Variance
MINITAB Applications
Answers to Exercises
Chapter 1 The Role of Statistics in Engineering
1-1 The Engineering Method and Statistical Thinking
1-2 Collecting Engineering Data
1-3 Mechanistic and Empirical Models
1-4 Probability and Probability Models
1-1 The Engineering Method and Statistical Thinking
- Explain the engineering problem solving process.
- Describe the application of statistics in the engineering problem solving process.
- Distinguish between enumerative and analytic studies.
Engineering Problem Solving Process
Engineers develop practical solutions and techniques for engineering problems by applying scientific principles and methodologies. Existing systems are improved and/or new systems are introduced by the engineering approach for better harmony between humans, systems, and environments. In general, the engineering problem solving process includes the following steps:
1. Problem definition: Describe the problem to solve.
2. Factor identification: Identify primary factors which cause the problem.
3. Model (hypothesis) suggestion: Propose a model (hypothesis) that explains the relationship between the problem and factors.
4. Experiment: Design and run an experiment to test the tentative model.
5. Analysis: Analyze data collected in the experiment.
6. Model modification: Refine the tentative model.
7. Model validation: Validate the engineering model by a follow-up experiment.
8. Conclusion (recommendation): Draw conclusions or make recommendations based on the analysis results.
(Note) Some of these steps may be iterated as necessary.
Application of Statistics
Statistical methods are applied to interpret data with variability. Throughout the engineering problem solving process, engineers often encounter data showing variability, and statistics provides essential tools to deal with the observed variability. Examples of the application of statistics in the engineering problem solving process include, but are not limited to, the following:
1. Summarizing and presenting data: numerical summary and visualization in descriptive statistics (Chapter 6).
2. Inferring the characteristics (mean, median, proportion, and variance) of single/two populations: z, t, and F tests in parametric statistics (Chapters 7-10); sign, signed-rank, and rank-sum tests in nonparametric statistics (Chapter 15).
3. Testing the relationship between variables: correlation analysis (Chapter 11); categorical data analysis (Chapter 9).
4. Modeling the causal relationship between the response and independent variables: regression analysis (Chapters 11-12); analysis of variance (Chapters 13-14).
5. Identifying the sources of variability in a response: analysis of variance (Chapters 13-14).
6. Evaluating the relative importance of factors for the response variable: regression analysis (Chapter 12); analysis of variance (Chapter 14).
7. Designing an efficient, effective experiment: design of experiments (Chapters 13-14).
These applications can lead to the development of general laws and principles, such as Ohm's law, and of design guidelines.
Enumerative vs. Analytic Studies
Two types of studies are defined depending on the use of a sample in statistical inference:
1. Enumerative study: Makes an inference about the well-defined population from which the sample is selected. (e.g.) the defective rate of products in a lot
2. Analytic study: Makes an inference about a future (conceptual) population. (e.g.) the defective rate of products at a production line
1-2 Collecting Engineering Data
- Explain three methods of data collection: retrospective study, observational study, and designed experiment.

Data Collection Methods
Three methods are available for data collection:
1. Retrospective study: Use existing records of the population. Some crucial information may be unavailable, and the validity of the data may be questioned.
2. Observational study: Collect data by observing the population with as little interference as possible. Information about the population under some conditions of interest may be unavailable, and some observations may be contaminated by extraneous variables.
3. Designed experiment: Collect data by observing the population while controlling conditions according to the experiment plan. The findings gain scientific rigor through deliberate control of extraneous variables.
1-3 Mechanistic and Empirical Models
- Explain the difference between mechanistic and empirical models.
Mechanistic vs. Empirical Models
Models (explaining the relationship between variables) can be divided into two categories:
1. Mechanistic model: Established based on the underlying theory, principle, or law of a physical mechanism.
(e.g.) I = E/R + ε (Ohm's law), where I = current, E = voltage, R = resistance, and ε = random error
2. Empirical model: Established based on the experience, observation, or experiment of a system (population) under study.
(e.g.) y = β0 + β1·x + ε, where y = sitting height, x = stature, and ε = random error
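The least-squares fit behind such an empirical model can be sketched in a few lines of code. The snippet below is a minimal supplementary illustration, not part of the workbook (which demonstrates Excel and Minitab); it assumes Python with NumPy, and the stature/sitting-height numbers are hypothetical values used only to show the mechanics.

```python
# Minimal sketch: fit the empirical model y = b0 + b1*x by least squares.
# The data below are hypothetical, for illustration only.
import numpy as np

stature = np.array([160.0, 165.0, 170.0, 175.0, 180.0])    # x (cm)
sitting_height = np.array([85.0, 87.0, 90.0, 92.0, 95.0])  # y (cm)

b1, b0 = np.polyfit(stature, sitting_height, deg=1)  # slope, intercept
print(f"estimated empirical model: y = {b0:.2f} + {b1:.3f} x")
```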
1-4 Probability and Probability Models
- Describe the application of probability in the engineering problem solving process.
Application of Probability
Along with statistics, the concepts and models of probability are applied in the engineering problem solving process for the following:
1. Modeling the stochastic behavior of a system: discrete and continuous probability distributions.
2. Quantifying the risks involved in statistical inference: error probabilities in hypothesis testing.
3. Determining the sample size of an experiment for a designated test condition: sample size selection.
Chapter 2 Probability
2-1 Sample Spaces and Events
2-2 Interpretations of Probability
2-3 Addition Rules
2-4 Conditional Probability
2-5 Multiplication and Total Probability Rules
2-6 Independence
2-7 Bayes' Theorem
2-8 Random Variables
Answers to Exercises
2-1 Sample Spaces and Events
- Explain the terms random experiment, sample space, and event.
- Define the sample space and events of a random experiment.
- Define a new joint event from existing events by using set operations.
- Assess if events are mutually exclusive and/or exhaustive.
- Use a Venn diagram and/or tree diagram to display a sample space and events.
Random Experiment
When different outcomes are obtained in repeated trials, the experiment is called a random experiment. Some sources of the variation in outcomes are controllable and some are not in the random experiment.
(e.g.) Sources of variability in testing the life length of an INFINITY light bulb: (1) material, (2) manufacturing process, (3) production environment (temperature, humidity, etc.), (4) measurement instrument, (5) drift of current, and (6) observer.
Sample Space
A sample space is the set of possible outcomes of a random experiment (denoted as S). Two types of sample space are defined:
1. Discrete Sample Space: Consists of finite (or countably infinite) outcomes. (e.g.) Tossing a coin: S = {head, tail}
2. Continuous Sample Space: Consists of an uncountably infinite set of outcomes. (e.g.) The life length of the INFINITY light bulb (X): S = {x; x ≥ 0}
Event
An event is a subset of the sample space of a random experiment (denoted by E).
Set Operations
Three set operations can be applied to define a new joint event from existing events:
1. Union (E1 ∪ E2): Combines all outcomes of E1 and E2.
2. Intersection (E1 ∩ E2): Includes outcomes that are common to E1 and E2.
3. Complement (E' or Ē): Contains outcomes that are not in E. Note that (E')' = E.

Laws for Set Operations
The following laws are applicable to the set operations:
1. Commutative Law
E1 ∪ E2 = E2 ∪ E1
E1 ∩ E2 = E2 ∩ E1
2. Distributive Law
(E1 ∩ E2) ∪ E3 = (E1 ∪ E3) ∩ (E2 ∪ E3)
(E1 ∪ E2) ∩ E3 = (E1 ∩ E3) ∪ (E2 ∩ E3)
3. DeMorgan's Law
(E1 ∩ E2)' = E1' ∪ E2'
(E1 ∪ E2)' = E1' ∩ E2'

Mutually Exclusive / Exhaustive Events
A collection of events E1, E2, ..., Ek is said to be mutually exclusive (see Figure 2-1) if the events do not have any outcomes in common, i.e.,
Ei ∩ Ej = ∅ for all pairs (i, j), i ≠ j
Next, the collection of events E1, E2, ..., Ek is said to be exhaustive (see Figure 2-1) if the union of the events is equal to S, i.e.,
E1 ∪ E2 ∪ ... ∪ Ek = S
[Figure 2-1 Mutually exclusive and exhaustive events.]

Example 2.1
Two dice are cast at the same time in an experiment.
1. (Sample Space) Define the sample space of the experiment.
☞ The sample space is S = {(x, y); x = 1, ..., 6, y = 1, ..., 6}, where x and y represent the outcomes of the first and second dice. Since there are 6 possible outcomes for each of the first and second dice, the sample space consists of a total of 6 × 6 = 36 outcomes.
2. (Event) Of the sample space, find the pairs whose sum is 5 (E1) and the pairs whose first die is odd (E2).
☞ E1 = {(1, 4), (2, 3), (3, 2), (4, 1)} and E2 = {(x, y); x = 1, 3, 5, y = 1, ..., 6}
Example 2.1 (cont.)
3. (Set Operation) Find E1 ∪ E2, E1 ∩ E2, and E1'.
☞ E1 ∪ E2 = {(x, y); x + y = 5 or x = 1, 3, 5}, E1 ∩ E2 = {(1, 4), (3, 2)}, and E1' = {(x, y); x + y ≠ 5}
4. (Mutually Exclusive Events) Are E1 and E2 mutually exclusive?
☞ No, because E1 ∩ E2 ≠ ∅.
5. (Exhaustive Events) Are E1 and E2 exhaustive?
☞ No, because E1 ∪ E2 ≠ S.
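The two-dice sample space of Example 2.1 is small enough to enumerate directly. The following sketch (a supplementary Python illustration, not part of the workbook) builds S, E1, and E2 as sets and checks the set operations and the mutually-exclusive/exhaustive questions.

```python
# Enumerate the sample space of two dice and the events of Example 2.1.
from itertools import product

S = set(product(range(1, 7), repeat=2))          # 36 outcomes (x, y)
E1 = {(x, y) for (x, y) in S if x + y == 5}      # sum is 5
E2 = {(x, y) for (x, y) in S if x % 2 == 1}      # first die is odd

print(len(S))                                    # 36
print(sorted(E1 & E2))                           # intersection: [(1, 4), (3, 2)]
print(len(E1 | E2))                              # union has 20 outcomes
print("mutually exclusive:", E1.isdisjoint(E2))  # False
print("exhaustive:", (E1 | E2) == S)             # False
```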
Exercise 2.1 (MR 2-33)
The rise times (unit: min.) of a reactor for two batches are measured in an experiment.
1. Define the sample space of the experiment.
2. Define E1 as the event that the reactor rise time of the first batch is less than 55 min. and E2 as the event that the reactor rise time of the second batch is greater than 70 min.
3. Find E1 ∪ E2, E1 ∩ E2, and E1'.
4. Are E1 and E2 mutually exclusive?
5. Are E1 and E2 exhaustive?
Diagrams
Diagrams are often used to display a sample space and events in an experiment: 1. Venn Diagram: A rectangle represents the sample space and circles indicate individual events, as illustrated in Figure 2-2.
[Figure 2-2 Venn diagrams: (a) E1 ∪ E2; (b) E1 ∩ E2; (c) E1'. The shaded areas indicate the results of union, intersection, and complement, respectively.]
2. Tree Diagram: Branches represent possible outcomes (see Figure 2-3). The tree diagram method is useful when the sample space is established through a series of steps or stages.
[Figure 2-3 Tree diagram for the outcomes of tossing three coins at the same time: HHH, HHT, HTH, HTT, THH, THT, TTH, TTT.]
2-2 Interpretations of Probability
- Explain the term probability.
- Determine the probability of an event.

Probability
The probability of an event means the likelihood of the event occurring in a random experiment. If S denotes the sample space and E denotes an event, the following conditions should be met:
(1) P(S) = 1
(2) 0 ≤ P(E) ≤ 1
(3) P(E1 ∪ E2 ∪ E3 ∪ ...) = P(E1) + P(E2) + P(E3) + ..., where the events are mutually exclusive.

Probability of Event
If a sample space includes n outcomes that are equally likely, the probability of each outcome is 1/n. Thus, the probability of an event E consisting of k equally likely outcomes is
P(E) = k × (1/n) = k/n
where: n = number of possible outcomes in S and k = number of equally likely outcomes in E
(Note) For any event E, P(E') = 1 - P(E)
2-3 Addition Rules
- Find the probability of a joint event by using the probabilities of individual events.
Probability of Joint Event
The probability of a joint event can often be calculated by using the probabilities of the individual events involved. The following addition rules can be applied to determine the probability of a joint event when the probabilities of existing events are known:
P(E1 ∪ E2) = P(E1) + P(E2) - P(E1 ∩ E2)
           = P(E1) + P(E2) if E1 ∩ E2 = ∅
P(E1 ∪ E2 ∪ E3) = P(E1) + P(E2) + P(E3) - P(E1 ∩ E2) - P(E1 ∩ E3) - P(E2 ∩ E3) + P(E1 ∩ E2 ∩ E3)
Example 2.2
(Probability of Joint Event) An instructor of a statistics class tells students that the probabilities of earning an A, B, C, and D or below are 1/5, 2/5, 3/10, and 1/10, respectively. Find the probabilities of (1) earning an A or B and (2) earning a B or below.
☞ Let E1, E2, E3, and E4 denote the events of earning an A, B, C, and D or below, respectively. These individual events are mutually exclusive and exhaustive because
P(E1 ∪ E2 ∪ E3 ∪ E4) = P(E1) + P(E2) + P(E3) + P(E4) = 1 = P(S)
(1) The event of earning an A or B is E1 ∪ E2. Therefore,
P(E1 ∪ E2) = P(E1) + P(E2) = 1/5 + 2/5 = 3/5
(2) The event of earning a B or below is E2 ∪ E3 ∪ E4, which is equal to E1'. Therefore,
P(E2 ∪ E3 ∪ E4) = P(E1') = 1 - P(E1) = 1 - 1/5 = 4/5
Exercise 2.2 (MR 2-45)
Test results of scratch resistance and shock resistance for 100 disks of polycarbonate plastic are as follows:

                                Shock Resistance
                                High (B)    Low (B')
Scratch       High (A)          80          9
Resistance    Low (A')          6           5

Let A and A' denote the event that a disk has high scratch resistance and the event that a disk has low scratch resistance, respectively. Let B and B' denote the event that a disk has high shock resistance and the event that a disk has low shock resistance, respectively.
1. When a disk is selected at random, find the probability that both the scratch and shock resistances of the disk are high.
2. When a disk is selected at random, find the probability that the scratch or shock resistance of the disk is high. 3. Consider the event that a disk has high scratch resistance and the event that a disk has high shock resistance. Are these two events mutually exclusive?
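The probabilities asked for in Exercise 2.2 can be checked by summing cells of the 2x2 table. The sketch below (supplementary Python, not from the workbook) computes P(A ∩ B) directly and P(A ∪ B) with the addition rule.

```python
# Joint and union probabilities from the disk-resistance table of Exercise 2.2.
counts = {("high", "high"): 80,   # (scratch, shock) counts
          ("high", "low"): 9,
          ("low", "high"): 6,
          ("low", "low"): 5}
n = sum(counts.values())                                          # 100 disks

p_A = (counts[("high", "high")] + counts[("high", "low")]) / n    # P(A) = 0.89
p_B = (counts[("high", "high")] + counts[("low", "high")]) / n    # P(B) = 0.86
p_A_and_B = counts[("high", "high")] / n                          # P(A and B)
p_A_or_B = p_A + p_B - p_A_and_B                                  # addition rule

print(p_A_and_B, p_A_or_B)   # about 0.80 and 0.95
```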
2-4 Conditional Probability
- Calculate the conditional probability of events.
Conditional Probability
The conditional probability P(B | A) is the probability of an event B, given an event A. The following formula is used to calculate the conditional probability:
P(B | A) = P(A ∩ B)/P(A), where P(A) > 0
Example 2.3
(Conditional Probability) A novel method to screen carpal tunnel syndrome (CTS) at the workplace is tested with two groups of people: 50 workers having CTS and 50 healthy workers without CTS. Let A and A' denote the event that a worker has CTS and the event that a worker does not have CTS, respectively. Let B and B' denote the event that a CTS test is positive and the event that a CTS test is negative, respectively. The summary of the CTS test results is as follows:

                          Test Result
                          Positive (B)    Negative (B')
Group    CTS (A)          40              10
         Healthy (A')     5               45

Find the probability that a CTS test is positive (B) when a worker has CTS (A).
☞ P(B | A) = P(A ∩ B)/P(A) = (40/100)/(50/100) = 4/5 = 0.8
Exercise 2.3
In Exercise 2.2, find the probability that a disk has high scratch resistance (A) when it has high shock resistance (B).
2-5 Multiplication and Total Probability Rules
- Apply the total probability rule to find the probability of an event when the event is partitioned into several mutually exclusive and exhaustive subsets.
Multiplication Rule
From the definition of conditional probability,
P(B ∩ A) = P(B | A)·P(A) = P(A | B)·P(B)

Total Probability Rule
When event B is partitioned into two mutually exclusive subsets B ∩ A and B ∩ A' (see Figure 2-4),
P(B) = P(B ∩ A) + P(B ∩ A') = P(B | A)·P(A) + P(B | A')·P(A')
[Figure 2-4 Partitioning event B into two mutually exclusive subsets: B ∩ A and B ∩ A'.]
As an extension, when event B is partitioned by a collection of k mutually exclusive and exhaustive events E1, E2, ..., Ek,
P(B) = P(B ∩ E1) + P(B ∩ E2) + ... + P(B ∩ Ek)
     = P(B | E1)·P(E1) + P(B | E2)·P(E2) + ... + P(B | Ek)·P(Ek)
[Figure 2-5 Partitioning event B into k mutually exclusive subsets: B ∩ E1, B ∩ E2, ..., B ∩ Ek.]
Example 2.4
(Total Probability Rule) In Example 2.3, the CTS screening method experiment indicates that the probability of screening a worker having CTS (A) as positive (B) is 0.8 and the probability of screening a worker without CTS (A') as positive (B) is 0.1, i.e.,
P(B | A) = 0.8 and P(B | A') = 0.1
Suppose that the industry-wide incidence rate of CTS is P(A) = 0.0017. Find the probability that a worker shows a positive test (B) for CTS at the workplace.
☞ P(A) = 0.0017 and P(A') = 1 - P(A) = 0.9983
By applying the total probability rule,
P(B) = P(B | A)P(A) + P(B | A')P(A') = 0.8 × 0.0017 + 0.1 × 0.9983 = 0.101
Exercise 2.4 (MR 2-97)
Customers are used to evaluate preliminary product designs. In the past, 95% of highly successful products, 60% of moderately successful products, and 10% of poor products received good reviews. In addition, 40% of product designs have been highly successful, 35% have been moderately successful, and 25% have been poor products. Find the probability that a product receives a good review.
2-6 Independence
- Explain the term independence between events.
- Assess the independence of two events.

Independence of Events
Two events A and B are stochastically independent if the occurrence of A does not affect the probability of B and vice versa. In other words, two events A and B are independent if and only if
(1) P(A | B) = P(A)
(2) P(B | A) = P(B)
(3) P(A ∩ B) = P(A)P(B)
(Derivation) If P(A ∩ B) = P(A)P(B), then P(B | A) = P(A ∩ B)/P(A) = P(B), so A and B are independent.
Example 2.5
(Independence) For the CTS test results in Example 2.3, the following probabilities have been calculated:
P(B) = 45/100 and P(B | A) = 4/5
Check if events A and B are independent.
☞ Since P(B | A) = 4/5 ≠ P(B) = 45/100, A and B are not independent. This indicates that information from the CTS test is useful to screen workers having CTS at the workplace.
Exercise 2.5
For the resistance test results in Exercise 2.2, the following probabilities have been calculated:
P(A) = 89/100 and P(A | B) = 40/43
Assess whether events A and B are independent.
2-7 Bayes' Theorem
- Apply Bayes' theorem to find the conditional probability of an event when the event is partitioned into several mutually exclusive and exhaustive subsets.
Bayes' Theorem
From the definition of conditional probability,
P(A | B) = P(A ∩ B)/P(B) = P(B | A)·P(A)/P(B), where P(B) = P(B ∩ A) + P(B ∩ A')
The total probability rule for a collection of k mutually exclusive and exhaustive events E1, E2, ..., Ek and any event B is
P(B) = P(B | E1)·P(E1) + P(B | E2)·P(E2) + ... + P(B | Ek)·P(Ek)
From the two expressions above, the following general result (known as Bayes' theorem) is derived:
P(Ei | B) = P(B | Ei)·P(Ei) / [P(B | E1)·P(E1) + P(B | E2)·P(E2) + ... + P(B | Ek)·P(Ek)], i = 1, 2, ..., k
Example 2.6
(Bayes' Theorem) In Examples 2.3 and 2.4, the following probabilities have been identified:
P(B | A) = 0.8, P(B | A') = 0.1, P(A) = 0.0017, and P(B) = 0.101
Find the probability that a worker has CTS (A) when the test is positive (B).
☞ By applying Bayes' theorem,
P(A | B) = P(B | A)P(A)/[P(B | A)P(A) + P(B | A')P(A')] = P(B | A)P(A)/P(B) = (0.8 × 0.0017)/0.101 = 0.013
Since the incidence rate of CTS in industry is low, the probability that a worker has CTS is quite small even if the test is positive.
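The CTS calculations of Examples 2.4 and 2.6 chain the total probability rule and Bayes' theorem; the short sketch below (supplementary Python, not from the workbook) reproduces both numbers.

```python
# Total probability rule and Bayes' theorem for the CTS screening example.
p_A = 0.0017                 # industry-wide CTS incidence rate
p_B_given_A = 0.8            # test positive given CTS
p_B_given_notA = 0.1         # test positive given no CTS

p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)   # total probability rule
p_A_given_B = p_B_given_A * p_A / p_B                  # Bayes' theorem

print(round(p_B, 3), round(p_A_given_B, 3))            # about 0.101 and 0.013
```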
Exercise 2.6
In Exercise 2.4, what is the probability that a new design will be highly successful if it receives a good review?
2-8 Random Variables
- Describe the terms random variable X and range of X.
- Distinguish between discrete and continuous random variables.

Random Variable
A random variable, denoted by an uppercase letter such as X, associates real numbers with the individual outcomes of a random experiment. Note that a measured value of X is denoted by a lowercase letter, such as x = 10. The set of possible values of X is referred to as the range of X. Depending on the type of range, two categories of random variables are defined:
1. Discrete Random Variable: Has a finite (or countably infinite) range. (e.g.) Tossing a coin: X = 0 for head and X = 1 for tail
2. Continuous Random Variable: Has an interval of real numbers for its (infinite) range. (e.g.) The life length of an INFINITY light bulb: X ≥ 0
Answers to Exercises

Exercise 2.1
1. (Sample Space) S = {x; x > 0}, where x represents the rise time of the reactor for a batch.
2. (Event) E1 = {x; 0 < x < 55} and E2 = {x; x > 70}
3. (Set Operation) E1 ∪ E2 = {x; 0 < x < 55 or x > 70}, E1 ∩ E2 = ∅, and E1' = {x; x ≥ 55}
4. (Mutually Exclusive Events) Yes, because E1 ∩ E2 = ∅.
5. (Exhaustive Events) No, because E1 ∪ E2 ≠ S.

Exercise 2.2 (Probability of Joint Event)
1. P(A ∩ B) = 80/100 = 0.80
2. P(A ∪ B) = P(A) + P(B) - P(A ∩ B) = 89/100 + 86/100 - 80/100 = 0.95
3. Since P(A ∩ B) = 80/100 ≠ 0, A and B are not mutually exclusive.

Exercise 2.3 (Conditional Probability)
P(A | B) = P(A ∩ B)/P(B) = (80/100)/(86/100) = 40/43 ≈ 0.93

Exercise 2.4 (Total Probability Rule)
Let E1, E2, and E3 represent the event of being a highly successful product, the event of being a moderately successful product, and the event of being a poor product, respectively. Also, let G denote the event of receiving good reviews from customers. Then,
P(G | E1) = 0.95, P(G | E2) = 0.60, P(G | E3) = 0.10,
P(E1) = 0.40, P(E2) = 0.35, and P(E3) = 0.25
The events E1, E2, and E3 are mutually exclusive and exhaustive because P(E1 ∪ E2 ∪ E3) = P(E1) + P(E2) + P(E3) = 1 = P(S).
By applying the total probability rule,
P(G) = P(G | E1)P(E1) + P(G | E2)P(E2) + P(G | E3)P(E3) = 0.95 × 0.40 + 0.60 × 0.35 + 0.10 × 0.25 = 0.62

Exercise 2.5 (Independence)
Since P(A | B) = 40/43 ≠ P(A) = 89/100, A and B are not independent. This indicates that the scratch resistance and shock resistance of a disk are stochastically related.

Exercise 2.6 (Bayes' Theorem)
By applying Bayes' theorem,
P(E1 | G) = P(G | E1)P(E1) / [P(G | E1)P(E1) + P(G | E2)P(E2) + P(G | E3)P(E3)] = (0.95 × 0.40)/(0.95 × 0.40 + 0.60 × 0.35 + 0.10 × 0.25) = 0.62
Chapter 3 Discrete Random Variables and Probability Distributions
3-1 Discrete Random Variables
3-2 Probability Distributions and Probability Mass Functions
3-3 Cumulative Distribution Functions
3-4 Mean and Variance of a Discrete Random Variable
3-5 Discrete Uniform Distribution
3-6 Binomial Distribution
3-7 Geometric and Negative Binomial Distributions
  3-7.1 Geometric Distribution
  3-7.2 Negative Binomial Distribution
3-8 Hypergeometric Distribution
3-9 Poisson Distribution
Summary of Discrete Probability Distributions
EXCEL Applications
Answers to Exercises
3-1 Discrete Random Variables
See Section 2-8 (Random Variables) to review the notion of a discrete random variable.
3-2 Probability Distributions and Probability Mass Functions
- Distinguish between the probability mass function and the cumulative distribution function.
- Determine the probability mass function of a discrete random variable.

Probability Distribution
A probability distribution indicates how probabilities are distributed over possible values of X.
Two types of functions are used to express the probability distribution of a discrete random variable X:
1. Probability Mass Function (p.m.f.): Describes the probability of a value of X, i.e., P(X = xi).
2. Cumulative Distribution Function (c.d.f.): Describes the sum of the probabilities of the values of X that are less than or equal to a specified value, i.e., P(X ≤ xi).
Probability Mass Function (p.m.f.)
The probability mass function of a discrete random variable X, denoted as f(xi), is
f(xi) = P(X = xi), xi = x1, x2, ..., xn
and satisfies the following properties (see Figure 3-1):
(1) f(xi) ≥ 0 for all xi
(2) Σi f(xi) = 1
[Figure 3-1 Probability mass function of casting a die.]

Example 3.1
(Probability Mass Function) The grades of n = 50 students in a statistics class are summarized as follows:

Grade (X)           A (x = 1)   B (x = 2)   C (x = 3)   D or below (x = 4)
No. of students     10          20          15          5

Let X denote a grade in statistics. Let the values 1, 2, 3, and 4 represent an A, B, C, and D or below, respectively. Determine the probability mass function of X and plot f(xi).
☞ f(1) = P(X = 1) = 10/50 = 1/5, f(2) = 20/50 = 2/5, f(3) = 15/50 = 3/10, and f(4) = 5/50 = 1/10
[Plot: probability mass function of X]
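Building a p.m.f. from observed counts, as in Example 3.1, amounts to dividing each count by n. A minimal Python sketch (supplementary, not from the workbook):

```python
# Probability mass function from the grade counts of Example 3.1.
counts = {1: 10, 2: 20, 3: 15, 4: 5}          # x -> number of students
n = sum(counts.values())                       # 50 students

pmf = {x: c / n for x, c in counts.items()}    # f(x) = P(X = x)
print(pmf)                                     # {1: 0.2, 2: 0.4, 3: 0.3, 4: 0.1}
print(sum(pmf.values()))                       # sums to 1 (up to floating-point rounding)
```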
Exercise 3.1 (MR 3-10)
The orders from n = 100 customers for wooden panels of various thickness (X) are summarized as follows:

Wooden Panel Thickness (X; unit: inch)    1/8    1/4    3/8
No. of customer orders                    20     70     10

Determine the probability mass function of X and plot f(xi).
3-3 Cumulative Distribution Functions
- Determine the cumulative distribution function of a discrete random variable.

Cumulative Distribution Function (c.d.f.)
The cumulative distribution function of a discrete random variable X, denoted as F(x), is
F(x) = P(X ≤ x) = Σ_{xi ≤ x} f(xi)
and satisfies the following properties (see Figure 3-2):
(1) 0 ≤ F(x) ≤ 1
(2) F(x) ≤ F(y) if x ≤ y
[Figure 3-2 Cumulative distribution function of casting a die.]

Example 3.2
(Cumulative Distribution Function) In Example 3.1, the following probabilities have been calculated:
P(X = 1) = 1/5, P(X = 2) = 2/5, P(X = 3) = 3/10, and P(X = 4) = 1/10
Determine the cumulative distribution function of X and plot F(x).
☞ By using the probability mass function of X,
F(x) = 0 for x < 1; F(x) = 1/5 for 1 ≤ x < 2; F(x) = 3/5 for 2 ≤ x < 3; F(x) = 9/10 for 3 ≤ x < 4; and F(x) = 1 for x ≥ 4
[Plot: cumulative distribution function of X]
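Because F(x) is a running sum of f(xi), the c.d.f. of Example 3.2 can be accumulated in a short loop. A supplementary Python sketch (not from the workbook):

```python
# Cumulative distribution function as a running sum of the p.m.f. (Example 3.2).
pmf = {1: 1/5, 2: 2/5, 3: 3/10, 4: 1/10}

cdf, total = {}, 0.0
for x in sorted(pmf):            # F(x) = sum of f(xi) for xi <= x
    total += pmf[x]
    cdf[x] = total

print(cdf)   # roughly {1: 0.2, 2: 0.6, 3: 0.9, 4: 1.0} (up to floating-point rounding)
```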
Exercise 3.2
In Exercise 3.1, the following probabilities have been calculated:
P(X = 1/8) = 0.2, P(X = 1/4) = 0.7, and P(X = 3/8) = 0.1
Determine the cumulative distribution function of X and plot F(x).
3-4 Mean and Variance of a Discrete Random Variable
- Calculate the mean, variance, and standard deviation of a discrete random variable.
Mean of X (μ)
The mean of X, denoted as μ or E(X), is the expected value of X:
μ = E(X) = Σx x·f(x)

Variance of X (σ²)
The variance of X, denoted as σ² or V(X), indicates the dispersion of X about μ:
σ² = V(X) = Σx (x - μ)²·f(x) = Σx x²·f(x) - μ²
The standard deviation of X is σ = √V(X).
(Derivation) V(X) = Σx x²·f(x) - μ² (see page 67 of MR)
Example 3.3
(Mean, Variance, and Standard Deviation) Suppose that, in an experiment of casting a single die, the six outcomes (X) are equally likely. Determine the mean, variance, and standard deviation of X.
☞ The probability mass function of X is f(x) = 1/6, x = 1, 2, ..., 6. Therefore,
μ = Σx x·f(x) = (1 + 2 + ... + 6)/6 = 3.5
σ² = Σx x²·f(x) - μ² = (1² + 2² + ... + 6²)/6 - 3.5² = 91/6 - 12.25 ≈ 2.9
σ = √2.9 ≈ 1.7
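The mean, variance, and standard deviation formulas above translate directly into code. The sketch below (supplementary Python, not from the workbook) reproduces the die calculation of Example 3.3.

```python
# Mean, variance, and standard deviation of a discrete random variable (fair die).
import math

pmf = {x: 1/6 for x in range(1, 7)}

mu = sum(x * f for x, f in pmf.items())               # mean
var = sum(x**2 * f for x, f in pmf.items()) - mu**2   # variance
sd = math.sqrt(var)                                   # standard deviation

print(round(mu, 2), round(var, 2), round(sd, 2))      # about 3.5, 2.92, 1.71
```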
Exercise 3.3
In Exercise 3.1, the following probabilities have been calculated:
P(X = 1/8) = 0.2, P(X = 1/4) = 0.7, and P(X = 3/8) = 0.1
Determine the mean, variance, and standard deviation of X.
3-5 Discrete Uniform Distribution
- Describe the probability distribution of a discrete uniform random variable.
- Determine the probability mass function, mean, and variance of a discrete uniform random variable.
Discrete Uniform Random Variable
A discrete uniform random variable X has an equal probability for each value in the range of X = [a, b], a < b (see Figure 3-3). Thus, the probability mass function of X is
f(x) = 1/(b - a + 1), where x = a, a + 1, ..., b
[Figure 3-3 A discrete uniform distribution.]
The mean and variance of X are
μ = (b + a)/2 and σ² = [(b - a + 1)² - 1]/12
Example 3.4
Suppose that six outcomes are equally likely in the experiment of casting a single die.
1. (Probability Mass Function; Discrete Uniform Distribution) Determine the probability mass function of the number (X) on the die.
☞ Since x = 1, 2, ..., 6, a = 1 and b = 6. Therefore, the probability mass function of X is f(x) = 1/(6 - 1 + 1) = 1/6, x = 1, 2, ..., 6.
2. (Probability) Find the probability that the number (X) on the die in the experiment is greater than three.
☞ P(X > 3) = f(4) + f(5) + f(6) = 3/6 = 1/2
3. (Mean and Variance) Calculate the mean and variance of X.
☞ Since a = 1 and b = 6, the mean and variance of X are
μ = (b + a)/2 = (6 + 1)/2 = 3.5 and σ² = [(b - a + 1)² - 1]/12 = (36 - 1)/12 ≈ 2.9
(Note) Check that the values of μ and σ² above are the same as the answers in Example 3.3.

Exercise 3.4 (MR 3-50)
Suppose that product codes of 2, 3, or 4 letters are equally likely.
1. Determine the probability mass function of the number of letters (X) in a product code.
2. Calculate the mean and variance of X.
3. Calculate the mean and variance of the number of letters in 100 product codes (Y = 100X). Note that E(cX) = cE(X) and V(cX) = c²V(X) (refer to Section 5-7, Linear Combinations of Random Variables, for further detail).
3-6 Binomial Distribution
- Describe the terms Bernoulli trial and binomial experiment.
- Describe the probability distribution of a binomial random variable.
- Determine the probability mass function, mean, and variance of a binomial random variable.

Binomial Experiment
A binomial experiment refers to a random experiment consisting of n repeated trials which satisfy the following conditions: (1) The trials are independent, i.e., the outcome of a trial does not affect the outcomes of other trials, (2) Each trial has only two outcomes, labeled as 'success' and 'failure,' and (3) The probability of a success (p) in each trial is constant.
In other words, a binomial experiment consists of a series of n independent Bernoulli trials (see the definition of a Bernoulli trial below) with a constant probability of success p in each trial.

Bernoulli Trial
A Bernoulli trial refers to a trial that has only two possible outcomes.
(e.g.) Bernoulli trials: (1) Flipping a coin: S = {head, tail} (2) Truth of an answer: S = {right, wrong} (3) Status of a machine: S = {working, broken} (4) Quality of a product: S = {good, defective} (5) Accomplishment of a task: S = {success, failure}
The probability mass function of a Bernoulli random variable X is
f(x) = p^x·(1 - p)^(1-x), x = 0, 1
The mean and variance of a Bernoulli random variable are
μ = p and σ² = p(1 - p)
(Derivation) μ and σ² of a Bernoulli random variable:
μ = Σx x·f(x) = 0 × (1 - p) + 1 × p = p
σ² = Σx x²·f(x) - μ² = p - p² = p(1 - p)
Binomial Random Variable
A binomial random variable X represents the number of trials whose outcome is a success out of the n trials in a binomial experiment with a probability of success p (see Table 3-1).
[Table 3-1 The characteristics of the binomial distribution: the population is infinite*, the probability of success (p) is constant†, the number of trials is constant (n), and the number of successes is variable (X).]
* If an item selected from a population is replaced before the next trial, the size of the population is considered infinite even if it may be finite.
† If the probability of success p is constant, the trials are considered independent; otherwise, the trials are dependent.
The probability mass function of X, denoted as B(n, p), is
f(x) = C(n, x)·p^x·(1 - p)^(n-x), x = 0, 1, ..., n
(Note) The number of combinations of x items selected from n: C(n, x) = n!/[x!(n - x)!]
The mean and variance of X are
μ = np and σ² = np(1 - p)
Example 3.5
A test has n = 50 multiple-choice questions; each question has four choices but only one answer is right. Suppose that a student gives his/her answers by simple guessing.
1. (Probability Mass Function; Binomial Distribution) Determine the probability mass function of the number of right answers (X) that the student gives in the test.
☞ Since a 'right answer' is a success, the probability of a success for each question is p = 1/4. Thus, the probability mass function of X is
f(x) = C(50, x)·(0.25)^x·(0.75)^(50-x), x = 0, 1, ..., 50
2. (Probability) Find the probability that the student answers at least 30 questions correctly.
☞ P(X ≥ 30) = Σ_{x=30}^{50} C(50, x)·(0.25)^x·(0.75)^(50-x)
3. (Mean and Variance) Calculate the mean and variance of X.
☞ μ = np = 50 × 0.25 = 12.5 right answers
σ² = np(1 - p) = 50 × 0.25 × 0.75 ≈ 9.4 and σ ≈ 3.1
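The binomial p.m.f. can be evaluated exactly with integer combinations, which also makes the 'at least 30 correct answers' probability of Example 3.5 easy to obtain. A supplementary Python sketch (not from the workbook):

```python
# Binomial probabilities for Example 3.5 using math.comb (standard library only).
from math import comb

n, p = 50, 0.25

def binom_pmf(x):
    return comb(n, x) * p**x * (1 - p)**(n - x)

p_at_least_30 = sum(binom_pmf(x) for x in range(30, n + 1))
print(p_at_least_30)             # P(X >= 30): a very small probability
print(n * p, n * p * (1 - p))    # mean 12.5 and variance 9.375
```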
Exercise 3.5 (MR 3-67)
Because not all airline passengers show up for their reserved seat, an airline sells n = 125 tickets for a flight that holds only 120 passengers. The probability that a passenger does not show up is 0.10. Assume that the passengers behave independently.
1. Determine the probability mass function of the number of passengers (X) who show up for the flight.
2. Find the probability that the flight departs with empty seats.
3. Calculate the mean and variance of X.

3-7 Geometric and Negative Binomial Distributions
3-7.1 Geometric Distribution
- Describe the probability distribution of a geometric random variable.
- Compare the geometric distribution with the binomial distribution.
- Explain the lack of memory property of the geometric distribution.
- Determine the probability mass function, mean, and variance of a geometric random variable.
Geometric Random Variable
A geometric random variable X represents the number of trials conducted until the first success in a binomial experiment with a probability of success p: the experiment fails (x - 1) times, with probability (1 - p)^(x-1), and then succeeds at the x-th trial, with probability p, so P(X = x) = (1 - p)^(x-1)·p.
The probability mass function of X is
f(x) = (1 - p)^(x-1)·p, x = 1, 2, ...
The mean and variance of X are
μ = 1/p and σ² = (1 - p)/p²
The probability of X decreases geometrically as illustrated in Figure 3-4.
[Figure 3-4 Geometric distributions with selected values of p.]

Geometric vs. Binomial Distributions
In the geometric distribution the number of trials is a random variable and the number of successes is one, whereas in the binomial distribution the number of trials is constant and the number of successes is a random variable (see Table 3-2).
Table 3-2 The Characteristics of Binomial and Geometric Distributions

Distribution   Population*   Probability of Success (p)†   No. of Trials   No. of Successes
Binomial       Infinite      Constant                      Constant (n)    Variable (X)
Geometric      Infinite      Constant                      Variable (X)    1

* If an item selected from a population is replaced before the next trial, the size of the population is considered infinite even if it may be finite.
† If the probability of success p is constant, the trials are considered independent; otherwise, the trials are dependent.
Lack of Memory Property
The lack of memory property means that the system does not remember the history of previous outcomes, so the timer of the system can be reset at any time, i.e.,
P(X < t + Δt | X > t) = P(t < X < t + Δt)/P(X > t) = P(X < Δt)
The lack of memory property of the geometric distribution implies that the value of p does not change from trial to trial. Due to the memoryless property of the geometric distribution, the number of trials (X) until the next success can be counted from any trial. The geometric distribution is the only discrete distribution having the lack of memory property.

(Proof) The lack of memory property
The conditional probability P(X < t + Δt | X > t) is
P(X < t + Δt | X > t) = P(t < X < t + Δt)/P(X > t)
Note that the sum of the first n terms of the geometric sequence a, ar, ar², ..., ar^(n-1) is
S_n = Σ_{k=0}^{n-1} a·r^k = (a - a·r^n)/(1 - r), where r ≠ 1
Accordingly, with k = x - 1,
P(t < X < t + Δt)/P(X > t)
= [Σ_{k=0}^{(t+Δt-1)-1} p(1 - p)^k - Σ_{k=0}^{t-1} p(1 - p)^k] / [1 - Σ_{k=0}^{t-1} p(1 - p)^k]
= {[p - p(1 - p)^(t+Δt-1)]/[1 - (1 - p)] - [p - p(1 - p)^t]/[1 - (1 - p)]} / {1 - [p - p(1 - p)^t]/[1 - (1 - p)]}
= [(1 - p)^t - (1 - p)^(t+Δt-1)] / (1 - p)^t
= 1 - (1 - p)^(Δt-1)
The probability P(X < Δt) is
P(X < Δt) = P(X ≤ Δt - 1) = Σ_{x=1}^{Δt-1} (1 - p)^(x-1)·p = Σ_{k=0}^{Δt-2} p(1 - p)^k = 1 - (1 - p)^(Δt-1), where k = x - 1
Therefore, P(X < t + Δt | X > t) = P(X < Δt).
Example 3.6
Suppose that the probability of meeting a 'right' person for marriage is 0.1 at a blind date and that blind dates are independent (i.e., the probability of meeting a right person for marriage is constant).
1. (Probability Mass Function; Geometric Distribution) Determine the probability mass function of the number of blind dates (X) to make until succeeding at meeting a right person for marriage.
☞ Since meeting a right person for marriage is a success, the probability of success is p = 0.1. Thus, the probability mass function of X is
f(x) = (1 - p)^(x-1)·p = 0.9^(x-1) × 0.1, x = 1, 2, ...
2. (Probability) Find the probability of meeting a right person for marriage within 10 blind dates.
☞ P(X ≤ 10) = 1 - (1 - p)^10 = 1 - 0.9^10 ≈ 0.65
3. (Mean and Variance) Calculate the mean and variance of X.
☞ μ = 1/p = 10 blind dates and σ² = (1 - p)/p² = 0.9/0.01 = 90

Exercise 3.6 (MR 3-73)
Suppose that the probability of a successful optical alignment in the assembly of an optical data storage product is 0.8. Assume that the trials are independent.
1. Determine the probability mass function of the number of trials (X) required until the first successful alignment.
2. Find the probability that the first successful alignment requires at most four trials.
3. Calculate the mean and variance of X.
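A supplementary Python sketch (not from the workbook) for Example 3.6: it evaluates the geometric p.m.f., the probability of success within 10 blind dates, and the mean and variance.

```python
# Geometric distribution with p = 0.1 (Example 3.6).
p = 0.1

def geom_pmf(x):                       # f(x) = (1 - p)^(x - 1) * p, x = 1, 2, ...
    return (1 - p)**(x - 1) * p

p_within_10 = sum(geom_pmf(x) for x in range(1, 11))   # same as 1 - (1 - p)**10
mean = 1 / p
var = (1 - p) / p**2

print(round(p_within_10, 3), round(mean, 1), round(var, 1))   # about 0.651, 10.0, 90.0
```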
3-7.2 Negative Binomial Distribution
- Describe the probability distribution of a negative binomial random variable.
- Compare the negative binomial distribution with the binomial distribution.
- Determine the probability mass function, mean, and variance of a negative binomial random variable.
Negative Binomial Random Variable
A negative binomial random variable X represents the number of trials conducted until r successes occur in a binomial experiment with a probability of success p: the first (x - 1) trials contain (r - 1) successes (and hence (x - r) failures), and the r-th success occurs at the x-th trial.
The probability mass function of X is
f(x) = C(x - 1, r - 1)·(1 - p)^(x-r)·p^r, x = r, r + 1, r + 2, ...
The mean and variance of X are
μ = r/p and σ² = r(1 - p)/p²
Negative Binomial vs. Binomial Distributions
In the binomial distribution the number of trials is constant and the number of successes is a random variable, whereas the opposite is true in the negative binomial distribution (this is how the 'negative binomial' gets its name) (see Table 3-3).
Table 3-3 The Characteristics of Binomial and Negative Binomial Distributions

Distribution        Population*   Probability of Success (p)†   No. of Trials   No. of Successes
Binomial            Infinite      Constant                      Constant (n)    Variable (X)
Negative binomial   Infinite      Constant                      Variable (X)    Constant (r)

* If an item selected from a population is replaced before the next trial, the size of the population is considered infinite even if it may be finite.
† If the probability of success p is constant, the trials are considered independent; otherwise, the trials are dependent.
Example 3.7
A travel company is recruiting new members by making calls for the membership program ECONOTRAVEL. A cost analysis indicates that at least 20 new members per day must be recruited to maintain the program. Suppose that the probability of recruiting a new member per call is 0.2.
1. (Probability Mass Function; Negative Binomial Distribution) Determine the probability mass function of the number of calls (X) to make to recruit 20 new members per day.
☞ Since recruiting a new member is a success, the probability of success is p = 0.2 and the number of successes is r = 20. Thus, the probability mass function of X is
f(x) = C(x - 1, 19)·(0.8)^(x-20)·(0.2)^20, x = 20, 21, ...
2. (Probability) Find the probability of recruiting 20 new members per day by making 100 calls.
☞ P(X = 100) = C(99, 19)·(0.8)^80·(0.2)^20
3. (Mean and Variance) Calculate the mean and variance of X.
☞ μ = r/p = 20/0.2 = 100 calls per day and σ² = r(1 - p)/p² = 20 × 0.8/0.2² = 400
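A supplementary Python sketch (not from the workbook) for Example 3.7; it evaluates the negative binomial p.m.f. at x = 100 (the reading used above for 'making 100 calls') along with the mean and variance.

```python
# Negative binomial setting of Example 3.7: r = 20 successes, p = 0.2.
from math import comb

r, p = 20, 0.2

def nbinom_pmf(x):       # f(x) = C(x-1, r-1) (1-p)^(x-r) p^r, x = r, r+1, ...
    return comb(x - 1, r - 1) * (1 - p)**(x - r) * p**r

print(nbinom_pmf(100))                           # P(X = 100)
print(round(r / p), round(r * (1 - p) / p**2))   # mean 100 calls, variance 400
```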
Exercise 3.7 (MR 3-83)
An electronic scale at an automated filling operation stops the manufacturing line after three underweight packages are detected. Suppose that the probability of an underweight package is 0.01 and fills are independent.
1. Determine the probability mass function of the number of fills (X) before the manufacturing line is stopped.
2. Find the probability that at least 100 fills can be completed before the manufacturing line is stopped.
3. Calculate the mean and variance of X.

3-8 Hypergeometric Distribution
- Describe the probability distribution of a hypergeometric random variable.
- Compare the hypergeometric distribution with the binomial distribution.
- Determine the probability mass function, mean, and variance of a hypergeometric random variable.
- Apply the binomial approximation to a hypergeometric random variable.
Hypergeometric Random Variable
A hypergeometric random variable X represents the number of successes in a sample of size n that is selected at random without replacement from a finite population of size N consisting of K successes and N - K failures. Since each item selected from the population is not replaced, the outcome of a trial depends on the outcome(s) of the previous trial(s); therefore, the probability of success p at each trial is not constant.
The probability mass function of X is
f(x) = C(K, x)·C(N - K, n - x)/C(N, n), x = max{0, n - (N - K)} to min{K, n}
The mean and variance of X are
μ = np and σ² = np(1 - p)·(N - n)/(N - 1), where p = K/N
(Note) The variance of a hypergeometric random variable differs from the variance of a binomial random variable by the factor (N - n)/(N - 1), which is called the finite population correction factor.
Hypergeometric vs. Binomial Distributions
In the hypergeometric distribution the population is finite and the probability of success is changing, whereas in the binomial distribution the population is infinite and the probability of success is constant (see Table 3-4).
Table 3-4 The Characteristics of Binomial and Hypergeometric Distributions

Distribution     Population*                            Probability of Success (p)†   No. of Trials   No. of Successes
Binomial         Infinite                               Constant                      Constant (n)    Variable (X)
Hypergeometric   Finite (K successes; N - K failures)   Changing                      Constant (n)    Variable (X)

* If an item selected from a population is replaced before the next trial, the size of the population is considered infinite even if it may be finite.
† If the probability of success p is constant, the trials are considered independent; otherwise, the trials are dependent.
Example 3.8
An instructor of a statistics class is planning to interview a sample of n = 10 students who are randomly selected from the class. The class has a total of 30 students, consisting of 20 traditional and 10 non-traditional students.
1. (Probability Mass Function; Hypergeometric Distribution) Determine the probability mass function of the number of non-traditional students (X) in the sample.
☞ The size of the population is N = 30. Since a non-traditional student is a success, the number of successes in the population is K = 10. To determine the range of the number of non-traditional students (X) in the sample, calculate the following:
max{0, n - (N - K)} = max{0, 10 - (30 - 10)} = max{0, -10} = 0
min{K, n} = min{10, 10} = 10
Therefore, the probability mass function of X is
f(x) = C(10, x)·C(20, 10 - x)/C(30, 10), x = 0, 1, ..., 10
2. (Probability) Find the probability that at least one non-traditional student is in the sample.
☞ P(X ≥ 1) = 1 - P(X = 0) = 1 - C(20, 10)/C(30, 10) ≈ 0.994
3. (Mean and Variance) Calculate the mean and variance of X.
☞ μ = np = 10 × (1/3) = 3.3 non-traditional students in the sample of 10 students
σ² = np(1 - p)·(N - n)/(N - 1) = 10 × (1/3) × (2/3) × (20/29) ≈ 1.5
Exercise 3.8 (MR 3-90)
A lot of 75 washers contains 5 defectives whose variability in thickness is unacceptably large. A sample of 10 washers is selected at random without replacement.
1. Determine the probability mass function of the number of defective washers (X) in the sample.
2. Find the probability that at least one unacceptable washer is in the sample.
3. Calculate the mean and variance of X.

Binomial Approximation
The hypergeometric distribution is approximated by the binomial distribution with p = K/N if the sample size n is relatively small compared to the population size N. A rule of thumb is that the binomial approximation of a hypergeometric distribution is satisfactory if n/N < 0.1.

Example 3.9
(Binomial Approximation; Hypergeometric Distribution) Suppose that X has a hypergeometric distribution with N = 200, n = 10, and K = 20. Find P(X = 2) by using the hypergeometric distribution and the approximate binomial distribution. Is the binomial approximation reasonable?
☞ The probability mass function of X is
f(x) = C(20, x)·C(180, 10 - x)/C(200, 10), x = 0, 1, ..., 10
(Note) max{0, n - (N - K)} = max{0, 10 - (200 - 20)} = max{0, -170} = 0 and min{K, n} = min{20, 10} = 10
Thus, P(X = 2) = C(20, 2)·C(180, 8)/C(200, 10).
The approximate binomial distribution of the hypergeometric random variable X is
f(x) ≈ C(10, x)·(0.1)^x·(0.9)^(10-x), x = 0, 1, ..., 10
(Note) p = K/N = 20/200 = 0.1
Based on the binomial approximation,
P(X = 2) ≈ C(10, 2)·(0.1)²·(0.9)⁸
Since n is relatively small compared to N (n/N = 10/200 = 0.05), the binomial approximation of the hypergeometric random variable X is reasonable.
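The comparison in Example 3.9 is easy to carry out numerically; the supplementary Python sketch below (not from the workbook) evaluates both the exact hypergeometric probability and its binomial approximation.

```python
# Exact hypergeometric probability vs. its binomial approximation (Example 3.9).
from math import comb

N, n, K = 200, 10, 20
x = 2

hyper = comb(K, x) * comb(N - K, n - x) / comb(N, n)   # exact
p = K / N                                              # 0.1
binom = comb(n, x) * p**x * (1 - p)**(n - x)           # approximation

print(round(hyper, 4), round(binom, 4))   # the two values are close because n/N = 0.05
```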
Exercise 3.9
Suppose that X has a hypergeometric distribution with N = 500, n = 50, and K = 100. Find P(X = 10) by using the hypergeometric distribution and the approximate binomial distribution. Is the binomial approximation reasonable?
3-9 Poisson Distribution
❑ Explain the term Poisson process.
❑ Describe the probability distribution of a Poisson random variable.
❑ Compare the Poisson distribution with the binomial distribution.
❑ Determine the probability mass function, mean, and variance of a Poisson random variable.

Poisson Process
Suppose that the occurrence of an event over an interval (of time, length, area, space, etc.) is countable and the interval can be partitioned into subintervals. Then, a random experiment is defined as a Poisson process (see Figure 3-5) if
(1) The probability of more than one occurrence in a subinterval is infinitesimal (approximately zero),
(2) The occurrences of the event in non-overlapping subintervals are stochastically independent, and
(3) The probability of one occurrence of the event in a subinterval is the same throughout all subintervals and proportional to the length of the subinterval.
Poisson Process (cont.)
Figure 3-5 Poisson process: the interval is partitioned into subintervals I1, I2, ..., In, and each subinterval yields 1 occurrence with probability p or 0 occurrences with probability 1 − p.
In other words, the Poisson process is a binomial experiment with an infinite number of trials n.
(e.g.) Poisson Process
(1) The number of defects of a product
(2) The number of customers in a store
(3) The number of automobile accidents
(4) The number of e-mails received
Poisson Random Variable
A Poisson random variable X represents the number of occurrences of an event of interest in a specified unit interval (of time, space, etc.). The probability mass function of X is
f(x) = e^(−λ) λ^x / x!, x = 0, 1, 2, ...
The mean and variance of X are μ = λ and σ² = λ.
(Caution) Use consistent units to define a Poisson random variable X and the corresponding parameter λ. For example, the following pairs of X and λ are equivalent to each other:

X (counts/unit interval)         λ (average no. of counts/unit interval)
No. of flaws per disk            1
No. of flaws every 10 disks      10
No. of flaws every 100 disks     100

Poisson vs. Binomial Distributions
In the Poisson distribution, the number of trials is infinite, whereas in the binomial distribution the number of trials is finite (see Table 3-5). In other words, the Poisson distribution with E(X) = λ is the limiting form of the binomial distribution with E(X) = np:
lim(n→∞) C(n, x) p^x (1 − p)^(n−x) = e^(−λ) λ^x / x!
(Proof) Poisson vs. binomial distributions
Suppose that X is a binomial random variable with parameters n and p, and let λ = np. Then,
lim(n→∞) C(n, x) p^x (1 − p)^(n−x) = lim(n→∞) [n!/(x!(n − x)!)] (λ/n)^x (1 − λ/n)^(n−x)
= (λ^x/x!) lim(n→∞) [n(n − 1)···(n − x + 1)/n^x] (1 − λ/n)^n (1 − λ/n)^(−x)
As n becomes large,
lim(n→∞) n(n − 1)···(n − x + 1)/n^x = 1, lim(n→∞) (1 − λ/n)^n = e^(−λ), and lim(n→∞) (1 − λ/n)^(−x) = 1
Therefore,
lim(n→∞) C(n, x) p^x (1 − p)^(n−x) = e^(−λ) λ^x / x!
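As a supplementary illustration (not part of the original workbook, which uses Excel), the short Python sketch below, assuming SciPy is available, shows numerically how the binomial probability approaches the Poisson probability as n grows with np = λ fixed.

```python
# Hedged sketch: numerical illustration of the limit B(n, lambda/n) -> Poisson(lambda).
from scipy.stats import binom, poisson

lam, x = 3.0, 2
for n in (10, 100, 1000, 10000):
    print(n, round(binom(n, lam / n).pmf(x), 5))   # binomial P(X = x) with p = lam/n
print("Poisson limit:", round(poisson(lam).pmf(x), 5))
```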
Table 3-5 The Characteristics of Binomial and Poisson Distributions

Distribution   Population*   Probability of Success (p)   No. of Trials   No. of Successes
Binomial       Infinite      Constant                     Constant (n)    Variable (X)
Poisson        Infinite      Constant (p = λ/n)           Infinite        Variable (X)

* If an item selected from a population is replaced before the next trial, the size of the population is considered infinite even if it may be finite. If the probability of success p is constant, the trials are considered independent; otherwise, the trials are dependent.
Example 3.10
The number of customers who come to the YUMMY donut store follows a Poisson distribution with a mean of 5 customers every 10 minutes.
1. (Probability Mass Function; Poisson Distribution) Determine the probability mass function of the number of customers (X) per hour coming to the donut store.
For a consistent unit for X and λ,
λ = E(X) = 5 customers/10 min = 0.5 customers/min × 60 min/hr = 30 customers per hour
Therefore, the probability mass function of X is
f(x) = e^(−30) 30^x / x!, x = 0, 1, 2, ...
2. (Probability) Find the probability that 40 customers come to the donut store in an hour.
3. (Mean and Variance) Calculate the mean and variance of X.
μ = λ = 30 customers per hour
σ² = λ = 30 = 5.5² customers²

Exercise 3.10 (MR 3-103)
The number of cracks that need repair in a section of interstate highway follows a Poisson distribution with a mean of two cracks per mile.
1. Determine the probability mass function of the number of cracks (X) in 5 miles of highway.
2. Find the probability that there are at least five cracks in 5 miles of highway that require repair.
3. Calculate the mean and variance of X.
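As a supplementary sketch that is not part of the original workbook (which uses Excel), the following Python code, assuming SciPy is installed, evaluates the Poisson probabilities in Example 3.10 and Exercise 3.10.

```python
# Hedged sketch: Poisson probabilities for Example 3.10 and Exercise 3.10.
from scipy.stats import poisson

# Example 3.10: lambda = 30 customers per hour
X_hour = poisson(mu=30)
print(f"P(40 customers in an hour) = {X_hour.pmf(40):.4f}")

# Exercise 3.10: lambda = 2 cracks/mile * 5 miles = 10 cracks per 5 miles
X_5mi = poisson(mu=10)
print(f"P(at least 5 cracks in 5 miles) = {X_5mi.sf(4):.4f}")  # P(X >= 5) = 1 - P(X <= 4)
print(f"mean = {X_5mi.mean()}, variance = {X_5mi.var()}")
```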
Summary of Discrete Probability Distributions

A. Probability Distributions

Distribution        Probability Mass Function                                        Mean           Variance                    Section
Uniform             f(x) = 1/(b − a + 1), x = a, a + 1, ..., b                       (b + a)/2      [(b − a + 1)² − 1]/12       3-5
Binomial            f(x) = C(n, x) p^x (1 − p)^(n−x), x = 0, 1, ..., n               np             np(1 − p)                   3-6
Geometric           f(x) = (1 − p)^(x−1) p, x = 1, 2, ...                            1/p            (1 − p)/p²                  3-7.1
Negative Binomial   f(x) = C(x − 1, r − 1)(1 − p)^(x−r) p^r, x = r, r + 1, ...       r/p            r(1 − p)/p²                 3-7.2
Hypergeometric      f(x) = C(K, x) C(N − K, n − x)/C(N, n),                          np, p = K/N    np(1 − p)(N − n)/(N − 1)    3-8
                    x = max{0, n − (N − K)} to min{K, n}
Poisson             f(x) = e^(−λ) λ^x/x!, x = 0, 1, 2, ...                           λ              λ                           3-9
B. Comparison

Distribution        Population*                             Probability of Success (p)   No. of Trials   No. of Successes
Binomial            Infinite                                Constant                     Constant (n)    Variable (X)
Geometric           Infinite                                Constant                     Variable (X)    Constant (1)
Negative Binomial   Infinite                                Constant                     Variable (X)    Constant (r)
Hypergeometric      Finite (K successes; N − K failures)    Changing                     Constant (n)    Variable (X)
Poisson             Infinite                                Constant (p = λ/n)           Infinite        Variable (X)

* If an item selected from a population is replaced before the next trial, the size of the population is considered infinite even if it may be finite. If the probability of success p is constant, the trials are considered independent; otherwise, the trials are dependent.
EXCEL Applications

Example 3.5
2. (Probability; Binomial Distribution)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function BINOMDIST.
(3) Enter the number of successes (Number_s; x), number of trials (Trials; n), probability of a success of interest (Probability_s; p), and logical value (Cumulative; true for the cumulative distribution function and false for the probability mass function).
(4) Find the result circled with a dotted line below.
(Screen snapshot: BINOMDIST returns 0.999999836.)

Example 3.7
2. (Probability; Negative Binomial Distribution)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function NEGBINOMDIST.
(3) Enter the number of failures (Number_f; x − r), number of successes (Number_s; r), and probability of a success of interest (Probability_s; p).
(4) Find the result circled with a dotted line below.

Example 3.8
2. (Probability; Hypergeometric Distribution)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function HYPGEOMDIST.
(3) Enter the number of successes in the sample (Sample_s; x), size of the sample (Number_sample; n), number of successes in the population (Population_s; K), and size of the population (Number_pop; N).
(4) Find the result circled with a dotted line below.

Example 3.10
2. (Probability; Poisson Distribution)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function POISSON.
(3) Enter the number of events (X), mean (Mean; λ), and logical value (Cumulative; true for the cumulative distribution function and false for the probability mass function).
(4) Find the result circled with a dotted line below.
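For readers working outside Excel, a rough Python/SciPy equivalent of the four worksheet functions above is sketched below. The argument values shown are illustrative placeholders, not the values from the workbook's screen snapshots, and SciPy is assumed to be installed.

```python
# Hedged sketch: SciPy counterparts of the Excel functions used in this section.
from scipy.stats import binom, nbinom, hypergeom, poisson

# BINOMDIST(x, n, p, TRUE)   ->  binom.cdf(x, n, p)
print(binom.cdf(3, 4, 0.9))
# NEGBINOMDIST(x - r, r, p)  ->  nbinom.pmf(x - r, r, p)   (failures before the r-th success)
print(nbinom.pmf(2, 3, 0.5))
# HYPGEOMDIST(x, n, K, N)    ->  hypergeom.pmf(x, N, K, n)  (Example 3.8: N = 30, K = 10, n = 10)
print(hypergeom.pmf(1, 30, 10, 10))
# POISSON(x, mean, TRUE)     ->  poisson.cdf(x, mean)       (Example 3.10: mean = 30)
print(poisson.cdf(40, 30))
```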
Answers to Exercises

Exercise 3.1
(Probability Mass Function) P(X = 1/8) = 0.2, P(X = 1/4) = 0.7, and P(X = 3/8) = 0.1
Probability mass function of X

Exercise 3.2
(Cumulative Distribution Function) By using the probability mass function of X,
F(1/8) = P(X ≤ 1/8) = 0.2
F(1/4) = P(X ≤ 1/4) = 0.2 + 0.7 = 0.9
F(3/8) = P(X ≤ 3/8) = 0.9 + 0.1 = 1
F(x) = 0, x < 1/8; 0.2, 1/8 ≤ x < 1/4; 0.9, 1/4 ≤ x < 3/8; 1, 3/8 ≤ x
Cumulative distribution function of X

Exercise 3.3
(Mean, Variance, and Standard Deviation)
μ = Σ x f(x) = 1/8 × 0.2 + 1/4 × 0.7 + 3/8 × 0.1 = 0.2375 inch
σ² = Σ x² f(x) − μ² = (1/8)² × 0.2 + (1/4)² × 0.7 + (3/8)² × 0.1 − 0.2375² = 0.0045
σ = √V(X) = √0.0045 = 0.067 inch
Exercise 3.4
1. (Probability Mass Function; Discrete Uniform Distribution) Since x = 2, 3, 4, a = 2 and b = 4. Therefore, the probability mass function of X is
f(x) = 1/(b − a + 1) = 1/3, x = 2, 3, 4
2. (Mean and Variance) Since a = 2 and b = 4, the mean and variance of X are
μ = (b + a)/2 = 3 and σ² = [(b − a + 1)² − 1]/12 = 8/12 = 0.67
3. (Mean and Variance of a Linear Combination) The mean and variance of Y = 100X are
E(Y) = E(100X) = 100E(X) = 100 × 3 = 300
V(Y) = V(100X) = 100²V(X) = 10,000 × 0.67 = 6,667

Exercise 3.5
1. (Probability Mass Function; Binomial Distribution) Since "a passenger showing up" is a success, the probability of a success is p = 1 − 0.1 = 0.9. Thus, the probability mass function of X is
f(x) = C(125, x)(0.9)^x(0.1)^(125−x), x = 0, 1, ..., 125
2. (Probability) The flight would have empty seats if X < 120. Thus, the probability that the flight departs with empty seats is
3. (Mean and Variance)
μ = np = 125 × 0.9 = 112.5 passengers
σ² = np(1 − p) = 125 × 0.9 × (1 − 0.9) = 11.25 = 3.35²

Exercise 3.6
1. (Probability Mass Function; Geometric Distribution) Since a successful optical alignment is a success, the probability of success is p = 0.8. Thus, the probability mass function of X is
f(x) = (1 − p)^(x−1) p = 0.2^(x−1) × 0.8, x = 1, 2, ...
2. (Probability)
Exercise 3.6 (cont.)
3. (Mean and Variance)
μ = 1/p = 1/0.8 = 1.25 trials
σ² = (1 − p)/p² = 0.2/0.8² = 0.31

Exercise 3.7
1. (Probability Mass Function; Negative Binomial Distribution)
f(x) = C(x − 1, r − 1)(1 − p)^(x−r) p^r = C(x − 1, 2)(0.99)^(x−3)(0.01)³, x = 3, 4, ...
2. (Probability)
3. (Mean and Variance)
μ = r/p = 3/0.01 = 300 fills
σ² = r(1 − p)/p² = 3 × 0.99/0.01² = 29,700

Exercise 3.8
1. (Probability Mass Function; Hypergeometric Distribution)
N = 75, K = 5, n = 10
max{0, n − (N − K)} = max{0, 10 − (75 − 5)} = max{0, −60} = 0; min{K, n} = min{5, 10} = 5
f(x) = C(5, x) C(70, 10 − x)/C(75, 10), x = 0 to 5
2. (Probability)
3. (Mean and Variance)
μ = np = 10 × (1/15) = 0.67 unacceptable washers in the sample of 10 washers
σ² = np(1 − p)(N − n)/(N − 1) = 10 × (1/15) × (14/15) × (65/74) = 0.55
Exercise 3.9
(Binomial Approximation; Hypergeometric Distribution) The probability mass function of X is
f(x) = C(100, x) C(400, 50 − x)/C(500, 50), x = 0 to 50
(Note) max{0, n − (N − K)} = max{0, 50 − (500 − 100)} = max{0, −350} = 0; min{K, n} = min{100, 50} = 50
The approximate binomial distribution of the hypergeometric random variable X is
f(x) = C(50, x)(0.2)^x(0.8)^(50−x), x = 0 to 50
(Note) p = K/N = 100/500 = 0.2
Based on the binomial approximation,
P(X = 10) = f(10) = C(50, 10)(0.2)¹⁰(0.8)⁴⁰ = 0.140
Since n is relatively small to N (n/N = 50/500 = 0.1), the binomial approximation of the hypergeometric random variable X is satisfactory.

Exercise 3.10
1. (Probability Mass Function; Poisson Distribution)
λ = E(X) = 2 cracks/mile × 5 miles = 10 cracks every 5 miles
f(x) = e^(−10) 10^x/x!, x = 0, 1, 2, ...
2. (Probability)
3. (Mean and Variance)
μ = λ = 10 cracks every 5 miles
σ² = λ = 10 = 3.2²
4
Continuous Random Variables and Probability Distributions
4-1 Continuous Random Variables
4-2 Probability Distributions and Probability Density Functions
4-3 Cumulative Distribution Functions
4-4 Mean and Variance of a Continuous Random Variable
4-5 Continuous Uniform Distribution
4-6 Normal Distribution
4-7 Normal Approximation to the Binomial and Poisson Distributions
4-9 Exponential Distribution
4-10 Erlang and Gamma Distributions
4-10.1 Erlang Distribution
4-10.2 Gamma Distribution
4-11 Weibull Distribution
4-12 Lognormal Distribution
Summary of Continuous Probability Distributions
EXCEL Applications
Answers to Exercises

4-1 Continuous Random Variables
See Section 2-8. Random Variables to review the notion of a continuous random variable.

4-2 Probability Distributions and Probability Density Functions
❑ Distinguish between probability density function and cumulative distribution function.
❑ Determine the probability of a continuous random variable by using the corresponding probability density function.

Probability Distribution
Like the probability distribution of a discrete random variable, two types of functions are used to express the probability distribution of a continuous random variable X:
1. Probability Density Function (p.d.f.; f(x)): Describes the densities of probabilities over a range of X.
2. Cumulative Distribution Function (c.d.f.; F(x)): Represents the probability of X that is less than or equal to a specified value, i.e., P(X ≤ x).

Probability Density Function (p.d.f.)
The probability density function f(x) of a continuous random variable X satisfies the following properties:
(1) f(x) ≥ 0
(2) ∫ f(x) dx over the entire range of X = 1
(3) P(a ≤ X ≤ b) = ∫ from a to b of f(x) dx
The integral of f(x) from points a to b represents the probability that X belongs to the interval [a, b] (i.e., the area under the density function between a and b as shown in Figure 4-1).
(4) P(X = x) = 0
The probability of every single point is zero because the width of any point is zero. Thus, P(a < X ≤ b) = P(a < X < b) = P(a ≤ X < b) = P(a ≤ X ≤ b).
Example 4.1
Figure 4-1 Probability density function. The probability P(a ≤ X ≤ b) (shaded) is determined by integrating f(x) from a to b.

(Probability Density Function) Suppose that the probability density function of X is
f(x) = e^(−x), x > 0
     = 0, elsewhere
Determine P(X < 2), P(2 ≤ X < 4), and P(X ≥ 4).
(1) P(X < 2) = ∫ from 0 to 2 of e^(−x) dx = [−e^(−x)] from 0 to 2 = −e^(−2) + e⁰ = 0.86
(2) P(2 ≤ X < 4) = ∫ from 2 to 4 of e^(−x) dx = e^(−2) − e^(−4) = 0.12
(3) P(X ≥ 4) = ∫ from 4 to ∞ of e^(−x) dx = e^(−4) = 0.02
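The integrals above can be checked numerically. The following Python sketch (not part of the original workbook, which uses Excel; SciPy is assumed to be installed) evaluates the three probabilities of Example 4.1 both by quadrature and through the equivalent exponential distribution with λ = 1.

```python
# Hedged sketch: numerical check of the probabilities in Example 4.1.
import numpy as np
from scipy.integrate import quad
from scipy.stats import expon

f = lambda x: np.exp(-x)              # the density f(x) = e^(-x), x > 0

p1, _ = quad(f, 0, 2)                 # P(X < 2)
p2, _ = quad(f, 2, 4)                 # P(2 <= X < 4)
p3, _ = quad(f, 4, np.inf)            # P(X >= 4)
print(round(p1, 3), round(p2, 3), round(p3, 3))

X = expon(scale=1.0)                  # same distribution, lambda = 1
print(X.cdf(2), X.cdf(4) - X.cdf(2), X.sf(4))
```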
Exercise 4.1
Suppose that the probability density function of X is
f(x) = 3x², 0 < x < 1
     = 0, elsewhere
Determine P(X < 1/3), P(1/3 ≤ X < 2/3), and P(X ≥ 2/3).
4-3 Cumulative Distribution Functions
❑ Determine the cumulative distribution function of a continuous random variable.

Cumulative Distribution Function (c.d.f.)
The cumulative distribution function of a continuous random variable X is
F(x) = P(X ≤ x) = ∫ from −∞ to x of f(u) du
and satisfies the following properties:
(1) 0 ≤ F(x) ≤ 1
(2) F(x) ≤ F(y) if x ≤ y
(Note) f(x) = dF(x)/dx
Example 4.2
(Cumulative Distribution Function) In Example 4-1, the probability density function of X is
f(x) = e^(−x), x > 0
     = 0, elsewhere
Determine the cumulative distribution function of X.
By using the probability density function of X,
F(x) = P(X ≤ x) = ∫ from 0 to x of e^(−u) du = [−e^(−u)] from 0 to x = −e^(−x) + e⁰ = 1 − e^(−x)
Therefore,
F(x) = 0, x ≤ 0
     = 1 − e^(−x), x > 0

Exercise 4.2
In Exercise 4-1, the probability density function of X is
f(x) = 3x², 0 < x < 1
     = 0, elsewhere
Determine the cumulative distribution function of X.
4-4 Mean and Variance of a Continuous Random Variable
❑ Calculate the mean, variance, and standard deviation of a continuous random variable.

Mean of X (μ)
The mean (expected value) of X is
E(X) = μ = ∫ x f(x) dx

Variance of X (σ²)
The variance of X is
V(X) = σ² = ∫ (x − μ)² f(x) dx = ∫ x² f(x) dx − μ²

Standard Deviation of X
The standard deviation of X is σ = √V(X).

Example 4.3
(Mean, Variance, and Standard Deviation) In Example 4-1, the probability density function of X is
f(x) = e^(−x), x > 0
     = 0, elsewhere
Determine the mean, variance, and standard deviation of X.
(1) The mean of X is
μ = ∫ from 0 to ∞ of x e^(−x) dx
Apply the method of integration by parts as follows:
∫ x e^(−x) dx = [−x e^(−x)] from 0 to ∞ + ∫ from 0 to ∞ of e^(−x) dx = 0 + [−e^(−x)] from 0 to ∞ = 1
Thus, μ = 1.
(2) The variance of X is
σ² = ∫ from 0 to ∞ of x² e^(−x) dx − μ²
Apply the method of integration by parts as follows:
∫ x² e^(−x) dx = [−x² e^(−x)] from 0 to ∞ + 2∫ from 0 to ∞ of x e^(−x) dx = 0 + 2 × 1 = 2
Then,
σ² = 2 − 1 = 1
(3) The standard deviation of X is σ = 1.

Exercise 4.3
In Exercise 4-1, the probability density function of X is
f(x) = 3x², 0 < x < 1
     = 0, elsewhere
Determine the mean, variance, and standard deviation of X.
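The integration-by-parts results of Example 4.3 can also be reproduced symbolically. The sketch below uses SymPy (assumed installed); it is a supplementary check, not part of the original workbook.

```python
# Hedged sketch: symbolic mean and variance for the density f(x) = e^(-x), x > 0.
import sympy as sp

x = sp.symbols("x", positive=True)
f = sp.exp(-x)

mu = sp.integrate(x * f, (x, 0, sp.oo))                 # mean = 1
var = sp.integrate(x**2 * f, (x, 0, sp.oo)) - mu**2     # variance = 1
print(mu, var, sp.sqrt(var))
```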
4-5 Continuous Uniform Distribution
❑ Describe the probability distribution of a continuous uniform random variable.
❑ Determine the probability density function, mean, and variance of a continuous uniform random variable.

Continuous Uniform Random Variable
A continuous uniform random variable X has a constant probability density function over the range of X (see Figure 4-2):
f(x) = 1/(b − a), a ≤ x ≤ b
Figure 4-2 A continuous uniform distribution.
The mean and variance of X are
μ = (b + a)/2 and σ² = (b − a)²/12, where a ≤ x ≤ b
Example 4.4
Suppose that a random number generator produces real numbers that are uniformly distributed between 0 and 100.
1. (Probability Density Function; Continuous Uniform Distribution) Determine the probability density function of a random number (X) generated.
a = 0, b = 100
f(x) = 1/(b − a) = 1/(100 − 0) = 1/100, 0 ≤ x ≤ 100
2. (Probability) Find the probability that a random number (X) generated is between 10 and 90.
P(10 ≤ X ≤ 90) = (90 − 10) × (1/100) = 0.8
3. (Mean and Variance) Calculate the mean and variance of X.
μ = (b + a)/2 = (100 + 0)/2 = 50 and σ² = (b − a)²/12 = 100²/12 = 833.3

Exercise 4.4 (MR 4-34)
Suppose that the thickness of a flange is uniformly distributed between 0.95 and 1.05 mm.
1. Determine the probability density function of flange thickness (X).
2. Find the probability that the thickness of a flange (X) exceeds 1.02 mm.
3. Calculate the mean and variance of X.
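As a supplementary sketch outside the workbook's Excel demonstrations (SciPy assumed installed), the following Python code evaluates the continuous uniform quantities of Exercise 4.4.

```python
# Hedged sketch: continuous uniform distribution for Exercise 4.4.
from scipy.stats import uniform

a, b = 0.95, 1.05
X = uniform(loc=a, scale=b - a)     # SciPy parameterizes uniform by loc and width
print(f"P(X > 1.02) = {X.sf(1.02):.2f}")
print(f"mean = {X.mean():.3f}, variance = {X.var():.2e}")
```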
4-6 Normal Distribution
❑ Describe the characteristics of a normal distribution.
❑ Read the z table.
❑ Standardize a normal random variable.

Normal Random Variable
A normal random variable with mean μ and variance σ² has the probability density function
f(x) = [1/(σ√(2π))] e^(−(x − μ)²/(2σ²)), −∞ < x < ∞
A normal distribution with mean μ and variance σ², denoted as N(μ, σ²), is symmetric about μ and bell-shaped (see Figure 4-3). The symmetry of a normal curve implies
P(X < μ) = P(X > μ) = 0.5
The parameters μ and σ² determine the center and shape of the normal curve, respectively. As illustrated in Figure 4-3, the larger the value of μ, the more to the right the center of the normal curve is located; the smaller the value of σ², the sharper the normal curve.
Figure 4-3 Normal curves with selected values of μ and σ².

Probabilities of Normal Distribution
Selected probabilities of a normal distribution are displayed in Figure 4-4. The area under the normal curve beyond ±3σ is quite small (less than 0.01). Since 99.7% of possible values of X are within the interval (μ − 3σ, μ + 3σ), 6σ is referred to as the width of a normal distribution.
Figure 4-4 Probabilities of a normal distribution.

Standard Normal Random Variable
The standard normal random variable (denoted as Z) is a normal random variable with mean μ = 0 and variance σ² = 1. The cumulative distribution function of Z is denoted as Φ(z) = P(Z ≤ z). The z table (see Appendix Table II in MR) provides values of Φ(z). In the table, the z column shows z values with tenth digits and the column headings indicate hundredth digits of z values.
(e.g.) Reading the z Table
(1) Φ(−1.57) = P(Z ≤ −1.57) = 0.058
(2) Φ(1.57) = P(Z ≤ 1.57) = 0.942
(3) P(Z > 1.57) = 1 − P(Z ≤ 1.57) = 1 − 0.942 = 0.058
(4) P(−1.57 ≤ Z ≤ 1.57) = P(Z ≤ 1.57) − P(Z ≤ −1.57) = Φ(1.57) − Φ(−1.57) = 0.942 − 0.058 = 0.884
Standardization
An arbitrary normal random variable X with μ and σ² can be transformed into Z (with μ = 0 and σ² = 1), as illustrated in Figure 4-5, by using the algebraic relationship between X and Z:
Z = (X − μ)/σ
Figure 4-5 Standardizing a normal random variable.
After standardizing the normal random variable X, probabilities of X are determined by using the z table.

Example 4.5
(Standardization) Suppose that the life length of an INFINITY light bulb follows a normal distribution with μ = 750 hours and σ² = 40². Calculate P(740 ≤ X ≤ 770).
P(740 ≤ X ≤ 770) = P((740 − 750)/40 ≤ Z ≤ (770 − 750)/40) = Φ(0.5) − Φ(−0.25) = 0.691 − 0.401 = 0.290

Exercise 4.5 (MR 4-57)
The sick-leave time of employees in a firm in a month is normally distributed with a mean of 100 hours and a standard deviation of 20 hours. Find the probability that the sick-leave time of an employee in a month exceeds 130 hours.
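For readers who prefer software to the z table, the following Python sketch (SciPy assumed installed; not part of the original workbook) computes the probabilities of Example 4.5 and Exercise 4.5 directly.

```python
# Hedged sketch: normal probabilities without the z table.
from scipy.stats import norm

# Example 4.5: light bulb life, mu = 750 hours, sigma = 40 hours
bulb = norm(loc=750, scale=40)
print(f"P(740 <= X <= 770) = {bulb.cdf(770) - bulb.cdf(740):.3f}")

# Exercise 4.5: sick-leave time, mu = 100 hours, sigma = 20 hours
leave = norm(loc=100, scale=20)
print(f"P(X > 130) = {leave.sf(130):.4f}")
```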
4-7 Normal Approximation to the Binomial and Poisson Distributions
❑ Apply the normal approximation to the binomial, hypergeometric, and Poisson random variables.

Normal Approximation to Binomial Distribution
The binomial distribution B(n, p) approximates to the normal distribution with μ = np and σ² = np(1 − p) if np > 5 and n(1 − p) > 5 (see Figure 4-19 on page 119 of MR). However, when p is so close to 0 or 1 that np < 5 or n(1 − p) < 5, the corresponding binomial distribution is significantly skewed (see Figure 4-20 on page 120 of MR); thus, the normal approximation is inappropriate.
When the normal approximation is applicable, the probability of a binomial random variable X with μ = np and σ² = np(1 − p) can be determined by using the standard normal random variable
Z = (X − μ)/σ = (X − np)/√(np(1 − p))

Normal Approximation to Hypergeometric Distribution
Recall that the binomial approximation is applicable to a hypergeometric distribution if the sample size n is relatively small to the population size N, i.e., n/N < 0.1 (see Section 3-8. Hypergeometric Distribution). Consequently, the normal approximation can be applied to the hypergeometric distribution with p = K/N (K: number of successes in N) if n/N < 0.1, np > 5, and n(1 − p) > 5. Thus, when the normal approximation is applicable, the probability of a hypergeometric random variable X with μ = np and σ² = np(1 − p)(N − n)/(N − 1) can be determined by using the standard normal random variable
Z = (X − μ)/σ = (X − np)/√(np(1 − p)(N − n)/(N − 1)), where p = K/N

Normal Approximation to Poisson Distribution
Recall that a Poisson distribution with μ = λ and σ² = λ is the limiting form of a binomial distribution with μ = np and σ² = np(1 − p) (see Section 3-9. Poisson Distribution). Therefore, the normal approximation is applicable to a Poisson distribution if λ > 5.
Accordingly, when the normal approximation is applicable, the probability of a Poisson random variable X with μ = λ and σ² = λ can be determined by using the standard normal random variable
Z = (X − μ)/σ = (X − λ)/√λ
Example 4.6
1. (Normal Approximation; Binomial Distribution) Suppose that X is a binomial random variable with n = 100 and p = 0.1. Find the probability P(X ≤ 15) based on the corresponding binomial distribution and approximate normal distribution. Is the normal approximation reasonable?
The probability mass function, mean, and variance of the binomial random variable X are
f(x) = C(100, x)(0.1)^x(0.9)^(100−x), x = 0, 1, ..., 100
μ = np = 100 × 0.1 = 10 and σ² = np(1 − p) = 100 × 0.1 × 0.9 = 9
By applying the normal approximation to the binomial random variable X,
P(X ≤ 15) = P(Z ≤ (15 − 10)/3) = P(Z ≤ 1.67) = 0.953
Since np = 10 > 5 and n(1 − p) = 90 > 5, the normal approximation to the binomial random variable X is satisfactory.
2. (Normal Approximation; Hypergeometric Distribution) Suppose that X has a hypergeometric distribution with N = 1,000, K = 100, and n = 100. Find the probability P(X ≤ 15) based on the corresponding hypergeometric distribution and approximate normal distribution. Is the normal approximation reasonable?
The probability mass function, mean, and variance of the hypergeometric random variable X are
f(x) = C(100, x) C(900, 100 − x)/C(1000, 100), x = 0 to 100
(Note) max{0, n − (N − K)} = max{0, 100 − (1,000 − 100)} = max{0, −800} = 0; min{K, n} = min{100, 100} = 100
μ = np = 100 × 0.1 = 10 and σ² = np(1 − p)(N − n)/(N − 1) = 100 × 0.1 × 0.9 × (900/999) = 2.85²
By applying the normal approximation to the hypergeometric random variable X,
P(X ≤ 15) = P(Z ≤ (15 − 10)/2.85) = P(Z ≤ 1.75) = 0.960
Since n/N = 0.1, np = 10 > 5, and n(1 − p) = 90 > 5, the normal approximation to the hypergeometric random variable X is satisfactory.
3. (Normal Approximation; Poisson Distribution) Suppose that X has a Poisson distribution with λ = 10. Find the probability P(X ≤ 15) based on the corresponding Poisson distribution and approximate normal distribution. Is the normal approximation reasonable?
The probability mass function, mean, and variance of the Poisson random variable X are
f(x) = e^(−10) 10^x/x!, x = 0, 1, 2, ...
μ = λ = 10 and σ² = λ = 10
By applying the normal approximation to the Poisson random variable X,
P(X ≤ 15) = P(Z ≤ (15 − 10)/√10) = P(Z ≤ 1.58) = 0.943
Since λ = 10 > 5, the normal approximation to the Poisson random variable X is satisfactory.

Exercise 4.6 (MR 4-61)
1. Suppose that X is a binomial random variable with n = 200 and p = 0.4. Find the probability P(X ≤ 70) based on the corresponding binomial distribution and approximate normal distribution. Is the normal approximation reasonable?
2. Suppose that X has a hypergeometric distribution with N = 2,000, K = 800, and n = 200. Find the probability P(X ≤ 70) based on the corresponding hypergeometric distribution and approximate normal distribution. Is the normal approximation reasonable?
3. Suppose that X has a Poisson distribution with λ = 80. Find the probability P(X ≤ 70) based on the corresponding Poisson distribution and approximate normal distribution. Is the normal approximation reasonable?
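To see how close the approximation is, the following Python sketch (SciPy assumed installed; not part of the original workbook) compares the exact binomial probability of Example 4.6, part 1, with its normal approximation, using no continuity correction so as to match the workbook's calculation.

```python
# Hedged sketch: exact vs. normal-approximation probability for Example 4.6, part 1.
import math
from scipy.stats import binom, norm

n, p = 100, 0.1
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

exact = binom(n, p).cdf(15)        # exact binomial P(X <= 15)
approx = norm(mu, sigma).cdf(15)   # normal approximation, no continuity correction
print(f"exact = {exact:.3f}, normal approx = {approx:.3f}")
```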
4-9 Exponential Distribution
❑ Describe the probability distribution of an exponential random variable.
❑ Explain the relationship between exponential and Poisson random variables.
❑ Explain the lack of memory property of an exponential random variable.
❑ Determine the probability density function, mean, and variance of an exponential random variable.
Exponential Random Variable
An exponential random variable represents the length of an interval (in time, space, etc.) from a certain point until the next success in a Poisson process. The exponential distribution parallels the geometric distribution by being concerned with the probability of the next success.
The probability density function and cumulative distribution function of an exponential random variable X with parameter λ (average number of successes per unit interval) are
f(x) = λe^(−λx) and F(x) = 1 − e^(−λx), x ≥ 0, λ > 0
The mean and variance of X are
μ = 1/λ and σ² = 1/λ²
(Caution) Like the Poisson distribution, use consistent units to define an exponential random variable X and the corresponding parameter λ.
An exponential curve is skewed to the right, decreasing exponentially (see Figure 4-6); as λ increases, the skewness of the curve increases.
Figure 4-6 Exponential curves with selected values of λ.

Relationship between Exponential and Poisson Distributions
The exponential and Poisson distributions are interchangeable. Let an exponential random variable X with parameter λ (average number of successes during the unit length of time) denote the time until the next success. Let N denote the number of successes during a specific time interval x, i.e., N is a Poisson random variable with parameter λx. Then, the following relationship exists between X and N:
P(X > x) = P(N = 0) = e^(−λx)(λx)⁰/0! = e^(−λx)
Therefore,
F(x) = P(X ≤ x) = 1 − P(X > x) = 1 − e^(−λx)
By differentiating F(x), the probability density function of the exponential random variable X is f(x) = λe^(−λx).
Exponential vs. Poisson Distributions
In the exponential distribution the number of successes is fixed as one and the length of a time interval between successes is a random variable, whereas in the Poisson distribution the number of successes is a random variable and the length of a time interval is fixed (see Table 4-1).

Table 4-1 The Characteristics of Poisson and Exponential Distributions

Distribution   Population   Rate of Success   Length of Time Interval   No. of Successes
Poisson        Infinite     Constant (λ)      Constant                  Variable (X)
Exponential    Infinite     Constant (λ)      Variable (X)              Constant (1)

Lack of Memory Property
As introduced in Section 3-7.1 Geometric Distribution, the lack of memory property means that the system does not remember the history of previous outcomes, so the timer of the system can be reset at any time, i.e.,
P(X < t + Δt | X > t) = P(X < Δt)
The lack of memory property of an exponential distribution implies that the value of λ does not change with X. For example, in a reliability study of failure of a device, if the failure rate λ is constant (not increasing with time due to wear), the time to failure of the device has the lack of memory property. The exponential distribution is the only continuous distribution having the lack of memory property. (Recall that the geometric distribution is the only discrete distribution having the lack of memory property.)
(Proof) The lack of memory property
The conditional probability P(X < t + Δt | X > t) is
P(X < t + Δt | X > t) = P(t < X < t + Δt)/P(X > t)
= [(1 − e^(−λ(t+Δt))) − (1 − e^(−λt))]/[1 − (1 − e^(−λt))]
= [e^(−λt) − e^(−λ(t+Δt))]/e^(−λt)
= 1 − e^(−λΔt)
The probability P(X < Δt) is
P(X < Δt) = F(Δt) = 1 − e^(−λΔt)
Therefore, P(X < t + Δt | X > t) = P(X < Δt).

Example 4.7
The number of customers who come to the YUMMY donut store follows a Poisson process with a mean of 5 customers every 10 minutes.
1. (Probability Density Function; Exponential Distribution) Determine the probability density function of the time (X; unit: min.) until the next customer arrives.
For a consistent unit for X and λ,
λ = 5 customers/10 min = 0.5 customers/min
Therefore, the probability density function of X is
f(x) = λe^(−λx) = 0.5e^(−0.5x), x ≥ 0 (unit: min.)
2. (Probability) Find the probability that there are no customers for at least 2 minutes by using the corresponding exponential and Poisson distributions.
By using the exponential distribution,
P(X > 2) = 1 − F(2) = 1 − (1 − e^(−1)) = e^(−1) = 0.368
The parameter of the corresponding Poisson distribution is
λx = 0.5 customers/min × 2 min = 1
Therefore,
P(N = 0) = e^(−1)(1)⁰/0! = 0.368
As shown above, the probability from the exponential distribution is equal to the probability from the Poisson distribution.
3. (Mean and Variance) Find the mean and variance of X.
μ = 1/λ = 1/0.5 = 2 min until the next customer arrival
σ² = 1/λ² = 1/0.5² = 4 min²

Exercise 4.7 (MR 4-85)
The lifetime (X; unit: hour) of a mechanical assembly in a vibration test is exponentially distributed with a mean of 1 failure every 400 hours.
1. Determine the probability density function of the operating time (X) of the assembly before failure.
2. Find the probability that the assembly fails the vibration test in less than 100 hours by using the corresponding exponential and Poisson distributions.
3. Find the mean and variance of X.
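As a supplementary sketch (SciPy assumed installed; the workbook itself uses Excel's EXPONDIST), the following Python code evaluates the exponential probabilities of Example 4.7 and Exercise 4.7, including the Poisson cross-check.

```python
# Hedged sketch: exponential probabilities; SciPy's expon uses scale = 1/lambda.
from scipy.stats import expon, poisson

# Example 4.7: lambda = 0.5 customers/min
X = expon(scale=1 / 0.5)
print(f"P(no customers for at least 2 min) = {X.sf(2):.3f}")
print(f"same probability via Poisson: {poisson(mu=0.5 * 2).pmf(0):.3f}")

# Exercise 4.7: lambda = 1/400 failures/hour
T = expon(scale=400)
print(f"P(failure before 100 hours) = {T.cdf(100):.3f}")
```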
4-10 Erlang and Gamma Distributions
4-10.1 Erlang Distribution
❑ Describe the probability distribution of an Erlang random variable.
❑ Explain the relationship between Erlang and exponential random variables.
❑ Explain the relationship between Erlang and Poisson random variables.
❑ Determine the probability density function, mean, and variance of an Erlang random variable.

Erlang Random Variable
An Erlang random variable represents the length of an interval (in time, space, etc.) until the next r (positive integer) successes in a Poisson process. If X1, X2, ..., Xr are mutually independent exponential random variables and each represents the length of an interval for the ith success after the (i − 1)th success, the Erlang random variable X is the sum of the r exponential random variables:
X = X1 + X2 + ... + Xr
The Erlang distribution parallels the negative binomial distribution by being concerned with the probability of the next r successes. (Recall that the exponential distribution parallels the geometric distribution by being concerned with the probability of the next success.)
The probability density function of an Erlang random variable X with parameters λ (average number of successes per unit interval) and r (number of successes) is
f(x) = λ^r x^(r−1) e^(−λx)/(r − 1)!, x > 0
The mean and variance of X are
μ = r/λ and σ² = r/λ²
As illustrated in Figure 4-7, an Erlang curve is skewed to the right like an exponential curve; an Erlang curve with r = 1 becomes an exponential curve.
"'"LI 0
L
2
6
.
1
1
I
5
6
1
.
7
L
8
.
I
9
3
I0
b
-
A.
-
Figure 4-7 Erlang curves with selected values of λ and r.

Relationship between Erlang and Poisson Distributions
Like the exponential distribution, the Erlang and Poisson distributions are interchangeable. Let an Erlang random variable X with parameter λ (average number of successes during the unit length of time) denote the time until the next r successes. Let N denote the number of successes during a specific time interval x, i.e., N is a Poisson random variable with parameter λx. Then, the following relationship exists between X and N:
P(X > x) = P(N ≤ r − 1) = Σ (k = 0 to r − 1) e^(−λx)(λx)^k/k!

Erlang vs. Poisson Distributions
In the Erlang distribution the number of successes is fixed as r and the length of a time interval until the next r successes is a random variable, whereas in the Poisson distribution the number of successes is a random variable and the length of a time interval is fixed (see Table 4-2).

Table 4-2 The Characteristics of Poisson and Erlang Distributions

Distribution   Population   Rate of Success   Length of Time Interval   No. of Successes
Poisson        Infinite     Constant (λ)      Constant                  Variable (X)
Erlang         Infinite     Constant (λ)      Variable (X)              Constant (r)

Example 4.8
The number of customers coming to the YUMMY donut store follows a Poisson process with a mean of 5 customers every 10 minutes.
1. (Probability Density Function; Erlang Distribution) Determine the probability density function of the time (X; unit: min.) until r = 3 customers come to the donut store.
For a consistent unit for X and λ, λ = 5 customers/10 min = 0.5 customers/min.
Therefore, the probability density function of X with r = 3 customers is
f(x) = λ^r x^(r−1) e^(−λx)/(r − 1)! = 0.5³ x² e^(−0.5x)/(3 − 1)! = 0.0625x² e^(−0.5x), x > 0 (unit: min.)
2. (Probability) Find the probability that three customers come to the donut store in less than 10 min by using the corresponding Erlang and Poisson distributions.
By using the Erlang distribution,
P(X < 10) = ∫ from 0 to 10 of 0.0625x² e^(−0.5x) dx = 0.875 (this integral is solved by integration by parts, which is not shown here)
The parameter of the corresponding Poisson distribution is
λx = 0.5 customers/min × 10 min = 5
Therefore,
P(N ≥ 3) = 1 − Σ (k = 0 to 2) e^(−5)5^k/k! = 0.875
The probability from the Erlang distribution is equal to the probability from the Poisson distribution.
3. (Mean and Variance) Find the mean and variance of X.
μ = r/λ = 3/0.5 = 6 min for r = 3 customers
σ² = r/λ² = 3/0.5² = 12 min²

Exercise 4.8 (MR 4-104)
The time between process problems in a manufacturing line is exponentially distributed with a mean of 30 days.
1. Determine the probability density function of the time (X; unit: day) until the fourth problem (r = 4).
2. Find the probability that the time until the fourth problem exceeds 120 days by using the corresponding Erlang and Poisson distributions.
3. Find the mean and variance of X.
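The Erlang integrals above can be avoided with software. The following Python sketch (SciPy assumed installed; not part of the original workbook) uses the gamma distribution, of which the Erlang distribution is the integer-r special case, for Example 4.8 and Exercise 4.8.

```python
# Hedged sketch: Erlang probabilities via the gamma distribution (scale = 1/lambda).
from scipy.stats import gamma, poisson

# Example 4.8: r = 3 customers, lambda = 0.5 per minute
X = gamma(a=3, scale=1 / 0.5)
print(f"P(X < 10) = {X.cdf(10):.3f}")                # 0.875
print(f"Poisson check: {poisson(mu=5).sf(2):.3f}")   # P(N >= 3) with lambda*x = 5

# Exercise 4.8: r = 4 problems, lambda = 1/30 per day
T = gamma(a=4, scale=30)
print(f"P(T > 120) = {T.sf(120):.3f}")               # 0.433
```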
4-10.2 Gamma Distribution
Explain the relationship between gamma and Erlang random variables.
Gamma Random Variable
A gamma random variable generalizes an Erlang random variable: the parameter r of a gamma random variable is any positive real number, whereas an Erlang random variable has a positive integer r.
The probability density function of a gamma random variable X with parameters λ and r (> 0) is
f(x) = λ^r x^(r−1) e^(−λx)/Γ(r), x > 0
(Note) Gamma function Γ(r)
A gamma function has the following properties:
(1) Γ(r) = (r − 1)Γ(r − 1)
(2) Γ(r) = (r − 1)! if r is a positive integer; Γ(1) = 0! = 1
(e.g.) Gamma function
(1) Γ(4) = (4 − 1)! = 3! = 6
(2) Γ(1.5) = (r − 1)Γ(r − 1) = (1.5 − 1)Γ(0.5) = 0.5√π = 0.886
The mean and variance of X are
μ = r/λ and σ² = r/λ²

Gamma vs. Erlang Distributions
While the Erlang distribution has a positive integer r, the gamma distribution has a positive real number r (see Table 4-3).

Table 4-3 The Characteristics of Erlang and Gamma Distributions

Distribution   Population   Rate of Success   Length of Interval   No. of Successes
Erlang         Infinite     Constant (λ)      Variable (X)         Constant (positive integer r)
Gamma          Infinite     Constant (λ)      Variable (X)         Constant (positive real number r)
4-11 Weibull Distribution
❑ Describe the probability distribution of a Weibull random variable.
❑ Explain the relationship between Weibull and exponential random variables.
❑ Determine the probability density function, mean, and variance of a Weibull random variable.

Weibull Random Variable
A Weibull random variable represents the length of an interval (in time, space, etc.) until the next success for various types (increasing, decreasing, or constant) of success rates. Therefore, the Weibull random variable is more flexible than the exponential and gamma random variables by dealing with various types of success rates. The probability density function and cumulative distribution function of a Weibull random variable X with shape parameter β (> 0) and scale parameter δ (> 0) are
f(x) = (β/δ)(x/δ)^(β−1) e^(−(x/δ)^β) and F(x) = 1 − e^(−(x/δ)^β), x > 0
When the shape parameter β = 1, the Weibull distribution becomes the exponential distribution with λ = 1/δ.
Figure 4-8 Weibull curves with selected values of β and δ.
Weibull Random Variable (cont.)
The mean and variance of X are
μ = δ Γ(1 + 1/β) and σ² = δ² Γ(1 + 2/β) − δ²[Γ(1 + 1/β)]²

Weibull vs. Exponential Distributions
In the Weibull distribution the rate of success can vary, whereas in the exponential distribution the rate of success is constant (see Table 4-4).
Table 4-4 The Characteristics of Exponential and Weibull Distributions

Distribution   Population   Rate of Success      Length of Interval   No. of Successes
Exponential    Infinite     Constant (λ)         Variable (X)         Constant (1)
Weibull        Infinite     Changeable (β, δ)    Variable (X)         Constant (1)

Example 4.9
Assume that the time to failure of a hair braiding machine follows a Weibull distribution with β = 0.5 and δ = 500 hours.
1. (Probability Density Function; Weibull Distribution) Determine the probability density function of the time (X) until failure of the braiding machine.
2. (Probability) Find the probability that the braiding machine lasts longer than 300 hours without failure.
3. (Mean and Variance) Find the mean and variance of X.
μ = δ Γ(1 + 1/β) = 500 Γ(1 + 1/0.5) = 500 Γ(3) = 500 × 2! = 1,000 hrs until failure

Exercise 4.9 (MR 4-114)
Suppose that the life of a recirculating pump follows a Weibull distribution with parameters β = 2 and δ = 700 hours.
1. Determine the probability density function of the time to failure (X) of the pump.
2. Find the probability that the pump lasts less than 700 hours.
3. Find the mean and variance of X.
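As a supplementary sketch (SciPy assumed installed; the workbook itself uses Excel's WEIBULL function), the following Python code evaluates the Weibull quantities of Example 4.9 and Exercise 4.9.

```python
# Hedged sketch: Weibull probabilities; weibull_min uses c = beta (shape), scale = delta.
from scipy.stats import weibull_min

# Example 4.9: beta = 0.5, delta = 500 hours
X = weibull_min(c=0.5, scale=500)
print(f"P(X > 300) = {X.sf(300):.3f}")
print(f"mean = {X.mean():.0f} hours")        # delta * Gamma(1 + 1/beta) = 1000

# Exercise 4.9: beta = 2, delta = 700 hours
pump = weibull_min(c=2, scale=700)
print(f"P(pump lasts < 700 hours) = {pump.cdf(700):.3f}")
print(f"mean = {pump.mean():.1f} hours")     # about 620.4
```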
4-12 Lognormal Distribution
❑ Describe the probability distribution of a lognormal random variable.
❑ Determine the probability density function, mean, and variance of a lognormal random variable.

Lognormal Random Variable
A random variable X is called lognormally distributed if its natural logarithm, W = ln X, follows a normal distribution with mean θ and variance ω². Like the Weibull distribution, the lognormal distribution is used to model the lifetime of a product whose failure rate is increasing over time. The probability density function (see Figure 4-28 in MR) and cumulative distribution function of X with parameters θ and ω² are
f(x) = [1/(xω√(2π))] e^(−(ln x − θ)²/(2ω²)) and F(x) = Φ((ln x − θ)/ω), x > 0
The mean and variance of X are
μ = e^(θ + ω²/2) and σ² = e^(2θ + ω²)(e^(ω²) − 1)

Example 4.10
Assume that the time (unit: hours) to failure of a newly developed electronic device follows a lognormal distribution with parameters θ = −1 and ω² = 2².
1. (Probability Density Function; Lognormal Distribution) Determine the probability density function of the time (X) until failure of the device.
f(x) = [1/(xω√(2π))] e^(−(ln x − θ)²/(2ω²)) = [1/(2x√(2π))] e^(−(ln x + 1)²/8), x > 0
2. (Probability) Find the probability that the device lasts longer than 100 hours without failure.
3. (Mean and Variance) Find the mean and variance of X.
μ = e^(θ + ω²/2) = e^(−1 + 2) = 2.7 hours
σ² = e^(2θ + ω²)(e^(ω²) − 1) = e^(2(−1) + 4)(e⁴ − 1) = 19.9²

Exercise 4.10 (MR 4-120)
Suppose that the length of time (in seconds) that a user views a page on a Web site before moving to another page is a lognormal random variable with parameters θ = 0.5 and ω² = 1.
1. Determine the probability density function of the time (X) until a user moves from the page.
2. Find the probability that a page is viewed for more than 10 seconds.
3. Find the mean and variance of X.
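As a supplementary sketch (SciPy assumed installed; the workbook uses Excel's LOGNORMDIST), the following Python code evaluates the lognormal quantities of Example 4.10 and Exercise 4.10.

```python
# Hedged sketch: lognormal probabilities; for lognorm, s = omega and scale = exp(theta).
import math
from scipy.stats import lognorm

# Example 4.10: theta = -1, omega^2 = 2^2
X = lognorm(s=2, scale=math.exp(-1))
print(f"P(X > 100) = {X.sf(100):.4f}")
print(f"mean = {X.mean():.1f}, variance = {X.var():.1f}")

# Exercise 4.10: theta = 0.5, omega^2 = 1
view = lognorm(s=1, scale=math.exp(0.5))
print(f"P(viewed > 10 s) = {view.sf(10):.2f}")     # about 0.04
print(f"mean = {view.mean():.1f} s")               # about 2.7
```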
Summary of Continuous Probability Distributions

A. Probability Distributions

Distribution   Probability Density Function                                        Mean            Variance                           Section
Uniform        f(x) = 1/(b − a), a ≤ x ≤ b                                         (b + a)/2       (b − a)²/12                        4-5
Normal         f(x) = [1/(σ√(2π))] e^(−(x − μ)²/(2σ²)), −∞ < x < ∞                 μ               σ²                                 4-6
Exponential    f(x) = λe^(−λx), x ≥ 0, λ > 0                                       1/λ             1/λ²                               4-9
Erlang         f(x) = λ^r x^(r−1) e^(−λx)/(r − 1)!, x > 0, λ > 0, r = 1, 2, ...    r/λ             r/λ²                               4-10.1
Gamma          f(x) = λ^r x^(r−1) e^(−λx)/Γ(r), x > 0, λ > 0, r > 0                r/λ             r/λ²                               4-10.2
Weibull        f(x) = (β/δ)(x/δ)^(β−1) e^(−(x/δ)^β), x > 0                         δΓ(1 + 1/β)     δ²Γ(1 + 2/β) − δ²[Γ(1 + 1/β)]²     4-11
Lognormal      f(x) = [1/(xω√(2π))] e^(−(ln x − θ)²/(2ω²)), x > 0                  e^(θ + ω²/2)    e^(2θ + ω²)(e^(ω²) − 1)            4-12
B. Comparison

Distribution   Population   Rate of Success     Length of Time Interval   No. of Successes
Poisson        Infinite     Constant (λ)        Constant                  Variable (X)
Exponential    Infinite     Constant (λ)        Variable (X)              Constant (1)
Erlang         Infinite     Constant (λ)        Variable (X)              Constant (positive integer r)
Gamma          Infinite     Constant (λ)        Variable (X)              Constant (positive real number r)
Weibull        Infinite     Changeable (β, δ)   Variable (X)              Constant (1)
EXCEL Applications

(Reading the z Table)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function NORMSDIST.
(3) Enter the value of Z (Z).
(4) Find the result circled with a dotted line below.
(Screen snapshot: NORMSDIST with Z = −1.57 returns 0.058207562.)
Example 4.5
(Standardization)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function NORMDIST.
(3) Enter the value of X (X), mean (Mean; μ), standard deviation (Standard_dev; σ), and logical value (Cumulative; true for the cumulative distribution function and false for the probability density function).
(4) Find the result circled with a dotted line below.
(Screen snapshot: NORMDIST returns 0.691462467.)
Example 4.7
2. (Probability; Exponential Distribution)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function EXPONDIST.
(3) Enter the value of X (X), average number of successes per unit interval (Lambda; λ), and logical value (Cumulative; true for the cumulative distribution function and false for the probability density function).
(4) Find the results circled with a dotted line below.
(Screen snapshot: EXPONDIST returns 0.36787944.)
Example 4.8
2. (Probability; Erlang Distribution)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function GAMMADIST. Note that the gamma distribution is a general form of the Erlang distribution.
(3) Enter the value of X (X), number of successes (Alpha; r), average time until the next event (Beta; 1/λ), and logical value (Cumulative; true for the cumulative distribution function and false for the probability density function). Note that a difference in notation exists between Excel and MR.
(4) Find the result circled with a dotted line below.
(Screen snapshot: GAMMADIST returns 0.875347981.)
Example 4.9
2. (Probability; Weibull Distribution)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function WEIBULL.
(3) Enter the value of X (X), shape parameter (Alpha; β), scale parameter (Beta; δ), and logical value (Cumulative; true for the cumulative distribution function and false for the probability density function). Note that a difference in notation exists between Excel and MR.
(4) Find the results circled with a dotted line below.
Example 4.10
2. (Probability; Lognormal Distribution)
(1) Choose Insert > Function.
(2) Select the category Statistical and the function LOGNORMDIST.
(3) Enter the values of X (X) and the two parameters (Mean, θ; Standard_dev, ω) of the lognormal distribution.
(4) Find the results circled with a dotted line below.
(Screen snapshot: LOGNORMDIST with X = 100, Mean = −1, and Standard_dev = 2 returns 0.99746519.)
Answers to Exercises

Exercise 4.1
(Probability Density Function)
(1) P(X < 1/3) = ∫ from 0 to 1/3 of 3x² dx = [x³] = (1/3)³ = 1/27
(2) P(1/3 ≤ X < 2/3) = (2/3)³ − (1/3)³ = 7/27
(3) P(X ≥ 2/3) = 1 − (2/3)³ = 19/27

Exercise 4.2
(Cumulative Distribution Function) By using the probability density function of X,
F(x) = P(X ≤ x) = ∫ from 0 to x of 3u² du = [u³] = x³
Therefore,
F(x) = 0, x ≤ 0; x³, 0 < x < 1; 1, 1 ≤ x

Exercise 4.3
(Mean, Variance, and Standard Deviation)
(1) The mean of X is μ = ∫ from 0 to 1 of x(3x²) dx = 3/4 = 0.75
(2) The variance of X is σ² = ∫ from 0 to 1 of x²(3x²) dx − μ² = 3/5 − 0.75² = 0.038
(3) The standard deviation of X is σ = √V(X) = √0.038 = 0.19
Exercise 4.4
1. (Probability Density Function; Continuous Uniform Distribution)
a = 0.95, b = 1.05
f(x) = 1/(b − a) = 1/(1.05 − 0.95) = 10, 0.95 ≤ x ≤ 1.05
2. (Probability)
P(X > 1.02) = ∫ from 1.02 to 1.05 of f(x) dx = 10(1.05 − 1.02) = 0.3
3. (Mean and Variance)
μ = (b + a)/2 = (1.05 + 0.95)/2 = 1.0
σ² = (b − a)²/12 = (1.05 − 0.95)²/12 = 8.3 × 10⁻⁴ = 0.029²

Exercise 4.5
(Standardization)
P(X > 130) = P(Z > (130 − 100)/20) = P(Z > 1.5) = 1 − 0.933 = 0.067

Exercise 4.6
1. (Normal Approximation; Binomial Distribution) The probability mass function, mean, and variance of the binomial random variable X are
f(x) = C(200, x)(0.4)^x(0.6)^(200−x), x = 0, 1, ..., 200
μ = np = 200 × 0.4 = 80 and σ² = np(1 − p) = 200 × 0.4 × 0.6 = 48
By applying the normal approximation to the binomial random variable X,
P(X ≤ 70) = P(Z ≤ (70 − 80)/√48) = P(Z ≤ −1.44) = 0.075
Since np = 80 > 5 and n(1 − p) = 120 > 5, the normal approximation to the binomial random variable X is satisfactory.
Exercise 4.6 (cont.)
2. (Normal Approximation; Hypergeometric Distribution) The probability mass function, mean, and variance of X are
f(x) = C(800, x) C(1200, 200 − x)/C(2000, 200), x = 0 to 200
(Note) max{0, n − (N − K)} = max{0, 200 − (2000 − 800)} = max{0, −1000} = 0; min{K, n} = min{800, 200} = 200
μ = np = 200 × 0.4 = 80 and σ² = np(1 − p)(N − n)/(N − 1) = 200 × 0.4 × 0.6 × (1800/1999) = 43.2
By applying the normal approximation to X,
P(X ≤ 70) = P(Z ≤ (70 − 80)/√43.2) = P(Z ≤ −1.52) = 0.064
Since n/N = 0.1, np = 80 > 5, and n(1 − p) = 120 > 5, the normal approximation to the hypergeometric random variable X is satisfactory.
3. (Normal Approximation; Poisson Distribution) The probability mass function, mean, and variance of X are
f(x) = e^(−80) 80^x/x!, x = 0, 1, 2, ...
μ = λ = 80 and σ² = λ = 80
By applying the normal approximation to X,
P(X ≤ 70) = P(Z ≤ (70 − 80)/√80) = P(Z ≤ −1.12) = 0.131
Since λ = 80 > 5, the normal approximation to X is satisfactory.
Exercise 4.7
1. (Probability Density Function; Exponential Distribution) For a consistent unit for X and λ,
λ = 1 failure/400 hrs = 0.0025 failure/hr
Therefore, the probability density function of X is
f(x) = λe^(−λx) = 0.0025e^(−0.0025x), x ≥ 0 (unit: hr)
2. (Probability) By using the exponential distribution,
P(X < 100) = F(100) = 1 − e^(−0.25) = 0.221
The parameter of the corresponding Poisson distribution is
x = 100 hrs, λx = 0.0025 failure/hr × 100 hrs = 0.25
Therefore,
P(N ≥ 1) = 1 − P(N = 0) = 1 − e^(−λx)(λx)⁰/0! = 1 − e^(−0.25) = 0.221
As shown above, the probability from the exponential distribution is equal to the probability from the Poisson distribution.
3. (Mean and Variance)
μ = 1/λ = 1/(1/400) = 400 hrs until failure
σ² = 1/λ² = 400² = 160,000 hrs²
Exercise 4.8
1. (Probability Density Function; Erlang Distribution) For a consistent unit for X and λ,
λ = 1 problem/30 days = 1/30 problem/day
Therefore, the probability density function of X with r = 4 problems is
f(x) = λ⁴ x³ e^(−λx)/(4 − 1)! = 2.06 × 10⁻⁷ x³ e^(−0.033x), where x ≥ 0 (unit: day)
2. (Probability) By using the Erlang distribution,
P(X > 120) = ∫ from 120 to ∞ of 2.06 × 10⁻⁷ x³ e^(−0.033x) dx = 0.433 (this integral is solved by integration by parts, which is not shown here)
The parameter of the corresponding Poisson distribution is
λx = 1/30 problem/day × 120 days = 4
Therefore,
P(N ≤ 3) = Σ (k = 0 to 3) e^(−4)4^k/k! = 0.433
The probability from the Erlang distribution is equal to the probability from the Poisson distribution.
3. (Mean and Variance)
μ = r/λ = 4/(1/30) = 120 days
σ² = r/λ² = 4 × 30² = 3,600 days²

Exercise 4.9
1. (Probability Density Function; Weibull Distribution)
2. (Probability)
3. (Mean and Variance)
μ = δ Γ(1 + 1/β) = 700 Γ(1.5) = 700 × 0.5 × √π = 700 × 0.886 = 620.4 hrs until failure

Exercise 4.10
1. (Probability Density Function; Lognormal Distribution)
f(x) = [1/(xω√(2π))] e^(−(ln x − θ)²/(2ω²)) = [1/(x√(2π))] e^(−(ln x − 0.5)²/2), x > 0
2. (Probability)
P(X > 10) = 1 − P(X ≤ 10) = 1 − Φ((ln 10 − 0.5)/1) = 1 − Φ(1.80) = 1 − 0.96 = 0.04
3. (Mean and Variance)
μ = e^(θ + ω²/2) = e^(0.5 + 1/2) = 2.7 sec
σ² = e^(2θ + ω²)(e^(ω²) − 1) = e^(2(0.5) + 1)(e¹ − 1) = 3.6²
5
Joint Probability Distributions
5-1 Two Discrete Random Variables
5-2 Multiple Discrete Random Variables
5-2.1 Joint Probability Distributions
5-2.2 Multinomial Probability Distribution
5-3 Two Continuous Random Variables
5-4 Multiple Continuous Random Variables
5-5 Covariance and Correlation
5-6 Bivariate Normal Distribution
5-7 Linear Combinations of Random Variables
5-9 Moment Generating Functions
5-10 Chebyshev's Inequality
Answers to Exercises

5-1 Two Discrete Random Variables
❑ Determine joint, marginal, and conditional probabilities for two discrete random variables X and Y by using corresponding probability distributions.
❑ Calculate the mean and variance of X (or Y) by using the corresponding marginal probability distribution.
❑ Calculate the conditional mean and conditional variance of X given Y = y (or Y given X = x) by using the corresponding conditional probability distribution.
❑ Assess the independence of X and Y.

Probability Distributions of Two Random Variables
Three kinds of probability distributions are used to describe the stochastic characteristics of two random variables X and Y:
(1) Joint probability distribution
(2) Marginal probability distribution
(3) Conditional probability distribution

Joint Probability Mass Function
The joint probability mass function (p.m.f.) of two discrete random variables X and Y, denoted as f_XY(x, y), satisfies the following conditions:
(1) f_XY(x, y) = P(X = x, Y = y) ≥ 0
(2) Σ_x Σ_y f_XY(x, y) = 1
Marginal Probability Mass Function
The marginal p.m.f.'s of X and Y with the joint p.m.f. f_XY(x, y) are
f_X(x) = P(X = x) = Σ over R_x of f_XY(x, y)
f_Y(y) = P(Y = y) = Σ over R_y of f_XY(x, y)
where R_x and R_y denote the set of all points in the range of (X, Y) for which X = x and Y = y, respectively.
The marginal p.m.f. f_X(x) satisfies the following:
(1) f_X(x) = P(X = x) ≥ 0
(2) Σ_x f_X(x) = 1
The mean and variance of X are
E(X) = μ_X = Σ_x x f_X(x) and V(X) = σ_X² = Σ_x x² f_X(x) − μ_X²

Conditional Probability Mass Function
Recall that P(B | A) = P(A ∩ B)/P(A) (see Section 2-4. Conditional Probability). In parallel, the conditional p.m.f. of X given Y = y, denoted as f_X|y(x), with the joint p.m.f. f_XY(x, y) is
f_X|y(x) = P(X = x | Y = y) = P(X = x, Y = y)/P(Y = y) = f_XY(x, y)/f_Y(y)
The conditional p.m.f. f_X|y(x) satisfies the following:
(1) f_X|y(x) ≥ 0
(2) Σ_x f_X|y(x) = 1
The conditional mean and conditional variance of X given Y = y are
E(X | y) = μ_X|y = Σ_x x f_X|y(x) and V(X | y) = σ²_X|y = Σ_x x² f_X|y(x) − μ²_X|y

Independence
Two random variables X and Y are independent if knowledge of the values of X does not affect any probabilities of the values of Y, and vice versa. In other words, X and Y are not stochastically related. Recall that P(A | B) = P(A), P(B | A) = P(B), and P(A ∩ B) = P(A)P(B) for two independent events A and B (see Section 2-4. Conditional Probability). Likewise, two independent discrete random variables X and Y satisfy any of the following:
(1) f_X|y(x) = f_X(x)
(2) f_Y|x(y) = f_Y(y)
(3) f_XY(x, y) = f_X(x) f_Y(y)

Example 5.1
Example 5.1 (cont.)
The mean and variance of X are
E(X) = μ_X = Σ (x = 1 to 4) x f_X(x) = 1·f_X(1) + 2·f_X(2) + 3·f_X(3) + 4·f_X(4)
= 1 × 0.21 + 2 × 0.445 + 3 × 0.26 + 4 × 0.085 = 2.22
3. (Discrete Conditional p.m.f.) Determine the conditional probability distribution of statistics grade (Y) given that ergonomics grade is an A (X = 1). Find also the conditional mean and conditional variance of Y given X = 1.
The conditional marginal probabilities of Y given X = 1 are
The conditional mean and conditional variance of Y given X = 1 are
4. (Independence) Check if ergonomics grade (X) and statistics grade (Y) are independent.
Check if f_XY(1, 1) = f_X(1) · f_Y(1).
By using the probabilities f_XY(1, 1) = 0.12, f_X(1) = 0.21, and f_Y(1) = 0.245,
f_XY(1, 1) = 0.12 ≠ f_X(1) f_Y(1) = 0.21 × 0.245 = 0.052
Thus, it is concluded that ergonomics grade (X) and statistics grade (Y) are not independent; in other words, ergonomics grade and statistics grade are related.
Exercise 5.1 (MR 5-5)
The number of defects on the front side (X) of a wooden panel and the number of defects on the rear side (Y) of the panel are under study.
1. Suppose that the joint p.m.f. of X and Y is modeled as
f_XY(x, y) = c(x + y), x = 1, 2, 3 and y = 1, 2, 3
Determine the value of c.
2. Determine the marginal p.m.f. of X. Find also the mean and variance of X.
3. Determine the conditional p.m.f. of Y given X = 2. Find also the conditional mean and conditional variance of Y given X = 2.
4. Check if the number of defects on the front side (X) of a wooden panel and the number of defects on the rear side (Y) of the panel are independent.

5-2 Multiple Discrete Random Variables
5-2.1 Joint Probability Distributions
❑ Explain the joint, marginal, and conditional probability distributions of multiple discrete random variables.
❑ Explain the independence of multiple discrete random variables.
_
j
~
m
_
_
_
~
-
Joint Probability Mass Function
A joint p.m.f. of discrete random variables X1, X2, ..., Xp, which describes the probabilities of simultaneous outcomes of X1, X2, ..., Xp, is
f_R(x1, x2, ..., xp) = P(X1 = x1, X2 = x2, ..., Xp = xp)
Next, the joint p.m.f. of K = {X1, X2, ..., Xk}, a subset of R = {X1, X2, ..., Xp} with the joint p.m.f. f_R(x1, x2, ..., xp), is
f_K(x1, x2, ..., xk) = P(X1 = x1, X2 = x2, ..., Xk = xk) = Σ over R_x1x2...xk of f_R(x1, x2, ..., xp)
where R_x1x2...xk denotes the set of all points in the range of R for which X1 = x1, X2 = x2, ..., Xk = xk.
Marginal Probability Mass Function
A marginal p.m.f. of& which belongs to R = { X I ,X2,... ,Xp) with the joint p.m.f. f R ( x l , x 2,..., x , ) , is
f x L ( x i ) = P ( X i= ~ i ) = C f R ( x l , x ~ , . . . , X p ) where R , denotes the set of all points in the range of R for which X i = xi .
Conditional Probability Mass Function
A conditional p.m.f. of K = {x,,X 2 , ... ,Xk) given Kr={Xk+l,Xk+2,... ,Xp) describes the probabilities of outcomes of K at the given condition of K' . Note that K and K' are mutually exclusive. As an extension of the bivariate conditional p.m.f., the conditional p.m.f. of K
w
B
a
5-6
Chapter 5
Conditional Probability Mass Function (cont.)
Independence
5-2.2
Joint Probability Distributions given K' with the joint p.m.f. f , ( x ,,x, ,...,x , ) is P ( X , = x l , X 2= x 2 ; - , X p = x P ) fKIK' ( X I X 2 , .
.-,X k ) =
P(Xk+l= X k + ~X k + 2 = Xk+2 ..., X p = X p )
As an extension of the bivariate independence conditions, multiple independent discrete random variables R = { X I ,X2,. .. ,X p ) , consisting of two mutually exclusive subsets K = { X I ,X2,. .. ,Xk) and Kf={Xk+,,Xk+2,... ,X p ) , satisfy the following: ( 1 ) fKIKr(x1,x2 ,... r X k Ixk+l , X ~ + ~ , . . . ~ X ~ ) = ~ K ( X ~ , X ~ , . . . , X ~ )
5-2.2 Multinomial Probability Distributions
O Determine the marginal probability distribution of multinomial random variables.
Multinomial Random Variables
Suppose that
(1) A random experiment consists of n trials.
(2) The outcome of each trial can be categorized into one of k classes.
(3) The multinomial random variables X1, X2, ..., Xk represent the number of outcomes which belong to class 1, class 2, ..., class k, respectively.
(4) p1, p2, ..., pk denote the probability of an outcome belonging to class 1, class 2, ..., and class k, respectively.
(5) p1, p2, ..., pk are constant over the n repeated trials.
Then, X1, X2, ..., Xk have a multinomial distribution with the joint p.m.f.
f(x1, x2, ..., xk) = [n! / (x1! x2! ··· xk!)] p1^x1 p2^x2 ··· pk^xk
where x1 + x2 + ··· + xk = n and p1 + p2 + ··· + pk = 1.
Note that the marginal p.m.f. of Xi, i = 1 to k, is a binomial distribution with mean and variance E(Xi) = μXi = n pi and V(Xi) = σXi² = n pi (1 − pi).
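A minimal Python sketch that evaluates this p.m.f. directly from the factorial formula; the function name is hypothetical, and the numbers used are those of Example 5.2 below (n = 50, class probabilities 0.1, 0.7, 0.2).

```python
# Sketch: evaluating the multinomial p.m.f. f(x1,...,xk) = n!/(x1!...xk!) * p1^x1 ... pk^xk.
from math import factorial

def multinomial_pmf(counts, probs):
    n = sum(counts)
    coef = factorial(n)
    for x in counts:
        coef //= factorial(x)            # multinomial coefficient n!/(x1!...xk!)
    p = 1.0
    for x, pi in zip(counts, probs):
        p *= pi ** x
    return coef * p

# Example 5.2 numbers: 6 low, 36 moderate, 8 high out of n = 50.
print(multinomial_pmf((6, 36, 8), (0.1, 0.7, 0.2)))
# Marginal check: X2 ~ binomial(50, 0.7), which is the two-class case of the same formula.
print(multinomial_pmf((36, 14), (0.7, 0.3)))
```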
Example 5.2
The maximum grip strength of an adult is categorized into one of three groups: 'low' for < 50 Ibs., 'moderate' for 50 to 100 lbs., and 'high' for > 100 lbs. Suppose the probabilities that an adult belongs to the low, moderate, and high groups are 0.1, 0.7, and 0.2, respectively. A group of n = 50 people is measured for grip strength.
1. (Joint p.m.f.; Multinomial Distribution) Find the probability that 6 people have low grip strength, 36 people have moderate grip strength, and 8 people have high grip strength.
► Let X1, X2, X3 denote the number of people that have low, moderate, and high grip strength, respectively. Since n = 50, p1 = 0.1, p2 = 0.7, and p3 = 0.2, the joint p.m.f. of X1, X2, X3 is
f_{X1X2X3}(x1, x2, x3) = [50! / (x1! x2! x3!)] (0.1)^x1 (0.7)^x2 (0.2)^x3, where x1 + x2 + x3 = 50
Thus, f_{X1X2X3}(6, 36, 8) = [50! / (6! 36! 8!)] (0.1)^6 (0.7)^36 (0.2)^8.
2. (Marginal p.m.f.; Multinomial Distribution) Find the probability that 36 people in the group have moderate grip strength.
► The marginal p.m.f. of X2 follows a binomial distribution with n = 50 and p2 = 0.7, i.e.,
f_{X2}(x2) = C(50, x2) (0.7)^x2 (0.3)^(50 − x2), x2 = 0, 1, ..., 50
Thus, P(X2 = 36) = C(50, 36) (0.7)^36 (0.3)^14.
Exercise 5.2 (MR 5-30)
In the transmission of digital information, the probabilities that a bit has high, moderate, and low distortion are 0.01, 0.04, and 0.95, respectively. Suppose that n = 5 bits are transmitted and the amount of distortion of each bit is independent. 1. Find the probability that one bit has high distortion, one has moderate distortion, and three bits have low distortion. 2. Find the probability that four bits have low distortion.
5-3 Two Continuous Random Variables
O Calculate probabilities of events involving two continuous random variables X and Y by using the corresponding joint probability distribution.
O Calculate the mean and variance of X (or Y), a continuous random variable, by using the corresponding marginal probability distribution.
O Calculate the conditional mean and conditional variance of X given Y = y (or Y given X = x) by using the corresponding conditional probability distribution.
O Assess the independence of X and Y.
Joint Probability Density Function
The joint probability density function (p.d.f.) of two continuous random variables X and Y satisfies the following:
(1) f_XY(x, y) ≥ 0
(2) ∫∫ f_XY(x, y) dx dy = 1 (integrated over the whole range of X and Y)
Marginal Probability Density Function
The marginal p.d.f.'s of X and Y with the joint p.d.f. f_XY(x, y) are
f_X(x) = ∫ f_XY(x, y) dy and f_Y(y) = ∫ f_XY(x, y) dx
The marginal p.d.f. of X satisfies the following:
(1) f_X(x) ≥ 0
(2) ∫ f_X(x) dx = 1
The mean and variance of X are
E(X) = μX = ∫ x f_X(x) dx and V(X) = σX² = ∫ x² f_X(x) dx − μX²
Conditional Probability Distribution
The conditional p.d.f. of X given Y = y, denoted as f_{X|y}(x), with joint p.d.f. f_XY(x, y) is
f_{X|y}(x) = f_XY(x, y) / f_Y(y)
The conditional p.d.f. f_{X|y}(x) satisfies the following:
(1) f_{X|y}(x) ≥ 0
(2) ∫ f_{X|y}(x) dx = 1
The conditional mean and conditional variance of X given Y = y are
E(X|y) = μ_{X|y} = ∫ x f_{X|y}(x) dx and V(X|y) = σ²_{X|y} = ∫ x² f_{X|y}(x) dx − μ²_{X|y}
Independence
Two continuous random variables X and Y are independent if any of the following is true:
(1) f_{X|y}(x) = f_X(x) for all x and y
(2) f_{Y|x}(y) = f_Y(y) for all x and y
(3) f_XY(x, y) = f_X(x) f_Y(y) for all x and y
Example 5.3
The sizes of college books are under study in terms of width (X) and height (Y).
1. (Continuous Joint p.d.f.) Suppose that the joint p.d.f. of X and Y is modeled as
f_XY(x, y) = cxy, 5 < x < 9 and x + 1 < y < x + 3
Determine the value of c.
► The joint p.d.f. of X and Y should satisfy the following:
(1) f_XY(x, y) = cxy ≥ 0
(2) ∫_5^9 ∫_{x+1}^{x+3} f_XY(x, y) dy dx = 1
For the first condition, c ≥ 0 because x > 0 and y > 0. Next, for the second condition,
∫_5^9 ∫_{x+1}^{x+3} cxy dy dx = c ∫_5^9 (x/2)[(x + 3)² − (x + 1)²] dx ≈ 515c = 1, so c ≈ 1/515
Therefore, the joint p.d.f. of X and Y is f_XY(x, y) = (1/515)xy, 5 < x < 9 and x + 1 < y < x + 3.
2. (Marginal Mean and Variance; Continuous Marginal p.d.f.) Determine the marginal probability distribution of X. Find also the mean and variance of X.
► The marginal p.d.f. of X is
f_X(x) = ∫_{x+1}^{x+3} (1/515) x y dy = (1/515) x (1/2)[(x + 3)² − (x + 1)²] = (2/515) x (x + 2), 5 < x < 9
The mean and variance of X are
E(X) = μX = ∫_5^9 x (2/515) x (x + 2) dx = (2/515)[x⁴/4 + (2/3)x³] evaluated from 5 to 9 = (2/515)(1,886.7) = 7.33 in.
V(X) = σX² = ∫_5^9 x² (2/515) x (x + 2) dx − μX² = (2/515)[x⁵/5 + x⁴/2] evaluated from 5 to 9 − 7.33² ≈ 1.2 in.²
3. (Conditional Mean and Conditional Variance; Continuous Conditional p.d.f.) Determine the conditional probability distribution of Y given X = 8. Find also the conditional mean and conditional variance of Y given X = 8.
► The conditional p.d.f. of Y given X = 8 is
f_{Y|8}(y) = f_XY(8, y) / f_X(8) = [(1/515)(8)y] / [(2/515)(8)(10)] = y/20 = 0.05y, 9 < y < 11
The conditional mean and conditional variance of Y given X = 8 are
E(Y|8) = μ_{Y|8} = ∫_9^11 y (0.05y) dy = 0.05[(11³ − 9³)/3] = 10.03 in.
V(Y|8) = σ²_{Y|8} = ∫_9^11 y² (0.05y) dy − μ²_{Y|8} = 0.05[(11⁴ − 9⁴)/4] − (10.03)² ≈ 0.33 in.²
4. (Independence; Multiple Continuous Random Variables) Check if the width (X) and height (Y) of a college book are independent.
► Check if f_{Y|x}(y) = f_Y(y):
f_{Y|x}(y) = f_XY(x, y) / f_X(x) = [(1/515) x y] / [(2/515) x (x + 2)] = y / [2(x + 2)]
The marginal p.d.f. of Y is f_Y(y) = ∫ f_XY(x, y) dx, where the range of x varies with y (over the region 5 < x < 9, x + 1 < y < x + 3, y runs from 6 to 12): 5 < x < y − 1 for 6 < y < 8, y − 3 < x < y − 1 for 8 < y < 10, and y − 3 < x < 9 for 10 < y < 12. For example,
f_Y(y) = ∫_{y−3}^{9} (1/515) x y dx = (1/1,030) y (−y² + 6y + 72) = (1/1,030) y (y + 6)(12 − y), for 10 < y < 12
Since f_{Y|x}(y) = y/[2(x + 2)] ≠ f_Y(y), the width (X) and height (Y) of a college book are not independent.
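A symbolic check of this example is easy to script. The sketch below (it assumes sympy is available) recomputes the normalizing constant, the marginal mean of X, and the conditional mean of Y given X = 8; the exact constant is 3/1544 ≈ 1/514.7, which the workbook rounds to 1/515.

```python
# Sketch (requires sympy): symbolic verification of Example 5.3.
import sympy as sp

x, y, c = sp.symbols('x y c', positive=True)
f = c * x * y                                           # joint p.d.f. on 5 < x < 9, x+1 < y < x+3

total = sp.integrate(f, (y, x + 1, x + 3), (x, 5, 9))   # must equal 1
c_val = sp.solve(sp.Eq(total, 1), c)[0]                 # exact normalizing constant

f_x = sp.integrate(f.subs(c, c_val), (y, x + 1, x + 3)) # marginal p.d.f. of X
mean_x = sp.integrate(x * f_x, (x, 5, 9))
var_x = sp.integrate(x**2 * f_x, (x, 5, 9)) - mean_x**2

f_y_given_8 = f.subs({c: c_val, x: 8}) / f_x.subs(x, 8) # conditional p.d.f. of Y given X = 8
cond_mean = sp.integrate(y * f_y_given_8, (y, 9, 11))

print(c_val, float(mean_x), float(var_x), float(cond_mean))
# mean ≈ 7.33 in. and E(Y|8) ≈ 10.03 in., matching the example up to rounding.
```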
Exercise 5.3 (MR 5-49)
Two measurement methods are used to evaluate the surface smoothness of a paper product. Let X and Y denote the measurements of each of the two methods.
1. Suppose that the joint p.d.f. of X and Y is modeled by f_XY(x, y) = c, 0 < x < 4, 0 < y, and x − 1 < y < x + 1. Determine the value of c.
2. Determine the marginal probability distribution of X. Find also the mean and variance of X.
3. Determine the conditional probability distribution of Y given X = 2. Find also the conditional mean and conditional variance of Y given X = 2.
4. Check if the measurements of the two methods X and Y are independent.
5-4 Multiple Continuous Random Variables
O Explain the joint, marginal, and conditional probability distributions of multiple continuous random variables.
O Explain the independence of multiple continuous random variables.
Joint Probability Density Function
As an extension of the bivariate joint p.d.f., the joint p.d.f. of multiple continuous random variables R = {X1, X2, ..., Xp} satisfies the following:
(1) f_R(x1, x2, ..., xp) ≥ 0
(2) ∫∫···∫ f_R(x1, x2, ..., xp) dx1 dx2 ··· dxp = 1
The joint p.d.f. of K = {X1, X2, ..., Xk}, a subset of R = {X1, X2, ..., Xp} with the joint p.d.f. f_R(x1, x2, ..., xp), is
f_K(x1, x2, ..., xk) = ∫···∫ over R_{x1 x2 ... xk} of f_R(x1, x2, ..., xp) dxk+1 dxk+2 ··· dxp
where R_{x1 x2 ... xk} denotes the set of all points in the range of R for which X1 = x1, X2 = x2, ..., Xk = xk.
Marginal Probability Distribution
A marginal p.d.f. of Xi, which belongs to R = {X1, X2, ..., Xp} with the joint p.d.f. f_R(x1, x2, ..., xp), is
f_Xi(xi) = ∫···∫ over R_{xi} of f_R(x1, x2, ..., xp) dx1 ··· dxi−1 dxi+1 ··· dxp
where R_{xi} denotes the set of all points in the range of R for which Xi = xi.
Conditional Probability Distribution
As an extension of the bivariate conditional p.d.f., the conditional p.d.f. of K = {X1, X2, ..., Xk} for a given condition of K' = {Xk+1, Xk+2, ..., Xp} with the joint p.d.f. f_R(x1, x2, ..., xp) is
f_{K|K'}(x1, x2, ..., xk) = f_R(x1, x2, ..., xp) / f_{K'}(xk+1, xk+2, ..., xp)
Independence
As an extension of the bivariate independence conditions, multiple continuous random variables R = {X1, X2, ..., Xp}, consisting of two mutually exclusive subsets K = {X1, X2, ..., Xk} and K' = {Xk+1, Xk+2, ..., Xp}, are independent if
(1) f_{K|K'}(x1, x2, ..., xk) = f_K(x1, x2, ..., xk)
(2) f_R(x1, x2, ..., xp) = f_K(x1, x2, ..., xk) f_{K'}(xk+1, xk+2, ..., xp)
5-5 Covariance and Correlation
O Explain the terms covariance and correlation between two random variables X and Y.
O Calculate the covariance and correlation of X and Y.
Covariance (σXY)
The covariance between two random variables X and Y (denoted as cov(X, Y) or σXY) indicates the linear relationship between X and Y:
σXY = E[(X − μX)(Y − μY)] = E(XY) − μX μY, −∞ < σXY < ∞
Note that the covariance may not be an appropriate measure to represent a relationship between random variables if they are related nonlinearly. The positive, negative, and zero values of σXY represent positive, negative, and zero linear relationships between X and Y, respectively (see Figure 5-1).
Figure 5-1 Covariance and correlation between X and Y: (a) positive linearity (σXY > 0; 0 < ρXY < 1); (b) zero linearity (σXY = 0; ρXY = 0); (c) negative linearity (σXY < 0; −1 < ρXY < 0).
(Derivation)
E[(X − μX)(Y − μY)] = E(XY − μY X − μX Y + μX μY) = E(XY) − μY E(X) − μX E(Y) + μX μY = E(XY) − μX μY
Correlation (ρXY)
The correlation between X and Y (denoted as ρXY) represents the normalized linear relationship between X and Y (σXY normalized by σX and σY):
ρXY = cov(X, Y) / (σX σY) = σXY / (σX σY), −1 ≤ ρXY ≤ 1
While σXY depends on the units of X and Y, ρXY is dimensionless. Like σXY, the positive, negative, and zero values of ρXY represent positive, negative, and zero linear relationships, respectively (see Figure 5-1).
Independence
If X and Y are independent (not related), then σXY = ρXY = 0. However, σXY = ρXY = 0 is a necessary (not sufficient) condition for the independence of X and Y. In other words, even if σXY = ρXY = 0, we cannot immediately conclude that X and Y are independent.
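A short Python sketch of the covariance and correlation calculation from a joint p.m.f. The joint model used here is the one referenced in Exercise 5.4 below, f_XY(x, y) = (x + y)/36; the helper structure is a plain dictionary and the variable names are hypothetical.

```python
# Sketch: covariance and correlation computed directly from a joint p.m.f.
from math import sqrt

joint = {(x, y): (x + y) / 36 for x in (1, 2, 3) for y in (1, 2, 3)}

ex  = sum(x * p for (x, _), p in joint.items())
ey  = sum(y * p for (_, y), p in joint.items())
exy = sum(x * y * p for (x, y), p in joint.items())
vx  = sum(x**2 * p for (x, _), p in joint.items()) - ex**2
vy  = sum(y**2 * p for (_, y), p in joint.items()) - ey**2

cov  = exy - ex * ey            # exact value -1/36 ≈ -0.028
corr = cov / sqrt(vx * vy)      # ≈ -0.043 (the answer key's -0.04 comes from rounding the means to 2.17)
print(cov, corr)
```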
Example 5.4
(Covariance and Correlation) In Example 5-1, determine the covariance and correlation between ergonomics grade (X) and statistics grade (Y).
► The mean and variance of X are E(X) = μX = 2.22 and V(X) = σX² = 0.87².
The mean and variance of Y are E(Y) = μY = Σ y f_Y(y) = 1·f_Y(1) + 2·f_Y(2) + 3·f_Y(3) + 4·f_Y(4) = 2.31 and V(Y) = σY² = 0.95².
The mean of XY is E(XY) = Σ Σ x y f_XY(x, y) = 5.52.
Therefore,
σXY = E(XY) − μX μY = 5.52 − 2.22 × 2.31 = 0.39
ρXY = σXY / (σX σY) = 0.39 / (0.87 × 0.95) = 0.47
Since σXY and ρXY > 0, it can be concluded that ergonomics grade (X) and statistics grade (Y) have a positive linear relationship.
Exercise 5.4 (MR 5-69)
In Exercise 5-1, the joint p.m.f. of X (the number of defects on the front side of a wooden panel) and Y (the number of defects on the rear side of the panel) is
f_XY(x, y) = (1/36)(x + y), x = 1, 2, 3 and y = 1, 2, 3
The means and variances of X and Y are E(X) = E(Y) = 2.17 and V(X) = V(Y) = 0.80². Determine the covariance and correlation of X and Y.
5-6 Bivariate Normal Distribution
O Explain the joint probability distribution of bivariate normal random variables.
O Determine the joint, marginal, and conditional probabilities of bivariate normal random variables.
Bivariate Normal Random Variables
The joint probability density function of two normal random variables X and Y with means μX and μY, variances σX² and σY², and correlation ρXY (−1 < ρXY < 1) is
f_XY(x, y) = [1 / (2π σX σY √(1 − ρXY²))] exp{ −[1 / (2(1 − ρXY²))] [ (x − μX)²/σX² − 2ρXY(x − μX)(y − μY)/(σX σY) + (y − μY)²/σY² ] }
Bivariate normal distributions with different values of ρXY are shown in Figure 5-2. The contour plots of the bivariate normal surfaces are elliptical. The center of each ellipse is located at the point (μX, μY) in the xy plane. If ρXY > 0 (ρXY < 0), the major axis of each ellipse has a positive (negative) slope; if ρXY = 0, the ellipses become circles.
Figure 5-2 Bivariate normal distributions with different values of ρXY: (a) 0 < ρXY < 1; (b) ρXY = 0; (c) −1 < ρXY < 0.
The marginal probability distributions of X and Y are normal with means μX and μY and variances σX² and σY², respectively. The conditional probability distribution of Y given X = x is normal with mean and variance
E(Y|x) = μY + ρXY (σY/σX)(x − μX) and V(Y|x) = σY²(1 − ρXY²)
Example 5.5
(Bivariate Normal Distribution) The heights (X; unit: in.) and weights (Y; unit: lbs.) of adult males in the US are normally distributed with means μX = 68.3 and μY = 162.8, variances σX² = 2.3² and σY² = 23.0², and correlation ρXY = 0.49. Determine the probability that the weight (Y) of an adult male is between 160 and 180 lbs. given that his height (X) is 70 in.
► The conditional probability density function of Y given X = 70 is normal with mean and variance
E(Y|70) = 162.8 + 0.49(23.0/2.3)(70 − 68.3) = 171.1 and V(Y|70) = 23.0²(1 − 0.49²) = 20.05²
Therefore, P(160 < Y < 180 | X = 70) = P[(160 − 171.1)/20.05 < Z < (180 − 171.1)/20.05] = P(−0.55 < Z < 0.44) ≈ 0.38
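The conditional-normal calculation above is easy to check numerically. This sketch uses only the standard library, with the normal c.d.f. written in terms of the error function; it is a verification under the example's numbers, not part of the workbook's solution.

```python
# Sketch: numerical check of Example 5.5 using the conditional mean/variance formulas.
from math import erf, sqrt

def phi(z):                       # standard normal c.d.f.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu_x, mu_y = 68.3, 162.8
sd_x, sd_y = 2.3, 23.0
rho = 0.49

x = 70.0
cond_mean = mu_y + rho * (sd_y / sd_x) * (x - mu_x)   # ≈ 171.1 lbs.
cond_sd = sd_y * sqrt(1.0 - rho**2)                   # ≈ 20.1 lbs.

p = phi((180 - cond_mean) / cond_sd) - phi((160 - cond_mean) / cond_sd)
print(cond_mean, cond_sd, p)      # P(160 < Y < 180 | X = 70) ≈ 0.38
```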
Exercise 5.5 (MR 5-79)
Let X and Y represent two dimensions of an injection molded part. Suppose that X and Y have a bivariate normal distribution with means μX = 3.00 and μY = 7.70, and variances σX² = 0.04² and σY² = 0.08². Assume that X and Y are independent, i.e., ρXY = 0. Determine P(2.95 < X < 3.05, 7.60 < Y < 7.80).
5-7 Linear Combinations of Random Variables
O Explain the term linear combination of random variables.
O Determine the mean and variance of a linear combination of random variables.
Linear Combination
A random variable Y is sometimes defined by a linear combination of several random variables X1, X2, ..., Xp:
Y = c1X1 + c2X2 + ··· + cpXp, where the ci's are constants
Rules for Linear Combination
The following rules are useful to determine the mean and variance of a linear combination of X and Y:
1. Rules for Means
(1) If b is a constant, then E(b) = b.
(2) If a is a constant, then E(aX) = aE(X).
(3) E(aX ± bY) = aE(X) ± bE(Y)
2. Rules for Variances
(1) V(b) = 0
(2) V(aX) = a²V(X)
(3) V(aX ± bY) = a²V(X) + b²V(Y) ± 2ab·cov(X, Y) = a²V(X) + b²V(Y), if X and Y are independent
(Note) cov(X, Y) = σXY = ρXY σX σY
The rules 1.3 and 2.3 can be extended for Y = c1X1 + c2X2 + ··· + cpXp as follows:
E(Y) = c1E(X1) + c2E(X2) + ··· + cpE(Xp)
V(Y) = c1²V(X1) + c2²V(X2) + ··· + cp²V(Xp), if the Xi's are independent
Example 5.6
(Linear Combination) For adult males in the US, the length of the shoulder-elbow (X; unit: in.) and the length of the elbow-fingertip (Y; unit: in.) have a bivariate normal distribution with means μX = 14.4 and μY = 18.9, variances σX² = 0.8² and σY² = 0.8², and correlation ρXY = 0.737. Determine the mean and variance of the length of the shoulder-fingertip (X + Y).
► The mean and variance of the length of the shoulder-fingertip Z = X + Y are
E(X + Y) = E(X) + E(Y) = 14.4 + 18.9 = 33.3 in.
Note that cov(X, Y) = ρXY σX σY = 0.737 × 0.8 × 0.8 = 0.472. Thus,
V(X + Y) = V(X) + V(Y) + 2cov(X, Y) = 0.8² + 0.8² + 2 × 0.472 = 1.49²
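A quick Monte Carlo check of these rules is sketched below (it assumes numpy is available; the seed and sample size are arbitrary choices for illustration).

```python
# Sketch: simulation check of Example 5.6.  Analytic result: E(X+Y) = 33.3, V(X+Y) = 1.49^2.
import numpy as np

rng = np.random.default_rng(0)
mean = [14.4, 18.9]
cov_xy = 0.737 * 0.8 * 0.8                      # rho * sd_x * sd_y
cov = [[0.8**2, cov_xy], [cov_xy, 0.8**2]]

xy = rng.multivariate_normal(mean, cov, size=200_000)
z = xy[:, 0] + xy[:, 1]                          # shoulder-fingertip length X + Y
print(z.mean(), z.var(), np.sqrt(z.var()))       # ≈ 33.3, ≈ 2.22, ≈ 1.49
```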
Exercise 5.6 (MR 5-91)
The width of a casing (X; unit: in.) and the width of a door (Y; unit: in.) are normally distributed with means μX = 24 and μY = 23 7/8, and standard deviations σX = 1/8 and σY = 1/16, respectively. Assume that the width of the casing (X) and the width of the door (Y) are independent. Determine the mean and standard deviation of the difference between the width of the casing (X) and the width of the door (Y).
5-9 Moment Generating Functions
O Explain the term moment generating function.
O Determine the moment generating function of a random variable X and the mth moments of X.
O Find the mean and variance of X by using the first and second moments of X.
Moment Generating Function
The moment generating function of a random variable X (denoted as M(t)) is the expected value of e^{tX}, i.e.,
M(t) = E(e^{tX}) = Σ_x e^{tx} f(x), if X is a discrete random variable
M(t) = E(e^{tX}) = ∫ e^{tx} f(x) dx, if X is a continuous random variable
The moment generating function of X is unique if it exists and completely determines the probability distribution of X. Thus, if two random variables have the same moment generating function, they have the same probability distribution. If M^(m)(t) denotes the mth derivative of M(t), the mth moment of X about the origin (zero), denoted by E(X^m), is
E(X^m) = M^(m)(0) = Σ_x x^m f(x), if X is a discrete random variable
E(X^m) = M^(m)(0) = ∫ x^m f(x) dx, if X is a continuous random variable
(Derivation) E(X^m) = M^(m)(0)
The mth derivative of M(t) is
M^(m)(t) = d^m M(t)/dt^m = Σ_x x^m e^{tx} f(x), if X is discrete; = ∫ x^m e^{tx} f(x) dx, if X is continuous
By setting t = 0,
M^(m)(0) = Σ_x x^m f(x), if X is discrete; = ∫ x^m f(x) dx, if X is continuous
Application of Moments
The mean and variance of X can be determined by using the first and second moments of X:
μ = E(X) = M'(0)
σ² = E(X²) − [E(X)]² = M''(0) − [M'(0)]²
Example 5.7
Suppose that X has an exponential distribution f(x) = λe^{−λx}, x > 0.
1. (Moment Generating Function) Find the moment generating function of X.
► From the definition of the moment generating function,
M(t) = ∫_0^∞ e^{tx} λe^{−λx} dx = λ ∫_0^∞ e^{−(λ − t)x} dx = λ/(λ − t), for t − λ < 0
2. (Application of Moments) Determine the mean and variance of X by using the first and second moments of X about zero.
► The first moment of X about the origin is E(X) = M'(0) = [λ/(λ − t)²] at t = 0 = 1/λ
The second moment of X about the origin is E(X²) = M''(0) = [2λ/(λ − t)³] at t = 0 = 2/λ²
Therefore, the mean and variance of X are
μ = E(X) = M'(0) = 1/λ
σ² = E(X²) − [E(X)]² = M''(0) − [M'(0)]² = 2/λ² − 1/λ² = 1/λ²
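The differentiation above can be reproduced symbolically. A minimal sketch (it assumes sympy is available) differentiates the exponential MGF and recovers the same mean and variance:

```python
# Sketch (requires sympy): mean and variance of the exponential distribution from M(t) = lambda/(lambda - t).
import sympy as sp

t = sp.symbols('t')
lam = sp.symbols('lambda', positive=True)
M = lam / (lam - t)

m1 = sp.diff(M, t, 1).subs(t, 0)      # first moment  E(X)   = 1/lambda
m2 = sp.diff(M, t, 2).subs(t, 0)      # second moment E(X^2) = 2/lambda^2
print(sp.simplify(m1), sp.simplify(m2 - m1**2))   # mean 1/lambda, variance 1/lambda^2
```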
Exercise 5.7 (MR S5-15)
The geometric random variable X has probability distribution f(x) = (1 − p)^{x−1} p, x = 1, 2, ....
1. Find the moment generating function of X.
2. Determine the mean and variance of X by using the first and second moments of X about zero.
5-10 Chebyshev's Inequality
O Explain the use of Chebyshev's inequality rule.
O Bound the probability of a random variable by using Chebyshev's inequality rule and compare the bound probability with the corresponding actual probability.
Chebyshev's Inequality
A relationship between the mean and variance of a random variable X having a certain probability distribution is formulated by Chebyshev as follows:
P(|X − μ| ≥ cσ) ≤ 1/c², c > 0
By using Chebyshev's inequality rule, a bound probability of any random variable can be determined. For example, Table 6-1 presents bound probabilities of a normal random variable X with μ and σ² (by using Chebyshev's inequality rule) and corresponding actual probabilities; the table indicates that a bound probability always includes the corresponding actual probability.
Table 6-1 Bound Probabilities and Actual Probabilities of a Normal Random Variable X (columns: probability condition; Chebyshev bound 1/c²; actual probability)
Example 5.8
(Chebyshev's Inequality) The power grip strengths (X) of adult females in the US are normally distributed with a mean of 62.7 lbs. and a standard deviation of 17.1 lbs. Bound the probability that the grip strength of an adult female differs from the mean by more than 2.5 times the standard deviation. Compare also the bound probability with its actual probability.
► By applying Chebyshev's inequality rule, P(|X − μ| ≥ 2.5σ) ≤ 1/2.5² = 0.16
The actual probability is P(|X − μ| ≥ 2.5σ) = P(|(X − μ)/σ| ≥ 2.5) = P(|Z| ≥ 2.5) = 0.012
The actual probability is less than the bound probability.
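A short Python check of this comparison, using the error function for the normal tail; the function name is hypothetical and the value of c comes from the example above.

```python
# Sketch: Chebyshev bound 1/c^2 versus the actual normal-tail probability for c = 2.5.
from math import erf, sqrt

def phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal c.d.f.

c = 2.5
bound = 1.0 / c**2                 # 0.16
actual = 2.0 * (1.0 - phi(c))      # P(|Z| >= 2.5) ≈ 0.012
print(bound, actual)
```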
Exercise 5.8 (MR S5-25)
Suppose that the photoresist thickness (X) in semiconductor manufacturing has a continuous uniform distribution with a mean of 10 μm and a standard deviation of 2.31 μm over the range 6 < x < 14 μm. Bound the probability that the photoresist thickness is less than 7 or greater than 13 μm. Compare also the bound probability with its actual probability.
Answers to Exercises
Exercise 5.1
1. (Discrete Joint p.m.f.) The joint p.m.f. f_XY(x, y), x = 1, 2, 3 and y = 1, 2, 3, should satisfy the following:
(1) f_XY(x, y) = c(x + y) ≥ 0
(2) Σ_x Σ_y f_XY(x, y) = 1
For the first condition, c ≥ 0 because x > 0 and y > 0. Next, for the second condition,
Σ_{x=1}^{3} Σ_{y=1}^{3} c(x + y) = 36c = 1, so c = 1/36
Therefore, the joint p.m.f. of X and Y is f_XY(x, y) = (1/36)(x + y), x = 1, 2, 3 and y = 1, 2, 3.
2. (Discrete Marginal p.m.f.) The marginal probabilities of X are
f_X(x) = Σ_y f_XY(x, y) = (3x + 6)/36 = (x + 2)/12, so f_X(1) = 3/12, f_X(2) = 4/12, and f_X(3) = 5/12
The mean and variance of X are
E(X) = μX = Σ_x x f_X(x) = 1(3/12) + 2(4/12) + 3(5/12) = 2.17
V(X) = σX² = Σ_x x² f_X(x) − μX² = 1²(3/12) + 2²(4/12) + 3²(5/12) − 2.17² = 0.80²
3. (Discrete Conditional p.m.f.) The conditional probabilities of Y given X = 2 are
f_{Y|2}(y) = f_XY(2, y) / f_X(2) = [(2 + y)/36] / (4/12) = (2 + y)/12, so f_{Y|2}(1) = 3/12, f_{Y|2}(2) = 4/12, and f_{Y|2}(3) = 5/12
The conditional mean and conditional variance of Y given X = 2 are
E(Y|2) = 1(3/12) + 2(4/12) + 3(5/12) = 2.17
V(Y|2) = 1²(3/12) + 2²(4/12) + 3²(5/12) − 2.17² = 0.80²
4. (Independence) Check if f_XY(1, 1) = f_X(1) f_Y(1):
f_XY(1, 1) = 2/36 = 0.056 ≠ f_X(1) f_Y(1) = (3/12)(3/12) = 0.063
Thus, the number of defects on the front side (X) of a wooden panel and the number of defects on the rear side (Y) of the panel are not independent.
Exercise 5.2
1. (Joint p.m.f.; Multinomial Distribution) Let X1, X2, X3 denote the number of bits that have high, moderate, and low distortion, respectively. Since n = 5, p1 = 0.01, p2 = 0.04, and p3 = 0.95, the joint p.m.f. of X1, X2, X3 is
f_{X1X2X3}(x1, x2, x3) = [5! / (x1! x2! x3!)] (0.01)^x1 (0.04)^x2 (0.95)^x3
Thus, f_{X1X2X3}(1, 1, 3) = [5! / (1! 1! 3!)] (0.01)(0.04)(0.95)³ = 0.007
2. (Marginal p.m.f.) The marginal p.m.f. of X3 follows a binomial distribution with n = 5 and p3 = 0.95, i.e.,
f_{X3}(x3) = C(5, x3) (0.95)^x3 (0.05)^(5 − x3), x3 = 0, 1, ..., 5
Thus, P(X3 = 4) = C(5, 4) (0.95)⁴ (0.05)¹ = 0.204
Exercise 5.3
1. (Continuous Joint p.d.f.) The ranges of X and Y are displayed below. Note that the integration for 0 < x ≤ 1 (area I) and that for 1 < x < 4 (area II) should be conducted separately because they have different lower limits for y.
The joint p.d.f. of X and Y should satisfy the following:
(1) f_XY(x, y) = c ≥ 0
(2) ∫∫ f_XY(x, y) dy dx = 1
For the first condition, c ≥ 0. Next, for the second condition,
∫_0^1 ∫_0^{x+1} c dy dx + ∫_1^4 ∫_{x−1}^{x+1} c dy dx = 1.5c + 6c = 7.5c = 1, so c = 2/15
Therefore, the joint p.d.f. of X and Y is f_XY(x, y) = 2/15, 0 < x < 4, 0 < y, and x − 1 < y < x + 1.
2. (Continuous Marginal p.d.f.) The marginal p.d.f. of X is
f_X(x) = ∫_0^{x+1} (2/15) dy = (2/15)(x + 1), 0 < x ≤ 1
f_X(x) = ∫_{x−1}^{x+1} (2/15) dy = 4/15, 1 < x < 4
The mean and variance of X are
E(X) = μX = ∫ x f_X(x) dx = ∫_0^1 (2/15) x (x + 1) dx + ∫_1^4 (4/15) x dx = 1/9 + 2 = 2.11
V(X) = σX² = ∫ x² f_X(x) dx − μX² = ∫_0^1 (2/15) x² (x + 1) dx + ∫_1^4 (4/15) x² dx − 2.11² ≈ 1.22
3. (Continuous Conditional p.d.f.) The conditional p.d.f. of Y given X = 2 is
f_{Y|2}(y) = f_XY(2, y) / f_X(2) = (2/15) / (4/15) = 1/2, 1 < y < 3
The conditional mean and conditional variance of Y given X = 2 are
E(Y|2) = ∫_1^3 y (1/2) dy = 2
V(Y|2) = ∫_1^3 y² (1/2) dy − 2² = 26/6 − 4 = 0.33
4. (Independence) Check if f_{Y|2}(y) = f_Y(y). Note that the range of X for 1 < y < 3 is y − 1 < x < y + 1. Thus, the marginal p.d.f. of Y for 1 < y < 3 is
f_Y(y) = ∫_{y−1}^{y+1} (2/15) dx = 4/15
Since f_{Y|2}(y) = 1/2 ≠ f_Y(y) = 4/15, the measurements of the two methods X and Y are not independent.
Exercise 5.4
(Covariance and Correlation) The mean of XY is
E(XY) = Σ_x Σ_y x y f_XY(x, y) = 4.67
Therefore,
σXY = E(XY) − μX μY = 4.67 − 2.17 × 2.17 = −0.04
ρXY = σXY / (σX σY) = −0.04 / (0.80 × 0.80) = −0.06
There is a weak, negative correlation between the number of defects on the front side (X) of a wooden panel and the number of defects on the rear side (Y) of the panel.
Exercise 5.5
(Bivariate Normal Distribution) Since X and Y are independent,
P(2.95 < X < 3.05, 7.60 < Y < 7.80) = P(2.95 < X < 3.05) P(7.60 < Y < 7.80)
By standardizing X and Y each,
P(2.95 < X < 3.05) = P[(2.95 − 3.00)/0.04 < Z < (3.05 − 3.00)/0.04] = P(−1.25 < Z < 1.25) = 0.789
P(7.60 < Y < 7.80) = P[(7.60 − 7.70)/0.08 < Z < (7.80 − 7.70)/0.08] = P(−1.25 < Z < 1.25) = 0.789
Thus, P(2.95 < X < 3.05, 7.60 < Y < 7.80) = 0.789 × 0.789 = 0.623
Exercise 5.6
(Linear Combination)
X ~ N(24, (1/8)²), Y ~ N(23 7/8, (1/16)²), and cov(X, Y) = 0 because X and Y are independent.
Therefore,
E(X − Y) = E(X) − E(Y) = 24 − 23 7/8 = 1/8
V(X − Y) = V(X) + V(Y) = (1/8)² + (1/16)² = 0.0195, so σ_{X−Y} = 0.14
Exercise 5.7
1. (Moment Generating Function) From the definition of the moment generating function,
M(t) = Σ_{x=1}^{∞} e^{tx} f(x) = Σ_{x=1}^{∞} e^{tx} (1 − p)^{x−1} p = p e^t Σ_{x=1}^{∞} [(1 − p)e^t]^{x−1}
Note that the sum of the infinite geometric sequence (a, ar, ar², ...) is s = Σ_{n=1}^{∞} a r^{n−1} = a/(1 − r), where |r| < 1. Thus,
M(t) = p e^t / [1 − (1 − p)e^t], for (1 − p)e^t < 1
2. (Application of Moments) The first moment of X about the origin is
E(X) = M'(0) = d/dt { p e^t [1 − (1 − p)e^t]^{-1} } at t = 0 = p e^t / [1 − (1 − p)e^t]² at t = 0 = 1/p
The second moment of X about the origin is
E(X²) = M''(0) = (2 − p)/p²
Therefore, the mean and variance of X are μ = E(X) = M'(0) = 1/p and σ² = E(X²) − [E(X)]² = (2 − p)/p² − (1/p)² = (1 − p)/p²
Exercise 5.8
(Chebyshev's Inequality) The probability density function of the uniform random variable X is f(x) = 1/(14 − 6) = 1/8, 6 < x < 14.
By applying Chebyshev's inequality rule,
P(X < 7) + P(X > 13) = P(X − 10 < 7 − 10) + P(X − 10 > 13 − 10) = P(|X − 10| > 3) = P(|X − 10| > cσ), where cσ = 3 and c = 3/2.31 = 1.30
Thus, P(|X − 10| > 3) = P(|X − 10| > 1.30 × 2.31) ≤ 1/1.30² = 0.59
The actual value of P(X < 7) + P(X > 13) is (7 − 6)/8 + (14 − 13)/8 = 0.25.
The actual probability is less than the bound probability, which supports Chebyshev's inequality rule.
Random Sampling and Data Description
6-1 Data Summary and Display
6-2 Random Sampling
6-3 Stem-and-Leaf Diagrams
6-4 Frequency Distributions and Histograms
6-5 Box Plots
6-6 Time Sequence Plots
6-7 Probability Plots
MINITAB Applications
Answers to Exercises

6-1 Data Summary and Display
O Determine the mean, variance, standard deviation, minimum, maximum, and range of a set of data.
Mean
The arithmetic mean of a set of data indicates the central tendency of the data.
1. Sample Mean (denoted as x̄): The average of all the observations in a sample, which is selected from a larger population of observations.
x̄ = (Σ_{i=1}^{n} xi) / n
2. Population Mean (denoted as μ): The average of all observations in the population.
μ = (Σ_{i=1}^{N} xi) / N, for a finite population with size N
Variance and Standard Deviation
The variance of a set of data indicates the dispersion of the data about the mean: the larger the variance, the wider the spread of the data about the mean.
1. Sample Variance
s² = Σ_{i=1}^{n} (xi − x̄)² / (n − 1) = [Σ_{i=1}^{n} xi² − (Σ_{i=1}^{n} xi)²/n] / (n − 1)
2. Population Variance
σ² = Σ_{i=1}^{N} (xi − μ)² / N = (Σ_{i=1}^{N} xi²)/N − μ², for a finite population with size N
Note that the divisor for the sample variance is n − 1, while that for the population variance is N. The sample standard deviation and population standard deviation are s = √s² and σ = √σ², respectively.
Minimum and Maximum
The minimum and maximum of a set of data are the smallest and largest values of the data set, respectively.
Sample Range
The difference between the minimum and maximum of the sample:
R = max(xi) − min(xi), i = 1, 2, ..., n
Example 6.1
(Mean, Variance, and Range) The scores (X; integers) of n = 30 students in a statistics test are as follows (arranged in ascending order):
65 68 70 72 72 73 74 75 75 79 81 83 84 84 85 86 87 88 88 88 88 88 90 92 92 94 96 96 97 98
Determine the (1) sample mean, (2) sample variance, (3) minimum, (4) maximum, and (5) sample range of the test scores.
► The following summary quantities have been calculated: n = 30, Σxi = 2,508, and Σxi² = 212,194
(1) x̄ = 2,508/30 = 83.6
(2) s² = (212,194 − 2,508²/30) / (30 − 1) = 87.1 = 9.3²
(3) minimum = 65
(4) maximum = 98
(5) R = 98 − 65 = 33
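The same summary statistics can be reproduced with a few lines of Python using the standard library; the list below is the data of Example 6.1.

```python
# Sketch: Example 6.1 summary statistics with the statistics module.
import statistics

scores = [65, 68, 70, 72, 72, 73, 74, 75, 75, 79, 81, 83, 84, 84, 85,
          86, 87, 88, 88, 88, 88, 88, 90, 92, 92, 94, 96, 96, 97, 98]

print(statistics.mean(scores))        # 83.6
print(statistics.variance(scores))    # sample variance (divisor n - 1) ≈ 87.1
print(min(scores), max(scores))       # 65, 98
print(max(scores) - min(scores))      # range = 33
```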
Exercise 6.1 (MR 6-15)
The numbers of cycles (X; integers) to failure of n = 70 aluminum test coupons under repeated alternating stress at 21,000 psi and 18 cycles/sec. are as follows, arranged in ascending order (the 70 observations are listed in MR Exercise 6-15).
Determine the (1) sample mean, (2) sample variance, (3) minimum, (4) maximum, and (5) sample range of the test results. The following summary quantities have been calculated:
n = 70, Σxi = 98,256, and Σxi² = 149,089,794
6-2 Random Sampling
O Distinguish between population and sample.
O Describe the terms random sample and statistic.
O Explain why selecting a representative random sample is important in research.
O Describe the characteristics of a random sample.
Population vs. Sample
1. Population: The collection of all the elements of a universe of interest. The size of a population may be either (1) small, (2) large but finite, or (3) infinite.
(e.g.) Population size
(1) Students in a statistics class: Small
(2) College students in the United States: Large but finite
(3) College students in the world: Infinite
2. Sample: A set of elements taken from the population under study, i.e., a subset of the population (see Figure 6-1). Since it is impossible and/or impractical to examine the entire population in most cases, a sample is often used for statistical inference about the population.
Figure 6-1 Population versus sample.
Random Sample
A random sample refers to a sample selected from the population under study by a certain chance mechanism to avoid bias (over- or underestimation). Thus, selecting a random sample that properly represents the population is important for valid statistical inference about the population. A random sample of size n is denoted by random variables X1, X2, ..., Xn, where Xi represents the ith observation of the sample. The random sample X1, X2, ..., Xn has the following characteristics:
(1) The Xi's are independent of each other, and
(2) Every Xi has the same probability distribution f(x).
In other words, X1, X2, ..., Xn are independently and identically distributed (i.i.d.). Thus, the joint probability density function of X1, X2, ..., Xn is
f_{X1 X2 ... Xn}(x1, x2, ..., xn) = f(x1) f(x2) ··· f(xn)
Example 6.2
(Random Sample) A public opinion poll on contraception is conducted by calling people's homes, which are randomly selected, between 8 AM and 5 PM on weekdays. Discuss this study design. h Even if the telephone numbers are randomly chosen, the survey sample is biased because most working class people are not at home during the survey time period. Therefore, the survey result is most likely an over- or underestimate.
Exercise 6.2
Defects on sheet metal panels in three categories of sizes (small, medium, and large) are under study. The metal panels are made of the same material. The analyst randomly selects 20 large-size panels (not including small- and mediumsize panels) and draws a conclusion regarding the average number of defects/ft2. Discuss this study design.
Statistic
A statistic is a function of random variables X1, X2, ..., Xn; therefore, a statistic is also a random variable.
(e.g.) Statistic
(1) Sample mean X̄ = Σ_{i=1}^{n} Xi / n
(2) Sample variance S² = Σ_{i=1}^{n} (Xi − X̄)² / (n − 1)
6-3 Stem-and-Leaf Diagrams
O Explain the use of a stem-and-leaf diagram.
O Construct a stem-and-leaf diagram to visualize a set of data.
O Determine the 100kth percentiles, quartiles, interquartile range, median, and mode of a set of data.
O Compare the variability measures variance, range, and interquartile range with each other.
Stem-and-Leaf Diagram
A stem-and-leaf diagram is a graphical display that presents both the individual values and the corresponding frequency distribution simultaneously by using 'stems' and 'leaves.' The following procedure is applied to construct a stem-and-leaf diagram (see Figure 6-2):
Step 1: Determine stems and leaves. The digits of each data value are divided into two parts: (1) the stem for the 'significant' digits (one or two digits in most cases) and (2) the leaf for the 'less significant' digit (usually the last digit). The analyst should exercise his/her own discretion to determine which digits are most significant with consideration of the range of the data. A number of stems between 5 and 20 is commonly used. A stem can be further subdivided as necessary. As an example, the stem 5 can be divided into two stems 5L and 5U for the leaves 0 to 4 and for the leaves 5 to 9, respectively.
Step 2: Arrange the stems and leaves. The stems are arranged in ascending (or descending) order. Then, beside each stem, the corresponding leaves are listed next to each other in a row.
Step 3: Summarize the frequency of leaves for each stem.
Figure 6-2 A stem-and-leaf diagram.
Example 6.3
(Stem-and-Leaf Diagram) In Example 6-1, construct a stem-and-leaf diagram of the test scores. Step 1: Determine stems and leaves. The test scores range from 65 to 98. Each test score consists of two digits and the first digit has higher significance. Thus, the first and last digits of a test score are defined as the stem and leaf of the data, respectively: 6 to 9 for stems and 0 to 9 for leaves. To better represent the distribution of the test scores, the stems are further split into 6U, 7L, 7U, 8L, 8U, 9L, and 9U where L is for the lower half leaves (0 to 4) and U is for the upper half leaves (5 to 9).
Example 6.3 (cont.)
Step 2 and Step 3:
6U | 5 8                       | 2
7L | 0 2 2 3 4                 | 5
7U | 5 5 9                     | 3
8L | 1 3 4 4                   | 4
8U | 5 6 7 8 8 8 8 8           | 8
9L | 0 2 2 4                   | 4
9U | 6 6 7 8                   | 4
Exercise 6.3
In Exercise 6-1, construct a stem-and-leaf diagram of the test results.
Percentile (pk)
The 100kth percentile (0 < k ≤ 1; denoted as pk) of a set of data marks the point below which 100k percent of the ordered data fall. The procedure to find pk from n observations x1, x2, ..., xn is as follows:
Step 1: Arrange the data in ascending order, say x(1), x(2), ..., x(n).
Step 2: Calculate the position number r by using n and k:
r = nk, if n is odd; r = (n + 1)k, if n is even
Step 3: Determine pk based on r:
pk = x(r), if r is an integer
pk = x(⌊r⌋) + [x(⌈r⌉) − x(⌊r⌋)](r − ⌊r⌋), if r is not an integer
(Note) The symbols ⌈r⌉ (called ceiling) and ⌊r⌋ (called floor) denote rounding r up and rounding r down to the nearest integer, respectively.
Quartile (qk)
The following three quartiles divide a set of data into four equal parts in ascending order:
(1) First (lower) quartile: q1 = p0.25
(2) Second quartile (median): q2 = p0.50
(3) Third (upper) quartile: q3 = p0.75
Interquartile Range (IQR)
The interquartile range (denoted as IQR) of a set of data is the difference between the upper and lower quartiles of the data set, i.e., IQR = q3 − q1 = p0.75 − p0.25.
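A direct Python implementation of the percentile rule above, applied to the Example 6.1 scores; note that library routines such as numpy.percentile use slightly different interpolation conventions, so small differences are expected.

```python
# Sketch: the workbook's percentile/quartile/IQR rule.
from math import floor, ceil

def percentile(data, k):
    x = sorted(data)
    n = len(x)
    r = n * k if n % 2 == 1 else (n + 1) * k
    if r == int(r):
        return x[int(r) - 1]                       # x_(r), 1-based order statistic
    lo, hi = x[floor(r) - 1], x[ceil(r) - 1]
    return lo + (hi - lo) * (r - floor(r))

scores = [65, 68, 70, 72, 72, 73, 74, 75, 75, 79, 81, 83, 84, 84, 85,
          86, 87, 88, 88, 88, 88, 88, 90, 92, 92, 94, 96, 96, 97, 98]

q1 = percentile(scores, 0.25)                      # 74.75
q3 = percentile(scores, 0.75)                      # 90.5
print(percentile(scores, 0.8), q1, q3, q3 - q1)    # p0.8 = 92, IQR = 15.75
```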
Comparison of Variability Measures
The variance, range, and interquartile range of a data set have different sensitivities (stabilities): the variance is most sensitive (least stable) and the interquartile range is least sensitive (most stable). For example, Figure 6-3(a) shows that two different data sets with the same range can have different variances, and Figure 6-3(b) shows that two different data sets with the same interquartile range can have different ranges and variances. These illustrations imply that the variance, range, and interquartile range have high, moderate, and low sensitivities (low, moderate, and high stabilities), respectively.
Figure 6-3 Sensitivities of variance, range, and interquartile range: (a) R1 = R2, while σ1² ≠ σ2²; (b) IQR1 = IQR2, while R1 ≠ R2 and σ1² ≠ σ2².
Of the three variability measures, the appropriate variability measure is selected depending on the analyst's judgment on which measure best represents the variability of the data.
Median
The median of a set of data is the value that divides the ranked data set into two equal halves, i.e., p0.50 = q2.
Mode
The mode of a set of data is the value that is observed most frequently. A data set can have no mode, one mode (unimodal), or multiple modes (bimodal, trimodal, etc.), as illustrated in Figure 6-4.
Figure 6-4 Modes of data distributions: (a) no mode; (b) unimodal; (c) bimodal.
Example 6.4
(Percentile, Quartile, Interquartile Range, Median, and Mode) In Example 6-1, determine the (1) 80th percentile, (2) first quartile, (3) interquartile range, (4) median, and (5) mode of the test scores. Note that n = 30 and q3 = 90.5.
► (1) Since n = 30 is even, r = (n + 1)k = 31 × 0.8 = 24.8, which is not an integer. Therefore,
p0.8 = x(24) + [x(25) − x(24)](24.8 − 24) = 92 + (92 − 92)(0.8) = 92
(2) Since r = (n + 1)k = 31 × 0.25 = 7.75 is not an integer,
q1 = p0.25 = x(7) + [x(8) − x(7)](0.75) = 74 + (75 − 74)(0.75) = 74.75
(3) IQR = q3 − q1 = p0.75 − p0.25 = 90.5 − 74.75 = 15.75
(4) Since r = (n + 1)k = 31 × 0.5 = 15.5 is not an integer,
p0.5 = x(15) + [x(16) − x(15)](0.5) = 85 + (86 − 85)(0.5) = 85.5
(5) Mode = 88 (unimodal)
Exercise 6.4
In Exercise 6-1, determine the (1) 90th percentile, (2) third quartile, (3) interquartile range, (4) median, and (5) mode of the test results. Note that n = 70 and q1 = 1,097.75.
6-4 Frequency Distributions and Histograms
O Explain the terms frequency, cumulative frequency, relative frequency, and cumulative relative frequency.
O Construct a frequency table of a set of data.
O Construct a bar graph and histogram to visualize a set of data.
O Explain the term skewness and describe the relationships between the central tendency measures mean, median, and mode depending on the skewness of data.
Frequency
A frequency indicates the number of observations that belong to a bin (class interval, cell, or category). The frequency of the ith bin is denoted by fi.
The sum of the frequencies of the first i bins is called the cumulative frequency (denoted by Fi):
Fi = Σ_{j=1}^{i} fj, i = 1, 2, ..., k
Relative Frequency
A relative frequency is the ratio of a frequency to the total number of observations (n). The relative frequency of the ith bin (denoted by pi) is
pi = fi / n, where n = total number of observations
The sum of the relative frequencies of the first i bins is called the cumulative relative frequency (denoted by Pi):
Pi = Σ_{j=1}^{i} pj, i = 1, 2, ..., k
Example 6.5
(Frequency Measures) In Example 6-1, prepare a frequency table (including frequency, relative frequency, and cumulative relative frequency) of the test scores with five class intervals: below 60, 60 to 69, 70 to 79, 80 to 89, and 90 or above.
► Class Interval (X) | Frequency | Relative Frequency | Cumulative Relative Frequency
X < 60        | 0  | 0.000 | 0.000
60 ≤ X ≤ 69   | 2  | 0.067 | 0.067
70 ≤ X ≤ 79   | 8  | 0.267 | 0.333
80 ≤ X ≤ 89   | 12 | 0.400 | 0.733
X ≥ 90        | 8  | 0.267 | 1.000
Exercise 6.5
In Exercise 6-1, prepare a frequency table (including frequency, relative frequency, and cumulative relative frequency) of the failure test results with five class intervals: below 500, 500 to 999, 1,000 to 1,499, 1,500 to 1,999, and 2,000 or above.
Bar Graph
A bar graph is a graphical display with bars, the length of each bar being proportional to the frequency andlor relative frequency of the corresponding category. A bar graph is used to provide a visual comparison of the frequencies and/or relative frequencies of multiple categories.
Histogram
A histogram is a bar graph in which the length and width of each bar are proportional to the frequency (or relative frequency) and size of the corresponding bin, respectively. The shape of a histogram of a small set of data may vary significantly as the number of bins and corresponding bin width change. As the size of a data set becomes large (say, 75 or above), the shape of the histogram becomes stable. Different histograms can be constructed for the same data set depending on the analyst's judgment. The following procedure may be of use to construct a histogram with bins of equal width for a data set of size n:
Step 1: Determine the number of bins (k). The following formula can be used for k: k ≈ √n.
(Note) A number of bins between 5 and 20 is used in most cases.
Step 2: Determine the width of a bin (w). The bin width is selected based on the range of the data set and the number of bins:
w ≥ R/k = (max − min)/k
Step 3: Determine the limits of the histogram. The lower limit (l) of the first bin and the upper limit (u) of the last bin are slightly lower than the minimum and slightly higher than the maximum, respectively, i.e.,
l ≈ min − (kw − R)/2 and u = l + kw
Step 4: Prepare a frequency table based on the bins determined. Count the number of observations that belong to each bin and calculate relative frequencies accordingly.
Step 5: Develop a histogram. Construct a bar graph by using the frequencies (frequency histogram) and/or relative frequencies (relative frequency histogram) of the bins.
(Bar Graph/Histogram) In Example 6-1, construct a histogram of the test scores by using the following summary quantities: n = 30, minimum = 65, and maximum = 98
► Step 1: Determine the number of bins (k). Since √n = √30 = 5.5, use k = 6.
Step 2: Determine the width of a bin (w). Since R/k = (98 − 65)/6 = 5.5, use w = 6.
Step 3: Determine the limits of the histogram.
l ≈ min − (kw − R)/2 = 65 − (6 × 6 − 33)/2 = 63.5 and u = l + kw = 63.5 + 36 = 99.5
Step 4: Prepare a frequency table based on the bins determined.
Step 5: Develop a histogram.
Exercise 6.6
In Exercise 6-1, construct a histogram of the test results by using the following summary quantities: n = 70, minimum = 375, and maximum = 2,265
Skewness
The skewness (denoted by γ) of a set of data indicates the degree of asymmetry of the data set around the mean. As illustrated in Figure 6-5, a distribution with a long tail to the left and a distribution with a long tail to the right have negative and positive values of skewness, respectively; a symmetric curve has zero skewness.
Figure 6-5 Distributions with different values of skewness (γ): (a) γ < 0 (left-skewed); (b) γ = 0 (symmetric); (c) γ > 0 (right-skewed).
Comparison of Central Tendency Measures
The mean, median, and mode of a data set can be different depending on the skewness of the data distribution, as illustrated in Figure 6-6. The mean shifts toward the direction of skewness (the long tail) with high sensitivity, the median moves with low sensitivity, and the mode stays at the peak of the distribution. Of the three central tendency measures, the appropriate measure is selected depending on the analyst's judgment on which measure best suits the purpose of analysis: the mean for the arithmetic average, the median for relative standing, and the mode for frequency.
Figure 6-6 Comparisons of means, medians, and modes of various distributions: (a) left-skewed (mean < median < mode); (b) symmetric (mean = median = mode); (c) right-skewed (mode < median < mean).
6-5 Box Plots
O Explain the use of a box plot.
O Construct a box plot to visualize a set of data.
Box Plot
A box plot is a graphical display that presents the center, dispersion, symmetry, and outliers of a set of data simultaneously. Note that outliers are observations that are unusually low or high compared to most of the data. A box plot consists of three components (see Figure 6-7):
(1) Box: The left edge, middle line, and right edge of the box indicate the first (q1), second (q2), and third (q3) quartiles, respectively; therefore, the length of the box indicates the interquartile range (IQR) of the data set.
(2) Whiskers: The left whisker extends from the left edge of the box to the smallest data point within 1.5 IQR from q1, and the right whisker extends from the right edge of the box to the largest data point within 1.5 IQR from q3.
(3) Outliers: Points beyond the whiskers are outliers.
Figure 6-7 A box plot.
Example 6.7
(Box Plot) In Example 6-1, construct a box plot of the test scores by using the following summary quantities: minimum = 65, maximum = 98, q, = 74.75, q2 = 85.5, q3 = 90.5, and IQR = 15.75
► Since the minimum = 65 is greater than q1 − 1.5 × IQR = 51.1 and the maximum = 98 is less than q3 + 1.5 × IQR = 114.1, the left and right whiskers extend to 65 and 98, respectively. Thus, there are no outliers in the test scores that exceed 1.5 IQR from the edges of the box.
Exercise 6.7
In Exercise 6-1, construct a box plot of the test results by using the following summary quantities: minimum = 375, maximum = 2,265, q1 = 1,097.75, q2 = 1,436.5, q3 = 1,735, and IQR = 637.25
6-6 Time Sequence Plots
O Explain the use of a time sequence plot.
Time Sequence Plot
A time sequence plot is a graphical display that presents a time sequence (time series; data recorded in the order of time) to show the trend of the data over time. As an example, Figure 6-8 shows two time sequence plots of a company's sales over three years, by year and by quarter. Each horizontal axis denotes time (years or quarters; t) and each vertical axis denotes sales (in a year or quarter; x). Figure 6-8(a) displays that the annual sales are increasing over the years, and Figure 6-8(b) indicates that the sales have a cyclic variation by quarter: the first- and second-quarter sales are higher than the sales of the other quarters.
Figure 6-8 Time sequence plots of a company's sales: (a) annual sales; (b) quarterly sales.
6-7 Probability Plots
O Explain the use of a probability plot.
Probability Plot
A probability plot is a graphical display that presents the cumulative relative frequencies of the ordered data x(1), x(2), ..., x(n) on the probability paper of a hypothesized distribution (such as the normal, Weibull, and gamma distributions) to examine whether the hypothesized distribution fits the data set. For example, Figure 6-9 shows a normal probability plot of 20 ordered observations x(j) against the corresponding cumulative relative frequencies (j − 0.5)/n. Statistical software is usually used to produce a probability plot. If the plotted points lie closely along a straight line (chosen subjectively), the adequacy of the hypothesized distribution is supported. The probability plot method is useful to examine if the data conform to a hypothesized distribution, but it lacks objectivity by relying on subjective judgment. A formal method to test the goodness-of-fit of a probability distribution is presented in Section 9-7.
Figure 6-9 A normal probability plot.
"
~
~
.
~
~
~
MINITAB Applications
Examples 6.1 and 6.4
(Descriptive Statistics)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the data into the worksheet.
(3) Choose Stat > Basic Statistics > Display Descriptive Statistics. In Variables, select Score. Then click OK.
Examples 6.1 and 6.4 (cont.)
Descriptive Statistics (Minitab output for Score, including 95% confidence intervals for sigma and for the median)
Example 6.3
(Stem-and-Leaf Diagram)
(1) Choose Graph > Stem-and-Leaf. In Variables, select Score. In Increment, enter the difference between the smallest values of adjacent stems. Then click OK.
Character Stem-and-Leaf Display
Stem-and-leaf of Score, Leaf Unit = 1.0, N = 30
Example 6.5
(Frequency Measures)
(1) Choose Graph > Histogram. In Graph variables, select Score.
(2) Under Type of Histogram, check Frequency or Cumulative Frequency. Under Type of Intervals, check Cutpoint. Under Definition of Intervals, check Midpoint/cutpoint positions and enter the set of cutpoints.
Example 6.6
(Bar Graph/Histogram)
(1) Choose Graph > Histogram. In Graph variables, select Score.
(2) Under Type of Histogram, check Frequency or Cumulative Frequency. Under Type of Intervals, check Cutpoint. Under Definition of Intervals, check Midpoint/cutpoint positions and enter the set of cutpoints (64, 70, 76, 82, 88, 94, 100).
Example 6.7
(Box Plot)
Answers to Exercises
Exercise 6.1
(Mean, Variance, and Range)
(1) x̄ = 98,256/70 = 1,403.7
(2) s² = (149,089,794 − 98,256²/70) / (70 − 1) = 161,913.9 = 402.4²
(3) Minimum = 375
(4) Maximum = 2,265
(5) R = 2,265 − 375 = 1,890
(Random Sample) Since the same material is used for the sheet metal panels in various sizes, the number of defects per unit panel area will not be significantly different depending on the size of the panel. Thus, the random sample selected only from the large-size panels will properly represent the entire group of the sheet metal panels to study the average number of defects/ft².
Exercise 6.3
(Stem-and-Leaf Diagram)
Step 1: Determine stems and leaves. The test results range from 375 to 2,265. For simplicity, round each data value at the 10s place and use only the 1,000s and 100s digits for the plot. Since the 1,000s digit is most meaningful, the 1,000s and 100s digits of a test result are defined as the stem and leaf of the data, respectively: 0 to 2 for stems and 0 to 9 for leaves. However, the number of stems would be only three; therefore, the stems are further divided into 0f, 0s, 0e, 1z, 1t, 1f, 1s, 1e, 2z, and 2t, where 'z' is for the leaves 0 and 1, 't' is for 2 and 3, 'f' is for 4 and 5, 's' is for 6 and 7, and 'e' is for 8 and 9.
Step 2:
0s | 7                          | 1
0e | 8 8 8 8 9 9 9              | 7
1z | 0 0 0 0 0 0 1 1 1 1 1 1 1  | 13
1t | 2 2 2 3 3 3 3 3 3 3 3      | 11
1f | 4 4 5 5 5 5 5 5 5 5 5      | 11
1s | 6 6 6 6 6 6 6 7 7          | 9
1e | 8 8 8 8 8 8 8 9 9 9 9 9    | 12
2z | 0 1 1                      | 3
2t | 2 3                        | 2
Step 3: Summarize the frequency of leaves for each stem as shown above.
Exercise 6.4
(Percentile, Quartile, Interquartile Range, Median, and Mode)
(1) Since n = 70 is even, r = (n + 1)k = 71 × 0.9 = 63.9, which is not an integer. Therefore,
p0.9 = x(63) + [x(64) − x(63)](63.9 − 63) = 1,890 + (1,910 − 1,890)(0.9) = 1,908
(2) Since r = (n + 1)k = 71 × 0.75 = 53.25 is not an integer,
q3 = p0.75 = x(53) + [x(54) − x(53)](0.25) = 1,730 + (1,750 − 1,730)(0.25) = 1,735
(3) IQR = q3 − q1 = p0.75 − p0.25 = 1,735 − 1,097.75 = 637.25
(4) Since r = (n + 1)k = 71 × 0.5 = 35.5 is not an integer,
p0.5 = x(35) + [x(36) − x(35)](0.5) = 1,421 + (1,452 − 1,421)(0.5) = 1,436.5
(5) Modes = 1,102, 1,315, and 1,750 (trimodal)
1 (Frequency Measures)
Exercise 6.6
(Bar Graph/Histogram)
Step 1: Determine the number of bins (k). Since √n = √70 = 8.4, use k = 9.
Step 2: Determine the width of a bin (w). Since R/k = (2,265 − 375)/9 = 210, use w = 220.
Step 3: Determine the limits of the histogram.
l ≈ min − (kw − R)/2 = 375 − (9 × 220 − 1,890)/2 = 330 and u = l + kw = 330 + 9 × 220 = 2,310
Exercise 6.6 (cont.)
Step 4: Prepare a frequency table based on the bins determined (columns: class interval (X), frequency, relative frequency, cumulative relative frequency).
Step 5: Develop a histogram of the test results.
Exercise 6.7
(Box Plot) Since the minimum = 375 is greater than q1 − 1.5 × IQR = 141.9 and the maximum = 2,265 is less than q3 + 1.5 × IQR = 2,690.9, the left and right whiskers extend to the minimum and maximum, respectively. Thus, there are no outliers in the test results that exceed 1.5 IQR from the edges of the box.
Point Estimation of Parameters
7-1 Introduction
7-2 General Concepts of Point Estimation
7-3 Methods of Point Estimation
7-4 Sampling Distributions
7-5 Sampling Distributions of Means
Answers to Exercises

7-1 Introduction
O Describe the terms parameter, point estimator, and point estimate.
O Identify two major areas of statistical inference.
O Distinguish between point estimation and interval estimation.
O Determine the point estimate of a parameter.
Parameter (θ)
A parameter (denoted by θ) represents a characteristic of the population under study. It is constant but unknown in most cases.
(e.g.) Parameters: mean (μ), variance (σ²), proportion (p), correlation coefficient (ρ), and regression coefficient (β)
Statistical Inference
Statistical inference refers to making decisions or drawing conclusions about a population by analyzing a sample from the population. Two major areas of statistical inference can be defined:
1. Parameter estimation: Estimates the value of θ. (e.g.) μ̂ = 150
2. Hypothesis testing: Tests an assertion about θ. (e.g.) H0: μ = 150
Parameter Estimation
Parameter estimation is further divided into two areas:
1. Point estimation: Estimates the exact location of θ. (e.g.) μ̂ = 150
2. Interval estimation: Establishes an interval that includes the true value of θ with a designated probability. (e.g.) 145 < μ < 155
Point Estimator (θ̂)
A point estimator (denoted by θ̂) is a statistic used to estimate θ. It is a random variable because a statistic is a random variable.
(e.g.) Point estimator of μ: sample mean X̄ = Σ_{i=1}^{n} Xi / n
A single numerical value of θ̂ determined by a particular random sample is called a point estimate of θ.
Example 7.1
(Point Estimate) Suppose that the life length of an INFINITY light bulb (X) has a normal distribution with mean μ and variance σ². A random sample of size n = 25 light bulbs was examined and the sum of the life lengths was 14,900 hours. Estimate the mean life length (μ) of an INFINITY light bulb.
► μ̂ = x̄ = 14,900/25 = 596 hours
Exercise 7.1
The line width (X) of a tool used for semiconductor manufacturing is assumed to be normally distributed with mean μ and variance σ². A random sample of size n = 20 tools was measured and the sum of the line width measurements was 13.2 micrometers (μm). Estimate the mean line width (μ) of the tool.
7-2 General Concepts of Point Estimation
O Explain the terms unbiased estimator, minimum variance unbiased estimator (MVUE), standard error, and mean square error (MSE) of an estimator.
O Select the appropriate point estimator of θ in terms of unbiasedness, minimum variance, and minimum mean square error.
Unbiased Estimator
An unbiased estimator is a point estimator (θ̂) whose expected value is equal to the true value of θ, i.e., E(θ̂) = θ. Note that several unbiased estimators can be defined for a single parameter θ. As shown in Figure 7-1, the bias of θ̂ is the difference of the expected value of θ̂ from the true value of θ: bias = E(θ̂) − θ.
Figure 7-1 Bias of a point estimator θ̂.
Example 7.2
(Unbiased Estimator) Let X1, X2, ..., Xn denote a random sample of size n from a probability distribution with E(X) = μ and V(X) = σ². Show if the sample mean X̄ is an unbiased estimator of the population mean μ.
► E(X̄) = E[(X1 + X2 + ··· + Xn)/n] = [E(X1) + E(X2) + ··· + E(Xn)]/n = nμ/n = μ
Since E(X̄) = μ, X̄ is an unbiased estimator of μ.
Exercise 7.2 (MR 7-2)
Let X1, X2, ..., X7 denote a random sample from a population having mean μ and variance σ². Show if the following estimator is an unbiased estimator of μ.
A minimum variance unbiased estimator (MVUE) of θ is the unbiased θ̂ with the smallest variance (see Figure 7-2). By using the MVUE of θ, the unknown parameter θ can be estimated accurately and precisely.
Figure 7-2 Variances of unbiased estimators of θ.
Example 7.3
(Minimum Variance Unbiased Estimator) Let X1, X2, ..., Xn denote a random sample of size n (>1) from a population with E(X) = μ and V(X) = σ². Both X1 and X̄ are unbiased estimators of μ because their expected values are equal to μ. Of the two estimators, which is preferred to estimate μ and why?
► The variances of X1 and X̄ are V(X1) = σ² and V(X̄) = σ²/n.
Since V(X1) = σ² > V(X̄) = σ²/n for n > 1, X̄ is preferred to estimate μ because it gives higher precision.
Exercise 7.3
Let X1, X2, ..., Xn denote a random sample of size n (>3) from a population with E(X) = μ and V(X) = σ². Both (X1 + X2 + X3)/3 and X̄ are unbiased estimators of μ because their expected values are equal to μ. Of the two estimators, which is preferred to estimate μ and why?
Standard Error (σθ̂)
The standard error of a point estimator (denoted by σθ̂) is the standard deviation of the point estimator θ̂. It can be used as a measure to indicate the precision of parameter estimation. If σθ̂ includes unknown parameters that can be estimated, use of the estimates of the parameters in calculating σθ̂ produces an estimated standard error (denoted by sθ̂ or se(θ̂)).
Example 7.4
In Example 7-1, X has a normal distribution with mean μ and variance σ², and a random sample of size n = 25 was examined.
1. (Standard Error) Assuming σ² = 40², determine the standard error of the sample mean (X̄). Note that X̄ = Σ_{i=1}^{n} Xi / n ~ N(μ, σ²/n).
► σ_X̄ = σ/√n = 40/√25 = 8.0 hrs.
2. (Estimated Standard Error) Suppose that σ² is unknown and the sample variance is sX² = 35². Calculate the estimated standard error of the sample mean (X̄).
► s_X̄ = sX/√n = 35/√25 = 7.0 hrs.
Exercise 7.4
In Exercise 7-1, X is normally distributed with mean μ and variance σ², and a random sample of size n = 20 was measured.
1. Assuming σ² = 0.05², determine the standard error of the sample mean (X̄).
2. Suppose that σ² is unknown and the sample variance is sX² = 0.04². Calculate the estimated standard error of the sample mean (X̄).
Mean Square Error (MSE) of Estimator
The mean square error (MSE) of a point estimator θ̂ is the expected squared difference between θ̂ and θ:
MSE(θ̂) = E(θ̂ − θ)² = E[θ̂ − E(θ̂)]² + [θ − E(θ̂)]² = V(θ̂) + (bias)² = V(θ̂) for an unbiased θ̂, because bias = 0
Note that a biased estimator of θ with the smallest MSE is sometimes used for precise estimation of θ while sacrificing the accuracy of estimation of θ (see Figure 7-3).
Figure 7-3 A biased
r
unbiased
Example 7.5
6 , with a smaller mean square error than that of the
6 ,.
(Mean Square Error of Estimator) Suppose that the means and variances of Θ̂1 and Θ̂2 are E(Θ̂1) = θ, E(Θ̂2) = 0.9θ, V(Θ̂1) = 5, and V(Θ̂2) = 4, respectively. Which estimator is preferred to estimate θ and why?
For Θ̂1,
bias_Θ̂1 = E(Θ̂1) - θ = θ - θ = 0 and MSE(Θ̂1) = V(Θ̂1) + (bias_Θ̂1)² = 5 + 0 = 5
For Θ̂2,
bias_Θ̂2 = E(Θ̂2) - θ = 0.9θ - θ = -0.1θ and MSE(Θ̂2) = V(Θ̂2) + (bias_Θ̂2)² = 4 + 0.01θ²
By subtracting MSE(Θ̂2) from MSE(Θ̂1),
MSE(Θ̂1) - MSE(Θ̂2) = 5 - (4 + 0.01θ²) = 1 - 0.01θ²
The preferred estimator of θ for precise estimation depends on the range of θ as follows:
Θ̂1 if θ ≥ 10, because MSE(Θ̂1) ≤ MSE(Θ̂2) and Θ̂1 is unbiased
Θ̂2 if θ < 10, because MSE(Θ̂1) > MSE(Θ̂2)
Exercise 7.5
Let X1, X2, ..., X5 denote a random sample of size n = 5 from a population with E(X) = 70 and V(X) = 5². Two estimators of μ are proposed. Of the two estimators, which is preferred to estimate μ and why?

7-3 Methods of Point Estimation
□ Explain the utility of the maximum likelihood method.
□ Find a point estimator of θ by using the maximum likelihood method.

Maximum Likelihood Method
The method of maximum likelihood is used to derive a point estimator of θ. This method finds the maximum likelihood estimator of θ, which maximizes the likelihood function of a random sample X1, X2, ..., Xn:
L(θ) = f(x1; θ) f(x2; θ) ... f(xn; θ), where X1, X2, ..., Xn ~ i.i.d. f(x; θ)

(Maximum Likelihood Estimator) Let X1, X2, ..., Xn denote a random sample of size n from an exponential distribution with parameter λ. Find the maximum likelihood estimator of λ.
The probability density function of an exponential distribution is f(x; λ) = λe^(-λx). Thus, the likelihood function of X1, X2, ..., Xn is
L(λ) = Π λe^(-λxi) = λⁿ exp(-λ Σ xi)
Then, the log likelihood function is
ln L(λ) = n ln λ - λ Σ xi
Now, the derivative of ln L(λ) is
d ln L(λ)/dλ = n/λ - Σ xi
By equating this derivative of ln L(λ) to zero, the point estimator of λ which maximizes L(λ) is
λ̂ = n / Σ Xi = 1/X̄
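The closed-form result λ̂ = 1/X̄ can also be checked numerically. The sketch below is an illustration added to this guide (not part of the workbook); the simulated data set and the use of SciPy's bounded scalar minimizer are assumptions made for the demonstration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = rng.exponential(scale=1 / 0.5, size=200)   # simulated data, true lambda = 0.5

def neg_log_lik(lam):
    # ln L(lambda) = n ln(lambda) - lambda * sum(x); minimize its negative
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10), method="bounded")
print("numerical MLE:", res.x)
print("1 / x-bar    :", 1 / x.mean())   # closed-form MLE; the two agree closely
```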
Exercise 7.6 (MR 7-21)
Let X1, X2, ..., Xn represent a random sample of size n from a geometric distribution with parameter p. Find the maximum likelihood estimator of p.

7-4 Sampling Distribution
□ Explain the term sampling distribution.

Sampling Distribution
A sampling distribution is the probability distribution of a statistic (a function of random variables, such as the sample mean or sample variance). The sampling distribution of a statistic depends on the following:
(1) The distribution of the population
(2) The size of the sample
(3) The method of sample selection
(e.g.) Sampling distribution of a sample mean
7-5 Sampling Distributions of Means
□ Explain the central limit theorem (CLT).
□ Determine the distribution of a sample mean by applying the central limit theorem.

Sampling Distribution of X̄
Suppose that a random sample of size n is taken from a normal distribution with mean μ and variance σ². Then, the sampling distribution of the sample mean is
X̄ = (1/n) Σ Xi ~ N(μ, σ²/n), where X ~ N(μ, σ²)
(Derivation) Since X1, X2, ..., Xn are independent and normally distributed with the same E(X) = μ and V(X) = σ², the distribution of X̄ is normal with mean
E(X̄) = E[(X1 + X2 + ... + Xn)/n] = (1/n)(μ + μ + ... + μ) = μ
and variance
V(X̄) = V[(X1 + X2 + ... + Xn)/n] = (1/n²)[V(X1) + V(X2) + ... + V(Xn)] = (1/n²)(nσ²) = σ²/n.
Central Limit Theorem (CLT)
Let X1, X2, ..., Xn denote a random sample of size n taken from a population (X) with mean μ and variance σ². Then, the limiting form of the distribution of the sample mean X̄ is
X̄ = (1/n) Σ Xi ~ N(μ, σ²/n), equivalently Z = (X̄ - μ)/(σ/√n) ~ N(0, 1),
as n approaches ∞. This normal approximation for X̄ is called the central limit theorem (CLT).
As displayed in Figure 7-4, the distributions of sample means from uniform, binomial, and exponential populations become normal as the sample size n becomes large. In most cases, if n ≥ 30, the normal approximation of X̄ will be satisfactory regardless of the distribution of X. If n < 30 but the distribution of X is close to the normal, the normal approximation of X̄ will still hold.
Figure 7-4 Sampling distribution of X̄: (a) X ~ uniform; (b) X ~ binomial with p = 0.2; (c) X ~ exponential with λ = 1.
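The behavior shown in Figure 7-4 can be reproduced with a small simulation. The Python sketch below is an added illustration (the skewed exponential population and the sample sizes are arbitrary choices, and NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 1.0   # X ~ exponential(lambda = 1), so E(X) = V(X) = 1

for n in (2, 10, 30, 100):
    # 50,000 sample means, each based on n observations
    xbar = rng.exponential(scale=1 / lam, size=(50_000, n)).mean(axis=1)
    skew = ((xbar - xbar.mean()) ** 3).mean() / xbar.std() ** 3
    print(f"n = {n:4d}: mean = {xbar.mean():.3f}, var = {xbar.var():.4f} "
          f"(sigma^2/n = {1 / n:.4f}), skewness = {skew:.3f}")

# As n grows the variance approaches sigma^2/n and the skewness approaches 0,
# i.e., the distribution of X-bar approaches N(mu, sigma^2/n).
```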
Based on the central limit theorem, the sampling distribution of X̄, where X is normal or non-normal, is as follows:
1. Case 1: Normal population, X ~ N(μ, σ²)
X̄ = (1/n) Σ Xi ~ N(μ, σ²/n) exactly, for any n
2. Case 2: Non-normal population with μ and σ²
(1) Case 2.1: Normal approximation applicable
X̄ = (1/n) Σ Xi ~ N(μ, σ²/n) approximately, if n ≥ 30 or the distribution of X is close to the normal
(2) Case 2.2: Normal approximation inapplicable
It would be difficult to find the distribution of X̄ if n < 30 and the distribution of X deviates significantly from the normal. In this case, use non-parametric statistics for statistical inference (see Chapter 15).

Example 7.7
(Central Limit Theorem; Sampling Distribution of Sample Mean) Suppose that the waiting time (X; unit: min.) of a customer to pick up his/her prescription at a drug store follows an exponential distribution with E(X) = 20 min. and V(X) = 5². A random sample of size n = 40 customers is observed. What is the distribution of the sample mean?
Since n ≥ 30, normal approximation is applicable to X̄ even if X is exponentially distributed. Thus, the sampling distribution of X̄ is approximately N(20, 5²/40).

Exercise 7.7
Suppose that the number of messages (X) arriving at a computer server follows a Poisson distribution with E(X) = 10 messages/hour and V(X) = 10. The number of messages per hour was recorded for n = 48 hours. What is the distribution of the sample mean?
Answers to Exercises
Exercise 7.1
1. (Point Estimate)

Exercise 7.2
(Unbiased Estimator) Since E(Θ̂) = μ, Θ̂ is an unbiased estimator of μ.

Exercise 7.3
(Minimum Variance Unbiased Estimator) The variances of (X1 + X2 + X3)/3 and X̄ are
V[(X1 + X2 + X3)/3] = σ²/3 and V(X̄) = σ²/n
Since V[(X1 + X2 + X3)/3] = σ²/3 > V(X̄) = σ²/n for n > 3, X̄ is preferred as the point estimator of μ for more accurate and precise estimation of μ.

Exercise 7.4
1. (Standard Error) σ_X̄ = σ/√n = 0.05/√20 ≈ 0.011
2. (Estimated Standard Error) s_X̄ = s_X/√n = 0.04/√20 ≈ 0.009
Exercise 7.5
(Mean Square Error of Estimator) For Θ̂1,
bias_Θ̂1 = E(Θ̂1) - μ = 70 - 70 = 0 and MSE(Θ̂1) = 5.0
For Θ̂2, MSE(Θ̂2) = 9.4.
Since MSE(Θ̂1) = 5.0 < MSE(Θ̂2) = 9.4, Θ̂1 is selected for more accurate and precise estimation of μ.
Exercise 7.6
(Maximum Likelihood Estimator) The probability mass function of a geometric distribution is
f(x; p) = (1 - p)^(x-1) p
Thus, the likelihood function of X1, X2, ..., Xn is
L(p) = pⁿ (1 - p)^(Σ xi - n)
Then, the log likelihood function is
ln L(p) = n ln p + (Σ xi - n) ln(1 - p)
Now, the derivative of ln L(p) is
d ln L(p)/dp = n/p - (Σ xi - n)/(1 - p)
By equating the derivative of ln L(p) to zero, the point estimator of p is
p̂ = n / Σ Xi = 1/X̄
The maximum likelihood estimator of p is the reciprocal of X̄.
Exercise 7.7
(Central Limit Theorem; Sampling Distribution of Sample Mean) Since n = 48 ≥ 30, normal approximation is applicable to X̄ even if X follows a Poisson distribution. Therefore, the approximate distribution of X̄ is N(10, 10/48).

8 Statistical Intervals for a Single Sample
8-1 Introduction
8-2 Confidence Interval on the Mean of a Normal Distribution, Variance Known
8-3 Confidence Interval on the Mean of a Normal Distribution, Variance Unknown
8-4 Confidence Interval on the Variance and Standard Deviation of a Normal Population
8-5 A Large-Sample Confidence Interval for a Population Proportion
8-6 A Prediction Interval for a Future Observation
8-7 Tolerance Intervals for a Normal Distribution
Summary of Statistical Intervals for a Single Sample
MINITAB Applications
Answers to Exercises

8-1 Introduction
□ Distinguish between confidence, prediction, and tolerance intervals.

Statistical Intervals
While point estimation estimates the exact location of a parameter (θ), interval estimation establishes bounds of plausible values for θ. Three types of statistical intervals are defined:
1. Confidence Interval (CI): Bounds a parameter of the population distribution.
(e.g.) A 90% CI on μ where X ~ N(μ, σ²) indicates that the CI contains the value of μ with 90% confidence (see Section 8-2).
2. Prediction Interval (PI): Bounds a future observation.
(e.g.) A 90% PI on a new observation where X ~ N(μ, σ²) indicates that the PI contains a new observation with 90% confidence (see Section 8-6).
3. Tolerance Interval (TI): Bounds a selected proportion of the population distribution.
(e.g.) A 95% TI on X with 90% confidence indicates that the TI contains 95% of X values with 90% confidence (see Section 8-7).
8-2 Confidence Interval on the Mean of a Normal Distribution, Variance Known
□ Interpret a 100(1 - α)% confidence interval (CI).
□ Explain the relationship between the length of a CI and the precision of estimation.
□ Determine a sample size to satisfy a predetermined level of error (E) in estimating μ.
□ Identify the point estimator of μ when σ² is known and the sampling distribution of the point estimator.
□ Establish a 100(1 - α)% CI on μ when σ² is known.
Confidence Interval
A 100(1 - α)% confidence interval on a parameter θ has lower and/or upper bounds (l ≤ θ ≤ u), which are computed by using a sample from the population. Since different samples will produce different values of l and u, the lower- and upper-confidence limits are considered values of random variables L and U which satisfy the following:
P(L ≤ θ ≤ U) = 1 - α, 0 ≤ α ≤ 1
Since L and U are random variables, a CI is a random interval. A 100(1 - α)% CI indicates that, if CIs are established from an infinite number of random samples, 100(1 - α)% of the CIs will contain the true value of θ. The types of confidence intervals include:
1. Two-Sided CI: Specifies both the lower- and upper-confidence limits of θ, such as l ≤ θ ≤ u.
(e.g.) 730 ≤ μX ≤ 750 hrs, where X = life length of an INFINITY light bulb
2. One-Sided CI: Defines either the lower- or upper-confidence limit of θ.
(e.g.) 730 ≤ μX or μX ≤ 750 hrs, where X = life length of an INFINITY light bulb
Length of CI and Precision of Estimation
The length of a CI refers to the distance between the upper and lower limits (u - l). The wider the CI, the more confident we are that the interval actually contains the true value of θ (see Figure 8-1), but the less informed we are about the true value of θ.
Figure 8-1 Confidence intervals on the mean life length (θ = μX) of an INFINITY light bulb with selected levels of confidence. The length of a CI on θ is inversely related to the precision of estimation on θ: the wider the CI, the less precise the estimation of θ.
Sample Size Selection
The distance of a point estimate of θ from the true value of θ represents an error (denoted by E) in parameter estimation:
E = |θ̂ - θ| = |x̄ - μ| in estimating μ
To establish a 100(1 - α)% CI on μ whose error does not exceed a predefined level E, the sample size is determined by the formula
n = (z_α/2 σ / E)²   (in case n is not an integer, round up the value)
Inference Context
Parameter of interest: μ
Point estimator of μ: X̄ ~ N(μ, σ²/n), σ² known
Standardized statistic: Z = (X̄ - μ)/(σ/√n) ~ N(0, 1)

Confidence Interval Formula
A 100(1 - α)% CI on μ when σ² is known is
X̄ - z_α/2 σ/√n ≤ μ ≤ X̄ + z_α/2 σ/√n   for a two-sided CI
X̄ - z_α σ/√n ≤ μ                       for a lower-confidence bound
μ ≤ X̄ + z_α σ/√n                       for an upper-confidence bound
(Derivation) Two-sided confidence interval on μ, σ² known
By using Z = (X̄ - μ)/(σ/√n) ~ N(0, 1),
P(L ≤ μ ≤ U) = 1 - α  ⇒  P(-z_α/2 ≤ Z ≤ z_α/2) = 1 - α
Therefore,
L = X̄ - z_α/2 σ/√n and U = X̄ + z_α/2 σ/√n
Note that, for a one-sided CI, use z_α instead of z_α/2 to derive the corresponding limit.
Example 8.1
Suppose that the life length of an INFINITY light bulb (X; unit: hour) follows the normal distribution N(μ, 40²). A random sample of n = 30 bulbs is tested as shown below, and the sample mean is found to be x̄ = 780 hours.
[Life length data table for the n = 30 bulbs]
1. (Confidence Interval on μ, σ² Known; Two-Sided CI) Construct a 95% two-sided confidence interval on the mean life length (μ) of an INFINITY light bulb.
1 - α = 0.95 ⇒ α = 0.05
95% two-sided CI on μ: x̄ - z_0.025 σ/√n ≤ μ ≤ x̄ + z_0.025 σ/√n
780 - 1.96(40/√30) ≤ μ ≤ 780 + 1.96(40/√30)
765.7 ≤ μ ≤ 794.3 hrs
2. (Sample Size Selection) Find a sample size n to construct a two-sided confidence interval on μ with an error E = 20 hours from the true mean life length. Use α = 0.05.
n = (z_0.025 σ/E)² = (1.96 × 40/20)² = 15.4 ⇒ n = 16
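Both calculations can also be scripted. The Python sketch below is an added illustration (not the workbook's Excel/Minitab procedure); it assumes SciPy is available and simply plugs in the summary numbers from Example 8.1.

```python
from math import sqrt, ceil
from scipy.stats import norm

xbar, sigma, n, alpha = 780.0, 40.0, 30, 0.05
z = norm.ppf(1 - alpha / 2)                    # z_0.025 = 1.96

half_width = z * sigma / sqrt(n)
print(f"95% CI on mu: ({xbar - half_width:.1f}, {xbar + half_width:.1f})")  # ~ (765.7, 794.3)

E = 20.0                                       # target error in hours
print("required sample size:", ceil((z * sigma / E) ** 2))                  # 16
```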
Exercise 8.1 (MR 8-10)
The diameter of a hole (X; unit: in.) for a cable harness is normal with σ² = 0.01². A random sample of n = 10 yields an average diameter (x̄) of 1.5045 in.
1. Construct a 99% upper-confidence bound on the mean diameter (μ) of the hole.
2. Find a sample size n to construct a two-sided confidence interval on μ with an error E = 0.003 inch from the true mean hole diameter. Use α = 0.05.
8-3 Confidence Interval on the Mean of a Normal Distribution, Variance Unknown
□ Explain the characteristics of a t distribution and read the t table.
□ Identify the point estimator of μ when σ² is unknown and the sampling distribution of the point estimator.
□ Establish a 100(1 - α)% CI on μ when σ² is unknown.

t Distribution
The probability density function of a t distribution with v degrees of freedom is
f(x) = Γ[(v + 1)/2] / [√(πv) Γ(v/2)] · (1 + x²/v)^(-(v+1)/2), -∞ < x < ∞
The mean and variance of the t distribution are E(X) = 0 and V(X) = v/(v - 2) for v > 2. A t distribution is symmetric about zero, unimodal, and bell-shaped like the standard normal, but has thicker tails than the standard normal (see Figure 8-4 in MR). As v approaches ∞, the t distribution approaches the standard normal.

t Table
The t table (see Appendix Table IV in MR) provides 100α upper-tail percentage points (t_α,v) of t distributions with various degrees of freedom (v): P(T_v > t_α,v) = α. Since a t distribution is symmetric about zero, t_1-α,v = -t_α,v.
(e.g.) Reading the t table
(1) P(T12 > t) = 0.05 ⇒ t = t_0.05,12 = 1.782
(2) P(T12 < t) = 0.05 ⇒ P(T12 > t) = 0.95 ⇒ t = t_0.95,12 = t_1-0.05,12 = -t_0.05,12 = -1.782

Inference Context
Parameter of interest: μ
Point estimator of μ: X̄ ~ N(μ, σ²/n), σ² unknown
Standardized statistic: T = (X̄ - μ)/(S/√n) ~ t(n - 1)

Confidence Interval Formula
A 100(1 - α)% CI on μ when σ² is unknown is
X̄ - t_α/2,v S/√n ≤ μ ≤ X̄ + t_α/2,v S/√n   for a two-sided CI
X̄ - t_α,v S/√n ≤ μ                         for a lower-confidence bound
μ ≤ X̄ + t_α,v S/√n                         for an upper-confidence bound
where v = n - 1.
(Derivation) Two-sided confidence interval on μ, σ² unknown
By using T = (X̄ - μ)/(S/√n) ~ t(n - 1),
P(-t_α/2,n-1 ≤ T ≤ t_α/2,n-1) = 1 - α
Therefore,
L = X̄ - t_α/2,n-1 S/√n and U = X̄ + t_α/2,n-1 S/√n
Note that, for a one-sided CI, use t_α instead of t_α/2 to derive the corresponding limit.
Large Sample CI
If the sample size is large (n ≥ 30), the t-based CI formula on μ is similar to the z-based CI formula on μ below, regardless of whether the underlying population is normal or non-normal, according to the central limit theorem (presented in Section 7-5):
X̄ - z_α/2 S/√n ≤ μ ≤ X̄ + z_α/2 S/√n

Example 8.2
(Confidence Interval on μ, σ² Unknown; Two-Sided CI) For the light bulb life length data in Example 8.1, the following quantities have been obtained: n = 30, x̄ = 780, and s² = 40². Assume that the variance σ² is unknown. Construct a 95% two-sided confidence interval on the mean life length (μ) of an INFINITY light bulb.
1 - α = 0.95 ⇒ α = 0.05; v = n - 1 = 30 - 1 = 29
95% two-sided CI on μ: x̄ - t_0.025,29 s/√n ≤ μ ≤ x̄ + t_0.025,29 s/√n
780 - 2.045(40/√30) ≤ μ ≤ 780 + 2.045(40/√30)
765.1 ≤ μ ≤ 794.9 hrs
Note that this t-based CI is wider than the corresponding z-based CI, 765.7 ≤ μ ≤ 794.3, in Example 8.1.
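For the variance-unknown case, the t-based interval can be checked the same way. This is again an illustrative Python sketch (SciPy assumed), not the workbook's Minitab procedure.

```python
from math import sqrt
from scipy.stats import t

xbar, s, n, alpha = 780.0, 40.0, 30, 0.05
tcrit = t.ppf(1 - alpha / 2, df=n - 1)     # t_0.025,29 = 2.045

half_width = tcrit * s / sqrt(n)
print(f"95% t-based CI on mu: ({xbar - half_width:.1f}, {xbar + half_width:.1f})")
# ~ (765.1, 794.9), slightly wider than the z-based interval
```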
Exercise 8.2 (MR 8-23)
An Izod impact test was performed on n = 20 specimens of a PVC pipe product. The sample average and sample standard deviation were x̄ = 1.25 and s = 0.25, respectively. Assume that Izod impact strength (X) is normally distributed. Construct a 95% upper-confidence bound on the mean Izod impact strength (μ).

8-4 Confidence Interval on the Variance and Standard Deviation of a Normal Population
□ Explain the characteristics of a χ² distribution.
□ Read the χ² table.
□ Identify the point estimator of σ² and the sampling distribution of the point estimator.
□ Establish a 100(1 - α)% CI on σ².

χ² Distribution
The probability density function of a χ² distribution with v degrees of freedom is
f(x) = x^(v/2 - 1) e^(-x/2) / [2^(v/2) Γ(v/2)], x > 0
The mean and variance of the χ² distribution are E(X) = v and V(X) = 2v. A χ² distribution is unimodal and skewed to the right (see Figure 8-8 in MR). As v approaches ∞, the χ² distribution approaches a normal distribution.

χ² Table
The χ² table (see Appendix Table III in MR) provides 100α upper-tail percentage points (χ²_α,v) of χ² distributions with various degrees of freedom (v), i.e., P(χ²_v > χ²_α,v) = α.
(e.g.) Reading the χ² table
(1) P(χ²_14 > x²) = 0.1 ⇒ x² = χ²_0.1,14 = 21.06
(2) P(χ²_14 < x²) = 0.1 ⇒ P(χ²_14 > x²) = 0.9 ⇒ x² = χ²_0.9,14 = 7.79

Inference Context
Parameter of interest: σ²
Point estimator of σ²: S² = Σ(Xi - X̄)²/(n - 1), where X1, X2, ..., Xn ~ i.i.d. N(μ, σ²)

Confidence Interval Formula
A 100(1 - α)% CI on σ² is
(n - 1)S²/χ²_α/2,v ≤ σ² ≤ (n - 1)S²/χ²_1-α/2,v   for a two-sided CI
(n - 1)S²/χ²_α,v ≤ σ²                            for a lower-confidence bound
σ² ≤ (n - 1)S²/χ²_1-α,v                          for an upper-confidence bound
where v = n - 1.
(Derivation) Two-sided confidence interval on σ²
By using the statistic χ² = (n - 1)S²/σ² ~ χ²(v),
P(χ²_1-α/2,v ≤ (n - 1)S²/σ² ≤ χ²_α/2,v) = 1 - α
Therefore,
L = (n - 1)S²/χ²_α/2,v and U = (n - 1)S²/χ²_1-α/2,v
Note that, for a one-sided CI, use χ²_α or χ²_1-α instead of χ²_α/2 or χ²_1-α/2 to derive the corresponding limit.

Example 8.3
(Confidence Interval on σ²; Two-Sided CI) Suppose that the life length of an INFINITY light bulb (X) follows a normal distribution with mean μ and variance σ². A random sample of n = 16 bulbs is tested, and the sample variance is found to be s² = 44². Construct a 95% two-sided confidence interval on the variance of the life length of an INFINITY light bulb (σ²).
1 - α = 0.95 ⇒ α = 0.05; v = n - 1 = 16 - 1 = 15
95% two-sided CI on σ²: (n - 1)s²/χ²_0.025,15 ≤ σ² ≤ (n - 1)s²/χ²_0.975,15
15(44²)/27.49 ≤ σ² ≤ 15(44²)/6.26
32.5² ≤ σ² ≤ 68.1²
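A quick computational check of this interval, using the numbers of Example 8.3 (an illustrative Python sketch added to this guide; SciPy assumed):

```python
from scipy.stats import chi2

n, s2, alpha = 16, 44.0 ** 2, 0.05
lower = (n - 1) * s2 / chi2.ppf(1 - alpha / 2, df=n - 1)  # divide by upper 2.5% point (27.49)
upper = (n - 1) * s2 / chi2.ppf(alpha / 2, df=n - 1)      # divide by lower 2.5% point (6.26)

print(f"95% CI on sigma^2: ({lower:.1f}, {upper:.1f})")
print(f"95% CI on sigma  : ({lower ** 0.5:.1f}, {upper ** 0.5:.1f})")   # ~ (32.5, 68.1)
```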
Exercise 8.3 (MR 8-35)
A rivet is to be inserted into a hole. A random sample of n = 15 holes is selected, resulting in a sample standard deviation of the hole diameter of 0.008 mm. Assume that the hole diameter is normally distributed. Construct a 99% upper-confidence bound on the variance of the hole diameter (σ²).

8-5 A Large-Sample Confidence Interval for a Population Proportion
□ Identify the point estimator of p and the sampling distribution of the point estimator.
□ Establish a 100(1 - α)% CI on p.
Inference Context
Parameter of interest: p
Point estimator of p: P̂ = X/n, where X ~ B(n, p)
Z = (P̂ - p)/√[p(1 - p)/n] ~ N(0, 1) approximately, if np and n(1 - p) > 5

Sampling Distribution of P̂
The mean and variance of a binomial random variable X ~ B(n, p) are E(X) = np and V(X) = np(1 - p). Thus,
E(P̂) = E(X/n) = p and V(P̂) = V(X/n) = p(1 - p)/n
Recall from Section 4-7 that a binomial distribution B(n, p) approximates a normal distribution if p is not close to either zero or one and n is relatively large, so that both np and n(1 - p) are greater than five. Therefore, if np and n(1 - p) > 5, the approximate distribution of P̂ is
P̂ ~ N(p, p(1 - p)/n)

Confidence Interval Formula
A 100(1 - α)% CI on p (large sample) is
P̂ - z_α/2 √[P̂(1 - P̂)/n] ≤ p ≤ P̂ + z_α/2 √[P̂(1 - P̂)/n]   for a two-sided CI
P̂ - z_α √[P̂(1 - P̂)/n] ≤ p                                 for a lower-confidence bound
p ≤ P̂ + z_α √[P̂(1 - P̂)/n]                                 for an upper-confidence bound

(Derivation) Two-sided confidence interval* on p
By using the statistic Z = (P̂ - p)/√[p(1 - p)/n] ~ N(0, 1),
P(L ≤ p ≤ U) = 1 - α ⇒ P(-z_α/2 ≤ Z ≤ z_α/2) = 1 - α
By using P̂ in place of p (because the upper and lower limits of the CI include the unknown parameter p),
L = P̂ - z_α/2 √[P̂(1 - P̂)/n] and U = P̂ + z_α/2 √[P̂(1 - P̂)/n]
* For a one-sided CI, use z_α instead of z_α/2 to derive the corresponding limit.

Sample Size Selection
In estimating p, the following formulas are used to select a sample size n for a predefined level of error E:
n = (z_α/2 / E)² p̂(1 - p̂)   if information on p is available
n = (z_α/2 / E)² × 0.25      if information on p is unavailable

Example 8.4
A sample of n = 40 bridges in HAPPY County is tested for metal corrosion, and x = 28 bridges are found corroded.
1. (Confidence Interval on p; Two-Sided CI) Construct a 95% two-sided confidence interval on the proportion of corroded bridges (p) in the county.
1 - α = 0.95 ⇒ α = 0.05; p̂ = x/n = 28/40 = 0.7
Since both np̂ = 40 × 0.7 = 28 and n(1 - p̂) = 40 × 0.3 = 12 are greater than five, the sampling distribution of P̂ is approximately normal.
95% two-sided CI on p: p̂ ± z_0.025 √[p̂(1 - p̂)/n] = 0.7 ± 1.96√(0.7 × 0.3/40), i.e., 0.558 ≤ p ≤ 0.842
2. (Sample Size Selection) Determine a sample size n to establish a 95% confidence interval on p with an error E = 0.05 from the true proportion.
n = (z_0.025/E)² p̂(1 - p̂) = (1.96/0.05)²(0.7)(0.3) = 322.7 ⇒ n = 323
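The proportion interval and sample-size formula used above can be verified with a few lines of Python (an added illustration; SciPy assumed):

```python
from math import sqrt, ceil
from scipy.stats import norm

x, n, alpha = 28, 40, 0.05
p_hat = x / n
z = norm.ppf(1 - alpha / 2)

se = sqrt(p_hat * (1 - p_hat) / n)
print(f"95% CI on p: ({p_hat - z * se:.3f}, {p_hat + z * se:.3f})")   # ~ (0.558, 0.842)

E = 0.05
print("required n:", ceil((z / E) ** 2 * p_hat * (1 - p_hat)))        # ~ 323
```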
Exercise 8.4 (MR 8-48)
The proportion of defective integrated circuits produced in a photolithography process is being studied. A random sample of n = 300 circuits is tested, revealing x = 13 defectives.
1. Construct a 95% lower-confidence bound on the proportion of defective integrated circuits (p).
2. Determine a sample size n to establish a 95% confidence interval on p with an error E = 0.02 from the true proportion. Use p̂ = 0.05 as an initial estimate of p for the sample size calculation.

8-6 A Prediction Interval for a Future Observation
□ Identify the distribution of the prediction error Xn+1 - X̄.
□ Establish a 100(1 - α)% prediction interval (PI) for a new observation.

Sampling Distribution of E = Xn+1 - X̄
Suppose that X1, X2, ..., Xn is a random sample from a normal population with mean μ and variance σ², and we wish to predict a new observation Xn+1. If X̄ is used as a point estimator of Xn+1, then the distribution of the corresponding prediction error E is
E = Xn+1 - X̄ ~ N(0, σ²(1 + 1/n)), since Xn+1 and X̄ are independent.
Thus,
Z = (Xn+1 - X̄) / (σ√(1 + 1/n)) ~ N(0, 1)   if σ² is known
T = (Xn+1 - X̄) / (S√(1 + 1/n)) ~ t(n - 1)   if σ² is unknown

Prediction Interval Formula
X̄ - z_α/2 σ√(1 + 1/n) ≤ Xn+1 ≤ X̄ + z_α/2 σ√(1 + 1/n)           for a two-sided PI, σ² known
X̄ - t_α/2,n-1 S√(1 + 1/n) ≤ Xn+1 ≤ X̄ + t_α/2,n-1 S√(1 + 1/n)   for a two-sided PI, σ² unknown
Example 8.5
(Prediction Interval on Xn+1, σ² Unknown; Two-Sided PI) For the light bulb life length data in Example 8.2, the following quantities have been obtained:
n = 30, x̄ = 780, and s² = 40²
Construct a 95% two-sided prediction interval on the life length of the next light bulb tested (X31).
1 - α = 0.95 ⇒ α = 0.05
95% two-sided PI on X31: x̄ - t_0.025,29 s√(1 + 1/30) ≤ X31 ≤ x̄ + t_0.025,29 s√(1 + 1/30)
780 - 2.045(40)(1.0166) ≤ X31 ≤ 780 + 2.045(40)(1.0166)
696.8 ≤ X31 ≤ 863.2 hrs
Note that this t-based PI is wider than the corresponding t-based CI on μ, 765.1 ≤ μ ≤ 794.9, in Example 8.2.

Exercise 8.5 (MR 8-50)
For the Izod impact test in Exercise 8.2, the following quantities have been obtained:
n = 20, x̄ = 1.25, and s² = 0.25²
Construct a 95% two-sided prediction interval on the impact strength of the next specimen of PVC pipe tested (X21).
8-7 Tolerance Intervals for a Normal Distribution
□ Establish a γ tolerance interval with 100(1 - α)% confidence for a normal population.

Tolerance Interval
Suppose that the life length (X) of an INFINITY light bulb follows a normal distribution with mean μ and variance σ². Then, the interval (μ - 1.96σ, μ + 1.96σ), called a tolerance interval, includes 95% (γ, the coverage percent) of the light bulb population in terms of life length. When μ and σ² are unknown, the 95% tolerance interval may be established as (x̄ - 1.96s, x̄ + 1.96s) by using x̄ and s². However, due to sampling variability in x̄ and s², the estimated tolerance interval includes less than 95% of the population.
Thus, when μ and σ² are unknown, a γ tolerance interval with confidence level 100(1 - α)% is established by (x̄ - ks, x̄ + ks), where k is found in Appendix Table XI of MR. Note that as the sample size n approaches ∞, the value of k approaches the z-value associated with γ.
Example 8.6
(Tolerance Interval on X; Two-Sided TI) For the light bulb life length data in Example 8.2, the following quantities have been obtained:
n = 30, x̄ = 780, and s² = 40²
Construct a 95% two-sided tolerance interval with 95% confidence for the life length measurements of light bulbs.
k = 2.529 for n = 30, γ = 0.95, and confidence level = 0.95
95% two-sided TI is (x̄ - ks, x̄ + ks) = (780 - 2.529 × 40, 780 + 2.529 × 40) = (678.8, 881.2)
Note that this TI is wider than the corresponding t-based CI on μ, 765.1 ≤ μ ≤ 794.9, in Example 8.2.
Exercise 8.6 (MR 8-61)
For the Izod impact test in Exercise 8.2, the following quantities have been obtained:
n = 20, x̄ = 1.25, and s² = 0.25²
Construct a 95% two-sided tolerance interval on the impact strength of PVC pipe with 90% confidence.
Summary of Statistical Intervals for a Single Sample
A. Confidence Intervals
μ, σ² known: X̄ - z_α/2 σ/√n ≤ μ ≤ X̄ + z_α/2 σ/√n
μ, σ² unknown: X̄ - t_α/2,n-1 S/√n ≤ μ ≤ X̄ + t_α/2,n-1 S/√n
σ² of a normal distribution: (n - 1)S²/χ²_α/2,n-1 ≤ σ² ≤ (n - 1)S²/χ²_1-α/2,n-1
p of a binomial distribution: P̂ - z_α/2 √[P̂(1 - P̂)/n] ≤ p ≤ P̂ + z_α/2 √[P̂(1 - P̂)/n]

B. Prediction Intervals
Xn+1, σ² known: X̄ - z_α/2 σ√(1 + 1/n) ≤ Xn+1 ≤ X̄ + z_α/2 σ√(1 + 1/n)
Xn+1, σ² unknown: X̄ - t_α/2,n-1 S√(1 + 1/n) ≤ Xn+1 ≤ X̄ + t_α/2,n-1 S√(1 + 1/n)

C. Tolerance Intervals
X ~ N(μ, σ²), σ² known: (μ - zσ, μ + zσ), where z is the standard normal value associated with the coverage γ (e.g., z = 1.96 for γ = 0.95)
X ~ N(μ, σ²), σ² unknown: (X̄ - ks, X̄ + ks)
* γ: coverage percent
** k: tolerance level factor found in Appendix Table XI of MR
MINITAB Applications
Example 8.1
(Inference on μ, σ² Known)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the life length data of INFINITY light bulbs on the worksheet.
(3) Choose Stat > Basic Statistics > 1-Sample Z. In Variables, select INFINITY. Click Confidence Interval and enter the level of confidence in Level. In Sigma, enter the assumed population standard deviation. Then click OK.
Z Confidence Intervals
The assumed sigma = 40.0
Variable     N      Mean    StDev   SE Mean   95.0 % CI
INFINITY    30    780.00    40.02      7.30   (765.68, 794.32)
Example 8.2
(Inference on μ, σ² Unknown)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the life length data of INFINITY light bulbs on the worksheet.
(3) Choose Stat > Basic Statistics > 1-Sample t. In Variables, select INFINITY. Click Confidence Interval and enter the level of confidence in Level. Then click OK.
T Confidence Intervals
Variable     N      Mean    StDev   SE Mean   95.0 % CI
INFINITY    30    780.00    40.02      7.31   (765.06, 794.94)
Example 8.4
(Inference on p)
(1) Choose Stat > Basic Statistics > 1 Proportion. Click Summarized data and enter the number of bridges surveyed in Number of trials and the number of bridges corroded in Number of successes.
(2) Enter the level of confidence in Confidence level and the hypothesized proportion in Test proportion. Check Use test and interval based on normal distribution. Then click OK.
Test and Confidence Interval for One Proportion
Test of p = 0.5 vs p not = 0.5
Sample     X     N    Sample p    95.0 % CI              Z-Value   P-Value
     1    28    40    0.700000    (0.557987, 0.842013)      2.53     0.011
Answers to Exercises
Exercise 8.1
1. (Confidence Interval on μ, σ² Known; Upper-Confidence Bound)
1 - α = 0.99 ⇒ α = 0.01
99% upper-confidence bound on μ: μ ≤ x̄ + z_0.01 σ/√n = 1.5045 + 2.33(0.01/√10) = 1.512 in.
2. (Sample Size Selection)
n = (z_0.025 σ/E)² = (1.96 × 0.01/0.003)² = 42.7 ⇒ n = 43

Exercise 8.2
(Confidence Interval on μ, σ² Unknown; Upper-Confidence Bound)
1 - α = 0.95 ⇒ α = 0.05; v = n - 1 = 20 - 1 = 19
95% upper-confidence bound on μ: μ ≤ x̄ + t_0.05,19 s/√n = 1.25 + 1.729(0.25/√20) = 1.35

Exercise 8.3
(Confidence Interval on σ²; Upper-Confidence Bound)
1 - α = 0.99 ⇒ α = 0.01; v = n - 1 = 15 - 1 = 14
99% upper-confidence bound on σ²: σ² ≤ (n - 1)s²/χ²_1-0.01,14 = (15 - 1)(0.008²)/4.66 = 0.000192 mm²
Exercise 8.4
1. (Confidence Interval on p; Lower-Confidence Bound)
Since both np̂ = 300 × 0.043 = 13 and n(1 - p̂) = 300 × 0.957 = 287 are greater than five, the sampling distribution of P̂ is approximately normal.
95% lower-confidence bound on p: p ≥ p̂ - z_0.05 √[p̂(1 - p̂)/n] = 0.043 - 1.645√(0.043 × 0.957/300) = 0.024
2. (Sample Size Selection)
n = (z_0.025/E)² p̂(1 - p̂) = (1.96/0.02)²(0.05)(0.95) = 456.2 ⇒ n = 457

Exercise 8.5
(Prediction Interval on Xn+1, σ² Unknown; Two-Sided PI)
1 - α = 0.95 ⇒ α = 0.05
95% two-sided PI on X21: x̄ - t_0.025,19 s√(1 + 1/20) ≤ X21 ≤ x̄ + t_0.025,19 s√(1 + 1/20)
1.25 - 2.093(0.25)(1.0247) ≤ X21 ≤ 1.25 + 2.093(0.25)(1.0247)
0.71 ≤ X21 ≤ 1.79

Exercise 8.6
(Tolerance Interval on X; Two-Sided TI)
k = 2.564 for n = 20, γ = 0.95, and confidence level = 0.90
95% two-sided TI is (x̄ - ks, x̄ + ks) = (1.25 - 2.564 × 0.25, 1.25 + 2.564 × 0.25) = (0.61, 1.89)
9 Tests of Hypotheses for a Single Sample
9-1 Hypothesis Testing
9-2 Tests on the Mean of a Normal Distribution, Variance Known
9-3 Tests on the Mean of a Normal Distribution, Variance Unknown
9-4 Hypothesis Tests on the Variance and Standard Deviation of a Normal Population
9-5 Tests on a Population Proportion
9-7 Testing for Goodness of Fit
9-8 Contingency Table Tests
MINITAB Applications
Answers to Exercises

9-1 Hypothesis Testing
□ Explain the terms null hypothesis, alternative hypothesis, test statistic, acceptance region, rejection region, critical value, type I error probability (α), type II error probability (β), and power of a test (1 - β).
□ Establish the acceptance and rejection regions of a hypothesis test at α.
□ Determine the type II error probability and power of a test.
□ Explain the relationships between α and β.
□ Explain the difference between a strong conclusion and a weak conclusion.
□ Identify the procedure of hypothesis testing.
Hypothesis
A hypothesis is an assertion about the parameters (θ) of one or more populations under study. There are largely two kinds of hypotheses:
1. Null hypothesis (H0): States the presumed condition of θ (based on experience, theory, design specification, regulation, or contractual obligation) that will be held unless there is strong evidence against it. Note that H0 should always specify an exact value of θ.
(e.g.) H0: μX = 750 hrs, where X is the life length of an INFINITY light bulb
Hypothesis (cont.)
2. Alternative hypothesis (H1): States the condition of θ that would be concluded if H0 is rejected. In parallel to the types of confidence intervals (see Section 8-2), the following types of H1 are defined depending on the directionality of θ assumed in H1.
(1) Two-sided H1: Indicates no directionality of θ. (e.g.) H1: μX ≠ 750 hrs
(2) One-sided H1: Indicates the directionality of θ.
(a) Lower-sided H1: Includes the lower-side inequality sign. (e.g.) H1: μX < 750 hrs
(b) Upper-sided H1: Includes the upper-side inequality sign. (e.g.) H1: μX > 750 hrs
Test Statistic
A test statistic refers to a statistic used for statistical inference about θ.
(e.g.) Test statistic for inference on μ, where X ~ N(μ, σ²) and σ² is known: Z0 = (X̄ - μ0)/(σ/√n)
Test Regions
Two regions of a test statistic (see Figure 9-1) are established for testing H0 against H1:
1. Acceptance region: The region of a test statistic that will lead to failure to reject H0.
2. Rejection (critical) region: The region of a test statistic that will lead to rejection of H0.
Note that the boundaries between the acceptance and rejection regions are called critical values.
For example, in Figure 9-1, in testing H0: μX = 750 vs. H1: μX ≠ 750, where X denotes the life length of an INFINITY light bulb, the test statistic of μ is the sample mean and the critical values of the sample mean are 730 and 770 hours. Therefore, we will conclude the following:
(1) If 730 ≤ x̄ ≤ 770 (acceptance region), fail to reject H0.
(2) If x̄ < 730 or x̄ > 770 (rejection region), reject H0.
Figure 9-1 Acceptance and rejection regions for H0: μX = 750 vs. H1: μX ≠ 750, where X (unit: hours) is the life length of an INFINITY light bulb.

Test Errors
The truth or falsity of a hypothesis can never be known with certainty unless the entire population is examined accurately and thoroughly. Therefore, a hypothesis test based on a random sample may lead to one of two types of wrong conclusions (see Table 9-1):
(1) Type I error: Reject H0 when H0 is true.
(2) Type II error: Fail to reject H0 when H0 is false.

Table 9-1 Decisions in hypothesis testing
Decision            | H0 is true (H1 is false)              | H0 is false (H1 is true)
Fail to reject H0   | Correct decision (Correct acceptance) | Type II error (False acceptance)
Reject H0           | Type I error (False rejection)        | Correct decision (Correct rejection)

The probability of type I error (denoted as α) and the probability of type II error (denoted as β) are conditional probabilities as follows:
α = P(type I error) = P(reject H0 | H0 is true)
β = P(type II error) = P(fail to reject H0 | H0 is false)
Note that the type I error probability (α) is also called the level of significance of a test. Small values of α such as 0.05 and 0.01 are typically used to make it difficult to reject H0 if it is true.

Power of Test
The power of a statistical test indicates the probability of rejecting H0 when H0 is false, indicating the ability (sensitivity) of the test to detect evidence against H0:
power = P(reject H0 | H0 is false) = 1 - P(fail to reject H0 | H0 is false) = 1 - β

Test Regions, α, β, and Power
The test regions, hypotheses, α, β, and power of a test are related to each other. The acceptance and rejection regions of a test statistic Θ̂ are determined based on α and the hypothesized value of θ (denoted as θ0) in H0 (note that α and θ0 are specified by the analyst). If L and U denote the lower and upper limits of an acceptance region, respectively, and we are testing H0: θ = θ0, the acceptance region of Θ̂ is determined as follows:
1 - α = P(fail to reject H0 | H0 is true) = P(fail to reject H0 | θ = θ0)
      = P(L ≤ Θ̂ ≤ U | θ = θ0)   for a two-sided H1
      = P(L ≤ Θ̂ | θ = θ0)       for a lower-sided H1
      = P(Θ̂ ≤ U | θ = θ0)       for an upper-sided H1
On the other hand, the β and power of the test are determined based on the acceptance region of the test and the true value of θ as follows:
β = P(fail to reject H0 | H0 is false) = P(fail to reject H0 | θ ≠ θ0)
  = P(L ≤ Θ̂ ≤ U | θ ≠ θ0)   for a two-sided H1
  = P(L ≤ Θ̂ | θ ≠ θ0)       for a lower-sided H1
  = P(Θ̂ ≤ U | θ ≠ θ0)       for an upper-sided H1
and power = 1 - β.
Example 9.1
Suppose that the life length of an INFINITY light bulb (X; unit: hour) is normally distributed with σ² = 40². We wish to test H0: μX = 750 versus H1: μX ≠ 750 with a random sample of size n = 30 light bulbs.
1. (Acceptance/Rejection Regions) Construct the acceptance and rejection regions of the test on μ at α = 0.05.
The test statistic of μ is the sample mean with the following sampling distribution:
X̄ ~ N(μ, 40²/30)
The acceptance region l ≤ X̄ ≤ u for H0: μX = 750 vs. H1: μX ≠ 750 satisfies the following:
1 - α = 1 - 0.05 = 0.95 = P(fail to reject H0 | μ = 750) = P(l ≤ X̄ ≤ u | μ = 750)
Accordingly, the critical values are
l = 750 - z_0.025(40/√30) = 735.7 and u = 750 + z_0.025(40/√30) = 764.3
Therefore,
acceptance region: 735.7 ≤ x̄ ≤ 764.3
rejection region: x̄ < 735.7 or x̄ > 764.3
2. (β and Power of Test) Assume that the true value of μX is 730 hrs. Find the β and power of the test if the acceptance region is 735.7 ≤ x̄ ≤ 764.3.
β = P(fail to reject H0 | H0 is false) = P(735.7 ≤ X̄ ≤ 764.3 | μ = 730)
  = Φ[(764.3 - 730)/(40/√30)] - Φ[(735.7 - 730)/(40/√30)] = Φ(4.70) - Φ(0.78) = 1.000 - 0.782 = 0.218
power = 1 - β = 0.782
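The β and power above follow directly from the normal CDF. The Python sketch below (an illustration added to this guide; SciPy assumed) repeats the calculation for the acceptance region 735.7 ≤ x̄ ≤ 764.3 and a true mean of 730.

```python
from math import sqrt
from scipy.stats import norm

sigma, n = 40.0, 30
se = sigma / sqrt(n)
lower, upper = 735.7, 764.3        # acceptance region for H0: mu = 750
true_mu = 730.0

beta = norm.cdf((upper - true_mu) / se) - norm.cdf((lower - true_mu) / se)
print(f"beta  = {beta:.3f}")       # ~ 0.218
print(f"power = {1 - beta:.3f}")   # ~ 0.782
```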
Exercise 9.1 (MR 9-6)
The heat evolved from a cement mixture (X; unit: cal/g) is approximately normally distributed with σ² = 2². We wish to test H0: μ = 100 versus H1: μ ≠ 100 with a sample of n = 9 specimens.
1. Find the acceptance region of the test at α = 0.01.
2. Suppose that the acceptance region is defined as 98.3 ≤ x̄ ≤ 101.7. Find the β and power of the test if the true mean heat evolved is 103.
Relationship between α and β
The relationships between α and β are explained with illustrations in Figure 9-2, which shows hypothetical distributions of X̄ ~ N(μ, σ²/n) under H0: θ = θ0 and H1: θ ≠ θ0.
1. As the acceptance region widens, α decreases but β increases, provided that the sample size n does not change.
2. As n increases, both α and β decrease, provided that the critical values are held constant.
3. When H0: θ = θ0 is false, β increases as the true value of θ approaches θ0, and vice versa.
Figure 9-2 Relationships between α and β.
Strong vs. Weak Conclusion
The analyst can directly control both α (the probability of rejecting H0 when H0 is true) and θ0 (the hypothesized value of θ), but not β (the probability of accepting H0 when H0 is false; β depends on the true value of θ). Thus, "rejection of H0" is considered a strong conclusion and "acceptance of H0" a weak conclusion. Consequently, rather than saying we "accept H0," we prefer saying "fail to reject H0," which indicates that we have not found sufficient evidence to reject H0.

Hypothesis Testing Procedure
The formal procedure to make a statistical decision about hypotheses consists of the following four steps throughout the rest of the chapters:
Step 1: State H0 and H1.
Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for α.
Step 4: Make a conclusion, such as "we reject" or "we fail to reject" H0 at α.
(Note) The four-step procedure of hypothesis testing is basically the same as the eight-step procedure of hypothesis testing in MR.
9-2 Tests on the Mean of a Normal Distribution, Variance Known
□ Test a hypothesis on μ when σ² is known (z-test).
□ Calculate the P-value of a z-test.
□ Compare the α-value approach with the P-value approach in reporting hypothesis test results.
□ Explain the relationship between confidence interval estimation and hypothesis testing.
□ Determine the sample size of a z-test for statistical inference on μ by applying an appropriate sample size formula and operating characteristic (OC) curve.
□ Explain the effect of the sample size n on the statistical significance and power of the test.
Inference Context
Parameter of interest: μ
Point estimator of μ: X̄ ~ N(μ, σ²/n), σ² known
Test statistic of μ: Z = (X̄ - μ)/(σ/√n) ~ N(0, 1)

Test Procedure (z-test)
Step 1: State H0 and H1.
H0: μ = μ0
H1: μ ≠ μ0 for a two-sided test; μ > μ0 or μ < μ0 for a one-sided test
Step 2: Determine a test statistic and its value.
z0 = (x̄ - μ0)/(σ/√n)
z-test (cont.)
Step 3: Determine a critical value(s) for α.
z_α/2 for a two-sided test; z_α for a one-sided test
Step 4: Make a conclusion. Reject H0 if
|z0| > z_α/2   for a two-sided test
z0 > z_α       for an upper-sided test
z0 < -z_α      for a lower-sided test
for upper-sided test for lower-sided test
The significabce probability (denoted as P) of a test statistic refers to the smallest level of significance (a) that would lead to rejection of Ho. Thus, the smaller the P-value, the higher the statistical significance (an inverse relationship between Pvalue and statistical significance). For example, the P-value of a test statistic zo can be computed by using the following formulas: 11 for two-sided test for upper-sided test for lower-sided test (Note) @(z) = P(Z 5 z) : cumulative distribution function of Z
a-value vs. P-value Approach
Two approaches are available to use in reporting the result of a hypothesis test: 1. a-value approach: States the test result at the value of a preselected. This reporting method lacks providing the weight of evidence against Ho. 2. P-value approach: Specifies how far the test statistic is from the critical value(s). Once the P-value is known, the decision maker can draw a conclusion at any specified level of significance (a) as follows: ifPIa Reject Ho at a , otherwise Fail to reject Hoat a , Note that the P-value approach is more flexible and informative than the a-value approach.
Interval Estimation and Hypothesis Test
A close relationship exists between a two-sided 100(1 - α)% confidence interval (CI) on θ (l ≤ θ ≤ u) and the test of a null hypothesis about θ (H0: θ = θ0) at α:
Reject H0 at α if the two-sided CI does not include θ0; otherwise, fail to reject H0 at α.
For example, suppose that a 95% CI on μ is 751 ≤ μ ≤ 779 and we are testing H0: μ = 750 vs. H1: μ ≠ 750 at α = 0.05. Since the CI on μ does not include the hypothesized value μ = 750, we reject H0 at α = 0.05.
Sample Size Formula
Formulas are used to determine the sample size of a particular test for particular levels of β (or power = 1 - β), α, and δ (= |θ - θ0|, the difference between the true value of θ and its hypothesized value). For a z-test on a single sample, the following formulas are applied:
n = (z_α/2 + z_β)² σ² / δ²   for a two-sided test, where δ = μ - μ0
n = (z_α + z_β)² σ² / δ²     for a one-sided test
Note that the sample size requirement increases as α, β, and δ decrease and as σ increases.
Operating Characteristic (OC) Curve
Operating characteristic (OC) curves in Appendix A of MR plot β against d (for z- and t-tests) or λ (for χ²- and F-tests) for various sample sizes n at α = 0.01 and 0.05, i.e., β = f(n, d or λ, α). By using an appropriate OC chart, an adequate sample size can be determined for a particular test to satisfy a predefined test condition.
Table 9-2 displays a list of OC charts in MR and the formula of the OC parameter d for a z-test on μ. By using the table, the appropriate OC chart for a particular z-test is chosen (e.g., for a one-sided z-test at α = 0.05, chart VIc is selected).

Table 9-2 Operating Characteristic Charts for z-test (Single Sample)
Test               α      Chart VI (Appendix A in MR)    OC parameter
Two-sided z-test   0.05   (a)                            d = |μ - μ0|/σ
Two-sided z-test   0.01   (b)                            d = |μ - μ0|/σ
One-sided z-test   0.05   (c)                            d = |μ - μ0|/σ
One-sided z-test   0.01   (d)                            d = |μ - μ0|/σ

Effect of Sample Size
As the sample size n increases, both the statistical significance (inverse to the P-value) and the power (= 1 - β) of the test increase. For example, Table 9-3 presents P-values and powers of tests on μ for the following conditions:
X̄ ~ N(μ, σ²/n) and x̄ = 50.5
H0: μX = 50 vs. H1: μX ≠ 50
α = 0.05 and the true value of μ = 50.5
The P-value column indicates that, for the same value of x̄ = 50.5, H0 is rejected at α = 0.05 when n = 100 because P ≤ α, while H0 is not rejected at α = 0.05 when n ≤ 50 because P > α.

Statistical vs. Practical Significance
The statistical significance of a test does not necessarily indicate its practical significance. Since the power of a test increases as the sample size increases, any small departure of θ from the hypothesized value θ0 will be detected (in other words, H0: θ = θ0 will be rejected) for a large sample, even when the departure is of little practical significance. Therefore, the analyst should check whether a statistically significant result is also practically significant.
Example 9.2
For the light bulb life length data in Example 8.1, the following results have been obtained:
n = 30, x̄ = 780, σ² = 40²
95% two-sided CI on μ: 765.7 ≤ μ ≤ 794.3
1. (Hypothesis Test on μ, σ² Known; Two-Sided Test) Test H0: μ = 765 hrs vs. H1: μ ≠ 765 hrs at α = 0.05.
Step 1: State H0 and H1.
H0: μ = 765
H1: μ ≠ 765
Step 2: Determine a test statistic and its value.
z0 = (x̄ - μ0)/(σ/√n) = (780 - 765)/(40/√30) = 2.05
Step 3: Determine a critical value(s) for α.
z_α/2 = z_0.05/2 = z_0.025 = 1.96
Step 4: Make a conclusion.
Since |z0| = 2.05 > z_0.025 = 1.96, reject H0 at α = 0.05.
2. (P-value Approach) Find the P-value for this two-sided z-test.
P = 2[1 - Φ(|z0|)] = 2[1 - Φ(2.05)] = 2[1 - 0.980] = 0.04
Since P = 0.04 ≤ α = 0.05, reject H0 at α = 0.05.
3. (Relationship Between CI and Hypothesis Test) Test H0: μ = 765 hrs vs. H1: μ ≠ 765 hrs at α = 0.05 based on the 95% two-sided CI on μ.
Since the 95% two-sided CI on μ, 765.7 ≤ μ ≤ 794.3, does not include the hypothesized value 765 hrs, reject H0 at α = 0.05.
4. (Sample Size Determination) Determine the sample size n required for this two-sided z-test to detect a true mean as high as 785 hours with power of 0.9. Apply an appropriate sample size formula and OC curve.
(1) Sample Size Formula
power = P(reject H0 | H0 is false) = 1 - β = 0.9 ⇒ β = 0.1
δ = μ - μ0 = 785 - 765 = 20
n = (z_α/2 + z_β)² σ²/δ² = (1.96 + 1.28)² (40²)/(20²) = 42.0 ⇒ n = 42
Example 9.2 (cont.)
(2) OC Curve
To design a two-sided z-test at α = 0.05 for a single sample, OC Chart VIa is applicable with the parameter
d = |μ - μ0|/σ = |785 - 765|/40 = 0.5
By using d = 0.5 and β = 0.1, the required sample size is determined as n = 44, as displayed below. Note that this sample size is similar to the sample size n = 42 determined by using the sample size formula.
(a) O.C. curves for different values of n for the two-sided normal test for a level of significance α = 0.05
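The computations of Example 9.2 (test statistic, P-value, and formula-based sample size) can be scripted as follows. This is an added Python illustration, not the workbook's procedure; SciPy is assumed.

```python
from math import sqrt
from scipy.stats import norm

xbar, mu0, sigma, n, alpha = 780.0, 765.0, 40.0, 30, 0.05

z0 = (xbar - mu0) / (sigma / sqrt(n))
p_value = 2 * (1 - norm.cdf(abs(z0)))
print(f"z0 = {z0:.2f}, P-value = {p_value:.3f}")          # 2.05, ~0.04
print("reject H0" if p_value <= alpha else "fail to reject H0")

# sample size to detect mu = 785 (delta = 20) with power 0.9
delta, beta = 20.0, 0.10
n_req = (norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)) ** 2 * sigma ** 2 / delta ** 2
print(f"required n ~ {n_req:.1f}")                         # ~ 42
```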
Exercise 9.2 (MR 9-28)
For the hole diameter data in Exercise 8.1, the following results have been obtained: n = 10, x̄ = 1.5045, and σ² = 0.01².
1. Test the hypothesis that the true mean hole diameter is greater than 1.50 in. at α = 0.01.
2. Find the P-value for this one-sided z-test.
3. Determine the sample size n required for this one-sided z-test to detect a true mean as high as 1.505 inches with power of 0.9. Apply an appropriate sample size formula and OC curve.
9-3 Tests on the Mean of a Normal Distribution, Variance Unknown
□ Test a hypothesis on μ when σ² is unknown (t-test).
□ Determine the sample size of a t-test for statistical inference on μ by using an appropriate operating characteristic (OC) curve.
Inference Context
Parameter of interest: μ
Point estimator of μ: X̄ ~ N(μ, σ²/n), σ² unknown
Test statistic of μ: T = (X̄ - μ)/(S/√n) ~ t(v), v = n - 1

Test Procedure (t-test)
Step 1: State H0 and H1.
H0: μ = μ0
H1: μ ≠ μ0 for a two-sided test; μ > μ0 or μ < μ0 for a one-sided test
Step 2: Determine a test statistic and its value.
t0 = (x̄ - μ0)/(s/√n) ~ t(n - 1)
Step 3: Determine a critical value(s) for α.
t_α/2,n-1 for a two-sided test; t_α,n-1 for a one-sided test
Step 4: Make a conclusion. Reject H0 if
|t0| > t_α/2,n-1   for a two-sided test
t0 > t_α,n-1       for an upper-sided test
t0 < -t_α,n-1      for a lower-sided test

Operating Characteristic (OC) Curve
Table 9-4 displays a list of OC charts in MR and the formula of the OC parameter d for a t-test on μ. By using the table, the appropriate OC chart for a particular t-test is selected (e.g., for a two-sided t-test at α = 0.01, chart VIf is selected).

Table 9-4 Operating Characteristic Charts for t-test (Single Sample)
Test               α      Chart VI (Appendix A in MR)    OC parameter
Two-sided t-test   0.05   (e)                            d = |μ - μ0|/σ
Two-sided t-test   0.01   (f)                            d = |μ - μ0|/σ
One-sided t-test   0.05   (g)                            d = |μ - μ0|/σ
One-sided t-test   0.01   (h)                            d = |μ - μ0|/σ
(Note) For σ, use a sample standard deviation or a subjective estimate.

Example 9.3
For the life length data in Example 8.2, the following results have been obtained: n = 30, x̄ = 780, and s² = 40².
1. (Hypothesis Test on μ, σ² Unknown; Two-Sided Test) Test H0: μ = 765 hrs vs. H1: μ ≠ 765 hrs at α = 0.05.
Example 9.3 (cont.)
Step 1: State H0 and H1.
H0: μ = 765
H1: μ ≠ 765
Step 2: Determine a test statistic and its value.
t0 = (x̄ - μ0)/(s/√n) = (780 - 765)/(40/√30) = 2.05
Step 3: Determine a critical value(s) for α.
t_α/2,n-1 = t_0.05/2,30-1 = t_0.025,29 = 2.045
Step 4: Make a conclusion.
Since |t0| = 2.05 > t_0.025,29 = 2.045, reject H0 at α = 0.05.
2. (Sample Size Determination) Determine the sample size n required for this two-sided t-test to detect a true mean as high as 785 hours with power of 0.9. Apply an appropriate OC curve.
power = P(reject H0 | H0 is false) = 1 - β = 0.9 ⇒ β = 0.1
δ = μ - μ0 = 785 - 765 = 20
To design a two-sided t-test at α = 0.05 for a single sample, OC Chart VIe is applicable with the parameter
d = |μ - μ0|/s = |785 - 765|/40 = 0.5
By using d = 0.5 and β = 0.1, the required sample size is determined as n = 45, as displayed below.
(e) O.C. curves for different values of n for the two-sided t-test for a level of significance α = 0.05.
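The same test can be run directly from the summary statistics. The Python sketch below is an added illustration (SciPy assumed); it reproduces the critical value and also reports the P-value.

```python
from math import sqrt
from scipy.stats import t

xbar, mu0, s, n, alpha = 780.0, 765.0, 40.0, 30, 0.05

t0 = (xbar - mu0) / (s / sqrt(n))
tcrit = t.ppf(1 - alpha / 2, df=n - 1)
p_value = 2 * (1 - t.cdf(abs(t0), df=n - 1))

print(f"t0 = {t0:.3f}, critical value = {tcrit:.3f}, P-value = {p_value:.3f}")
print("reject H0" if abs(t0) > tcrit else "fail to reject H0")
# t0 = 2.054 barely exceeds t_0.025,29 = 2.045, so H0 is (just) rejected
```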
Exercise 9.3 (MR 9-35)
For the Izod impact test data of a PVC pipe product in Exercise 8.2, the following results have been obtained:
n = 20, x̄ = 1.25, and s = 0.25
The ASTM standard requires that Izod impact strength must be greater than 1.0 ft-lb/in.
1. Test if the Izod impact test results satisfy the ASTM standard at α = 0.05.
2. Determine the sample size n required for this one-sided t-test to detect a true mean as high as 1.10 with power of 0.8. Apply an appropriate OC curve.
9-4 Hypothesis Tests on the Variance and Standard Deviation of a Normal Population
□ Test a hypothesis on σ² (χ²-test).
□ Determine the sample size of a χ²-test for statistical inference on σ² by using an appropriate operating characteristic (OC) curve.

Inference Context
Parameter of interest: σ²
Point estimator of σ²: S² = Σ(Xi - X̄)²/(n - 1), where X1, X2, ..., Xn ~ i.i.d. N(μ, σ²)
Test statistic of σ²: χ² = (n - 1)S²/σ² ~ χ²(v), v = n - 1

Test Procedure (χ²-test)
Step 1: State H0 and H1.
H0: σ² = σ0²
H1: σ² ≠ σ0² for a two-sided test; σ² > σ0² for an upper-sided test; σ² < σ0² for a lower-sided test
Step 2: Determine a test statistic and its value.
χ0² = (n - 1)s²/σ0²
Step 3: Determine a critical value(s) for α.
χ²_α/2,n-1 and χ²_1-α/2,n-1   for a two-sided test
χ²_α,n-1                      for an upper-sided test
χ²_1-α,n-1                    for a lower-sided test
Step 4: Make a conclusion. Reject H0 if
χ0² > χ²_α/2,n-1 or χ0² < χ²_1-α/2,n-1   for a two-sided test
χ0² > χ²_α,n-1                           for an upper-sided test
χ0² < χ²_1-α,n-1                         for a lower-sided test
Operating Characteristic (OC) Curve
Table 9-5 displays a list of OC charts in MR and the formula of the OC parameter λ for a χ²-test on σ². By using the table, the appropriate OC chart for a particular χ²-test is selected (e.g., for an upper-sided χ²-test at α = 0.05, chart VIk is selected).

Table 9-5 Operating Characteristic Charts for χ²-test (Single Sample)
Test                 α      Chart VI (Appendix A in MR)    OC parameter
Two-sided χ²-test    0.05   (i)                            λ = σ/σ0
Two-sided χ²-test    0.01   (j)                            λ = σ/σ0
One-sided χ²-test    0.05   (k)                            λ = σ/σ0
One-sided χ²-test    0.01   (l)                            λ = σ/σ0
Example 9.4
For the light bulb life length data in Example 8.3, the following results have been obtained:
n = 16, s² = 44²
95% two-sided CI on σ²: 32.5² ≤ σ² ≤ 68.1²
1. (Hypothesis Test on σ²; Two-Sided Test) Test H0: σ² = 40² vs. H1: σ² ≠ 40² at α = 0.05.
Step 1: State H0 and H1.
H0: σ² = 40²
H1: σ² ≠ 40²
Step 2: Determine a test statistic and its value.
χ0² = (n - 1)s²/σ0² = 15(44²)/40² = 18.15
Step 3: Determine a critical value(s) for α.
χ²_0.05/2,16-1 = χ²_0.025,15 = 27.49 and χ²_1-0.05/2,16-1 = χ²_0.975,15 = 6.26
Step 4: Make a conclusion.
Since χ²_0.975,15 = 6.26 < χ0² = 18.15 < χ²_0.025,15 = 27.49, fail to reject H0 at α = 0.05.
2. (Relationship Between CI and Hypothesis Test) Test H0: σ² = 40² vs. H1: σ² ≠ 40² at α = 0.05 based on the 95% two-sided CI on σ².
Example 9.4 (cont.)
Since the 95% two-sided CI on σ², 32.5² ≤ σ² ≤ 68.1², includes the hypothesized value 40², fail to reject H0 at α = 0.05.
3. (Sample Size Determination) Determine the sample size n required for this two-sided χ²-test to detect a true standard deviation as high as 50 with power of 0.8. Apply an appropriate OC curve.
power = P(reject H0 | H0 is false) = 1 - β = 0.8 ⇒ β = 0.2
To design a two-sided χ²-test at α = 0.05, OC Chart VIi is applicable with the parameter
λ = σ/σ0 = 50/40 = 1.25
By using λ = 1.25 and β = 0.2, the required sample size is determined as n = 75, as displayed below.
(i) O.C. curves for different values of n for the two-sided chi-square test for a level of significance α = 0.05
Exercise 9.4 (MR 9-48)
For the hole diameter data in Exercise 8.3, the following results have been obtained: n = 15 and s² = 0.008².
1. Test if there is strong evidence to indicate that the standard deviation of the hole diameter exceeds 0.01 mm. Use α = 0.01.
2. Determine the sample size n required for this upper-sided χ²-test to detect a true standard deviation that exceeds the hypothesized value by 50% with power of 0.9. Apply an appropriate OC curve.

9-5 Tests on a Population Proportion
□ Test a hypothesis on p (z-test) for a large sample.
□ Determine the sample size for statistical inference on p by using an appropriate sample size formula.
Inference Context
Parameter of interest: p
Point estimator of p: P̂ = X/n, where X ~ B(n, p)
Test statistic of p: Z = (P̂ - p0)/√[p0(1 - p0)/n] ~ N(0, 1) approximately, if np and n(1 - p) > 5

Test Procedure (z-test)
Step 1: State H0 and H1.
H0: p = p0
H1: p ≠ p0 for a two-sided test; p > p0 or p < p0 for a one-sided test
Step 2: Determine a test statistic and its value.
z0 = (p̂ - p0)/√[p0(1 - p0)/n]
Step 3: Determine a critical value(s) for α.
z_α/2 for a two-sided test; z_α for a one-sided test
Step 4: Make a conclusion. Reject H0 if
|z0| > z_α/2   for a two-sided test
z0 > z_α       for an upper-sided test
z0 < -z_α      for a lower-sided test
for upper-sided test for lower-sided test
For a hypothesis test onp, the following formulas are applied to determine the sample size: L
for two-sided test L
for one-sided test
Y
Example 9.5
For the corroded bridge data in Example 8.4, the following results have been obtained: x 28 n = 4 0 and j = - = - = 0 . 7 n 40 1. (Hypothesis Test onp; Two-Sided Test) Test Ho:p = 0.5 vs. H I :p # 0.5 at a = 0.05. h Step 1: State Ho and H I . Ho: p = 0.5 HI: ~ $ 0 . 5
Example 9.5 (cont.)
Step 2: Determine a test statistic and its value.
z0 = (p̂ - p0)/√[p0(1 - p0)/n] = (0.7 - 0.5)/√(0.5 × 0.5/40) = 2.53
Step 3: Determine a critical value(s) for α.
z_α/2 = z_0.05/2 = z_0.025 = 1.96
Step 4: Make a conclusion.
Since |z0| = 2.53 > z_0.025 = 1.96, reject H0 at α = 0.05.
2. (Sample Size Determination) Determine the sample size n required for this two-sided z-test to detect a true proportion as high as 70% with power of 0.9. Apply an appropriate sample size formula.
power = P(reject H0 | H0 is false) = 1 - β = 0.9 ⇒ β = 0.1
n = [ (z_0.025 √(p0(1 - p0)) + z_0.1 √(p(1 - p))) / (p - p0) ]²
  = [ (1.96√(0.5 × 0.5) + 1.28√(0.7 × 0.3)) / (0.7 - 0.5) ]² = 61.4 ⇒ n = 62
Exercise 9.5 (MR 9-53)
For the circuit test data in Exercise 8.4, the following results have been obtained:
n = 300 and p̂ = x/n = 13/300 = 0.043
1. Test if the fraction of defective units produced is less than 0.05. Use α = 0.05.
2. Determine the sample size n required for this one-sided z-test to detect a true proportion as low as 2% with power of 0.8. Apply an appropriate sample size formula.
9-7 Testing for Goodness of Fit
□ Distinguish between nominal and ordinal variables.
□ Explain why the expected frequency of each class interval should be at least three in the goodness-of-fit test.
□ Test the goodness of fit of a hypothesized distribution.

Categorical Variable
A categorical variable is used to represent a set of categories. Two types of categorical variables are defined depending on the significance of the order of the category listing:
1. Nominal variable: The order of listing of categories is not meaningful.
(e.g.) gender (male and female); hand dominance (left-handed, right-handed, and ambidextrous)
2. Ordinal variable: The order of listing of categories is meaningful.
(e.g.) education (< 9 years, 9 - 12 years, and > 12 years); symptom severity (none, mild, moderate, and severe)

Inference Context
The underlying probability distribution of the population is unknown. Thus, we wish to test if a particular distribution fits the population.
(e.g.) H0: X ~ Poisson distribution with λ (discrete distribution); H0: X ~ N(μ, σ²) (continuous distribution)
Test Statistic
χ² = Σ_{i=1}^{k} (Oi - Ei)²/Ei ~ χ²(k - p - 1)
where:
k = number of class intervals (bins)
Oi = observed frequency of class interval i
Ei = expected frequency of class interval i
p = number of parameters of the hypothesized distribution that are estimated by sample statistics
As the observed frequencies are close to the corresponding expected frequencies, the statistic x2becomes small. Thus, x2is used to test the null hypothesis Ho: X follows a articular distribution. Table 9-6 can be used to calculate the test statistic
d
Table 9-6 Goodness-of-Fit Test Table
class 0be;erved btewals Frequency i
Qi
Probability
Expact4 Frequency
Pi
Bi(=npi)
Oi-Ei
(Caution) Minimum expected frequenc If an expected frequency is too small, X can be improperly large by a small departure of the corresponding observed frequency from the expected frequency. Although there is no consensus on the minimum value of an expected frequency, the value 3,4, or 5 is widely used as the minimum. To avoid this undesirable case, when an expected frequency is very small (say, < 3), the corresponding class interval should be combined with an adjacent class interval and the number of class intervals k is reduced by one.
Y
9-7 Testing for Goodness of Fit
Test Procedure &'-test)
9-19
Step 1: State Ho and H I . Ho: X A particular distribution. HI: X -/- The hypothesized distribution.
-
Step 2: Determine a test statistic and its value. 1. Estimate the parameteds) of the hypothesized distribution if their values are not provided. 2. Define class intervals and summarize observed frequencies (Oi's) accordingly. 3. Estimate the probabilities (pi's) of the class intervals.
4. Calculate the expected frequencies (Ei= npi)of the class intervals. If an expected frequency of a class interval is too small (< 3), combine it to an adjacent class interval. Then, repeat steps 2.2 to 2.4. k
5. Calculate the test statistic X; = i=l
(Oi - E , ) ~
Ei
Step 3: Determine a critical value for a. 2 Xa,k-p-1
Step 4: Make a conclusion. Reject Ho if
(Caution) Upper-sided critical region becomes smaller as the hypothesized distribution fits Since the test statistic xo2 better, no lower limit is set as a critical value in the goodness-of-fit test. Example 9.6
(Goodness-of-Fit Test; Discrete Distribution) The number of e-mails per hour (3 coming to Ms. Young's e-mail account is assumed to follow a Poisson distribution. The following hourly e-mail arrival data are obtained during n = 100 hours:
**--
s-e-D
2 3
e*--w*w*ws-
---me
7 5
~ ~ ~ w B M ~ ~ - - = m M m w - ~ - ~ - - # a m
--~wm"wa-,"-mm~em"
Conduct a goodness-of-fit test at a = 0.05. Step 1: State Ho and H I . Ho: X Poisson distribution with h (Note) E(X) = V(X)= 3L Hl: X -/- Poisson distribution
-
9-20
Chapter 9
Example 9.6 (cont.)
Tests of Hypotheses for a Single Sample Step 2: Determine a test statistic and its value. 1. Estimate the parameter of the hypothesized distribution. Ox60+lx28+2x7+3x5 = E(X) = = 0.57 100 The number of parameters estimated is p = 1. 2. Define class intervals and summarize observed frequencies (0;'s)
3. Estimate the probabilities (pj's) of the class intervals.
4. Calculate the expected frequencies (El = np;) of the class intervals. If an expected frequency is too small (< 3), adjust the class intervals. Since the expected frequency of the last class interval in the above table is less than three, combine the last two cells as follows: Observed Expected Frequency Probability Frequency oi- Ei (Oi -E;) # e-mails per hour Oi Ei (= n j i )
Fi
3
5. Calculate the test statistic:
(0; - E ; ) ~
= 3.14
= i=l
Ei
Step 3: Determine a critical value for a. 2 Xa,k-pl
2 - 2 - X0.05,3-1-1 = X0.05,l
= 3.84
Step 4: Make a conclusion. = 3.14 2 Xi,05,, = 3.84, fail to reject Ho at a = 0.05. In Since other words, the number of e-mail arrivals per hour follows a Poisson distribution at a = 0.05.
9-21
9-7 Testing for Goodness of Fit
4
Exercise 9.6 (MR9-59)
1 Consider the following frequency table (n = 100) of a random variable X
1
0
,y
1
2
3
4
I
(Goodness-of-Fit Test; Continuous Distribution) The final scores (X)of n = 40 students in a statistics class are summarized as follows:
Example 9.7
I
Bins in X
-
Frequency
The mean and variance of the scores are 82.2 and 13.3~, respectively. Test if a normal distribution fits the test scores at a = 0.05. Step 1: State Ho and H I .
Ho: X HI: X
I I
+-
~(82.2J3.3~) ~(82.2J3.3~)
Step 2: Determine a test statistic and its value.
1. Estimate the parameter of the hypothesized distribution. Since y and o2are known, skip this step and the number of parameters estimated is p = 0. 2. Define class intervals and summarize observed frequencies (Oi's) accordingly. Bins in Z Observed Expected Xi - p Frequency Probability Frequency Bins in X d Oi Pi Ei (= nPi
3. Estimate the probabilities (pi's) of the class intervals. pl = P ( X < 60) = P(Z < -1.67) = 0.05 p, =P(6OIX<70)=P(-1.67IZ<-O.92)=0.13 p, = P(70 I X < 80) = P(-0.92 I Z < -0.17) = 0.25 p, = P(80 I X < 90) = P(-0.17 I Z < 0.59) = 0.29 p , = P(90 5 X) = P(0.59 I Z) = 1 - P(Z < 0.59) = 0.28
9-22
Chapter 9
Tests of Hypotheses for a Single Sample
Example 9.7 (cont.)
4. Calculate the expected frequencies (Ei= npi)of the class intervals. If an expected frequency is too small (< 3), adjust the class intervals. Since the expected frequency of the first class interval in the previous table is less than three, combine the first two class intervals as follows: Bins in Z Observed Expected Xi - Y Frequency Probability Frequency Bins in X (T i' Pi Ei (=nPi
(Oi- E , ) ~
4
5. Calculate the test statistic:
=
= 1.49 .i=l .
Observed Frequency
Expected Frequency
oi
Ei(= np,)
Bins in X
Ei
Oi- Ei
(oi
)x,
- E~
Step 3: Determine a critical value for a. 2 Xa,k-p-l
- 2
2
- X0.05,4-0-1 = X0.05,3
= 7.8
Step 4: Make a conclusion. Since = 1.49 2 X i , 0 5 , 3 = 7.81 , fail to reject Ho at a = 0.05.
A random number generator that uniformlv produces numbers (X)between zero and one is tested. The distribution of n = 100 numbers from the generator is summarized as follows:
Exercise 9.7
I
9-8
Bins in X
Frequency
Contingency Table Tests
Describe a contingency table. Conduct a contingency table test for independencelhomogeneity of categorical variables.
9-23
9-8 Contingency Table Tests
rxc Contingency Table
An r x c contingency table (see Table 9-7) refers to the table having r rows (for the categories o f 4 and c columns (for the categories of Y ) where each cell i j (category i of X and category j of Y) contains the corresponding observed frequency Oii.
Table 9-7 An r x c Contingencv Table 1 2
.
011 1 0 2 1
$
; i Oil
X
r
* ~ a - - ~ ~ s - ~ - w . " . * ~ x ~ m , a
Inference Context
.
Or1 w w , . , , . ,a a ,.,
012 022
Oi2 Or2
',m,.m,vw,
... ... ... ... ... ... "-""
0, 02,
0, 0"*,XRRRR .
"-Bw
... ... ... ... ... ...
,ms.w-~ww-~s~-~,,,,,-
01, 02c
Oic '
Oi-c
, ,
We wish to test the association between two categorical variables (X and Y ) by using an r x c contingency table for independence or homogeneity as follows: 1. Independence: To examine if X and Y are independent, a representative sample is selected from a single population and then each element in the sample is classified into one of r categories in Xand one of c categories in Y. (e.g.) Classifying a sample of residents in HAPPYCounty in terms of gender (X) and occupation (Y). 2. Homogeneity: To examine if r populations (X)are homogeneous in terms of Y, representative samples are selected from the r populations and then the elements of each sample are classified into c categories in Y. (e.g.) Classifying five samples of residents from different counties (A') in terms of occupation (Y). Recall that P(A I B) = P(A) , P(B I A) = P(B) , and P(A n B) = P(A)P(B) for two independent events A and B (see Section 2-6). Likewise, the relationship of two categorical variables is considered independent/homogeneous if (1) P ( X = xi I Y) = P ( X = x i ) = pi, , (2) P ( Y = y j I X ) = P ( Y = y j ) = p , , , o r
It
where: P ( X = x, I Y) and P(Y = y , I X ) are conditional probabilities,
L
P ( X = x, ) and P(Y = y j ) are marginal probabilities, and
I
P ( X = x, ,Y = y , ) is the joint probability of X and Y.
Test Statistic
(0, - E , ) ~ -x2(v), v=(r-l)(c-l) j=1 i=l E, where: 0, = observed frequency of cell ij
x 2 = t z
E,
= expected
n . n . j ni.n., frequency of cell i j = np, = npi,p.j= n"=n n n
9-24
Chapter 9
Test Statistic (cont.)
Tests of Hypotheses for a Single Sample
(Derivation) Determination of degrees of freedom v = k - p - 1 = r e - [ ( r - l ) + ( c - I)]-1 = r c - r - c + l = ( r - l ) ( c - 1 ) since: k = # cells = rc p = # marginal probabilities that should be estimated by sample statistics to determine E,'s = (r - 1) + (c - 1) As the observed frequencies are close to the corresponding expected frequencies, the value o f 2 becomes small. Thus, the statistic x2is used to test the null hypothesis Ho:X and Yare independent for independence or Ho:X's are homogeneous in terms of Y for homogeneity. Table 9-8 can be used to calculate the test statistic 2 .
Totals
1
...
n.,
...
(Caution) Minimum expected frequency Like the minimum expected frequency for the goodness-of-fit test (see Section 92), if an expected frequency is too small (say, < 3), x2can be improperly large for a small departure of the observed frequency from the expected one. Thus, any category whose expected frequency is small (< 3) should be combined with an adjacent category. Test Procedure a2-test)
Step 1: State Ho and H I . (1) Case 1: Testing for Independence. Ho: X and Yare independent. HI: X and Yare not independent. (2) Case 2: Testing for Homogeneity. Ho: &'s (r populations) are homogenous in terms of Y. HI: Xi's (r populations) are not homogenous in terms of Y. Step 2: Determine a test statistic and its value. (o,-E,)~ j=1 i=1
E,
- x 2 ((r - 1)(c - 1)) ,
ni.n.j
where EV= n
I
9-25
9-8 Contingency Table Tests
xL-test (cont.)
Step 3: Determine a critical value for a. 2 Xa,(r-l)(c-l)
Step 4: Make a conclusion. Reject Ho if 2
XO
r
'
2
Xa,(r-I)(~-I)
(Caution) Upper-sided critical region Since the test statistic xo2 becomes small as the null hypothesis of independence1 homogeneity is true, no lower limit is set as a critical value in the independence1 homogeneity test. Example 9.8
(Contingency Table Test; Independence) Grades in ergonomics (X) and grades in statistics (Y) of n = 100 students are summarized as follows: Statistics Grade (Y) A B Others
-
Test if grades in ergonomics (X) and grades in statistics (Y) are independent at a
= 0.05.
Step 1 : State Ho and H I . Ho: Grades in ergonomics (X)and grades in statistics (Y) are independent. H I : Grades in ergonomics (X) and grades in statistics (Y) are not independent. Step 2: Determine a test statistic and its value. Statistics Grade (Y ) A B Others
Totals
Ergonomics Grade (3 Others
*.b
~ , d~
, .m. ,
x*Aw
".*"" v
10.6 * 32 *","...
i
a.w.~,d*-.-s-
w m ~ , ~ " . m ~ m . , ~ . * # A " # ~ ~ ~ m ~ 8 , " ~ # w*~*-.aB,w.w.d,2b*~ .u.v~>"~
i
21
8
4
8.6 26 .,".....,.,,.-,,-.,-.,Ia
" * ~ , ~ ~ ,~"a~%~ .~m. ,e.~ #'m, ~wv,m" ,.~
Totals ",",-.-.
/4
-~M~.d*easw,
3"
13.9 ! 42 1'00 ",.,.,., ",1,1",1,1,11" l.llll. ," ", " .,.,,-..
!"'
a " a ~ ~ - . ~ , ~ . * " ~ a - * ~ . ~ . e . e . ~ , " , *m
">" ,I,I,",I,",I,I,I"',I,I,I,I.,I",I,I,I,I,I,I,I,I,I,I,I,I,I , I ",I,I,I,I,I,I,I"..,I,I,I,I,I,I,I",I, I
"11"/11
n sw"
,m,mwmvm*A.#
"l".
Step 3: Determine a critical value for a. 2 Xa,(r-l)(c-1)
2
2
= X0.05,(3-1)(3-1) = X0.05,4 = 9'49
Step 4: Make a conclusion. = 19.5> = 9.49 , reject HO at a = 0.05. ~t is Since concluded that grades in ergonomics (X) and grades in statistics (Y) are not independent at a = 0.05.
Xi,05,4
9-26
m
Chapter 9
4
Tests of Hypotheses for a Single Sample
4
: 1
Exercise 9.8 (MR 9-69)
II I
The failures of an electronic component are under study. There are four types of failures (Y) and two mounting positions (X) at the device. A summary of n = 134 failures are as follows:
Mounting Positionlq
a e * " *
ese
~
~
~
Front Back
w
~
m
~
~
~
~
a
~
~
A
Failure Type (Y) 3 C
D
22 4
46 17
9 12
-
s
.
~
w
~
v
~
m
8
~
.
~
~
A
~
18 6 ~
~
~
~
~
~
~
~
B
.
~
1
Test if the type of failure (Y) is independent of the mounting position (X)at a = 0.05.
I
(Contingency Table Test; Homogeneity) A random sample of n = 300 adults with different hand sizes (2')evaluates two mouse designs (Y). The evaluation results are summarized as follows: -
I
~
w
~
Mouse Designs (Y) Conventional New Hand Size Groups (X) a
eu
--
w-s--a-*-
Small Medium La%e mmm.w
a-wa
flmvam
35 20 30
----- -----
%**
--
m"-mw--asmmwmmm-..
65 80 70 -"-
--s--*-aw--"m-
Test if users in different hand-size groups (X) have homogeneous opinions on the mouse designs at a = 0.05. mStep 1: State Ho and HI. Ho: Users in different hand-size groups are homogeneous in terms of opinions on the mouse designs. HI: Users in different hand-size groups are not homogeneous in terms of opinions on the mouse designs. Step 2: Determine a test statistic and its value.
Mouse Designs (Y) Conventional New Hand Size Groups (X)
",.
Totals ,a-,.,-,B
Totals
Medium
~ e # w w ~ ~ - ~ ~ m - * eb*m-e - - ~ "
ewmm,e-mm--a
1
30 70 B 28.3 71.7 100 ,, Iggggggggggggggg ,,*-,,,,-,,," ., ., .., i 85 ".~"2 15 .,,.," ,. . 1 300 ,... .,, ,a1 ,-,-,,
Large .w.vm...-a~w-~ws..~.*a-~
j
mm-
w,smsm.-.*a*wB, gggg.gggggggggg.".gg"
w. 8A8*wfl
~
~ w*d,a*s*%m--,7
B*B.s,w,
,%*,m-.,**
*w .,- a.".,,
Step 3: Determine a critical value for a. 2 Xa,(r-l)(c-1)
2
2
= X0.05,(3-1)(2-1) = X0.05,2 = 5.99
Step 4: Make a conclusion. Since = 5.7 > ~ i , = ~5.99 ~ , fail , ~to reject Hoat a = 0.05. It is concluded that users in different hand-size groups do not have significantly different opinions on the mouse designs at a = 0.05.
Xi
~
~
~
w
9-27
9-8 Contingency Table Tests
Exercise 9.9 (MR 9-70)
A random sample of n = 630 students in different class standings (X)is asked their opinions (Y) on a proposed change in core curriculum. The survey results are as follows:
Class Standing ( X ) e--s-"ms---
-mm-.-m'a-----~m'm
Sophomore Junior Senior
70 60 40
wwmswm----me--=-m
*
- -----
sw
-w*e
*
130 70 60
a mw
-**+
a"B
B a wa
Test if students in different class standings (X) have homogeneous opinions on the curriculum change at a = 0.05.
9-28
Chapter 9
Tests of Hypotheses for a Single Sample
MINITAB Applications
-
Example 9.1
(Power of Test) (1) Choose Stat ) Power and Sample Size > 1-Sample Z. Click Calculate power for each sample size. Enter the sample size selected in Sample sizes, the difference between the true mean and hypothesized mean to be detected in Difference, and the population standard deviation in Sigma. Then click O~tions.
Jnder Alternative Hypothesis select the type of alternative hypothesis, and in Significance level enter the probability of type I error (a).Then click OK twice.
I
I
Testing mean = null (versus n o t = null) Calculating power f o r mean = null + 20 Alpha = 0.05 Sigma = 40
..................
Sample ,
....................i
3 0 ; 0.7819
9-29
MINITABApplications
Example 9.2
I
(Inference on p, c2Known) (1) Choose File % New, click Minitab Project, and click OK. ITY light bulbs on the worksheet.
(2) r---A--------m
r
INFINITY
1
-
(3) Choose Stat % Basic Statistics % 1-Sample Z. Click Test mean, enter the hypothesized mean life length, and select not equal (type of the alternative hypothesis) in Alternative. Enter the assumed population
2-Test Test of mu = 765.00 vs mu not = 765.00 The assumed sigma = 40.0 Varlable INFINITY
Mean 780.00
N 30
StDev 40.02
SE Mean 7.30
................................... i Z 2.05 ,..................
P /0.040 i -5.
...........
l . I I
"y"
,5,
.ii..
3:.:
9.2.1,2
-30
Chapter 9
Tests of Hypotheses for a Single Sample
(5) Choose Stat > Power and Sample Size % 1-Sample Z. Click Calculate sample size for each power value. Enter the power of the test predefined in Power values, the difference between the true mean and hypothesized mean to be detected in Difference, and the population standard deviation
Example 9.2 (cont.)
(6) Under Alternative Hypothesis select the type of alternative hypothesis, and in Significance level enter the probability of type I error (a).Then click OK twice.
1 I
(7) Obtain the s a m ~ l esize reauired to satisfv the medefined test condition.
I I
Power and Sample Size 1-Sample Z Test
?I
I
Testing mean = null (versus not = null) Calculating power for mean = null + 20 Alpha = 0.05 Sigma = 40
..................
isample i Target i Slze -z i 43 i 0.9000
..................
Actual
? -- p
0.9064
Y
9-31
MINITABApplications
Example 9.3
(Inference on p, cr2 Unknown) (I) Choose File New, click Minitab Project, and click OK.
*
(2) Enter the life length data of INFINITYlight bulbs on the worksheet.
*
*
2hoose Stat Basic Statistics 1-Sample t. Click Test mean, ente:r the hypothesized mean life length, and select not equal in Alternative. Then click OK.
T-Test of the Mean
I
T e s t of mu = 765.00 vs mu n o t = 765.00 Variable INFINITY
Qs
Mean 780.00
N
30
*
~dl
StDev 40.02
I
.................................
SE Mean
7.31
i T I2.05 0.049 ,................................
9.3.1
a
lu:
(5) Choose Stat Power and Sample Size k 1-Sample t. Click Calculate sample size for each power value. Enter the power of the test predefined in Power values, the difference between the true mean and hypothesized mean to be detected in Difference, and the sample standard deviation in
9-32
Chapter 9
Example 9.3 (cont.)
Tests of Hypotheses for a Single Sample
(6) Under Alternative Hypothesis select the type of alternative hypothesis, and in Significance level enter the probability of type I error (a).Then
Cancel
( 7 ) Ohtain the sample size reauired to satisfv the redefined test condition.
I I
Power and Sample Size l-Sample t T e s t
I.
*I
T e s t i n g mean = n u l l ( v e r s u s n o t = n u l l ) C a l c u l a t i n g power f o r mean = n u l l + 20 A l p h a = 0.05 Sigma = 40
I
.................. !Sample
i
i
i
Slze 44
i
.................
:
Target rower
Actual
0.9000
0.9000
ruwm
9.3.2 Y
Example 9.5
(Inference on p)
*
(1) Choose Stat Basic Statistics ) 1 Proportion. Click Summarized data and enter the number of bridges surveyed in Number of trials and the number of bridges corroded in Number of successes. Then click
9-33
MINITABApplications
Example 9.5 (cont.)
(2) Enter the level of confidence in Confidence level, the hypothesized proportion in Test proportion, and not equal in Alternative. Check Use twice.
I
I
I
Test and Confidence Interval for One Proportion Tesc o f p Sample
= 0.5 vs p
X 28
N 40
not
= 0.5
Sample p 0.700000
:.. ............................ 95.0 % C I j (0.557987, 0.842013);
2-Value
P-Value
2.53 0.011 ................................
9.5.1 --
(4) Choose Stat > Power and Sample Size > 1 Proportion. Click Calculate sample size for each power value. Enter the power of the test predefined in Power values, the true proportion to be detected in Alternative p, and
9-34
Chapter 9
Example 9.5 (cont.)
Tests of Hypotheses for a Single Sample
(5) Under Alternative Hypothesis select the type of alternative hypothesis, and in Significance level enter the probability of type I error (a).Then click OK twice.
(61 Ohtain the samnle size reauired to satisfv the medefined test condition.
I I
1
Power and Sample Size
I
T e s t f o r One P r o p o r t i o n T e s t i n g proportion = 0 . 5 ( v e r s u s n o t = 0 . 5 ) C a l c u l a t i n g power f o r p r o p o r t i o n = 0 . 7 Alpha = 0 . 0 5 Difference = 0.2
................. Sample Size 62
Example 9.8
i i
Target
Actual
ruwrr
ruwrr
0.9000
0.9029
-
(Goodness-of-Fit Test; Independence) (1) Choose File
+ New, click Minitab Project, and click OK. CI-T - j --J-
I
Others
~2 A
r--T--d----
Others
21
3
MI
9.5.2
-
MINITABApplications
9-35
, ,
(3) Choose Stat Tables Chi-Square Test. In Columns containing the table, select A, B, and Others which contain the frequencies. Then click
Example 9.8 (cont.)
OK.
1
Chi-square Test
I
Expected counts are p n n t e d b e l o r observed counts
I Total
A
B
Others
12 5.46
5 6.72
4 8.82
26
32
42
Total 21
100
................................................................................................... !Chi-Sq = 7 . 8 3 4 + 0.440 + 2 . 6 3 4 + 0 . 3 2 1 + 1.244 + 0 . 2 7 9 + 2 . 4 4 5 + 0 . 6 2 1 + 3 . 6 7 8 = 19.496 ~ D F= 4 , P-Value = 0.0011 .................................................................................................. i
Example 9.9
'
I *!
(Goodness-of-Fit Test; Homogeneity) (1) Choose File > New, click Minitab Project, and click OK. (2) 1
on the mouse designs.
9-36
Chapter 9
Tests of Hypotheses for a Single Sample
*
Example 9.9 (cont.)
(3) Choose Stat % Tables Chi-Square Test. In Columns containing the table, select Conventional and New which contain the frequencies. Then click OK.
I 1
Chi-Square Test
I
I Expected 1
1
c o u n t s a r e p r i n t e d below observed c o u n t s
Conventi 35 28.33
New 65 71.67
Total 100
Total 85 2 15 300 ....................................................... !
1.569 + 0.620 2.451 + 0.969 0.098 0.039 ~ D F= 2 , P-Value = 0.057 I C h l - ~ q=
+
+
+ =
5.746
-
......................................................... $t
IB~,~
v: $9
31
Answers to Exercises
9-37
Answers to Exercises Exercise 9.1
1. (AcceptanceICritical Regions) The test statistic of p is the sample mean with the following sampling distribution:
The acceptance region I I 2 I u for Ho: px = 100 vs. HI: px # 100 satisfies the following: 1 - a = 1 - 0.01 = 0.99 = P(fai1 to reject Ho I p = 100) =~(11X1u1p=100)
Accordingly, the critical values are
Thus, acceptance region: 98.3 1F 1 101.7 rejection region: F < 98.3 and F > 101.7 2. (p and Power of Test) = P(fai1 to reject HO1 HOis false) = P(98.3 I2 1101.7 1 p = 103)
a
power = 1 -
= 0.973
9-38
Chapter 9
Exercise 9.2
Tests of Hypotheses for a Single Sample
1. (Hypothesis Test on y, o2Known; Upper-Sided Test) Step 1: State Ho and H I . Ho: p = 1.50 HI: p > 1.50 Step 2: Determine a test statistic and its value. X-po - 1.5045-1.50 = 1.42 zo =-CJ/& o.o1/4G Step 3: Determine a critical value(s) for a. z, = z,,,, = 2.33 Step 4: Make a conclusion. Since z, = 1.42 2 z,,,, = 2.33, fail to reject Ho at a = 0.01.
2. (P-value Approach)
P = 1- Q(z,) = 1 - Q(1.42) = 1 - 0.922 = 0.078 Since P = 0.078 5 a = 0.01, fail to reject Ho at a = 0.01. 3. (Sample Size Determination) (1) Sample Size Formula power = P(reject HoI Ho is false) = 1 - p = 0.9 6 = p - po= 1.50 - 1.505 = 0.005 n=
(z, +
~
P = 0.1
~() z ~~+. o z~~~~, ~ ) ~o(2.33 . o ~+ ~1.28)~~ 0 . 0 E 1 53 ~ 0.005~ 0.005~
6 (2) OC Curve To design a one-sided z-test at a = 0.01 for a single sample, OC Chart VId is applicable with the parameter 161 10.0051 = 0.5 -- -----d = IP-PO I CJ 0 0.01 By using d = 0.5 and P = 0.1, the required sample size is determined as n = 55 as displayed below. Note that this sample size is similar with the sample size n = 53 determined by using a sample size formula.
d
(4 0 C. c
w for &fiereat values ofn for the one-sided n
d test for a lml of s ~ l p a u= 0 01
Answers to Exercises
Exercise 9.3
9-39
1. (Hypothesis Test on p, o2Unknown; Upper-Sided Test) Step 1: State Ho and H I . Ho: y = 1.0 H,: p > 1 . 0 Step 2: Determine a test statistic and its value. to =--x - P o - 1.25 -1.0 = 4.47 s/& 0.25/fi Step 3: Determine a critical value(s) for a. tct,n-l - t0.05,20-~ = t0.05,19 = .73 Step 4: Make a conclusion. Since to = 4.47 > to.,
,,,,= 1.73, reject Hoat a = 0.05. We conclude
that the PVC pipe product meets the ASTM standard at a = 0.05. 2. (Sample Size Determination) power = P(reject HoI Hois false) = 1 - p = 0.8 a P = 0.2 6 ~ p - =1.10-l.O=O.lO p ~
I
To design a one-sided t-test at a = 0.05 for a single sample, OC Chart VIg is applicable with the parameter
I
By using d = 0.4 and P = 0.2, the required sample size is determined as n = 40 as displayed below.
9-40
Chapter 9
Exercise 9.4
Tests of Hypotheses for a Single Sample
1. (Hypothesis Test on 02; Upper-Sided Test) Step 1: State Ho and H I . Ho: o2=0.012 HI: 0 2 > 0 . 0 1 2 Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value@) for a. 2 Xa,n-I
2
2
= X0.0l,l5-1 = X0.01,14 = 29.14
Step 4: Make a conclusion. Since = 8.96 2 0,,14 = 29.14 ,fail to reject Ho at a = 0.0 1. There is no significant evidence which indicates that the standard deviation of the hole diameter is greater than 0.01 mm at a = 0.01.
2. (Sample Size Determination) power = P(reject Ho 1 HOis false) = 1 - j 3 = 0.9
p = 0.1
To design an upper-sided X2-testat a = 0.01, OC Chart VII is applicable with the parameter 0
h=-=1.5
By using h = 1.5 and p = 0.1, the required sample size is determined as n = 50 as displayed below.
0.C curves for different values of n for the one-sided (upper tail) chi-square test for a level of significance a = 0.01.
(1)
Answers to Exercises
Exercise 9.5
1. (Hypothesis Test onp; Lower-Sided Test) Step 1 : State Ho and H I . Ha: p = 0.05 H I : p < 0.05 Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for a. z , = zo,05= 1.64 Step 4: Make a conclusion. Since z0 = -0.53 4: -z,,,
= -1.64, fail to reject Ho at
a = 0.05.
2. (Sample Size Determination) power = P(reject Ho I Ho is false) = 1 - p = 0.8 3 P = 0.2
Exercise 9.6
(Goodness-of-Fit Test; Discrete Distribution) Step 1: State Ho and H I . Ho: X Poisson distribution with h = 1.2 H I: X -/- Poisson distribution
-
Step 2: Determine a test statistic and its value. 1. Estimate the parameter of the hypothesized distribution. Since the value of h is given, skip this step. The number of parameters estimated is p = 0. 2. Define class intervals and summarize observed frequencies (Oi's) accordingly.
9-41
9-42
Chapter 9
Tests of Hypotheses for a Single Sample
Exercise 9.6 (cont.)
-
4 or more -*----
4 B-M-*mm
------
0.03
-e*-m-----
3
1
0.3
* . a a * . %
-m,
"**-=-
3. Estimate the probabilities (pi's) of the class intervals.
4. Calculate the expected frequencies (Ei = npi) of the class intervals. If an expected frequency is too small (< 3), adjust the class intervals. All the expected frequencies are three or above. 5
(Oi - E , ) ~
5. Calculate the test statistic: X: =
= 6.6 i=l
I
Ei
Step 3: Determine a critical value for a.
Step 4: Make a conclusion. = 6.6 2 Since
Xi.05,4 = 9.49 ,fail to reject Ho at a = 0.05.
Exercise 9.7
(Goodness-of-Fit Test; Continuous Distribution) Step 1: State HOand H I . Ho: X Uniform distribution f (x) = 1 , 0 I x I 1 HI: X Uniform distribution
-
+
I I
Step 2: Determine a test statistic and its value. 1. Estimate the parameter of the hypothesized distribution. Since the uniform distribution does not have any parameter, skip this step. The number of parameters estimated is p = 0. 2. Define class intervals and summarize observed frequencies (Oi's) accordingly.
9-43
Answers to Exercises Exercise 9.7 (cont.)
3. Estimate the probabilities (pi's) of the class intervals. .2
p , = P(X < 0.2) = fidx =:IX
= 0.2
Since the sizes of the bins are the same, the corresponding probabilities are equal to each other as 0.2. 4. Calculate the expected frequencies (Ei = npi) of the class intervals. If an expected frequency is too small (< 3), adjust the class intervals. None of the expected frequencies are less than three. (0, -E , ) ~ =1.70 Ei
Xt =
5. Calculate the test statistic:
i=l
Step 3: Determine a critical value for a. 2 - 2 Xc~,k-~-l - X0.05,5-0-1
2
= X0.05,4 = 9.49
Step 4: Make a conclusion. Since = 1.49 P = 7.81 , fail to reject Ho at a = 0.05. The data does not indicate any abnormal functioning of the random number generator at a = 0.05. (Contingency Table Test; Independence) Step 1: State Ho and H I . Ho: The type of failure is independent of the mounting position. H I : The type of failure is not independent of the mounting position.
Exercise 9.8
Step 2: Determine a test statistic and its value:
A
I
---*
Totals
--m-mx-w-
1 ~
P
Failure Type (Y) B C
i
-
~
26
m
~
~
-
~
-
-
-
-
63 m
-
~
~
-
-
~
-
m
24 -
a
.
w
~
m
-
-
~
21 m
s
~
"
~
L 134 -,---
9-44
Chapter 9
Tests of Hypotheses for a Single Sample Step 3: Determine a critical value for a.
Exercise 9.8 (cont.)
2 Xa,(r-l)(c-1)
2
= XO.O5,(2-1)(4-1)
2
= X0.05,3 = 7.8
Step 4: Make a conclusion. Since = 1 0 . 8 > ~ i , , ~= ,7.81, ~ reject Ho at a = 0.05.
Exercise 9.9
(Contingency Table Test; Homogeneity) Step 1: State Ho and H I . Ho: Students in different class standings are homogeneous in terms of opinions on the curriculum change. H I : Students in different class standings are not homogeneous in terms of opinions on the curriculum change.
I
I
S t e 2: ~ Determine a test statistic and its value.
Opinion (Y) Favoring opposing
Totals
Totals
Step 3: Determine a critical value for a. 2 Xa,(r-l)(c-1)
- 2 2 - X0.05,(4-1)(2-1) = X0.05,4 = 9.49
Step 4: Make a conclusion. Since = 27.0 > X i = 9.49, reject Ho at a = 0.05. It is concluded that students in different class standings have significantly different opinions on the curriculum change at a = 0.05.
.,,,
10
Statistical Inference for Two Samples
10-2 Inference for a Difference in Means of Two Normal Distributions, Variances Known 10-3 Inference for a Difference in Means of Two Normal Distributions, Variances Unknown
10-4 Paired t-Test 10-5 Inference on the Variances of Two Normal Populations 10-6 Inference on Two Population Proportions MINITABApplications Answers to Exercises
I
c
10-2
Inference for a Difference in Means of Two Normal Distributions, Variances Known
Cl Test a hypothesis on p~ - p2 when o12and 022 are known (z-test). Cl Determine the sample size of a z-test for statistical inference on pl - pz by using an appropriate
sample size formula and operating characteristic (OC) curve.
O Establish a 100(1 - a)%confidence interval (CI) on p1 - p2 when o12and 022are known. 0 Determine the sample size of a z-test to satisfy a preselected level of error (I$) in estimating pl - p2. Inference Context
Parameter of interest: p~ - p2 Point estimator of pl - p2:
(Note)
Xl - N(pl)' , 0
0:
o;
"'1
n2
X1 - X2 - N(pl - p 2 ,-+ -) and
X2 - N(p2,-)0;
n1 XI and X2 are independent
1'( Test statistic of pl - p2: Z =
-2'
2
n2
) - (PI - 112)
I
2
, o 1 and 0 2 known;
-
_ N(O,l)
Isal p a p y s - l a ~ o l ~ o j "z- > Oz 1sal paprs-laddn ~ o j " z < OZ 1sao pap!s-OMIlroj " " 2 < lozJ J! OH13afax .uo!snpuo3 e ayeK :p days
L
Og-('x- '_XI
=
0 2
'anIan s)! pua q+s!+a+s)sa+ e au!uualaa :Z dais
'Lia~!l~adsal ' 1
ED = ( Z
zTi- 'ri=(Zx)y-('x)g=(Zx- Ix)y a ~ u e ! . ~pue e ~ ueaur q y ! ~~euuouS! z~- ~
IX
-
pue) ' [U ~1 ;D = ( 'x)/i Z ~ )' [rl g = (Ix)g - '' d =( -
sa3uepeA put? sueaur y o ! leuuou ~ PUB ouapuadapu! ale
Z~
-
.ouapuadapu! ale z-x pup I X
pue
IX
-
a3u!S
10-2 Inference for a Difference in Means of Two Normal Distributions, Variances Known
Sample Size Formula (cont.)
Operating Characteristic (OC) Curve
n=
10-3
( z a + ~ ~ ) ~ +( ooi :)
for one-sided test (A - A d L where: A = p, - p2 (true mean difference), A + A, (hypothesized mean difference), and
Table 10-1 displays a list of OC charts in MR and a formula of the OC parameter d for a z-test on p1 - p2.By using the table, the appropriate OC chart for a particular z-test is chosen (e.g., for a one-sided z-test at a = 0.05, chart VIc is selected).
Table 10-1 Operating Characteristic Charts for z-test - Two Samples Chart VI OC p m e t e r Test a G ' e A in %)
Confidence Interval Formula
A 100(1 - a)% CI on p, - p2when o12and ~2~ are known is -
5 p, -)I,
-
I X, - X,
+ z,,,
for two-sided CI
I p l - p2 for lower-confidence bound
pl - p2 I Xl - X2 + Z,
1:
-+
: A
for upper-confidence bound
(Derivation) Two-sided confidence interval* on p1 - p2, o12and 022known By using the test statistic Z =
P(LI~IU)=I-~ 3
P(- z,,, I z I z,,,)= 1- a
Therefore, -
-
L=X,-X,-z,,,
-
-
and U = X l - X 2 + z a 1 2
10-4
Chapter 10
Statistical Inference for Two Samples
Sample Size Formula for Predefined Error
As an extension of the sample size formula for estimation on p with a preselected level of error (described in Section 8-I), the following formula is used for estimation on pl - p2:
Example 10.1
The life lengths of INFINITY (XI; unit: hour) and FOREVER(X2; unit: hour) light bulbs are under study. Suppose that XI and X2 are normally distributed with oI2= 402 and 022= 302,respectively. A random sample of INFINITY light bulbs is presented in Example 8.1 and a random sample of FOREVERlight bulbs is shown below: Life Length No Life Length No Life Length No II 755 21 837 7 89
(a
(y)+ 2
n=
(0;
7 8
769 836 847
9
I
0 ; ) . where a, = n2 = n
17 18 19
787 788 794
The two random samples are summarized as follows: Brand of Light Bulb
Sample Size nl
= 30
Sample Mean -
x,
= 780
hrs
Variance
o12= 402
mean life length of an INFINITY light bulb is different from that of a FOREVER light bulb at a = 0.05. h Step 1: State Ho and H I . Ho: -p2 = 0 HI: p1-p2+0 Step 2: Determine a test statistic and its value. (780 - 800) - 0 z,, = (X, - X2) - 6O-
-+-
I
+ --
Step 3: Determine a critical value(s) for a. za12 = z0.0512
= z0.025 =
Step 4: Make a conclusion. Since lz,, = 2.12 > zo,02j= 1.96, reject Hoat a = 0.05.
1
10-2 Inference for a Difference in Means of Two Normal Distributions, Variances Known
Example 10.1 (cont.)
10-5
2. (Sample Size Determination for Predefined Power of Test) Determine the sample size n (= nl = n2) required for this two-sided z-test to detect the true difference in mean life length as high as 20 hours with 0.8 of power. Apply an appropriate sample size formula and OC curve. w (1) Sample Size Formula power = P(reject HOI Ho is false) = 1 - p = 0.8 3 P = 0.2
(2) OC Curve To design a two-sided z-test at a = 0.05 for two samples, OC Chart Vla is applicable with the parameter A 120-0) d=dT=dy=0.4
o +o
402 + 30
By using d = 0.4 and P = 0.2, the required sample size is determined as n = 50 as displayed below.
3. (Contidence Interval; Two-Sided CI) Construct a 95% two-sided confidence interval on the mean difference in life length (pl - p2). Based on this 95% twosided CI on p~ - p2, test Ho:p1- p2 = 0 VS.HI:p1- p2 # 0 at a = 0.05. h
1 -a=0.95 3 a=0.05 95% two-sided CI on pl - p2:
10-6
Chapter 10
Statistical Inference for Two Samples
Since this 95% two-sided CI on y l - y2 does not include the hypothesized value zero, reject Ho at a = 0.05.
Example 10.1 (cont.)
4. (Sample Size Determination for Predefined Error) Find the sample size n (= nl = n2) to construct a two-sided confidence interval on yl - y2 within 20 hours of error at a = 0.05.
Exercise 10.1 (MR 10-2)
Two types of plastic (say, plastic 1 and plastic 2) are used for an electronics component. It is known that the breaking strengths (unit: psi) of plastic 1 (XI) and plastic 2 (X2)are normal with 0,' = 0 2 ~= 1.O. From two random samples of n, = = 155.0. 10 and n2 = 12, we obtain = 162.5 and 1. The company will not adopt plastic 1 unless its breaking strength exceeds that of plastic 2 by at least 10 psi. Based on the sample test results, should they use plastic l? Use a = 0.05 in reaching a decision. 2. Determine the sample size n (= nl = n2) required for this one-sided z-test to detect the true difference in mean breaking strength as high as 11.5 psi with 0.9 of power. Apply an appropriate sample size formula and OC curve. 3. Construct a 95% upper-sided confidence interval on the mean difference in breaking strength (y1- y2).
10-3
Inference for a Difference in Means of Two Normal Distributions, Variances Unknown
O Test a hypothesis on yl - y2 when o12and 0 2 ~are unknown (t-test). 0 Determine the sample size of a z-test for statistical inference on yl - y2 by using an appropriate operating characteristic (OC) curve. R Establish a 100(1 - a)% confidence interval (CI) on y l - y2 when 0 1 2 and 0; are unknown. *---*-.#.aw--m
-*-----
Inference Context
w
M
m
w
m
-
-
-
m
-
u
m
m
~
-
~
s
m
-
m
~
~
m
-
-
~
~
-
~
~
*
~
-
wa**m
m-ms-s
.s-s-dm
-"*
B-%
Parameter of interest: P I - y2 Point estimator of y1 - y2: (Note)
XI - N(pl ,-)0:
0:
0;
"'1
n2
X1 - X2 - N(yl - y 2,- + -) and
0 X2 - N(y2,A), o12and oZ2 unknown;
n1 and X2 are independent
n2
----
10-3 Inference for a Difference in Means of Two Normal Distributions, Variances Unknown
Inference Context (cont.)
10-7
Test statistic of pl - p2: Different test statistics of p1 - ~2 are used depending on the equality of o12and 022 as follows: (1) Case 1: Equal variances (oI2= 022 = 02)
,
where: S, = (n, - 1)s: + (n2 - 1)s; n, + n 2 - 2
(pooled estimator of 02)
(2) Case 2: Unequal variances (oI2# 022)
(Note) The equality of two variances (o12= 022)can be checked by using an F test described in Section 10-5.
Test Procedure (t-test)
Step 1: State Ho and H I . Ho: PI - P2 = 60 HI: PI - p2 # 60 for two-sided test PI - p2 > 60 or pl - p2 < 60 for one-sided test Step 2: Determine a test statistic and its value. (1) Case 1: Equal Variances (oI2= 022
where: S, =
n,
= 02)
+ n2 - 2
(2) Case 2: Unequal Variances (oI2# 022)
Step 3: Determine a critical value(s) for a. t,,,, for two-sided test; t , , for one-sided test Step 4: Make a conclusion. Reject Hoif It,, > t 2 for two-sided test
1
to > t,,
for upper-sided test
to < - t , ,
for lower-sided test
10-8
Chapter 10
Operating Characteristic (OC) fhrve
Statistical Inference for Two Samples
Table 10-2 displays a list of OC charts in MR and a formula of the OC parameter d for a t-test on p1 - p2 where ol2 = o22 = o2 and nl = n2 = n. Note that OC curves are unavailable for a t-test when oI2# 0 z 2 because the corresponding t distribution is unknown. By using the table, the appropriate OC chart for a particular t-test is chosen (e.g., for a two-sided t-test at a = 0.01, chart VIfis selected). The sample size n* obtained from an OC curve is used to determine the sample size n (= nl = n2) as follows: n*+l n=, where n* from an OC curve 2
Table 10-2 Operating Characteristic Charts for t-test Chart VI Test a
-
Two Samples
(Appendix A in MR)
OC parameter
Two-sided t-test One-sided
subjective estimate.
Confidence Interval l70rmula
A 100(1 - a ) % CI on p1 - p2 when oI2and 022are unknown depends on the equality of o12 and 022as follows: (1) Case 1: Equal variances (oI2 = 022 = 02)
-
-
I
I p1 - p2 I X I - X2 + ta, 2,vSp
3,- 1,- t 1 2 , vS
1 +5 pl - p2
for two-sided CI
for lower-confidence bound
n2
'"1
- pl-& I Xl - X 2 +ta,2,vSp
for upper-confidence bound
(2) Case 2: Unequal variances (o12# 0 2 ~ ) -
-
I p, -p2 I X I - X , +ta12,,
for two-sided CI
5 p, - p2 for lower-confidence bound -
-
pl -& I Xl -X2 +ta,2,v CI and Hypothesis Test for Large Sample
for upper-confidence bound
If the sample sizes are large (nl and n2 L 30), the z-based C1 formulas and test procedure in Section 10-2 can be applied to inference on pl - p2 regardless of whether the underlying populations are normal or non-normal according to the central limit theorem (described in Section 7-5).
10-3 Inference for a Difference in Means of Two Normal Distributions, Variances Unknown
I
II I
10-9
For the light bulb life length data in Example 10.1, the following results have been obtained: Brand of Light Bulb
Sample Size
nl
INFINITY (1)
FOREVER(2)
Sample Mean -
= 30
x,
= 780
Sample Variance
hrs
S :
= 40'
-
n2 = 25
x2 = 800 hrs
Sided Test) Assuming 01' # 0 z 2 , test if the mean life length of an INFINITY light bulb is different from that of a FOREVERlight bulb at a = 0.05. Step 1: State Ho and H I . Ho: ~1 - ~2 = 0 H I : PI - P2+ 0 Step 2: Determine a test statistic and its value. (780 - 800) - 0 to = (XI - X2)- 60 -
-+v=
(st I n , (s: I n ,
+
+si In,), ( ~l ni d 2
--
(40, I 30 + 302 12512
-2 =
-2~54
(40, 1 3 0 ) ~L (302 1251,
Step 3: Determine a critical value(s) for a. t a ~ =~t0.025,54 , ~ = 2.00 Step 4: Make a conclusion. Since Ito/= 2.12 > to,o,5,5,= 2.00, reject Hoat a = 0.05.
I
2. (Sample Size Determination) Assuming 0 1 , = 0 z 2 , determine the sample size n (= nl = n2) required for this two-sided t-test to detect the true difference in mean life length as high as 20 hours with 0.8 of power. Apply an appropriate OC curve. h To design a two-sided t-test at a = 0.05 for two samples, OC Chart VIe is applicable with the parameter
I
(Note) s: =
(n, - 1)s; + (n, - 1)s; n, + n 2 - 2
10-10 Chapter 10 Example 10.2 (cont.)
Statistical Inference for Two Samples For d = 0.28 and P = 0.2, n*
=
100 as displayed below.
1.o j - - - - - - - r - - - - - - 7 - - ~ - - - - - 3
(e)
O.C. curves for different v d w of n for the two-sided t-test for a level of significance a = 0.05.
Thus, the required sample size n is
3. (Confidence Interval on p1- p2, o12and o; Unknown but Equal; TwoSided CI) Assuming 0 1 2 = 0z2, construct a 95% two-sided confidence interval on the difference in mean life length (pl - p2). 95% two-sided CI on p1- p2:
(Note) This t-based CI is wider than the corresponding z-based CI -38.5 I p l - p , 5 - 1 5 in Example 10-1. 4. (Confidence Interval on pl - p2, o12 and o; Unknown and Unequal; TwoSided CI) Assuming 0 1 2 # 0 2 2 , construct a 95% two-sided confidence interval on the difference in mean life length (pl - p2). Based on this 95% two-sided CIonp~-p2,te~tH~:p~-p~=O~~.H~:p~-p~#Oata=0.05
10-11
10-4 Paired t-Test
I Exercise 10.2 (MR 10-22)
I I
Since this 95% two-sided CI on p~ - p2 when 0 1 2 and 02' are unknown and unequal does not include the hypothesized value zero, reject Ho at a = 0.05. Two suppliers manufacture a plastic gear used in a laser printer. The impact strength of these gears measured in foot-pounds is an important characteristic. A random sample of n, = 10 gears from supplier 1 results in Fl = 290 and sl = 12, while another random sample of n2 = 16 gears from supplier 2 results in Z2 = 32 1 and s2 = 22. Assume that XI and X2 are normally distributed. 1. Assuming o12= 022,test if supplier 2 provides gears with higher mean strength at a = 0.05. 2. Assuming oI2= oZ2,determine the sample size n (= nl = n2) required for this one-sided t-test to detect the true mean difference as high as 25 foot-pounds with 0.9 of power. Apply an appropriate OC curve. 3. Assuming oI2= 022, construct a 95% upper-confidence bound on pl - p2. 4. Assuming oI2# 022,construct a 95% upper-confidence bound on
10-4
- p2.
Paired t-Test
O Explain a paired experiment and its purpose. O Test a hypothesis on AD for paired observations when 0 D 2 is unknown (paired t-test). O Establish a 100(1 - a ) % confidence interval (CI) on p~ for paired observations when 0~~is unknown.
Paired Experiment
A paired experiment collects a pair of observations (XI and X2) for each specimen (experimental unit) and analyzes their differences (instead of the original data). This paired experiment is used when heterogeneity exists between specimens and this heterogeneity can significantly affect XI and X2; in other words, XI and X2 are not independent. For instance, in Table 10-3, the effect of a diet program on the change in weight is under study with n participants, who are heterogeneous. This heterogeneity is likely to confound the effect of the diet program on the weight change; in other words, the weight before (XI) and the weight after (X2) the diet program are not independent. To block the effect of the heterogeneity on the weight change, it is proper to analyze the differences (D) of the paired observations.
10-12 Chapter 10
Statistical Inference for Two Samples
Paired Experiment (cont.)
Inference Context
Parameter of interest: ~ 1 0 -
Point estimator of pD: D = X I - X ,
0; - N ( p D ,T),
X I and X , are -
oD2unknown;
independent.
D-PD -t(v),v=n- 1 Test statistic of PD: T = -----s, I&
Test Procedure (paired t-test)
Step 1: State Ho and H I . Hot pD = 60 H I : p~;t GO for two-sided test, p~> 60 or p~ < 60 for one-sided test. Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for a. t,,,,,-, for two-sided test; t ,
for one-sided test
Step 4: Make a conclusion. Reject Ho if t o > t , 2 , n 1 for two-sided test
1
Confidence Interval Formula
to > t,,,-,
for upper-sided test
to < -t,,,-,
for lower-sided test
D - t,, , ,
SD
-
-< p D 5 D
&-
+ t,,
,,
&
for two-sided CI for lower-confidence bound
SD PD 5 D + t,, -
&
CI and Hypothesis Test for Large Sample
for upper-confidence bound
If the sample size is large (n 2 30), the z-based CI formulas and test procedure in Section 9-2 can be applied to inference on pD according to the central limit theorem.
r
10-4 Paired t-Test
Example 10.3
10-13
The weights (unit: lbs) before and after a diet program for 30 participants are measured below.
w
7 8 9 10
-
w w
A
198 165 180 172 m
-
9
-
182 160 178 171 a
w
w
17 18 19 20
184 173 179 168
w
-
w
176 156 167 152 e
A
The summary of the weight data is as follows: Sample Size Sample Mean (no. participants) (weight loss)
d
n = 30
=
10 lbs
27 28 29 30
-
215 206 165 170
206 20 1 156 154
Sample Variance s i = 52
1. (Hypothesis Test on PD, OD' Unknown; Two-Sided CI) Test if there is a significant effect of the diet program on weight loss. Use a = 0.05. h Step 1: State Ho and H I . Ho: pD=o HI: b # O Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for a. ta/2,n-l
= t0.05/2,30-1 = t0.025,29 = 2.045
Step 4: Make a conclusion. Since It, = 10.95 > t0,025,24 = 2.045 , reject Hoat a = 0.05.
1
2. (Confidence Interval on b,OD' Unknown; Two-Sided CI) Construct a 95% two-sided confidence interval on the mean weight loss (pD)due to the diet program. Based on this 95% two-sided CI on PD, test Ho: p . =~ 0 VS.HI:PD z 0 at a = 0.05. rn 1 - a = 0 . 9 5 3 a=0.05; v = n - 1 = 3 0 - 1 = 2 9 95% two-sided CI on
10-14 Chapter 1
Statistical Inference for Two Samples
Example 10.3 (cont.) Since this 95% two-sided CI on zero, reject Ha at a = 0.05.
Exercise 10.3 (MR 10-38)
does not include the hypothesized value
A computer scientist is investigating the usefulness of two different design languages in improving programming tasks. Twelve expert programmers, familiar with both languages, are asked to code a standard function in both languages, and their coding times in minutes are recorded as follows:
*-
d d
-
7 8 9
16 14 21
10 13 19
11 12
13 18
15 20
w - w w . -
--m--w---
smm-s----
*
-
ee--ms
m-ma-mm
6 1 2
--
-2 *-=---
=---
-2
-ms-
--
w-mm"
The summary quantities of the coding time data are n = 12, d = 0.67, and s i = 2.962 1. Does the data suggest that there is no difference in mean coding time for the two languages? Use a = 0.05 in drawing a conclusion. 2. Construct a 95% two-sided confidence interval on the difference between the two languages in coding time (pD). Based on this 95% CI on b,test Ha: = 0 vs. H I :pD# 0 at a = 0.05.
10-5
Inference on the Variances of Two Normal Populations
Explain the characteristics of an F distribution. R Read the F table. Test a hypothesis on the ratio of two variances (012/022)(F-test). O Determine the sample size of an F-test for statistical inference on 0 1 ~ 1 0by 2 ~using an appropriate operating characteristic (OC) curve. Establish a 100(1 - a)% confidence interval (CI) on 012/022.
1
10-5 Inference on the Variances of Two Normal Populations
F Distribution
10-15
The probability density function of an F distribution with u and v degrees of freedom is
The mean and variance of the F distribution are E ( X ) = v l(v - 2 ) for v > 2
V ( X )=
2v2(u + v - 2) for v > 4 u(v - 2),(v- 4)
An F distribution is unimodal and skewed to the right (see Figure 10-4 in MR) like a X 2 distribution. However, an F distribution is more flexible in shape by having an additional parameter of degrees of freedom. F Table
The F table (see Appendix Table V in MR) provides 1 OOa upper-tail percentage points (fa,,,, ) of F distributions with various values of u and v, i.e.,
P(Fu,v > fa,,,, The lOOa lower-tail percentage points ( fl-a,,,v) can be found as follows: fi-a,,,,
1
=-
fa,,,,
(e.g.) Reading the F table P(F7,15 > f ) = 0.05 : f = f0,05,7,15 = 2.71 (2) P(F7,,, > f)=0.95 : f = fO,,,,,,,, = 1 / f0,,,15,7 =l/3.51 = 0.28
Inference Context
0: Parameter of interest: 0:
Point estimator of
o2 + 0 2
s: s;
: -
(Note) S:
=C(xli
=C(X,, -x,)' i n , - 1 ; "2
"1
In, - 1 and S:
-
X 1 N ( p l , o : ) and X , - N ( F ~ , ~ ; ) ; X l and X , are independent
Test statistic of
o2 + 0 2
s; l o 2 s; 10;
: F = -----L F(nl - 1, n, - 1)
10-16 Chapter Test Procedure (F-test)
Statistical Inference for Two Samples Step I : State Ho and H I . 2 0 1
2
--0 1 , o
Ho:
7
HI:
, 2
0:
t
0 2
g >-
2 01,o
0;
for two-sided test,
4 , o
02,o
2
or
5< 0, for one-sided test. 0;
2 02,o
Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for a. fa/2,n,-l,n2-l
and
)
1 fi-a/2,nl-l,n2-1
for two-sided test
fa/2,n2-~,n1-1
for upper-sided test
fa,n, - I , ~ ~ - I /
\
1
for lower-sided test
Step 4: Make a conclusion. Reject Ho if f~ > f a / 2 , n l - l , n 2 - 1 0' f~ < fi-a/2,n,-l,n2-1 for upper-sided test fo > fa,nI-l,n2-l fo < fi-a,n,-l,n,-l
Operating Characteristic (OC) Curve
for two-sided test
for lower-sided test
Table 10-4 displays a list of OC charts in MR and a formula of the OC parameter = nz = n. By using the table, the appropriate OC chart for a particular F-test is chosen.
h for an F-test on 012/022 where n l
Table 10-4 O~eratinp. " Characteristic Charts for F-test Chart VI Test a OC parameter ( ~ ~ p e n dA i xin MR)
10-17
10-5 Inference on the Variances of Two Normal Populations Confidence Interval Formula
o2 s 2 < - L < 2I f a/2,n2-l,n,-I
s; fl-a/ 2,n2 - ~ , n ,-I
0;
'2
:
s -
2 fi-a,n2-l,n,-l S2
for two-sided CI
S2
0 < - 1
for lower-confidence bound
0; 0:
s;
0 2
S2
-75
for upper-confidence bound
fa,n2-~,n,-I
(Derivation) Two-sided confidence interval* on 012/022 s; / 0 ; By using the test statistic F = ----- F(n2 - 1, n, - 1) ,
s: 1 0 ;
-
Therefore,
* For a one-sided CI, use Example 10.4
f a instead of f a , , to derive the corresponding limit.
For the light bulb life length data in Example 10.1, the following results have been obtained:
Brand of Light Bulb INFINITY (1) FOREVER ( 2 )
Sample Size nl
= 30
n2 = 25
Sample Mean Z, -
x,
Sample Variance
= 780 hrs
S:
= 800
s,2 = 302
hrs
1. (Hypothesis Test on o12/022; Two-Sided Test) Test Ho: 0 012/022 # 1 at a = 0.05. h Step 1: State Ho and H I . Ho: o : / ( T = ; l HI: of 10; $ 1 Step 2: Determine a test statistic and its value.
1 ~ 1 0= 2 ~1
= 402
VS.H I :
10-18 Chapter 10 Example 10.4 (cont.)
Statistical Inference for Two Samples Step 3: Determine a critical value(s) for a. fa1 2,nl -l,n2 -1
= f0.05
12,30-1,25-1
fl-a12,nl-l,n2-1 = f0.975,29,24
= f0.025,29,24 = 2'22 and
= f0.025,24,29
2. I
Step 4: Make a conclusion. Since, f0 = 1.78 2 f0025,29,2, = 2.22 and fo = 1.78 g f0,975,29,24 = 0.46, fail to reject HOat a = 0.05.
2. (Sample Size Determination) Determine the sample size n (= nl = n2) required for this two-sided F-test to detect the ratio of '31 to '32 as high as 1.5 with 0.8 of power. Apply an appropriate OC curve. ~hrTOdesign a two-sided F-test at a = 0.05, OC Chart Vlo is applicable with the parameter
~=0'=1.'j '32
By using h = 1.5 and P = 0.2 (because power = 1 - P = 0.8), the sample size required is determined as n (= nl = n2) = 50 as displayed below.
1.50 (0)
A
O.C. curves for diffnent values of n for the two-sided F-test for a level of significance a = 0.05,
3. (Confidence Interval on oI2/oz2; Two-Sided CI) Construct a 95% two-sided ~. on this 95% CI on '31~/'322, test Ho: confidence interval on 0 1 ~ 1 ' 3 2Based ~ ~ 1 ~ 1 '= 32 12VS.HI:'312/'322# 1 at a = 0.05. ~hr1 - a = 0.95 a = 0.05 95% two-sided CI on '31'102~:
=
10-19
10-6 Inference on Two Population Proportions
Example 10.4 (cont.)
Since this 95% two-sided CI on '312/'322includes the hypothesized value unity, fail to reject Hoat a = 0.05.
Exercise 10.4 (MR 10-49)
In semiconductor manufacturing, wet chemical etching is often used to remove silicon from the backs of wafers prior to metalization. The etching rate is an important characteristic in this process. Two different etching solutions have been compared, using two random samples of 10 wafers for each solution. The observed etching rates (unit: milslmin) are as follows:
9.4 9.3 9.6
10.3 10.0 10.3
10.6 10.7 10.4
10.2 10.7 10.4
that the etching rates are normally distributed.
1. Test Ho: o12/'322= 1 VS. HI: o12/'322# 1 at a = 0.05. 2. Determine the sample size n (= nl = n2) required for this two-sided F-test to detect the ratio of '3, to o2as high as 2 with 0.9 of power. Apply an appropriate OC curve. 3. Construct a 95% two-sided confidence interval on '312/'322.Based on this 95% = 1 VS.HI: 0121'322f 1 at a = 0.05. two-sided CI on 012/'322,test Ho:
10-6
Inference on Two Population Proportions
0 Determine the sample size of a z-test for statistical inference onpl -p2 by using an appropriate
Inference
Parameter of interest:
pl -p2
10-20 Chapter 10 Inference Context (cont.)
Statistical Inference for Two Samples
-
-
(Note) X1 B(n1, P I ) , X, B(n,, p,) ; n l p l , n,(l- p , ) , n2p,, and n2(1 - p,) are greater than 5; X, and X, are independent;
-
and P2 = n2
n1
Test statistic ofpl -p2: The test statistic ofpl -p2depends on the equality ofpl and p2 as follows: (1) Case 1: Unequal proportions (pl #p2)
(2) Case 2: Equal proportions @I
Test Procedure (z-test)
= p2= p )
Step 1: State Ho and H I . Ho: PI -p2'60 HI: PI -p2 f 60 for two-sided test, pl - p2 > 60 or p1 - p 2 < 6o for one-sided test. Step 2: Determine a test statistic and its value. (1) Case 1: Unequal proportions (pl # p2) 8 -P2 - 6 , 4-6-6,
zO=~ A
A
( 2 ) Case 2: Equal proportions (pl = pz = p )
Step 3: Determine a critical value(s) for a. z,,, for two-sided test; z , for one-sided test Step 4: Make a conclusion. Reject Hoif Izol > zai2 for two-sided test z, > z,
for upper-sided test
< -2,
for lower-sided test
2,
10-6 Inference on Two Population Proportions
Sample Size Formula
Confidence Interval Formula
10-21
For a hypothesis test o n p l - p2, the following formulas are applied to determine the sample size:
Like the test statistic o n p , -p2, a 100(1 - a)%C I onpl equality o f p l and p2as follows: 1. Case 1: Unequal proportions (pl #p2)
a
PI
A
-/
A
-p2
on the
d v
,:- p2 A
-p2 depends
i p, - p2 for lower-confidence bound
-za/2 A
PI-PZ SP,
A
-P2
for upper-confidence bound
+ z a ~
n2
2. Case 2: Equal proportions (pl =p2 = p )
A
A
C pl - p i C P, - P2 +
for two-sided CI
I pl - p2 for lower-confidence bound for upper-confidence bound
Example 10.5
Random samples of bridges are tested for metal corrosion in the HAPPYand
10-22 Chapter 10 Example 10.5 (cont.)
Statistical Inference for Two Samples 1. (Hypothesis Test o n p l -pz; Unequal Proportions; Upper-Sided Test) Assuming pl #p2, test if the proportion of corroded bridges of HAPPYCounty exceeds that of GREATCounty by at least 0.1. Use a = 0.05. h
Since n , j l = 4 0 x 0 . 7 = 2 8 , n l ( l - j l ) = 4 0 x 0 . 3 = 1 2 , n 2 j 2 =30x0.5=15, and n2(1 - j2) = 30 x 0.5 = 15 are greater than five, the sampling distributions of
4 and p2 are approximately normal.
Step 1: State Ho and HI. Ho: pl -p2= 0.1 Hi: pl -p2 > 0.1 Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value@) for a. za = zo,05= 1.64 Step 4: Make a conclusion. Since )zo = 0.86 2 z,,,, = 1.64 , fail to reject Ho at a = 0.05.
1
2. (Sample Size Determination) Suppose that p, = 0.7 and p, = 0.5. Determine the sample size n (= nl = n2) required for this two-sided z-test to detect the difference of the two proportions with power of 0.9. k power = P(reject Ho I Ho is false) = 1 - p = 0.9 p = 0.1
=
3. (Confidence Interval onpl -pz;Unequal Proportions; Upper-Confidence Bound) Assumingpl +p2, construct a 95% upper-confidence bound on the difference of the two corroded bridge proportions (pl - p2). 95% two-sided CI onpl - p2: PI-P* Sjl-i)2
+Za
10-6 Inference on Two Population Proportions
10-23
Example 10.5 (cont.) Exercise 10.5 (MR 10-65)
A random sample of nl = 500 adult residents in MARICOPA County found that xl = 385 were in favor of increasing the highway speed limit to 70 mph, while another random sample of n2 = 400 adult residents in PIMACounty found that x2 = 267 were in favor of the increased speed limit. 1. Does the survey data indicate that there is a difference between the residents of the two counties in support of increasing the speed limit? Assume pl - p2 = 0 and use a = 0.05. 2. Suppose that p1 = 0.75 and p2= 0.65. Determine the sample size n (= nl = n2) required for this two-sided z-test to detect the difference of the two proportions with power of 0.8. 3. Assuming pl = p2, construct a 95% two-sided confidence interval on the difference between the favor proportions of the two counties for the speed limit increase (pl - p2). Based on this 95% two-sided CI onpl - p2, test Ho:pl , p2=Ovs. H l : p l - p 2 + O a t a = 0 . 0 5 . I
10-24 Chapter 10
Statistical Inference for Two Samples
MINITAB Applications Example 10.2
I
(Inference on 1 1 - pz, o12and oz2Unknown) (1) Choose File New, click Minitab Project, and click OK.
*
(2) Enter the life length data of the INFINITYand FOREVERlight bulbs on the worksheet cl-1 &
g
INFINITY /FOREVER,
*
a 3;
*
(3) Choose Stat Basic Statistics 2-Sample t. Click Samples in different columns and select INFINITYand FOREVERin First and Second, respectively. Select not equal in Alternative and type the level of = confidence in Confidence level. Check Assume equal variances if o12
Example 10.2 (cont.)

Two Sample T-Test and Confidence Interval

Two sample T for INFINITY vs FOREVER
           N   Mean  StDev  SE Mean
INFINITY  30  780.0   40.0      7.3
FOREVER   25  800.0   30.0      6.0

95% CI for mu INFINITY - mu FOREVER and T-Test mu INFINITY = mu FOREVER (vs not =): numerical values illegible in the scanned output; DF = 52

Two Sample T-Test and Confidence Interval

Two sample T for INFINITY vs FOREVER
           N   Mean  StDev  SE Mean
INFINITY  30  780.0   40.0      7.3
FOREVER   25  800.0   30.0      6.0

95% CI for mu INFINITY - mu FOREVER and T-Test mu INFINITY = mu FOREVER (vs not =): numerical values illegible in the scanned output; Both use Pooled StDev = 35.8
(5) Choose Stat > Power and Sample Size > 2-Sample t. Click Calculate sample size for each power value. Enter the power of the test predefined in Power values, the difference between the true mean and hypothesized mean to be detected in Difference, and the pooled estimate of standard deviation in Sigma.
(6) Under Alternative Hypothesis select the type of alternative hypothesis, and in Significance level enter the probability of type I error (α). Then click OK twice and interpret the analysis results for the defined test condition.

Power and Sample Size

2-Sample t Test
Testing mean 1 = mean 2 (versus not =)
Calculating power for mean 1 = mean 2 + 20
Alpha = 0.05  Sigma = 35.8
Example 10.3
(Inference on μD, σD² Unknown)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the paired data on the worksheet in columns BEFORE and AFTER.
(3) Choose Stat > Basic Statistics > Paired t. Select BEFORE and AFTER in First sample and Second sample, respectively.
(4) Enter the level of confidence in Confidence level, the hypothesized difference in mean in Test mean, and not equal in Alternative. Then click OK twice.
(5) Interpret the analysis results.

Paired T-Test and Confidence Interval

Paired T for Before - After
              N     Mean   StDev   SE Mean
Before       30   181.00   22.76      4.15
After        30   171.00   23.63      4.31
Difference   30   10.000   (StDev and SE Mean illegible in the scan)

Example 10.5
(Inference on p1 − p2)
(1) Choose Stat > Basic Statistics > 2 Proportions. Click Summarized data and enter the number of bridges surveyed in Trials and the number of corroded bridges observed.
Example 10.5 (cont.)
(2) Enter the level of confidence in Confidence level, the hypothesized difference in proportion in Test difference, and greater than in Alternative. Check Use pooled estimate of p for test if the proportions are assumed to be equal. Then click OK twice.

Test and Confidence Interval for Two Proportions

Test for p(1) - p(2) = 0.1 (vs > 0.1): Z = 0.86  P-Value = 0.195

(4) Choose Stat > Power and Sample Size > 2 Proportions. Click Calculate sample size for each power value. Enter the power of the test predefined in Power values and the true proportions of the two populations in Proportion 1 and Proportion 2.
(5) Under Alternative Hypothesis select the type of alternative hypothesis, and in Significance level enter the probability of type I error (α). Then click OK twice.

Power and Sample Size

Test for Two Proportions
Testing proportion 1 = proportion 2 (versus >)
Calculating power for proportion 1 = 0.7 and proportion 2 = 0.5
Alpha = 0.05  Difference = 0.2
Answers to Exercises

Exercise 10.1
1. (Hypothesis Test on μ1 − μ2, σ1² and σ2² Known; Upper-Sided Test)
Step 1: State H0 and H1.   H0: μ1 − μ2 = 10   H1: μ1 − μ2 > 10
Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for α.   zα = z0.05 = 1.65
Step 4: Make a conclusion.  Since z0 = −5.84 < z0.05 = 1.65, fail to reject H0 at α = 0.05. We do not have significant evidence to support use of plastic 1 at α = 0.05.
2. (Sample Size Determination)
(1) Sample Size Formula
   n = (zα + zβ)²(σ1² + σ2²)/(Δ − Δ0)² = (z0.05 + z0.1)²(1² + 1²)/(11.5 − 10)² = 7.6, so n = 8
(2) OC Curve
To design a one-sided z-test at α = 0.05 for two samples, OC Chart VIc is applicable with the parameter
   d = |Δ − Δ0|/√(σ1² + σ2²) = |11.5 − 10|/√(1² + 1²) = 1.06
By using d = 1.06 and β = 0.1, the required sample size is determined as n (= n1 = n2) = 8 as displayed below.
(c) O.C. curves for different values of n for the one-sided normal test for a level of significance α = 0.05.
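A minimal sketch (not in the original workbook) of the closed-form sample-size calculation in Exercise 10.1, part 2; only the quantities already stated in the answer (σ1 = σ2 = 1, Δ = 11.5, Δ0 = 10, α = 0.05, β = 0.1) are used.

```python
# Sample size for the one-sided two-sample z-test, and the OC-chart abscissa d.
from math import sqrt, ceil
from scipy.stats import norm

sigma1 = sigma2 = 1.0
delta, delta0 = 11.5, 10.0
alpha, beta = 0.05, 0.10

z_alpha, z_beta = norm.ppf(1 - alpha), norm.ppf(1 - beta)
n = (z_alpha + z_beta)**2 * (sigma1**2 + sigma2**2) / (delta - delta0)**2
print(ceil(n))                                   # 8 per sample

d = abs(delta - delta0) / sqrt(sigma1**2 + sigma2**2)
print(round(d, 2))                               # 1.06
```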
Exercise 10.1 (cont.)
3. (Confidence Interval; Upper-Confidence Bound)
   1 − α = 0.95 ⇒ α = 0.05
   95% upper-confidence bound on μ1 − μ2:
Exercise 10.2
1. (Hypothesis Test on μ1 − μ2, σ1² and σ2² Unknown but Equal; Lower-Sided Test)
Step 1: State H0 and H1.   H0: μ1 − μ2 = 0   H1: μ1 − μ2 < 0
Step 2: Determine a test statistic and its value.
   (Note) sp² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2)
Step 3: Determine a critical value(s) for α.   −tα,ν = −t0.05,24 = −1.711
Step 4: Make a conclusion.  Since t0 = −4.07 < −t0.05,24 = −1.711, reject H0 at α = 0.05.
2. (Sample Size Determination) To design a one-sided t-test at α = 0.05 for two samples, OC Chart VIg is applicable with the parameter d.
Exercise 10.2 (cont.)
When d = 0.66 and β = 0.1, n* = 20 as displayed below.
(g) O.C. curves for different values of n for the one-sided t-test for a level of significance α = 0.05.
Thus, the required sample size n is n = (n* + 1)/2 = (20 + 1)/2 = 10.5, i.e., n = 11.
3. (Confidence Interval on μ1 − μ2, σ1² and σ2² Unknown but Equal; Upper-Confidence Bound)
   95% upper-confidence bound on μ1 − μ2:
4. (Confidence Interval on μ1 − μ2, σ1² and σ2² Unknown and Unequal; Upper-Confidence Bound)
   1 − α = 0.95 ⇒ α = 0.05
   95% upper-confidence bound on μ1 − μ2:
Exercise 10.3
1. (Hypothesis Test on μD, σD² Unknown; Two-Sided Test)
Step 1: State H0 and H1.   H0: μD = 0   H1: μD ≠ 0
Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for α.   tα/2,n−1 = t0.05/2,12−1 = t0.025,11 = 2.201
Step 4: Make a conclusion.  Since |t0| = 0.783 < t0.025,11 = 2.201, fail to reject H0 at α = 0.05. Thus, we conclude that the two design languages are not significantly different in mean coding time at α = 0.05.
2. (Confidence Interval on μD, σD² Unknown; Two-Sided CI)
   1 − α = 0.95 ⇒ α = 0.05;  ν = n − 1 = 12 − 1 = 11
   95% two-sided CI on μD:
   Since this 95% two-sided CI on μD includes the hypothesized value zero, fail to reject H0 at α = 0.05.
Exercise 10.4
1. (Hypothesis Test on σ1²/σ2²; Two-Sided Test)
Step 1: State H0 and H1.   H0: σ1² = σ2² ⇒ σ1²/σ2² = 1   H1: σ1² ≠ σ2² ⇒ σ1²/σ2² ≠ 1
Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for α.
   fα/2,n1−1,n2−1 = f0.05/2,10−1,10−1 = f0.025,9,9 = 4.03 and f1−α/2,9,9 = f0.975,9,9 = 1/4.03 = 0.25
Step 4: Make a conclusion.  Since f0 = 3.33 < f0.025,9,9 = 4.03 and f0 = 3.33 > f0.975,9,9 = 0.25, fail to reject H0 at α = 0.05.
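A minimal sketch (not from the workbook) of the two-sided F-test and the CI on σ1²/σ2² in Exercise 10.4. The sample variances below are hypothetical values, chosen only so that their ratio matches the workbook's f0 = 3.33; the critical values come from scipy.

```python
from scipy.stats import f

n1, n2, alpha = 10, 10, 0.05
s1_sq, s2_sq = 10.0, 3.0                       # hypothetical sample variances

f0 = s1_sq / s2_sq                             # about 3.33
upper = f.ppf(1 - alpha/2, n1 - 1, n2 - 1)     # f_{0.025,9,9} = 4.03
lower = f.ppf(alpha/2, n1 - 1, n2 - 1)         # f_{0.975,9,9} = 0.25
print("reject H0" if (f0 > upper or f0 < lower) else "fail to reject H0")

# 95% two-sided CI on sigma1^2/sigma2^2 (note the swapped degrees of freedom)
lo = f0 * f.ppf(alpha/2, n2 - 1, n1 - 1)
hi = f0 * f.ppf(1 - alpha/2, n2 - 1, n1 - 1)
print(round(lo, 2), round(hi, 2))              # interval includes 1: fail to reject H0
```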
2. (Sample Size Determination) To design a two-sided F-test at α = 0.05, OC Chart VIo is applicable with the parameter λ = σ1/σ2.
When λ = 2.0 and β = 0.1, the required sample size is determined as n (= n1 = n2) = 25 as displayed below.
(o) O.C. curves for different values of n for the two-sided F-test for a level of significance α = 0.05.
3. (Confidence Interval on σ1²/σ2²; Two-Sided CI)
   1 − α = 0.95 ⇒ α = 0.05
   95% two-sided CI on σ1²/σ2²:
   Since this 95% two-sided CI on σ1²/σ2² includes the hypothesized value unity, fail to reject H0 at α = 0.05.
Exercise 10.5
1. (Hypothesis Test on p1 − p2; Equal Proportions; Two-Sided Test) The survey data are summarized as follows:

County     Sample Size   x     Sample Proportion
MARICOPA   n1 = 500      385   p̂1 = 0.77
PIMA       n2 = 400      267   p̂2 = 0.67

Since n1p̂1 = 500 × 0.77 = 385, n1(1 − p̂1) = 500 × 0.23 = 115, n2p̂2 = 400 × 0.67 = 267, and n2(1 − p̂2) = 400 × 0.33 = 133 are greater than five, the sampling distributions of p̂1 and p̂2 are approximately normal.
Step 1: State H0 and H1.   H0: p1 − p2 = 0   H1: p1 − p2 ≠ 0
Step 2: Determine a test statistic and its value.
   (Note) p̂ = (x1 + x2)/(n1 + n2) = (385 + 267)/(500 + 400) = 0.72
Step 3: Determine a critical value(s) for α.   zα/2 = z0.05/2 = z0.025 = 1.96
Step 4: Make a conclusion.  Since |z0| = 3.32 > z0.025 = 1.96, reject H0 at α = 0.05.
2. (Sample Size Determination)
   power = P(reject H0 | H0 is false) = 1 − β = 0.8  ⇒  β = 0.2
3. (Confidence Interval on p1 − p2; Equal Proportions; Two-Sided CI)
   95% two-sided CI on p1 − p2:
   Since this 95% two-sided CI on p1 − p2 does not include the hypothesized value zero, reject H0 at α = 0.05.
Simple Linear Regression and Correlation
11-1 Empirical Models
11-2 Simple Linear Regression
11-3 Properties of the Least Squares Estimators
11-4 Some Comments on Uses of Regression
11-5 Hypothesis Tests in Simple Linear Regression
  11-5.1 Use of t-Tests
  11-5.2 Analysis of Variance Approach to Test Significance of Regression
11-6 Confidence Intervals
  11-6.1 Confidence Intervals on the Slope and Intercept
  11-6.2 Confidence Interval on the Mean Response
11-7 Prediction of New Observations
11-8 Adequacy of the Regression Model
  11-8.1 Residual Analysis
  11-8.2 Coefficient of Determination
  11-8.3 Lack-of-Fit Test
11-9 Transformations to a Straight Line
11-11 Correlation
MINITAB Applications
Answers to Exercises
11-1 Empirical Models

□ Explain the terms independent variable, dependent variable, and random error.
□ Describe the simple linear model.

Regression Analysis
Regression analysis is a statistical technique to model the relationship between two or more variables so that we can predict the response of a variable at a given condition of the other variable(s).
For example, in Figure 11-1, the scatter diagram of price versus sales for a plywood product indicates that the volume of sales linearly decreases as the price of the product increases. By modeling this linear relationship, we may want to know how many pieces of plywood would be sold if the price is set at $8.50. It may not be easy to answer this question because no straight line passes through all the points in the diagram; the data scatter randomly along a straight line.
Figure 11-1 Scatter diagram of plywood price versus sales (price per piece, x, on the horizontal axis).

Simple Linear Model
A simple linear model which explains the linear relationship between two variables (x and Y) is
   Y = β0 + β1x + ε
where:
(1) Regression coefficients (β0 and β1): The coefficients β0 and β1 denote the intercept and slope of the regression line, respectively. The slope β1 measures the expected change in Y for a unit change in x.
(2) Independent variable (x; regressor, predictor): Since the independent variable x is under control with negligible error in an experiment, x represents controlled constants (not random outcomes).
(3) Random error (ε): The variable ε represents the random variation of Y around the straight regression line β0 + β1x. It is assumed that ε is normally distributed with mean 0 and constant variance σ², i.e., ε ~ N(0, σ²).
(4) Dependent (response) variable (Y): Since outcomes in Y at the same value of x can vary randomly, Y is a random variable with the following distribution: Y ~ N(β0 + β1x, σ²).
In summary, four assumptions are made for the simple linear regression model:
1. Linear relationship between x and Y: The variables x and Y are linearly related.
2. Randomness of error: The errors (spreads of Y along the regression line) are random (not following any particular pattern).
3. Constant σ²: The variance of the random errors is constant as σ². In other words, the spreads of Y along the regression line are the same for different values of x (see Figure 11-2).
4. Normality of error: The random errors are normally distributed.
Figure 11-2 The variability of Y over x. The analyst should check if the assumptions of the linear model are met for proper regression analysis, which is called assessment of model adequacy (presented in Section 11-8).
11-2 Simple Linear Regression

□ Explain the method of least squares.
□ Estimate regression coefficients by using the least squares method.
□ Calculate mean responses and residuals by using a fitted regression line.
□ Estimate the error variance σ².

Least Squares Method
In the 1800s Carl Friedrich Gauss proposed the method of least squares to estimate the regression coefficients of a linear model. Suppose that we have n pairs of observations (xi, yi), i = 1, 2, ..., n, and that the n observations are modeled by a simple linear model
   yi = β0 + β1xi + εi,  i = 1, 2, ..., n
The least squares method finds the estimates of β0 and β1 which minimize the sum of the squares of the errors (denoted by SSE and called the error sum of squares; it is the sum of the squares of the vertical deviations of the data from the regression line in Figure 11-3):
   L = SSE = Σ εi² = Σ (yi − β0 − β1xi)²
Least Squares Method (cont.)

(Derivation) Least squares estimators β̂0 and β̂1
Recall that, in calculus, when we want to find the value of a variable which minimizes (or maximizes) a certain function, the following procedure is used: (1) take the derivative of the function with respect to the variable designated, (2) set the derivative equal to zero, and (3) solve the equation. Since the function SSE (or L) has two unknowns (β0 and β1), we take a partial derivative with respect to each regression coefficient and set it equal to zero, which yields the two normal equations
   n β̂0 + β̂1 Σ xi = Σ yi
   β̂0 Σ xi + β̂1 Σ xi² = Σ xi yi
By multiplying the first equation above by Σ xi and the last equation by n each,
   n β̂0 Σ xi + β̂1 (Σ xi)² = Σ xi Σ yi
   n β̂0 Σ xi + n β̂1 Σ xi² = n Σ xi yi
By subtracting the last equation above from the first equation above,
   β̂1 [(Σ xi)² − n Σ xi²] = Σ xi Σ yi − n Σ xi yi
Thus,
   β̂1 = [Σ xi yi − (Σ xi)(Σ yi)/n] / [Σ xi² − (Σ xi)²/n] = Sxy / Sxx
Lastly, from the first normal equation,
   β̂0 = (Σ yi)/n − β̂1 (Σ xi)/n = ȳ − β̂1 x̄

Fitted Regression Model
By using the least squares estimates of β0 and β1, the fitted (estimated) regression line is determined as follows:
   ŷ = β̂0 + β̂1 x
By applying the fitted regression line, the mean response at xi is estimated. Note that an actual response yi can be different from the corresponding predicted response ŷi. The difference between the actual and predicted responses is called the residual (denoted as ei), which is an estimate of the random error εi:
   ei = yi − ŷi,  i = 1, 2, ..., n
Estimation of Error Variance
Since the error variance σ² is unknown in most cases, estimation of σ² is needed. The point estimator of σ² is
   σ̂² = SSE / (n − 2),  where SSE = Σ yi² − n ȳ² − β̂1 Sxy

(Derivation) SSE = Σ yi² − n ȳ² − β̂1 Sxy
   SSE = Σ ei² = Σ (yi − ŷi)²
       = Σ [yi − (β̂0 + β̂1 xi)]²                  because ŷi = β̂0 + β̂1 xi
       = Σ [yi − (ȳ − β̂1 x̄ + β̂1 xi)]²            because β̂0 = ȳ − β̂1 x̄
       = Σ [(yi − ȳ) − β̂1 (xi − x̄)]²
       = SST − 2 β̂1 Sxy + β̂1² Sxx                because Sxy = Σ (xi − x̄)(yi − ȳ) and Sxx = Σ (xi − x̄)²
       = SST − 2 β̂1 Sxy + β̂1 Sxy                 because β̂1 = Sxy / Sxx
       = SST − β̂1 Sxy
       = Σ yi² − n ȳ² − β̂1 Sxy                   because SST = Σ (yi − ȳ)² = Σ yi² − n ȳ²
(Note) SST = Syy = Σ (yi − ȳ)² is called the total sum of squares.
Example 11.1
The sales volumes (Y) of a plywood product at various prices (x) are surveyed (sorted by price). The summary quantities of the plywood price–sales data are
   n = 30, Σ yi = 1,930, Σ yi² = 138,656, Σ xi = 225, Σ xi² = 1,775, and Σ xi yi = 13,360
Assume that x and Y are linearly related.
1. (Estimation of Regression Coefficients) Calculate the least squares estimates of the intercept and slope of the linear model for x and Y.
   β̂1 = Sxy/Sxx = −1,115 / 87.5 = −12.7  and  β̂0 = ȳ − β̂1 x̄ = 159.9
   The fitted regression line of the plywood price–sales data is ŷ = β̂0 + β̂1x = 159.9 − 12.7x
2. (Calculation of Residual) Calculate the residual of y = 84 at x = $6.
   The predicted sales at x = $6 is ŷ = β̂0 + β̂1x = 159.9 − 12.7 × 6 = 83.4
   Thus, e(x=6) = y(x=6) − ŷ(x=6) = 84 − 83.4 = 0.6 (underestimate)
Example 11.1 (cont.)
3. (Estimation of σ²) Estimate the error variance σ².
   SSE = Σ yi² − n ȳ² − β̂1 Sxy = 138,656 − 30 × (1,930/30)² − (−12.7) × (−1,115) = 284.4
   σ̂² = SSE / (n − 2) = 284.4 / 28 = 10.2 ≈ 3.2²
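A minimal sketch (not part of the workbook) that reproduces Example 11.1 from the summary quantities alone; only the quantities printed in the example are used.

```python
n = 30
sum_y, sum_y2 = 1930.0, 138656.0
sum_x, sum_x2 = 225.0, 1775.0
sum_xy = 13360.0

x_bar, y_bar = sum_x / n, sum_y / n
Sxx = sum_x2 - sum_x**2 / n            # 87.5
Sxy = sum_xy - sum_x * sum_y / n       # -1115.0

b1 = Sxy / Sxx                         # slope, about -12.7
b0 = y_bar - b1 * x_bar                # intercept, about 159.9

resid = 84 - (b0 + b1 * 6)             # residual of y = 84 at x = 6, about 0.6

SSE = sum_y2 - n * y_bar**2 - b1 * Sxy
sigma2_hat = SSE / (n - 2)             # error variance estimate, about 10.2

print(round(b1, 1), round(b0, 1), round(resid, 1), round(sigma2_hat, 1))
```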
Exercise 11.1 (MR 11-9)
An article in the Journal of Sound and Vibration (Vol. 151, 1991, pp. 383-394) described a study investigating the relationship between noise exposure (x) and hypertension (Y). The data (sorted by sound pressure level) are representative of those reported in the article. The summary quantities of the noise–hypertension data are:
Assume that x and Y are linearly related.
1. Calculate the least squares estimates of the intercept and slope of the linear model for x and Y.
2. Calculate the residual of y = 5 mmHg at x = 85 dB.
3. Estimate the error variance σ².
11-3 Properties of the Least Squares Estimators

□ Identify the sampling distributions of the least squares slope and intercept estimators.
□ Check if the least squares estimators β̂0 and β̂1 are unbiased estimators.
Distribution of Slope Estimator (β̂1)
The sampling distribution of the least squares slope estimator β̂1 is
   β̂1 ~ N(β1, σ²/Sxx)
(Note) The estimated standard error of β̂1 is se(β̂1) = √(σ̂²/Sxx).

(Derivation) Sampling distribution of β̂1
Recall that the least squares estimator of β1 is
   β̂1 = Sxy/Sxx = Σ (xi − x̄) Yi / Sxx
Also recall that Y = β0 + β1x + ε ~ N(β0 + β1x, σ²), where ε ~ N(0, σ²), and x represents constants. Since β̂1 is a linear combination of the normal random variables Yi's, β̂1 is normally distributed with the following mean and variance:
   E(β̂1) = Σ (xi − x̄)(β0 + β1xi) / Sxx = β1 Σ xi(xi − x̄) / Sxx = β1   because Σ (xi − x̄) = 0 and Σ xi(xi − x̄) = Sxx
   V(β̂1) = Σ (xi − x̄)² V(Yi) / Sxx² = σ² Sxx / Sxx² = σ²/Sxx
(Note) Since E(β̂1) = β1, β̂1 is an unbiased estimator of β1.
Distribution of Intercept Estimator (β̂0)
The sampling distribution of the least squares intercept estimator β̂0 is
   β̂0 ~ N(β0, σ²[1/n + x̄²/Sxx])

(Derivation) Sampling distribution of β̂0
Recall that the least squares estimator of β0 is
   β̂0 = Ȳ − β̂1 x̄
Also recall that Y ~ N(β0 + β1x, σ²), β̂1 ~ N(β1, σ²/Sxx), and x denotes constants. Since β̂0 is a linear combination of the normal random variables Yi's and β̂1, β̂0 is normally distributed with the following mean and variance:
   E(β̂0) = E(Ȳ − β̂1 x̄) = E(Ȳ) − x̄ E(β̂1) = E(β0 + β1 x̄) − β1 x̄ = β0 + β1 x̄ − β1 x̄ = β0
   V(β̂0) = V(Ȳ − β̂1 x̄) = V(Ȳ) + x̄² V(β̂1) − 2 x̄ cov(Ȳ, β̂1) = V(Ȳ) + x̄² V(β̂1) = σ²/n + x̄² σ²/Sxx = σ²[1/n + x̄²/Sxx]
   (Note) cov(Ȳ, β̂1) = 0 because Ȳ and β̂1 are independent.
(Note) Since E(β̂0) = β0, β̂0 is an unbiased estimator of β0.
11-4 Some Comments on Uses of Regression

□ Explain common misuses of regression.

Common Misuses of Regression
Care should be taken on the following to avoid technical misuse of regression:
1. Practical validity of regressor: A statistical significance can be found on the relationship between the independent (x) and dependent (Y) variables even though they are not related in a practical sense. Therefore, the practical significance of the relationship among the variables should be checked separately.
2. Empirical/theoretical validity of regressor: The direction and/or magnitude of a regression coefficient in the model can contradict common understanding of the relationship between the variables. Therefore, it should be checked if the sign and magnitude of each regression coefficient are reasonable as compared with previous findings.
3. Association of regressor with response: A significant relationship between the regressor and the response in the model does not necessarily indicate a cause-effect relationship between the variables. Designed experiments are the only way to determine cause-effect relationships between variables.
4. Limited generalizability of the model: The regression relationship established between the variables may be valid only over the range(s) of the regressor(s) examined in the study. To apply the regression model beyond the original range(s) of the regressor(s), a new study should be designed for the extrapolation purpose.
11-5 Hypothesis Tests in Simple Linear Regression

11-5.1 Use of t-Tests

□ Conduct hypothesis tests on β1 and β0 each.

Inference Context on Slope (β1)
Parameter of interest: β1
Point estimator of β1:
   β̂1 = Sxy/Sxx ~ N(β1, σ²/Sxx)
Test statistic of β1:
   T0 = (β̂1 − β1,0) / √(σ̂²/Sxx) ~ t(n − 2)   because σ² is unknown.
Hypothesis Test on β1 (t-test)
Step 1: State H0 and H1.   H0: β1 = β1,0   H1: β1 ≠ β1,0
Step 2: Determine a test statistic and its value.
   t0 = (β̂1 − β1,0) / √(σ̂²/Sxx)
Step 3: Determine a critical value(s) for α.   tα/2,n−2
Step 4: Make a conclusion.  Reject H0 if |t0| > tα/2,n−2
The Significance of Regression
It is typical to test if the regressor x is significant in explaining the variability in Y by using H0: β1 = 0, which is called testing the significance of regression. Note that failure to reject H0: β1 = 0 may imply that (1) the regressor x is of little value in explaining the variability in Y, or (2) the true relationship between x and Y is not linear. On the other hand, rejection of H0: β1 = 0 may imply that (1) the regressor x is of value in explaining the variability in Y, (2) the simple linear model is adequate, or (3) a better model could be obtained by adding a higher-order term(s) of x while a linear effect of x on Y remains significant.
Inference Context on Intercept (β0)
Parameter of interest: β0
Point estimator of β0:
   β̂0 = Ȳ − β̂1 x̄ ~ N(β0, σ²[1/n + x̄²/Sxx])
Test statistic of β0:
   T0 = (β̂0 − β0,0) / √(σ̂²[1/n + x̄²/Sxx]) ~ t(n − 2)   because σ² is unknown.

Hypothesis Test on β0 (t-test)
Step 1: State H0 and H1.   H0: β0 = β0,0   H1: β0 ≠ β0,0
Step 2: Determine a test statistic and its value.
   t0 = (β̂0 − β0,0) / √(σ̂²[1/n + x̄²/Sxx])
Step 3: Determine a critical value(s) for α.   tα/2,n−2
Step 4: Make a conclusion.  Reject H0 if |t0| > tα/2,n−2
Example 11.2
For the plywood price–sales data in Example 11.1, the following summary quantities have been calculated:
   n = 30, σ̂² = 3.2², Σ xi = 225, Sxx = 87.5, β̂1 = −12.7, and β̂0 = 159.9
1. (Hypothesis Test on β1) Test H0: β1 = 0 at α = 0.05.
Step 1: State H0 and H1.   H0: β1 = 0   H1: β1 ≠ 0
Step 2: Determine a test statistic and its value.   t0 = (β̂1 − 0)/√(σ̂²/Sxx) = −37.4
Step 3: Determine a critical value(s) for α.   tα/2,n−2 = t0.025,28 = 2.05
Step 4: Make a conclusion.  Since |t0| = 37.4 > tα/2,n−2 = 2.05, reject H0 at α = 0.05.
2. (Hypothesis Test on β0) Test H0: β0 = 0 at α = 0.05.
Step 1: State H0 and H1.   H0: β0 = 0   H1: β0 ≠ 0
Step 2: Determine a test statistic and its value.   t0 = (β̂0 − 0)/√(σ̂²[1/n + x̄²/Sxx]) = 61.0
Step 3: Determine a critical value(s) for α.   tα/2,n−2 = t0.025,28 = 2.05
Step 4: Make a conclusion.  Since |t0| = 61.0 > tα/2,n−2 = 2.05, reject H0 at α = 0.05.

Exercise 11.2 (MR 11-25)
For the noise–hypertension data in Exercise 11.1, the following summary quantities have been calculated:
   n = 20, σ̂² = 1.4², Σ xi = 1,646, Sxx = 3,010.2, β̂1 = 0.17, and β̂0 = −9.81
1. Test H0: β1 = 0 at α = 0.05.
2. Test H0: β0 = 0 at α = 0.05.
11-5.2 Analysis of Variance Approach to Test Significance of Regression

□ Explain the analysis of variance (ANOVA) technique.
□ Describe the segmentation of the total variability of a response variable (SST) into the regression sum of squares (SSR) and error sum of squares (SSE).
□ Establish an ANOVA table and conduct a hypothesis test on β1.
Analysis of Variance (ANOVA)
The analysis of variance (ANOVA) technique provides a statistical procedure to evaluate the contribution of a variable (Xi) to the variability in the response variable (Y). As illustrated in Figure 11-4, the ANOVA method divides the total variance of Y (SST) into the segments (SSRi's) explained by the variables Xi's and the segment left unexplained (SSE), implying that the larger the size of the segment SSRi, the higher the contribution of Xi to the total variability in Y.

Figure 11-4 Segmentation of the total variability in Y (SST).

ANOVA for Test on β1
The ANOVA method can be used to test the significance of regression (H0: β1 = 0). For a simple linear model, as shown in Table 11-1, the total variability in Y (SST or Syy, the total sum of squares; degrees of freedom = n − 1) is partitioned into two components:
(1) Variability explained by the regression model (SSR, regression sum of squares; degrees of freedom = 1)
(2) Variability due to random error (SSE, error sum of squares; degrees of freedom = n − 2)

Table 11-1 Partitioning of the Total Variability in Y (SST)
   SST (Total SS) = SSR (Regression SS) + SSE (Error SS)
   SST = Σ (yi − ȳ)²,  SSR = Σ (ŷi − ȳ)²,  SSE = Σ (yi − ŷi)²
   DF:  n − 1       =       1            +     n − 2
(Note) SS: Sum of Squares; DF: Degrees of Freedom

A graphical interpretation of the ANOVA quantities is provided in Figure 11-5. The deviation of an observation yi from the corresponding mean ȳ is divided into the yi-deviation explained by the regression line and the yi-deviation due to error. These deviations are squared and then added to those of the other observations to calculate SST, SSR, and SSE as follows:
ANOVA for Test on β1 (cont.)

Figure 11-5 Partitioning of the total variability of y (deviation of yi explained by regression versus deviation due to error).

The ratio of SSR/1 to SSE/(n − 2) follows an F distribution:
   F = (SSR/1) / (SSE/(n − 2)) = MSR/MSE ~ F(1, n − 2)
This statistic F is used to test H0: β1 = 0, because F becomes small as β1 = 0 is true. The quantities MSR (regression mean square) and MSE (error mean square) are SSR and SSE adjusted by their degrees of freedom, respectively. Note that MSE is an estimate of σ² (MSE = σ̂²).
ANOVA Table
The variation quantities from the ANOVA analysis are summarized in Table 11-2 to test the significance of regression.

Table 11-2 ANOVA Table for Testing the Significance of Regression
Source of Variation   Sum of Squares        Degrees of Freedom   Mean Square   F0
Regression            SSR = β̂1 Sxy          1                    MSR           MSR/MSE
Error                 SSE = SST − SSR       n − 2                MSE
Total                 SST = Σ yi² − n ȳ²    n − 1

Hypothesis Test on β1 (F-test)
'
MSE
I
11-16 Chapter 11
Simple Linear Regression and Correlation
1
Step 3: Determine a critical value(s) for a.
Hypothesis Test on
i
fa,l,n-2
P1
(F-test) (cont.)
Step 4: Make a conclusion. Reject HOif f~ > fa,~,n-2 In testing the significance of regression, the ANOVA test procedure is equivalent to the t-test procedure in Section 11-5.1, because the square of the t-test statistic becomes the F-test statistic when pl = 0. Therefore, use of either test procedure will lead to the same conclusion on 01.
Relationship between t- and F-tests
--f i l
S~Y
because Sv,,
--M S ~ because -
fils, = SS,
= MSR and
e2= MSE
MSE Note that the t-test is more flexible than the F-test in testing a hypothesis on PI, because the t-test can have either a one- or two-sided alternative hypothesis whereas the F-test can have only a two-sided alternative hypothesis. However, the F-test is used to evaluate the relative contributions of multiple regressors to the variability in Y (see Section 12-2).
Example 11.3
(ANOVA for Hypothesis Test on β1) For the plywood price–sales data in Example 11.1, the following summary quantities have been calculated:
   n = 30, Σ yi = 1,930, Σ yi² = 138,656, Sxy = −1,115, and β̂1 = −12.7
Establish an ANOVA table and conduct a hypothesis test on β1 at α = 0.05.
   SST = Σ yi² − n ȳ² = 138,656 − 30 × (1,930/30)² = 14,492.7
   SSR = β̂1 Sxy = −12.7 × (−1,115) = 14,208.3
   SSE = SST − SSR = 14,492.7 − 14,208.3 = 284.4

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   f0
Regression            14,208.3         1                    14,208.3      1,398.9
Error                 284.4            28                   10.2
Total                 14,492.7         29

(Note) t0² = (−37.4)² = 1,398.9 = f0
Example 11.3 (cont.)
Step 1: State H0 and H1.   H0: β1 = 0   H1: β1 ≠ 0
Step 2: Determine a test statistic and its value.
   f0 = (SSR/1) / (SSE/(n − 2)) = MSR/MSE = 14,208.3 / 10.2 = 1,398.9
Step 3: Determine a critical value(s) for α.   fα,1,n−2 = f0.05,1,30−2 = 4.20
Step 4: Make a conclusion.  Since f0 = 1,398.9 > f0.05,1,28 = 4.20, reject H0 at α = 0.05.
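A minimal sketch (not part of the workbook) that rebuilds the ANOVA table of Example 11.3 from the summary quantities and runs the F-test for significance of regression; scipy is used only for the F critical value and p-value.

```python
from scipy.stats import f

n = 30
sum_y, sum_y2 = 1930.0, 138656.0
Sxy, Sxx = -1115.0, 87.5

b1 = Sxy / Sxx
SST = sum_y2 - n * (sum_y / n) ** 2      # about 14,492.7
SSR = b1 * Sxy                           # about 14,208.3
SSE = SST - SSR
MSR, MSE = SSR / 1, SSE / (n - 2)

f0 = MSR / MSE                           # about 1,398.9
f_crit = f.ppf(0.95, 1, n - 2)           # f_{0.05,1,28} = 4.20
p_value = f.sf(f0, 1, n - 2)
r_squared = SSR / SST                    # coefficient of determination (Section 11-8.2)

print(round(f0, 1), round(f_crit, 2), round(p_value, 4), round(r_squared, 3))
```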
Exercise 11.3 (MR 11-25)
For the noise–hypertension data in Exercise 11.1, the following summary quantities have been calculated. Establish an ANOVA table and conduct a hypothesis test on β1 at α = 0.05.

11-6 Confidence Intervals

11-6.1 Confidence Intervals on the Slope and Intercept

□ Establish confidence intervals on β1 and β0 each.
Inference Context on Slope (β1)
Parameter of interest: β1
Point estimator of β1: β̂1 = Sxy/Sxx ~ N(β1, σ²/Sxx)
(Note) The estimated standard error of β̂1 is se(β̂1) = √(σ̂²/Sxx).

Confidence Interval on β1
A 100(1 − α)% confidence interval on β1 is
   β̂1 − tα/2,n−2 √(σ̂²/Sxx) ≤ β1 ≤ β̂1 + tα/2,n−2 √(σ̂²/Sxx)

Inference Context on Intercept (β0)
Parameter of interest: β0
Point estimator of β0: β̂0 = Ȳ − β̂1 x̄ ~ N(β0, σ²[1/n + x̄²/Sxx])
(Note) The estimated standard error of β̂0 is se(β̂0) = √(σ̂²[1/n + x̄²/Sxx]).

Confidence Interval on β0
A 100(1 − α)% confidence interval on β0 is
   β̂0 − tα/2,n−2 √(σ̂²[1/n + x̄²/Sxx]) ≤ β0 ≤ β̂0 + tα/2,n−2 √(σ̂²[1/n + x̄²/Sxx])
Example 11.4
For the plywood price–sales data in Example 11.1, the following summary quantities have been calculated:
   n = 30, Σ xi = 225, σ̂² = 3.2², Sxx = 87.5, β̂1 = −12.7, and β̂0 = 159.9
1. (Confidence Interval on β1) Establish a 95% two-sided confidence interval on the slope.
   1 − α = 0.95 ⇒ α = 0.05
   95% two-sided CI on β1:
   β̂1 − tα/2,n−2 √(σ̂²/Sxx) ≤ β1 ≤ β̂1 + tα/2,n−2 √(σ̂²/Sxx)
   ⇒ −12.7 − t0.05/2,30−2 √(3.2²/87.5) ≤ β1 ≤ −12.7 + t0.05/2,30−2 √(3.2²/87.5)
   ⇒ −12.7 − 2.05 × 0.34 ≤ β1 ≤ −12.7 + 2.05 × 0.34
   ⇒ −13.4 ≤ β1 ≤ −12.0
2. (Confidence Interval on β0) Establish a 95% two-sided confidence interval on the intercept.
   1 − α = 0.95 ⇒ α = 0.05
   95% two-sided CI on β0:

Exercise 11.4 (MR 11-38)
For the noise–hypertension data in Exercise 11.1, the following summary quantities have been calculated:
   n = 20, Σ xi = 1,646, σ̂² = 1.4², Sxx = 3,010.2, β̂1 = 0.17, and β̂0 = −9.81
1. Establish a 95% two-sided confidence interval on the slope.
2. Establish a 95% two-sided confidence interval on the intercept.
11-6.2 Confidence Interval on the Mean Response

□ Identify the sampling distribution of the mean response estimator at x = x0.
□ Establish a confidence interval on the mean response at x = x0.

Sampling Distribution of Mean Response Estimator
The mean response at x = x0 is
   μY|x0 = E(Y | x0) = E[(β0 + β1x + ε) | x0] = β0 + β1x0
The point estimator of μY|x0 is
   μ̂Y|x0 = β̂0 + β̂1x0
The sampling distribution of μ̂Y|x0 is
   μ̂Y|x0 ~ N(β0 + β1x0, σ²[1/n + (x0 − x̄)²/Sxx])

(Derivation) Sampling distribution of μ̂Y|x0
Recall that β̂0 ~ N(β0, σ²[1/n + x̄²/Sxx]) and β̂1 ~ N(β1, σ²/Sxx). Since μ̂Y|x0 is a linear combination of β̂0 and β̂1, μ̂Y|x0 has a normal distribution with the following mean and variance:
   E(μ̂Y|x0) = E(β̂0 + β̂1x0) = E(β̂0) + x0 E(β̂1) = β0 + x0 β1 = μY|x0
   V(μ̂Y|x0) = σ²[1/n + (x0 − x̄)²/Sxx]
(Note) Since E(μ̂Y|x0) = μY|x0, μ̂Y|x0 is an unbiased estimator of μY|x0.

Inference Context on Mean Response at x = x0 (μY|x0)
Parameter of interest: μY|x0
Point estimator of μY|x0:
   μ̂Y|x0 = β̂0 + β̂1x0 ~ N(μY|x0, σ²[1/n + (x0 − x̄)²/Sxx]),   and σ² is unknown.
11-8 Adequacy of the Regression Model

Example 11.7 (cont.)
The normal probability plot displays that the residuals lie closely along a straight line. In addition, the standardized residual plot shows that more than 95% of the standardized residuals (29/30 = 96.7%) fall in the interval (−2, 2). Therefore, it is concluded that the residuals are normally distributed. (Notice that there is no indication of an outlier in the plywood price–sales data because all the standardized residuals are within the interval (−3, 3).) Next, the plots ei vs. ŷi and ei vs. xi indicate that the residuals are random and have a constant variance. The variability of the residuals is slightly increased in the mid ranges of ŷ and x; however, this deviation from the constant variance is not serious enough to reject the adequacy of the regression model.
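A minimal sketch (not from the workbook) of the four residual plots discussed above. Because the raw observations are not reproduced here, the data are simulated from the fitted plywood model ŷ = 159.9 − 12.7x; the plot layout mirrors the workbook's panels.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import probplot

rng = np.random.default_rng(1)
x = np.repeat(np.arange(5.0, 10.0), 6)          # 30 hypothetical price points
y = 159.9 - 12.7 * x + rng.normal(0, 3.2, x.size)

b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x
e = y - y_hat
std_e = e / e.std(ddof=2)                       # standardized residuals

fig, ax = plt.subplots(2, 2, figsize=(9, 7))
probplot(e, plot=ax[0, 0])                      # (a) normal probability plot
ax[0, 1].plot(std_e, "o"); ax[0, 1].axhline(2, ls="--"); ax[0, 1].axhline(-2, ls="--")
ax[0, 1].set_title("(b) standardized residuals")
ax[1, 0].plot(y_hat, e, "o"); ax[1, 0].set_title("(c) e vs. fitted values")
ax[1, 1].plot(x, e, "o"); ax[1, 1].set_title("(d) e vs. x")
plt.tight_layout(); plt.show()
```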
Exercise 11.7 (MR 11-47)
In the noise–hypertension data in Exercise 11.1, the following residual plots are obtained for the fitted regression model ŷ = −9.81 + 0.17x:
(a) normal probability plot of the residuals; (b) standardized residual plot; (c) ei vs. ŷi; (d) ei vs. xi (sound pressure level, dB).
Discuss the adequacy of the regression model by using the residual plots.

11-8.2 Coefficient of Determination (R²)

□ Explain the term coefficient of determination.
□ Calculate the coefficient of determination of a regression model.
Coefficient of Determination (R²)
The coefficient of determination (denoted as R²) indicates the amount of variability of the data explained by the regression model. In other words, R² is the proportion of the total variability in Y which is accounted for by the regression model:
   R² = SSR/SST = 1 − SSE/SST,   0 ≤ R² ≤ 1

(Caution) Use of R²
A large value of R² does not necessarily indicate the adequacy of the regression model. By adding new regressors or higher-order terms of the regressors to an existing model, R² will always increase even if the new model is less desirable than the existing model in terms of simplicity, stability, and statistical significance. For instance, n data points of x and y can be fit 'perfectly' by a polynomial of degree n − 1 (e.g., four data points can be fit perfectly by a polynomial of degree three). Although the R² value of the perfect-fit model is one, the high-order polynomial is complex to use and often unstable when a different sample of data is used. Lastly, unless the error sum of squares (SSE) of the new model is reduced by an amount at least equal to the error mean square of the existing model, the new model will have a larger error mean square (MSE), resulting in a decreased level of statistical significance.
Example 11.8
(Coefficient of Determination) For the plywood price–sales data in Example 11.1, the following summary quantities have been calculated:
   SST = 14,492.7, SSR = 14,208.3, and SSE = 284.4
Calculate the coefficient of determination (R²) of the fitted regression model ŷ = 159.9 − 12.7x.
   R² = SSR/SST = 14,208.3 / 14,492.7 = 98.0%
The fitted regression model explains 98.0% of the variability in the sales of the plywood product.

Exercise 11.8 (MR 11-47)
For the noise–hypertension data in Exercise 11.1, the following summary quantities have been calculated:
   SST = 124.2, SSR = 88.5, and SSE = 35.7
Calculate the coefficient of determination (R²) of the fitted regression model ŷ = −9.81 + 0.17x.
11-8.3 Lack-of-Fit Test

□ Describe the purpose of the lack-of-fit test on a regression model.
□ Conduct a lack-of-fit test on a fitted regression model.

Lack-of-Fit Test
The lack-of-fit (goodness-of-fit) test on a regression model identifies if the order of the model is appropriate. For a lack-of-fit test, repeated observations of Y are needed for at least one level of x.
ANOVA for Lack-of-Fit Test
The ANOVA method underlies the lack-of-fit test on a regression model. Suppose that observations of Y are taken at m levels of x, with ni repeated observations at level xi. By applying the ANOVA technique, as shown in Table 11-3, the error sum of squares (SSE; degrees of freedom = n − 2; see Section 11-5.2) is further partitioned into two components:
(1) Variability due to pure error (SSPE, pure-error sum of squares; degrees of freedom = n − m)
(2) Variability due to the lack of fit of the model (SSLOF, lack-of-fit sum of squares; degrees of freedom = m − 2)

Table 11-3 Partitioning of the Error Sum of Squares (SSE)
   SSE (Error SS) = SSPE (Pure Error SS) + SSLOF (Lack-of-Fit SS)
   DF:  n − 2     =       n − m          +       m − 2
(Note) n = Σ ni; SS: Sum of Squares; DF: Degrees of Freedom

A graphical interpretation of SSE, SSPE, and SSLOF is provided in Figure 11-7. The deviation of an observation yij from the corresponding estimate ŷi at xi is divided into the deviation of yij from the corresponding mean ȳi due to pure error and the deviation of ȳi from the corresponding estimate ŷi due to the lack of fit of the model.

Figure 11-7 Partitioning of the variability of y due to error.
ANOVA for Lack-of-Fit Test (cont.)
These three deviations are squared and then added to those of the other observations to calculate SSE, SSPE, and SSLOF. In Figure 11-7, the error due to the lack of fit of the model could have been zero if a higher-order model (dotted line) had been employed.
The ratio of SSLOF/(m − 2) to SSPE/(n − m) follows an F distribution:
   F = (SSLOF/(m − 2)) / (SSPE/(n − m)) = MSLOF/MSPE ~ F(m − 2, n − m)
Since the statistic F becomes small when the order of the model is appropriate, F is used to test H0: the order of the regression model is correct. Note that the quantities MSLOF (lack-of-fit mean square) and MSPE (pure-error mean square) are SSLOF and SSPE adjusted by their degrees of freedom, respectively.
ANOVA Table and Lack-of-Fit Test
The lack-of-fit and pure-error quantities from the ANOVA analysis are summarized in Table 11-4 to test for the adequacy of the model.

Table 11-4 ANOVA Table for the Lack-of-Fit Test
Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F0
Lack-of-Fit           SSLOF            m − 2                MSLOF         MSLOF/MSPE
Pure Error            SSPE             n − m                MSPE
Error                 SSE              n − 2                MSE

Step 1: State H0 and H1.   H0: The order of the model is correct.   H1: The order of the model is not correct.
Step 2: Determine a test statistic and its value.   f0 = MSLOF/MSPE
Step 3: Determine a critical value(s) for α.   fα,m−2,n−m
Step 4: Make a conclusion.  Reject H0 if f0 > fα,m−2,n−m
Example 11.9
(Lack-of-Fit Test) For the plywood price–sales data in Example 11.1, the following summary quantities have been calculated:
   n = 30, m = 6, SSE = 284.4, and SSLOF = Σ ni(ȳi − ŷi)² = 2.4
Conduct a lack-of-fit test on the regression model ŷ = 159.9 − 12.7x at α = 0.05.
An ANOVA table to test the lack of fit of the regression model is as follows:

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F0
Lack-of-Fit           2.4              4                    0.6           0.05
Pure Error            282.0            24                   11.8
Error                 284.4            28

Step 1: State H0 and H1.   H0: The order of the model is correct.   H1: The order of the model is not correct.
Step 2: Determine a test statistic and its value.   f0 = MSLOF/MSPE = 0.6 / 11.8 = 0.05
Step 3: Determine a critical value(s) for α.   fα,m−2,n−m = f0.05,4,24 = 2.78
Step 4: Make a conclusion.  Since f0 = 0.05 < f0.05,4,24 = 2.78, fail to reject H0 at α = 0.05. It is concluded that the order of the model is adequate.
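A minimal sketch (not from the workbook) of the lack-of-fit partition of SSE. The replicated observations below are hypothetical, not the workbook's plywood data; the partition and F-test follow the procedure above.

```python
import numpy as np
from scipy.stats import f

x = np.array([5, 5, 6, 6, 7, 7, 8, 8, 9, 9], dtype=float)     # m = 5 levels, replicated
y = np.array([96, 94, 84, 82, 70, 73, 58, 60, 46, 44], dtype=float)

n, m = len(y), len(np.unique(x))
b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x
SSE = np.sum((y - y_hat) ** 2)

# pure error: spread of the replicates around their own level means
SSPE = sum(np.sum((y[x == xi] - y[x == xi].mean()) ** 2) for xi in np.unique(x))
SSLOF = SSE - SSPE

f0 = (SSLOF / (m - 2)) / (SSPE / (n - m))
f_crit = f.ppf(0.95, m - 2, n - m)
print(round(SSLOF, 2), round(SSPE, 2), round(f0, 2), round(f_crit, 2))
```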
Exercise 11.9
For the noise–hypertension data in Exercise 11.1, the following summary quantities have been calculated:
   n = 20, m = 10, SSE = 35.7, and SSLOF = Σ ni(ȳi − ŷi)² = 4.8
Conduct a lack-of-fit test on the regression model ŷ = −9.81 + 0.17x at α = 0.05.
11-9 Transformations to a Straight Line

□ Explain the relationship between linear and non-linear models.

Transformation of Nonlinear Model
A non-linear relationship between the independent (x) and dependent (Y) variables can be transformed into a linear model by mathematical operations. Examples are presented in Table 11-5.

Table 11-5 Transformation of Nonlinear Models (illustrated)
Nonlinear Model         Linear Model
Y = β0 e^(β1 x) ε       ln Y = ln β0 + β1 x + ln ε
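A minimal sketch (not from the workbook) of fitting the exponential model in Table 11-5 by least squares on the log scale; the data are hypothetical, generated with β0 = 3.0 and β1 = 0.4.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 25)
y = 3.0 * np.exp(0.4 * x) * rng.lognormal(0.0, 0.1, x.size)   # Y = b0 * exp(b1*x) * error

b1_hat, ln_b0_hat = np.polyfit(x, np.log(y), 1)   # straight-line fit of ln Y on x
b0_hat = np.exp(ln_b0_hat)
print(round(b0_hat, 2), round(b1_hat, 2))         # close to 3.0 and 0.4
```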
11-11 Correlation

□ Calculate the sample correlation coefficient (R) between random variables X and Y.
□ Construct a confidence interval on the population correlation coefficient (ρ) between X and Y.
□ Conduct a hypothesis test on ρ.

Correlation vs. Regression
A close relationship exists between the correlation of random variables X and Y and the regression model of the independent and dependent variables x and Y. Recall that in the regression model the regressor x is assumed to be a mathematical variable, not a random variable (see Section 11-1). Suppose that X and Y are random variables and the observations (Xi, Yi), i = 1, 2, ..., n, are obtained from a bivariate normal distribution. The population correlation coefficient ρXY (see Section 5-5) is
   ρXY = σXY / (σX σY)
The conditional probability distribution of Y given X = x is a normal distribution with mean and variance (see Section 5-6)
   E(Y | x) = β0 + β1x,  where β0 = μY − μX ρXY (σY/σX) and β1 = ρXY (σY/σX)
   V(Y | x) = σY²(1 − ρXY²)
Therefore, the conditional probability distribution of Y given X = x is equivalent to the probability distribution of a simple linear regression model with mean β0 + β1x and variance σ².
Estimation of Correlation Coefficient (ρXY)
The estimator of ρXY, called the sample correlation coefficient (denoted as R), is
   R = Σ (Xi − X̄)(Yi − Ȳ) / √( Σ (Xi − X̄)² Σ (Yi − Ȳ)² ) = Sxy / √(Sxx Syy),   −1 ≤ R ≤ 1
The square of R becomes the coefficient of determination R² of the regression model for Y and X having a bivariate normal distribution:
(Derivation)
   R² = Sxy² / (Sxx Syy) = (Sxy/Sxx) Sxy / Syy = β̂1 Sxy / SST = SSR / SST
   because β̂1 = Sxy/Sxx, SST = Syy, and SSR = β̂1 Sxy

Inference Context on ρXY
Parameter of interest: ρXY
Point estimator of ρXY: R = Sxy / √(Sxx Syy)
Test statistic of ρXY:
   T0 = R √(n − 2) / √(1 − R²) ~ t(n − 2)                              for testing H0: ρ = 0
   Z0 = (arctanh R − arctanh ρ0) √(n − 3) ~ approximately N(0, 1)       for testing H0: ρ = ρ0 (≠ 0), n ≥ 25

Confidence Interval on ρXY
An approximate 100(1 − α)% confidence interval on ρ is
   tanh(arctanh R − zα/2 / √(n − 3)) ≤ ρ ≤ tanh(arctanh R + zα/2 / √(n − 3))

Hypothesis Test on ρXY (t-test)
Step 1: State H0 and H1.   H0: ρ = ρ0   H1: ρ ≠ ρ0
Step 2: Determine a test statistic and its value.
   t0 = R √(n − 2) / √(1 − R²)                     for testing H0: ρ = 0
   z0 = (arctanh R − arctanh ρ0) √(n − 3)           for testing H0: ρ = ρ0 (≠ 0), n ≥ 25
Hypothesis Test on ρXY (t-test) (cont.)
Step 3: Determine a critical value(s) for α.
   tα/2,n−2   for testing H0: ρ = 0
   zα/2       for testing H0: ρ = ρ0 (≠ 0), n ≥ 25
Step 4: Make a conclusion.  Reject H0 if
   |t0| > tα/2,n−2   for testing H0: ρ = 0
   |z0| > zα/2       for testing H0: ρ = ρ0 (≠ 0), n ≥ 25
Example 11.10
The relationship between ergonomics grade (X) and statistics grade (Y) is under study. A random sample of n = 50 students is selected who have taken both ergonomics and statistics courses. Their final scores in ergonomics and statistics are summarized as follows:
   n = 50, Sxx = 6,652.5, Syy = 3,999.0, and Sxy = 3,303.1
Assume that ergonomics grade and statistics grade have a bivariate normal distribution.
1. (Sample Correlation Coefficient) Calculate the sample correlation coefficient (R) between ergonomics grade (X) and statistics grade (Y).
   R = Sxy / √(Sxx Syy) = 3,303.1 / √(6,652.5 × 3,999.0) = 0.64
2. (Confidence Interval on ρXY) Construct a 95% confidence interval on the population correlation coefficient (ρ) between X and Y.
   tanh(arctanh R − z0.05/2 / √(n − 3)) ≤ ρ ≤ tanh(arctanh R + z0.05/2 / √(n − 3))
   ⇒ tanh(arctanh 0.64 − 1.96/√47) ≤ ρ ≤ tanh(arctanh 0.64 + 1.96/√47)
   ⇒ 0.44 ≤ ρ ≤ 0.78
3. (Hypothesis Test on ρXY) Test if ρXY ≠ 0 at α = 0.05.
Step 1: State H0 and H1.   H0: ρ = 0   H1: ρ ≠ 0
Step 2: Determine a test statistic and its value.
   t0 = R √(n − 2) / √(1 − R²) = 0.64 √48 / √(1 − 0.64²) = 5.78
Step 3: Determine a critical value(s) for α.   tα/2,n−2 = t0.05/2,50−2 = t0.025,48 = 2.01
Step 4: Make a conclusion.  Since |t0| = 5.78 > tα/2,n−2 = 2.01, reject H0. It is concluded that ergonomics grade (X) and statistics grade (Y) are significantly related at α = 0.05.
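A minimal sketch (not from the workbook) reproducing Example 11.10 in Python: the sample correlation, the Fisher z confidence interval, and the t-test of H0: ρ = 0, using only the summary quantities stated above.

```python
from math import sqrt, atanh, tanh
from scipy.stats import norm, t

n = 50
Sxx, Syy, Sxy = 6652.5, 3999.0, 3303.1
alpha = 0.05

r = Sxy / sqrt(Sxx * Syy)                      # about 0.64

# approximate 95% CI on rho via the Fisher z (arctanh) transform
half = norm.ppf(1 - alpha/2) / sqrt(n - 3)
lo, hi = tanh(atanh(r) - half), tanh(atanh(r) + half)

# t-test of H0: rho = 0
t0 = r * sqrt(n - 2) / sqrt(1 - r**2)          # about 5.78
t_crit = t.ppf(1 - alpha/2, n - 2)             # about 2.01

print(round(r, 2), round(lo, 2), round(hi, 2), round(t0, 2), round(t_crit, 2))
```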
Exercise 11.10 (MR 11-56)
The measurements of weight (X) and systolic blood pressure (Y) for n = 26 randomly selected males in the age group 25 to 30 are collected, resulting in the following summary quantities:
   n = 26, Sxx = 4,502.2, Syy = 15,312.3, and Sxy = 6,422.2
Assume that weight and systolic blood pressure are jointly normally distributed.
1. Calculate the sample correlation coefficient (R) between X and Y.
2. Construct a 95% confidence interval on the population correlation coefficient (ρ) between X and Y.
3. Test if ρXY ≠ 0 at α = 0.05.
MINITAB Applications

Examples 11.1-3, 5, 6, 8
(Simple Linear Regression)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the plywood price–sales data on the worksheet in columns Price and Sales Volume.
(3) Choose Stat > Regression > Regression. In Response select Sales Volume and in Predictors select Price.
(4) Click Graphs. For Residuals for Plots, select Regular or Standardized. Under Residual Plots, check Normal plot of residuals, Residuals versus fits, and/or Residuals versus order. In Residuals versus the variables, select Price if residual plots against the regressor are desired.
(5) Click Options. Check Fit intercept. In Prediction intervals for new observations, enter the value of price of interest for the confidence interval on the mean response and/or the prediction interval of a future observation. In Confidence level, type the level of confidence. Then click OK.
(6) Click Results. Under Control the Display of Results, click the level of output detail desired. Then click OK twice, and interpret the analysis results.

Regression Analysis

The regression equation is
Sales Volume = 160 - 12.7 Price

Analysis of Variance
Source           DF      SS      MS        F      P
Regression        1   14208   14208   1398.9  0.000
Residual Error   28     284      10
Total            29   14493

Obs   Price   Sales Vo      Fit   StDev Fit   Residual   St Resid
  1     5.0     95.000   96.190       1.032     -1.190      -0.39
  2     5.0     98.000   96.190       1.032      1.810       0.60
  3     5.0     96.000   96.190       1.032     -0.190      -0.06
  4     5.0     92.000   96.190       1.032     -4.190      -1.39
  5     5.0     99.000   96.190       1.032      2.810       0.93
(remaining observations and predicted values omitted)

R denotes an observation with a large standardized residual
Example 11.7
(Residual Analysis) The residual plots produced by MINITAB for the plywood price–sales model:
(1) normal probability plot of the residuals; (2) standardized residual plot (residuals versus the order of the data); (3) residuals versus the fitted values; (4) residuals versus x.
Answers to Exercises

Exercise 11.1
1. (Estimation of Regression Coefficients)
   The fitted regression line of the noise–hypertension data is ŷ = β̂0 + β̂1x = −9.81 + 0.17x
2. (Calculation of Residual)
   ŷ(x=85) = β̂0 + β̂1x = −9.81 + 0.17 × 85 = 4.76
   e(x=85) = y(x=85) − ŷ(x=85) = 5 − 4.76 = 0.24
3. (Estimation of σ²)
   σ̂² = SSE / (n − 2) = 35.7 / 18 = 2.0 ≈ 1.4²

Exercise 11.2
1. (Hypothesis Test on β1)
Step 1: State H0 and H1.   H0: β1 = 0   H1: β1 ≠ 0
Step 2: Determine a test statistic and its value.   t0 = β̂1 / √(σ̂²/Sxx) = 6.68
Step 3: Determine a critical value(s) for α.   tα/2,n−2 = t0.025,18 = 2.10
Step 4: Make a conclusion.  Since |t0| = 6.68 > tα/2,n−2 = 2.10, reject H0 at α = 0.05.
Exercise 11.2 (cont.)
2. (Hypothesis Test on β0)
Step 1: State H0 and H1.   H0: β0 = 0   H1: β0 ≠ 0
Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for α.   tα/2,n−2 = t0.025,18 = 2.10
Step 4: Make a conclusion.  Since |t0| = 4.6 > tα/2,n−2 = 2.10, reject H0 at α = 0.05.
Exercise 11.3
(ANOVA for Hypothesis Test on β1)

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   f0
Regression            88.5             1                    88.5          44.7
Error                 35.7             18                   2.0
Total                 124.2            19

(Note) t0² = 6.7² = 44.7 = f0
Step 1: State H0 and H1.   H0: β1 = 0   H1: β1 ≠ 0
Step 2: Determine a test statistic and its value.
   f0 = (SSR/1) / (SSE/(n − 2)) = MSR/MSE = 88.5 / 2.0 = 44.7
Step 3: Determine a critical value(s) for α.   fα,1,n−2 = f0.05,1,20−2 = 4.41
Step 4: Make a conclusion.  Since f0 = 44.7 > f0.05,1,18 = 4.41, reject H0 at α = 0.05.
Exercise 11.4
1. (Confidence Interval on β1)
   1 − α = 0.95 ⇒ α = 0.05
   95% two-sided CI on β1:
   β̂1 − tα/2,n−2 √(σ̂²/Sxx) ≤ β1 ≤ β̂1 + tα/2,n−2 √(σ̂²/Sxx)
   ⇒ 0.17 − t0.05/2,20−2 √(1.4²/3,010.2) ≤ β1 ≤ 0.17 + t0.05/2,20−2 √(1.4²/3,010.2)
   ⇒ 0.17 − 2.10 × 0.03 ≤ β1 ≤ 0.17 + 2.10 × 0.03
   ⇒ 0.12 ≤ β1 ≤ 0.23
2. (Confidence Interval on β0)
   1 − α = 0.95 ⇒ α = 0.05
   95% two-sided CI on β0:

Exercise 11.5
(Confidence Interval on μY|x0)
   μ̂Y|85 = β̂0 + β̂1x0 = −9.81 + 0.17 × 85 = 4.7
   1 − α = 0.95 ⇒ α = 0.05
   95% two-sided CI on μY|85:  4.1 ≤ μY|85 ≤ 5.4
Exercise 11.6
(Prediction Interval on y0)
   ŷ0 = β̂0 + β̂1x0 = −9.81 + 0.17 × 85 = 4.7
   1 − α = 0.95 ⇒ α = 0.05
   The 95% two-sided prediction interval on y0 at x0 = 85 dB is 1.7 ≤ y0 ≤ 7.8.
(Note) The prediction interval 1.7 ≤ y0 ≤ 7.8 is wider than the corresponding confidence interval on the mean response 4.1 ≤ μY|85 ≤ 5.4 in Exercise 11.5.
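A minimal sketch (not from the workbook) of the mean-response CI and the prediction interval of Exercises 11.5 and 11.6, built from the summary quantities given in Exercise 11.1 and its answers; small differences from the workbook's limits come from the rounded coefficients.

```python
from math import sqrt
from scipy.stats import t

n = 20
b0, b1 = -9.81, 0.17
sigma2_hat = 1.4 ** 2
Sxx, sum_x = 3010.2, 1646.0
x_bar = sum_x / n
x0, alpha = 85.0, 0.05

y0_hat = b0 + b1 * x0                        # about 4.6-4.7
t_crit = t.ppf(1 - alpha/2, n - 2)           # t_{0.025,18} = 2.10

ci_half = t_crit * sqrt(sigma2_hat * (1/n + (x0 - x_bar)**2 / Sxx))
pi_half = t_crit * sqrt(sigma2_hat * (1 + 1/n + (x0 - x_bar)**2 / Sxx))
print(round(y0_hat - ci_half, 1), round(y0_hat + ci_half, 1))   # workbook: 4.1 and 5.4
print(round(y0_hat - pi_half, 1), round(y0_hat + pi_half, 1))   # workbook: 1.7 and 7.8
```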
Exercise 11.7
(Residual Analysis) The normal probability plot displays that the residuals lie closely along a straight line. In addition, the standardized residual plot shows that all the standardized residuals fall in the interval (−2, 2). Therefore, it is concluded that the residuals are normally distributed. Next, the plots ei vs. ŷi and ei vs. xi indicate that the residuals are random and have a constant variance. The variability of the residuals is slightly reduced in the low ranges of ŷ and x; however, this deviation from the constant variance is not serious enough to reject the adequacy of the regression model.

Exercise 11.8
(Coefficient of Determination)
   R² = SSR/SST = 88.5 / 124.2 = 71.3%
The fitted regression model explains 71.3% of the variability in blood pressure rise.
Exercise 11.9
(Lack-of-Fit Test) An ANOVA table to test the lack of fit of the regression model is as follows:

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F0
Lack-of-Fit           4.8              8                    0.6           0.20
Pure Error            30.8             10                   3.1
Error                 35.7             18

Step 1: State H0 and H1.   H0: The order of the model is correct.   H1: The order of the model is not correct.
Step 2: Determine a test statistic and its value.   f0 = MSLOF/MSPE = 0.6 / 3.1 = 0.20
Step 3: Determine a critical value(s) for α.   fα,m−2,n−m = f0.05,8,10 = 3.07
Step 4: Make a conclusion.  Since f0 = 0.20 < f0.05,8,10 = 3.07, fail to reject H0 at α = 0.05. It is concluded that the order of the model is adequate.
Exercise 11.10
1. (Sample Correlation Coefficient)
   R = Sxy / √(Sxx Syy) = 6,422.2 / √(4,502.2 × 15,312.3) = 0.77
2. (Confidence Interval on ρXY)
   tanh(arctanh 0.77 − z0.05/2 / √(26 − 3)) < ρ < tanh(arctanh 0.77 + z0.05/2 / √(26 − 3))
   ⇒ 0.55 < ρ < 0.89
3. (Hypothesis Test on ρXY)
Step 1: State H0 and H1.   H0: ρ = 0   H1: ρ ≠ 0
Step 2: Determine a test statistic and its value.
12 Multiple Linear Regression

12-1 Multiple Linear Regression Model
12-2 Hypothesis Tests in Multiple Linear Regression
  12-2.1 Test on Significance of Regression
  12-2.2 Test on Individual Regression Coefficients and Subsets of Coefficients
12-3 Confidence Intervals in Multiple Linear Regression
  12-3.1 Confidence Intervals on Individual Regression Coefficients
  12-3.2 Confidence Interval on the Mean Response
12-4 Prediction of New Observations
12-5 Model Adequacy Checking
  12-5.1 Residual Analysis
  12-5.2 Influential Observations
12-6 Aspects of Multiple Regression Modeling
  12-6.1 Polynomial Regression Models
  12-6.2 Categorical Regressors and Indicator Variables
  12-6.3 Selection of Variables and Model Building
  12-6.4 Multicollinearity
MINITAB Applications
Answers to Exercises
12-1 Multiple Linear Regression Model

□ Explain the assumptions of a multiple linear regression model.
□ Transform a non-linear model into a linear model.
□ Calculate residuals.
□ Estimate the error variance σ².

Multiple Linear Model
As an extension of a simple linear model, a multiple linear model explains the linear relationship between the response variable (Y) and k (> 1) regressors (xj's):
   Y = β0 + β1x1 + ... + βkxk + ε
where:
(1) Regression coefficients (β0, β1, ..., βk): The coefficient β0 denotes the intercept of the model and the coefficients βj's, j = 1 to k, denote the slopes of the regressors. Each slope coefficient βj measures the expected change in Y per unit change in xj when the other regressors are held constant (this is why the βj's are sometimes called partial regression coefficients).
(2) Independent variables (x1, x2, ..., xk; regressors, predictors): Since the regressors xj's will be controlled with negligible error, the xj's represent controlled constants (not random outcomes).
(3) Random error (ε): The variable ε represents the random variation of Y around the regression surface β0 + β1x1 + ... + βkxk. It is assumed that ε is normally distributed with mean 0 and constant variance σ², i.e., ε ~ N(0, σ²).
(4) Dependent (response) variable (Y): Since outcomes in Y at the same values of x1, x2, ..., xk can vary randomly, Y is a random variable with the normal distribution Y ~ N(β0 + β1x1 + ... + βkxk, σ²).
Like the assumptions of a simple linear model (see Section 11-1), the multiple linear model has four assumptions: (1) linear relationship between the xj's and Y, (2) randomness of error, (3) constant σ², and (4) normality of error. The analyst should check if these four assumptions are met for proper regression analysis (see the assessment of model adequacy in Section 12-5). Note that computers are mostly used in multiple regression analysis, so the detailed calculation process is not presented in this chapter.
Transformation of Nonlinear Model
Like the transformation of a non-linear model in simple linear regression analysis (see Section 11-9), a non-linear relationship between the xj's and Y can be transformed into a linear model by applying appropriate mathematical operations. Therefore, any regression model in which the regressors enter linearly can be treated as a linear model.
(e.g.) Transformation of a non-linear model
(1) Y = β0 + β1x1 + β2x2 + β12x1x2 + ε  ⇒  Y = β0 + β1x1 + β2x2 + β3x3 + ε, where β3 = β12 and x3 = x1x2
(2) Y = β0 + β1x1 + β2x2 + β3(1/x1) + β4x1³ + ε  ⇒  Y = β0 + β1x1 + β2x2 + β3x3 + β4x4 + ε, where x3 = 1/x1 and x4 = x1³
Matrix Notation
Matrix notations and operations are useful in multiple regression. Suppose that n observations consisting of k regressors and the response variable, (xi1, xi2, ..., xik, yi), i = 1, 2, ..., n, are modeled by a multiple linear model
   yi = β0 + β1xi1 + ... + βkxik + εi,  i = 1, 2, ..., n
The system of n equations is expressed in matrix notation as
   y = Xβ + ε
where y is the (n × 1) vector of responses, X is the (n × p) model matrix whose ith row is (1, xi1, xi2, ..., xik), β = (β0, β1, ..., βk)' is the (p × 1) vector of regression coefficients, and ε = (ε1, ..., εn)' is the (n × 1) vector of random errors.

Model Building
The least squares method finds the estimates of the regression coefficients β which minimize the sum of the squares of the errors
   L = ε'ε = (y − Xβ)'(y − Xβ)
The least squares estimators of β are
   β̂ = (X'X)⁻¹ X'y
Thus, the fitted regression model is
   ŷ = Xβ̂
The residuals of the fitted model are e=y-i The estimate of the error variance G~is (J A
=- S S ~-
n-P
"' - "x" n-P
, where p = number of regression coefficients =k
Standard Error of Regression Coefficients
B
j
The covariances of
+ 1 ( k = number of regressors)
6 are
COV($)= 02(x'x))-l = a2c, where C = (x'x)-' This covariance matrix is a 07 x p ) symmetric matrix whosejjthcomponent is the variance of and whose ij" component is the covariance of and ,where i
B
and j
= 0 to
B,
k, i # j.
Then, the estimated standard error of fi se(B, ) = &&
is
fi
12-4
Chapter 12
Standard Error of
P,
Multiple Linear Regression
The standard error of a regression coefficient is a useful measure to represent the precision of estimation for the regression coefficient: the smaller the standard error, the better the estimation precision.
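A minimal sketch (not from the workbook) of the matrix form of least squares just described: β̂ = (X'X)⁻¹X'y, the residuals, the error-variance estimate, and the coefficient standard errors. The data below are hypothetical, with k = 2 regressors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n), rng.uniform(20, 30, n)])
y = X @ np.array([5.0, 2.0, -1.5]) + rng.normal(0, 1.0, n)

p = k + 1
C = np.linalg.inv(X.T @ X)               # C = (X'X)^{-1}
beta_hat = C @ X.T @ y                   # least squares estimates
e = y - X @ beta_hat                     # residuals
SSE = e @ e
sigma2_hat = SSE / (n - p)               # error variance estimate
se_beta = np.sqrt(sigma2_hat * np.diag(C))   # se(beta_j) = sqrt(sigma2 * C_jj)

print(np.round(beta_hat, 2), round(sigma2_hat, 2), np.round(se_beta, 3))
```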
Example 12.1
The electric power (Y) consumed each month at a chemical plant is under study. It is thought that the power consumption is related to the average ambient temperature (x1, °F), the number of days in the month (x2), the average product purity (x3, %), and the tons of product produced (x4).
By using a statistical software program, a fitted regression model of the power consumption data including all four regressors (x1, x2, x3, and x4) and the corresponding error sum of squares (SSE) are obtained as follows:
   ŷ = −102.71 + 0.61x1 + 8.92x2 + 1.44x3 + 0.01x4  and  SSE = 1,699.0
1. (Calculation of Residual) Calculate the residual of y = 301 when x1 = 65 °F, x2 = 25 days, x3 = 91%, and x4 = 94 tons.
   The estimate of the power consumption given that x1 = 65, x2 = 25, x3 = 91, and x4 = 94 is
   ŷ = −102.71 + 0.61 × 65 + 8.92 × 25 + 1.44 × 91 + 0.01 × 94 = 291.8
   Thus, e = y − ŷ = 301 − 291.8 = 9.2 (underestimate)
2. (Estimation of σ²) Estimate the error variance σ².
   n = 12 and p (number of regression coefficients) = 5
   σ̂² = SSE / (n − p) = 1,699.0 / (12 − 5) = 242.7 = 15.6²
Exercise 12.1 (MR 12-8)
The pull strength of a wire bond is under study. Information on pull strength (y), die height (x1), post height (x2), loop height (x3), wire length (x4), bond width on the die (x5), and bond width on the post (x6) is collected.
By using a statistical software program, a fitted regression model of the pull strength data including four regressors (x2, x3, x4, and x5) and the corresponding error sum of squares (SSE) are obtained as follows:
   ŷ = 7.46 − 0.03x2 + 0.52x3 − 0.10x4 − 2.16x5  and  SSE = 10.91
1. Calculate the residual of y = 9.0 when x2 = 18.6, x3 = 28.6, x4 = 86.5, and x5 = 2.0.
2. Estimate the error variance σ².
12-2 Hypothesis Tests in Multiple Linear Regression

12-2.1 Test on Significance of Regression

❏ Test the significance of multiple regression.

Significance of Regression

The test for significance of regression determines if there is at least one xⱼ, out of the k regressors, that has a significant linear relationship with the response variable Y, i.e.,
H₀: β₁ = β₂ = ⋯ = βₖ = 0
H₁: βⱼ ≠ 0 for at least one j, j = 1 to k

Like simple regression, the ANOVA method is used to test the significance of regression. For the multiple linear model, the total variability in Y (SST, total sum of squares; degrees of freedom = n − 1) is partitioned into two components (see Table 12-1):
(1) Variability explained by the regression model (SSR, regression sum of squares; degrees of freedom = k)
(2) Variability due to random error (SSE, error sum of squares; degrees of freedom = n − k − 1 = n − (k + 1) = n − p)

Table 12-1 Partitioning of the Total Variability in Y
SST (Total SS) = SSR (Regression SS) + SSE (Error SS)
DF: n − 1 = k + (n − p)
(Note) SS: Sum of Squares; DF: Degrees of Freedom

The ratio of SSR/k to SSE/(n − p) follows an F distribution:
F = (SSR/k) / [SSE/(n − p)] = MSR/MSE ~ F(k, n − p)
Since F becomes small as H₀: β₁ = β₂ = ⋯ = βₖ = 0 is true, it is used to test the significance of regression. Note that MSR (regression mean square) and MSE (error mean square) are SSR and SSE adjusted by their corresponding degrees of freedom, respectively. Also note that MSE is an estimate of the error variance: σ̂² = MSE.
Test Procedure (F-test)

The variation quantities from the ANOVA analysis are summarized in Table 12-2 to test the significance of regression.

Table 12-2 ANOVA Table for Testing the Significance of Regression
Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F₀        P-value
Regression            SSR              k                    MSR           MSR/MSE
Error                 SSE              n − p                MSE
Total                 SST              n − 1

Step 1: State H₀ and H₁.
H₀: β₁ = β₂ = ⋯ = βₖ = 0
H₁: βⱼ ≠ 0 for at least one j, j = 1 to k
Step 2: Determine a test statistic and its value.
f₀ = (SSR/k) / [SSE/(n − p)] = MSR/MSE
Step 3: Determine a critical value(s) for α: f_{α,k,n−p}
Step 4: Make a conclusion. Reject H₀ if f₀ > f_{α,k,n−p}
R² vs. Adjusted R²

The coefficient of determination (denoted as R²) indicates the amount of variability in Y explained by the regressors x₁, x₂, …, xₖ:
R² = SSR/SST = 1 − SSE/SST, 0 ≤ R² ≤ 1
As discussed in Section 11-8.2, a large value of R² does not necessarily imply that the regression model is adequate. As a new regressor is added to the existing model, R² always increases regardless of whether the additional regressor is statistically significant or not; thus, a regression model having a large value of R² may unsatisfactorily estimate mean responses or predict new observations. In contrast, the adjusted R² accounts for the number of regression coefficients (p) included in the model:
adjusted R² = 1 − [SSE/(n − p)] / [SST/(n − 1)] = 1 − ((n − 1)/(n − p))(1 − R²), 0 ≤ adjusted R² ≤ 1
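As a numerical illustration of the F statistic, R², and adjusted R² defined above, the following Python sketch (an assumption of this guide, not the workbook's Minitab procedure; the data are made up) computes them for an arbitrary fitted model.

```python
import numpy as np
from scipy import stats

def regression_anova(X, y):
    """Return f0, p-value, R^2, and adjusted R^2 for the model y = X*beta + e.
    X must already contain the intercept column."""
    n, p = X.shape
    k = p - 1
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    y_hat = X @ beta_hat
    sst = np.sum((y - y.mean()) ** 2)
    sse = np.sum((y - y_hat) ** 2)
    ssr = sst - sse
    ms_r, ms_e = ssr / k, sse / (n - p)
    f0 = ms_r / ms_e
    p_value = 1.0 - stats.f.cdf(f0, k, n - p)
    r2 = ssr / sst
    adj_r2 = 1.0 - (n - 1) / (n - p) * (1.0 - r2)
    return f0, p_value, r2, adj_r2

# Hypothetical illustration data (not the workbook's power consumption data)
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=(12, 2))
X = np.column_stack([np.ones(12), x])
y = 2.0 + 1.5 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 1, 12)

print(regression_anova(X, y))
```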
Example 12.2

For the power consumption data in Example 12-1, the following ANOVA table is obtained:

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   f₀    P-value
Regression            4,957.2          4                    1,239.3       5.1   0.03
Error                 1,699.0          7                    242.7
Total                 6,656.3          11

1. (Test on the Significance of Regression) Test the significance of regression at α = 0.05.
☞ Step 1: State H₀ and H₁.
H₀: β₁ = β₂ = β₃ = β₄ = 0
H₁: βⱼ ≠ 0 for at least one j, j = 1 to 4
Step 2: Determine a test statistic and its value.
f₀ = (SSR/k) / [SSE/(n − p)] = MSR/MSE = 1,239.3/242.7 = 5.1
Step 3: Determine a critical value(s) for α.
f_{α,k,n−p} = f_{0.05,4,7} = 4.12
Step 4: Make a conclusion.
Since f₀ = 5.1 > f_{α,k,n−p} = 4.12, reject H₀ at α = 0.05. This indicates that at least one of the four regressors has a significant linear relationship with the monthly power consumption at α = 0.05.

2. (R² and Adjusted R²) Calculate the coefficient of determination (R²) and adjusted R² of the regression model.
☞ R² = SSR/SST = 4,957.2/6,656.3 = 74.5%
The four regressors (x₁, x₂, x₃, and x₄) explain 74.5% of the variability in the power consumption Y.
Adjusted R² = 1 − ((n − 1)/(n − p))(1 − R²) = 1 − ((12 − 1)/(12 − 5))(1 − 0.745) = 0.599
Exercise 12.2 (MR 12-22)

For the pull strength data in Exercise 12-1, the following ANOVA table is obtained by multiple regression including the four regressors (x₂, x₃, x₄, and x₅):

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   f₀     P-value
Regression            22.31            4                    5.58          7.16   <0.01
Error                 10.91            14                   0.78
Total                 33.22            18

1. Test the significance of regression at α = 0.05.
2. Calculate the coefficient of determination (R²) and adjusted R² of the regression model.
12-2.2 Tests on Individual Regression Coefficients and Subsets of Coefficients

❏ Test the significance of an individual regressor.
❏ Test the significance of a subset of regressors.

Significance of an Individual Regressor

Like the test on the significance of a regression coefficient in simple regression (see Section 11-5.1), the significance of any individual regressor xⱼ is evaluated by a t test (called a partial or marginal test).

Inference Context on βⱼ

Parameter of interest: βⱼ
Point estimator of βⱼ: β̂ⱼ ~ N(βⱼ, σ²Cⱼⱼ), σ² is unknown.
Test statistic of βⱼ: T = (β̂ⱼ − βⱼ)/se(β̂ⱼ) ~ t(n − p)
(Note) Estimated standard error of β̂ⱼ: se(β̂ⱼ) = √(σ̂²Cⱼⱼ), where σ̂² = MSE

Hypothesis Test on βⱼ (t-test)

Step 1: State H₀ and H₁.
H₀: βⱼ = βⱼ,₀
H₁: βⱼ ≠ βⱼ,₀
Step 2: Determine a test statistic and its value.
t₀ = (β̂ⱼ − βⱼ,₀)/se(β̂ⱼ)
Step 3: Determine a critical value(s) for α: t_{α/2,n−p}
Step 4: Make a conclusion. Reject H₀ if |t₀| > t_{α/2,n−p}
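A minimal sketch of the partial t test above, under the same assumptions as the earlier snippets (Python with NumPy/SciPy and made-up data): it reports t₀ = β̂ⱼ/se(β̂ⱼ) and the two-sided P-value for each coefficient.

```python
import numpy as np
from scipy import stats

def coefficient_t_tests(X, y):
    """Two-sided t tests of H0: beta_j = 0 for every coefficient in y = X*beta + e."""
    n, p = X.shape
    C = np.linalg.inv(X.T @ X)
    beta_hat = C @ X.T @ y
    e = y - X @ beta_hat
    sigma2_hat = (e @ e) / (n - p)          # MSE
    se = np.sqrt(sigma2_hat * np.diag(C))
    t0 = beta_hat / se
    p_values = 2.0 * (1.0 - stats.t.cdf(np.abs(t0), n - p))
    return beta_hat, se, t0, p_values

# Illustrative data (made up; x2 is constructed to be nearly irrelevant)
rng = np.random.default_rng(2)
x1 = rng.uniform(0, 5, 15)
x2 = rng.uniform(0, 5, 15)
y = 1.0 + 2.0 * x1 + rng.normal(0, 0.5, 15)
X = np.column_stack([np.ones(15), x1, x2])
for row in zip(*coefficient_t_tests(X, y)):
    print(["%8.3f" % v for v in row])
```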
Example 12.3

(Hypothesis Test on βⱼ) Consider the power consumption data in Example 12-1. The following quantities are obtained for production volume (x₄) in the regression analysis:
β̂₄ = 0.014, se(β̂₄) = 0.734, n = 12, and p = k + 1 = 5
Test the significance of x₄ at α = 0.05.
☞ Step 1: State H₀ and H₁.
H₀: β₄ = 0
H₁: β₄ ≠ 0
Step 2: Determine a test statistic and its value.
t₀ = (β̂₄ − β₄,₀)/se(β̂₄) = (0.014 − 0)/0.734 = 0.02
Step 3: Determine a critical value(s) for α.
t_{α/2,n−p} = t_{0.05/2,12−5} = t_{0.025,7} = 2.365
Step 4: Make a conclusion.
Since |t₀| = 0.02 < t_{α/2,n−p} = 2.365, fail to reject H₀ at α = 0.05.
Exercise 12.3

Consider the wire bond pull strength data in Exercise 12-1. The following quantities are obtained for bond width on the die (x₅) in the regression analysis:
β̂₅ = −2.16, se(β̂₅) = 2.39, n = 19, and p = k + 1 = 5
Test the significance of x₅ at α = 0.05.

Significance of a Subset of Regressors
The significance of a subset of regressors, x₁, x₂, …, xᵣ (r < k), is tested by the extra sum of squares method (partial F-test; general regression significance test), which is an extension of the ANOVA method explained in Section 12-2.1. Like the test for significance of regression, this test determines if there is at least one xⱼ, out of the r regressors, that has a significant linear relationship with Y, i.e.,
H₀: β₁ = β₂ = ⋯ = βᵣ = 0
H₁: βⱼ ≠ 0 for at least one j, j = 1 to r

To test the significance of the subset of regressors, let the vector of regression coefficients (β) be partitioned into β₁ for the regressors x₁, x₂, …, xᵣ and β₂ for the other regressors xᵣ₊₁, xᵣ₊₂, …, xₖ. Also let X₁ and X₂ denote the columns of X related to β₁ and β₂, respectively. Then, the multiple linear model is
y = Xβ + ε = X₁β₁ + X₂β₂ + ε

From Section 12-2.1, the regression sum of squares (SSR) and error mean square (MSE) of the full model are
SSR(β) = β̂′X′y − (Σᵢ yᵢ)²/n (degrees of freedom = k)
MSE = (y′y − β̂′X′y)/(n − p)
where β̂ = (X′X)⁻¹X′y. Note that SSR(β) indicates the contribution of the regressors X to the variability in Y. Now, to identify the contribution of the regressors X₁, fit a reduced model that excludes X₁:
y = X₂β₂ + ε
The least squares estimate of β₂ is β̂₂ = (X₂′X₂)⁻¹X₂′y, and the corresponding regression sum of squares is
SSR(β₂) = β̂₂′X₂′y − (Σᵢ yᵢ)²/n (degrees of freedom = k − r)

Then, the regression sum of squares due to β₁ given that β₂ is already in the model is
SSR(β₁ | β₂) = SSR(β) − SSR(β₂) = β̂′X′y − β̂₂′X₂′y
In other words, SSR(β₁ | β₂) indicates the increase in the regression sum of squares obtained by adding X₁ to the reduced model, which is called the extra sum of squares due to β₁. The ratio of SSR(β₁ | β₂)/r to SSE/(n − p) follows an F distribution:
F = [SSR(β₁ | β₂)/r] / [SSE/(n − p)] = MSR(β₁ | β₂)/MSE ~ F(r, n − p)
Like the full model, the statistic F becomes small as H₀: β₁ = β₂ = ⋯ = βᵣ = 0 is true.

Test Procedure (F-test)
The variation quantities of the ANOVA analysis for testing the significance of the subset of regressors β₁ are summarized in Table 12-3.

Table 12-3 ANOVA Table for Testing the Significance of a Subset of Regressors β₁
Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square        F₀
Regression (β)        SSR(β)           k                    MSR                MSR/MSE
  β₁ | β₂             SSR(β₁ | β₂)     r                    MSR(β₁ | β₂)       MSR(β₁ | β₂)/MSE
  β₂                  SSR(β₂)          k − r                MSR(β₂)
Error                 SSE              n − p                MSE
Total                 SST              n − 1

Step 1: State H₀ and H₁.
H₀: β₁ = 0 (β₁ = β₂ = ⋯ = βᵣ = 0)
H₁: β₁ ≠ 0 (βⱼ ≠ 0 for at least one j, j = 1 to r)
Step 2: Determine a test statistic and its value.
f₀ = [SSR(β₁ | β₂)/r] / MSE = MSR(β₁ | β₂)/MSE
Step 3: Determine a critical value(s) for α: f_{α,r,n−p}
Step 4: Make a conclusion. Reject H₀ if f₀ > f_{α,r,n−p}
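The extra sum of squares computation can be sketched as follows (Python with NumPy/SciPy and illustrative data only; not the workbook's software demonstration): the full and reduced models are fit, SSR(β₁ | β₂) is formed as the difference of their regression sums of squares, and the partial F statistic is compared with an F(r, n − p) distribution.

```python
import numpy as np
from scipy import stats

def ssr(X, y):
    """Regression sum of squares for the model y = X*beta + e (X includes the intercept)."""
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    return np.sum((X @ beta_hat - y.mean()) ** 2)

def partial_f_test(X_full, X_reduced, y, r):
    """Partial F test of the r regressors present in X_full but not in X_reduced."""
    n, p = X_full.shape
    beta_full = np.linalg.solve(X_full.T @ X_full, X_full.T @ y)
    sse_full = np.sum((y - X_full @ beta_full) ** 2)
    mse = sse_full / (n - p)
    extra_ss = ssr(X_full, y) - ssr(X_reduced, y)        # SSR(beta1 | beta2)
    f0 = (extra_ss / r) / mse
    p_value = 1.0 - stats.f.cdf(f0, r, n - p)
    return f0, p_value

# Illustrative data (made up): test x1 given that x2 is already in the model
rng = np.random.default_rng(3)
x1, x2 = rng.uniform(0, 5, 14), rng.uniform(0, 5, 14)
y = 1.0 + 1.2 * x1 + 0.8 * x2 + rng.normal(0, 0.4, 14)
X_full = np.column_stack([np.ones(14), x1, x2])
X_red = np.column_stack([np.ones(14), x2])
print(partial_f_test(X_full, X_red, y, r=1))
```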
Example 12.4

(Test on the Subset of Regressors) Consider the power consumption data in Example 12-1. Suppose that the four regressors (β) are grouped into two categories:
(1) Non-production regressor (β₁): temperature (x₁)
(2) Production regressors (β₂): number of days (x₂), product purity (x₃), and production volume (x₄)
Multiple regression is performed separately with the four regressors (β) and the non-production regressor (β₁), yielding the ANOVA results below.

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   f₀
Regression (β)        4,957.2          4                    1,239.3       5.1
Error                 1,699.0          7                    242.7
Total                 6,656.3          11

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   f₀
Regression (β₁)       3,758.9          1                    3,758.9       13.0
Error                 2,897.3          10                   289.7
Total                 6,656.3          11

Test the significance of the production regressors (β₂) at α = 0.05.
☞ Step 1: State H₀ and H₁.
H₀: β₂ = 0 (β₂ = β₃ = β₄ = 0)
H₁: β₂ ≠ 0 (βⱼ ≠ 0 for at least one j, j = 2 to 4)
Step 2: Determine a test statistic and its value.
SSR(β₂ | β₁) = SSR(β) − SSR(β₁) = 4,957.2 − 3,758.9 = 1,198.3
f₀ = [SSR(β₂ | β₁)/r] / MSE = (1,198.3/3)/242.7 = 1.65
Step 3: Determine a critical value(s) for α.
f_{α,r,n−p} = f_{0.05,3,7} = 4.35
Step 4: Make a conclusion.
Since f₀ = 1.65 < f_{α,r,n−p} = 4.35, fail to reject H₀ at α = 0.05. This indicates that none of the production regressors have a significant linear relationship with the power consumption at α = 0.05.

Exercise 12.4

Consider the wire bond pull strength data in Exercise 12-1. Suppose that the six regressors (β) are grouped into two categories:
(1) Process regressors (β₁): die height (x₁), post height (x₂), and loop height (x₃)
(2) Wire-bond regressors (β₂): wire length (x₄), bond width on the die (x₅), and bond width on the post (x₆)
Exercise 12.4 (cont.)

Multiple regression is performed separately with the six regressors (β) and the wire-bond regressors (β₂), yielding the ANOVA results below.

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   f₀
Regression (β)        23.63            6                    3.94          4.93
Error                 9.59             12                   0.80
Total                 33.22            18

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   f₀
Regression (β₂)       10.81            3                    3.61          2.41
Error                 22.41            15                   1.49
Total                 33.22            18

Test the significance of the process regressors (β₁) at α = 0.05.
12-3 Confidence Intervals in Multiple Regression

12-3.1 Confidence Intervals on Individual Regression Coefficients

❏ Establish a confidence interval on βⱼ.

Inference Context on βⱼ

Test statistic of βⱼ: T = (β̂ⱼ − βⱼ)/se(β̂ⱼ) ~ t(n − p)
(Note) Estimated standard error of β̂ⱼ: se(β̂ⱼ) = √(σ̂²Cⱼⱼ), where σ̂² = MSE

Confidence Interval on βⱼ

A 100(1 − α)% confidence interval on βⱼ is
β̂ⱼ − t_{α/2,n−p}·√(σ̂²Cⱼⱼ) ≤ βⱼ ≤ β̂ⱼ + t_{α/2,n−p}·√(σ̂²Cⱼⱼ)

Example 12.5

(Confidence Interval on βⱼ) Consider the power consumption data in Example 12-1. The following quantities are obtained for production volume (x₄) in the regression analysis:
β̂₄ = 0.014, se(β̂₄) = 0.734, n = 12, and p = k + 1 = 5
Establish a 95% two-sided confidence interval on β₄.
☞ 1 − α = 0.95 ⇒ α = 0.05
95% two-sided CI on β₄:
β̂₄ − t_{α/2,n−p}·se(β̂₄) ≤ β₄ ≤ β̂₄ + t_{α/2,n−p}·se(β̂₄)
0.014 − 2.365×0.734 ≤ β₄ ≤ 0.014 + 2.365×0.734
−1.72 ≤ β₄ ≤ 1.75

Exercise 12.5 (MR 12-31)

Consider the wire bond pull strength data in Exercise 12-1. The following quantities are obtained for bond width on the die (x₅) in the regression analysis:
β̂₅ = −2.16, se(β̂₅) = 2.39, n = 19, and p = k + 1 = 5
Establish a 95% two-sided confidence interval on β₅.
12-3.2 Confidence Interval on the Mean Response

❏ Establish a confidence interval on the mean response at x = x₀.

Sampling Distribution of the Mean Response Estimator at x = x₀

Suppose that the vector x₀ = [1, x₀₁, x₀₂, …, x₀ₖ]′ represents a particular point of the regressors x₁, x₂, …, xₖ. The mean response at x = x₀ is
μ_{Y|x₀} = E(Y | x₀) = x₀′β
Then, the point estimator of μ_{Y|x₀} is
μ̂_{Y|x₀} = x₀′β̂
and its sampling distribution is N(μ_{Y|x₀}, σ²x₀′(X′X)⁻¹x₀).

Inference Context on the Mean Response at x = x₀

Parameter of interest: μ_{Y|x₀}
Point estimator of μ_{Y|x₀}: μ̂_{Y|x₀} = x₀′β̂ ~ N(μ_{Y|x₀}, σ²x₀′(X′X)⁻¹x₀), σ² is unknown.
Test statistic of μ_{Y|x₀}: T = (μ̂_{Y|x₀} − μ_{Y|x₀})/se(μ̂_{Y|x₀}) ~ t(n − p)
(Note) Estimated standard error of μ̂_{Y|x₀}: se(μ̂_{Y|x₀}) = √(σ̂²x₀′(X′X)⁻¹x₀), where σ̂² = MSE.

Confidence Interval on μ_{Y|x₀}

A 100(1 − α)% confidence interval on μ_{Y|x₀} is
μ̂_{Y|x₀} − t_{α/2,n−p}·se(μ̂_{Y|x₀}) ≤ μ_{Y|x₀} ≤ μ̂_{Y|x₀} + t_{α/2,n−p}·se(μ̂_{Y|x₀})

Example 12.6

(Confidence Interval on μ_{Y|x₀}) Consider the power consumption data in Example 12-1. The following quantities are obtained for the mean power consumption when x₁ = 65°F, x₂ = 25 days, x₃ = 91%, and x₄ = 94 tons in the regression analysis:
μ̂_{Y|x₀} = 291.81, se(μ̂_{Y|x₀}) = 7.68, n = 12, and p = 5
Establish a 95% two-sided confidence interval on the mean response.
☞ 1 − α = 0.95 ⇒ α = 0.05
95% two-sided CI on μ_{Y|x₀}:
μ̂_{Y|x₀} − t_{α/2,n−p}·se(μ̂_{Y|x₀}) ≤ μ_{Y|x₀} ≤ μ̂_{Y|x₀} + t_{α/2,n−p}·se(μ̂_{Y|x₀})
291.81 − 2.365×7.68 ≤ μ_{Y|x₀} ≤ 291.81 + 2.365×7.68
273.6 ≤ μ_{Y|x₀} ≤ 310.0
Exercise 12.6 (MR 12-31)

Consider the wire bond pull strength data in Exercise 12-1. The following quantities are obtained for the mean pull strength when x₂ = 18.6, x₃ = 28.6, x₄ = 86.5, and x₅ = 2.0 in the regression analysis:
μ̂_{Y|x₀} = 8.66, se(μ̂_{Y|x₀}) = 0.57, n = 19, and p = 5
Establish a 95% two-sided confidence interval on the mean response.
12-4 Prediction of New Observations

❏ Establish a prediction interval on a future observation at x = x₀.

Prediction Interval on a Future Observation

Suppose that the vector x₀′ = [1, x₀₁, x₀₂, …, x₀ₖ] denotes a particular point of the regressors x₁, x₂, …, xₖ. Then, a point estimate of a future observation y₀ at x = x₀ is
ŷ₀ = x₀′β̂
A 100(1 − α)% prediction interval on y₀ at x₀ is
ŷ₀ − t_{α/2,n−p}·√(σ̂²(1 + x₀′(X′X)⁻¹x₀)) ≤ y₀ ≤ ŷ₀ + t_{α/2,n−p}·√(σ̂²(1 + x₀′(X′X)⁻¹x₀))

As discussed for the prediction of new observations in simple regression (Section 11-7), a prediction interval (PI) is always wider than the corresponding confidence interval (CI) on the mean response because the PI depends on both the error from the fitted regression model and the inherent variability in the random variable Y, while the CI depends only on the error from the fitted regression model.
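A small sketch contrasting the two intervals (Python with NumPy/SciPy and made-up data; an illustration only): both intervals are centered at x₀′β̂, and the prediction interval uses σ̂²(1 + x₀′(X′X)⁻¹x₀) instead of σ̂²x₀′(X′X)⁻¹x₀, so it is always wider.

```python
import numpy as np
from scipy import stats

def mean_ci_and_pi(X, y, x0, alpha=0.05):
    """Return (CI on the mean response, PI on a new observation) at the point x0.
    X and x0 must include the leading 1 for the intercept."""
    n, p = X.shape
    C = np.linalg.inv(X.T @ X)
    beta_hat = C @ X.T @ y
    sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / (n - p)   # MSE
    y0_hat = x0 @ beta_hat
    t = stats.t.ppf(1.0 - alpha / 2.0, n - p)
    se_mean = np.sqrt(sigma2_hat * x0 @ C @ x0)
    se_pred = np.sqrt(sigma2_hat * (1.0 + x0 @ C @ x0))
    ci = (y0_hat - t * se_mean, y0_hat + t * se_mean)
    pi = (y0_hat - t * se_pred, y0_hat + t * se_pred)
    return ci, pi

# Illustrative data (made up)
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, (12, 2))
X = np.column_stack([np.ones(12), x])
y = 5.0 + 2.0 * x[:, 0] - 1.0 * x[:, 1] + rng.normal(0, 1.0, 12)
x0 = np.array([1.0, 5.0, 5.0])
print(mean_ci_and_pi(X, y, x0))
```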
Example 12.7

(Prediction Interval on y₀) Consider the power consumption data in Example 12-1. The following quantities are obtained in the regression analysis for a future power consumption observation when x₁ = 65°F, x₂ = 25 days, x₃ = 91%, and x₄ = 94 tons:
ŷ₀ = 291.81, σ̂²x₀′(X′X)⁻¹x₀ = 7.68², σ̂² = 15.58², n = 12, and p = 5
Establish a 95% two-sided prediction interval on the power consumption.
☞ 1 − α = 0.95 ⇒ α = 0.05
95% two-sided PI on y₀:
ŷ₀ − t_{α/2,n−p}·√(σ̂² + σ̂²x₀′(X′X)⁻¹x₀) ≤ y₀ ≤ ŷ₀ + t_{α/2,n−p}·√(σ̂² + σ̂²x₀′(X′X)⁻¹x₀)
291.81 − 2.365×√(15.58² + 7.68²) ≤ y₀ ≤ 291.81 + 2.365×√(15.58² + 7.68²)
250.7 ≤ y₀ ≤ 332.9

Exercise 12.7 (MR 12-31)

Consider the wire bond pull strength data in Exercise 12-1. The following quantities are obtained in the regression analysis for a future pull strength observation when x₂ = 18.6, x₃ = 28.6, x₄ = 86.5, and x₅ = 2.0:
ŷ₀ = 8.66, σ̂²x₀′(X′X)⁻¹x₀ = 0.57², σ̂² = 0.88², n = 19, and p = 5
Establish a 95% two-sided prediction interval on the pull strength.
12-5 Model Adequacy Checking

12-5.1 Residual Analysis

❏ Describe the purpose of residual analysis.

Residual Analysis

Recall the four assumptions of a multiple linear model (see Section 12-1): (1) linearity between the xj's and Y, (2) randomness of error, (3) constant variance of error, and (4) normality of error. These four assumptions can be checked by analyzing the residuals (eᵢ = yᵢ − ŷᵢ), which is called the assessment of model adequacy. The first three assumptions (linearity, randomness, and constant variance) can be checked by examining the following residual plots:
(1) eᵢ vs. ŷᵢ
(2) eᵢ vs. xᵢⱼ, j = 1, 2, …, k
(3) eᵢ vs. time order t (if the time sequence is known)
Next, the normality of error is checked by one of the following:
(1) Frequency histogram of residuals
(2) Normal probability plot of residuals
(3) Standardization of residuals: If the residuals are normally distributed, about 95% of the standardized residuals dᵢ = eᵢ/√σ̂² are in the interval (−2, 2). Residuals whose standardized values are beyond ±3 may be outliers.

An undesirable residual pattern (see Section 11-8.1) may be due to non-linearity or insignificance of the relationship between the corresponding regressor and the response variable (Y). To correct an undesirable residual pattern, the appropriate method out of the following can be applied:
(1) Transform xⱼ or y into another form (such as 1/y, √y, and ln y).
(2) Add a higher-order term(s) of xⱼ to the model.
(3) Include a new regressor(s) in the model.

12-5.2 Influential Observations

❏ Identify influential observations by using the measure Cook's distance.

Cook's Distance (Dᵢ)

The measure Cook's distance (denoted by Dᵢ) is the squared standardized distance between the least squares estimate β̂ based on all n observations and the estimate β̂₍ᵢ₎ based on all the data except the ith observation:
Dᵢ = (β̂₍ᵢ₎ − β̂)′X′X(β̂₍ᵢ₎ − β̂)/(p·MSE), i = 1, 2, …, n
As the ith observation becomes unusually influential in determining the estimates of the regression coefficients, Dᵢ becomes large; a value of Dᵢ > 1 would indicate that the point is influential. If an influential observation is due to an error during the experiment, this observation should be excluded from the analysis.
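Cook's distance can be computed directly from the deleted-point definition above. The sketch below (Python/NumPy, made-up data with one deliberately extreme point) is an illustration, not the workbook's software output.

```python
import numpy as np

def cooks_distances(X, y):
    """Cook's distance for each observation, computed from the deleted-point definition."""
    n, p = X.shape
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    mse = np.sum((y - X @ beta_hat) ** 2) / (n - p)
    d = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i = np.linalg.solve(X[keep].T @ X[keep], X[keep].T @ y[keep])
        diff = beta_i - beta_hat
        d[i] = diff @ (X.T @ X) @ diff / (p * mse)
    return d

# Illustrative data with one deliberately unusual point (made up)
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 15)
y = 3.0 + 2.0 * x + rng.normal(0, 1.0, 15)
x[0], y[0] = 20.0, 10.0                      # an influential point
X = np.column_stack([np.ones(15), x])
print(np.round(cooks_distances(X, y), 2))    # values > 1 flag influential observations
```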
Example 12.8

(Influential Observation) Consider the power consumption data in Example 12-1. The Cook's distances of the 12 observations are calculated by using software; the reported values (e.g., 0.20 and 0.01) are all well below 1 (the full table is omitted here).
Check if any influential observations exist in the data.
☞ No influential observations whose Dᵢ > 1 are found in the data.
Exercise 12.8 (MR 12-40)

Consider the wire bond pull strength data in Exercise 12-1. The Cook's distances of the observations are calculated by using software (the table of Dᵢ values is given in MR Exercise 12-40). Check if any influential observations exist in the data.
12-6 Aspects of Multiple Regression Modeling

12-6.1 Polynomial Regression Models

❏ Establish a polynomial regression model.
❏ Test the significance of a higher-order term.

Polynomial Regression Model

When the response is curvilinear, a polynomial (a linear combination of powers in one or several variables) can be fit by using a multiple linear regression model. For example, a second-order polynomial in one variable can be treated as a linear regression model as follows:
Y = β₀ + β₁x + β₁₁x² + ε
When fitting a polynomial model, the lowest-order model is preferred for simplicity reasons (see Section 12-6.3). The significance of a higher-order term(s) is tested by the t test or the extra sum of squares method.
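A sketch of fitting a second-order polynomial as a linear model and testing the quadratic term with the extra sum of squares method (Python with NumPy/SciPy; the data are made up for illustration):

```python
import numpy as np
from scipy import stats

def quadratic_term_test(x, y):
    """Fit y = b0 + b1*x + b2*x^2 and test H0: b2 = 0 with the extra sum of squares method."""
    n = len(y)
    X2 = np.column_stack([np.ones(n), x, x ** 2])    # quadratic model
    X1 = np.column_stack([np.ones(n), x])            # reduced (linear) model
    def fit_sse(X):
        b = np.linalg.solve(X.T @ X, X.T @ y)
        return b, np.sum((y - X @ b) ** 2)
    b2, sse2 = fit_sse(X2)
    _, sse1 = fit_sse(X1)
    mse = sse2 / (n - 3)
    f0 = (sse1 - sse2) / mse                         # SSR(beta2 | beta1, beta0) / MSE
    p_value = 1.0 - stats.f.cdf(f0, 1, n - 3)
    return b2, f0, p_value

# Illustrative curvilinear data (made up)
x = np.linspace(1, 10, 12)
rng = np.random.default_rng(6)
y = 2.0 + 0.5 * x + 0.3 * x ** 2 + rng.normal(0, 1.0, 12)
print(quadratic_term_test(x, y))
```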
Example 12.9

(Polynomial Model) Consider the power consumption data in Example 12-1. By including only temperature (x₁), the following second-order regression model is developed:
ŷ = β̂₀ + β̂₁x₁ + β̂₂x₁² = 162.76 + 3.50x₁ − 0.02x₁²
The corresponding ANOVA table is as follows:

Source of Variation      Sum of Squares   Degrees of Freedom   Mean Square   f₀
Regression (x₁, x₁²)     4,362.8          2                    2,181.4       8.6
  SSR(β₁ | β₀)           3,758.9          1                    3,758.9       13.0
  SSR(β₂ | β₁, β₀)       603.8            1                    603.8         2.4
Error                    2,293.5          9                    254.8
Total                    6,656.3          11

Test if the quadratic term of temperature (x₁²) is significant at α = 0.05.
☞ Step 1: State H₀ and H₁.
H₀: β₂ = 0
H₁: β₂ ≠ 0
Step 2: Determine a test statistic and its value.
f₀ = SSR(β₂ | β₁, β₀)/MSE = 603.8/254.8 = 2.4
Step 3: Determine a critical value(s) for α.
f_{α,r,n−p} = f_{0.05,1,9} = 5.12
Step 4: Make a conclusion.
Since f₀ = 2.4 < f_{α,r,n−p} = 5.12, fail to reject H₀ at α = 0.05. This indicates that the quadratic term does not significantly contribute to the model.

Exercise 12.9 (MR 12-50)
The following data are collected during an experiment to determine the change in thrust efficiency (y, in percent) as the divergence angle of a rocket nozzle (x) changes:

y: 24.60  24.71  23.90  39.50  39.60  57.12  67.11  67.24  67.15  80.11  77.87  84.67
x:  4.0    4.0    4.0    5.0    5.0    6.0    6.5    6.5    6.75   7.0    7.1    7.3

A second-order polynomial model is developed for the thrust efficiency data:
ŷ = β̂₀ + β̂₁x + β̂₂x² = −4.46 + 1.38x + 1.47x²
The corresponding ANOVA table is as follows:

Source of Variation    Sum of Squares   Degrees of Freedom   Mean Square   f₀
Regression (x, x²)     5,740.6          2                    2,870.3       1,045.0
  SSR(β₁ | β₀)         5,716.3          1                    5,716.3       1,167.0
  SSR(β₂ | β₁, β₀)     24.3             1                    24.3          8.8
Error                  24.7             9                    2.7
Total                  5,765.3          11

Test if the quadratic term of the nozzle divergence angle (x²) is significant at α = 0.05.
12-6.2 Categorical Regressors and Indicator Variables

❏ Explain the use of indicator variables in regression analysis.

Indicator Variables

The regression models presented in previous sections are based on continuous (quantitative) variables. However, categorical (qualitative) variables such as gender and education level are sometimes considered in regression analysis. To incorporate categorical variables in regression, indicator (dummy) variables are used. A categorical variable with c levels can be modeled by c − 1 indicator variables.
(e.g.) Indicator variables
(1) gender = {male, female}: x = 0 for male, 1 for female
(2) education = {high school, college, postgraduate}: (x₁, x₂) = (0, 0) for high school, (1, 0) for college, (0, 1) for postgraduate

Another method of analyzing data that include categorical information is to fit a separate regression model to the data of each category. Compared to this approach, the indicator variable approach has the following advantages (a small illustration follows this list):
1. One regression model: One regression model integrates several regression models for different categories.
2. Increased statistical power: By pooling the data of different categories, the error term has a smaller mean square error due to a larger number of degrees of freedom (n − p), which results in increased power in statistical testing.
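The following sketch illustrates the pooled-model idea with one indicator variable (Python/NumPy; the data and coefficients are made up and are not the liver-weight example below):

```python
import numpy as np

# Illustrative data (made up): y depends on a continuous x1 and a two-level category
rng = np.random.default_rng(7)
x1 = rng.uniform(1.0, 3.0, 20)
gender = np.array([0] * 10 + [1] * 10)            # indicator variable: 0 = male, 1 = female
y = 50.0 + 30.0 * x1 + 5.0 * gender + rng.normal(0, 4.0, 20)

# Single pooled model with the indicator variable as an extra regressor
X = np.column_stack([np.ones(20), x1, gender])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
b0, b1, b2 = beta_hat
print("pooled model : y = %.1f + %.1f*x1 + %.1f*x2" % (b0, b1, b2))
# The two categories share the slope b1; the indicator shifts the intercept by b2
print("male line    : y = %.1f + %.1f*x1" % (b0, b1))
print("female line  : y = %.1f + %.1f*x1" % (b0 + b2, b1))
```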
Example 12.10

(Indicator Variable) A relationship between liver weight (y; unit: g) and body surface area (x₁; unit: m²) is under study. The measurements are obtained from 20 cadavers (10 males and 10 females; gender (x₂) = 0 for male and 1 for female). By assuming that the interaction between x₁ and x₂ is negligible, a fitted regression model is obtained for the liver weight data:
ŷ = β̂₀ + β̂₁x₁ + β̂₂x₂ = −108.7 + 977.0x₁ − 1.3x₂

Predictor      Coefficient   Standard error   t₀      P-value
Constant       −108.7        829.2            −0.13   0.897
BSA (x₁)       977.0         502.7            1.94    0.069
Gender (x₂)    −1.3          126.7            −0.01   0.992

☞ Since the P-value for β₂ is greater than 0.05, gender (x₂) does not have a significant effect on liver weight (y) at α = 0.05.

Exercise 12.10 (MR 12-53)
A mechanical engineer is investigating the surface finish (y; unit: μm) of metal parts produced on a lathe and its relationship to the speed (x₁; unit: RPM) of the lathe. Measurements are obtained by using two different types of cutting tools (x₂ = 0 for tool type 302 and 1 for tool type 416); for example, observations 7–9 (tool type 302) are 52.26, 50.52, and 45.58 μm at 265, 259, and 221 RPM, and observations 17–19 (tool type 416) are 32.13, 35.47, and 33.49 μm at 224, 251, and 232 RPM (the full data table is given in MR Exercise 12-53).

The following regression model is fit to the surface finish data by using software:
ŷ = β̂₀ + β̂₁x₁ + β̂₂x₂ + β̂₃x₁x₂ = 11.50 + 0.15x₁ − 6.09x₂ − 0.03x₁x₂

Predictor          Coefficient   Standard error   t₀      P-value
Constant           11.50         2.50             4.59    <0.01
Lathe speed (x₁)   0.15          0.01             14.43   <0.01
Tool type (x₂)     −6.09         4.02             −1.51   0.15
x₁ × x₂            −0.03         0.02             −1.79   0.09

Determine if two separate regression equations (one for each tool type) are required to adequately model the tool life data.
12-6.3 Selection of Variables and Model Building

❏ Explain the features of a satisfactory regression model.
❏ Compare variable selection techniques with each other: (1) all-possible regression evaluation; (2) forced selection/elimination; (3) forward selection; (4) backward elimination; and (5) stepwise regression.
❏ Determine the best set of regressors by the all-possible regression evaluation technique.
❏ Select a subset of regressors for model building by the stepwise regression technique.

Variable Selection for a Good Model

In regression analysis, the features of a 'good' model include
1. Validity: The model includes regressors having empirical (engineering or practical) significance as well as statistical significance.
2. Satisfactory performance: The model provides estimates of the response (given conditions of the regressors) within an acceptable accuracy.
3. Simplicity: The model includes as few regressors as possible for ease of use.
4. Stability: The model does not change sensitively with the use of a different set of observations.
These four criteria often conflict in many applications; thus, a compromise is necessary to select appropriate regressors for a model. In this variable selection process, experience, judgment, and discussion among the analysts and practitioners involved in the study are important to establish a satisfactory model.

Variable Selection Techniques

No single 'best' technique exists to select the best subset of regressors out of k candidate regressors. Available selection techniques include (1) all-possible regression evaluation; (2) forced selection/elimination; (3) forward selection; (4) backward elimination; and (5) stepwise regression. Each selection technique shows its own trade-off between computational efficiency and model performance in a specific application.

All-Possible Regression Evaluation

The all-possible regression evaluation technique generates all possible regression equations from the combinations of the k regressors and then evaluates the equations by using selected criteria (presented in the following paragraphs). This approach finds the best regression equation through comprehensive search and evaluation, but the number of equations to be examined increases geometrically as the number of candidate regressors increases (e.g., if k = 3, there are 2³ = 8 possible equations; if k = 10, there are 2¹⁰ = 1,024 possible equations). To determine a subset of regressors for the best model, four statistical criteria (Rₚ², adjusted Rₚ², MSE(p), and Cₚ; p = number of regression coefficients in the model, p ≤ k + 1) are available (see Figure 12-1). First, the best set of regressors is found where the increment in Rₚ² (coefficient of determination) begins to become smaller than a value predetermined by the analyst. Note that Rₚ² always increases as p increases (see Section 12-2.1).
All-Possible Regression Evaluation (cont.)

[Figure 12-1 Plots of the variable selection criteria (Rₚ², adjusted Rₚ², MSE(p), and Cₚ) against p (number of regression coefficients): the best p is indicated where the increase in Rₚ² becomes smaller than the predetermined value, adjusted Rₚ² reaches its maximum, MSE(p) reaches its minimum, and Cₚ is at its minimum or close to p.]
Second, the best set of p regressors is determined at which adjusted Rₚ² is at its maximum. Note that adjusted Rₚ² may decrease as p increases unless the reduction in the error sum of squares (SSE) is greater than the error mean square (MSE) of the existing model (see Section 12-2.1). Third, the best set of p regressors is determined where MSE(p) is at its minimum. This MSE criterion is equivalent to the adjusted Rₚ² criterion in the sense that MSE(p) increases if the reduction in SSE is smaller than MSE(p − 1), due to the loss of one degree of freedom in the error term by the addition of a new regressor. Lastly, the best set of p regressors is determined where Cₚ, which is a measure of the total mean square error, is at its minimum or close to p:
Cₚ = SSE(p)/σ̂² − n + 2p
where σ̂² is the MSE of the full model containing all k candidate regressors.
If the p-term regression model has negligible bias, the value of Cₚ is close to p; if the model is significantly biased, the value of Cₚ is significantly greater than p.
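An all-possible-subsets evaluation can be sketched as follows (Python/NumPy; illustrative data). The function name and data are assumptions of this guide; it reports R², adjusted R², MSE(p), and Mallows Cₚ = SSE(p)/σ̂² − n + 2p for every subset.

```python
import numpy as np
from itertools import combinations

def all_possible_subsets(x, y):
    """Evaluate every subset of regressors with R^2, adjusted R^2, MSE(p), and Mallows Cp.
    x is an (n, k) array of candidate regressors; the intercept is always included."""
    n, k = x.shape
    X_full = np.column_stack([np.ones(n), x])
    b_full = np.linalg.solve(X_full.T @ X_full, X_full.T @ y)
    sigma2_full = np.sum((y - X_full @ b_full) ** 2) / (n - k - 1)
    sst = np.sum((y - y.mean()) ** 2)
    rows = []
    for size in range(1, k + 1):
        for subset in combinations(range(k), size):
            X = np.column_stack([np.ones(n), x[:, subset]])
            p = X.shape[1]
            b = np.linalg.solve(X.T @ X, X.T @ y)
            sse = np.sum((y - X @ b) ** 2)
            r2 = 1.0 - sse / sst
            adj_r2 = 1.0 - (n - 1) / (n - p) * (1.0 - r2)
            mse = sse / (n - p)
            cp = sse / sigma2_full - n + 2 * p
            rows.append((subset, r2, adj_r2, mse, cp))
    return rows

# Illustrative data (made up): the third regressor carries no real information
rng = np.random.default_rng(8)
x = rng.uniform(0, 10, (15, 3))
y = 1.0 + 2.0 * x[:, 0] + 1.0 * x[:, 1] + rng.normal(0, 1.0, 15)
for subset, r2, adj, mse, cp in all_possible_subsets(x, y):
    print(subset, "R2=%.2f adjR2=%.2f MSE=%.2f Cp=%.1f" % (r2, adj, mse, cp))
```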
Example 12.11

(All-Possible Regression Evaluation) Consider the power consumption data in Example 12-1. By using software, the values of Rₚ², adjusted Rₚ², MSE(p), and Cₚ are obtained for every candidate subset of the four regressors (the complete listing appears in the Best Subsets output in the MINITAB Applications section). Representative results: the best single-regressor model (x₂) gives Rₚ² = 64.5%, adjusted Rₚ² = 60.9%, and Cₚ = 1.7; the best two-regressor model (x₁, x₂) gives Rₚ² = 73.1%, adjusted Rₚ² = 67.2%, and Cₚ = 1.4; the best three-regressor subset gives Rₚ² = 74.5%, adjusted Rₚ² = 64.9%, and Cₚ = 3.0; and the full model (x₁, x₂, x₃, x₄) gives Rₚ² = 74.5%, adjusted Rₚ² = 59.9%, and Cₚ = 5.0.
Discuss which model is preferable for the power consumption data.
☞ A two-regressor model including x₁ and x₂ is recommended because it has the maximum value of adjusted Rₚ² and the minimum values of MSE(p) and Cₚ.
Exercise 12.11 (MR 12-57)

(All-Possible Regression Evaluation) Consider the wire bond pull strength data in Exercise 12-1. By using software, the values of Rₚ², adjusted Rₚ², MSE(p), and Cₚ are obtained as follows:

No. of regressors   p   Regressors in the model      Rₚ² (%)   Adjusted Rₚ² (%)   MSE(p)   Cₚ
1                   2   x₄                           47.7      44.6               1.01     6.7
1                   2   x₃                           31.1      27.0               1.16     13.6
2                   3   x₃, x₅                       57.2      51.8               0.94     4.8
3                   4   x₃, x₄, x₅                   67.1      60.6               0.85     2.7
4                   5   x₁, x₃, x₄, x₆               68.6      59.6               0.86     4.1
5                   6   x₁, x₂, x₃, x₄, x₆           69.0      57.0               0.89     5.9
6                   7   x₁, x₂, x₃, x₄, x₅, x₆       71.1      56.7               0.89     7.0

Discuss which model is preferable for the pull strength data.
Forced Selection/Elimination

The forced selection/elimination technique includes/excludes regressors designated by the analyst in/from the model regardless of their statistical significance.

Forward Selection

The forward selection technique adds to the existing model the regressor xⱼ, out of the candidate regressors not yet in the model, that has the largest partial F-value fⱼ, if fⱼ exceeds the entry criterion f_in (corresponding probability p_in = 0.10 or 0.15; preselected by the analyst):
fⱼ = SSR(βⱼ | β_model) / MSE(β_model, βⱼ)
In other words, regressors are added to the model one at a time as long as their partial F-values are greater than f_in.

Backward Elimination

The backward elimination technique includes all candidate regressors in the model and then eliminates from the model the regressor xⱼ that has the smallest partial F-value fⱼ, if fⱼ is less than the removal criterion f_out (corresponding probability p_out = 0.15 or 0.20; predefined by the analyst). In other words, regressors are removed from the model one at a time as long as their partial F-values are smaller than f_out.

Stepwise Regression

The stepwise regression technique is a mix of the forward selection and backward elimination techniques: forward selection followed by backward elimination. This mixed variable selection technique is widely used. The procedure of stepwise regression is as follows (a software-free sketch of the procedure follows Exercise 12.12).
Step 1: Out of the candidate regressors, include xⱼ in the model if it has the largest partial F-value and fⱼ > f_in (corresponding p_in = 0.10 or 0.15).
Step 2: Among the regressors in the model, remove xⱼ from the model if it has the smallest partial F-value and fⱼ < f_out (corresponding p_out = 0.15 or 0.20; p_in ≤ p_out).
Step 3: Repeat Steps 1 and 2 until no other regressors are added to or removed from the model.
Example 12.12

(Stepwise Regression) Consider the power consumption data in Example 12-1. Out of the four regressors (x₁, x₂, x₃, and x₄), determine an appropriate set of regressors by the stepwise regression technique to build a regression model. Use software with p_in = p_out = 0.15 (equivalent to f_in = f_out ≈ 2.5).
☞ Two regressors, x₁ and x₂, are selected for model building.

Exercise 12.12 (MR 12-57)

Consider the wire bond pull strength data in Exercise 12-1. Out of the six regressors (x₁, x₂, x₃, x₄, x₅, and x₆), determine an appropriate set of regressors by the stepwise regression technique to build a regression model. Use software with p_in = p_out = 0.15 (equivalent to f_in = f_out ≈ 2.3).
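A simplified, software-free sketch of the stepwise procedure described above (Python/NumPy; the function names and data are assumptions of this guide). It alternates a forward entry step and a backward removal step driven by partial F values until the model stops changing.

```python
import numpy as np

def partial_f(x, y, in_model, j):
    """Partial F statistic for adding regressor j to the set in_model."""
    n = len(y)
    def sse_of(cols):
        X = np.column_stack([np.ones(n)] + [x[:, c] for c in cols])
        b = np.linalg.solve(X.T @ X, X.T @ y)
        return np.sum((y - X @ b) ** 2), X.shape[1]
    sse_red, _ = sse_of(in_model)
    sse_full, p_full = sse_of(in_model + [j])
    mse_full = sse_full / (n - p_full)
    return (sse_red - sse_full) / mse_full

def stepwise(x, y, f_in=2.5, f_out=2.5):
    """Simplified stepwise selection: forward entry followed by backward removal."""
    k = x.shape[1]
    model = []
    while True:
        changed = False
        # forward step: add the best candidate if its partial F exceeds f_in
        candidates = [j for j in range(k) if j not in model]
        if candidates:
            f_vals = {j: partial_f(x, y, model, j) for j in candidates}
            best = max(f_vals, key=f_vals.get)
            if f_vals[best] > f_in:
                model.append(best); changed = True
        # backward step: drop the weakest regressor if its partial F falls below f_out
        if len(model) > 1:
            f_vals = {j: partial_f(x, y, [m for m in model if m != j], j) for j in model}
            worst = min(f_vals, key=f_vals.get)
            if f_vals[worst] < f_out:
                model.remove(worst); changed = True
        if not changed:
            return sorted(model)

# Illustrative data (made up): only the first two regressors matter
rng = np.random.default_rng(9)
x = rng.uniform(0, 10, (20, 4))
y = 2.0 + 1.5 * x[:, 0] + 1.0 * x[:, 1] + rng.normal(0, 1.0, 20)
print("selected regressors (0-based indices):", stepwise(x, y))
```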
12-6.4 Multicollinearity

❏ Explain the terms multicollinearity and variance inflation factor (VIF).
❏ Describe undesirable effects due to multicollinearity in multiple regression analysis.
❏ Identify regressors having multicollinearity based on VIF.
❏ Describe remedial measures to resolve the problem of multicollinearity.

Multicollinearity

When there are strong dependencies among the regressors xⱼ's, multicollinearity is said to exist. The following undesirable effects can be observed due to multicollinearity:
1. Unstable estimation of regression coefficients: The estimates of the regression coefficients, β̂ⱼ's, are sensitive to a small change in the data.
2. High standard errors of regression coefficient estimates: The standard errors of the regression coefficient estimates, se(β̂ⱼ)'s, are unreasonably large.
3. Contradicting test results: Although the full regression model is significant by the F-test, no individual regressors are found significant by the t-test.
A regression model with the problem of multicollinearity will produce unsatisfactory estimates of mean responses or predictions of new observations beyond the region of the x-space surveyed (where extrapolation is needed). However, this regression model may produce satisfactory estimates of the response within the region of the x-space surveyed (where interpolation is performed).

Variance Inflation Factor (VIF)

The measure variance inflation factor (VIF) indicates the extent of multicollinearity:
VIF(βⱼ) = 1/(1 − Rⱼ²), j = 1, 2, …, k
where Rⱼ² is the coefficient of determination between xⱼ and the other k − 1 regressors. The stronger the multicollinearity, the larger the value of Rⱼ². If a VIF exceeds 10 (some researchers suggest 4 or 5), the problem of multicollinearity exists.

Remedial Measures

Remedial measures against multicollinearity include
1. Inclusion of additional data: Collect additional data to explore the possibility of breaking the existing linear dependencies between the regressors.
2. Removal of regressors with large values of VIF: Exclude regressors having large values of VIF from the model.
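A sketch of the VIF computation (Python/NumPy, made-up data in which one regressor is nearly a linear combination of the others); this is an illustration, not the workbook's Minitab output.

```python
import numpy as np

def variance_inflation_factors(x):
    """VIF(j) = 1 / (1 - R_j^2), where R_j^2 regresses x_j on the other regressors."""
    n, k = x.shape
    vifs = np.empty(k)
    for j in range(k):
        others = np.delete(x, j, axis=1)
        X = np.column_stack([np.ones(n), others])
        b = np.linalg.solve(X.T @ X, X.T @ x[:, j])
        resid = x[:, j] - X @ b
        r2_j = 1.0 - np.sum(resid ** 2) / np.sum((x[:, j] - x[:, j].mean()) ** 2)
        vifs[j] = 1.0 / (1.0 - r2_j)
    return vifs

# Illustrative data (made up): x3 is nearly a linear combination of x1 and x2
rng = np.random.default_rng(10)
x1 = rng.uniform(0, 10, 30)
x2 = rng.uniform(0, 10, 30)
x3 = 0.7 * x1 + 0.3 * x2 + rng.normal(0, 0.1, 30)
x = np.column_stack([x1, x2, x3])
print(np.round(variance_inflation_factors(x), 1))   # large values (> 10) flag multicollinearity
```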
Example 12.13

(Multicollinearity) Consider the power consumption data in Example 12-1. The variance inflation factors of the four regressors (x₁, x₂, x₃, and x₄) are obtained by using software; all four VIF values lie between about 1.0 and 2.2 (the complete table is omitted here).
Interpret the VIF values.
☞ Since all the VIF values are less than 10, the regression model does not have the problem of multicollinearity.

Exercise 12.13

Consider the wire bond pull strength data in Exercise 12-1. The variance inflation factors of the four regressors (x₂, x₃, x₄, and x₅) are obtained by using software; the reported values are all small (about 1.0 to 1.4; the complete table is omitted here).
Interpret the VIF values.
MINITAB Applications

Examples 12.1-3, 5-8, 13 (Multiple Linear Regression)

(1) Choose File ▶ New, click Minitab Project, and click OK.
(2) Enter the power consumption data on the worksheet.
(3) Choose Stat ▶ Regression ▶ Regression. In Response select y and in Predictors select the regressors under consideration.
(4) Click Graphs. For Residuals for Plots, select Regular or Standardized. Under Residual Plots, check Normal plot of residuals, Residuals versus fits, and/or Residuals versus order. In Residuals versus the variables, select the regressors against which the residuals are to be plotted. Then click OK.
(5) Click Options. Check Fit intercept. Under Display, check Variance inflation factors. In Prediction intervals for new observations, enter the values of the regressors under consideration for the confidence interval on the mean response and/or prediction interval of a future observation. In Confidence level, type the level of confidence. Then click OK.
(6) Click Results. Under Control the Display of Results, click the level of detail to be displayed. Then click OK.
(7) Click Storage. Check Fits, Residuals, and Cook's distance. Then click OK.
(8) Interpret the analysis results (the session-window output and worksheet snapshot are not reproduced here).
Example 12.4 (Test on the Subset of Regressors)

The reduced model including only the non-production regressor (x₁) is fit with Stat ▶ Regression ▶ Regression, yielding the following output:

Regression Analysis
The regression equation is y = 224 + 0.953 x1

Predictor   Coef     StDev    T       P
Constant    224.38   15.88    14.13   0.000
x1          0.9525   0.2645   3.60    0.005

Analysis of Variance
Source           DF   SS       MS       F       P
Regression       1    3758.9   3758.9   12.97   0.005
Residual Error   10   2897.3   289.7
Total            11   6656.3
Example 12.9 (Polynomial Model)

(1) Choose Stat ▶ Regression ▶ Fitted Line Plot. In Response select y, in Predictor select the regressor under consideration, and under Type of Regression Model click the type of polynomial to be developed. Then click OK.
(2) Interpret the analysis results.

Polynomial Regression

Analysis of Variance
Source       DF   SS        MS        F      P
Regression   2    4362.75   2181.38   8.56   0.008
Error        9    2293.50   254.83
Total        11   6656.25

Source      DF   Seq SS    F      P
Linear      1    3758.92   12.97  0.005
Quadratic   1    603.83    2.37   0.158
Example 12.10 (Indicator Variables)

(1) Choose File ▶ New, click Minitab Project, and click OK.
(2) Enter the liver weight data on the worksheet.
(3) Choose Stat ▶ Regression ▶ Regression. In Response select y and in Predictors select the regressors under consideration.
(4) Interpret the analysis results.

Regression Analysis
The regression equation is y = -109 + 977 x1 - 1 x2

Predictor   Coef     StDev   T       P
Constant    -108.7   829.2   -0.13   0.897
x1          977.0    502.7   1.94    0.069
x2          -1.3     126.7   -0.01   0.992

S = 281.2   R-Sq = 18.4%   R-Sq(adj) = 8.8%

Analysis of Variance
Source           DF   SS        MS       F      P
Regression       2    303477    151738   1.92   0.177
Residual Error   17   1344478   79087
Total            19   1647955
Example 12.11 (All-Possible Regression Evaluation)

(1) Choose Stat ▶ Regression ▶ Best Subsets. In Response select y and in Free Predictors select the candidate regressors.
(2) Click Options. In Minimum and Maximum enter the minimum and maximum numbers of regressors that will be included in the regression models, respectively. In Models of each size to print, specify the number (1 to 5) of best regression models to be printed for each number of regressors. Then click OK.
(3) Interpret the analysis results.

Best Subsets Regression
Response is y
(For each number of regressors, the output lists the best subsets with their R-Sq, R-Sq(adj), C-p, and S values; for example, the best two-regressor model has R-Sq = 73.1, R-Sq(adj) = 67.2, C-p = 1.4, and S = 14.095, and the full four-regressor model has R-Sq = 74.5, R-Sq(adj) = 59.9, C-p = 5.0, and S = 15.6.)
Example 12.12 (Stepwise Regression)

(1) Choose Stat ▶ Regression ▶ Stepwise. In Response select y and in Predictors select the candidate regressors.
(2) Click Methods. Enter the values of Alpha to enter and Alpha to remove (corresponding to p_in and p_out), respectively. Then click OK.
(3) Interpret the analysis results.

Stepwise Regression
Response is y on 4 predictors, with N = 12
(The output shows that x2 enters at Step 1 with T-value 4.26 and a second regressor enters at Step 2 with T-value 2.36, after which no other regressors are added or removed; the two selected regressors are x1 and x2.)
Answers to Exercises

Exercise 12.1

1. (Calculation of Residual) The estimate of the pull strength when x₂ = 18.6, x₃ = 28.6, x₄ = 86.5, and x₅ = 2.0 is
ŷ = 7.46 − 0.03×18.6 + 0.52×28.6 − 0.10×86.5 − 2.16×2.0 = 8.66
Thus, e = y − ŷ = 9.0 − 8.66 = 0.34 (underestimate)

2. (Estimation of σ²) n = 19 and p (number of regression coefficients) = 5
σ̂² = SSE/(n − p) = 10.91/(19 − 5) = 0.78 = 0.88²

Exercise 12.2

1. (Test on the Significance of Regression)
Step 1: State H₀ and H₁.
H₀: β₂ = β₃ = β₄ = β₅ = 0
H₁: βⱼ ≠ 0 for at least one j, j = 2 to 5
Step 2: Determine a test statistic and its value.
f₀ = (SSR/k) / [SSE/(n − p)] = MSR/MSE = 5.58/0.78 = 7.16
Step 3: Determine a critical value(s) for α.
f_{α,k,n−p} = f_{0.05,4,14} = 3.11
Step 4: Make a conclusion.
Since f₀ = 7.16 > f_{α,k,n−p} = 3.11, reject H₀ at α = 0.05. This indicates that at least one of the four regressors in the model has a significant linear relationship with the pull strength of a wire bond at α = 0.05.

2. (R² and Adjusted R²)
R² = SSR/SST = 22.31/33.22 = 67.2%
The four regressors (x₂, x₃, x₄, and x₅) explain 67.2% of the variability in the pull strength Y.
Adjusted R² = 1 − ((n − 1)/(n − p))(1 − R²) = 1 − ((19 − 1)/(19 − 5))(1 − 0.672) = 0.578
Exercise 12.3

(Hypothesis Test on βⱼ)
Step 1: State H₀ and H₁.
H₀: β₅ = 0
H₁: β₅ ≠ 0
Step 2: Determine a test statistic and its value.
t₀ = (β̂₅ − β₅,₀)/se(β̂₅) = (−2.16 − 0)/2.39 = −0.90
Step 3: Determine a critical value(s) for α.
t_{α/2,n−p} = t_{0.05/2,19−5} = t_{0.025,14} = 2.145
Step 4: Make a conclusion.
Since |t₀| = 0.90 < t_{α/2,n−p} = 2.145, fail to reject H₀ at α = 0.05. This indicates that bond width on the die (x₅) does not have a significant linear relationship with the pull strength at α = 0.05.

Exercise 12.4

(Test on the Subset of Regressors)
Step 1: State H₀ and H₁.
H₀: β₁ = 0 (β₁ = β₂ = β₃ = 0)
H₁: β₁ ≠ 0 (βⱼ ≠ 0 for at least one j, j = 1 to 3)
Step 2: Determine a test statistic and its value.
SSR(β₁ | β₂) = SSR(β) − SSR(β₂) = 23.63 − 10.81 = 12.82
f₀ = [SSR(β₁ | β₂)/r] / MSE = (12.82/3)/0.80 = 5.34
Step 3: Determine a critical value(s) for α.
f_{α,r,n−p} = f_{0.05,3,12} = 3.49
Step 4: Make a conclusion.
Since f₀ = 5.34 > f_{α,r,n−p} = 3.49, reject H₀ at α = 0.05. This indicates that at least one of the process regressors has a significant linear relationship with the pull strength at α = 0.05.
Exercise 12.5

(Confidence Interval on βⱼ) 1 − α = 0.95 ⇒ α = 0.05
95% two-sided CI on β₅:
β̂₅ − t_{α/2,n−p}·se(β̂₅) ≤ β₅ ≤ β̂₅ + t_{α/2,n−p}·se(β̂₅)
−2.16 − 2.145×2.39 ≤ β₅ ≤ −2.16 + 2.145×2.39
−7.29 ≤ β₅ ≤ 2.97

Exercise 12.6

(Confidence Interval on μ_{Y|x₀}) 1 − α = 0.95 ⇒ α = 0.05
95% two-sided CI on μ_{Y|x₀}:
μ̂_{Y|x₀} − t_{α/2,n−p}·se(μ̂_{Y|x₀}) ≤ μ_{Y|x₀} ≤ μ̂_{Y|x₀} + t_{α/2,n−p}·se(μ̂_{Y|x₀})
8.66 − 2.145×0.57 ≤ μ_{Y|x₀} ≤ 8.66 + 2.145×0.57
7.44 ≤ μ_{Y|x₀} ≤ 9.88

Exercise 12.7

(Prediction Interval on y₀) 1 − α = 0.95 ⇒ α = 0.05
95% two-sided PI on y₀:
ŷ₀ − t_{α/2,n−p}·√(σ̂² + σ̂²x₀′(X′X)⁻¹x₀) ≤ y₀ ≤ ŷ₀ + t_{α/2,n−p}·√(σ̂² + σ̂²x₀′(X′X)⁻¹x₀)
8.66 − 2.145×√(0.88² + 0.57²) ≤ y₀ ≤ 8.66 + 2.145×√(0.88² + 0.57²)
6.41 ≤ y₀ ≤ 10.91

Exercise 12.8

(Influential Observation) Since all the Dᵢ's are less than one, no influential observations are found in the data.
Exercise 12.9

(Polynomial Model)
Step 1: State H₀ and H₁.
H₀: β₂ = 0
H₁: β₂ ≠ 0
Step 2: Determine a test statistic and its value.
f₀ = SSR(β₂ | β₁, β₀)/MSE = 24.3/2.7 = 9.0
Step 3: Determine a critical value(s) for α.
f_{α,r,n−p} = f_{0.05,1,9} = 5.12
Step 4: Make a conclusion.
Since f₀ = 9.0 > f_{α,r,n−p} = 5.12, reject H₀ at α = 0.05. This indicates that the quadratic term significantly contributes to the model.

Exercise 12.10

(Indicator Variable) For tool type 302 (x₂ = 0), the fitted model is
ŷ = β̂₀ + β̂₁x₁ = 11.50 + 0.15x₁
In contrast, for tool type 416 (x₂ = 1), the fitted model is
ŷ = β̂₀ + β̂₁x₁ + β̂₂ + β̂₃x₁ = (β̂₀ + β̂₂) + (β̂₁ + β̂₃)x₁ = (11.50 − 6.09) + (0.15 − 0.03)x₁ = 5.41 + 0.12x₁
However, since the P-values for β₂ and β₃ are greater than 0.05, it is concluded that two regression equations are not needed to model the tool life data.

Exercise 12.11

(All-Possible Regression Evaluation) Either a two-regressor model including x₃ and x₄ or a three-regressor model including x₁, x₃, and x₄ is recommended. The two-regressor model has the minimum value of Cₚ, and the three-regressor model has the maximum value of adjusted Rₚ² and the minimum value of MSE(p).

Exercise 12.12

(Stepwise Regression) Two regressors, x₃ and x₄, are selected for model building.

Exercise 12.13

(Multicollinearity) Since all the VIF values are less than 10, the regression model does not have the problem of multicollinearity.
Chapter 13 Design and Analysis of Single-Factor Experiments: The Analysis of Variance

13-2 The Completely Randomized Single-Factor Experiment

13-2.1 An Example

❏ Identify dependent, independent, and extraneous variables in an experiment.
❏ Explain the terms treatment and replicate.
❏ Compare the following pairs of concepts: controllable vs. uncontrollable variables; fixed-effects vs. random-effects factors; balanced vs. unbalanced designs; complete vs. restricted randomization.
❏ Explain the purpose of randomization.

Tensile Strength Experiment (example)

A tensile strength experiment is introduced to explain the fundamental terminology of experimental design. Suppose that a manufacturer of paper is interested in improving the tensile strength of the product. The manufacturer wishes to identify if the tensile strength is affected by the hardwood concentration in the pulp used for the paper. The range of hardwood concentration of interest is between 5 and 20%; within this range, four levels (a = 4) of hardwood concentration are examined: 5, 10, 15, and 20%. At each concentration level, three specimens (nᵢ = 3, i = 1 to a) are tested in a laboratory. The measurements (yᵢⱼ, j = 1 to nᵢ, i = 1 to a) of this experiment can be summarized in a table format like Table 13-1.

Table 13-1 Data Structure of a Single-Factor Experiment
[Table 13-1 layout: each treatment i = 1, …, a occupies one row listing its replicates yᵢ₁, yᵢ₂, …, yᵢₙᵢ, followed by the treatment total yᵢ. and the treatment average ȳᵢ.; the final row gives the grand total y.. and the grand average ȳ.. over all N observations.]
~
~
~
~
Y.. w
~
~
id%-
Types of Variables
Three categories of variables exist in an experiment: 1. Response (dependent) variable: The outcome (response) of interest (e.g., tensile strength) 2. Independent variable (factor): The variable whose effect on the response variable is under study (e.g., hardwood concentration) 3. Extraneous variable (nuisance factor): The variable whose effect on the response is not under consideration but may affect the response variable (e.g., difference between specimens, temperature, etc.)
Treatment (Factor Level)
The term treatment (factor level) of a factor refers to an individual condition of the factor. For example, the tensile strength experiment has four factor levels: 5, 10, 15, and 20%.
~
~
'
~
~
<
~
\
I
w
13-2 The Completely Randomized Single-Factor Experiment
Fixed-Effects VS.
Random-Effects Factors
Controllable VS. Uncontrollable Variables Replicate
Balanced VS. Unbalanced Designs Block
Complete VS. Restricted Randomization
13-3
Two types of factors are defined depending on the selection method of factor levels: fixed-effects and random-effects factors. While the levels of a fixedeffects factor are specifically selected by the experimenter, those of a randomeffects factor are chosen by random mechanism from a population of factor levels. The conclusion of a fixed-effects factor experiment cannot be extended beyond the factor levels selected, while that of a random-effects factor experiment can be extended to all the factor levels. For example, in the tensile strength experiment hardwood concentration is a fixed-effects factor because the four factor levels are specifically (not randomly) selected by the experimenter; thus, the corresponding conclusion is restricted to the four hardwood concentrations examined. An extraneous variable can be classified into either a controllable or uncontrollable variable depending on if the condition of the variable can be controlled by the experimenter with an acceptable variation. For example, if the tensile strength experiment is conducted in a laboratory where its temperature is under control, temperature is a controllable variable. The term replicate refers to a repeated measurement under the same experimental condition. As an example, the tensile strength experiment has six replicates for each treatment. A design of experiment is classified into either a balanced or unbalanced design depending on the equality of replicates. A balanced design has the same number of replicates between the treatments, while an unbalanced design does not. For example, the tensile strength experiment is a balanced design. Recall that the paired t-test is used to block out the effect of heterogeneity between specimens (experimental units) (see Section 10-4). This heterogeneity between experimental units becomes a nuisance factor; and each experimental unit is called block. Randomization means that the order of trials is determined by random mechanism (such as random number table and random number generator) and two types of randomization are defined according to restriction in randomization: (1) Complete randomization: All trials are randomized (see Table 13-2(b)). (2) Restricted randomization: Trials are randomized within each block (see Table 13-2(c)). Randomization balances out the effects of extraneous variables on the response. For example, in the tensile strength experiment, suppose that the tensile strength measurement device has a warm-up effect-an increase of two units in error per measurement. Table 13-2 shows errors in measurement due to the warm-up effect for different types of running trials; the warm-up effect becomes balanced out by randomization (from the range [30,48] in sequential experiment to [34,44] in complete randomization and [34,42] in restricted randomization).
13-4
Chapter 13
Design and Analysis of Single-Factor Experiments: The Analysis of Variance
Complete VS. Restricted Randomization
Table 13-2 Errors Due to Warm-up Effects
(cont.)
. . e , e . * . . . a . * s . L . ~
.
.,., ,
3 Totals
m~-d..BMwm-a.-
r
<.-,.wm.-,*.e,%
,111,1
*,
,
12(6) -.-,. ",.."*~ ,,., -., 38 ,.,,,, ,,,., -.a<.
13-2.2 The Analysis of Variance
Context
Model
1. Factor: Fixed-effects factor with a treatments 2. Blocking: No blocks defined 3. Repetition: n replicates for each factor level (balanced design) (Note) N (total number of observations) = n x a 4. Randomization: Complete randomization
Y.. =p+'ti 1/ where: y
+E.. 1/
=pi
= overall
mean a
r =0
'ti= "i treatment effect, i=l
pi
=
i'htreatment mean
E~ = random
error
- i.i.d.N (0, o2)
. .
13-5
13-2 The Completely Randomized Single-Factor Experiment
Mode' (cont.)
Least Square Estimators
(N0tes)l.G-~(p,,o') 2. In the fixed-effects model, the treatment effects (zi) are defined as deviations from the overall mean; thus, the sum of zi's is zero. The least squares method finds the estimates of p and .r;i's (i = 1 to a) which minimize the sum of the squares of the errors (SSE)
By solving the partial derivatives of L with respect to p and zi's, the least square estimators of p, ~ ~ 'and s , pi's are determined:
P = F.. ~ ~ = y , - y , . i, = 1 , 2 ,..., a A
-
-
P, =P+?i =y,,, i = 1 , 2,..., a ANOVA
The ANOVA method is used to test the significance of a factor of interest. Like the simple regression model, the total variability in Y (SST,total sum of squares; degrees of freedom = N - 1) is partitioned into two components (see Table 13-3): (1) Variability explained by the factor (SSTreatments,treatment sum of squares; degrees of freedom = a - 1) (2) Variability due to random error (SSE, error sum of squares; degrees of freedom = N - a) Table 13-3 Partitioning of the Total Variability in Y (SST)
SST
ss~reannents
SSE
(Total SS)
(Treatment SS)
(Emr SS)
M
-
~
~
"
w
+ A
~
~
~
~
d
-
w
-
~
-
~
~
a
~
~
~
.
~
.
a
~
~
N-a
(Note) SS: Sum of Squares; DF: Degrees of Freedom - 1) to SSEI(N- a) follows an F distribution: The ratio of SST,atmentsl(a
F=
"Treatments
-
SS, l(N - a )
- MS~reatments - F ( u - 1 , N - a ) MSE
This statistic F is used to test Ho: TI = 7 2 = ... = T, = 0, because F becomes small as the effect of the factor is insignificant. The quantities MS~reatments(treatment mean square) and MSE (error mean square) are adjusted S S T , ~and ~ ~SSE ~ ,by ~~ their degrees of freedom, respectively. Note that
u
~
"
-
~
~
w
e
~
~
~
~
13-6
Chapter 13
ANOVA (cont.)
Design and Analysis of Single-Factor Experiments: The Analysis of Variance The variation quantities from the ANOVA analysis are summarized in Table 13-4 to test the significance of the factor under consideration. Table 13-4 ANOVA Table for Testing the Significance of a Factor: Completely Randomized Design Degrees of Mean Source of Sum of Fo Freedom Square Squares Variation Treatments SS~reatrnents a-1 MS~reatments MS~reatments M S ~ Error
-Total Hypothesis Test (F-test)
"
"
SSE SST
MSE
N-a N-1
-
.d
"
-
-
"
".Wti"TXI
Step 1: State Noand H I . H ~ : TI = TI = ... = T O =0 HI: z;# 0 for at least one i, i = 1, 2,. .. , a Step 2: Determine a test statistic and its value.
F, = SSTreatments SS, l(N - a )
- MS~reatrnents
- F(u-1, N - a )
MSE
Step 3: Determine a critical value(s) for a. fa,a-1,~-a
Step 4: Make a conclusion. Reject Ho if f 0
'
fa,a-1,~-a
4
I
Unbalanced Design
The same ANOVA method of the balanced design is applicable to an unbalanced design with slight modifications. If n, denotes the number of observations under the ithtreatment, the total number of observations is
Then, the sums of squares for the unbalanced design are
a
n,
a
2
Yi.
1
SS~reatments i=] j=l
A balanced design has two advantages compared to an unbalanced design: (1) Robustness: The test procedure of the balanced design is less sensitive to a small departure from the assumption of equal error variance (d). (2) Statistical power: The power of the test of the balanced design is higher when the sample sizes of the two designs are the same.
1
1
13-7
13-2 The Completely Randomized Single-Factor Experiment
r
Confidence Interval on Pi
A 100(1 - a)% confidence interval on pi is
Confidence Interval on Pi-&
A 100(1 - a)% confidence interval on yi - pi is
Example 13.1
4 7
JF
-
-
Yi. - t a / 2 , ~ - a
p i 5 Ti, + ta,2,N-a
A paper manufacturer is examining if the tensile strength of a paper product is affected by the hardwood concentration of the pulp used for the product. Four hardwood concentrations ( 5 , 10, 15, and 20%; a = 4 ) are selected by the analyst and five specimens (n = 5 ) are tested at each concentration for tensile strength, resulting in the following: .
,
Hardwood Conmtratian (%;9 10 (2) 1s (3) 20 (4)
, --
ee
Totals A v*--e r a -*#* ~ *WWW*
50 10.0 " -
a*
m
"' m-
wwwana
-
79 15.8
w m v ma**
A
v-s
-aa**ra** e
84 16.8
=
v,
A
*-
m
107 21.4*
a
*w#
e
320 16.0 #
Bw"
--
The following summary quantities are also provided for the tensile strength data: a
n
a
2
N = 2 0 , ~ ~ y , 2 = 5 , 5 7 6 , a n d ~ =27,246 y, 1=1 J=I
r=l
(Note) The subscripts 1,2, 3, and 4 correspond to 5 , 10, 15, and 20% of hardwood concentration, respectively. 1. (Test on the Significance of a Fixed-Effects Factor; Completely Randomized Design) Perform the analysis of variance to test if there is any significant difference in tensile strength due to hardwood concentration at a = 0.05. h The analysis of variance of the tensile strength data is as follows:
13-8
Chapter 13
Design and Analysis of Single-Factor Experiments: The Analysis of Variance
Step 1: State Ho and H I . Ho: 2 1 = 2 2 = ". =2,=0 H I : z i f 0 for at leastone i,i=1 t o 4
Example 13.1 (cont.)
Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for a. fa,*-1,~-a
= f0.05,4-1,20-4
Step 4: Make a conclusion. Since f0 = 13.8 > fa,,-, ,-,,
= f0.05,3,16 = 3.24
= 3.24 , reject Ho at a = 0.05.
2. (Confidence Interval on pi) Establish a 95% two-sided confidence interval on p 2 (mean of tensile strength at 10% of hardwood concentration). 1-a=0.95 2 a=0.05 95% two-sided CI on p 2: 72. -ta12,~-a
I
Exercise 13.1 (MR 13-5)
dMS,
- k n
h+
ta12.N-~d~
3. (Confidence Interval on pi - &) Establish a 95% two-sided confidence interval onp1-p2. 5% two-sided CI on fi 1 - y 2:
The effect of cathode ray tube coating type on the tube conductivity is under study. Five different types of coating (a = 5) are selected by the analyst and four replicates (n = 4) are run for each coating type, resulting in the data as follows:
13-2 The Completely Randomized Single-Factor Experiment
13-9
Exercise 13.1 (cont.)
a
n
N=20, ~ ;=I
~ ;=I
a
y=389,115,and y 2
Zyi2=1,555,487 i=l
1. Perform the analysis of variance to test if there is any significant difference in conductivity due to coating type at a = 0.05. 2. Establish a 95% two-sided confidence interval on p 1 (mean of tube conductivity for coating type 1).
3. Establish a 95% two-sided confidence interval on p 2 - p3.
13-2.3 Multiple Comparisons Following the ANOVA
Multiple Comparison Methods
Least Significant Difference (LSD) Method
Several methods are available to identify which treatment means are different from each other: 1 . Scheffe's orthogonal contrast 2. Fisher's least significant difference (LSD) 3. Duncan's multiple range test 4. Tukey-Kramer's honestly significant difference (HSD) test In this book, only the LSD method is presented. Fisher's least significant difference (LSD) method is used to compare all possible pairs of treatment means, having the following null hypotheses Ho: pi = CL, for all i # j, i and j = 1 , 2 , . . . , a Each null hypothesis Ho: pi = CL, will be rejected if ItOI
> tai2,~-a
In other words,
13-10 Chapter 13 LSD Method (cont.)
Design and Analysis of Single-Factor Experiments: The Analysis of Variance As shown above, the pair of piand ~ 1will , be concluded significantly different if the absolute difference of the corresponding sample means is greater than the LSD. Note that for an unbalanced design the LSD is defined as I
/
\
The procedure of the LSD method is as follows: Step 1: Determine the LSD for a. Step 2: Arrange the means of a treatments in descending order. Step 3: Compare the difference between the largest and smallest means with the LSD. Continue this comparison with the next smallest mean as long as the mean difference is greater than the LSD. Step 4: Continue Step 3 for the next largest mean until this iteration reaches the second smallest mean. Step 5: Summarize the painvise comparison results by arranging the a treatments in order of mean and then underlining treatments whose means are not significantly different.
Example 13.2
(Least Significant Difference Method) Consider the tensile strength data in Example 13-1. The following summary quantities are obtained from the analysis of variance: a = 4 , n = 5 , N = 2 0 , MSE =7.9 j,. =10.0, Y2.=15.8, y3,=16.8, and y,, =21.4 Compare the means of the four treatments by using Fisher's LSD method. Use a = 0.05. I.+V Step 1: Determine the LSD for a.
Step 2: Arrange the means of a treatments in descending order. y,, =21.4, j,, =16.8, 7,. =15.8, and j,, =10.0 Step 3: Compare the difference between the largest and smallest means with the LSD. Continue this comparison with the next smallest mean as long as the mean difference is greater than the LSD. 4 ~ s 1=21.4-10.0=11.4>3.8 . 4 VS.2 = 21.4- 15.8 = 5.6 > 3.8 4 VS.3 = 21.4 - 16.8 = 4.6 > 3.8 Step 4: Continue Step 3 for the next largest mean until this iteration reaches the second smallest mean. 3 VS.1 = 16.8 - 10.0 = 6.8 > 3.8 3 VS. 2 = 16.8- 15.8= 1.0<3.8 2 vs. 1 = 15.8 - 10.0 = 5.8 > 3.8 Step 5: Summarize the painvise comparison results by arranging the a treatments in order of mean and then underlining treatments whose means are not significantly different. 4 3 2 1
13-2 The Completely Randomized Single-Factor Experiment
Exercise 13.2 (MR 13-14)
13-11
Consider the tube conductivity data in Exercise 13-1. The following summary quantities are obtained from the analysis of variance: a = 5 , n = 4 , N = 2 0 , MSE =16.2 J,. =145.0, 7,. =145.3, 7,. =131.5, J4, =129.3, and 7,. =145.3 Compare the means of the five treatments by using Fisher's LSD method. Use a = 0.05.
13-2.5 Residual Analysis and Model Checking
Describe the purpose of residual analysis in the analysis of variance. Calculate residuals. Residual Analysis
In Section 13-2.2, the single-factor ANOVA model has three assumptions on the error term: (1) normality, (2) constant variance, and (3) randomness. These assumptions can be checked by analyzing residuals (eij = yij - yij = yij - 7;.), which is called the assessment of model adequacy. First, the normality of error is checked by one of the following: (1) Frequency histogram of residuals (2) Normal probability plot of residuals (3) Standardization of residuals: If the residuals are normally distributed, about 95% of the standardized residuals e.. eij d . . =L-P , i = 1 , 2 ,..., a and j = 1 , 2 ,..., n
"-rn
are in the interval (-2,2). Residuals whose standardized values are beyond +3 may be outliers. Next, the constant variance and randomness of error can be checked by examining the following residual plots: (1) eij vs. factor level i, i = 1,2,. .. , a (2) eij vs.
7,
(3) eij vs. t (if time sequence is known) An undesirable residual pattern (see Section 11-8.1) in a residual plot may be corrected by transformation of the response variable ('y) into another form such as
r
J; or logy. Example 13.3
(Calculation of Residual) Consider the tensile strength data in Example 13-1. The following summary quantities are obtained from the analysis of variance: J1, =10.0, J,, =15.8, &, =16.8, and =21.4 Calculate the residual of yl2 = 8. (overestimate) ~h' el, = yI2 - j12 = y12- J1, = 8 - 10 = -2
L,
13-12 Chapter 13
Design and Analysis of Single-Factor Experiments: The Analysis of Variance
Consider the tube conductivity data in Exercise 13-1. The following summary quantities are obtained from the analysis of variance: y,, =145.0, j2,=145.3, y,, =131.5, 7,. =129.3, and 7,. =145.3 Calculate the residual of y24 = 143.
Exercise 13.3 (MR 13-5)
13-2.6 Determining Sample Size
O Determine the sample size of a single-factor experiment by using an appropriate operating characteristic (OC) curve.
Operating Characteristic (OC) Curve
Operating characteristic (OC) curves in Figure 13-6 of MR plot required sample sizes n for a fixed-effects factor experiment at different levels of the parameters a,p,@,vl,andv2,i.e., n =f( a , S, a,vl, and ~ 2 )
1% n2.:
where:
=
v 2 = a(n - 1) (Notes) 1. If the error variance o2is unknown, an estimate of o2(such as MSE, conventional value, and subjective estimate) can be used. I
2. The analyst can define the ratio
1=1
r' lo2 (or 0: / 02) in sample size
determination.
r
3. The sample size required increases as a, P, and
XI=,
r: (or o: ) decrease and I
3
o increases. Example 13.4
(Sample Size Determination; Fixed-Effects Factor) Consider the tensile strength experiment in Example 13- 1. Four hardwood concentrations (a = 4) are selected and their tensile strength means tested at a = 0.05. Suppose that the four normal populations have common variance o2= 32 and means pl = 12, y2 = 15, p3 =
I
18, and p4 = 20 (i.e.,
xa r=l
I
T: / o2= 4.1 ). Determine the number of replicates
within each treatment to reject the hypothesis of equality of means with power of at least 0.90 (= 1 - B). Use the operating characteristic curve of Figure 13-6(a) in MR for a fixedeffects factor with vl = 3 (= a - 1 = 4 - I). From the OC curve, the power of the test according to n is found as follows: n Q, v2 = a(n - 1) fi Power (1 p)
-
13-13
13-3 The Random Effects Factor
Example 13.4 (cont.)
I Exercise 13.4 (MR 13-20)
Thus, at least n = 5 replicates are needed to achieve the designated power of the test. Suppose that five normal populations (a = 5) have common variance o2= 1o2 and means p l = 175, p2 = 190, p3 = 160, p4 = 200, and p5 = 215 (i.e.,
za r=l
T: lo2 =
18.3). How many observations per population must be taken so that the probability of rejecting the hypothesis of equality of means is at least 0.95? Use a = 0.01.
13-3
The Random Effects Factor
Test the significance of a random-effects factor in a completely randomized design.
0 Estimate the variances of error (02), treatment effect (03, and response variable (V(YV)). --- ---- *#
ms*
ws
*--*
*
-+ms
Context
Model
***
*---
%
dMFm
*
*
*
*
*
a
a a
*
a
*
1. Factor: Random-effects factor with a treatments 2. Blocking: No blocks defined 3. Repetition: n for each factor level (balanced design) (Note) N (total number of observations) = n x a 4. Randomization: Complete randomization
Y'J = p + z i where: p
zi pi E,
+&..
rJ
= p i +&..
= overall
'
mean
= ithtreatment
{
i = 1 , 2 , ..., a j = 1 , 2 , ..., n
effect
th
- i.i.d. N (0, 0):
treatment mean = random error i.i.d. N (0, 02)
=i
-
ww
A
a>
e
a
13-14 Chapter 13 Model (cont.)
Design and Analysis of Single-Factor Experiments: The Analysis of Variance (Notes) 1. The random variables zi and E,. are assumed independent of each other. 2. Y, ~ ( y , , o ;+ a 2 )
-
3. In the random-effects model, testing the individual treatment effects (zi) are meaningless because the treatments are selected at random. ANOVA
The analysis of variance procedure of the fixed-effects model is valid for the random-effects model. The computation procedure for an ANOVA table in the fixed-effects model (see Section 13-2.2) is identical to that of the random-effects model. However, the conclusion of ANOVA for the random-effects model can be extended to the entire population of treatments. This generalization of conclusion is due to random selection of treatments in the experiment. The variance components (0: and 02) of the random-effects model can be and MSE: estimated by using MSTreatnebts
Therefore, the estimate of the variance of YV is
B(y,)=8: Hypothesis Test (F-test)
+ir2
Step 1: State Ho and H I . Ho: o,2 = 0 HI: 0 : f O (Note) If all treatment effects are identical, between treatment effects, a:
#
o:
= 0 ; but if there is variability
0.
Step 2: Determine a test statistic and its value.
Fo= "Treatments '(a SS, l(N - a)
- MSTraatments MSE
- F(u-1,N-a)
Step 3: Determine a critical value(s) for a. fa,a-l,(\i-a
Step 4: Make a conclusion. Reject Hoif f 0
ft
Example 13.5
'fa,a-1.~-a
(Estimation of Variance Components) Consider the tensile strength data in Example 13-1. Assume that the four levels of hardwood concentration are randomly selected. The following summary quantities are obtained from the analysis of variance: n = 5 , MSTreatments = 109.7 , and MS, = 7.9 Estimate the (1) variability in tensile strength due to random error (02), (2) variability due to hardwood concentration (0,2) and , (3) total variability in tensile strength (V(Yg)).Also interpret these variability estimation results.
13-4 Randomized Complete Block Design
13-15
Example 13.5 (cont.)
These variance estimates indicate that about 72% (= 20.4128.3) of the total variability in tensile strength is attributable to hardwood concentration.
bf;
Exercise 13.5
13-4
Consider the tube conductivity data in Exercise 13-1. Assume that the five coating types are chosen at random. The following summary quantities are obtained from the analysis of variance: n = 4 , MS,,e,,,ent, =265.1, and MS, =16.2 Estimate the (1) variability in tube conductivity due to random error (02), (2) variability due to coating type (o:), and (3) total variability in tube conductivity (V(Yij)).Also interpret these variability estimation results.
Randomized Complete Block Design
0 Distinguish between complete block design and incomplete block design. Explain the effects of blocking on the error term. Test the significance of a fixed-effects factor in a randomized complete block design. Perform multiple comparisons by Fisher's least significant difference (LSD) method.
Complete vs. Incomplete Block Designs
A complete block design contains all the treatments in each block (see Table 135), while an incomplete block design may omit one or more treatments in each block. Recall that blocking is used to balance out the effect of a nuisance factor (heterogeneity between experimental units) on the response variable.
Table 13-5 Data Structure of a Complete Block Design
Treatments
Blocks
1
2
...
i
...
a
Totals
Means
13-16 Chapter 13
Context
Model
Design and Analysis of Single-Factor Experiments: The Analysis of Variance 1 . Factor: Fixed-effects factor with a treatments 2. Blocking: Complete block design with b fixed-effect blocks 3. Repetition: One observation for each combination of factor levels and blocks (Note) N (total number observations) = a x b x 1 4. Randomization: Restricted randomization (randomization within each block) i=l,2,...,a
Yv = = + T i + P j +Eii = y i + P i
+E..
j = 1 , 2 , ..., b
where: y = overall mean
ri = fh treatment effect,
ri = 0 i=l
b
Pi
=jth block
P
effect,
=0
;=I
k i = ithtreatment mean cii (Notes) 1.
= random
error
- i.i.d. N (0, 02)
Y, - N(yi, 0 2 )
2. The treatments and blocks are assumed to be independent of (not interacting with) each other. 3. Since the treatments and blocks are assumed to be fixed-effects factors in the model, the treatment effects (ri) and block effects (Pj) are defined as deviations from the overall mean; thus, the sums of ri's and Pj's are zero. ANOVA
The analysis of variance method is used to test the significance of a factor in a randomized complete block design. The total variability in Y (SST, total sum of squares; degrees of freedom = N - 1) is partitioned into three components: (1) Variability explained by the treatments (SSTreatments, treatment sum of squares; degrees of freedom = a - 1) (2) Variability explained by the blocks (SSsloCh,block sum of squares; degrees of freedom =b - 1) (3) Variability due to random error (SSE, error sum of squares; degrees of freedom = (a - l)(b - I)) In other words,
a
b
a
b
2
where: S S , = z z ( Y , - ~ . ) 2 = ~ , ~2 Y--Y.. , i=l ~ T I i=1 ,=I ab
13-4 Randomized Complete Block Design
13-17
ANOVA (cont.)
The ratio of S S T ~ , , ~ ~-, ~1)~to, ~SSEI(a ( ~ - l)(b - 1) follows an F distribution:
This statistic F is used to test Ho:TI = TZ = ... = = 0, because F becomes as the effect of the factor is insignificant. The quantities MSTreatments(treatment ~ ~ mean ~ ~ square), ~ and MSE (error mean square) mean square), M S B (block ~ SSE ~ ~by ~ their , corresponding degrees of are adjusted SSTreatments, S S B ~and freedom, respectively. Note that
MS, =
ss~
(a - l)(b - 1)
=62
The variation quantities from the ANOVA analysis are summarized in Table 13-6 to test the significance of a factor under consideration. Table 13-6 ANOVA Table for Testing the Significance of a Factor: Randomized Com~lete ,~ .Block -Design -0.Source of Sum of Mean Degrees of Variation Squares Freedom Square Fo Treatments SSTreatmentS a-1 MS~reatments MSTreatmentS / MSE -
Blocks Error Total Effectiveness of Blocking
-
~
~-
S S ~ ~ o ~ k ~
SSE
b-1 (a - l)(b - 1)
M S ~ l ~ ~ k ~
MSE
N-1
Compared to a completely randomized design (presented in Section 13.-2.2), a randomized complete block design reduces the sum of squares and degrees of freedom for error. In the block design the error sum of squares (see Figure 13-1) and corresponding degrees of freedom are decreased by SSslocks and b - 1 (= (N a) - (a - l)(b - 1); N = ab), respectively.
Figure 13-1 The effect of blocking on error sum of squares (SSE):(a) completely randomized design; (b) randomized complete block design.
13-18 Chapter 13
Design and Analysis of Single-Factor Experiments: The Analysis of Variance
Effectiveness of Blocking (cont.)
The effectiveness of blocking depends on the significance of block effects. If the effect of blocks is significant, a complete block design will result in a smaller error mean square (MSE) than a completely randomized design, which makes the test more sensitive to detect the effect of treatments. In general, when the significance of block effects is uncertain, apply blocking and check if the block effects exist. There would be a slight loss in the degrees of freedom for error, which may be negligible in the test unless the number of trials is very small.
Hypothesis Test (F-test)
Step 1: State Ho and H I . Ho: T I = T ~ ... = =z,=O HI: T ~ # O foratleastonei,i= 1 , 2,..., a Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for a. fa,u-l,(u-l)(b-1)
Step 4: Make a conclusion. Reject Hoif f 0
'
fa,a-l,(u-l)(b-1)
Multiple Comparison Methods
The multiple comparison methods for the completely randomized design (see Section 13-2.3) are applicable to the complete block design. As an example, the value of Fisher's LSD is
Residual Analysis
The complete block design model has three assumptions on the error term: (1) normality, (2) constant variance, and (3) randomness. These assumptions can be checked by analyzing residuals e, = Y , -5, = Y , -(Y, +Y, - F These residuals are analyzed by the model adequacy assessment methods ex~lainedin Section 13-2.5.
Example 13.6
The effect of grip angle on maximum grip strength (unit: Ib.) is under study. Three grip angles (a = 3; 20°, 50°, and 80" from the horizontal) are selected by the experimenter. Five participants are recruited in the experiment; and each participant exerts hisfher maximum strength at the three different grip angles presented at random. The grip strength measurements are as follows:
I
13-19
13-4 Randomized Complete Block Design
Example 13.6 (cont.)
5 *--. Totals Averages
? X ~ . X ' X / " ~ - ~ n
62 297
"iW"r*I.w-"UIU
".-7 1
-~m-a~,mas--
65 317
338
198 952
66.0
~ ~ . B ~ B . . B ~ " . ~ - m " m m a ~,-mv..*m.m,.mmeasae
The following sum a
N = 15,
b
yxy y 2
a
= 62,132,
i=1 j = l
y i 2 = 302,942, and
b
Y,j2
= 185,794
j=l
i=l
1. (Test on the Significance of a Fixed-Effects Factor; Complete Block Design) Perform the analysis of variance to test if there is any significant difference in maximum grip strength due to grip angle at a = 0.05. D T ~The analysis of variance of the grip strength data are as follows:
Source of Variation
Sum of Squares
Grip Angle
168.1
2
84.1
Participant
1,511.1
4
377.8
Error
32.5
8
4.1
Total
1,711.7
14
Mean Square
Degrees of Freedom
Fo 20.7
Step 1: State HOand H I . Ho: z1 = z 2 = ... =I-,=() HI: zi#O foratleastonei,i= 1 t o 3 Step 2: Determine a test statistic and its value.
f o = "~reatments S ( a -1
'1
- - MS~reatments = - 1) MS,
841= 20.7 4.1
Step 3: Determine a critical value@) for a. fa,a-l,(a-l)(b-1)
= f0.05,3-1,(3-1)(5-1)
Step 4: Make a conclusion. Since fo = 20.7 > fa,a-l,(a-l)(b-l)
= f0.05,2,8 = 4.46
= 4.46, reject HOat a = 0.05.
I/ 11I
13-20 Chapter 13
Design and Analysis of Single-Factor Experiments: The Analysis of Variance
I
Example 13.6 (cont.)
2. (Least Significant Difference Method) Compare these three treatment means by using Fisher's LSD method at a = 0.05. ~hStep . 1: Determine the LSD for a.
I
Step 2: Arrange the means of a treatments in descending order. 7,. = 67.6, j7,, = 63.4, and 7,.= 59.4 Step 3: Compare the difference between the largest and smallest means with the LSD. Continue this comparison with the next smallest mean as long as the mean difference is greater than the LSD. 2 VS.1 = 67.6 - 59.4 = 8.2 > 3.0 2 VS.3 = 67.6 - 63.4 = 4.2 > 3.0 Step 4: Continue Step 3 for the next largest mean until this iteration reaches the second smallest mean. 3 vs. 1 = 63.4 - 59.4 = 4.0 > 3.0 Step 5: Summarize the painvise comparison results. 2 3 1 (significantly different from each other)
I Exercise 13.6 (MR 13-29)
3. (Calculation of Residual) Calculate the residual of y34= 79. e34=y34- j 3 4= y 3 4-(Y3, +J.4 -j7,,)=79-(63.4+77.0-63.5)=2.1
An experiment is conducted to investigate leaking current in a near-micron SOS MOSFETS device. This experiment investigates how leakage current varies as the channel length changes. Four channel lengths (a = 4) are selected by the experimenter; and for each channel length five different channel widths (b = 5) are used. Suppose that channel width is considered a nuisance factor. The data are as follows:
i = ~ j=l
i=l
-
.j=1
1. Perform the analysis of variance to test if there is any significant difference in leaking current due to the difference in channel length at a = 0.05. 2. Compare these four treatment means by using Fisher's LSD method at a = 0.05. 3. Calculate the residual = 0.8.
I1
q
MINITABApplications
13-21
MINITABApplications Examples 13.1 - 3
I
(Fixed-Effects Factor; Completely Randomized Design)
(1) Choose File
* New, click Minitab Project, and click OK. orksheet.
Click Comparisons. Check Fisher's, individual er'ror rate and type '5'
E'EI -
,,.-Ji =----[
..
"
.-
.-..a
23
1
.--.-..
8
L3
SLL'Z SLL'b-
SZO'ESLS'OI-
ST
S U O S T l W d U O S aSTl&.lIWd S , I a q S I 4
...................................................................................... O'SZ
O'OZ
O'SI
0.01
8
=
n a a a s parood
+---------+---------+---------+-----(-___ *__-__)
I88'Z
OOb'IZ
S
0Z
snsJaa slanp!saa u~ ..xapJo snsJaa slanppaa pue ' s ~ gsnsJaa slanp!saa 'slanp!sa~jo $old l a u r ~ y3ay3 o ~ 's)old lanp!saa lapun 'sqda~f)y3q3 (s)
a3ue!leA jo s!sdleuv aqL :sluaurpadxg.roped-alZu!sjo s!sd1euv pue uZ!saa
I
sa1durex3 -
E 1 laldeq3
zz-c c
MINITABApplications
Example 13.4
13-23
I (Sample Size Determination; Fixed-Effects Factor) (1) Choose Stat % Power and Sample Size % One-way ANOVA. Enter the number of treatments (a)in Number of levels. Click Calculate power for each sample size and enter the range of sample size to be examined in Sample sizes and true means of the treatments in Means. Enter also
(2) In Significance level enter the probability of type I error (a).Then click OK twice.
edefined test condition.
I
I
Power and Sample Size
Sigma = 3 Alpha = 0.05 Nunber of Levels = 4 Corrected Sum of Squares of Means = 36.75 Means = 12, 15, 18, 20
......................... Sample Size 3
4 5
Power
i
0.6253 ; 0.8308 I 0.9322 j 0.9752
6 1 .........................
4
w
13-24 Chapter 13
Example 13.6
I
Design and Analysis of Single-Factor Experiments: The Analysis of Variance
(Randomized Complete Block Design) (1) Choose File % New, click Minitab Project, and click OK.
Click Graphs. Under Residual Plots, check Normal plot of residuals. Residuals versus fits, and Residuals versus order. in Remsiduals versus - 2
7 MINITAB
Example 13.6 (cont.)
Applications
13-25
Click Results. In Display means corresponding to the terms, select
I
Analysis of Variance (Balanced Designs)
Factor Angle Partlcip
Type Levels Values fixed 3 1 fixed 5 1
2 2
3 3
4
5
Analysis of Variance for Strength :"""'..."""" ....................................................................................... :Source :Angle ~Particip :Error :Total
DF
SS
BS
2 4 8 14
168.13 1511.07 32.53 1711.73
84.07 377.77 4.07
20.67
0.001
.........................................................................................................
:
I Means Angle
N
Strength
1 2 3
5 5 5
59.400 67.600 63.400
13.6.1
Design and Analysis of Single-Factor Experiments: The Analysis of Variance
13-26 Chapter 13
Answers to Exercises 1. (Test on the Significance of a Fixed-Effects Factor; Completely Randomized Design) The sums of squares for the analysis of variance are as follows:
Exercise 13.1
This analysis of variance is summarized in the following table:
Source of Variation
Sum of Squares
Degrees of Freedom
Mean Square
fo
I
Coating type
1,060.5
4
265.1
16.3
Error
243.3
15
16.2
I
Total
1,303.8
19
I
I
Step 1: State HOand H I . Ho: z l = z2= ... = z a = O H I : zi#O f o r a t l e a s t o n e i , i = 1 t o 5 Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for a. fa,a-1,~-a
-
- f0.05,5-1,20-5
= f0.05,4,15 = 3'06
Step 4: Make a conclusion. Since f , = 16.3 >fa,a-l,N-a= 3.06, reject Ho at a = 0.05.
1 2. (Confidence Interval on pi)
1
1-a=0.95 3 a=0.05 95% two-sided CI on LI I :
Answers to Exercises
Exercise 13.1 (cont.)
Exercise 13.2
13-27
( 3. (Confidence Interval on pi - &)
1
95% two-sided CI on p 2 - p 3:
(Least Significant Difference Method) Step 1: Determine the LSD for a. LSD = ta,2,N-a--
- 2.131 x 2.846 = 6.07
Step 2: Arrange the means of a treatments in descending order. y2. = J 5 ,=145.3, J,, =145.0, J3. =131.5, and 7,. = 129.3 Step 3: Compare the difference between the largest and smallest means with the LSD. Continue this comparison with the next smallest mean as long as the mean difference is greater than the LSD. 2, 5 vs. 4 = 145.3 - 129.3 = 16.0 > 6.07 2, 5 vs. 3 = 145.3 - 131.5 = 13.8 > 6.07 2, 5 vs. 1 = 145.3 - 145.0 = 0.3 < 6.07 Step 4: Continue Step 3 for the next largest mean until this iteration reaches the second smallest mean. 1 vs. 4 = 145.0 - 129.3 = 15.7 > 6.07 1 vs. 3 = 145.0 - 131.5 = 13.5 > 6.07 3 vs. 4 = 131.5 - 129.3 = 2.2 < 6.07 Step 5: Summarize the painvise comparison results by arranging the a treatments in order of mean and then underlining treatments whose means are not significantly different. 2 5 1 3 4
Exercise 13.3
Exercise 13.4 I
E
(Calculation of Residual) e2, = y2, - y2, = y2, - J 2 , = 143 - 145.3= -2.3
(overestimate)
(Sample Size Determination; Fixed-Effects Factor) Use the operating characteristic curve of Figure 13-6(b) in MR for a fixedeffects factor with vl = 4 (= a - 1 = 5 - I). From the OC curve, the power of the test according to n is found as follows:
13-28 Chapter 13
Design and Analysis of Single-Factor Experiments: The Analysis of Variance
Exercise 13.4 (cont.)
Thus, at least n = 3 replicates are needed to achieve the designated power of the test.
Exercise 13.5
I (Estimation of Variance Components; Random-Effects Factor) I ( I ) B2 MS, = 16.2 =
I Exercise 13.6
These variance estimates indicate that about 79% (= 62.2178.4)of the total variability in tube conductivity is attributable to coating type. 1. (Test on the Significance of a Fixed-Effects Factor; Complete Block Design) The analysis of variance of the leaking current data is as follows:
1 SS~reatments
=-
" i=1
2 Y.. ~ i .-
-
219.4 27.72 T - -= 5.5 20
13-29
Answers to Exercises
Exercise 13.6 (cont.)
I
Source of Variation
Sum of Squares
Degrees of Freedom
Mean Square
fo
Channel Length
5.5
3
1.8
4.4
Channel Width
3.6
4
0.9
Error
5.0
12
0.4
Total
14.1
19
Step 1: State Ho and H I . Ho: z, = z 2 = ... = ~ , = 0 H I : z i + 0 for at least one i, i = 1 to 4 Step 2: Determine a test statistic and its value.
Step 3: Determine a critical value(s) for a. fa,a-l,(a-l)(b-1)
= f0.05,4-1,(4-1)(5-1)
= f0.05,3,12 = 3'49
Step 4: Make a conclusion. Since f o =4.4> fa,a-l,(a-l)(b-l, =3.49, reject Ho at a = 0 . 0 5 .
2. (Least Significant Difference Method) Step 1: Determine the LSD for a .
Step 2: Arrange the means of a treatments in descending order. Y3.=1.9, Y4.=1.9, y2, =0.9, and y,, =0.8 Step 3: Compare the difference between the largest and smallest means with the LSD. Continue this comparison with the next smallest mean as long as the mean difference is greater than the LSD. 3 , 4 VS.1 = 1.9 - 0.8 = 1.1 > 0.87 3 , 4 vs. 2 = 1.9 - 0.9 = 1.0 > 0.87 Step 4: Continue Step 3 for the next largest mean until this iteration reaches the second smallest mean. 2 VS. 1 = 0.9 - 0.8 = 0.1 < 0.87 Step 5: Summarize the painvise comparison results by arranging the a treatments in order of mean and then underlining treatments whose means are not significantly different. 3 4 2 1
3. (Calculation of Residual) ezl = yZ1- j Z 1= y21- ( j 2 ,+Y,l -j,.)=0.8-(0.9+0.9-1.4)=0.4
I
1
14 14-1 14-3 14-4 14-5 14-7
14-1
Introduction Factorial Experiments Two-Factor Factorial Experiments General Factorial Experiments 2k Factorial Designs 14-7.1 22 Design 14-7.2 2k Design for k 2 3 Factors 14-7.3 Single Replicate of the 2k Design
Design of Experiment with Several Factors
14-8 Blocking and Confounding in the 2k Design 14-9 Fractional Replication of the 2k Design 14-9.1 One-Half Fraction of the 2k Design 14-9.2 Smaller Fractions: The 2k-p Fractional Factorial MINITABApplications Answers to Exercises
Introduction
This chapter presents statistical concepts and methods to design an experiment with multiple factors (2 2) and analyze the data obtained from the experiment. Most of the techniques introduced in Chapter
13 for a single-factor experiment are extended to this multiple-factor experiment.
14-3
Factorial Design
CI Explain the term factorial design. Distinguish between main and interaction effects. Explain why information of interactions between factors is more meaningful than that of the main effects when the interactions are significant. O Explain the potential pitfall of a one-factor-at-a-time design when interactions exist.
14-2
Chapter 14
Factorial Design
Main vs. Interaction Effects
Design of Experiment with Several Factors A factorial design is an experimental design where all combinations of factor levels under consideration are tested at each replicate. For example, if factors A and B have two and three levels, respectively, a factorial design of A and B runs six (= 2 x 3) combinations of the levels at each replicate. The effect of a factor indicates the change in response by a change in the level of the factor. Two types of factor effects are defined: 1. Main effect: The effect of a factor alone. 2. Interaction effect: The effect of an interaction between factors. An interaction exists between factors if the effect of one factor depends on the condition of the other factor(s). Figure 14-1 shows that the main effect of A is independent of the level of B and vice versa, indicating the absence of an interaction between A and B. In other words, the main effects of A at the low and high levels of B are the same as 20 (i.e., 30 - 10 = 20 at the low level of B and 40 - 20 = 20 at the high level of B). Similarly, the main effects of B at the low and high levels of A are the same as 10. This insignificant AB interaction can be identified by calculating the difference of the diagonal averages:
Notice that the Bhighand Blowlines in the figure are parallel to each other when the interaction of A and B is absent. Response 50
0
A low
A high
Factor A
Factor A A~OW Ahigh Factor B
Blow
Bhizh Totals Averages
~vm-~~*-wea~ B%v---
eBa v-s
-aBa"--a.e
a
10 20 30 - *15&V
?
mw
v
sv
ve
30 40 70 35 *
d b BA
*<
B%
Torais
Averages
40 60 100
20 30
#*
a a
-
B*
vm
-
25 e
-
Figure 14-1 Two-factor factorial experiment: no interaction. On the other hand, Figure 14-2 illustrates that the main effect of A (or B) varies depending on the level of the other factor, indicating the presence of an interaction between A and B. The main effects of A at the low and high levels of B are 20 (= 30 - 10) and -20 (= 20 - 40) each, resulting in an average of zero as
i 1
1
14-3 Factorial Experiments
Main VS. Interaction Effects (cont.)
14-3
Response --
50
--
- I
0
A low
A high
Factor A
Factor B Totals
Blow Bhigh
10
30
40
20
40 50
20 50
60 100
30
Figure 14-2 Two-factor factorial experiment: presence of an interaction. the main effect of A. Similarly, the main effects of B at the low and high levels of A are 30 (= 40 - 10) and -10 (= 20 - 30) each, resulting in an average of 10 as the main effect of B. This significant AB interaction can be identified by a nonzero value of the diagonal average difference: Ahigh Bhigh + Alow Blow - Alow Bhigh + Ahigh Blow - 20 + 10 40 + 30 ----=-2o AB = 2 2 2 2 Notice that the Bhighand Blowlines in the figure are not in parallel in the presence of an interaction between A and B. As illustrated in Figure 14-2, a significant interaction can mask the significance of main effects. Accordingly, when interaction is present, information of the main effects is not meaningful any longer because the main effect of a factor depends on the condition of the other factor(s).
Necessity of Factorial Design
A factorial design is the only experimental design to check interactions between factors. When a significant interaction exists between factors, experimenting one factor at a time can produce inappropriate results. For example, in Figure 14-2, the maximum response is obtained at A =Alo, and B = Bhighby experimenting all the treatment combinations of A and B. But if the experiment had been designed first for A at B = Blowand then for B at a time, the maximum response would have been concluded at A = Ahighand B = Blowas follows: (1) First the maximum response at B = BI, would be found at A = Ahigh. (2) Then by experimenting B at A = Ahigh,the maximum response would be achieved at B = Blow. To avoid this kind of incorrect conclusion in the one-factor-at-a-time design, the factorial design should be used unless the absence of interactions is certain.
14-4
Chapter 14
Design of Experiment with Several Factors
Two-Factor Factorial Experiments
14-4
Explain the model of a factorial design with two fixed-effects factors. R Test the significance of the main and interaction effects in a two-fixed factor factorial design. R Compare individual treatment means with each other by Fisher's LSD method. 0 Calculate residuals in a two-factor factorial design. R Explain a method to analyze a two-factor factorial design with a single replicate. ~
w
a
~
e
e
w
a
~
Context
*
~
e
m
m
a
~
~
-
e
w
~
~
~
-
-
-
m
m
~
~
,
~
"
~
-
~
1. Factor: Two fixed-effects factors (A with a treatments and B with b treatments) 2. Repetition: n replicates for each of ab treatment combinations; N (total number of observations) = nab (balanced design; see Table 14-1) 3. Randomization: Complete randomization
Table 14-1 Data Structure of a Two-Factor Factorial Design
Ylll
Y211
Y122
Y 222
...
Y a22
Y.2.
V.2.
Y.b,
Y.b
Factor B
b
<w - M w * a * # w #
~
~
Ylbl
Y2bl
Ylb2
Y2b2
Ylbn
~
Y2bn
Totals
Y1.
Y2..
Means
71..
Y2.
-
Yabl
...
yab2
...
-Ya.. Y.
...
Yo..
~
~
Yabn
-
Model
where: y
= overall mean a
ri
= effect
of the ithlevel of factor A,
r j =0 i=l
-
~
mw---
-
Y.
~
14-4 Two-Factor Factorial Experiments
Model (cont.)
p I. = effect of thejthlevel of factor B,
zp
14-5
b
=0
j=l
( T P ) ~= effect of interaction between A and B, b
a
~ ( T P )=,O for j = 1 . 2 ,...,b , ~ ( T P=O,for ) ~ i = 1 , 2,... , a i=l
j=1
E~~ = random error
(Notes) 1.
- i.i.d. N (0, o2)
q,k - ~ ( p , . , a ~ )
2. In the fixed-effects model, the treatment effects zi9s,Pj's, and ( T P ) ~ ' ~ are defined as deviations from the overall mean; thus, each sum of 'ti's, Pj's, and ( T P ) ~is' ~zero. ANOVA
In the two-factor factorial design, the analysis of variance decomposes the total variability in Y (SSn total sum of squares; degrees of freedom = N - 1) into four components: (1) Variability due to factor A (SSA;degrees of freedom = a - 1) (2) Variability due to factor B (SSB; degrees of freedom = b - 1) (3) Variability due to interaction AB ( S S A ~degrees ; of freedom = ( a - l)(b- 1)) (4) Variability due to random error (SSE; degrees of freedom =ab(n- l ) = N - a b ) In other words,
ssT= ssA+ ssB+ ssAB+ ssE a
where:
SST
b
n
a
= C Y , X ( Y , ~-F..)' a
b
2
AT
n
i=1 j=1 k=l
b
ssB= ~
2
b
~-y.,)' ( =
i=1 ,=I k=l b
2
n
S S A B = c C C ( ~ i j- ,5 ) ' i=1 j=1 k=l
2
Y...
i=l
n
~ a
z%-, zY,-, a
ssA= C ~ C ( L ~ . . - L . )=' a
2
n
b
= C C -C.Y Y-~
Y j=1 an a
b
2
~Y... 2
, 2
yij Y.. =CC---n iV
ss, - SS,
i=1 j=1
By dividing each sum of squares by the corresponding degrees of freedom, the mean squares of A, B, AB, and error are determined as follows:
14-6
Chapter 14
ANOVA (cont.)
Design of Experiment with Several Factors The expected values of these mean squares are a
= o2 (Note) MSE is an unbiased estimator of 02.
E ( M S E )= E(&]
The ratios MSA/MSE,MSB/MSE,and MSAB/MSEhave F distributions:
-
F=-MSAB F((a - l)(b - I), N - ab) MSE
These F values become small as the effects of A, B, and AB are insignificant (i.e., the null hypotheses Ho: zi = 0, i = 1 to a, Ho: = 0,j = 1 to b, and Ho: (z& = 0 are true). The ANOVA results of the fixed factorial design are summarized in Table 14-2 to test the significance of the effects.
Table 14-2 ANOVA Table for a Two-Factor Factorial Design: Fixed-Effects Model .
Source of Variation A B AB
Sum of Squares
Error
SSE SST
Total
Hypothesis Test (F-test)
ssA SSB
ssAB
Degrees of Freedom a -1 b -1 (a - l)(b - 1)
Mean Square MSA MSB MSAB MSE
N-ab
Fo MSA I MSE MSB I MSE MSABI MSE
N-1
Step 1: State Ho and HI. Ho: zi=O f o r a l l i 7 s , i = 1 , 2 ,..., a H I : zi f 0 for at least one i Ho: & = 0 f o r a l l j ' s , j = 1 , 2 ,..., b HI: Pi # 0 for at least one j
1
for A
) for B
Ho: ( ~ f i=)0~ for all combinations of i's and j's HI: ( T P )#~ 0 for at least one combination of i and j
) for AB
1 I
14-4 Two-Factor Factorial Experiments
Hypothesis Test (cant.)
14-7
Step 2: Determine a test statistic and its value. SS, /(a - 1) - M S ~ F ( a - 1,N - ab) for A Fo= SSE/(N-ab) MS, SSBl(b - 1) - MSB F(b - 1,N - ab) for B Fo = SS, l(N - ab) MS, SSAB /(a - l)(b - 1) - MSAB F((a - l)(b - l), N - ab) for AB Fo= SS, l(N - ab) MSE
-----
_--
Step 3: Determine a critical value(s) for a. fa,a-1,~-ab forA forB
fa,b-1,~-ab
fa,(a-l)(b-l),~-ab
for AB
Step 4: Make a conclusion. Reject Ho if f0 f0
' '
fa,a-1,~-ab
for A
fa,b-1,~-ab
for
fo > fa,@-l)(b-l),N-ab
for AB
It is recommended that the test for interaction be conducted first followed by the tests for main effects. If interaction is insignificant, the tests for main effects are meaningful; otherwise, the tests for main effects are meaningless because the effect of a factor is subject to the condition of the other factor(s).
Multiple Comparison
For significant fixed factors, the means of their individual treatments are compared by a multiple comparison method such as Fisher's least significant difference (LSD) method (see Section 13-2.3). When interaction AB is insignificant, the treatment means of each significant main effect are compared. When AB is significant, the treatment means of one factor (say A) at a particular level of the other factor (say B) are compared. For a fixed two-factor factorial design, the LSDs are determined as follows: LSD = ta,2,N-ub
JT
2MSE
for A
.,/%
-ta/2,~-ab
for B for AB
Residual Analysis
Like the single-factor model (shown in Section 13-2.2), the two-factor factorial model has three assumptions on the error term: (1) normality, (2) constant variance, and (3) randomness. These assumptions can be checked by analyzing residuals (eyk= yijk- jijk = yqk- Yy, ) as explained in Section 13-2.5.
14-8
Chapter 14
Factorial Design with a Single Replicate
Design of Experiment with Several Factors Sometimes a two-factor factorial experiment has only one observation (n = 1) for each combination of the treatments. In this case, the error degrees of freedom become zero (= N - ab = nab - ab for n = 1) and thus the error mean square is unavailable. In this unreplicated design, the interaction mean square can be used as the error mean square, assuming the interaction effect is negligible. However, this nointeraction assumption can yield an improper conclusion if the interaction is significant; thus, data and residuals should be carefully examined to check the significance of the interaction. An engineer suspects that the surface finish (Y) of metal parts is influenced by drying time ( A ) and type of paint (B). Three drying times (a = 3; 20,25, and 30 min.) and two types of paint (b = 2) are specifically selected by the engineer and three parts (n = 3) are tested for each combination of the drying times and paint types, yielding the following: Drying T i m (A; unit: min.) 20 (1) 25 (2) 30 (3) Totals Means 74 73 78 1 64 61 85 62 1 69.0 Paint 50 44 92 Type (B) 2 86 73 45 70 1 77.9
Example 14.1 (MR 14-2)
.~~a*"d~8w-.**w'a.8""~w~~.B.~a"~.m
1
mmmwel--___-
*"-
Totals Means
72.3
,m,wa.a"sasw-b
'*,a"
72.8 -._,.. ' . " ~ . ~75. " .."^ ~.~,"~"~"
."m*d"e_"
The following summary quantities are also provided for the surface finish data:
zzx a
N = 18, and
b
n
yVk2= 101,598,
z b
a
y i 2 = 582,726,
y , j , 2= 877,042,
7'2
y,,' = 298,066
1 . (Test on the Significance of Main and Interaction Effects; Two-Factor Factorial Design) Test if the effects of A, B, and AB are significant each at a = 0.05 by the analysis of variance. The analysis of variance of the surface finish data is as follows:
1
--
-
14-9
14-4 Two-Factor Factorial Experiments
Example 14.1 (cont.)
The ANOVA results are summarized in the following table: Saurce of Sumof Degrees of Mean VWw Squaw Freedom Square 27.4 2 13.7 Drying Time (A)
0.07
355.6
1
355.6
1.90
AB
1,878.8
2
939.4
5.03
Error
2,242.7
12
186.9
Total
4,504.4
17
Paint Type (B)
Step 1: State Ho and H I . Ho:zi = 0 for i = 1 to 3 HI:zi # 0 for at least one i
) for A
Ho:P j = O f o r j = l and2 HI:Pi # 0 for at least onej
) for B
Ho:( T P )=~ 0 for all combinations of i's and j's HI:( T P )#~ 0 for at least one combination of i andj
h
) for AB
Step 2: Determine a test statistic and its value.
MSB - 355.6 - 1 for B MS, 186.9 MSAB - 939.4 fo= -- -= 5.03 for AB MS, 186.9 f0 -----
Step 3: Determine a critical value(s) for a. fa,a-1,N-ab = f0.05,3-1,18-6 fa,b-1,N-ab
= f0.05,2,12 = 3'89 for A
= f0.05,2-1,18-6 = f0.05,1,12 = 4'75
fa,(a-l)(b-l),N-ab
for
- f0.05,(3-1)(2-1),18-6 = f0.05,2,12 = 3'89 for AB
Step 4: Make a conclusion. Since fo = 5.03 > fa,(a-l)(b-l),N-ab = 3.89, reject Hoat a = 0.05 for AB. Due to the significant interaction, tests on the main effects of A and B are meaningless.
14-10 Chapter 14
Design of Experiment with Several Factors 2. (Multiple Comparison) Compare the surface finish means of the three drying times for paint type 1. The corresponding surface means are =85.0 YII, =62.7, y2,.=59.3, and
Example 14.1 (cont.)
rm
Step 1: Determine the LSD for a.
Step 2: Arrange the means of a treatments in descending order. y3,,=85.0, TI,,=62.7, and Y21, =59.3 Step 3: Compare the difference between the largest and smallest means with the LSD. Continue this comparison with the next smallest mean as long as the mean difference is greater than the LSD. 3 VS. 2 = 85.0 - 59.3 = 25.7 > 24.32 3 VS. 1 = 85.0 - 62.7 = 22.3 < 24.32 Step 4: Continue Step 3 for the next largest mean until this iteration reaches the second smallest mean. 1 vs. 2 = 62.7 - 59.3 = 3.40 < 24.32
I
Step 5: Summarize the painvise comparison results by arranging the a treatments in order of mean and then underlining treatments whose means are not significantly different. 3 1 2
I
3. (Calculation of Residual) Calculate the residual 0fy313= 92. h
Exercise 14.1 (MR 14-4)
e313= y,,, - j313 = Y , , ~ - j731, = 92 - 85 = 7 (underestimate)
An experiment is conducted to determine whether either firing temperature ( A ) or furnace position (B) affects the baked density (Y) of a carbon anode. Three firing temperatures (a = 3) and two furnace positions (b = 2) are specifically selected by the experimenter and three measurements (n = 3) are collected for each combination of the treatments. The measurements are as follows:
Temperature (A; unit: O C ) 1
SOO(1) 5.70 5.65
825 (2) 10.63 10.80
850 (3) 5.65 5.10
TOMS
Means
65.69
7.30
5.47 5.21 33.14 5.52
10.26 10.04 62.04 10.34
5.38 5.32 32.61 5.44
62.10
6.90
Position (B) 2 Totals Means
127.79
7.10 I1 The following summary quantities are also provided for the baked density data:
(
a
N =18, ~
b
n
~ 2 =1,003.1, ~
gy,'
b
=6,010.6, Y Z CY,~ =8,171.6, ,'
-
-
14-5 General Factorial Experiments
Exercise 14.1 (cont.)
and
zC
14-11
yb,2 = 3,007.7
1 1. Test if the effects of A, B, and AB are significant each at a = 0.05 by the analysis of variance. 2. Test the differences between the baked density means of the three firing temperatures (A): TI,,= 5.52 , y,,, = 10.34, and y,,, = 5.44 3. Calculate the residual ofy213= 10.43.
14-5
General Factorial Experiments
Factorial Design with More Than Two Factors
14-7
An experiment can include more than two factors. The analysis of variance procedure for this general factorial experiment is just an extension of that of a two-factor factorial design but requires tedious computation; thus, a computer software program is mostly used for the analysis.
2k ~actorialDesigns
14-7.1 2* Design
O Estimate the main and interaction effects of a 22 factorial desi n.
0 Test the significance of the main and interaction effects of a 29 factorial design. 2k Design
A 2k factorial design is a factorial design with k factors, each having two levels. The low and high levels of each factor are denoted by - (or '-1 ') and + (or ' 1'), respectively. A complete replicate of this design requires 2k observations, which is the smallest number of observations for a complete factorial design with k factors. Due to the minimum number of runs, the 2k design is particularly useful for an experiment with many factors for screening purposes. Two treatments are chosen for each factor by assuming a linearity of the response over the range of the factor levels.
ANOVA for 22 Design
The totals of the treatment combinations in a 22 design (2k design where k = 2 factors) are denoted by a series of lowercase letters: (I), a , b, and ab (see Figure 14-3). The presence (absence) of a letter indicates that the corresponding factor is at the high (low) level in the treatment combination. For example, the letter a indicates that factor A is at the high level and factor B is at the low level.
14-12 Chapter 14
Design of Experiment with Several Factors high (+I; +)
ANOVA for 2' Design (cont.)
low (-1; -)
low (-1;
A
high (+I; +)
-1
Figure 14-3 The 22 factorial design. The main effects A and B and interaction AB with n replicates are estimated as follows:
The linear combinations in the numerators of the main and interaction equations are called orthogonal contrasts (denoted as C) as they have the following two properties: (1) Each linear combination has coefficients {ci) whose sum is zero i
x
(2) Any two contrasts have coefficients {ci) and {dif whose sum of the products of the coefficients is zero ( cidi = 0 ). i
By using the orthogonal contrasts, the sums of squares for A, B, and AB are calculated: C: - [a+ab-b-(1)12 SS, = 74n
nC
ci
i=l
[ab + (1) - a - b12 4n
-,
Note that the total and error sums of squares are calculated by 2
2
ssT = c c R Y : i=1 j=l k=l
2
Y...
14-13
14-7 2k Factorial Designs
ANOVA for 22 Design (cont.)
Since each contrast sum of squares has a single degree of freedom, the mean squares of A, B, and AB are equal to the corresponding sums of squares each. The mean squares of error is M s E =--- SS, - ss, 4(n-1) N - 4 The ratios MSA/MSE,MSBIMS~, and MSAB/MSEhave F distributions:
These F values become small as the effects of A, B, and AB are insignificant (i.e., Ho: zi = 0 , Ho: P , = 0 , and Ho:( T P ) ~= 0 are true). The variation quantities from ANOVA are summarized in Table 14-3.
*"" "
Hypothesis Test (F-test)
B AB
SSB SSAB
Error
SSE SST
Total -
*
_
X
m
*
~
-
-
~
~
*
-
.
^
I__
-
^"""*
",-
1 1 N-4 N-1
Step 1: State HOand H I . Ho: zi = 0 for i = 1 and 2 H I : zi # 0 for at least one i Ho: HI:
Pi = 0
Ho: HI:
( T P ) =~ 0
for j = 1 and 2 & # 0 for at least onej
#
MSE X1 L
X
I----
''A~
=--
-
XIWII-"-
'I
SS, I 4(n - 1 )
---
- MSAB
MSE
r
X1
awemIIX^X V
) for A ) for B
Step 2: Determine a test statistic and its value. ss, 11 - MSA F(1,4(n - I)) F,, = SS, I4(n - 1 ) MSE SS, I1 --MSB F(1,4(n - 1 ) ) F,, = SS, I 4(n - 1) MS,
F,, =
X1
for all combinations of i's and j's 0 for at least one combination of i and j
---
MSB I MSE MSABI MS,
MSB M~AB
for A for B
F(1,4(n - 1)) for AB
) for AB
14-14 Chapter 14 Hypothesis Test (cont.)
r
Design of Experiment wit11 Several Factors Step 3: Determine a critical value(s) for a. fa,l,4(n-l) for A, B, and AB Step 4: Make a conclusion. Reject HOif for A, B, and AB fo > f a , l , 4 ( n - l )
Example 14.2
Exercise 14.2
(Calculation of Effects and Sums of Squares; 22 Design) Consider the surface finish data in Example 14-1. Assume that only two drying times (20 and 30 min.; a = 2) and two types of paint (b = 2) were tested. The following summary quantities are provided for the surface finish data: n = 3 , (1)=188, a = 2 5 5 , b = 2 4 6 , a n d ab=196 Estimate the effects A, B, and AB and calculate the corresponding sums of squares. nThe orthogonal contrasts of the 22 design for the surface finish data are C , =a+ab-b-(1)=255+196-246-188=17 C B =b+ab-a-(1)=246+196-255-188=-1 CAB=ab+(l)-a-b=196+188-255-246=-117
I
Thus, the estimates of the main and interaction effects are
I
Then, the corresponding sums of squares are
Consider the carbon anode baked density data in Exercise 14- 1. Assume that only two firing temperatures (800 and 850 "C; a = 2) and two furnace positions (b = 2) were tested. The following summary quantities are provided: n = 3 , (1)=17.18, a =16.65, b = 15.96, and ab=15.96 Estimate the effects A, B, and AB and calculate the corresponding sums of squares.
14-7 2k Factorial Designs
14-15
14-7.2 2* Design for k 2 3 Factors
O Identify the orthogonal contrasts of a 2k design. Estimate main and interaction effects in a 2k factorial design. -mmwa
"mmeb*a-m--
Orthogonal Contrasts of 2k Design
sms
-
-
m e
*--
----
m-
aswsd-
---*bms
m h w
* *
--
--%.
As shown in the anal sis of variance for a 22 design, orthogonal contrasts are useful to analyze a 2Ydesign. The effects of factors and their interactions and corresponding sums of squares can be calculated by using orthogonal contrasts: Contrast Effet = n2k-'
To determine the orthogonal contrasts of a 2k design in a systematic manner, a table of plus and minus signs is used (see Table 14-4 for a 23 design). This table includes columns of treatment combinations, identity (I), main effects, and interactions. In a main effect column, a plus (minus) sign indicates the high (low) level of the corresponding factor for the treatment condition (e.g., in the main effect column A, a plus is assigned to the treatment combination ac because the level of A is high. Once the signs of the main effects are established, the signs of each interaction column are determined by multiplying associated columns row by row (e.g., AB =A x B). Except the identity column, each column includes an equal number of plus and minus signs. By using the sign table, main and interaction effects and their sums of squares are calculated. For example, in Table 14-4, the main effect A and the corresponding sum of squares when n = 2 are determined as follows:
Table 14-4 Signs for Orthogonal Contrasts in the Z 3 Design
Treatment Combinations
C
ac bc abc
f
+ + + +
Effi,cts AB C
A
3
-
-
+
+
-
-
-
+
+ +
-
+
+ + + +
AC
BC
-
-
+ -
+
+ +
ABC
+ -
+
14-16 Chapter 14
Example 14.3
Design of Experiment with Several Factors
(Calculation of Effects and Sums of Squares; 2k Design) Power grip strength is measured on a handle to examine the effects of three handle design factors: diameter (A), angle (B), and weight (C). Two treatments are chosen for each factor and a single replicate (n = 1) is run. The grip strength measurements (unit: kg) are as follows:
Estimate the main and interaction effects of this 23 design and calculate the corresponding sums of squares. ~ - r rFrom Table 14-4, the orthogonal contrasts of the 23 design for the grip strength data are C A=a+ab+ac+abc-(1)-b-c-bc =40+33+42+34-35-31-36-30=17 CB = b+ab+bc+abc-(1)-a-c-ac =31+33+30+34-35-40-36-42~-25 CAB= ( l ) + a b + c + a b c - a - b - a c - b c =35+33+36+34-40-31-42-30~-5 C , =c+ac+bc+abc-(1)-a-b-ab =36+42+30+34-35-40-31-33=3 C,, = ( l ) + b + a c + a b c - a - a b - c - b c =35+31+42+34-40-33-36-30=3 CB, =(l)+a+bc+abc-b-ab-c-ac =35+40+30+34-31-33-36-42~-3 CAB,=a+b+c+abc-(1)-ab-ac-bc =40+31+36+34-35-33-42-30~1
I
Thus, the estimates of the main and interaction effects are
I
The corresponding sums of squares are
14-17
14-7 2k Factorial Designs
Example 14.3 (cont.)
d
An engineer is interested in the effects of cutting speed (A),metal hardness (B), and cutting angle (C) on the life of a cutting tool. Two levels are chosen for each factor and two replicates (n = 2) of this 23 factorial design are run. The tool life data (unit: hours) are as follows:
Exercise 14.3 (MR 14-13)
a
b ab c ac bc
43 5 348 472 453 377 500
325 354 552 440 406 605
760 702 1024 893 783 1105
380.0 351.0 512.0 446.5 391.5 552.5
Estimate the main and interaction effects of the factors and calculate the corresponding sums of squares.
14-7.3 Single Replicate of the 2* Design
Explain a method to analyze a two-factor factorial design with a single replicate. O Identify significant high-order interactions in a 2k design by a normal probability plot.
- -
'
-
2k Design with a Single Replicate
--
I X("
--^
ni
le-w.#i-n^-w-t
--wv*rraixi-lea
a,"
%
w^wB
?A
ar ra..
*,"-*ax-
i . i
a a
-"
i
xi
rx
-
-.*
llxa
Like the factorial design with a single replicate (presented in Section 14-4), an assumption is necessary for an unreplicated 2k design to have the error mean square. Interactions higher than a second-order interaction are often assumed negligible and then used as the error mean square. Caution should be exercised to check if the interaction(s) used for the error term is significant. Significant high-order interactions can be identified by constructing a normal probability plot of main and interaction effects. Negligible effects will be normally distributed with mean zero and variance 02and thus lie along a straight line in the normal probability plot, whereas significant effects deviate from the straight line (see Figure 14-21 of MR).
14-18 Chapter 14
lr
Example 14.4
Design of Experiment with Several Factors
(Normal Probability Plot of Effects) Consider the grip strength experiment with a single replicate in Example 14-3. The estimates of the main and interaction effects are as follows: A = 4.25 AC= 0.75 B = -6.25 BC= -0.75 C = 0.75 ABC= 0.25 AB= -1.25 Construct a normal probability of these effects and check if the third-order interaction ABC is significantly important. w The normal probability plot of the main and interaction effects below indicates that the second- and third-order interactions (including ABC) are not important. Thus, ABC can be used as the error term to test the significance of the main and other second-order interaction effects.
-5
0
5
Effect
d
Exercise 14.4 (MR 14-21)
14-8
An experiment has run a single replicate of a 24 design and calculated the following factor effects: A = 80.25 AB = 53.25 ABC = -2.95 B = -65.50 AC = 11.00 ABD = -8.00 C = -9.25 AD = 9.75 ACD = 10.25 D = -20.50 BC = 18.36 BCD = -7.95 BD = 15.10 ABCD = -6.25 CD =-1.25 Construct a normal probability of these effects and check if the third- and fourthorder interactions are significantly important.
Blocking and Confounding in the 2* Design
CI Construct 2P blocks for a 2k design. 0 Estimate main and interaction effects of a 2k design with 2P blocks.
14-8 Blocking and Confounding in the 2k ~ e s i ~ n 14-19
2k Block Design and Confounding
Sometimes all treatment combinations cannot be run under homogeneous conditions for various reasons. For example, suppose that a 22 design (with 4 treatment combinations) requires a total of 16 hours (4 hours for each combination) to run a single replicate; then the experiment is conducted over two days (blocks) and randomization is restricted within each block. This kind of factorial design is called a 2k design with 2p blocks, p < k. Some effects are confounded with blocks in a 2k block design. As an example, Figure 14-4 displays a 22 block design in which the interaction AB is confounded with the blocks. In this block design, the contrasts (C) to estimate the effects of A, B, and AB are
C_A = ab + a - b - (1)
C_B = ab + b - a - (1)
C_AB = ab + (1) - a - b
However, the AB contrast is identical to the block contrast, and thus AB is confounded with the blocks.
[Figure 14-4: A 22 block design; Block 1 contains (1) and ab, Block 2 contains a and b.]
Construction of Blocks
To construct a 2k design with 2p blocks, first, p effects should be selected for confounding. In general, high-order interactions are chosen to confound with blocks; main effects and low-order interactions are less likely to be selected because of their importance. In addition to the p effects, 2^p − p − 1 effects (interactions of the p effects, called generalized interactions) are also confounded with blocks. For example, in a 23 design with 22 blocks, if AB and BC are selected for confounding, their generalized interaction AC (= AB × BC = AB²C = AC because B² = I) is also confounded. Care should be taken not to confound effects of interest. Second, based on the selected p effects, defining contrasts (L) are formulated:
L = α1x1 + α2x2 + ... + αkxk
where:
αi = 0 if the ith factor is not in the confounded effect; 1 if the ith factor is in the confounded effect
xi = 0 if the ith factor is not in the treatment combination; 1 if the ith factor is in the treatment combination
For example, if AB and BC are selected for confounding in a 23 design with 22 blocks, two defining contrasts are established from AB and BC as follows:
L1 = x1 + x2 and L2 = x2 + x3
where:
xi = 0 if the ith factor is not in the treatment combination; 1 if the ith factor is in the treatment combination
Construction of Blocks (cont.)
Lastly, treatment combinations having the same values of L (mod 2) are placed in the same block. For example, for the 23 block design with L1 = x1 + x2 and L2 = x2 + x3, the four blocks of treatment combinations are determined according to their values of L1 (mod 2) and L2 (mod 2) as follows:
Treatment    L1 = x1 + x2 (mod 2)    L2 = x2 + x3 (mod 2)    Block
(1)          (0 + 0) (mod 2) = 0     (0 + 0) (mod 2) = 0       1
a            (1 + 0) (mod 2) = 1     (0 + 0) (mod 2) = 0       2
b            (0 + 1) (mod 2) = 1     (1 + 0) (mod 2) = 1       3
ab           (1 + 1) (mod 2) = 0     (1 + 0) (mod 2) = 1       4
c            (0 + 0) (mod 2) = 0     (0 + 1) (mod 2) = 1       4
ac           (1 + 0) (mod 2) = 1     (0 + 1) (mod 2) = 1       3
bc           (0 + 1) (mod 2) = 1     (1 + 1) (mod 2) = 0       2
abc          (1 + 1) (mod 2) = 0     (1 + 1) (mod 2) = 0       1
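(Note) The mod-2 bookkeeping above is easy to mechanize. The following Python sketch (layout and names are illustrative, not from MR) assigns the eight treatment combinations of a 23 design to four blocks using the defining contrasts L1 = x1 + x2 and L2 = x2 + x3:

```python
from itertools import product

factors = "abc"                       # A = x1, B = x2, C = x3
contrasts = [(0, 1), (1, 2)]          # L1 uses x1, x2 (from AB); L2 uses x2, x3 (from BC)

blocks = {}
for levels in product((0, 1), repeat=len(factors)):
    # Standard label: letters of the factors at the high level; "(1)" if none.
    label = "".join(f for f, x in zip(factors, levels) if x) or "(1)"
    key = tuple(sum(levels[i] for i in c) % 2 for c in contrasts)   # (L1 mod 2, L2 mod 2)
    blocks.setdefault(key, []).append(label)

for key, combos in sorted(blocks.items()):
    print(f"L values {key}: block contains {combos}")
# e.g. (0, 0) -> ['(1)', 'abc'], (1, 0) -> ['a', 'bc'], (1, 1) -> ['b', 'ac'], (0, 1) -> ['ab', 'c']
```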
ANOVA
The analysis of variance procedure using orthogonal contrasts, presented in Section 14-7.2, is applicable to the 2k block design except for the treatment of the confounded effect(s). The confounded effect(s) is redefined as the block effect, and the significance of the block effect can be tested by comparison with the error mean square. Recall that, for a single-replicate experiment, a high-order interaction(s) can be used as the error mean square.
Example 14.5
The effects of backrest angle (A) and location of lumbar support (B) on the disc pressure (unit: N) at the low back are under study. Two backrest angles (90° and 120°) and two lumbar support locations (1 and 5 cm) are selected in the experiment.
1. (Construction of Blocks; 22 Block Design) Suppose that the four treatment combinations cannot be examined under the same condition. Establish two blocks with AB confounded for the 22 design.
The defining contrast is formulated from AB:
L = x1 + x2
where:
xi = 0 if the ith factor is not in the treatment combination; 1 if the ith factor is in the treatment combination
The four treatment combinations are assigned to two blocks according to their values of L (mod 2):

Treatment    L = x1 + x2 (mod 2)    Block
(1)          (0 + 0) (mod 2) = 0      1
a            (1 + 0) (mod 2) = 1      2
b            (0 + 1) (mod 2) = 1      2
ab           (1 + 1) (mod 2) = 0      1

2. (Calculation of Effects and Sums of Squares) The following disc pressure data are collected:

Treatment              (1)    a    b    ab
Disc pressure (N)      620  320  380   155

Calculate the sums of squares for this 22 block design with AB confounded.
-
Example 14.5 (cont.)
The orthogonal contrasts of the 22 design for the disc pressure data are
C_A = a + ab - b - (1) = 320 + 155 - 380 - 620 = -525
C_B = b + ab - a - (1) = 380 + 155 - 320 - 620 = -405
C_AB = ab + (1) - a - b = 155 + 620 - 320 - 380 = 75
Thus, the estimates of the effects are
A = C_A / (n2^(k-1)) = -525 / 2 = -262.5
B = C_B / (n2^(k-1)) = -405 / 2 = -202.5
Block effect = AB = C_AB / (n2^(k-1)) = 75 / 2 = 37.5
The corresponding sums of squares are
SS_A = C_A² / (n2^k) = (-525)² / 4 = 68,906.3
SS_B = C_B² / (n2^k) = (-405)² / 4 = 41,006.3
SS_Block = C_AB² / (n2^k) = 75² / 4 = 1,406.3
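(Note) A small Python check of these calculations, using the standard 2^2 contrast formulas (the variable names are illustrative):

```python
# Single-replicate 2^2 disc pressure data (N) from Example 14.5
y = {"(1)": 620, "a": 320, "b": 380, "ab": 155}
n, k = 1, 2

# Orthogonal contrasts for a 2^2 design
C = {
    "A":  y["a"] + y["ab"] - y["b"] - y["(1)"],
    "B":  y["b"] + y["ab"] - y["a"] - y["(1)"],
    "AB": y["ab"] + y["(1)"] - y["a"] - y["b"],   # confounded with blocks
}

for name, c in C.items():
    effect = c / (n * 2 ** (k - 1))   # effect estimate
    ss = c ** 2 / (n * 2 ** k)        # sum of squares
    print(f"{name:>2}: contrast={c:6.0f}  effect={effect:8.1f}  SS={ss:10.1f}")
```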
Consider the tool life experiment in Exercise 14.3. Suppose that the eight treatment combinations cannot be examined under the same condition.
1. Establish four blocks with AB and AC confounded for the 23 design.
2. Calculate the sums of squares for this 23 design with four blocks. The orthogonal contrasts of the tool life data have been calculated as follows:
C_A = 146, C_B = 674, C_C = 574, C_AB = -90, C_AC = -954, C_BC = -194, and C_ABC = -278
Exercise 14.5 (MR 14-29)
14-9 Fractional Replication of the 2k Factorial Design
14-9.1 One-Half Fraction of the 2k Design
□ Describe the use of a fractional factorial design.
□ Explain the terms generator, defining relation, principal fraction, alternate fraction, aliasing, and dealiasing.
□ Construct a 2k-1 design.
□ Estimate the main and interaction effects of a 2k-1 design.
□ Identify the alias structure of a 2k-1 design.
2k-p Fractional Factorial Design
A 1/2^p (p < k) fraction of a 2k design (denoted 2k-p) can be run when high-order interactions are negligible. As the number of factors k increases, the number of runs required increases geometrically, but a large portion of the runs is used to estimate high-order interactions, which are often negligible. For example, a 25 design (k = five factors), which requires 32 runs, uses, out of the 31 degrees of freedom, 5, 10, and 16 degrees of freedom to estimate the main effects, two-factor interactions, and three-factor and higher-order interactions, respectively. When the three-factor and higher-order interactions can be assumed negligible, a 25-1 fractional factorial (which requires 16 runs) can be used to analyze the main effects and low-order interactions of the five factors.
A 2k-1 design is constructed by selecting the treatment combinations having the same sign (say, plus) in a selected effect column (called the generator) of the table of signs for the 2k design. For example, a 23-1 design can be formed by selecting the four treatment combinations (a, b, c, and abc) that have the plus sign in the ABC column of the sign table of the 23 design (Table 14-5). The fraction with the plus sign is called the principal fraction and the other fraction is called the alternate fraction. Since the identity column I has only plus signs, a 2k-1 design has the following relation (called the defining relation) between the identity and generator columns: the signs of the generator column equal those of the identity column for the principal fraction, and the opposite holds for the alternate fraction. For example, a 23-1 design with generator ABC has the defining relation I = ABC for the principal fraction and I = -ABC for the alternate fraction.
[Table 14-5: Table of plus and minus signs for the 23 design, partitioned by the sign of the ABC column into the principal fraction (I = ABC: a, b, c, abc) and the alternate fraction (I = -ABC: (1), ab, ac, bc).]
Another method to construct a 2k-1 design is as follows:
Step 1: Prepare the sign table of a factorial design with k - 1 factors (called the basic design).
(e.g.) For the 23-1 design in Table 14-6, the basic design starts with two factors (A and B).
Step 2: Add the kth factor and identify the corresponding interaction by using the defining relation of the fractional design.
(e.g.) I = ABC in Table 14-6, so C·I = ABC·C, i.e., C = AB (because C² = I).
Step 3: Determine the signs of the kth factor column from that interaction.
Step 4: Identify the treatment combinations of the 2k-1 design row by row.
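(Note) A Python sketch of Steps 1-4 for the 23-1 design with I = ABC, coding levels as ±1 (the layout and names are illustrative, not from MR):

```python
from itertools import product

# Step 1: basic design in the first k-1 = 2 factors (A, B), coded -1/+1
basic = list(product((-1, 1), repeat=2))

rows = []
for a, b in basic:
    c = a * b                        # Steps 2-3: C = AB from the defining relation I = ABC
    label = "".join(f for f, x in zip("abc", (a, b, c)) if x == 1) or "(1)"
    rows.append(((a, b, c), label))  # Step 4: read off the treatment combination

for signs, label in rows:
    print(signs, "->", label)
# -> (-1, -1, 1) c ; (-1, 1, -1) b ; (1, -1, -1) a ; (1, 1, 1) abc  (the principal fraction)
```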
Table 14-6 Construction of the 23-1 Design with I = ABC

Run    A    B    C = AB    Treatment combination
1      -    -      +       c
2      +    -      -       a
3      -    +      -       b
4      +    +      +       abc
Effects
In a 23-1 design with I = ABC (the principal fraction in Table 14-5), the linear combinations used to estimate the main effects and interactions have the following relationships:
l_A = (1/2)[a - b - c + abc] = l_BC
l_B = (1/2)[-a + b - c + abc] = l_AC
l_C = (1/2)[-a - b + c + abc] = l_AB
These relationships indicate that each linear combination estimates the sum of a main effect and an interaction:
l_A = A + BC,  l_B = B + AC,  l_C = C + AB
In contrast, a 23-1 design with I = -ABC (the alternate fraction in Table 14-5) produces the following estimates of the main effects and interactions:
l'_A = (1/2)[-(1) + ab + ac - bc] = A - BC
and, similarly, l'_B = B - AC and l'_C = C - AB.
It is not possible to differentiate between A and BC, between B and AC, or between C and AB. Thus, assuming the second-order interactions are negligible, the linear combinations l_A (or l'_A), l_B (or l'_B), and l_C (or l'_C) are used to estimate the main effects. A normal probability plot can be used to identify significant interactions (see Section 14-7.3): negligible interactions lie on a straight line in the normal probability plot, whereas significant ones deviate far from the straight line.
Aliasing
In a fractional factorial design, some effects cannot be estimated separately; such effects are called aliases. For example, in a 23-1 design with I = ABC, A and BC, B and AC, and C and AB are aliased with each other. The alias of an effect is found by multiplying the effect by the defining relation. For example, a 23-1 design with I = ABC has the following alias structure:
A = A·I = A·ABC = A²BC = BC (because A² = I)
B = B·I = B·ABC = AB²C = AC (because B² = I)
C = C·I = C·ABC = ABC² = AB (because C² = I)
It is common practice to form a fractional factorial design that aliases the main effects and low-order interactions with high-order interactions (which are mostly negligible).
Dealiasing
A sequence of fractional factorial designs can be performed to obtain more detailed information about interactions. For example, a 23-1 design with I = ABC aliases the main effects with the two-factor interactions, assuming the two-factor interactions are negligible. However, if this assumption is uncertain, the alternate fraction can be run after the principal fraction. By combining the data from the principal and alternate fractions, the main effects and interactions can be estimated separately (called dealiasing) as follows:
(1/2)(l_A + l'_A) = (1/2)[(A + BC) + (A - BC)] = A
(1/2)(l_A - l'_A) = (1/2)[(A + BC) - (A - BC)] = BC
Projection of the 2k-1 Design
When one or more factors are dropped from a 2k-1 design because of their lack of significance, the fractional design projects into a full factorial design in the remaining active factors. For example, a 23-1 design projects into a 22 design when one factor is dropped.
Example 14.6
A study examines the effects of four factors on the maximum acceptable weight (MAW; unit: kg) in lifting: object width (A), lifting level (B), lifting distance (C), and lifting frequency (D). A 24-1 design with I = ABCD is employed, yielding the following data:
1. (Alias Structure; 24-1 Design) Identify the alias structure of this 24-1 design with I = ABCD.
The 24-1 design with I = ABCD has the following alias relationships:
A = A·ABCD = A²BCD = BCD
B = B·ABCD = AB²CD = ACD
C = C·ABCD = ABC²D = ABD
D = D·ABCD = ABCD² = ABC
AB = AB·ABCD = A²B²CD = CD
AC = AC·ABCD = A²BC²D = BD
AD = AD·ABCD = A²BCD² = BC
Example 14.6 (cont.)
2. (Estimation of Effects) Estimate the factor effects. Which effects appear large?
The main effects and interactions are estimated as follows:
l_A = A + BCD = (1/4)(-24 + 32 - 24 + 16 - 37 + 15 - 17 + 18) = -5.25
l_D = D + ABC = (1/4)(-24 + 32 + 24 - 16 + 37 - 15 - 17 + 18) = 9.75
l_AB = AB + CD = (1/4)(24 - 32 - 24 + 16 + 37 - 15 - 17 + 18) = 1.75
l_AD = AD + BC = (1/4)(24 + 32 - 24 - 16 - 37 - 15 + 17 + 18) = -0.25
(Note) The interactions are estimated by forming the AB, AC, and AD columns in the sign table.
The effects A, B, D, and AC are relatively large.
Exercise 14.6 (MR 14-32)
A 24-1 design with I = ABCD is used to study four factors in a chemical process. The factors are A = temperature, B = pressure, C = concentration, and D = stirring rate, and the response is filtration rate. The design and data are as follows:

Treatment           (1)   ad   bd   ab   cd   ac   bc   abcd
Filtration rate      45  100   45   65   75   60   80     96
1. Estimate the factor effects. Which factor effects appear large? 2. Project this design into a full factorial including only important factors and provide a practical interpretation of the results.
14-9.2 Smaller Fractions: The 2k-p Fractional Factorial
□ Explain the conditions of design resolutions III, IV, and V in a fractional factorial design.
□ Construct a 2k-p fractional factorial design with a designated design resolution.
□ Identify the alias structure of a fractional factorial design.
Design Resolution
The term design resolution indicates the pattern of aliases in a fractional factorial design. The alias patterns of resolution III, IV, and V designs are compared in Table 14-7. A resolution III design (such as the 23-1 design with I = ABC) does not alias any main effects with other main effects, but it aliases main effects with two-factor interactions and some two-factor interactions with each other. Next, a resolution IV design (such as the 24-1 design with I = ABCD) does not alias any main effects with other main effects or with two-factor interactions, but it aliases two-factor interactions with other two-factor interactions. Lastly, a resolution V design (such as the 25-1 design with I = ABCDE) does not alias any main effects or two-factor interactions with other main effects or two-factor interactions. The resolution of a fractional factorial design is indicated with a Roman-numeral subscript (e.g., 2^(3-1)_III denotes a 23-1 design with resolution III). As the design resolution becomes higher, more information about interactions is produced. For example, a resolution III design provides information about main effects only, while a resolution IV design provides information about two-factor interactions as well as main effects. Resolution III and IV designs are often used in screening experiments.
Table 14-7 Alias Patterns of Resolution III, IV, and V Designs

Resolution    Main effect with main effect    Main effect with two-factor interaction    Two-factor with two-factor interaction
III           not aliased                     aliased                                    aliased
IV            not aliased                     not aliased                                aliased
V             not aliased                     not aliased                                not aliased

Construction of the 2k-p Design
To achieve a designated resolution for a fractional design with k factors, a set of p generators should be selected properly. Table 14-29 of MR presents recommended generators for various 2k-p designs (k ≤ 15), design resolutions (III, IV, and V), and numbers of runs (n ≤ 128). For example, the table recommends two design generators (D = AB and E = AC, which are equivalent to the design relations I = ABD and I = ACE) for a 2^(5-2)_III design with n = 8 runs.
With the recommended generators of a fractional design, a complete defining relation is determined by incorporating the generalized interactions of the generators (identified by multiplying the generators with one another; 2^p − p − 1 generalized interactions are produced by p generators). For example, the two (p = 2) recommended generators, ABD and ACE, of the 2^(5-2)_III design produce one (= 2^p − p − 1 = 4 − 2 − 1) generalized interaction:
ABD·ACE = A²BCDE = BCDE
Thus, the complete defining relation of the 2^(5-2)_III design becomes
I = ABD = ACE = BCDE
Like the method for constructing a 2k-1 design, a 2k-p design is constructed as follows (see Table 14-8 for an example):
Step 1: Prepare the sign table of a factorial design with k − p factors (the basic design).
Step 2: Add p columns for the recommended design generators.
Step 3: Determine the signs of the p columns from the design generators.
Step 4: Identify the treatment combinations of the 2k-p design row by row.
Table 14-8 Construction of the 2^(5-2)_III Design with I = ABD = ACE = BCDE
Run    A    B    C    D = AB    E = AC    Treatment combination
1      -    -    -      +         +       de
2      +    -    -      -         -       a
3      -    +    -      -         +       be
4      +    +    -      +         -       abd
5      -    -    +      +         -       cd
6      +    -    +      -         +       ace
7      -    +    +      -         -       bc
8      +    +    +      +         +       abcde

Alias Structure of the 2k-p Design
The alias structure of a 2k-p design is found by multiplying each effect by the complete defining relation. For example, the 2^(5-2)_III design with I = ABD = ACE = BCDE has the following alias structure:
A = BD = CE = ABCDE        BC = ACD = ABE = DE
B = AD = ABCE = CDE        CD = ABC = ADE = BE
C = ABCD = AE = BDE
D = AB = ACDE = BCE
E = ABDE = AC = BCD
When the number of factors under consideration is large (say, k ≥ 5), high-order (say, third- or higher-order) interactions are assumed negligible, which greatly simplifies the alias structure. For example, the alias structure above of the 2^(5-2)_III design with I = ABD = ACE = BCDE simplifies as follows when interactions involving more than three factors are assumed negligible:
A = BD = CE        BC = ACD = ABE = DE
B = AD = CDE       CD = ABC = ADE = BE
C = AE = BDE
D = AB = BCE
E = AC = BCD
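(Note) The multiply-and-cancel rule (squared letters drop out because X² = I) amounts to a symmetric difference of letter sets, so the alias chain of any effect can be generated mechanically. A Python sketch for the 2^(5-2)_III design above (helper names are illustrative):

```python
from itertools import combinations

generators = ["ABD", "ACE"]            # recommended generators for the 2^(5-2)_III design

def times(u, v):
    """Multiply two effect 'words': letters appearing twice cancel (X^2 = I)."""
    su = set(u) if u != "I" else set()
    sv = set(v) if v != "I" else set()
    return "".join(sorted(su ^ sv)) or "I"

# Complete defining relation: all products of the generators (2^p - 1 words besides I)
words = {"I"}
for r in range(1, len(generators) + 1):
    for combo in combinations(generators, r):
        w = "I"
        for g in combo:
            w = times(w, g)
        words.add(w)

print("defining relation: I =", " = ".join(sorted(w for w in words if w != "I")))

for effect in ["A", "B", "C", "D", "E", "BC", "CD"]:
    aliases = sorted(times(effect, w) for w in words if w != "I")
    print(f"{effect} = {' = '.join(aliases)}")
```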
Example 14.7
1. (Construction of a 2k-p Design) Construct a 2^(6-2)_IV fractional factorial design.
Table 14-29 of MR recommends two (p = 2) design generators, E = ABC and F = BCD, for the 2^(6-2)_IV design, which yield the defining relations I = ABCE and I = BCDF. The two generators produce one generalized interaction:
ABCE·BCDF = AB²C²DEF = ADEF
Thus, the complete defining relation of the 2^(6-2)_IV design is
I = ABCE = BCDF = ADEF
By using the two design generators, the 2^(6-2)_IV design with I = ABCE = BCDF = ADEF is constructed as follows:
Run    A    B    C    D    E = ABC    F = BCD    Treatment combination
1      -    -    -    -       -          -       (1)
2      +    -    -    -       +          -       ae
3      -    +    -    -       +          +       bef
4      +    +    -    -       -          +       abf
5      -    -    +    -       +          +       cef
6      +    -    +    -       -          +       acf
7      -    +    +    -       -          -       bc
8      +    +    +    -       +          -       abce
9      -    -    -    +       -          +       df
10     +    -    -    +       +          +       adef
11     -    +    -    +       +          -       bde
12     +    +    -    +       -          -       abd
13     -    -    +    +       +          -       cde
14     +    -    +    +       -          -       acd
15     -    +    +    +       -          +       bcdf
16     +    +    +    +       +          +       abcdef
2. (Alias Structure) Identify the aliases of the fractional design, assuming that interactions involving more than three factors are negligible.
Assuming that four-factor and higher-order interactions are negligible, the following aliases are found by multiplying each effect by the complete defining relation of the 2^(6-2)_IV design:
A = BCE = DEF      AB = CE
B = ACE = CDF      AC = BE
C = ABE = BDF      AD = EF
D = BCF = AEF      AE = BC = DF
E = ABC = ADF      AF = DE
F = BCD = ADE      BD = CF
                   BF = CD
Exercise 14.7 (MR 14-39)
1. Construct a 2^(6-3)_III fractional factorial design.
2. Identify the aliases of the fractional design, assuming that only the main effects and two-factor interactions are of interest.
MINITAB Applications
Example 14.1
(Two-Factor Factorial Design)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the surface finish data on the worksheet (columns: Drying Time, Paint Type, Surface Finish).
(3) Choose Stat > ANOVA > Two-way. In Response select Surface Finish and in Row factor and Column factor select Drying Time and Paint Type, respectively. Check Display means and Store residuals. Then click OK.
(4) Interpret the analysis results.

Two-way Analysis of Variance

Analysis of Variance for Surface
Source        DF     SS     MS      F      P
Drying T       2     27     14   0.07  0.930
Paint Ty       1    356    356   1.90  0.193
Interaction    2   1879    939   5.03  0.026
Error         12   2243    187
Total         17   4504
Example 14.2
(22 Factorial Design)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the surface finish data on the worksheet.
(3) Choose Stat > ANOVA > Two-way. In Response select Surface Finish and in Row factor and Column factor select Drying Time and Paint Type, respectively. Then click OK.

Two-way Analysis of Variance

Analysis of Variance for Surface
Source        DF     SS     MS      F      P
Drying T       1     24     24   0.13  0.729
Paint Ty       1      0      0   0.00  0.984
Interaction    1   1141   1141   6.08  0.039
Error          8   1501    188
Total         11   2666
Example 14.4
(Normal Probability Plot of Effects)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the effect names (Effects) and estimates (E-Estimates) on the worksheet (A = 4.25, B = -6.25, AB = -1.25, C, AC, BC, ABC).
(3) Choose Graph > Probability Plot. In Variables select E-Estimates. Then click OK.
[Normal probability plot of the effect estimates; ML estimates: mean -0.32, StDev 2.92.]
Answers to Exercises
Exercise 14.1
1. (Test on the Significance of Main and Interaction Effects; Two-Factor Factorial Design) The analysis of variance of the carbon anode density data is as follows:

Source of Variation       Sum of Squares    Degrees of Freedom    Mean Square
Temperature (A)                 94.53                2                47.27
Furnace Position (B)             0.72                1                 0.72
AB                               0.08                2                 0.04
Error                            0.54               12                 0.045
Total                           95.87               17
Step 1: State H0 and H1.
H0: τi = 0 for i = 1 to 3; H1: τi ≠ 0 for at least one i (for A, firing temperature)
H0: βj = 0 for j = 1 and 2; H1: βj ≠ 0 for at least one j (for B, furnace position)
H0: (τβ)ij = 0 for all combinations of i and j; H1: (τβ)ij ≠ 0 for at least one combination of i and j (for AB)
Step 2: Determine a test statistic and its value.
f0 = MS_A / MS_E = 1,056.1 for A, f0 = MS_B / MS_E = 16.0 for B, and f0 = MS_AB / MS_E = 0.9 for AB
Exercise 14.1 (cont.)
Step 3: Determine a critical value(s) for α.
f0.05,a-1,N-ab = f0.05,3-1,18-6 = f0.05,2,12 = 3.89 for A
f0.05,b-1,N-ab = f0.05,2-1,18-6 = f0.05,1,12 = 4.75 for B
f0.05,(a-1)(b-1),N-ab = f0.05,(3-1)(2-1),18-6 = f0.05,2,12 = 3.89 for AB
Step 4: Make a conclusion.
Since f0 = 0.9 < f0.05,2,12 = 3.89 for AB, fail to reject H0 at α = 0.05 for the interaction, which indicates that further tests on the main effects A and B are meaningful. Next, since f0 = 1,056.1 > f0.05,2,12 = 3.89 for A and f0 = 16.0 > f0.05,1,12 = 4.75 for B, we can conclude that both A (firing temperature) and B (furnace position) significantly affect the baked density of a carbon anode.
2. (Multiple Comparison)
Step 1: Determine the LSD for α.
Step 2: Arrange the means of the a treatments in descending order: ȳ2.. = 10.34, ȳ1.. = 5.52, and ȳ3.. = 5.44.
Step 3: Compare the difference between the largest and smallest means with the LSD. Continue this comparison with the next smallest mean as long as the mean difference is greater than the LSD.
2 vs. 3 = 10.34 - 5.44 = 4.90 > 0.25
2 vs. 1 = 10.34 - 5.52 = 4.82 > 0.25
Step 4: Continue Step 3 for the next largest mean until this iteration reaches the second smallest mean.
1 vs. 3 = 5.52 - 5.44 = 0.08 < 0.25
Step 5: Summarize the pairwise comparison results by arranging the a treatments in order of mean and underlining the treatments whose means are not significantly different: 2, then 1 and 3 (treatments 1 and 3 are not significantly different).
3. (Calculation of Residual)
ŷ213 = 10.62
e213 = y213 − ŷ213 = 10.43 − 10.62 = −0.19 (overestimate)
Exercise 14.2
(Calculation of Effects and Sums of Squares; 22 Design) The orthogonal contrasts of the 22 design for the baked density data are
C_A = a + ab - b - (1) = 16.65 + 15.96 - 15.96 - 17.18 = -0.53
C_B = b + ab - a - (1) = 15.96 + 15.96 - 16.65 - 17.18 = -1.91
C_AB = ab + (1) - a - b = 15.96 + 17.18 - 16.65 - 15.96 = 0.53
Thus, the estimates of the main and interaction effects are
A = C_A / (n2^(k-1)) = -0.53 / (3 × 2) = -0.09
B = C_B / (n2^(k-1)) = -1.91 / (3 × 2) = -0.32
AB = C_AB / (n2^(k-1)) = 0.53 / (3 × 2) = 0.09
The corresponding sums of squares are
SS_A = C_A² / (n2^k) = (-0.53)² / (3 × 2²) = 0.02
SS_B = C_B² / (n2^k) = (-1.91)² / (3 × 2²) = 0.30
SS_AB = C_AB² / (n2^k) = (0.53)² / (3 × 2²) = 0.02
Exercise 14.3
(Calculation of Effects and Sums of Squares; 2k Design) The orthogonal contrasts of the 23 design for the tool life data are
C_A = a + ab + ac + abc - (1) - b - c - bc = 760 + 1,024 + 783 + 811 - 532 - 702 - 893 - 1,105 = 146
C_B = b + ab + bc + abc - (1) - a - c - ac = 702 + 1,024 + 1,105 + 811 - 532 - 760 - 893 - 783 = 674
C_AB = (1) + ab + c + abc - a - b - ac - bc = 532 + 1,024 + 893 + 811 - 760 - 702 - 783 - 1,105 = -90
C_C = c + ac + bc + abc - (1) - a - b - ab = 893 + 783 + 1,105 + 811 - 532 - 760 - 702 - 1,024 = 574
C_AC = (1) + b + ac + abc - a - ab - c - bc = 532 + 702 + 783 + 811 - 760 - 1,024 - 893 - 1,105 = -954
C_BC = (1) + a + bc + abc - b - ab - c - ac = 532 + 760 + 1,105 + 811 - 702 - 1,024 - 893 - 783 = -194
C_ABC = a + b + c + abc - (1) - ab - ac - bc = 760 + 702 + 893 + 811 - 532 - 1,024 - 783 - 1,105 = -278
Thus, the estimates of the main and interaction effects are
A = C_A / (n2^(k-1)) = 146 / (2 × 2²) = 18.3, B = 84.3, AB = -11.3, C = 71.8, AC = -119.3, BC = -24.3, and
ABC = C_ABC / (n2^(k-1)) = -278 / (2 × 2²) = -34.8
The corresponding sums of squares are
SS_A = C_A² / (n2^k) = 146² / 16 = 1,332.3, SS_B = 28,392.3, SS_AB = 506.3, SS_C = 20,592.3, SS_AC = 56,882.3, SS_BC = 2,352.3, and SS_ABC = 4,830.3
Exercise 14.4
(Normal Probability Plot of Effects) The normal probability plot of the effects below indicates that all the third- and fourth-order interactions are not significant, so these insignificant interactions can be used to form the error mean square.
[Normal probability plot of the effect estimates; horizontal axis: Effect.]
Exercise 14.5
1. (Construction of Blocks; 23 Block Design) The defining contrasts are formulated from AB and AC as follows:
L1 = x1 + x2 and L2 = x1 + x3
The eight treatment combinations are assigned to four blocks according to their values of L1 (mod 2) and L2 (mod 2):

Treatment    L1 = x1 + x2 (mod 2)    L2 = x1 + x3 (mod 2)    Block
(1)                   0                       0                1
a                     1                       1                2
b                     1                       0                3
ab                    0                       1                4
c                     0                       1                4
ac                    1                       0                3
bc                    1                       1                2
abc                   0                       0                1
Exercise 14.5 (cont.)
2. (Calculation of Effects and Sums of Squares) The generalized interaction of AB and AC is
AB × AC = A²BC = BC
Thus, the effects AB, AC, and BC are confounded with the blocks. The estimates of the effects are
A = C_A / (n2^(k-1)) = 146 / (2 × 2²) = 18.3
B = C_B / (n2^(k-1)) = 674 / (2 × 2²) = 84.3
C = C_C / (n2^(k-1)) = 574 / (2 × 2²) = 71.8
ABC = C_ABC / (n2^(k-1)) = -278 / (2 × 2²) = -34.8
Block effect = AB + AC + BC = (C_AB + C_AC + C_BC) / (n2^(k-1)) = (-90 - 954 - 194) / (2 × 2²) = -154.8
The corresponding sums of squares are
SS_A = C_A² / (n2^k) = 146² / 16 = 1,332.3
SS_B = C_B² / (n2^k) = 674² / 16 = 28,392.3
SS_C = C_C² / (n2^k) = 574² / 16 = 20,592.3
SS_ABC = C_ABC² / (n2^k) = (-278)² / 16 = 4,830.3
SS_Blocks = SS_AB + SS_AC + SS_BC = 506.3 + 56,882.3 + 2,352.3 = 59,740.9
Exercise 14.6
1. (Estimation of Effects) The main effects and interactions are estimated as follows:
l_A = A + BCD = (1/4)(-45 + 100 - 45 + 65 - 75 + 60 - 80 + 96) = 19.0
l_B = B + ACD = (1/4)(-45 - 100 + 45 + 65 - 75 - 60 + 80 + 96) = 1.5
l_C = C + ABD = (1/4)(-45 - 100 - 45 - 65 + 75 + 60 + 80 + 96) = 14.0
l_D = D + ABC = (1/4)(-45 + 100 + 45 - 65 + 75 - 60 - 80 + 96) = 16.5
l_AB = AB + CD = (1/4)(45 - 100 - 45 + 65 + 75 - 60 - 80 + 96) = -1.0
l_AC = AC + BD = (1/4)(45 - 100 + 45 - 65 - 75 + 60 - 80 + 96) = -18.5
l_AD = AD + BC = (1/4)(45 + 100 - 45 - 65 - 75 - 60 + 80 + 96) = 19.0
Three factors, A, C, and D, show large effects on filtration rate, but factor B displays a relatively very small effect.
2. (Projection into a Factorial Design) The 24-1 design projects into a 23 design with a single replicate including factors A, C, and D. The main effects and interactions of this unreplicated 23 design can be determined directly from the effect estimates of the 24-1 design as follows:
Exercise 14.6 (cont.)
A = 19.0     AC = -18.5
C = 14.0     AD = 19.0
D = 16.5     CD = -1.0
             ACD = 1.5
The effects A, C, D, AC, and AD have a relatively large influence on filtration rate.
Exercise 14.7
1. (Construction of a 2k-p Design) Table 14-29 of MR recommends three (p = 3) design generators, D = AB, E = AC, and F = BC, for the 2^(6-3)_III design, which yield the defining relations I = ABD, I = ACE, and I = BCF. These three generators produce four (= 2^p − p − 1 = 2³ − 3 − 1) generalized interactions:
ABD·ACE = A²BCDE = BCDE
ABD·BCF = AB²CDF = ACDF
ACE·BCF = ABC²EF = ABEF
ABD·ACE·BCF = A²B²C²DEF = DEF
Thus, the complete defining relation of the 2^(6-3)_III design is
I = ABD = ACE = BCF = BCDE = ACDF = ABEF = DEF
By using the three design generators, the 2^(6-3)_III design with I = ABD = ACE = BCF is constructed as follows:
Run    A    B    C    D = AB    E = AC    F = BC    Treatment combination
1      -    -    -      +         +         +       def
2      +    -    -      -         -         +       af
3      -    +    -      -         +         -       be
4      +    +    -      +         -         -       abd
5      -    -    +      +         -         -       cd
6      +    -    +      -         +         -       ace
7      -    +    +      -         -         +       bcf
8      +    +    +      +         +         +       abcdef

2. (Alias Structure) Assuming that three-factor and higher-order interactions are negligible, the following aliases are found by multiplying each effect by the complete defining relation of the 2^(6-3)_III design:
A = BD = CE      B = AD = CF      C = AE = BF
D = AB = EF      E = AC = DF      F = BC = DE
AF = CD = BE
-
Nonparametric Statistics
15-1 Introduction
15-2 Sign Test
15-3 Wilcoxon Signed-Rank Test
15-4 Wilcoxon Rank-Sum Test
15-5 Nonparametric Methods in the Analysis of Variance
MINITAB Applications
Answers to Exercises

15-1 Introduction
□ Explain nonparametric statistics in comparison with parametric statistics.
Nonparametric Statistics
Nonparametric methods do not use information about the distribution of the underlying population, whereas parametric methods (such as t and F tests) are based on the sampling distribution of a particular statistic for the parameter of interest. Nonparametric methods make no assumption except that the underlying distribution is continuous, using only the signs and/or ranks into which the original observations may be transformed. Therefore, nonparametric statistics is considered a distribution-free method and does not include the procedure of parameter estimation (point and interval estimation). Parametric and nonparametric statistics have their own advantages and disadvantages in terms of ease of use, time efficiency, applicability, and sensitivity (statistical power), as summarized in Table 15-1. Nonparametric tests are easier and quicker to perform, and their applications extend to the following contexts where parametric tests are inappropriate:
(1) Underlying distribution unknown: Information about the underlying distribution is unavailable.
(2) Normal approximation inapplicable: The normal approximation is not applicable when the underlying distribution deviates significantly from a normal distribution and the sample size is small (say, n < 30) (see Section 7-5).
(3) Categorical data: Categorical data include nominal and ordinal data (see Section 9-7).
However, nonparametric tests are less sensitive (less statistically powerful) in detecting a small departure from the hypothesized value because they do not use all the information in the sample. Therefore, nonparametric methods usually require a larger sample size to achieve the same statistical power.
Table 15-1 Comparison of Parametric and Nonparametric Statistics

Criteria           Parametric    Nonparametric
Ease of use                          O
Time efficiency                      O
Applicability                        O
Sensitivity            O
(Note) O: preferred

15-2 Sign Test
□ Read the sign test table.
□ Test a hypothesis on the median (μ̃) of a continuous distribution by the sign test.
□ Calculate the P-value of a sign test statistic.
□ Apply the normal approximation to the sign statistic when n is moderately large.
□ Test a hypothesis on the median of differences (μ̃_D) for paired samples by the sign test.
Sign Test
The sign test uses the plus and minus signs of the differences between the observations and the hypothesized median μ̃0. The sign test assumes that the observations are drawn from a continuous distribution. The sign test statistics R−, R+, and R are determined by the following procedure:
Step 1: Compute the differences Xi − μ̃0, i = 1, 2, ..., n.
Step 2: Count the numbers of minus and plus signs of the differences:
R− = number of minus signs
R+ = number of plus signs
R = min(R−, R+)
Note that ties (xi = μ̃0) are not included in the sign test because the probability that a value of Xi is exactly equal to μ̃0 is zero for a continuous distribution.
Sign Test Table
The sign test table (see Appendix Table VII in MR) provides critical values (r*α,n) of a sign test statistic for various sample sizes (n) and levels of significance (α).
(e.g.) Reading the sign test table:
(1) r*0.05,10 = 1
(2) r*0.025,30 = 9
(3) r*0.005,25 = 5
Inference Context for μ̃
Parameter of interest: μ̃ (median)
Test statistic of μ̃: The appropriate sign statistic is selected depending on the type of H1 as follows:
(1) R− = number of minus signs of Xi − μ̃0, i = 1, 2, ..., n, for H1: μ̃ > μ̃0
(2) R+ = number of plus signs of Xi − μ̃0, i = 1, 2, ..., n, for H1: μ̃ < μ̃0
(3) R = min(R−, R+) for H1: μ̃ ≠ μ̃0
P-value Calculation
The sign statistics R−, R+, and R follow the binomial distribution B(n, 0.5); therefore, the P-value (significance probability) of a sign statistic is
P = P(R+ ≤ r+ when p = 0.5) for a lower-sided test
P = P(R− ≤ r− when p = 0.5) for an upper-sided test
P = 2·P(R ≤ r when p = 0.5) for a two-sided test
Recall that the null hypothesis H0 is rejected if P ≤ α (see Section 9-1).
Sign Test Procedure for μ̃
Step 1: State H0 and H1.
H0: μ̃ = μ̃0
H1: μ̃ ≠ μ̃0 for a two-sided test; μ̃ < μ̃0 or μ̃ > μ̃0 for a one-sided test
Step 2: Determine a test statistic and its value.
R− = number of minus signs of Xi − μ̃0 for an upper-sided test
R+ = number of plus signs of Xi − μ̃0 for a lower-sided test
R = min(R−, R+) for a two-sided test
Step 3: Determine a critical value(s) for α.
r*α/2,n for a two-sided test; r*α,n for a one-sided test
Step 4: Make a conclusion. Reject H0 if
r− ≤ r*α,n for an upper-sided test
r+ ≤ r*α,n for a lower-sided test
r ≤ r*α/2,n for a two-sided test
Normal Approximation
Recall that a binomial distribution B(n, p) is approximated by a normal distribution with μ = np and σ² = np(1 − p) if np > 5 and n(1 − p) > 5 (see Section 4-7). Since R+ (R− or R) follows B(n, 0.5), if n is moderately large (say, n > 10), the sign statistic has approximately a normal distribution with mean and variance
μ = 0.5n and σ² = 0.25n
Therefore, a hypothesis on the median can be tested with R+ by using the z-test statistic
z0 = (R+ − 0.5n) / (0.5√n)
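(Note) A quick Python sketch of the sign test is shown below: the exact binomial P-value plus the large-sample z approximation. The helper name and the illustrative scores are assumptions for demonstration, not data from MR.

```python
import numpy as np
from scipy import stats

def sign_test(x, m0, alternative="two-sided"):
    """Sign test for the median m0 (ties with m0 are discarded)."""
    x = np.asarray(x, dtype=float)
    d = x[x != m0] - m0                 # drop ties
    n = d.size
    r_plus, r_minus = int((d > 0).sum()), int((d < 0).sum())

    if alternative == "less":           # H1: median < m0 -> small R+ is evidence
        p = stats.binom.cdf(r_plus, n, 0.5)
    elif alternative == "greater":      # H1: median > m0 -> small R- is evidence
        p = stats.binom.cdf(r_minus, n, 0.5)
    else:                               # two-sided: use R = min(R-, R+)
        p = min(1.0, 2 * stats.binom.cdf(min(r_plus, r_minus), n, 0.5))

    z = (r_plus - 0.5 * n) / (0.5 * np.sqrt(n))   # large-sample approximation
    return r_plus, r_minus, p, z

# Illustrative scores, testing H0: median = 80
scores = [85, 75, 96, 74, 87, 71, 73, 80, 78, 90, 77, 92]
print(sign_test(scores, 80))
```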
Comparison to the t-Test
If the underlying distribution is normal or nonnormal but symmetric, the t-test is preferred to the sign test for better statistical power.
Example 15.1
-
Test scores of 12 students in a statistics class are as follows:
1. (Test on μ̃; Sign Test) Test whether the median of the test scores is equal to 80 at α = 0.05.
Step 1: State H0 and H1.
H0: μ̃ = 80
H1: μ̃ ≠ 80
Step 2: Determine a test statistic and its value.
The signs of xi − 80 give r− = 5 and r+ = 6 (one observation equal to 80 is excluded since xi = μ̃0), so
r = min(r−, r+) = min(5, 6) = 5
Step 3: Determine a critical value(s) for α.
r*α/2,n = r*0.025,11 = 1 (Note) number of ties = 1
Step 4: Make a conclusion.
Since r = 5 > r*0.025,11 = 1, fail to reject H0 at α = 0.05.
2. (P-value Calculation) Find the P-value of the sign statistic r = 5 for the two-sided test.
P = 2·P(R ≤ 5 when p = 0.5) = 1.0
Since P = 1.0 > α = 0.05, fail to reject H0 at α = 0.05.
3. (Normal Approximation) Apply the normal approximation to the sign statistic R and draw a conclusion at α = 0.05.
Step 1: State H0 and H1.
H0: μ̃ = 80
H1: μ̃ ≠ 80
Step 2: Determine a test statistic and its value.
z0 = (r − 0.5n) / (0.5√n) = (5 − 0.5 × 11) / (0.5√11) = −0.30
Step 3: Determine a critical value(s) for α.
zα/2 = z0.025 = 1.96
Step 4: Make a conclusion.
Since |z0| = 0.30 < zα/2 = 1.96, fail to reject H0 at α = 0.05.
Exercise 15.1 (MR 15-3)
The impurity level (unit: ppm) is routinely measured in an intermediate chemical product. The following data are observed in an impurity test:
[Impurity data: 22 observations between 1.3 and 2.6 ppm; see MR Exercise 15-3 for the full data set.]
1. Test whether the median of the impurity level is less than 2.5 ppm at α = 0.05.
2. Find the P-value of the sign statistic r+ = 2 for the lower-sided test.
3. Apply the normal approximation to the sign statistic R+ and draw a conclusion at α = 0.05.
Inference Context for μ̃_D
Parameter of interest: μ̃_D (median of the paired differences Dj = X1j − X2j)
Test statistic of μ̃_D: The appropriate sign statistic is selected depending on the type of H1 as follows:
(1) R− = number of minus signs of Dj, j = 1, 2, ..., n, for H1: μ̃_D > δ0
(2) R+ = number of plus signs of Dj, j = 1, 2, ..., n, for H1: μ̃_D < δ0
(3) R = min(R−, R+) for H1: μ̃_D ≠ δ0
Sign Test Procedure for μ̃_D
Step 1: State H0 and H1.
H0: μ̃_D = δ0
H1: μ̃_D ≠ δ0 for a two-sided test; μ̃_D < δ0 or μ̃_D > δ0 for a one-sided test
Step 2: Determine a test statistic and its value.
R− = number of minus signs of Dj for an upper-sided test
R+ = number of plus signs of Dj for a lower-sided test
R = min(R−, R+) for a two-sided test
Step 3: Determine a critical value(s) for α.
r*α/2,n for a two-sided test; r*α,n for a one-sided test
Step 4: Make a conclusion. Reject H0 if
r− ≤ r*α,n for an upper-sided test
r+ ≤ r*α,n for a lower-sided test
r ≤ r*α/2,n for a two-sided test
Example 15.2
(Test on μ̃_D; Sign Test) Scores of 10 students for two exams are below. Test H0: μ̃_D = 0 vs. H1: μ̃_D ≠ 0 at α = 0.05.

No.            1   2   3   4   5   6   7   8   9  10
Exam 1 (X1)   85  75  96  74  87  71  73  80  75  90
Exam 2 (X2)   90  73  98  80  92  72  78  85  75  92

Step 1: State H0 and H1.
H0: μ̃_D = 0
H1: μ̃_D ≠ 0
Step 2: Determine a test statistic and its value.
The signs of dj = x1j − x2j are −, +, −, −, −, −, −, −, (0), and −; the ninth pair is excluded since dj = 0. Thus r− = 8, r+ = 1, and
r = min(r−, r+) = min(8, 1) = 1
Step 3: Determine a critical value(s) for α.
r*α/2,n = r*0.025,9 = 1 (Note) number of ties = 1
Step 4: Make a conclusion.
Since r = 1 ≤ r*0.025,9 = 1, reject H0 at α = 0.05.
Exercise 15.2 (MR 15-12)
The diameter of a ball bearing was measured by 12 inspectors, each using two different calipers. The measurements, the corresponding differences, and their signs are presented in the table on the next page. Test whether there is a significant difference between the medians of the populations of measurements from the two calipers. Use α = 0.05.

15-3 Wilcoxon Signed-Rank Test
□ Read the Wilcoxon signed-rank test table.
□ Test a hypothesis on the mean (μ) of a continuous distribution by the Wilcoxon signed-rank test.
□ Apply the normal approximation to the signed-rank statistic when n is moderately large.
□ Test a hypothesis on the mean of differences (μ_D) for paired samples by the Wilcoxon signed-rank test.
Wilcoxon Signed-Rank Test
While the sign test uses only the plus and minus signs of the differences between the observations Xi and the hypothesized median μ̃0, the Wilcoxon signed-rank test uses both the signs of the differences between the observations Xi and the hypothesized mean μ0 and the ranks of the absolute magnitudes of the differences. The signed-rank test assumes that the underlying distribution is symmetric and continuous, i.e., the mean equals the median. The Wilcoxon signed-rank statistics W−, W+, and W are determined by the following procedure:
Step 1: Compute the differences Xi − μ0, i = 1, 2, ..., n.
Step 2: Rank the absolute differences |xi − μ0| in ascending order. For ties, use the average of the ranks that would be assigned if they differed.
Step 3: Assign the signs of the differences to the corresponding ranks.
Step 4: Calculate the sums of the signed ranks:
W− = sum of the negative ranks
W+ = sum of the positive ranks
W = min(W−, W+)
Wilcoxon Signed-Rank Test Table
Like the sign test table, the Wilcoxon signed-rank test table (see Appendix Table VIII in MR) provides critical values (w*α,n) of a signed-rank test statistic for various sample sizes (n) and levels of significance (α).
(e.g.) Reading the signed-rank test table:
(1) w*0.05,10 = 10
(2) w*0.025,20 = 52
(3) w*0.005,15 = 15
Inference Context for μ
Parameter of interest: μ
Test statistic of μ: The appropriate signed-rank statistic is selected depending on the type of H1 as follows:
(1) W− = sum of the negative ranks of Xi − μ0, i = 1, 2, ..., n, for H1: μ > μ0
(2) W+ = sum of the positive ranks of Xi − μ0, i = 1, 2, ..., n, for H1: μ < μ0
(3) W = min(W−, W+) for H1: μ ≠ μ0
Wilcoxon Signed-Rank Test Procedure for μ
Step 1: State H0 and H1.
H0: μ = μ0
H1: μ ≠ μ0 for a two-sided test; μ < μ0 or μ > μ0 for a one-sided test
Step 2: Determine a test statistic and its value.
W− = sum of the negative ranks of Xi − μ0 for an upper-sided test
W+ = sum of the positive ranks of Xi − μ0 for a lower-sided test
W = min(W−, W+) for a two-sided test
Step 3: Determine a critical value(s) for α.
w*α/2,n for a two-sided test; w*α,n for a one-sided test
Step 4: Make a conclusion. Reject H0 if
w− ≤ w*α,n for an upper-sided test
w+ ≤ w*α,n for a lower-sided test
w ≤ w*α/2,n for a two-sided test
Normal Approximation
If the sample size is moderately large (say, n > 20), the signed-rank statistic W+ (W− or W) has approximately a normal distribution with mean and variance
μ_W+ = n(n + 1)/4 and σ²_W+ = n(n + 1)(2n + 1)/24
Therefore, a hypothesis on a single mean can be tested with W+ by using the following z-statistic:
z0 = (W+ − n(n + 1)/4) / √(n(n + 1)(2n + 1)/24)
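(Note) A short Python sketch of the signed-rank procedure (average ranks for ties, zero differences dropped, plus the normal approximation; helper names are illustrative):

```python
import numpy as np
from scipy.stats import rankdata, norm

def signed_rank_test(x, mu0):
    """Wilcoxon signed-rank statistics for H0: mu = mu0 (two-sided z approximation)."""
    d = np.asarray(x, dtype=float) - mu0
    d = d[d != 0]                                  # drop zero differences (ties with mu0)
    n = d.size
    ranks = rankdata(np.abs(d))                    # average ranks for tied |d|
    w_plus = ranks[d > 0].sum()
    w_minus = ranks[d < 0].sum()
    w = min(w_plus, w_minus)

    mean = n * (n + 1) / 4
    sd = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p_approx = 2 * norm.sf(abs(z))                 # two-sided large-sample P-value
    return w_minus, w_plus, w, z, p_approx

# Scores from Example 15.3, testing H0: mu = 80 (gives w = 20.5, z about 0.24)
scores = [85, 75, 96, 74, 87, 71, 73, 80, 75, 90]
print(signed_rank_test(scores, 80))
```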
Example 15.3
Test scores of 10 students in a statistics class are below.

No.          1   2   3   4   5   6   7   8   9  10
Score (xi)  85  75  96  74  87  71  73  80  75  90

1. (Test on μ; Wilcoxon Signed-Rank Test) Test whether the mean of the test scores is equal to 80 at α = 0.05.
Step 1: State H0 and H1.
H0: μ = 80
H1: μ ≠ 80
Step 2: Determine a test statistic and its value.

xi − 80        5   -5   16   -6    7   -9   -7    *   -5   10
|xi − 80|      5    5   16    6    7    9    7    *    5   10
Rank           2    2    9    4  5.5    7  5.5    *    2    8
Signed rank    2   -2    9   -4  5.5   -7 -5.5    *   -2    8
* Excluded since xi = μ0.

w = min(w−, w+) = min(20.5, 24.5) = 20.5
Step 3: Determine a critical value(s) for α.
w*α/2,n = w*0.025,9 = 5 (Note) number of ties = 1
Step 4: Make a conclusion.
Since w = 20.5 > w*0.025,9 = 5, fail to reject H0 at α = 0.05.
2. (Normal Approximation) Apply the normal approximation to the signed-rank statistic W and draw a conclusion at α = 0.05.
Step 1: State H0 and H1.
H0: μ = 80
H1: μ ≠ 80
Step 2: Determine a test statistic and its value.
z0 = (w − n(n + 1)/4) / √(n(n + 1)(2n + 1)/24) = (20.5 − 9 × 10/4) / √(9 × 10 × 19/24) = −0.24
Step 3: Determine a critical value(s) for α.
zα/2 = z0.025 = 1.96
Step 4: Make a conclusion.
Since |z0| = 0.24 < zα/2 = 1.96, fail to reject H0 at α = 0.05.
Exercise 15.3
1. Test whether the mean of the impurity level (Exercise 15.1) is less than 2.5 ppm at α = 0.05.
2. Apply the normal approximation to the signed-rank statistic W+ and draw a conclusion at α = 0.05.
Example 15.4
(Test on μ_D; Wilcoxon Signed-Rank Test) The exam scores of Example 15.2 are presented again below. Test H0: μ_D = 0 vs. H1: μ_D ≠ 0 at α = 0.05.
No.            1   2   3   4   5   6   7   8   9  10
Exam 1 (X1)   85  75  96  74  87  71  73  80  75  90
Exam 2 (X2)   90  73  98  80  92  72  78  85  75  92

Step 1: State H0 and H1.
H0: μ_D = 0
H1: μ_D ≠ 0
Step 2: Determine a test statistic and its value.

dj = x1j − x2j   -5    2   -2   -6   -5   -1   -5   -5    *   -2
|dj|              5    2    2    6    5    1    5    5    *    2
Rank            6.5    3    3    9  6.5    1  6.5  6.5    *    3
Signed rank    -6.5    3   -3   -9 -6.5   -1 -6.5 -6.5    *   -3
* Excluded since dj = δ0.

w = min(w−, w+) = min(42, 3) = 3
Step 3: Determine a critical value(s) for α.
w*α/2,n = w*0.025,9 = 5 (Note) number of ties = 1
Step 4: Make a conclusion.
Since w = 3 ≤ w*0.025,9 = 5, reject H0 at α = 0.05.
Exercise 15.4 (MR 15-26)
Reconsider the ball bearing diameter data in Exercise 15.2. The measurements, the corresponding differences, and their signed ranks are presented below. Test H0: μ_D = 0 vs. H1: μ_D ≠ 0 at α = 0.05.
15-4 Wilcoxon Rank-Sum Test
□ Read the Wilcoxon rank-sum test table.
□ Test a hypothesis on the difference in means of two continuous populations (μ1 − μ2) by the Wilcoxon rank-sum test.
□ Apply the normal approximation to the rank-sum statistic when n1 and n2 are moderately large.
Wilcoxon Rank-Sum Test
The Wilcoxon rank-sum test (often called the Mann-Whitney test) uses the ranks of the observations from two independent continuous populations X1 and X2 to test the difference in means (μ1 − μ2). Let X11, X12, ..., X1n1 and X21, X22, ..., X2n2 denote two independent random samples of sizes n1 and n2 (n1 ≤ n2) from the populations X1 and X2. The Wilcoxon rank-sum statistics W1 and W2 are determined by the following procedure:
Step 1: Rank the n1 + n2 observations in ascending order. For ties, use the average of the ranks that would be assigned if they differed.
Step 2: Calculate the sum of the ranks for each sample:
W1 = sum of the ranks in the smaller sample (size = n1)
W2 = sum of the ranks in the larger sample (size = n2)
Note that W1 and W2 will be nearly equal after adjusting for the difference in sample size.
Wilcoxon Rank-Sum Test Table
The Wilcoxon rank-sum test table (see Appendix Table IX in MR) provides critical values (w*α,n1,n2) of a rank-sum statistic for various sample sizes (n1 and n2, where n1 ≤ n2) and levels of significance (α = 0.05 and 0.01 for a two-sided test).
(e.g.) Reading the rank-sum test table for a two-sided test:
(1) w*0.05/2,10,15 = w*0.025,10,15 = 94
(2) w*0.01/2,5,10 = w*0.005,5,10 = 19
Inference Context for μ1 − μ2
Parameter of interest: μ1 − μ2
Test statistic of μ1 − μ2: The appropriate rank-sum statistic is selected depending on the type of H1 as follows:
(1) W1 = sum of the ranks of X1j, j = 1, 2, ..., n1, for H1: μ1 − μ2 < δ0
(2) W2 = sum of the ranks of X2j, j = 1, 2, ..., n2, for H1: μ1 − μ2 > δ0
(3) W1 or W2 for H1: μ1 − μ2 ≠ δ0
Wilcoxon Rank-Sum Test Procedure for μ1 − μ2
Step 1: State H0 and H1.
H0: μ1 − μ2 = δ0
H1: μ1 − μ2 ≠ δ0 for a two-sided test; μ1 − μ2 > δ0 or μ1 − μ2 < δ0 for a one-sided test
Step 2: Determine a test statistic and its value.
W1 = sum of the ranks of X1j for a lower-sided test
W2 = sum of the ranks of X2j for an upper-sided test
W1 or W2 for a two-sided test
Step 3: Determine a critical value(s) for α.
w*α/2,n1,n2 for a two-sided test; w*α,n1,n2 for a one-sided test
Step 4: Make a conclusion. Reject H0 if
w1 ≤ w*α,n1,n2 for a lower-sided test
w2 ≤ w*α,n1,n2 for an upper-sided test
w1 ≤ w*α/2,n1,n2 or w2 ≤ w*α/2,n1,n2 for a two-sided test
Normal Approximation
When n1 and n2 are moderately large (say, n1 and n2 > 8), the rank-sum statistic W1 (or W2) has approximately a normal distribution with mean and variance
μ_W1 = n1(n1 + n2 + 1)/2 and σ²_W1 = n1·n2(n1 + n2 + 1)/12
Therefore, a hypothesis on the difference in means can be tested with W1 by using the z-statistic
z0 = (W1 − μ_W1) / σ_W1
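(Note) A compact Python sketch of the rank-sum calculation and its normal approximation, illustrated with the light bulb data of Example 15.5 (helper names are illustrative):

```python
import numpy as np
from scipy.stats import rankdata, norm

# Life lengths (hours) from Example 15.5
infinity = [760, 810, 775, 680, 730, 725, 740, 700, 720]          # n1 = 9 (smaller sample)
forever  = [795, 780, 820, 770, 810, 765, 800, 790, 750, 730]     # n2 = 10

n1, n2 = len(infinity), len(forever)
ranks = rankdata(np.concatenate([infinity, forever]))   # joint ranking, average ranks for ties
w1, w2 = ranks[:n1].sum(), ranks[n1:].sum()             # w1 = 61, w2 = 129

mean = n1 * (n1 + n2 + 1) / 2
sd = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (w1 - mean) / sd
print(w1, w2, z, 2 * norm.sf(abs(z)))                   # z is about -2.37, as in the example
```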
Example 15.5
The life lengths (unit: hours) of INFINITY (X1) and FOREVER (X2) light bulbs are under study. Suppose that no information is available about the underlying distributions of X1 and X2. A random sample selected from each brand results in the following:

INFINITY (X1)   760  810  775  680  730  725  740  700  720
FOREVER (X2)    795  780  820  770  810  765  800  790  750  730

1. (Test on μ1 − μ2; Wilcoxon Rank-Sum Test) Test whether the mean life lengths of the two brands are the same at α = 0.05.
Step 1: State H0 and H1.
H0: μ1 − μ2 = 0
H1: μ1 − μ2 ≠ 0
Step 2: Determine a test statistic and its value.

X1     760   810  775  680  730  725  740  700  720
Rank     9  17.5   12    1  5.5    4    7    2    3
X2     795   780  820  770  810  765  800  790  750  730
Rank    15    13   19   11 17.5   10   16   14    8  5.5

w1 = 61 and w2 = 129
Example 15.5 (cont.)
Step 3: Determine a critical value(s) for α.
w*α/2,n1,n2 = w*0.05/2,9,10 = 65
Step 4: Make a conclusion.
Since w1 = 61 ≤ w*0.025,9,10 = 65, reject H0 at α = 0.05.
2. (Normal Approximation) Apply the normal approximation to the rank-sum statistic W1 and draw a conclusion at α = 0.05.
Step 1: State H0 and H1.
H0: μ1 − μ2 = 0
H1: μ1 − μ2 ≠ 0
Step 2: Determine a test statistic and its value.
z0 = (w1 − n1(n1 + n2 + 1)/2) / √(n1·n2(n1 + n2 + 1)/12) = (61 − 90) / √150 = −2.37
Step 3: Determine a critical value(s) for α.
zα/2 = z0.025 = 1.96
Step 4: Make a conclusion.
Since |z0| = 2.37 > z0.025 = 1.96, reject H0 at α = 0.05.
Exercise 15.5 (MR 15-30)
The manufacturer of a hot tub is interested in testing two different types of heating elements. The heating element that produces a larger heat gain (unit: °F) after 15 minutes will be preferable. Ten observations of the heat gain for each heating element are obtained as follows:
1. Test whether one heating element is superior to the other at α = 0.05.
2. Apply the normal approximation to the rank-sum statistic W1 and draw a conclusion at α = 0.05.

15-5 Nonparametric Methods in the Analysis of Variance
□ Perform the Kruskal-Wallis test in the single-factor analysis of variance (ANOVA).
□ Explain the use of ranks in ANOVA.
Kruskal-Wallis Test
The Kruskal-Wallis test is a nonparametric alternative to the parametric F-test of the single-factor analysis of variance (ANOVA). While the parametric ANOVA model, Yij = μ + τi + εij, i = 1, 2, ..., a and j = 1, 2, ..., ni, assumes the error terms εij are normally distributed with mean zero and variance σ², the nonparametric ANOVA method assumes only that the εij have the same continuous distribution for all factor levels i = 1, 2, ..., a. The Kruskal-Wallis statistic H is determined by the following procedure:
Step 1: Rank all N (= Σ ni) observations in ascending order. For ties, use the average of the ranks that would be assigned if they differed.
Step 2: Calculate the sum (Ri.) of the ranks Rij in the ith treatment, i = 1, 2, ..., a.
Step 3: Calculate the Kruskal-Wallis statistic:
H = [12 / (N(N + 1))] Σ(i=1 to a) Ri.²/ni − 3(N + 1)   for observations without ties
H = (1/S²)[Σ(i=1 to a) Ri.²/ni − N(N + 1)²/4]   for observations with ties
where S² = [1/(N − 1)][Σi Σj R²ij − N(N + 1)²/4]
χ² Approximation
The Kruskal-Wallis statistic H is approximately distributed as a χ² random variable with a − 1 degrees of freedom when the sample sizes ni are moderately large (ni ≥ 6 for a = 3; ni ≥ 5 for a > 3).
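(Note) A Python sketch of the tie-corrected Kruskal-Wallis computation, using the grip force data of Example 15.6; scipy's built-in kruskal gives the same tie-corrected statistic and is shown as a cross-check (variable names are illustrative):

```python
import numpy as np
from scipy.stats import rankdata, chi2, kruskal

# Maximum power grip force (lb) at four elbow angles, Example 15.6
groups = {
    "0":   [65, 68, 62, 66, 69],
    "45":  [70, 72, 69, 75, 68],
    "90":  [67, 63, 68, 64, 67],
    "135": [58, 60, 59, 63, 60],
}

data = np.concatenate(list(groups.values()))
sizes = [len(g) for g in groups.values()]
N = data.size
ranks = rankdata(data)                               # average ranks for ties

# Rank sums per treatment
splits = np.split(ranks, np.cumsum(sizes)[:-1])
R = np.array([r.sum() for r in splits])

# Tie-corrected Kruskal-Wallis statistic
S2 = (np.sum(ranks ** 2) - N * (N + 1) ** 2 / 4) / (N - 1)
H = (np.sum(R ** 2 / sizes) - N * (N + 1) ** 2 / 4) / S2
p = chi2.sf(H, df=len(groups) - 1)
print(f"H = {H:.2f}, P = {p:.4f}")                   # H = 14.52; compare chi2(0.05, 3) = 7.81

# Cross-check against scipy's implementation
print(kruskal(*groups.values()))
```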
Inference Context
Parameters of interest: τ1, τ2, ..., τa
Test statistic of τ1, τ2, ..., τa:
(1) Case 1 (observations with ties): H = (1/S²)[Σ Ri.²/ni − N(N + 1)²/4]
(2) Case 2 (observations without ties): H = [12/(N(N + 1))] Σ Ri.²/ni − 3(N + 1)
Kruskal-Wallis Test Procedure
Step 1: State H0 and H1.
H0: τ1 = τ2 = ... = τa = 0 (μ1 = μ2 = ... = μa)
H1: τi ≠ 0 for at least one i, i = 1, 2, ..., a
Step 2: Determine a test statistic and its value.
h = [12/(N(N + 1))] Σ Ri.²/ni − 3(N + 1) for observations without ties, or
h = (1/S²)[Σ Ri.²/ni − N(N + 1)²/4] for observations with ties
Step 3: Determine a critical value(s) for α: χ²α,a−1.
Step 4: Make a conclusion. Reject H0 if
h ≥ χ²α,a−1
F-test on Ranks
If the ordinary F-test were applied to the ranks instead of the original data in the single-factor ANOVA, the F-test statistic would be
F0 = [H/(a − 1)] / [(N − 1 − H)/(N − a)]
Note that F0 is closely related to the Kruskal-Wallis statistic H, which indicates that the Kruskal-Wallis test is approximately equivalent to applying the parametric ANOVA to the ranks. Accordingly, the F-test on the ranks can be applied when a nonparametric alternative to the analysis of variance is unavailable.
It is recommended that the F-test be performed on both the original data and the ranks when the normality assumption and/or the effect of an outlier(s) is a concern. If the two test results are similar, use of the original data is satisfactory; otherwise, use of the ranks is preferred.
Example 15.6
(Nonparametric ANOVA; Kruskal-Wallis Test) Maximum power grip forces (unit: lb) of a 45-year-old male at four elbow angles (0°, 45°, 90°, and 135°) are obtained below. Test whether elbow angle has a significant effect on maximum power grip strength at α = 0.05.

Elbow angle    Grip force (lb)
0° (y1)        65  68  62  66  69
45° (y2)       70  72  69  75  68
90° (y3)       67  63  68  64  67
135° (y4)      58  60  59  63  60

Step 1: State H0 and H1.
H0: τ1 = τ2 = τ3 = τ4 = 0
H1: τi ≠ 0 for at least one i
Step 2: Determine a test statistic and its value.
Ranking all N = 20 observations and summing the ranks within each angle gives
R1. = 54.5, R2. = 87.5, R3. = 51.5, and R4. = 16.5
Since there are ties,
S² = [1/(N − 1)][Σ Σ R²ij − N(N + 1)²/4] = (2,866 − 2,205)/19 = 34.8
h = (1/S²)[Σ Ri.²/ni − N(N + 1)²/4] = (2,710.2 − 2,205)/34.8 = 14.52
Example 15.6 (cont.)
Step 3: Determine a critical value(s) for α.
χ²α,a−1 = χ²0.05,4−1 = χ²0.05,3 = 7.81
Step 4: Make a conclusion.
Since h = 14.52 > χ²0.05,3 = 7.81, reject H0 at α = 0.05.
Exercise 15.6 (MR 15-37)
The tensile strengths (unit: lb/in²) of cements prepared by four different mixing techniques (A, B, C, and D) are measured below. Test whether the difference in mixing technique affects the tensile strength at α = 0.05.
MINITAB Applications
Example 15.1
(Inference on μ̃; Sign Test)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the test score data (column Score) on the worksheet.
(3) Choose Stat > Nonparametrics > 1-Sample Sign. In Variables select Score. Check Test median, enter '80' (hypothesized median), and in Alternative select not equal. Then click OK.
(4) Interpret the analysis results.

Sign Test for Median
Sign test of median = 80.00 versus not = 80.00
           N    Below    Equal    Above         P    Median
Score     12   r− = 5        1   r+ = 6    1.0000     82.50
Example 15.2
(Inference on μ̃_D; Sign Test)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the score data of the two exams on the worksheet.
(3) Choose Calc > Calculator. In Store result in variable, enter C3 as a column to store calculation results. In Expression, click Exam 1, the minus sign, and Exam 2 in sequence. Then click OK.
(4) Name the C3 column x1 - x2.
Example 15.2 (cont.)
(5) Choose Stat > Nonparametrics > 1-Sample Sign. In Variables select x1 - x2. Check Test median, enter '0' (hypothesized median of the score differences), and in Alternative select not equal. Then click OK.

Sign Test for Median
Sign test of median = 0.00000 versus not = 0.00000
             N    Below    Equal    Above         P    Median
x1 - x2     10   r− = 8        1   r+ = 1    0.0391    -3.500

Example 15.3
(Inference on μ; Wilcoxon Signed-Rank Test)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the test score data on the worksheet.
(3) Choose Stat > Nonparametrics > 1-Sample Wilcoxon. In Variables select Score. Check Test median, enter '80' (hypothesized mean), and in Alternative select not equal. Then click OK.

Wilcoxon Signed Rank Test
Test of median = 80.00 versus median not = 80.00
          N    N for Test    Wilcoxon Statistic        P    Estimated Median
Score    10             9             w+ = 24.5    0.859               80.00

Example 15.4
(Inference on μ_D; Wilcoxon Signed-Rank Test)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the score data of the two exams on the worksheet.
(3) Choose Calc > Calculator. In Store result in variable, enter C3 as a column to store calculation results. In Expression, click Exam 1, the minus sign, and Exam 2 in sequence. Then click OK.
(4) Name the C3 column x1 - x2.
(5) Choose Stat > Nonparametrics > 1-Sample Wilcoxon. In Variables select x1 - x2. Check Test median, enter '0' (hypothesized mean of the score differences), and in Alternative select not equal. Then click OK.

Wilcoxon Signed Rank Test
Test of median = 0.000000 versus median not = 0.000000
             N    N for Test    Wilcoxon Statistic        P    Estimated Median
x1 - x2     10             9              w+ = 3.0    0.024              -3.000
Example 15.5
(Inference on μ1 − μ2; Wilcoxon Rank-Sum Test)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the life length data of the two light bulb brands (columns Infinity and Forever) on the worksheet.
(3) Choose Stat > Nonparametrics > Mann-Whitney. In First Sample select Infinity, in Second Sample select Forever, and in Alternative select not equal. Then click OK.

Mann-Whitney Confidence Interval and Test
Infinity   N = 9    Median = 730.00
Forever    N = 10   Median = 785.00
Point estimate for ETA1-ETA2 is -47.50
95.5 Percent CI for ETA1-ETA2 is (-79.99, -10.01)
Test of ETA1 = ETA2 vs ETA1 not = ETA2 is significant at 0.0200
The test is significant at 0.0199 (adjusted for ties)
Example 15.6
(Nonparametric ANOVA; Kruskal-Wallis Test)
(1) Choose File > New, click Minitab Project, and click OK.
(2) Enter the grip force data (column Force) and the corresponding elbow angles (column Angle) on the worksheet.
(3) Choose Stat > Nonparametrics > Kruskal-Wallis. In Response select Force and in Factor select Angle. Then click OK.

Kruskal-Wallis Test
Kruskal-Wallis Test on Force
Angle      N    Median    Ave Rank        Z
0          5     66.00        10.9     0.17
45         5     70.00        17.5     3.06
90         5     67.00        10.3    -0.09
135        5     60.00         3.3    -3.14
Overall   20                  10.5
Answers to Exercises
Exercise 15.1
1. (Test on μ̃; Sign Test)
Step 1: State H0 and H1.
H0: μ̃ = 2.5
H1: μ̃ < 2.5
Step 2: Determine a test statistic and its value.
The signs of xi − 2.5 give r+ = 2 (two observations equal to 2.5 are excluded since xi = μ̃0).
Step 3: Determine a critical value(s) for α.
r*α,n = r*0.05,20 = 5 (Note) number of ties = 2
Step 4: Make a conclusion.
Since r+ = 2 ≤ r*0.05,20 = 5, reject H0 at α = 0.05.
2. (P-value Calculation)
P = P(R+ ≤ 2 when p = 0.5) = 0.0002
Since P = 0.0002 < α = 0.05, reject H0 at α = 0.05.
3. (Normal Approximation)
Step 1: State H0 and H1.
H0: μ̃ = 2.5
H1: μ̃ < 2.5
Step 2: Determine a test statistic and its value.
z0 = (r+ − 0.5n) / (0.5√n) = (2 − 0.5 × 20) / (0.5√20) = −3.58
Step 3: Determine a critical value(s) for α.
zα = z0.05 = 1.65
Step 4: Make a conclusion.
Since z0 = −3.58 < −z0.05 = −1.65, reject H0 at α = 0.05.
Exercise 15.2
(Test on μ̃_D; Sign Test)
Step 1: State H0 and H1.
H0: μ̃_D = 0
H1: μ̃_D ≠ 0
Step 2: Determine a test statistic and its value.
r = min(r−, r+) = min(2, 6) = 2
Step 3: Determine a critical value(s) for α.
r*α/2,n = r*0.025,8 = 0 (Note) number of ties = 4
Step 4: Make a conclusion.
Since r = 2 > r*0.025,8 = 0, fail to reject H0 at α = 0.05.
Exercise 15.3
1. (Test on μ; Wilcoxon Signed-Rank Test)
Step 1: State H0 and H1.
H0: μ = 2.5
H1: μ < 2.5
Step 2: Determine a test statistic and its value.
Ranking the absolute differences |xi − 2.5| of the 20 non-tied observations and summing the positive ranks gives w+ = 5.
Step 3: Determine a critical value(s) for α.
w*α,n = w*0.05,20 = 60 (Note) number of ties = 2
Step 4: Make a conclusion.
Since w+ = 5 ≤ w*0.05,20 = 60, reject H0 at α = 0.05.
2. (Normal Approximation)
Step 1: State H0 and H1.
H0: μ = 2.5
H1: μ < 2.5
Step 2: Determine a test statistic and its value.
z0 = (w+ − n(n + 1)/4) / √(n(n + 1)(2n + 1)/24) = (5 − 20 × 21/4) / √(20 × 21 × 41/24) = −3.73
Step 3: Determine a critical value(s) for α.
zα = z0.05 = 1.65
Step 4: Make a conclusion.
Since z0 = −3.73 < −z0.05 = −1.65, reject H0 at α = 0.05.
Exercise 15.4
(Test on μ_D; Wilcoxon Signed-Rank Test)
Step 1: State H0 and H1.
H0: μ_D = 0
H1: μ_D ≠ 0
Step 2: Determine a test statistic and its value.
w = min(w−, w+) = min(14.5, 21.5) = 14.5
Step 3: Determine a critical value(s) for α.
w*α/2,n = w*0.025,8 = 3 (Note) number of ties = 4
Step 4: Make a conclusion.
Since w = 14.5 > w*0.025,8 = 3, fail to reject H0 at α = 0.05.
Exercise 15.5
1. (Test on μ1 − μ2; Wilcoxon Rank-Sum Test)
Step 1: State H0 and H1.
H0: μ1 − μ2 = 0
H1: μ1 − μ2 ≠ 0
Exercise 15.5 (cont.)
Step 2: Determine a test statistic and its value.
Ranking the 20 heat-gain observations jointly and summing the ranks of the first heating element gives w1 = 77.
Step 3: Determine a critical value(s) for α.
w*α/2,n1,n2 = w*0.025,10,10 = 78 for a two-sided test
Step 4: Make a conclusion.
Since w1 = 77 ≤ w*0.025,10,10 = 78, reject H0 at α = 0.05.
2. (Normal Approximation)
Step 1: State H0 and H1.
H0: μ1 − μ2 = 0
H1: μ1 − μ2 ≠ 0
Step 2: Determine a test statistic and its value.
z0 = (w1 − n1(n1 + n2 + 1)/2) / √(n1·n2(n1 + n2 + 1)/12) = (77 − 105) / √175 = −2.12
Step 3: Determine a critical value(s) for α.
zα/2 = z0.025 = 1.96
Step 4: Make a conclusion.
Since |z0| = 2.12 > z0.025 = 1.96, reject H0 at α = 0.05.
Exercise 15.6
(Nonparametric ANOVA; Kruskal-Wallis Test)
Step 1: State H0 and H1.
H0: τ1 = τ2 = τ3 = τ4 = 0
H1: τi ≠ 0 for at least one i, i = 1, 2, 3, 4