Version 5.1.2
“The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.” Marcel Proust
Design of Experiments JMP, A Business Unit of SAS SAS Campus Drive Cary, NC 27513
JMP Design of Experiments, Version 5.1.2 Copyright © 2004 by SAS Institute Inc., Cary, NC, USA. All rights reserved. Published in the United States of America. Your use of this e-book shall be governed by the terms established by the vendor at the time of your purchase or rental of this e-book. Information in this document is subject to change without notice. The software described in this document is furnished under the license agreement packaged with the software. The software may be used or copied only in accordance with the terms of the agreement. It is against the law to copy the software on any medium except as specifically allowed in the license agreement. JMP, SAS, and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies.
Contents

1 Introduction: Design of Experiments (DOE) . . . 1
  DOE Choices . . . 3
    Custom Design . . . 3
    Screening Design . . . 4
    Response Surface Design . . . 4
    Full Factorial Design . . . 4
    Taguchi Arrays . . . 4
    Mixture Design . . . 5
    Augment Design . . . 5
    Sample Size and Power . . . 5
  A Simple DOE Example . . . 5
  The DOE Dialog . . . 6
    Entering Responses . . . 7
    Entering Factors . . . 9
    Select a Design Type . . . 9
    Modify a Design . . . 10
  The JMP DOE Data Table . . . 11
  DOE Utility Commands . . . 12
  Specialized Column Properties . . . 19

2 Introduction to Custom Designs
  Getting Started . . . 25
    Define Factors in the Factors Panel . . . 25
    Describe the Model in the Model Panel . . . 26
    The Design Generation Panel . . . 27
    The Design Panel and Output Options . . . 28
    Make Table . . . 28
  Modify a Design Interactively . . . 29
  Introducing the Prediction Variance Profiler . . . 30
    A Quadratic Model . . . 30
    A Cubic Model . . . 33
  Routine Screening Using Custom Designs . . . 34
    Main Effects Only . . . 35
    All Two-Factor Interactions Involving Only One Factor . . . 36
    All Two-Factor Interactions . . . 37
  How the Custom Designer Works . . . 38

3 Custom Design: Beyond the Textbook
  Custom Situations . . . 41
  Flexible Block Sizes . . . 41
  Fixed Covariate Factors . . . 43
  Mixtures with Nonmixture Factors . . . 46
  Factor Constraints . . . 48

4 Custom Design: Optimality Criteria and Tuning Options
  Custom Design for Prediction (I-Optimal Design) . . . 55
    A One-Factor Example . . . 55
    A Three-Factor Example . . . 56
    Response Surface with a Blocking Factor . . . 57
  Model-Robust Custom Design (Bayesian D-Optimal Designs) . . . 60
    Example: Two Continuous Factors . . . 60
    Example: Six Continuous Factors . . . 62
  Supersaturated Designs . . . 63
    Example: Twelve Factors in Eight Runs . . . 64
  Tuning Options for DOE . . . 66

5 Screening Designs
  Screening Design Types . . . 71
    Two-Level Full Factorial . . . 71
    Two-Level Fractional Factorial . . . 71
    Plackett-Burman Designs . . . 72
    Mixed-Level Designs . . . 72
    Cotter Designs . . . 73
  A Screening Example . . . 74
    Two-Level Design Selection and Description . . . 74
    Display and Modify Design . . . 76
    Output Options for the JMP Design Table . . . 78
    The Design Data Table . . . 79
  Loading and Saving Responses and Factors (Optional) . . . 80
  A Simple Effect Screening Analysis . . . 81
    Main Effects Report Options . . . 81
    The Actual-by-Predicted Plot . . . 82
    The Scaled Estimates Report . . . 83

6 Response Surface Designs
  The Response Surface Design Dialog . . . 87
    The Design Table . . . 88
    Axial Scaling Options . . . 88
    A Central Composite Design . . . 89
    Fitting the Model . . . 90
  A Box-Behnken Design: The Tennis Ball Example . . . 91
    Geometry of a Box-Behnken Design . . . 93
    Analysis of Response Surface Models . . . 94

7 Space Filling Designs
  Introduction to Space Filling Designs . . . 101
  Sphere-Packing Method . . . 101
    A Graphical View of the Sphere-Packing Design . . . 101
  Latin Hypercube Method . . . 103
    Understanding the Latin Hypercube Design . . . 103
    A Graphical View of the Latin Hypercube . . . 104
  Uniform Design Method . . . 106
  Comparison of Methods . . . 106
  Borehole Model Example . . . 108
    Creating a Sphere Packing Design for the Borehole Problem . . . 108
    Guidelines for the Analysis of Deterministic Data . . . 110
    Results of the Borehole Experiment . . . 110

8 Full Factorial Designs
  The Factorial Dialog . . . 117
  The Five-Factor Reactor Example . . . 117

9 Taguchi Designs
  The Taguchi Design Approach . . . 127
  Taguchi Design Example . . . 127
  Analyze the Byrne-Taguchi Data . . . 129

10 Mixture Designs
  The Mixture Design Dialog . . . 135
  Mixture Designs . . . 136
    Simplex Centroid Design . . . 136
    Simplex Lattice Design . . . 138
    Extreme Vertices . . . 139
  Extreme Vertices Design for Constrained Factors . . . 140
    Adding Linear Constraints to Mixture Designs . . . 141
    Details on Extreme Vertices Method for Linear Constraints . . . 142
  Ternary and Tetrary Plots . . . 142
  Fitting Mixture Designs . . . 143
    Whole Model Test and Anova Report . . . 144
    Response Surface Reports . . . 144
  Chemical Mixture Example . . . 145
  Plotting a Mixture Response Surface . . . 146

11 Augmented Designs
  The Augment Design Interface . . . 151
    Replicate Design . . . 152
    Add Centerpoints . . . 153
    Fold Over . . . 153
    Add Axial . . . 154
  The Reactor Example Revisited—D-Optimal Augmentation . . . 155
    The Augmented Design and its Model . . . 157
    Analyze the Augmented Design . . . 157

12 Prospective Power and Sample Size
  Prospective Power Analysis . . . 165
  One-Sample and Two-Sample Means . . . 166
    Single-Sample Mean . . . 167
    Power and Sample Size Animation for a Single Sample . . . 169
    Two-Sample Means . . . 170
  k-Sample Means . . . 170
  One-Sample Variance . . . 171
  One-Sample and Two-Sample Proportions . . . 172
  Counts per Unit . . . 173
  Sigma Quality Level . . . 174

References
Index
Credits and Acknowledgments

Origin
JMP was developed by SAS Institute Inc., Cary, NC. JMP is not a part of the SAS System, though portions of JMP were adapted from routines in the SAS System, particularly for linear algebra and probability calculations. Version 1 of JMP went into production in October 1989.

Credits
JMP was conceived and started by John Sall. Design and development were done by John Sall, Chung-Wei Ng, Michael Hecht, Richard Potter, Brian Corcoran, Annie Dudley Zangi, Bradley Jones, Craige Hales, Chris Gotwalt, Paul Nelson, and Wenjie Bao.

In the SAS Institute Technical Support division, Ryan Gilmore, Wendy Murphrey, Toby Trott, Peter Ruzsa, Rosemary Lucas, and Susan Horton provide technical support and conduct test site administration. Statistical technical support is provided by Craig DeVault, Duane Hayes, Elizabeth Edwards, and Kathleen Kiernan.

Nicole Jones, Jianfeng Ding, Jim Borek, Kyoko Tidball, and Hui Di provide ongoing quality assurance. Additional testing and technical support are done by Noriki Inoue, Kyoko Takenaka, and Masakazu Okada from SAS Japan. Bob Hickey is the release engineer.

The JMP manuals were written by Ann Lehman, Lee Creighton, John Sall, Bradley Jones, Erin Vang, and Meredith Blackwelder, with contributions from Annie Dudley Zangi and Brian Corcoran. Editing, creative services, and production were done by SAS Publications. Melanie Drake implemented the help system.

Thanks also to Georges Guirguis, Warren Sarle, Gordon Johnston, Duane Hayes, Russell Wolfinger, Randall Tobias, Robert N. Rodriguez, Ying So, Warren Kuhfeld, George MacKensie, Bob Lucas, Mike Leonard, and Padraic Neville for statistical R&D support.

Acknowledgments
We owe special gratitude to the people who encouraged us to start JMP, to the alpha and beta testers of JMP, and to the reviewers of the documentation.
In particular we thank Michael Benson, Howard Yetter (d), Andy Mauromoustakos, Xan Gregg, Al Best, Stan Young, Robert Muenchen, Lenore Herzenberg, Ramon Leon, Tom Lange, Homer Hegedus, Skip Weed, Michael Emptage, Pat Spagan, Paul Wenz, Mike Bowen, Lori Gates, Georgia Morgan, David Tanaka, Zoe Jewell, Sky Alibhai, David Coleman, Linda Blazek, Michael Friendly, Joe Hockman, Frank Shen, J.H. Goodman, David Iklé, Lou Valente, Barry Hembree, Dan Obermiller, Jeff Sweeney, Lynn Vanatta, and Kris Ghosh. Also, we thank Dick DeVeaux, Gray McQuarrie, Robert Stine, George Fraction, Avigdor Cahaner, José Ramirez, Gudmunder Axelsson, Al Fulmer, Cary Tuckfield, Ron Thisted, Nancy McDermott, Veronica Czitrom, and Tom Johnson. We also thank the following individuals for expert advice in their statistical specialties: R. Hocking and P. Spector for advice on effective hypotheses; Robert Mee for screening design generators; Jason Hsu for advice on multiple comparisons methods (not all of which we were able to incorporate in JMP); Ralph
O’Brien for advice on homogeneity of variance tests; Ralph O’Brien and S. Paul Wright for advice on statistical power; Keith Muller for advice on multivariate methods; Harry Martz, Wayne Nelson, Ramon Leon, Dave Trindade, and Paul Tobias for advice on reliability plots; Lijian Yang and J.S. Marron for bivariate smoothing design; George Milliken and Yurii Bulavski for development of mixed models; Will Potts and Cathy Maahs-Fladung for data mining; Clay Thompson for advice on contour plotting algorithms; and Tom Little, Blanton Godfrey, Tim Clapp, and Joe Ficalora for advice in the area of Six Sigma.

For sample data, thanks to Patrice Strahle for Pareto examples, the Texas Air Control Board for the pollution data, and David Coleman for the pollen (eureka) data.

Translations
Erin Vang coordinated localization. Noriki Inoue, Kyoko Takenaka, and Masakazu Okada of SAS Japan were indispensable throughout the project. Special thanks to Professor Toshiro Haga (retired, Science University of Tokyo) and Professor Hirohiko Asano (Tokyo Metropolitan University). Finally, thanks to all the members of our outstanding translation teams.

Past Support
Many people were important in the evolution of JMP. Special thanks to Jeffrey Perkinson, David DeLong, Mary Cole, Kristin Nauta, Aaron Walker, Ike Walker, Eric Gjertsen, Dave Tilley, Ruth Lee, Annette Sanders, Tim Christensen, Jeff Polzin, Eric Wasserman, Charles Soper, Yusuke Ono, and Junji Kishimoto. Thanks to SAS Institute quality assurance by Jeanne Martin, Fouad Younan, and Frank Lassiter. Additional testing for Versions 3 and 4 was done by Li Yang, Brenda Sun, Katrina Hauser, and Andrea Ritter. Also thanks to Jenny Kendall, John Hansen, Eddie Routten, David Schlotzhauer, and James Mulherin. Thanks to Steve Shack, Greg Weier, and Maura Stokes for testing JMP Version 1.

Thanks for support from Charles Shipp, Harold Gugel (d), Jim Winters, Matthew Lay, Tim Rey, Rubin Gabriel, Brian Ruff, William Lisowski, David Morganstein, Tom Esposito, Susan West, Chris Fehily, Dan Chilko, Jim Shook, Ken Bodner, Rick Blahunka, Dana C. Aultman, and William Fehlner.

Technology License Notices
JMP for the Power Macintosh was compiled and built using the CodeWarrior C compiler from Metrowerks Inc.
1 Introduction
Design of Experiments (DOE)

The use of statistical methods in industry is increasing. Arguably, the most cost-beneficial of these methods for quality and productivity improvement is statistical design of experiments. A trial-and-error search for the vital few factors that most affect quality is costly and time-consuming. Fortunately, researchers in the field of experimental design have invented powerful and elegant ways of making the search process fast and effective. The DOE platform in JMP is a tool for creating designed experiments and saving them in JMP data tables. JMP supports two ways to make a designed experiment:
• The first way is to let JMP build a custom design that both matches the description of your engineering problem and remains within your budget for time and material. Custom designs are general and flexible, and they are also good for routine factor screening or response optimization. For problems that are not textbook, custom designs are the only alternative. To create these tailor-made designs, select DOE > Custom Design or DOE > Augment Design.
• The second way is to choose a pre-formulated design from a list. This is useful when you know exactly the design you want. JMP groups designs by problem type and research goal. To choose the design you want from a list, select DOE > Screening Design, DOE > Response Surface Design, DOE > Taguchi Design, or DOE > Mixture Design.
This chapter briefly describes each of the design types, shows how to use the DOE dialog to enter your factors and responses, and points out the special features of a JMP design data table.
DOE Choices
The DOE platform in JMP is an environment for describing the factors, responses and other specifications, creating a designed experiment, and saving it in a JMP table. When you select the DOE tab on the JMP Starter window, you see the list of design command buttons shown on the tab page as in Figure 1.1. Alternatively, you can choose commands from the DOE main menu. Figure 1.1 The DOE JMP Starter Tab
Note that the DOE tab in the JMP Starter window tells what each command does. The specific design types are described briefly in the next sections, and covered in detail by the following chapters in this book.
Custom Design

Custom designs give the most flexibility of all design choices. Using the Custom Designer, you can select from the following with complete generality:
• continuous factors
• categorical factors with arbitrary numbers of levels
• mixture ingredients
• covariates (factors whose values are already fixed, so the design must be built around them)
• blocking with arbitrary numbers of runs per block
• interaction terms and polynomial terms for continuous factors
• inequality constraints on the factors
• choice of the number of experimental runs, which can be any number greater than or equal to the number of terms in the model.
After you specify all your requirements, the Custom Designer generates a D-optimal design that meets them.
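D-optimality has a concrete meaning: among candidate designs, prefer the one whose model matrix X maximizes det(X′X), which shrinks the variance of the coefficient estimates. The sketch below is not JMP's actual search algorithm (the Custom Designer uses a randomized exchange search); it only illustrates the criterion, for a hypothetical two-factor model with interaction, by comparing two four-run candidates:

```python
import numpy as np
from itertools import product

def d_criterion(X):
    """det(X'X) -- larger is better under the D-optimality criterion."""
    return np.linalg.det(X.T @ X)

def model_matrix(runs):
    # intercept, two main effects, and their interaction: 1, x1, x2, x1*x2
    return np.array([[1.0, x1, x2, x1 * x2] for x1, x2 in runs])

# Candidate A: the 2^2 factorial (corners of the coded [-1, 1] square)
corners = list(product([-1.0, 1.0], repeat=2))
# Candidate B: four runs crowded halfway toward the center
crowded = [(-0.5, -0.5), (0.5, -0.5), (-0.5, 0.5), (0.5, 0.5)]

det_corners = d_criterion(model_matrix(corners))   # 256: spread-out runs win
det_crowded = d_criterion(model_matrix(crowded))   # 1
```

Pushing runs to the extremes of the factor region maximizes the determinant, which is why optimal designs for low-order models tend to sit on the boundary.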
Screening Design

As the name suggests, screening experiments “separate the wheat from the chaff”: the wheat is the group of factors having a significant influence on the response, and the chaff is the rest. Typically, screening experiments involve many factors. The Screening designer supplies a list of popular screening designs for two or more factors. Screening factors can be continuous or categorical, with two or three levels. The list of screening designs also includes designs that group the experimental runs into blocks of equal size, where the size is a power of two.
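JMP supplies its screening designs from built-in lists, so nothing below reflects how JMP stores them; purely to illustrate how a regular two-level fractional factorial (the workhorse screening design) is constructed, here is a hand-rolled sketch that builds a half fraction by aliasing a new factor with an interaction of the base factors (the function name and generator choice are illustrative):

```python
from itertools import product

def fractional_factorial(k_base, generator_cols):
    """Full two-level factorial in k_base factors, plus extra factors
    whose columns are products of the listed base-factor columns."""
    runs = []
    for base in product([-1, 1], repeat=k_base):
        extra = []
        for cols in generator_cols:
            v = 1
            for c in cols:
                v *= base[c]
            extra.append(v)
        runs.append(tuple(base) + tuple(extra))
    return runs

# A 2^(4-1) design: factors A, B, C plus D = ABC (defining relation I = ABCD),
# giving 8 runs instead of the 16 a full 2^4 factorial would need.
design = fractional_factorial(3, [(0, 1, 2)])
```

The price of the smaller run count is aliasing: here D is confounded with the ABC interaction, which is the trade screening designs deliberately make.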
Response Surface Design

Response Surface Methodology (RSM) is an experimental technique invented to find the optimal response within specified ranges of the factors. These designs can fit a second-order prediction equation for the response; the quadratic terms in the equation model the curvature in the true response function. If a maximum or minimum exists inside the factor region, RSM can find it. In industrial applications, RSM designs usually involve a small number of factors, because the required number of runs increases dramatically with the number of factors. The Response Surface designer in JMP lists well-known RSM designs for two to eight continuous factors. Some of these designs also allow blocking.
Full Factorial Design

A full factorial design contains all possible combinations of a set of factors. This is the most conservative design approach, but it is also the most costly in experimental resources. The Full Factorial designer supports both continuous factors and categorical factors with arbitrary numbers of levels.
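The run count grows multiplicatively: a design whose factors have n1, n2, … levels needs n1 × n2 × … runs, which is why full factorials become expensive quickly. A minimal enumeration sketch (illustrative only; JMP builds the design table for you):

```python
from itertools import product

def full_factorial(levels):
    """All combinations of the given factor levels, one tuple per run."""
    return list(product(*levels))

runs = full_factorial([["A", "B"],    # categorical factor, 2 levels
                       [-1, 0, 1],    # continuous factor, 3 levels
                       [10, 20]])     # continuous factor, 2 levels
# 2 * 3 * 2 = 12 runs
```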
Taguchi Arrays

The goal of the Taguchi Method is to find control factor settings that generate acceptable responses despite natural environmental and process variability. In each experiment, Taguchi’s design approach employs two designs called the inner and outer array; the Taguchi experiment is the cross product of these two arrays. The control factors, used to tweak the process, form the inner array. The noise factors, associated with process or environmental variability, form the outer array. Taguchi’s Signal-to-Noise Ratios are functions of the observed responses over an outer array. The Taguchi designer in JMP supports all these features of the Taguchi method. The inner and outer array design lists use the traditional Taguchi orthogonal arrays such as L4, L8, and L16.
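This chapter does not give the Signal-to-Noise formulas, but the standard textbook versions are short; the sketch below shows two common cases (the option names JMP uses may differ, and which ratio applies depends on whether you want the response maximized or held at a target):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi S/N ratio when larger responses are better:
    -10 * log10(mean(1 / y_i^2))."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

def sn_nominal_is_best(ys):
    """S/N ratio when the target is a nominal value:
    10 * log10(ybar^2 / s^2)."""
    n = len(ys)
    ybar = sum(ys) / n
    s2 = sum((y - ybar) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(ybar**2 / s2)
```

Each ratio is computed across the outer-array (noise) replicates of one inner-array run, and the control settings that maximize the ratio are preferred.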
Mixture Design
The Mixture designer lets you define a set of factors that are ingredients in a mixture. You choose among several classical mixture design approaches, such as simplex, extreme vertices, and lattice. For the extreme vertices approach you can supply a set of linear inequality constraints limiting the geometry of the mixture factor space.
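For concreteness, a {q, m} simplex-lattice design consists of every mixture whose q ingredient proportions are multiples of 1/m and sum to 1. A small enumeration sketch of that definition (illustrative only; JMP generates these designs for you):

```python
from itertools import combinations_with_replacement

def simplex_lattice(q, m):
    """{q, m} simplex-lattice: all mixtures of q ingredients whose
    proportions are multiples of 1/m and sum to 1."""
    points = set()
    # distribute m "units" of proportion among the q ingredients
    for combo in combinations_with_replacement(range(q), m):
        point = [0] * q
        for idx in combo:
            point[idx] += 1
        points.add(tuple(p / m for p in point))
    return sorted(points)

pts = simplex_lattice(3, 2)   # 3 ingredients, proportions in {0, 1/2, 1}: 6 blends
```

Every design point is a legitimate recipe: the proportions always total exactly 1, which is what distinguishes mixture factors from ordinary continuous factors.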
Augment Design

The Augment designer gives the following five choices for adding new runs to an existing design:
• replicate the design a specified number of times
• add center points
• create a foldover design
• add axial points together with center points to transform a screening design into a response surface design
• add runs to the design (augment) using a model, which can have more terms than the original model.
Adding runs to a design is particularly powerful. You can use this choice to achieve the objectives of response surface methodology by changing a linear model to a full quadratic model and adding the necessary number of runs. For example, suppose you start with a two-factor, two-level, four-run design. If you add quadratic terms to the model and five new points, JMP generates the 3-by-3 full factorial as the optimal augmented design.
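Of these choices, the foldover is the simplest to state: append a mirror image of the existing runs with every factor level reversed, which frees the main effects from aliasing with two-factor interactions. A sketch under the usual −1/+1 coding (the starting design below is a hypothetical four-run half fraction with C = AB):

```python
def fold_over(design):
    """Augment a two-level design with its mirror image (all signs reversed)."""
    return design + [tuple(-x for x in run) for run in design]

# A 4-run fraction for 3 factors in which C is aliased with AB
original = [(-1, -1, 1), (1, -1, -1), (-1, 1, -1), (1, 1, 1)]
augmented = fold_over(original)
# The 8 combined runs form the complete 2^3 factorial, so the aliasing is broken.
```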
Sample Size and Power Use the Sample Size and Power facility to answer the question "How many runs do I need?" The important quantities are sample size, power, and the magnitude of the effect. These depend on the significance level, alpha, of the hypothesis test for the effect and on the standard deviation of the noise in the response. You can supply one or two of the three important quantities. If you supply only one value, the result is a plot of the other two. If you supply two values, the Sample Size and Power feature computes the third. This capability is available for the single-sample, two-sample, and k-sample situations.
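The relationship among these quantities can be illustrated with a standard normal-approximation power calculation (a sketch using only the Python standard library; it is not the algorithm JMP uses, and the numbers below are invented):

```python
from statistics import NormalDist

def power_one_sample(n, delta, sigma=1.0, alpha=0.05):
    """Approximate power of a two-sided one-sample z test. Supply sample
    size and effect magnitude; this returns the third quantity, power."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = delta * n**0.5 / sigma   # effect size in standard-error units
    # The far rejection tail is ignored, as usual in this approximation.
    return NormalDist().cdf(ncp - z)

# 16 runs, effect of 0.75 noise standard deviations, alpha = 0.05:
print(round(power_one_sample(n=16, delta=0.75), 3))  # about 0.851
```

Fixing any two of sample size, effect magnitude, and power determines the third, which is exactly the trade-off the Sample Size and Power dialog explores.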
A Simple DOE Example The following example demonstrates the interface for choosing designs from a list. It introduces the JMP DOE dialog that lets you
• enter factors and responses
• choose a design
• modify a design
• generate a JMP table that contains the design runs.
Suppose an engineer wants to investigate a process that uses an electron beam welding machine to join two parts. The engineer fits the two parts into a welding fixture that holds them snugly together. A voltage applied to a beam generator creates a stream of electrons that heats the two parts, causing them to fuse. The ideal depth of the fused region is 0.17 inches. The engineer wants to study the welding process to determine the best settings for the beam generator to produce the desired depth in the fused region. For this study, the engineer wants to explore the following three inputs, which are the factors for the study:
Operator, technicians who operate the welding machine.
Rotation Speed, which is the speed at which the part rotates under the beam.
Beam Current, which is a current that affects the intensity of the beam.
After each processing run, the engineer cuts the part in half. This reveals an area where the two parts have fused. The length of this fused area is the depth of penetration of the weld. This depth of penetration is the response for the study. The goals of the study are to
• find which factors affect the depth of the weld
• quantify those effects
• find specific factor settings that predict a weld depth of 0.17 inches.
The next sections show how to define this study in JMP with the DOE dialog.
The DOE Dialog When you first select any command from the DOE menu, the DOE dialog appears. It has two basic panels, as illustrated by the dialog shown in Figure 1.2.
• The Responses panel has a single default response. You can enter as many responses as you want, and designate response goals as Maximize, Minimize, or Match Target. A response may also have no defined goal. The DOE platform accepts only numeric responses.
• The Factors panel requires that you enter one or more factors. The appearance of the Factors panel depends on the DOE command you select. For the 2-level design panel shown in Figure 1.2, enter the number of Continuous, 2-Level, or 3-Level factors you want and click Add. Factor panels for other types of design are shown in more detail in the following chapters that describe the specific design types.
The results when you click Continue depend on the type of design. There are examples of each design type in the chapters that follow. For simplicity, this example uses the Screening designer.
Figure 1.2 The DOE Design Experiment Dialog for a Screening Design. Callouts in the figure: in the Responses panel, enter responses, edit response names, and define each response goal (Target, Min, Max, or None); in the Factors panel, enter the number of factors, click Add, and edit factor names; click Continue to see the available designs.
Entering Responses By default, the Responses panel in the DOE dialog appears with one response (named Y) that has Maximize as its goal. There are several things you can do in this panel:
Add New Responses Add a response with a specific goal type using selections from the Add Response popup menu. Or, click the N Responses button and enter the number of responses you want in a dialog. Responses created with the N Responses button have a goal type of Match Target by default.
Specify Goal Type To specify or change the goal type of a response, click the goal text area for that response and select from the popup menu that appears, as shown in Figure 1.3.
Note: The Responses and Factors panels have disclosure buttons so that you can close them. This lets you simplify the dialog when you are ready to Continue.
Figure 1.3 Setting Goals and Specific Target Values
• For responses such as strength or yield, the best value is the largest possible. A goal of Maximize supports this objective. The Lower Limit is a response value corresponding to a Desirability value of 0.02. The Upper Limit is a response value corresponding to a Desirability value of 0.98.
• The Minimize goal supports the objective of the smallest value, such as when the response is impurity or defects. The Lower Limit is a response value corresponding to a Desirability value of 0.98. The Upper Limit is a response value corresponding to a Desirability value of 0.02.
• The Match Target goal supports the objective when the best value for a response is a specific target value, such as with part dimensions. The Lower Limit and the Upper Limit are both response values corresponding to a Desirability value of 0.02. The default target value is assumed to be midway between the lower and upper limits; its Desirability value is 1.0. You can alter the default target after you make a table from the design by using the Column Info dialog for the response. Because the response variable was created by the DOE platform, a text box shows on the Column Info dialog for you to enter an asymmetric target value.
Assign Importance Optionally, you can specify an importance value for each response. Importance is the weight of each response in computing the overall desirability. If there is only one response, importance is unnecessary. With two responses, you can give greater weight to one response by assigning it a higher importance value.
Example To continue with the welding example, open the Responses panel if it is not already showing. Note that there is a single default response called Y. Change the default response as follows:
1 Double-click to highlight the response name and change it to Depth (In.).
2 The default goal for the single default response is Maximize, but this process has a target value of 0.17 inches with a lower bound of 0.12 and an upper bound of 0.22. Click the Goal text edit area and choose Match Target from the popup menu, as shown in Figure 1.3.
3 Click the Lower Limit text edit area and enter 0.12 as the lower limit (minimum acceptable value). Then click the Upper Limit text edit area and enter 0.22 as the upper limit (maximum acceptable value).
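The Match Target limits above can be pictured with a small desirability sketch. JMP's actual desirability curves are smooth; this illustrative piecewise-linear version only mimics the documented anchor points (limits map to 0.02, target maps to 1.0) using the welding example's values:

```python
def match_target_desirability(y, lower=0.12, target=0.17, upper=0.22):
    """Illustrative piecewise-linear desirability for a Match Target goal:
    the lower and upper limits map to 0.02 and the target maps to 1.0."""
    if y <= lower or y >= upper:
        return 0.02
    if y <= target:
        # rise from 0.02 at the lower limit to 1.0 at the target
        return 0.02 + 0.98 * (y - lower) / (target - lower)
    # fall from 1.0 at the target back to 0.02 at the upper limit
    return 1.0 - 0.98 * (y - target) / (upper - target)

print(match_target_desirability(0.17))  # 1.0 at the target depth
print(match_target_desirability(0.12))  # 0.02 at the lower limit
```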
Entering Factors
Next, enter factors into the Factors panel, which shows beneath the Responses panel. Design factors have different roles that depend on the design type, and the Factors panel reflects roles appropriate for the design you choose. The Screening designer accepts either continuous or categorical factors. The example shown in Figure 1.4 has one categorical factor (Operator) and two continuous factors (Speed and Current). Enter 1 in the 2-Level Categorical text box and click Add. Then enter 2 in the Continuous text box and click Add. These three factors first appear with default names (X1, X2, and X3) and the default values shown in Figure 1.4.
Figure 1.4 Screening Design with Two Continuous and One Categorical Factor
The factor names and values are editable fields. Double-click these fields to enter new names and values. For this example, use Mary and John as values for the categorical factor called Operator. Name the continuous factors Speed and Current. The low and high values for Speed are 3 and 5 rpm; the values for Current are 150 and 165 amps. After you enter the response and the factors, and edit their values (optional), click Continue.
Select a Design Type When you click Continue, the next section of the design dialog unfolds. The Choose a Design panel, like the one shown in Figure 1.5, is specific to the Screening designer. Other design types work differently at this stage. Details for each are in the following chapters.
Figure 1.5 List of Screening Designs for Two Continuous and One Categorical Factors
To reproduce this example, click Full Factorial in the list of designs to select it. The next section discusses additional steps you take in the DOE dialog to give JMP special instructions about details of the design. If necessary, you can return (Back) to the list of designs and select a different design. After you select a design type, click Continue again and interact with the Display and Modify Design panel to tailor the design. These detail options are different for each type of design.
Modify a Design Special features for screening designs include the ability to list the Aliasing of Effects, Change Generating Rules for aliasing, and view the Coded Design. A standard feature for all designs lets you specify the Run Order with selections from the run order popup menu (Figure 1.6). These features are used in examples and discussed in detail in the following chapters.
Figure 1.6 Select the Order of Design Runs
When the design details are complete, click Make Table to create a JMP table that contains the specified design.
Note: All dialogs have a Back button that returns you to the previous stage of the design generation, where you can change the design type selection.
The JMP DOE Data Table The example in the discussion above is for a factorial design with one 2-level categorical and two continuous factors. When you click Make Table, the JMP table in Figure 1.7 appears. The table uses the names for responses, factors, and levels assigned in the DOE dialog panels. The Pattern variable shows the coded design runs. This data table is called DOE Example 1.jmp in the Design Experiment folder in the sample data. Figure 1.7 The Generated DOE JMP Data Table
The data table panels show table properties automatically created by the DOE platform:
• The name of the table is the design type that generated it.
• A table variable called Design also shows the design type. You can edit this table variable to further document the table, or you can create new table variables.
• A script to generate the analysis model is saved with the table. The table property labeled Model contains a script that generates a Model Specification dialog (Figure 1.8) with the analysis specification for the design type you picked. In this example, the Model Specification dialog shows a single response, Depth (In.); three main effects, Operator, Speed, and Current; and all two-factor interactions.
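A Pattern-style column like the one in the generated table can be reconstructed with a short sketch (plain Python; the coding convention shown here — a digit for a categorical level and -/+ for low/high continuous settings — is an assumption for illustration and may differ in detail from JMP's exact output):

```python
from itertools import product

# The 8-run full factorial from the welding example: one 2-level
# categorical factor and two continuous factors.
operators = ["Mary", "John"]
speeds = [3, 5]        # rpm
currents = [150, 165]  # amps

rows = []
for op, sp, cur in product(operators, speeds, currents):
    pattern = (
        str(operators.index(op) + 1)                 # categorical level number
        + ("-" if sp == min(speeds) else "+")        # low/high Speed
        + ("-" if cur == min(currents) else "+")     # low/high Current
    )
    rows.append({"Pattern": pattern, "Operator": op, "Speed": sp, "Current": cur})

print(len(rows))           # 2 * 2 * 2 = 8 runs
print(rows[0]["Pattern"])  # 1--: first operator, low speed, low current
```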
Figure 1.8 The Model Specification dialog Generated by the DOE Dialog
DOE Utility Commands
The DOE dialog has a number of efficiency features and other utility commands accessible from the popup menu on the Design Experiment title bar. The available commands vary depending on the design platform you choose. Most of these features are for saving and loading information about variables, which is handy when you plan several experiments using the same factors and responses. There are examples of each feature in the list below. Many of the DOE case studies later in this manual also show how to benefit from these utilities.
Save Responses The Save Responses command creates a JMP table from a completed DOE dialog. The table has a row for each response, with a column called Response Name that identifies them. Four additional columns identify response goals to the DOE facility: Lower Limit, Upper Limit, Response Goal, and Importance weight. The example in Figure 1.9 shows a DOE dialog for four responses with a variety of response goals, and the JMP table that is created when you use the Save Responses command.
Figure 1.9 Save DOE Responses in a JMP Data Table
Load Responses If the responses and response goals are in a JMP table, as described previously, you can use that table to complete the DOE dialog for an experiment. When the responses table you want is open and is the current table, the Load Responses command copies the response names and goals into the DOE dialog. If there is no response table open, Load Responses displays the Open File dialog for you to open the table you want to use.
Save Factors If an experiment has many factors, it can take time to enter the names and values for each factor. After you finish, you can use the Save Factors command to save your work, so you only have to do this job once. The Save Factors command creates a JMP data table that contains the information in a completed factor list. The table has a column for each factor and a row for each factor level. As an example, suppose you entered the information shown at the top in Figure 1.10. Save Factors produces the data table shown beneath the dialog in Figure 1.10.
Figure 1.10 Save DOE Factors in a JMP Data Table
The columns of this table have a Column Property called Design Role that identifies them as DOE factors to the DOE facility and tells what kind of factors they are (continuous, categorical, blocking, and so on). You can also create a factors table by keying data into an empty table, but then you have to assign each column its factor type. Use the New Property menu in the Column Info dialog and select Design Role. Then choose the appropriate design role from the popup menu on the design role column property tab page.
Load Factors If the factors and levels for an experiment are in a JMP table, as described previously, you can use that table to complete the DOE dialog for an experiment. If the factors table you want is open and is the current table, the Load Factors command copies the factor names, values, and factor types into the DOE dialog. If there is no factor table open, Load Factors displays the Open File dialog for you to open the factors table you want to use.
Save Constraints Entering constraints on continuous factors is another example of work you only want to do once. In the next example, there are three variables, X1, X2, and X3, with three linear constraints. The Save Constraints command creates a JMP table that contains the information you enter into a constraints panel. There is a column for each constraint. Each has a column property called Constraint State that identifies it as a 'less than' or a 'greater than' constraint to the DOE facility. There is a row for each variable and an additional row that has the inequality condition for each variable.
Load Constraints If factors have been entered into a DOE dialog and the constraints for those factors are in a JMP table, as described previously, you can use that table to complete the DOE dialog for an experiment. When the constraints table is open and is the current table, the Load Constraints command copies the constraints into the DOE dialog. If there is no constraints table open, Load Constraints displays the Open File dialog for you to open the table you want to use.
Set Random Seed The Custom designer begins the design process with a random number. After a design is complete, the Set Random Seed command displays a dialog that shows the generating seed for that design. On this dialog you can set that design to run again, or continue with a new random number.
Simulate Responses When you check Simulate Responses, that item shows as checked for the current design only. It adds simulated response values to the JMP design data table for custom and augmented designs.
Show Diagnostics For Custom designs only, the Show Diagnostics command displays a table with relative D-, G-, and A-efficiencies. The diagnostics report shows beneath the model in the Model panel. D-efficiency, the default objective, is a volume criterion on the generalized variance of the estimates:

D-efficiency = 100 (|X′X|^(1/p)) / N_D
A-efficiency = 100 p / trace(N_D (X′X)^(-1))
G-efficiency = 100 sqrt(p / N_D) / σ_M

where N_D is the number of points in the design, p is the number of effects in the model including the intercept, and σ_M is the maximum standard error for prediction over the design points. The A- and G-efficiencies determine an optimal design when multiple starts produce the same D-efficiency.
Figure 1.11 Custom Design Showing Diagnostics
Save X Matrix For Custom designs only, the Save X Matrix command creates a script and saves it as a table property called Design Matrix in the JMP design data table. When this script is run, it creates a global matrix called X and displays its number of rows in the log.
Suppress Cotter Designs For Screening designs only, Suppress Cotter Designs removes Cotter designs from the list of screening designs. The default preference is to suppress Cotter designs. You can change this by using the Preferences dialog to enable Cotter designs.
Optimality Criterion
For Custom designs only, you can modify the design criterion by selecting either D-optimal or I-optimal from the Optimality Criterion submenu. The default criterion for Recommended is D-optimal for all design types unless you click the RSM button.
Number of Starts The Number of Starts command is available for the Custom and Augment Design platforms. When you select this command, a dialog appears with an edit box for you to enter the number of random starts for the design you want to build. The number you enter overrides the default number of starts, which varies depending on the design.
You sometimes need to change the number of starts because one problem with optimal designs is that the methods used to generate them cannot always find the optimal design in cases where the optimal design is known from theory. For example, all orthogonal designs are D-optimal with respect to a linear additive model. As the number of factors and sample size increase, the optimization problem becomes harder, and it is easy for an optimizer to converge to a local optimum instead of a global optimum. Two facts help improve this situation:
• If random starts are used for the optimization, the design produced at the end is not always the same. By increasing the number of random starts, the determinant of the best design found thus far is monotone non-decreasing.
• For designs with all two-level factors, there is a formula for the optimal determinant: if D is the determinant, n is the sample size, and c is the number of columns in the design matrix, then log D = c log n.
If a determinant resulting from a random start matches the formula above, the algorithm stops: the design is D-optimal and orthogonal. Otherwise, JMP continues with new random starts.
The time it takes for one iteration of the algorithm (coordinate exchange) increases roughly as the product of the sample size and the number of terms in the model. The number of terms in the model cannot exceed the sample size, so the time is roughly proportional to the square of the sample size. By doing a large number of random starts for small sample sizes, and reducing this number in proportion to the square of the sample size as the designs get larger, the total time it takes to generate a design is kept roughly constant over the range of usual sample sizes. The default numbers of starts are as follows:
• If the sample size is less than 9, the number of starts is 80.
• If the sample size is between 9 and 16 inclusive, the number of starts is 40.
• If the sample size is between 17 and 24 inclusive, the number of starts is 10.
• If the sample size is between 25 and 32 inclusive, the number of starts is 5.
• If the sample size is greater than 32, the number of starts is 2.
Finally, if the number of runs is a multiple of 4, each factor has only 2 levels, and the model is linear, then the number of starts listed above is multiplied by 4. This modification of the rules puts extra effort toward finding two-level fractional factorial and Plackett-Burman designs (or their equivalents).
Note: To revert to the default number of starts, you must restart JMP. For more information, see "DOE Starts," p. 67 in the "Custom Design: Optimality Criteria and Tuning Options" chapter.
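The default-starts rules described above can be expressed as a small function (a sketch of the documented rules, not JMP's internal code; the function name and flag are hypothetical):

```python
def default_number_of_starts(n_runs, all_two_level_linear=False):
    """Default number of random starts, following the documented schedule:
    more starts for small designs, fewer as the sample size grows."""
    if n_runs < 9:
        starts = 80
    elif n_runs <= 16:
        starts = 40
    elif n_runs <= 24:
        starts = 10
    elif n_runs <= 32:
        starts = 5
    else:
        starts = 2
    # Extra effort toward fractional factorial / Plackett-Burman designs:
    # multiply by 4 when runs are a multiple of 4, all factors have two
    # levels, and the model is linear.
    if all_two_level_linear and n_runs % 4 == 0:
        starts *= 4
    return starts

print(default_number_of_starts(8))                              # 80
print(default_number_of_starts(12, all_two_level_linear=True))  # 160
```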
Sphere Radius Custom designs can be constrained to a hypersphere. The Sphere Radius command is available for the Custom and Augment Design platforms. When you select this command, a dialog appears with an edit box for you to enter the sphere radius for the design in units of the coded factors (–1, 1). The JMP scripting language also supports this command. To use JSL, submit the following statement before you build a custom design: DOE Sphere Radius = 1.0;
In this statement you can replace 1.0 with any positive number.
Disallowed Combinations In addition to linear inequality constraints on continuous factors and constraining a design to a hypersphere, the Disallowed Combinations option lets you define general constraints on the factors. You can disallow any combination of levels of categorical factors. When you select Disallowed Combinations, an edit box appears and prompts for an expression or the name of a script that contains a previously compiled expression. The expression must evaluate to non-zero for disallowed factor combinations.
Note: When forming the expression, use the ordinal value of a level instead of the name of the level. If the level names of the factor called price are 'High,' 'Medium,' and 'Low,' their associated ordinal values are 1, 2, and 3.
For example, in a market research choice experiment you might not want to include a choice that allows all the best features of a product at the lowest price. Suppose Feature and Price are categorical variables with three levels and you want to exclude the combination of the third Feature level (best feature) and the third Price level (lowest price). Select Disallowed Combinations and enter the expression Feature==3 & Price==3 in the edit box.
To submit a script, first submit this JSL: my constraint = expr(Feature==3 & Price==3);
At the prompt for an expression, type my constraint in the edit box. Note: This feature is available for custom and augmented designs but is not supported for experiments with either mixture or blocking factors.
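The effect of such a constraint can be sketched by filtering candidate runs with a predicate (plain Python, not JSL; this only illustrates the idea that a non-zero/true expression excludes a combination):

```python
from itertools import product

# Feature and Price are 3-level categorical factors coded by ordinal
# level (1, 2, 3); the combination Feature==3 & Price==3 is disallowed.
def disallowed(run):
    return run["Feature"] == 3 and run["Price"] == 3

candidates = [{"Feature": f, "Price": p} for f, p in product([1, 2, 3], repeat=2)]
allowed = [run for run in candidates if not disallowed(run)]
print(len(candidates), len(allowed))  # 9 candidate combinations, 8 allowed
```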
Specialized Column Properties
Special properties can be assigned to a column using the Column Info dialog. Most columns have no special properties, but some tables that have experimental design data, or tables generated by the DOE platforms, have columns with specialized DOE properties. You can manually assign properties using the Column Info dialog for a selected column. Figure 1.12 shows the Column Info dialog for a column called Stretch in the Bounce.jmp data table found in the Design Experiments sample data. The Stretch column has two special properties, Role and Response Limits.
Figure 1.12 Column Info Dialog and New Property Menu
All the special column properties are discussed in the JMP User's Guide. The following discussion covers properties specific to the DOE platforms and useful for analyzing DOE data.
Coding The Coding property transforms the data in the range you specify to the coded range from –1 to +1. The Fit Model platform uses the transformed data values to compute parameter estimates in the analysis. This transformation makes tests and parameter estimates more meaningful but does not otherwise affect the analysis. When you select the Coding property in the Column Info dialog, edit boxes appear showing the maximum and minimum values in the data. Although you can assign coding values to a column using the Coding property edit boxes, coding usually exists when the JMP DOE facility generates a design table from values entered into a DOE Factors panel. The Coding property can be used for any continuous variable, and is the default for continuous factors generated by the DOE facility in JMP.
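The coding transformation itself is a simple linear map; a minimal sketch (using the Current factor from the welding example for concreteness):

```python
def code(value, low, high):
    """Transform a factor value on [low, high] to the coded [-1, 1] scale,
    as the Coding column property does for parameter estimation."""
    return 2 * (value - low) / (high - low) - 1

# Current in the welding example ranges from 150 to 165 amps:
print(code(150, 150, 165))    # -1.0 at the low setting
print(code(157.5, 150, 165))  # 0.0 at the midpoint
print(code(165, 150, 165))    # 1.0 at the high setting
```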
Mixture The Mixture property can be assigned to a column if it is one of several factors that form 100% of a mixture. When factors (columns) have the Mixture property, a no-intercept model is automatically generated by the Fit Model platform when the model includes those factors. When you generate a design table from the JMP DOE facility, the Mixture property is assigned to any mixture factors you specified in the design. Figure 1.13 shows the Column Info dialog for a column assigned the Mixture property. Information about the mixture column includes Lower Limit, Upper Limit, Sum of Terms, and check boxes for pseudocomponents. Defaults are 0 for Lower Limit, and 1 for Upper Limit and Sum of Terms. The pseudocomponents are unchecked by default. If pseudocomponent coding options are specified, then when the model is fit, the terms are coded as

XiL = (Xi – Li) / (1 – L) for the L pseudocomponent
XiU = (Ui – Xi) / (U – 1) for the U pseudocomponent

where Li and Ui are the lower and upper bounds for factor Xi, L is the sum of the Li, and U is the sum of the Ui. The Fit Model platform uses L pseudocomponent coding when the L option is set for any mixture factor, and U pseudocomponent coding when the U option is set for any mixture factor. If both are checked for a factor, the Fit Model platform uses the L coding if (1 – L) < (U – 1) and the U coding otherwise. In the output, the main effects are labeled with the coding transformation. Crossed effects are not labeled, but coded values are used. All the features of fitting, such as the profilers and saved formulas, respect the pseudocomponent coding but present the uncoded values in the tables and plots.
Figure 1.13 Column Info Dialog for Column with Mixture Property
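The L pseudocomponent formula can be illustrated with a small sketch (the three-component mixture and its lower bounds below are invented for illustration; components are assumed to sum to 1):

```python
def l_pseudocomponent(x, lower_bounds):
    """L pseudocomponent coding: x_iL = (x_i - L_i) / (1 - L), where
    L = sum(L_i), as in the formula above."""
    L = sum(lower_bounds)
    return [(xi - li) / (1 - L) for xi, li in zip(x, lower_bounds)]

# Three-component mixture with lower bounds 0.2, 0.1, 0.1 (so L = 0.4):
coded = l_pseudocomponent([0.5, 0.3, 0.2], [0.2, 0.1, 0.1])
print([round(c, 3) for c in coded])  # [0.5, 0.333, 0.167]
```

Note that the coded values still sum to 1, so the pseudocomponents form a valid (rescaled) mixture simplex.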
Response Limits The Response Limits property gives fields to enter Lower, Middle, and Upper limits, and Desirability values, for a response column in an experimental design. There is also a menu listing the same selections found in the DOE custom design platform Response panel menu: Maximize, Match Target, Minimize, and None, which are the possible goals for a DOE response variable. These values are usually assigned by the DOE facility when a design table is created.
Design Role The Design Role property provides a menu with selections that tell how a factor column is to be used in a model for a designed experiment. The menu of factor design roles is the same as that found in the DOE custom design platform Factors panel: Continuous, Categorical, Blocking, Covariate, Mixture, Constant, Signal, and Noise. These values are usually assigned by the DOE facility when a design table is created.
Introduction to Custom Designs The DOE platform in JMP has the following two approaches for building an experimental design:
• JMP can build a design for your specific problem that is consistent with your resource budget (create a design to solve a specific problem).
• You can choose a predefined design from one of the design catalogs, which are grouped by problem type (choose from catalogs of listed designs).
In either case, you can then modify any design.
The Custom designer supports the first of these approaches. You can use it for routine factor screening, response optimization, and mixture problems. Also, the Custom designer can find designs for special conditions not covered in the lists of predefined designs. This chapter introduces you to the Custom designer. It shows how to use the Custom Design interface to build a design using this easy step-by-step approach, in which key engineering steps (where process knowledge and engineering judgement are important) alternate with key mathematical steps (where appropriate computer-based tools are empowering):
Describe: identify factors and responses
Design: compute the design for maximum information from the runs
Collect: use the design to set factors; measure the response for each run
Fit: compute the best fit of a mathematical model to the data from the test runs
Predict: use the model to find the best factor settings for on-target responses and minimum variability
2 Contents
Getting Started
  Define Factors in the Factors Panel
  Describe the Model in the Model Panel
  The Design Generation Panel
  The Design Panel and Output Options
  Make Table
Modify a Design Interactively
Introducing the Prediction Variance Profiler
  A Quadratic Model
  A Cubic Model
Routine Screening Using Custom Designs
  Main Effects Only
  All Two-Factor Interactions Involving Only One Factor
  All Two-Factor Interactions
How the Custom Designer Works
Getting Started
The purpose of this chapter is to guide you through the interface of the Custom Design personality. You interact with this facility to describe your experimental situation, and JMP creates a design that fits your requirements. The Custom Design interface has these key steps:
1 Enter and name one or more responses, if needed. The DOE dialog always begins with a single response, called Y, and the Response panel is closed by default.
2 Use the Factors panel to name and describe the types of factors you have.
3 Enter factor constraints, if there are any.
4 Choose a model.
5 Modify the sample size alternatives.
6 Choose the run order.
7 Optionally, add center points and replicates.
You can use the Custom Design dialog to enter main effects, then add interactions, and specify center points and replicates.
Define Factors in the Factors Panel When you select Custom Design from the DOE menu, or from the DOE tab on the JMP Starter, the dialog shown on the right in Figure 2.1 appears. One way to enter factors is to type the number of continuous factors you want into the Add N Factors text edit box. If you want other kinds of factors, click Add Factor and select a factor type: Continuous, Categorical, Blocking, Covariate, Mixture, or Constant. By default, continuous factors enter with two levels, noted –1 and 1. Categorical factors have the number of levels you select in the Categorical submenu. The number of levels for either type of variable can be changed after it is entered into the Factors panel.
• To increase the number of levels, click the icon area to the left of the factor name in the Factors panel and select Add Level.
• To remove a level, click on that level, press the Delete key, and then press Return.
When you finish defining factors, click Continue in the Factors panel to proceed to the next step.
Figure 2.1 Select Custom Design and Enter Factors
Describe the Model in the Model Panel
When you click Continue, the Model panel initially appears with only the main effects corresponding to the factors you entered. If you do not want to limit your model to main effects, you can add factor interactions or powers of continuous factors to the list of effects to estimate.
This simple example has two continuous factors, X1 and X2. When you click Continue, the Model panel appears with only those factors, as shown in Figure 2.2. The Model panel has buttons for adding specific kinds of terms to the model. For example, when you select 2nd from the Interactions popup menu, the X1*X2 interaction term is added to the model effects.
The Design Generation Panel
As you add effects to the model, the Design Generation panel shows the minimum number of runs needed to perform the experiment. It also shows alternate numbers of runs, or lets you choose your own number. Balancing the cost of each run against the information gained by extra runs is a judgment call that you control. The Design Generation panel has the following radio buttons:
• Minimum is the number of terms in the design model. The resulting design is saturated (no degrees of freedom for error). This is an extreme choice that can be risky, and is appropriate only when the cost of extra runs is prohibitive.
• Default is the custom designer's suggestion for the number of runs. This value is based on heuristics for creating balanced designs with a few runs above the minimum.
• Compromise is a second suggestion that is more conservative than the Default. Its value generally falls between Default and Grid.
• Grid, in most cases, shows the number of points in a full-factorial design. Exceptions are mixture and blocking designs. Generally, Grid is unnecessarily large and is included only for reference and comparison.
• User Specified highlights the Number of Runs text box. You key in a number of runs that is at least the minimum.
When the Design Generation panel is the way you want it, click Make Design to see the factor design layout, the Design panel, appended to the Model panel in the DOE dialog.
Figure 2.2 Add Interaction Term to Model
The Design Panel and Output Options
Before you create a JMP data table of design runs, you can use the Run Order option to designate the order you want the runs to appear in the JMP data table when it is created. If you select Keep the Same, the rows (runs) in the JMP table appear as they show in the Design panel. Alternatively, you can sort the table columns or randomize the runs. There are edit boxes to request additional runs at the center points, and to request rows that replicate the design (including any additional center points).
Note: You can double-click any title bar to change its text. It can be helpful to give your design dialog a meaningful name in the title bar, which is labeled Custom Design by default.
Figure 2.3 A Quadratic Model With Two Continuous Factors
Make Table
When the Design panel shows the layout you want, click Make Table. This creates the JMP data table whose rows are the runs you defined. Make Table also updates the runs in the Design panel to match the JMP data table. The table in Figure 2.4 is the initial two-factor design with four additional center points, replicated once.
Modify a Design Interactively
A Back button appears at several stages of the design dialog; it allows you to change your mind, return to a previous step, and modify the design. For example, you can modify the previous design by adding quadratic terms to the model and removing the center points and the replicate. Figure 2.5 shows the steps to modify a design interactively. When you click Continue, the Design panel shows eight runs by default. If you choose the Grid option, the resulting design has nine runs.
Figure 2.4 Design With Four Added Center Points Replicated Once
Figure 2.5 Back up to Interactively Modify a Design
1) Back up and rethink the design.
2) Specify output options: remove center points and replicates.
3) Add quadratic terms to the model.
4) Choose Grid to change the design from 8 runs to 9 runs, and then click Make Design.
5) Click Make Table to create the JMP table.
Introducing the Prediction Variance Profiler All of the listed designs in the other design types require at least two factors. The following examples have a single continuous factor and compare designs for quadratic and cubic models. The purpose of these examples is to introduce the prediction variance profile plot.
A Quadratic Model
Follow the steps in Figure 2.6 to create a simple quadratic model with a single continuous factor:
1 Add one continuous factor and click Continue.
2 Select 2nd from the Powers popup menu in the Model panel to create a quadratic term.
3 Use the default number of runs, 6, and click Make Design.
When the design appears, open the Prediction Variance Profile, as shown in Figure 2.7. For continuous factors, the initial setting is at the mid-range of the factor values. For categorical factors, the initial setting is the first level. If the design model is quadratic, then the prediction variance function is quartic. The three design points are –1, 0, and 1, and the prediction variance profile shows that the variance is a maximum at each of these points on the interval –1 to 1. The Y axis is the relative variance of prediction of the expected value of the response.
Figure 2.7 Design Runs and Prediction Profile for Single Factor Quadratic Model
The prediction variance is relative to the error variance. When the prediction variance is 1, the absolute variance is equal to the error variance of the regression model. What you are deciding when you choose a sample size is how much variance in the expected response you are willing to tolerate. As the number of runs increases, the prediction curve (prediction variance) decreases.
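The relative prediction variance that the profiler plots is the quadratic form x0'(X'X)^-1 x0, where X is the model matrix of the design and x0 is the model expansion of a factor setting. The following sketch (NumPy, not JMP code; the function names are illustrative) reproduces the numbers for this one-factor quadratic example.

```python
import numpy as np

def model_matrix(x):
    """Quadratic model in one factor: intercept, x, x^2."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, x ** 2])

def relative_variance(design, x0):
    """Relative prediction variance x0'(X'X)^-1 x0 at setting x0."""
    X = model_matrix(design)
    XtX_inv = np.linalg.inv(X.T @ X)
    v = model_matrix([x0])[0]
    return float(v @ XtX_inv @ v)

six = [-1, -1, 0, 0, 1, 1]   # default six-run design: each point twice
three = [-1, 0, 1]           # minimum (saturated) three-run design

for d in (six, three):
    worst = max(relative_variance(d, t) for t in np.linspace(-1, 1, 201))
    print(len(d), "runs, max relative variance:", round(worst, 3))
```

With six runs the maximum relative variance over the interval is 0.5; dropping to the saturated three-run design doubles it to 1.0, which is the comparison the profiler shows.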
Figure 2.6 Use One Continuous Factor and Create a Quadratic Model
To compare profile plots, use the Back button and choose Minimum in the Design Generation panel, which gives a sample size of 3. This produces a curve with the same shape as the previous plot, but the maxima are at 1 instead of 0.5. Figure 2.8 compares the plots for sample size 6 (top) and sample size 3 (bottom) for this quadratic model example. You can see the prediction variance increase as the sample size decreases.
Figure 2.8 Comparison of Prediction Variance Profiles (six runs, top; three runs, bottom)
Note: You can Control-click (Command-click on the Macintosh) on the factor to set a factor level precisely.
For a final look at the Prediction Variance Profile for the quadratic model, use the Back button, enter a sample size of 4 in the Design Generation panel, and click Make Design. The sample size of 4 adds a point at –1 (Figure 2.9). Therefore, the variance of prediction at –1 is lower (half the value) than at the other design points. The symmetry of the plot is related to the balance of the factor settings. When the design points are balanced, the plot is symmetric, like those in Figure 2.8; when the design is unbalanced, the prediction plot is not symmetric, as shown in Figure 2.9.
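The halving of the variance at –1 can be checked directly from the same quadratic form. This NumPy sketch (illustrative, not JMP code) evaluates the relative variance of the unbalanced four-run design at the three design points.

```python
import numpy as np

def rel_var(design, x0):
    """Relative prediction variance x0'(X'X)^-1 x0 for a
    one-factor quadratic model (intercept, x, x^2)."""
    d = np.asarray(design, dtype=float)
    X = np.column_stack([np.ones_like(d), d, d ** 2])
    v = np.array([1.0, x0, x0 ** 2])
    return float(v @ np.linalg.inv(X.T @ X) @ v)

# Four runs: the extra run is placed at -1, unbalancing the design.
design = [-1, -1, 0, 1]
for pt in (-1, 0, 1):
    print(pt, rel_var(design, pt))   # 0.5 at -1; 1.0 at 0 and at 1
```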
A Cubic Model
The runs in the quadratic model are equally spaced. This is not true for the single-factor cubic model shown in this section. To create a one-factor cubic model, follow the same steps shown previously in Figure 2.6. In addition, add a cubic term to the model with the Powers popup menu. Use the Default number of runs in the Design Generation panel. Click Make Design, and then open the Prediction Variance Profile to see the profile and its associated design, shown in Figure 2.10. The cubic model has a variance profile that is a 6th-degree polynomial.
Figure 2.10 One-Factor Cubic Design
(unequally spaced points, compared with a design augmented to have equally spaced points)
Note that the points are not equally spaced in X. It is interesting that this design has a better prediction variance profile than the equally spaced design with the same number of runs.
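You can verify this claim numerically. The sketch below (NumPy, not JMP code) compares the maximum relative prediction variance over the interval for the equally spaced four-run cubic design and for the design with interior points near ±0.447 (about 1/sqrt(5), matching the optimal values discussed with the script below).

```python
import numpy as np

def cubic_X(pts):
    """Model matrix for a one-factor cubic model: 1, x, x^2, x^3."""
    p = np.asarray(pts, dtype=float)
    return np.column_stack([p ** k for k in range(4)])

def max_rel_var(pts):
    """Maximum of x0'(X'X)^-1 x0 over a fine grid on [-1, 1]."""
    C = np.linalg.inv(cubic_X(pts).T @ cubic_X(pts))
    G = cubic_X(np.linspace(-1, 1, 401))
    return float(np.max(np.einsum('ij,jk,ik->i', G, C, G)))

equal = [-1, -1/3, 1/3, 1]          # equally spaced four-run design
optimal = [-1, -0.447, 0.447, 1]    # interior points near 1/sqrt(5)

print("equal spacing:  ", max_rel_var(equal))
print("optimal spacing:", max_rel_var(optimal))
# The optimal spacing lowers the maximum variance by more than 10%.
```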
Figure 2.9 Sample Size of Four for the One-Factor Quadratic Model
You can reproduce the plots in Figure 2.10 with JSL. The following script shows graphically that the design with unequally spaced points has a better prediction variance than the equally spaced design. Open the file Cubic Model.jsl, found in the Scripts folder in the Sample Data, and select Run Script from the Edit menu. When the plot appears, drag the free values from the equally spaced points to the optimal points to see that the maximum variance on the interval decreases by more than 10%.

    // DOE for fitting a cubic model.
    n = 4; // number of points
    // Start with equally spaced points.
    u = [-0.333 0.333];
    x = {-1, u[1], u[2], 1};
    y = j(2, 1, .2);
    cubicx = function({x1},
        rr = j(4, 1, 1);
        for(i = 1, i <= 3, i++, rr[i + 1] = x1 ^ i);
        rr;
    );
    NewWindow("DOE - Variance Function of a Cubic Polynomial",
        Graph(
            FrameSize(500, 300), XScale(-1.0, 1.0), YScale(0, 1.2), Double Buffer,
            M = j(n, 1, 1);
            for(i = 1, i <= 3, i++, M = M || (x ^ i));
            V = M` * M;
            C = inverse(V);
            yFunction(xi = cubicx(x); sqrt(xi` * C * xi), x);
            detV = det(V);
            text({-0.3, 1.1}, "Determinant = ", char(detV, 6, 99));
            DragMarker(u, y);
            for(i = 1, i <= 2, i++, text({u[i], .25}, char(u[i], 6, 99)));
        )
    );
    show(n, detV, u); // detV replaces the undefined d in the original listing
    // Drag the middle points to -0.445 and 0.445
    // for a D-optimal design.
Routine Screening Using Custom Designs You can use the Screening designer to create screening designs, but it is not necessary. The straightforward screening examples described next show that ‘custom’ is not equivalent to ‘exotic.’ The Custom designer is a general purpose design environment. As such, it can create screening designs. The first example shows the steps to generate a main-effects-only screening design, an easy design to create and analyze. This is also easy using the Screening designer.
Main Effects Only
First, enter the number of factors you want into the Factors panel and click Continue, as shown in Figure 2.11. This example uses six factors. Because there are no complex terms in the model, no further action is needed in the Model panel. The default number of runs (8) is correct for the main-effects-only model.
Note to DOE experts: The result is a resolution 3 screening design. All main effects are estimable but are confounded with two-factor interactions.
Click Make Design to see the Factor Design table in Figure 2.11.
Figure 2.11 A Main Effects Only Screening Design
The Prediction Variance Profile in Figure 2.12 shows a variance of 0.125 (1/8) at the center of the design, which is the setting shown when you first open the profiler. If you did all of your runs at this point, you would have the same prediction variance, but then you could not make predictions for any other combination of factor settings. The prediction variance profile for each factor is a parabola centered at the mid-range of that factor. The maximum prediction variance occurs at each design point and is equal to p/n, where p is the number of parameters and n is the number of runs.
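Both numbers can be verified with a small NumPy sketch (illustrative, not JMP code). It builds one standard 8-run resolution 3 fraction for six two-level factors; the design JMP generates may differ, but it has the same variance properties.

```python
import numpy as np
from itertools import product

# Build an 8-run, 6-factor resolution 3 design from a 2^3 full
# factorial using the generators X4 = X1*X2, X5 = X1*X3, X6 = X2*X3.
base = np.array(list(product([-1, 1], repeat=3)), dtype=float)
x1, x2, x3 = base.T
design = np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Model matrix: intercept plus 6 main effects (p = 7, n = 8).
X = np.column_stack([np.ones(8), design])
C = np.linalg.inv(X.T @ X)   # all columns orthogonal, so X'X = 8*I

center = np.r_[1.0, np.zeros(6)]
print("variance at center:", float(center @ C @ center))      # 1/8
print("variance at a design point:", float(X[0] @ C @ X[0]))  # 7/8 = p/n
```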
Figure 2.12 A Main Effects Only Screening Design
All Two-Factor Interactions Involving Only One Factor
Sometimes there is reason to believe that some two-factor interactions may be important. The following example illustrates adding all the two-factor interactions that involve one particular factor. The example has five continuous factors.
Note to DOE experts: This design is a resolution 4 design, equivalent to folding over on the factor for which all two-factor interactions are estimable.
To get a specific set of crossed factors (rather than all interactions or response surface terms), select the factor to cross (X1, for example) in the Factors panel. Select the other factors in the Model panel and click Cross to see the interactions in the model table, as shown in Figure 2.13.
The default sample size for designs with only two-level factors is the smallest power of two that is larger than the number of terms in the design model. For example, in Figure 2.13 there are nine terms in the model, so 2^4 = 16 is the smallest power of two that is greater than nine.
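That default can be written as a one-line rule. The helper below is an illustrative sketch, not a JMP function.

```python
def default_runs(n_terms):
    """Smallest power of two strictly greater than the number of
    terms in the design model -- the stated default sample size for
    designs with only two-level factors."""
    runs = 1
    while runs <= n_terms:
        runs *= 2
    return runs

print(default_runs(9))   # 16, as in the Figure 2.13 example
print(default_runs(7))   # 8, as in the main-effects-only example
```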
All Two-Factor Interactions
In situations where there are few factors and experimental runs are cheap, you can run screening experiments that allow for estimating all the two-factor interactions.
Note to DOE experts: The result is a resolution 5 screening design. Two-factor interactions are estimable but are confounded with three-factor interactions.
The custom design interface makes this simple (see Figure 2.14). Enter the number of factors, click Continue, choose 2nd from the Interactions popup in the Model outline, and then click Make Design. Figure 2.14 shows a partial listing of the design with all two-factor interactions. The sample size is both a power of two and large enough to fit all the terms in the model.
Figure 2.13 Two-factor Interactions that Involve Only One of the Factors
Figure 2.14 All Two-Factor Interactions
How the Custom Designer Works
The Custom designer starts with a random design in which each point is inside the range of each factor. The computational method is an iterative algorithm called coordinate exchange (Meyer and Nachtsheim, 1995). Each iteration of the algorithm tests every value of every factor in the design to determine whether replacing that value increases the optimality criterion. If so, the new value replaces the old. Iteration continues until no replacement occurs in an entire pass through the design. To avoid converging to a local optimum, the whole process is repeated several times using different random starts, and the designer displays the best of these designs. For more details, see the section “Tuning Options for DOE,” p. 64 in the “Custom Design: Optimality Criteria and Tuning Options” chapter.
Sometimes a design problem can have several equivalent solutions. Equivalent solutions are designs with equal precision for estimating the model coefficients as a group. When this is true, the design algorithm generates different (but equivalent) designs if you click the Back and Make Design buttons repeatedly.
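A toy version of coordinate exchange can be sketched in a few lines (Python, for illustration only; JMP's implementation supports many factor types, criteria, and efficiency shortcuts that this sketch omits). For two-level factors, "testing every value of every factor" reduces to trying a sign flip in every cell of the design:

```python
import numpy as np

rng = np.random.default_rng(1)

def d_criterion(D):
    """D-criterion: det(X'X) for a main-effects model."""
    X = np.column_stack([np.ones(len(D)), D])
    return float(np.linalg.det(X.T @ X))

def coordinate_exchange(n_runs, n_factors, n_starts=5, max_sweeps=50):
    """Toy coordinate exchange for two-level factors: sweep every
    cell, flip its sign, keep the flip whenever det(X'X) improves;
    restart from several random designs and return the best."""
    best, best_score = None, -np.inf
    for _ in range(n_starts):
        D = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
        score, improved, sweeps = d_criterion(D), True, 0
        while improved and sweeps < max_sweeps:
            improved, sweeps = False, sweeps + 1
            for i in range(n_runs):
                for j in range(n_factors):
                    D[i, j] = -D[i, j]          # candidate exchange
                    s = d_criterion(D)
                    if s > score + 1e-9:
                        score, improved = s, True
                    else:
                        D[i, j] = -D[i, j]      # revert
        if score > best_score:
            best, best_score = D.copy(), score
    return best, best_score

design, det = coordinate_exchange(n_runs=8, n_factors=3)
print("best det(X'X):", det)
```

For 8 runs and 3 factors, det(X'X) is bounded above by 8^4 = 4096, attained by the orthogonal full factorial; the sketch typically finds it from one of the random starts.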
No list of predefined designs has an exact match for every industrial process. To use a prefabricated design, you usually have to modify the process description to suit the design, or make ad hoc modifications to the design so that it does a better job of modeling the process. To use the Custom designer, you first describe process variables and constraints, and then JMP tailors a design that fits. This approach is general and requires less experience and expertise in statistical design of experiments.
The ability to mix factor roles as required by the engineering situation is what makes the Custom Design facility so flexible. The Add Factor popup menu shows the list of roles factors can take. Here is a sample of what you can do:
• You can add factors with any role in any experiment.
• Categorical factors can have as many levels as you need.
• You can specify any number of runs per block.
• Any design can have continuous or categorical covariate factors—factors whose values are fixed in advance of the experiment.
• You can have non-mixture factors in a mixture experiment.
• You can disallow certain regions of the factor space by defining linear inequality constraints.
Once you generate a design, you can use the Prediction Variance Profiler as a diagnostic tool to assess the quality of the design. You can use this tool to compare many candidate designs and choose the one that best meets your needs. This chapter presents several examples with aspects that are common in industry but which make them beyond the scope of any design catalog. It introduces various features of the Custom designer in the context of solving real-world problems.
3 Custom Design: Beyond the Textbook
3 Contents
Custom Situations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Flexible Block Sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Fixed Covariate Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Mixtures with Nonmixture Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Factor Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Custom Situations
When your design situation does not fit a standard design, the Custom designer gives you the flexibility to tailor a design to your specific circumstances. Here are some examples:
• The listed designs in the Screening designer allow only 2-level or 3-level factors. Moreover, the designs that allow blocking limit the block sizes to powers of two. Suppose you are able to do a total of 12 runs and want to complete one block per day. With a block size of two, the experiment takes six days. If you could do three runs a day, it would take only four days instead of six.
• Preformulated designs rely on the assumption that the experimenter controls all the factors. It is common to have quantitative measurements (a covariate) on the experimental units before the experiment begins. If these measures affect the experimental response, the covariate should be a design factor, and a preformulated design that allows only a few discrete values is too restrictive.
• The Mixture designer requires all factors to be mixture components, yet it seems natural to vary the process settings along with the percentages of the mixture ingredients. After all, the optimal formulation could change depending on the operating environment.
• Screening and RSM designs assume it is possible to vary all the factors independently over their experimental ranges. The experimenter might know in advance that running the process at certain specified settings has an undesirable result. Leaving these runs out of an available listed design type destroys the mathematical properties of the design.
The Custom designer can supply a reasonable design for all these examples. Instead of a list of tables, it creates a design table from scratch according to your specifications. Instead of forcing you to modify your problem to conform to the restrictions of a tabled design, it tailors a design to fit your needs. This chapter consists of five examples addressing these custom situations.
Flexible Block Sizes
When you create a design using the Screening designer, the available block sizes for the listed designs are powers of two. Custom designs can have blocks of any size. The blocking example shown in Figure 3.1 is flexible because it uses three runs per block instead of a power of two.
When you first enter the factors, a blocking factor shows only one level, because the sample size is unknown at that point. When you complete the design, the number of blocks is the sample size divided by the number of runs per block. When three continuous factors and one blocking factor with three runs per block are entered, you see the Design Generation panel shown on the right in Figure 3.1. The choice of three runs per block leads to a default sample size of 6 runs. This sample size requires 2 blocks, which now show in the Factors panel. If you choose the Grid option with 24 runs, the Factors panel changes to show 24/3 = 8 blocks.
Figure 3.1 Examples of Blocking Factor Levels
If you add the two-factor interactions formed by the three continuous factors to the design, as shown in the Model panel (Figure 3.2), the default number of runs is 9. When 12 is the user-specified number of runs, three runs per block produce 4 blocks (as shown). The table in the example results from the Randomize within Blocks option in the Run Order popup menu on the Output Options panel.
Figure 3.2 Model and Design Table for a Blocking Factor With Four Levels
The initial Prediction Variance Profile for this design (Figure 3.3) shows that at the center of the design, the block-to-block variance is a constant. This results from the fact that each block has three runs.
If you drag the vertical reference lines in the plots of X1 through X3 to their high value of 1, you see the top plot in Figure 3.4. The bottom plot results from dragging the vertical reference line for X4 to block 4. When the vertical reference line is at block 4, the prediction variance is not constant over the blocks. This is due to an unavoidable lack of balance resulting from the fact that there are three runs in each block, but only two values for each continuous variable. Figure 3.4 Block 1 and Block 4 Prediction Variance at Point (1,1,1)
The main question here is whether the size of the prediction variance over the possible factor settings is acceptably small. If not, adding more runs (up to 15 or 18) will lower the prediction variance traces.
Figure 3.3 Constant Block-to-Block Variance at Design Center

Fixed Covariate Factors
For this example, suppose a group of students is participating in a study. A physical education researcher has proposed an experiment in which you vary the number of hours of sleep (X1) and the calories for breakfast (X2) and ask each student to run 1/4 mile. The weight of each student is known, and it seems important to include this information in the experimental design. To follow along with this example, open the Big Class.jmp sample data table. Build the custom design as follows:
• Add two continuous variables to the model, and name them calories and sleep.
• Select Covariate from the Add Factors popup menu, as shown in Figure 3.5. The Covariate selection displays a list of the variables in the current data table. Select weight from the variable list and click OK. The covariate now shows in the Factors panel.
• Click Continue and add the interaction to the model. To add the interaction, select calories in the Factors panel, select sleep in the Model panel, and then click the Cross button.
Note: If you have more than one data table open, be sure the table that contains the covariate you want is the active table (the current data table).
Figure 3.5 Design with Fixed Covariate
The covariate, weight, shows in the Factors panel with its minimum and maximum as levels, and is a term in the model. Figure 3.6 shows the Model panel and the resulting JMP data table. Your runs might not look the same, because the design algorithm has a random starting point.
Figure 3.6 Design With Fixed Covariate Factor
You can see that weight is nearly independent of the calories and sleep factors by running a model with weight as Y and the two factors and their interaction as effects, as in the Model Specification dialog in Figure 3.7. The leverage plots are nearly horizontal, and the analysis of variance table (not shown) shows that the model sum of squares is near zero compared to the residuals.
You can save the prediction equation from this analysis and use it to generate a set of predicted weight values over a grid of calories and sleep values, and append them to the column of observed weight values in the experimental design JMP table. Then use the Spinning Plot platform to generate a plot of Calories, Sleep, and weight. This is a way to illustrate that the Calories and Sleep levels are well balanced over the weight values. Note: See the Cowboy Hat Template in the Sample data folder for an example of grid generation. Figure 3.8 Three-dimensional Spinning Plot of Two Design Factors, Observed Covariate Values and Predicted Covariate Grid
Figure 3.7 Analysis to Check That Weight is Independent of Calories and Sleep
Mixtures with Nonmixture Factors
This example, taken from Atkinson and Donev (1992), shows how to create designs for experiments with mixtures where one or more factors are not ingredients in the mixture.
• The response is the electromagnetic damping of an acrylonitrile powder.
• The three mixture ingredients are copper sulphate, sodium thiosulphate, and glyoxal.
• The nonmixture environmental factor of interest is the wavelength of light. Though wavelength is a continuous variable, the researchers were interested in predictions at only three discrete wavelengths. As a result, they treat it as a categorical factor with three levels.
The Responses panel in Figure 3.9 shows Damping as the response. The authors do not mention how much damping is desirable, so the response goal is None. The Factors panel shows the three mixture ingredients and the categorical factor, Wavelength. The mixture ingredients have range constraints that arise from the mechanism of the chemical reaction. To load these factors, choose Load Factors from the popup menu on the Factors panel title bar. When the open file dialog appears, open the file Donev Mixture factors.jmp in the DOE folder in the Sample Data.
Figure 3.9 Mixture Experiment Response Panel and Factors Panel
The model in Figure 3.10 is a response surface model in the mixture ingredients, along with the additive effect of the wavelength. To create this model, first click the Main Effects button to include all main effects in the model. Then highlight the mixture factors in the Factors panel, click Interactions, and choose 2nd to create the interaction terms.
There are several reasonable choices for sample size. The Grid option in the Design Generation panel (Figure 3.10) corresponds to repeating a 6-run mixture design in the mixture ingredients once for each level of the categorical factor. The resulting data table, with 18 rows (runs), is on the right in Figure 3.10.
Atkinson and Donev provide the response values shown in Figure 3.10. They also discuss the design where the number of runs is limited to 10. In that case it is not possible to run a complete mixture response surface design for every wavelength. Typing 10 in the Number of Runs edit box in the Design Generation panel (Figure 3.11) sets the run choice to User Specified. The design table on the right in Figure 3.11 shows the factor settings for 10 runs.
Figure 3.11 Ten-Run Mixture Response Surface Design
Note that there are unequal numbers of runs for each wavelength. Because of this lack of balance, it is a good idea to look at the prediction variance plot (top plot in Figure 3.12). The prediction variance is almost constant across the three wavelengths, which is a good indication that the lack of balance is not a problem.
The values of the first three ingredients sum to one because they are mixture ingredients. If you vary one of the values, the others adjust to keep the sum constant. The bottom profiler in Figure 3.12 shows the result of increasing the copper sulphate percentage from 0.4 to about 0.6. The other two ingredients both drop, keeping their ratio constant; the ratio of Na2S2O3 to Glyoxal is approximately 2 in both plots.
Figure 3.10 Mixture Experiment Design Generation Panel and Data Table
Figure 3.12 Prediction Variance Plots for Ten-Run Design.
Factor Constraints
Sometimes it is impossible to vary all the factors independently over their experimental ranges. The experimenter might know in advance that running the process at certain specified settings has an undesirable result. Leaving these runs out of an available listed design type destroys the mathematical properties of the design, which is unacceptable. The solution is to support factor constraints as an integral part of the design requirements.
Each factor constraint is a linear inequality of the form C1*X1 + C2*X2 + … + Cn*Xn ≤ b (or ≥ b), where the Xi are the factor settings. Factor constraints are set in the following way:
1 Determine the ranges of the factor settings without regard to any constraint.
2 Determine the linear coefficient Ci for each factor. A coefficient of zero means that factor's settings are ignored in the constraint; a coefficient of one includes the factor's setting with no additional scaling.
3 Choose whether the weighted sum of the factor settings must be less than or equal to, or greater than or equal to, the bounding value you specify.
4 Click Make Design, and JMP applies the constraints when it generates the design.
For this example, define two factors. Suppose it is impossible or dangerous to perform an experimental run where both factors are at either extreme; that is, none of the corners of the factor region are acceptable points. Figure 3.13 shows a set of four constraints that cut off the corner points. The figure on the right shows the geometric view of the constraints: the allowable region is inside the diamond defined by the four constraints.
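In this form, checking whether a candidate run satisfies the constraints is just a dot product and a comparison. The sketch below (Python, for illustration; the four bounds of 1.5 are hypothetical stand-ins for the values in Diamond Constraints.jmp, which may differ) filters a candidate grid down to the diamond-shaped allowable region:

```python
from itertools import product

# Four hypothetical constraints that cut the corners off the
# [-1, 1] x [-1, 1] square; each tuple (c1, c2, b) is read as
# c1*X1 + c2*X2 <= b. The bound 1.5 is an assumed value.
constraints = [( 1,  1, 1.5),
               ( 1, -1, 1.5),
               (-1,  1, 1.5),
               (-1, -1, 1.5)]

def feasible(x1, x2):
    """True if the point satisfies every linear constraint."""
    return all(c1 * x1 + c2 * x2 <= b for c1, c2, b in constraints)

# A 5x5 candidate grid over the square.
grid = [(i / 2 - 1, j / 2 - 1) for i, j in product(range(5), repeat=2)]
allowed = [pt for pt in grid if feasible(*pt)]
print(len(allowed), "of", len(grid), "candidate points are feasible")
# With these bounds, only the four corners (+-1, +-1) are cut off.
```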
Figure 3.13 Factor Constraints
Next, click the RSM button in the Model panel to include the two-factor interaction term and both quadratic effects in the model. This is a second-order empirical approximation to the true functional relationship between the factors and the response. Suppose the complexity of this relationship requires third-order terms for an adequate approximation. Figure 3.14 shows how to create a higher-order cross-product term. To follow this example, first select X1 in the Factors panel and X2*X2 in the Model panel. Then click the Cross button to add the cross-product term to the model.
Figure 3.14 Creating a Cross-Product Term
Similarly, you can add the X1*X1*X2 cross-product term. To complete the full third-order model, select both factors and choose 3rd from the Powers popup menu in the Model panel. There are 10 terms in the design model. A 4-by-4 grid design would be 16 runs. Choosing an intermediate value of 12 runs yields a design similar to the one in Figure 3.15. The geometric view (a bivariate fit of X1 by X2) shows many design points at or near the constraint boundary.
Custom II
If you want to avoid entering these constraints yourself, choose Load Constraints from the Design Experiments title bar. Open the sample data file Diamond Constraints.jmp in the DOE folder.
Figure 3.15 Factor Settings and Geometric View
Figure 3.16 shows the prediction variance as a function of the factor settings at the center of the design and at the upper right constraint boundary. The variance of prediction at the center of the design is 0.622645, nearly the same as it is toward the boundary, 0.825.
Figure 3.16 Prediction Variance at the Center of the Design and at a Boundary
In many situations it is preferable to have lower prediction variance at the center of the design. You can accomplish this by adding center points to the design. Figure 3.17 shows the result of adding two center points after having generated the 12-run design shown in Figure 3.15. Snee (1985) calls this exercising the boss option. It is practical to add center points to a design even though the resulting set of runs loses the mathematical optimality exhibited by the previous design. It is more important to solve problems than to run “optimal” designs.
When you compare the variance profile in Figure 3.17 to the one on the left in Figure 3.16, you see that adding two center points has reduced the variance at the center of the design by more than half, an impressive improvement.
Figure 3.17 Add Two Center Points to Make a 14 Point Design
The Custom Designer constructs designs by finding a set of runs that maximizes some measure of the information available in the design. There are many possible ways of measuring this information. Some criteria, such as D-optimality, focus on the ability to estimate the model coefficients with minimum variance. All the custom design examples in previous chapters have been D-optimal designs.
For response surface design problems, the goal of the experiment is prediction. If you can predict the response precisely anywhere inside the region of the data, then you can find the factor settings that produce the most desirable response value. For this goal, the I-optimality criterion is more appropriate. An I-optimal design minimizes the average variance of prediction over the region of the data. This chapter has three examples showing how to construct I-optimal designs. I-optimal designs are the default when you choose an RSM model; otherwise, the default is D-optimal.
One limitation of D-optimal design is its dependence on a pre-stated model. In a D-optimal design, the purpose of all the runs is to lower the variability of the coefficients of this pre-stated model. In most real situations, the form of the model is not known in advance. Perhaps only a first-order model is necessary, but suppose there are interactions or curvature. Ideally you would like to do a good job estimating the coefficients in the model and, at the same time, have the ability to detect and estimate some higher order terms. Bayesian D-optimality is a modification of the D-optimality criterion that accomplishes both goals. This chapter also has three examples showing how to make and when to use Bayesian D-optimal designs. The last example in the chapter is notable because it shows how to use Bayesian D-optimality to create supersaturated designs.
Supersaturated designs have fewer runs than factors, which makes them attractive for factor screening when there are many factors and experimental runs are expensive. The chapter ends with a section on scripting variables you can use to modify tuning parameters in the Custom Designer.
Custom III
Custom Design: Optimality Criteria and Tuning Options
Contents
Custom Design for Prediction (I-Optimal Design) . . . 55
  A One-Factor Example . . . 55
  A Three-Factor Example . . . 56
  Response Surface with a Blocking Factor . . . 57
Model-Robust Custom Design (Bayesian D-Optimal Designs) . . . 60
  Example: Two Continuous Factors . . . 60
  Example: Six Continuous Factors . . . 62
Supersaturated Designs . . . 63
  Example: Twelve Factors in Eight Runs . . . 64
Tuning Options for DOE . . . 66
4 Custom Design: Optimality Criteria and Tuning Options—Custom Design for Prediction (I-Optimal Design)
Custom Design for Prediction (I-Optimal Design)
By default, the Custom Designer uses the D-optimality criterion to generate a design matrix that minimizes the variance of the model coefficient estimates. This is appropriate for first-order models and in screening situations, because the experimental goal in such situations is often to identify the active factors; parameter estimation is key.
The goal of response surface experimentation is to predict the response rather than the coefficients. The associated objective might be to determine optimum operating conditions; to determine regions in the design space where the response falls within an acceptable range; or to develop feedback-control models. Precise estimation of the response therefore takes precedence over precise estimation of the parameters. An alternative design criterion that supports this goal is to minimize the integrated variance of prediction (Welch (1984)).
I-Optimal Criterion
If f^T(x) denotes a row of the X matrix corresponding to the factor combination x, then

    I = ∫_R f^T(x) (X^T X)^{-1} f(x) dx = Trace[ (X^T X)^{-1} M ]

where

    M = ∫_R f(x) f^T(x) dx

is a moment matrix that is independent of the design and can be computed in advance. Designs that minimize this criterion are referred to as I-optimal designs.
D-Optimal Criterion
Let D = det[X^T X]. D-optimal designs maximize D.
A One-Factor Example
The I-optimal design tends to place less mass at the extremes of the design space than does the D-optimal design. As a small example, consider a one-dimensional quadratic model for n = 12. The D-optimal design for this model puts four runs at each end of the range of interest and four runs in the middle. The I-optimal design puts three runs at each end point and six runs in the middle. In this case, the D-optimal design places two-thirds of its mass at the extremes, versus one-half for the I-optimal design.
Figure 4.1 compares variance profiles of the I- and D-optimal designs for the one-dimensional quadratic model with n = 12. The variance function for the I-optimal design is below the corresponding function for the D-optimal design in the center of the design space; the converse is true at the edges. The average variance (relative to the error variance) for the D-optimal design is 0.2, compared to 0.178 for the I-optimal design. This means that confidence intervals for prediction will be more than 8 percent shorter on average for the I-optimal design.
Figure 4.1 Prediction Variance Profiles for 12-run I-optimal (left) and D-optimal (right) Designs
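Both averages can be reproduced directly from the Trace formula of the previous section. The following NumPy sketch (an illustration, not JMP code) evaluates Trace[(X^T X)^{-1} M] for the two run allocations:

```python
import numpy as np

def avg_pred_variance(x_points, counts):
    """Average (relative) prediction variance Trace[(X'X)^-1 M] for a
    one-factor quadratic model f(x) = [1, x, x^2] over x in [-1, 1]."""
    f = lambda x: np.array([1.0, x, x * x])
    XtX = sum(n * np.outer(f(x), f(x)) for x, n in zip(x_points, counts))
    # Moment matrix M: entries are the averages of x^(i+j) over [-1, 1],
    # i.e. 1 for power 0, 1/3 for power 2, 1/5 for power 4, 0 for odd powers.
    M = np.array([[1, 0, 1/3],
                  [0, 1/3, 0],
                  [1/3, 0, 1/5]])
    return np.trace(np.linalg.solve(XtX, M))

d_avg = avg_pred_variance([-1, 0, 1], [4, 4, 4])  # D-optimal: 4/4/4 runs
i_avg = avg_pred_variance([-1, 0, 1], [3, 6, 3])  # I-optimal: 3/6/3 runs
print(round(d_avg, 3), round(i_avg, 3))  # 0.2 0.178
```

Shifting two runs from the ends to the middle lowers the average prediction variance from 0.2 to about 0.178, as the text states.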
A Three-Factor Example
In higher dimensions, the I-optimal design continues to place more emphasis on the center of the region of the factors. The D-optimal and I-optimal designs for fitting a full quadratic model in three factors using 16 runs are shown in Figure 4.2. It is interesting to note that the I-optimal design is the face-centered central composite design with two center points.
Figure 4.2 16-run I-optimal (left) and D-optimal (right) Designs for Full Quadratic Model
The value, –0.06, for X3 in Run 7 in the D-optimal design at the right in Figure 4.2 looks peculiar, but it is not a mistake. Such non-integer values appear due to the way the coordinate exchange algorithm iteratively improves the random starting design. In cases like this one, the initial value for this coordinate is never exchanged, because none of the alternative values considered by the algorithm improves the optimization criterion. Exchanging zero for –0.06 actually lowers the determinant to be maximized. In practice, you can change –0.06 to zero in the JMP data table with negligible detrimental effect. The altered design is 99.99% D-efficient.
Profile plots of the variance function are displayed in Figure 4.3. These plots show slices of the variance function as a function of each factor, with all other factors fixed at zero. The I-optimal design has the lowest prediction variance at the center. Note that there are two center points in this design. The D-optimal design has no center points, and its prediction variance at the center of the factor space is almost three times the variance of the I-optimal design. The variance at the vertices of the D-optimal design is not shown; however, the D-optimal design predicts better than the I-optimal design near the vertices.
Figure 4.3 Variance Profile Plots for 16-run I-optimal and D-optimal RSM Designs
Response Surface with a Blocking Factor
It is not unusual for a process to depend on both qualitative and quantitative factors. For example, in the chemical industry, the yield of a process might depend not only on the quantitative factors temperature and pressure, but also on such qualitative factors as the batch of raw material and the type of reactor. Likewise, an antibiotic might be given orally or by injection, a qualitative factor with two levels. The composition and dosage of the antibiotic could be quantitative factors (Atkinson and Donev (1992)).
The Response Surface Designer deals only with quantitative factors. One way to produce an RSM design with a qualitative factor is to replicate the design over each level of the factor, but this is unnecessarily time-consuming and expensive.
The Custom Designer does not have this limitation. The following steps show how to accommodate a blocking factor in a response surface design.
• First, define two continuous factors (X1 and X2).
• Click Continue, and then click the RSM button in the Model panel. You should see the top panels shown in Figure 4.4.
• Now, use the Add Factor menu above the Factors panel to create a blocking factor with four runs per block (X3). As soon as you add the blocking factor, the model updates to show the main effect of the blocking factor in the Model panel. The blocking factor appears with two levels, but the number of levels adjusts to accommodate the number of runs specified for the design.
• Enter 12 in the Number of Runs text edit box in the Design Generation panel.
Note: The Optimality Criterion command in the menu on the Custom Design title bar indicates Recommended as the optimality criterion, which is I-Optimal whenever you click the RSM button.
Figure 4.4 Creating a Response Surface Design
To continue:
• Enter 12 in the Number of Runs text edit box, and note that the Factors panel now shows the blocking factor, X3, with three levels. The Design Generation panel shows the default number of runs to be 8, but entering 12 runs defines 3 blocks with 4 runs per block.
• Click Make Design.
• Select Sort Right to Left from the Run Order list.
• Click Make Table to see the I-optimal table on the left in Figure 4.5.
Figure 4.5 compares the results of a 12-run I-optimal design and a 12-run D-optimal design. To see the D-optimal design:
• Click the Back button.
• Choose Optimality Criterion from the menu on the Custom Design title bar and select Make D-Optimal Design from its submenu.
• Again, click Make Design, then Make Table.
Figure 4.5 JMP Design Tables for 12-Run I-optimal (left) and D-optimal (right) Designs
Figure 4.6 gives a graphical view of the designs generated by this example. These plots were generated for the runs in each JMP table with the Overlay command in the Graph menu, using the blocking factor (X3) as the Group variable. Note that there is a center point in each block of the I-optimal design (left). The D-optimal design has no center points. Instead, three of the vertices are repeated (–1, –1 in blocks 2 and 3; –1, 1 in blocks 1 and 3; and 1, 1 in blocks 1 and 2).
Figure 4.6 Geometric View of I-optimal (left) and D-optimal (right) RSM with Blocking Factor
Either of the above designs supports fitting the specified model. The D-optimal design does a slightly better job of estimating the model coefficients; the D-efficiency of the I-optimal design is 92.4%. The I-optimal design is preferable for predicting the response inside the design region. Using the formulas given in the first section of this chapter, you can compute the relative average variance for these designs. The average variance (relative to the error variance) for the I-optimal design is 0.61, compared to 0.86 for the D-optimal design. This means confidence intervals for prediction will be more than 16 percent longer on average for the D-optimal design.
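The last statement follows because interval length scales with the prediction standard error, that is, with the square root of the prediction variance. A quick check using the two averages quoted above (plain Python, illustrative only):

```python
import math

# Average relative prediction variances quoted above for the 12-run
# blocked RSM designs: 0.61 (I-optimal) versus 0.86 (D-optimal).
i_var, d_var = 0.61, 0.86

# Confidence-interval length is proportional to the prediction
# standard error, i.e. to the square root of the variance.
length_ratio = math.sqrt(d_var / i_var)
print(round(length_ratio, 3))  # ~1.187, i.e. more than 16% longer
```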
Model-Robust Custom Design (Bayesian D-Optimal Designs)
In practice, experimenters often add center points and other checkpoints to a design to help determine whether the assumed model is adequate. Although this is good practice, it is also ad hoc. The Custom Designer provides a way to improve on this ad hoc practice while supplying a theoretical foundation and an easy-to-use interface for choosing a design robust to the modeling assumptions.
The purpose of checkpoints in a design is to provide a detection mechanism for higher-order effects than are contained in the assumed model. Call these higher-order terms the potential terms (there are q potential terms). The assumed model consists of the primary terms (there are p primary terms). To take advantage of the benefits of the Bayesian D-optimal approach, the sample size, n, must be larger than the number of primary terms but smaller than the sum of the primary and potential terms; that is, p + q > n > p. The Bayesian D-optimal design allows precise estimation of all of the primary terms while providing omnibus detectability (and some estimability) for the potential terms.
A limitation of the D-optimal design is its dependence on an assumed model. By focusing on minimizing the standard errors of coefficients, a D-optimal design often does not allow for the necessity of checking that the model is correct. It might not include center points when investigating a first-order model. In the extreme, a D-optimal design may have just p distinct runs with no degrees of freedom for lack of fit. By contrast, the Bayesian D-optimal design uses the potential terms to force in runs that allow for detecting any inadequacy in the model containing only the primary terms.
Let K be the (p + q) by (p + q) diagonal matrix whose first p diagonal elements are equal to zero and whose last q diagonal elements are the constant k. The Bayesian D-optimal design maximizes the determinant of (X^T X + K). The difference between the criterion for D-optimality and Bayesian D-optimality is this constant added to the diagonal elements corresponding to the potential terms in the X^T X matrix.
Example: Two Continuous Factors
For a model with an intercept, two main effects, and an interaction, there are p = 4 primary terms. The Custom Designer creates a four-run design with the factor settings shown in Figure 4.7.
Figure 4.7 Two Continuous Factors with Interaction
Suppose you can afford an extra run (n = 5). You would like to use this point as a checkpoint for curvature. If you leave the model the same and increase the sample size, the Custom Designer replicates one of the four vertices. This is an example of the extreme case where there is no way to use the extra run to check for lack of fit.
Adding the two quadratic terms to the model makes a total of six terms and is a way to model any curvature directly. However, to do this the Custom Designer would require two additional runs (at a minimum), which exceeds the budget of 5 runs.
The Bayesian D-optimal design provides a way to check for curvature while adding only one extra run (n = 5). Add the two quadratic terms by choosing 2nd from the Powers popup menu in the Model panel. Select these two terms and change their Estimability from Necessary to If Possible as shown in Figure 4.8. Now the four primary terms are designated as Necessary while the q = 2 potential terms are designated as If Possible. The desired number of runs, 5, is between p = 4 and p + q = 6. Enter 5 into the Number of Runs edit box and click Make Design. The resulting factor settings appear at the right in Figure 4.8. This design incorporates the single extra run, but places the factor settings at the center of the design instead of at one of the corners.
Figure 4.8 Five-Run Bayesian D-optimal Design
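The effect of the K matrix can be seen numerically. The sketch below (Python, illustrative only; k = 1 is an assumed weight, since the text does not specify the constant JMP uses) compares det(X^T X + K) for the two candidate five-run designs: a replicated vertex versus an added center point.

```python
import numpy as np

def model_matrix(runs):
    """Expand (x1, x2) settings into [1, x1, x2, x1*x2, x1^2, x2^2]."""
    return np.array([[1, x1, x2, x1 * x2, x1 * x1, x2 * x2]
                     for x1, x2 in runs])

vertices = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def bayes_d(runs, k=1.0):
    """det(X'X + K); K adds k to the diagonal cells of the q = 2
    potential (quadratic) terms and zero for the p = 4 primary terms."""
    X = model_matrix(runs)
    K = np.diag([0, 0, 0, 0, k, k])
    return np.linalg.det(X.T @ X + K)

det_rep = bayes_d(vertices + [(1, 1)])  # fifth run replicates a vertex
det_ctr = bayes_d(vertices + [(0, 0)])  # fifth run at the center
print(round(det_rep), round(det_ctr))   # 512 832: the center point wins
```

The center point distinguishes the quadratic columns from the intercept column, so it yields a larger Bayesian D-criterion than replicating a corner, which is why the Custom Designer places the fifth run at the center.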
Example: Six Continuous Factors
In a screening situation, suppose there are six continuous factors and resources for n = 16 runs. The resolution IV 2^(6–2) fractional factorial design is a classical design that springs to mind as the design of choice. This example shows how to obtain this design using the Custom Designer.
The model includes the main effect terms by default. Let these be the primary terms (p = 7). Now add all the two-factor interactions by choosing 2nd from the Interactions popup menu. Select all these terms and change their Estimability column from Necessary to If Possible as shown in Figure 4.9.
Figure 4.9 Model for 6-Variable 16-Run Design with 2-Factor Interactions If Possible
Changing the Estimability of these terms makes the q = 15 two-factor interactions into potential terms. Type 16 in the Number of Runs edit box. Now, the desired number of runs, 16, is between p = 7 and p + q = 22. Select Simulate Responses from the popup menu on the Custom Design title bar and click Make Design. Then click Make Table to see the JMP design table in Figure 4.10. Figure 4.10 Design Table For 6-Variable 16-Run Design
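The resolution IV claim can also be checked outside JMP. This Python sketch builds a regular 16-run 2^(6-2) fraction with the generators E = ABC and F = BCD (one standard resolution IV choice; the fraction the Custom Designer finds may be a relabeled equivalent, but it has the same aliasing pattern) and groups the 15 two-factor interactions into alias sets:

```python
import itertools
import numpy as np

# 16-run base design in A, B, C, D; generate E and F from interactions.
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T
E, F = A * B * C, B * C * D
cols = {'A': A, 'B': B, 'C': C, 'D': D, 'E': E, 'F': F}

# Two-factor interaction columns that are equal up to sign are confounded.
# Sign-normalize each column (first entry = +1) so +/- aliases match.
groups = {}
for f1, f2 in itertools.combinations('ABCDEF', 2):
    col = cols[f1] * cols[f2]
    groups.setdefault(tuple(col * col[0]), []).append(f1 + f2)

print(len(groups), sorted(groups.values()))
# 7 alias sets, each containing only two-factor interactions
```

Seven alias sets of two-factor interactions, with no main effect confounded with any of them, is exactly the singularity structure the text describes for a resolution IV design.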
It is not obvious by inspection that this table is a resolution IV fractional factorial design. You can verify that it is resolution IV by running the model script as shown in Figure 4.10.
Figure 4.11 Standard Least Squares Shows Singularity Details
The analysis using Standard Least Squares indicates that the design is singular. This is because there are 22 terms in the model and only 16 runs. The singularity details are shown at the right in Figure 4.11. Note that there are 7 sets of confounded model terms, and each set contains only two-factor interactions: two-factor interactions are confounded only with other two-factor interactions. This characteristic defines what it means to be a resolution IV design.
The Custom Designer includes the potential terms in the Model table property that it saves in the JMP table. There are two ways to avoid the singularity messages that come when you run this default model. The easiest way is to remove all the potential terms by selecting them in the Fit Model dialog and clicking the Remove button. A better but more involved approach is to change the model personality from Standard Least Squares to Stepwise, run the stepwise regression, and save the model you get. A good practice is to start the stepwise process by entering all the primary terms, then click the Step button on the Stepwise control panel until all active terms are added to the model.
Supersaturated Designs
In a saturated design, the number of runs equals the number of model terms. The 2^(7–4) and the 2^(15–11) fractional factorial designs are both saturated with respect to a main effects model. Until recently, saturated designs represented the limit of efficiency in designs for screening.
In the analysis of a saturated design you can (barely) fit the least squares model, but there are no degrees of freedom for error or for lack of fit. Normal probability plots of effects and the Bayes Plot option, discussed in the Bayes Plot section of the JMP Statistics and Graphics Guide, are useful tools in analyzing these designs.
Factor screening relies on the sparsity principle: the experimenter expects that only a few of the factors in a screening experiment are active. The problem is not knowing which are the vital few factors and which are the trivial many. It is common for brainstorming sessions to turn up dozens of factors, yet mysteriously, screening experiments in practice rarely involve more than ten factors. What happens to winnow the list from dozens to ten or so? If the experimenter is limited to designs that have more runs than factors, then dozens of factors translate into dozens of runs, which is usually economically infeasible. The result is that the factor list is reduced without the benefit of data.
Lin (1993) discusses going a step further, to the supersaturated design. As the name suggests, a supersaturated design is one where the number of model terms exceeds the number of runs. A supersaturated design can examine dozens of factors using fewer than half as many runs.
There are caveats, of course. If the number of active factors approaches the number of runs in the experiment, then it is likely that these factors will be impossible to identify. A rule of thumb is that the number of runs should be at least four times larger than the number of active factors. If you expect that there may be as many as five active factors, you should have at least 20 runs. Analysis of supersaturated designs cannot yet be reduced to an automatic procedure, but forward stepwise regression is a reasonable start.
Example: Twelve Factors in Eight Runs
As a simple example, consider a supersaturated design with twelve factors. Bayesian D-optimal design provides the machinery for creating a supersaturated design. Instead of considering interactions or polynomial terms as potential terms, in a supersaturated design even the main effects are potential! Note in Figure 4.12 that the only primary term is the intercept. Type 8 in the Number of Runs edit box. Now the desired number of runs, 8, is between p = 1 and p + q = 13. Select Simulate Responses from the popup menu on the Custom Design title bar and click Make Design, then click Make Table.
Figure 4.12 The Estimability of All Terms Except the Intercept Is If Possible
Figure 4.13 shows a non-modal dialog window named Simulate Responses on the left and a JMP table on the right. The Y column values are controlled by the coefficients of the model in the Simulate Responses dialog. Change the default settings of the coefficients to match those in Figure 4.13 and click Apply. The numbers in the Y column change. Note that random noise is added to the Y column formula, so the numbers you see might not match those shown.
Figure 4.13 Give Values to Three Main Effects and Specify the Error Std as 0.5
Run the script of the Model table property as shown in Figure 4.13. Be sure to change the Personality in the Model Specification dialog from Standard Least Squares to Stepwise, then click Run Model.
Figure 4.14 Stepwise Regression Identifies Active Factors
Stepwise regression identifies the three active factors (X2, X9, and X10) in three steps.
This example defines a few large main effects and sets the rest to zero. In real situations it is much less likely that the effects will be so clearly differentiated.
Tuning Options for DOE
There are scripting variables that let you predefine guidelines for the design search by the Custom Designer. The following variables are described briefly here, and examples also appear in relevant chapters:
DOE Starts
DOE Search Points Per Factor
DOE K Exchange Value
DOE Starting Design
DOE Mixture Sum
To use these variables, type an assignment in a script window and submit it. The following sections describe the DOE tuning variables and show examples of how to write assignment statements. Note: Once one of these variables is set in a JMP session, you cannot return to its default setting. To reset the defaults, you must quit and restart JMP.
DOE Starts
One problem with optimal designs is that the methods used to generate them cannot always find the optimal design in cases where the optimal design is known from theory. For example, all orthogonal designs are D-optimal with respect to a linear additive model. You can use the Number of Starts command in the menu on the platform title bar to reset the number of random starts. Or, for patient power users, there is a way to override the default choice of the number of random starts. For example, the following JSL assignment produces 200 random starts:
DOE Starts = 200;
The Custom Designer looks for a global variable named DOE Starts and uses its value instead of the default. To revert to the default number of starts, you must restart JMP. For more information, see “Number of Starts,” p. 16 in the “Design of Experiments (DOE)” chapter.
DOE Search Points Per Factor
For a linear model, the coordinate exchange algorithm in the Custom Designer considers only the high and low values by default. Suppose the low and high values for a factor are –1 and 1, respectively. If you submit the JSL command:
DOE Search Points Per Factor = 11;
then for each row, the coordinate exchange algorithm checks the eleven values –1, –0.8, –0.6, –0.4, –0.2, 0, 0.2, 0.4, 0.6, 0.8, 1.0.
DOE K Exchange Value
By default, the coordinate exchange algorithm considers every row of factor settings for possible replacement in every iteration. Some of these rows may never change. For example,
DOE K Exchange Value = 3;
sets this value to a lower number, three in this case, so that the algorithm considers only the three most likely rows for exchange in each iteration.
DOE Starting Design
The assignment
DOE Starting Design = matrix;
replaces the random starting design with a specified matrix. If a starting design is supplied, the Custom Designer performs only one start, using this design.
DOE Mixture Sum
If you want to keep a component of a mixture constant throughout an experiment, then the sum of the other mixture components must be less than one. For example, if you have a mixture factor with a constant value of 0.1, then the command
DOE Mixture Sum = 0.9;
constrains the remaining mixture factors to sum to 0.9 instead of the default 1.0. To make the constant mixture factor appear in the generated JMP table, add it as a constant factor.
Screening Designs
Screening designs are arguably the most popular designs for industrial experimentation. They are attractive because they are a cheap and efficient way to begin improving a process. The purpose of screening experiments is to identify the key factors affecting a response. Compared to other design methods, screening designs require fewer experimental runs, which is why they are cheap.
The efficiency of screening designs depends on the critical assumption of effect sparsity. Effect sparsity results because real-world processes usually have only a few driving factors; other factors are relatively unimportant.
To understand the importance of effect sparsity, you can contrast screening designs to full factorial designs. A full factorial consists of all combinations of the levels of the factors, so the number of runs is the product of the factor levels. For example, a factorial experiment with a two-level factor, a three-level factor, and a four-level factor has 2 x 3 x 4 = 24 runs. By contrast, screening designs reduce the number of runs in two ways:
1 restricting the factors to two (or three) levels
2 performing only a fraction of the full factorial design
Applying these to the case described above, you can restrict the factors to two levels, which yields 2 x 2 x 2 = 8 runs. Further, by doing only half of these eight combinations you can still assess the separate effects of the three factors. So the screening approach reduces the 24-run experiment to 4 runs. Of course, there is a price for this reduction. This chapter discusses the screening approach in detail, showing both pros and cons.
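The half-fraction step above can be reproduced in a few lines. This Python sketch (illustrative only) builds the 2^3 factorial, keeps the half where the three-way product A*B*C = +1 (one standard way to pick a half fraction), and checks that the main-effect columns stay balanced and mutually orthogonal:

```python
import itertools
import numpy as np

# The 2^3 full factorial (8 runs), then the half where A*B*C = +1.
full = np.array(list(itertools.product([-1, 1], repeat=3)))
half = full[full.prod(axis=1) == 1]

# The three main-effect columns remain balanced (each sums to zero) and
# mutually orthogonal, so the separate effects of all three factors
# are still assessable from only 4 runs.
print(len(half), half.sum(axis=0).tolist())  # 4 [0, 0, 0]
```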
5 Screening
Contents
Screening Design Types . . . 71
  Two-Level Full Factorial . . . 71
  Two-Level Fractional Factorial . . . 71
  Plackett-Burman Designs . . . 72
  Mixed-Level Designs . . . 72
  Cotter Designs . . . 73
A Screening Example . . . 74
  Two-Level Design Selection and Description . . . 74
  Display and Modify Design . . . 76
  Output Options for the JMP Design Table . . . 78
  The Design Data Table . . . 79
Loading and Saving Responses and Factors (Optional) . . . 80
A Simple Effect Screening Analysis . . . 81
  Main Effects Report Options . . . 81
  The Actual-by-Predicted Plot . . . 82
  The Scaled Estimates Report . . . 83
5 Screening Designs—Screening Design Types
The design list for the Screening designer features four types of designs. The discussion below compares and contrasts these design types.
Two-Level Full Factorial

A full factorial design contains all combinations of the levels of the factors. The sample size is the product of the numbers of levels of the factors. For two-level designs, this is 2^k, where k is the number of factors. This can be expensive if the number of factors is greater than 3 or 4.

These designs are orthogonal, which means that the estimates of the effects are uncorrelated. If you remove an effect in the analysis, the values of the other estimates remain the same. Their p-values change slightly, because the estimate of the error variance and the degrees of freedom are different.

Full factorial designs allow the estimation of interactions of all orders up to the number of factors. Most empirical modeling involves first- or second-order approximations to the true functional relationship between the factors and the responses. The left-hand illustration in Figure 5.1 is a geometric representation of a two-level factorial.
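As a concrete illustration (a Python sketch, not JMP's JSL), the 2^k runs of a two-level full factorial in coded units can be enumerated directly:

```python
# Illustrative sketch: enumerate the 2^k runs of a two-level full
# factorial design in coded units (-1 = low, +1 = high).
from itertools import product

def full_factorial(k):
    """Return the 2^k runs of a two-level full factorial design."""
    return [list(run) for run in product([-1, 1], repeat=k)]

runs = full_factorial(3)
print(len(runs))  # 2^3 = 8 runs
```

Because every level combination appears exactly once, any two factor columns of this design are orthogonal (their elementwise products sum to zero).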
Two-Level Fractional Factorial

A fractional factorial design also has a sample size that is a power of two. If k is the number of factors, the number of runs is 2^(k–p), where p < k. The fraction of the full factorial is 2^–p. Like the full factorial, fractional factorial designs are orthogonal.

The trade-off in screening designs is between the number of runs and the resolution of the design. If price is no object, you can run several replicates of all possible combinations of m factor levels. This provides a good estimate of everything, including interaction effects to the mth degree. But because running experiments costs time and money, you typically run only a fraction of all possible combinations. This causes some of the higher-order effects in a model to become nonestimable. An effect is nonestimable when it is confounded with another effect. In fact, fractional factorials are designed by planning which interaction effects are confounded with the other interaction effects.

In practice, few experimenters worry about interactions higher than two-way interactions. These higher-order interactions are assumed to be zero. Experiments can therefore be classified by resolution number into three groups:
• Resolution = 3 means that main effects are not confounded with other main effects. They are confounded with one or more two-way interactions, which must be assumed to be zero for the main effects to be meaningful.
• Resolution = 4 means that main effects are not confounded with either other main effects or two-factor interactions. However, two-factor interactions can be confounded with other two-factor interactions.
• Resolution ≥ 5 means there is no confounding between main effects, between two-factor interactions, or between main effects and two-factor interactions.
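The construction described above can be sketched in a few lines of Python (an illustration, not JMP's implementation): start from a full factorial in the first k–p factors, then define each remaining factor as a product of base columns (its generator). The generators below match the 8-run, 5-factor reactor example used later in this chapter.

```python
# Hedged sketch: build a 2^(k-p) fractional factorial from a full
# factorial in the base factors plus product-column generators.
from itertools import product

def fractional_factorial(base_k, generators):
    """base_k: number of independent two-level factors.
    generators: list of index tuples; each tuple defines an extra
    factor as the elementwise product of those base columns."""
    runs = []
    for base in product([-1, 1], repeat=base_k):
        extra = []
        for gen in generators:
            val = 1
            for i in gen:
                val *= base[i]
            extra.append(val)
        runs.append(list(base) + extra)
    return runs

# 8 runs for 5 factors: D = ABC and E = BC (a resolution 3 fraction)
design = fractional_factorial(3, [(0, 1, 2), (1, 2)])
print(len(design))  # 2^(5-2) = 8 runs
```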
All the fractional factorial designs are minimum aberration designs. A minimum aberration design is one in which there is a minimum number of confoundings for a given resolution. The right-hand illustration in Figure 5.1 is a geometric representation of a two-level fractional factorial design.

Figure 5.1 Representation of Full Factorial (left) and Two-Level Fractional Factorial (right)
Plackett-Burman Designs

Plackett-Burman designs are an alternative to fractional factorials for screening. One useful characteristic is that the sample size is a multiple of 4 rather than a power of two. There are no two-level fractional factorial designs with sample sizes between 16 and 32 runs, but there are 20-run, 24-run, and 28-run Plackett-Burman designs. The main effects are orthogonal, and two-factor interactions are only partially confounded with main effects. This is different from a resolution 3 fractional factorial, where two-factor interactions are indistinguishable from main effects. In cases of effect sparsity, a stepwise regression approach can allow for removing some insignificant main effects while adding highly significant and only somewhat correlated two-factor interactions.
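The classic 12-run Plackett-Burman design (a size not mentioned above but the smallest and best-known one) illustrates the construction: cyclically shift a published generating row of 11 signs, then append a row of all low values. This is a sketch for intuition, not JMP's implementation.

```python
# Hedged sketch: the standard 12-run Plackett-Burman construction.
# GEN is the published generating row for 12 runs.
GEN = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]

def plackett_burman_12():
    """Return the 12-run, 11-column Plackett-Burman design."""
    rows = [GEN[i:] + GEN[:i] for i in range(11)]  # 11 cyclic shifts
    rows.append([-1] * 11)                         # final all-low run
    return rows

design = plackett_burman_12()
print(len(design), len(design[0]))  # 12 runs, 11 columns
```

Every column is balanced (six highs, six lows) and any two columns are orthogonal, which is what makes the main-effect estimates uncorrelated.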
Mixed-Level Designs

If you have qualitative factors with three values, then none of the classical designs discussed previously are appropriate. For pure three-level factorials, JMP offers fractional factorials. For mixed two-level and three-level designs, JMP offers complete factorials and specialized orthogonal-array designs, listed in Table 5.1 "Types of Mixed-Level Designs," p. 72.

Table 5.1 Types of Mixed-Level Designs
Design            Two-Level Factors   Three-Level Factors
L18 John          1                   7
L18 Chakravarty   3                   6
L18 Hunter        8                   4
L36               11                  12

If you have no more factors than a design listed in the table, you can use that design by selecting an appropriate subset of columns from the original design. Some of these designs are not balanced, even though they are all orthogonal.

Cotter Designs

Cotter designs are used when you have very few resources and many factors, and you believe there may be interactions. Suppose you believe in effect sparsity, that is, that very few effects are truly nonzero. You believe in this so strongly that you are willing to bet that if you add up a number of effects, the sum will show an effect if it contains an active effect. The danger is that several active effects with mixed signs can cancel and still sum to near zero, giving a false negative.

Cotter designs are easy to set up. For k factors, there are 2k + 2 runs. The design is similar to the "vary one factor at a time" approach many books call inefficient and naive. A Cotter design begins with a run having all factors at their high level. Then follow k runs, each with one factor in turn at its low level and the others high. The next run sets all factors at their low level, and the design sequences through k more runs with one factor high and the rest low. This completes the Cotter design, subject to randomizing the runs.

When you use JMP to generate a Cotter design, JMP also includes a set of extra columns to use as regressors. These are of the form factorOdd and factorEven, where factor is a factor name. They are constructed by adding up all the odd and even interaction terms for each factor. For example, if you have three factors, A, B, and C:

AOdd = A + ABC        AEven = AB + AC
BOdd = B + ABC        BEven = AB + BC
COdd = C + ABC        CEven = BC + AC

Because these columns in a Cotter design form an orthogonal transformation, testing the parameters on these combinations is equivalent to testing the combinations on the original effects. In the example above, AOdd estimates the sum of the odd terms involving A, AEven estimates the sum of the even terms involving A, and so forth. Because Cotter designs have a false-negative risk, many statisticians recommend against them.

Note: By default, Cotter designs do not appear in the list of screening designs in the design dialog. To select a Cotter design, first use the preferences panel for DOE and uncheck Suppress Cotter Designs. That is, select the Preferences command, click the Platform tab on the Preferences window, and choose DOE from the list that appears. The list of check boxes for DOE includes Suppress Cotter Designs, which is checked by default. Uncheck this box to see Cotter designs in the list of screening designs.
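The 2k + 2 run pattern described above is simple enough to sketch directly (Python for illustration, not JMP's implementation):

```python
# Hedged sketch: the 2k+2 runs of a Cotter design -- all-high, k runs
# with one factor low, all-low, then k runs with one factor high.
def cotter_design(k):
    runs = [[1] * k]                 # all factors high
    for i in range(k):               # one factor low, the rest high
        run = [1] * k
        run[i] = -1
        runs.append(run)
    runs.append([-1] * k)            # all factors low
    for i in range(k):               # one factor high, the rest low
        run = [-1] * k
        run[i] = 1
        runs.append(run)
    return runs                      # randomize before running

print(len(cotter_design(5)))  # 2*5 + 2 = 12 runs
```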
A Screening Example

Experiments for screening the effects of many factors usually consider only two levels of each factor. This allows the examination of many factors with a minimum number of runs. Often screening designs are a prelude to further experiments. It is wise to spend only about a quarter of your resource budget on an initial screening experiment. You can then use the results to guide further study.

The following example, adapted from Meyer, et al. (1996), demonstrates how to use the JMP Screening designer. In this study, a chemical engineer investigates the effects of five factors on the percent reaction of a chemical process. The factors are:
• feed rate, the amount of raw material added to the reaction chamber in liters per minute
• percentage of catalyst
• stir rate, the RPMs of a propeller in the chamber
• reaction temperature in degrees Celsius
• concentration of reactant.

To begin, choose Screening Design from the DOE tab on the JMP Starter or from the DOE main menu.
Two-Level Design Selection and Description

When you choose Screening Design, the dialog shown in Figure 5.2 appears. Fill in the number of factors (up to 31). For the reactor example, add 5 factors. Then modify the factor names and give them high and low values. To edit the names of factors, double click on the text and type new names.
• Change the default names (X1–X5) to Feed Rate, Catalyst, Stir Rate, Temperature, and Concentration.
• Enter the high and low values as shown in Figure 5.2.
Click Add to add one factor, or enter the number of factors to add in the text box. Optionally, edit factor names, factor values, response names, goals, and limits.

If the Responses outline level is closed, click the disclosure diamond to open it. You see one default response called Y. Double click on the name and change it to Percent Reacted. In this experiment the goal is to maximize the response, which is the default goal. To see the popup list of other goal choices, click on the word Maximize. Change the minimum acceptable reaction percentage (Lower Limit) to 90, as shown in Figure 5.2. When you complete these changes, click Continue.

Now JMP lists the designs for the number of factors you specified, as shown to the left in Figure 5.3. Select the first item in the list, which is an 8-run fractional factorial design. Click Continue again to see the Design Output Options panel on the right in Figure 5.3.

Figure 5.3 Two-level Screening Design (left) and Design Output Options (right)
Figure 5.2 Enter Responses and Factors
Display and Modify Design

The Design Output Options panel supplies ways to describe and modify a design:

Change Generating Rules
controls the choice of different fractional factorial designs for a given number of factors.

Aliasing of Effects
shows the confounding pattern for fractional factorial designs.

Coded Design
shows the pattern of high and low values for the factors in each run.

The following sections describe these design options.

The Coded Design and Factor Generators

Open the Coded Design outline level to see the pattern of high and low levels for each run, as shown to the left in Figure 5.4. Each row is a run. Plus signs designate high levels and minus signs represent low levels. Note that the rows for the first three columns of the coded design, which represent Feed Rate, Catalyst, and Stir Rate, are all combinations of high and low values (a full factorial design). The fourth column (Temperature) of the coded design is the element-by-element product of the first three columns. Similarly, the last column (Concentration) is the product of the second and third columns.

The Change Generating Rules table to the right in Figure 5.4 also shows how the last two columns are constructed in terms of the first three columns. The check marks for Temperature show it is a function of Feed Rate, Catalyst, and Stir Rate. The check marks for Concentration show it is a function of Catalyst and Stir Rate.

Figure 5.4 Default Coded Designs and Generating Rules
You can change the check marks in the Change Generating Rules panel to change the coded design. For example, if you enter check marks as in Figure 5.5 and click Apply, the Coded Design changes as shown. The first three columns of the coded design remain a full factorial for the first three factors (Feed Rate, Catalyst, and Stir Rate). Temperature is now the product of Feed Rate and Catalyst, so the fourth column of the coded design is the element-by-element product of the first two columns. Concentration is a function of Feed Rate and Stir Rate.

Note: Be sure to click Apply to switch to the new generating rules.

Figure 5.5 Modified Coded Designs and Generating Rules

Aliasing of Effects

A full factorial with 5 factors requires 2^5 = 32 runs. Eight runs can only accommodate a full factorial with three 2-level factors. As described above, it is necessary to construct the two additional factors in terms of the first three factors. The price of reducing the number of runs from 32 to 8 is effect aliasing (confounding). Confounding is the direct result of assigning new factor values to products of the coded design columns. For example, the values for Temperature are the product of the values for Feed Rate and Concentration. This means that you cannot tell the difference between the effect of Temperature and the synergistic (interactive) effect of Feed Rate and Concentration.

The Aliasing of Effects panel shows which effects are confounded with which other effects. It shows effects and confounding up to two-factor interactions. In the example shown in Figure 5.6, all the main effects are confounded with two-factor interactions. This is characteristic of resolution 3 designs.

Figure 5.6 Generating Rules and Aliasing of Effects Panel
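The confounding in the 8-run reactor design can be verified numerically. The sketch below (Python for illustration, not JMP) rebuilds the default coded design with D = ABC and E = BC, then reports each main-effect column that is identical to a two-factor interaction column:

```python
# Hedged sketch: detect aliasing in the 8-run, 5-factor design by
# comparing main-effect columns to two-factor interaction columns.
from itertools import combinations, product

runs = []
for a, b, c in product([-1, 1], repeat=3):
    runs.append([a, b, c, a * b * c, b * c])  # D = ABC, E = BC

names = ["Feed Rate", "Catalyst", "Stir Rate", "Temperature", "Concentration"]
for i in range(5):
    main = [run[i] for run in runs]
    for j, k in combinations(range(5), 2):
        inter = [run[j] * run[k] for run in runs]
        if main == inter:
            print(f"{names[i]} is confounded with {names[j]}*{names[k]}")
```

Among its output is the confounding discussed above: the Temperature column equals the elementwise product of the Feed Rate and Concentration columns.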
The red triangle above the Coded Design button displays the Show Confounding Pattern option (Figure 5.7). This option displays a dialog for you to enter the order of aliasing to show. When you click OK, the aliasing pattern for the order you specified is listed in a new JMP data table. The table to the right in Figure 5.7 shows the third-order aliasing for the 5-factor reactor example. The effect names begin with C (Constant) and are shown by their order number in the design. Thus, Temperature shows as "4", its 2nd-order confounding as "1 5" (Feed Rate and Concentration), and its 3rd-order confounding as "1 2 3" (Feed Rate, Catalyst, and Stir Rate).

Figure 5.7 Show Confounding Patterns
Output Options for the JMP Design Table

The design dialog has options, shown in Figure 5.8, that modify the final design table:

Run Order
gives a popup menu that determines the order of runs as they appear in the JMP data table.

Number of Center Points
lets you add as many additional center points as you want.

Number of Replicates
lets you repeat the complete set of experimental runs a specified number of times.

Figure 5.8 Output Options for Design Table
When the options for the design table are the way you want them, click Make Table to generate the JMP data table that lists the design runs. Note that the Back button lets you return to the previous stage of the design if you want to make changes.

The Design Data Table

When you click Make Table, JMP creates and displays the data table shown in Figure 5.9, which lists the runs for the design you selected. In addition, it has a column called Percent Reacted for recording experimental results, as shown to the right of the data table. The high and low values you specified show for each run. If you don't enter values in the Design Specification dialog, the default low and high values for each factor are –1 and 1. The column called Pattern shows the pattern of low values, denoted "–", and high values, denoted "+". Pattern is especially suitable as a label variable in plots.

Figure 5.9 JMP Table of Runs for Screening Example

The Design of Experiments facility in JMP automatically generates a JMP data table with a JSL script that creates a Model Specification dialog with the appropriate model for the analysis of the specified design. If you double click on the Table Property name, Model, the dialog shown here appears with the JSL script generated by the DOE facility.

Figure 5.10 JMP Table Variable with JSL Command to Analyze Design Table

The model generated by this example contains all the main effects and two estimable interaction terms, as shown in Figure 5.11. The two-factor interactions in the model actually represent a group of aliased interactions. Any predictions made using this model implicitly assume that these interactions are active rather than the others in the group.
Figure 5.11 Model Specification Dialog Generated by the Design Table with Interaction Terms
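The Pattern labels mentioned above are simple to derive from a run's coded settings; a minimal Python sketch (for illustration only):

```python
# Hedged sketch: build the Pattern label ("-" for low, "+" for high)
# from a run's coded two-level factor settings.
def pattern(run):
    return "".join("+" if level > 0 else "-" for level in run)

print(pattern([1, -1, -1, 1, 1]))  # "+--++"
```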
Loading and Saving Responses and Factors (Optional)

If you plan to do further experiments with factors you have given meaningful names and values, it is convenient to save the factor information and load the stored information directly into the Factors panel. The popup menu on the Design Experiment title bar has commands to save the information you entered and retrieve it later to reconstruct a design table.

The reactor data is a good example. The names and values of the 5 factors shown in the dialog can be saved to a JMP data table with the Save Factors command in the platform popup menu. Save Factors creates the JMP data table shown in Figure 5.12. The data table contains a column for each factor and a row for each factor level. Use the Save or the Save As command to name the factors table and save it.

To load the factor names and level values into the DOE dialog:
• open the data table that contains the factor names and levels
• select the design type you want from the DOE menu
• choose Load Factors from the Design dialog menu.

Use the same steps to save and reload information about Responses.
See the first chapter, "Design of Experiments (DOE)," for a description of all the platform commands.
Figure 5.12 Load Factor Names and Values from a JMP Table

A Simple Effect Screening Analysis

Of the five factors in the reaction percentage experiment, you expect a few to stand out in comparison to the others. The next sections show an approach to an analysis that looks for active effects, using the table generated previously by the DOE facility and the model. Open the sample data table Reactor 8 Runs.jmp to run the model generated by the data, as shown previously in Figure 5.9. You can choose the Model script stored as a Table Property (automatically generated by the DOE facility) to see the Model Specification dialog, or choose Fit Model from the Analyze menu; the model saved as a Table Property by the DOE facility automatically fills the Model Specification dialog.

Main Effects Report Options

The Fit Model report consists of the outline shown to the left in Figure 5.13. The Factor Profiling command in the platform menu, shown to the right in Figure 5.13, accesses these effect profiling tools:

Profiler
shows how a predicted response changes as you change any factor.

Interaction Plots
gives multiple profile plots across one factor under different settings of another factor.

Contour Profiler
shows how predicted values change with respect to changing factors two at a time.

Cube Plots
show predicted values in the corners of the factor space.

Box Cox Transformation
finds a power transformation of the response that would fit best.

Figure 5.13 Platform Commands for Fit Model Report
The Actual-by-Predicted Plot

The Actual-by-Predicted plot, shown on the left in Figure 5.14, is at the top of the Fit Model report. The Pattern variable in the data table shows as the label for each point. The mean line falls inside the bounds of the 95% confidence curves, which tells you that the model is not significant. The model p-value, R-square, and RMSE appear below the plot. The RMSE is an estimate of the standard deviation of the process noise, assuming that the unestimated effects are negligible. In this case the RMSE is 14.199, which is much larger than expected. This suggests that effects other than the main effects of each factor are important. Because of the confounding between two-factor interactions and main effects in this design, it is impossible to determine which two-factor interactions are important without performing more experimental runs.

Figure 5.14 The Actual by Predicted Plot and Scaled Estimates Report from a Fit Model Analysis
The Scaled Estimates Report

The report on the right in Figure 5.14 is the Scaled Estimates report. It displays a bar chart of the individual effects embedded in a table of parameter estimates. The last column of the table has the p-values for each effect. None of the factor effects are significant, but the Catalyst effect is large enough to be interesting if it is real. At this stage the results are not clear, but this does not mean that the experiment has failed. It means that some follow-up runs are necessary. If you want to find out how this story ends, look ahead to the chapter "Augmented Designs," p. 149. For comparison, the chapter "Full Factorial Designs," p. 115, has the complete 32-run factorial experimental data and analysis.
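Because the design is orthogonal, the coefficient estimates behind such a report are easy to compute by hand: each one is the average of the coded column times the response. The sketch below uses made-up response values (not the experiment's actual data) purely to illustrate the arithmetic:

```python
# Hedged sketch: in an orthogonal two-level design, each coefficient
# estimate is (1/n) * (coded column . response). The y values below
# are invented for illustration only.
from itertools import product

design = [list(run) for run in product([-1, 1], repeat=3)]
y = [52.0, 60.0, 55.0, 64.0, 54.0, 61.0, 58.0, 66.0]  # made-up responses
n = len(y)

estimates = [sum(design[r][j] * y[r] for r in range(n)) / n
             for j in range(3)]
print([round(e, 3) for e in estimates])  # [1.0, 2.0, 4.0]
```

Orthogonality is why removing one effect from the model leaves the other estimates unchanged, as noted earlier in this chapter.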
Response Surface Designs
Response surface designs are useful for modeling a curved, quadratic surface over continuous factors. If a minimum or maximum response exists inside the factor region, a response surface model can pinpoint it. Three distinct values for each factor are necessary to fit a quadratic function, so the standard two-level designs cannot fit curved surfaces.

The most popular response surface design is the central composite design, illustrated by the left-hand diagram below. It combines a two-level fractional factorial and two other kinds of points:
• Center points, for which all the factor values are at the zero (or midrange) value.
• Axial (or star) points, for which all but one factor are set at zero (midrange) and that one factor is set at outer (axial) values.

The Box-Behnken design, illustrated by the right-hand diagram below, is an alternative to central composite designs. One distinguishing feature of the Box-Behnken design is that there are only three levels per factor. Another important difference between the two design types is that the Box-Behnken design has no points at the vertices of the cube defined by the ranges of the factors. This is sometimes useful when it is desirable to avoid these points due to engineering considerations. The price of this characteristic is the higher uncertainty of prediction near the vertices compared to the central composite design.

central composite design (fractional factorial points, axial points, center points)    Box-Behnken design
6 Contents
The Response Surface Design Dialog . . . . . . . . . . . . . . . . . . . . . . . . . . 87
The Design Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Axial Scaling Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
A Central Composite Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Fitting the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
A Box-Behnken Design: The Tennis Ball Example . . . . . . . . . . . . . . . . . . . . 91
Geometry of a Box-Behnken Design . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Analysis of Response Surface Models . . . . . . . . . . . . . . . . . . . . . . . . . . 94
The Response Surface Design Dialog

The Response Surface Design command on the DOE main menu (or DOE JMP Starter tab page) displays the dialog, shown to the left in Figure 6.1, for you to enter factors and responses. When you click Continue, the list of design selections shown on the right appears. The response surface design list has a Box-Behnken design and two types of central composite design, called uniform precision and orthogonal. These properties of central composite designs relate to the number of center points in the design and to the axial values:
• Uniform precision means that the number of center points is chosen so that the prediction variance at the center is approximately the same as at the design vertices.
• For orthogonal designs, the number of center points is chosen so that the second-order parameter estimates are minimally correlated with the other parameter estimates.

Figure 6.1 Design Dialogs to Specify Factors and Choose Design Type

To complete the dialog, enter the number of factors (up to eight) and click Continue. In the table shown to the right in Figure 6.1, the 15-run Box-Behnken design is selected. Click Continue to use this design. The left panel in Figure 6.2 shows the next step of the dialog. To reproduce the right panel of Figure 6.2, specify 1 replicate with 2 center points per replicate, and change the run order popup choice to Randomize. Also, select Simulate Responses from the popup menu on the Response Surface Design title bar. When you finish specifying the output options you want, click Generate Table.
Figure 6.2 Design Dialog to Modify Order of Runs and Simulate Responses
The Design Table

The JMP data table (Figure 6.3) lists the design runs specified in Figure 6.2, with the runs in random order and two center points per replicate. Note that the design table also has a column called Y for recording experimental results.

Figure 6.3 The JMP Design Facility Automatically Generates a JMP Data Table
Axial Scaling Options

When you select a central composite (CCD-Uniform Precision) design and then click Continue, you see the dialog on the right in Figure 6.4. The dialog supplies default axial scaling information, but you can use the options described next and enter the values you want.
The axial scaling options control how far out the axial points are:

Rotatable
makes the variance of prediction depend only on the scaled distance from the center of the design.

Orthogonal
makes the effects orthogonal in the analysis.

In both of these cases, the axial points are more extreme than the –1 or 1 representing the range of the factor. If this factor range cannot be practically achieved, you can choose either of the following options:

On Face
is the default. These designs leave the axial points at the ends of the –1 and 1 ranges.

User Defined
uses the value entered by the user, which can be any value greater than zero.

Inscribe
rescales the whole design so that the axial points are at the low and high ends of the range (the axials are –1 and 1 and the factorial points are shrunken in from that).
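For the rotatable case, a standard result (not stated in the text above, but widely used for central composite designs) is that the axial distance is the fourth root of the number of factorial points. A small sketch of the arithmetic:

```python
# Hedged sketch: common axial-distance choices for a CCD with k factors.
# Rotatable alpha = F**0.25 (F = number of factorial points) is a
# standard textbook result; "on face" pins axial points at +/-1.
def axial_distance(k, scheme="rotatable"):
    F = 2 ** k  # factorial points when the CCD core is a full factorial
    if scheme == "rotatable":
        return F ** 0.25
    if scheme == "on_face":
        return 1.0
    raise ValueError("unknown scheme")

print(round(axial_distance(2), 3))  # sqrt(2) = 1.414 for two factors
```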
A Central Composite Design

The generated design, shown in the JMP data table in Figure 6.3, lists the runs for the design specified in Figure 6.2. Figure 6.5 shows the specification and design table for a 20-run Central Composite design in 6 blocks with simulated responses.
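The three kinds of points that make up a central composite design (factorial, axial, and center, as described at the start of this chapter) can be sketched as follows (Python for illustration, not JMP's implementation; the center-point count is an arbitrary example value):

```python
# Hedged sketch: assemble a central composite design for k factors --
# the 2^k factorial points, 2k axial (star) points at distance alpha,
# and a chosen number of center points.
from itertools import product

def central_composite(k, alpha=1.0, n_center=2):
    pts = [list(p) for p in product([-1.0, 1.0], repeat=k)]  # factorial
    for i in range(k):                                       # axial points
        for sign in (-alpha, alpha):
            run = [0.0] * k
            run[i] = sign
            pts.append(run)
    pts += [[0.0] * k for _ in range(n_center)]              # center points
    return pts

design = central_composite(3)
print(len(design))  # 8 factorial + 6 axial + 2 center = 16 runs
```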
Figure 6.4 CCD-Uniform Design With a Specified Orthogonal Type of Axial Scaling
Figure 6.5 Central Composite Response Surface Design
The column called Pattern identifies the coding of the factors. The Pattern column shows all the factor codings with “+” for high, “–” for low, “a” and “A” for low and high axial values, and “0” for midrange. If the Pattern variable is a label column, then when you click on a point in a plot of the factors, the pattern value shows the factor coding of the point. Note: The resulting data table has a Table Variable called Design that contains the design type. This variable appears as a note at the top of the Tables panel to the left of the data grid. In this example, Design says Central Composite. The table also contains a model script stored as a Table Property and labeled Model.
Fitting the Model

When you click the Table Property icon for the model (in the Tables panel to the left of the data grid), a popup menu appears with the Run Script command. The Run Script command opens the Model Specification dialog window and lists the appropriate effects for the model you selected. This example has the main effects and interactions seen in Figure 6.6. When you collect data, you can key or paste them into the design table and run this model. The model is permanently stored with the data table.
A Box-Behnken Design: The Tennis Ball Example

The Bounce Data.jmp sample data file has response surface data inspired by the tire tread data described in Derringer and Suich (1980). The objective is to match a standardized target value, given as 450, of tennis ball bounciness. The bounciness varies with the amounts of Silica, Silane, and Sulfur used to manufacture the tennis balls. The experimenter wants to collect data over a wide range of values for these variables to see if a response surface can find a combination of factors that matches a specified bounce target.

To begin, select Response Surface Design from the DOE menu. The responses and factors information is in existing JMP files found in the Design Experiment Sample Data folder. Use the Load Responses and Load Factors commands in the popup menu on the RSM Design title bar to load the response file called Bounce Response.jmp and the factor file called Bounce Factors.jmp. Figure 6.7 shows the completed Response panel and Factors panel.
Figure 6.6 Model Specification dialog for Response Surface Design
Figure 6.7 Response and Factors For Bounce Data
After the response data and factors data load, the Response Surface Design Choice dialog lists the designs shown in Figure 6.8.

Figure 6.8 Response Surface Design Selection
The Box-Behnken design selected for three effects generates the design table of 15 runs shown in Figure 6.9. The data are in the Bounce Data.jmp sample data table. The Table Property (Model) runs a script to launch the Model Specification dialog. After the experiment is conducted, the responses are entered into the JMP table.
Geometry of a Box-Behnken Design

The geometric structure of a design with three effects can be seen using the Spinning Plot platform. The spinning plot shown in Figure 6.10 illustrates the three Box-Behnken design columns. Options available on the spin platform draw rays from the center to each point, inscribe the points in a box, and suppress the x, y, and z axes. You can clearly see the 12 points midway between the vertices, leaving three points in the center.

Figure 6.10 Spinning Plot of a Box-Behnken Design for Three Effects
Figure 6.9 JMP Table for a Three-Factor Box-Behnken Design
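The geometry just described (12 edge-midpoint runs plus center points, 15 runs in all for three factors) follows directly from the Box-Behnken construction; a Python sketch for intuition, not JMP's implementation:

```python
# Hedged sketch: Box-Behnken runs -- all (+/-1, +/-1) pairs for each
# pair of factors with the remaining factors at 0, plus center points.
from itertools import combinations, product

def box_behnken(k, n_center=3):
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product([-1, 1], repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * k for _ in range(n_center)]
    return runs

print(len(box_behnken(3)))  # 3 pairs * 4 sign patterns + 3 centers = 15
```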
Analysis of Response Surface Models

To analyze response surface designs, select the Fit Model command from the Analyze menu and designate the surface effects in the Model Specification dialog. To do this, select (highlight) the surface effects in the dialog variable selection list, and then select Response Surface from the Macros popup menu (see Figure 6.6), which automatically adds the response surface effects to the model. However, if the table to be analyzed was generated by the DOE Response Surface designer, then the Model table property script automatically assigns the response surface attribute to the factors, as previously illustrated in Figure 6.6.

Analysis Reports

The standard analysis results appear in the tables shown in Figure 6.11, with parameter estimates for all surface and crossed effects in the model. The prediction model is highly significant with no evidence of lack of fit. All main effect terms are significant, as well as the two interaction effects involving Sulfur.

Figure 6.11 JMP Statistical Reports for a Response Surface Analysis of Bounce Data
See the chapter “Standard Least Squares: Introduction,” p. 183 of JMP Statistics and Graphics Guide, for more information about interpretation of the tables in Figure 6.11. The Response Surface report also has the tables shown in Figure 6.12: • The Response Surface table is a summary of the parameter estimates. • The Solution table lists the critical values of the surface factors and tells the kind of solution (maximum, minimum, or saddlepoint). • The Canonical Curvature table shows eigenvalues and eigenvectors of the effects. Note that the solution for the Bounce example is a saddlepoint. The Solution table also warns that the critical values given by the solution are outside the range of data values.
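The Solution and Canonical Curvature tables amount to simple linear algebra on the fitted quadratic. The Python sketch below uses made-up coefficients (not the actual Bounce fit) to show how the critical point is solved for and how the eigenvalue signs classify it as a maximum, minimum, or saddle point:

```python
import numpy as np

# Hypothetical quadratic-model coefficients (NOT the actual Bounce estimates):
# y ≈ b0 + b·x + x'Ax, where symmetric A holds quadratic and half the cross terms.
b0 = 450.0
b = np.array([10.0, 4.0, -6.0])          # linear terms for Silica, Silane, Sulfur
A = np.array([[-2.0, 0.5, 1.0],          # symmetric curvature matrix
              [ 0.5, 3.0, 0.8],
              [ 1.0, 0.8, -1.5]])

# Critical point: gradient b + 2Ax = 0  =>  x* = -(1/2) A^{-1} b
x_star = -0.5 * np.linalg.solve(A, b)

# Eigenvalues classify the stationary point (the Canonical Curvature table):
# all negative -> maximum, all positive -> minimum, mixed signs -> saddle point.
eigvals, eigvecs = np.linalg.eigh(A)
kind = ("maximum" if np.all(eigvals < 0)
        else "minimum" if np.all(eigvals > 0)
        else "saddle point")
print(x_star, eigvals, kind)
```

With these illustrative coefficients the eigenvalues have mixed signs, so the stationary point is a saddle, as in the Bounce example.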
Figure 6.12 Statistical Reports for a Response Surface Analysis
The eigenvector values show that the dominant negative curvature (yielding a maximum) is mostly in the Sulfur direction. The dominant positive curvature (yielding a minimum) is mostly in the Silica direction. This is confirmed by the prediction profiler in Figure 6.13. The Prediction Profiler The Prediction Profiler gives you a closer look at the response surface to find the best settings that produce the response target. It is a way of changing one variable at a time and looking at the effects on the predicted response. Open the Prediction Profiler with the Profiler command from the Factor Profiling popup menu on the Response title bar (Figure 6.13). The Profiler displays prediction traces for each X variable. A prediction trace is the predicted response as one variable is changed while the others are held constant at the current values (Jones 1991). The first profile in Figure 6.13 shows initial settings for the factors Silica, Silane, and Sulfur, which result in a Stretch value of 396, short of the specified target of 450. However, you can adjust the prediction traces of the factors to find a Stretch value that is closer to the target. The next step is to choose Desirability Functions from the popup menu on the Profiler title bar. This command appends a new row of plots to the bottom of the plot matrix, which graphs desirability on a scale from 0 to 1. The row has a plot for each factor, showing its desirability trace, as illustrated by the second profiler in Figure 6.13. The Desirability Functions command also adds a column that has an adjustable desirability function for each Y variable. The overall desirability measure appears to the left of the row of desirability traces. The response goal for Stretch is a target value of 450, as illustrated by the desirability function in Figure 6.13. If needed, you can drag the middle handle on the desirability function vertically to change the target value.
The range of acceptable values is determined by the positions of the upper and lower handles. See the chapter “Standard Least Squares: Exploring the Prediction Equation,” p. 239 of JMP Statistics and Graphics Guide, for further discussion of the Prediction Profiler.
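As a rough illustration of how a "match target" desirability works, here is a piecewise-linear version in Python. JMP's actual desirability functions are smooth, and the low/high limits below are hypothetical, so this is only a sketch of the idea:

```python
def target_desirability(y, low, target, high):
    """Piecewise-linear 'match target' desirability (a common textbook form;
    JMP's actual function is smooth, so this is only an approximation).
    Desirability is 0 outside [low, high] and 1 exactly at the target."""
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return (y - low) / (target - low)
    return (high - y) / (high - target)

# Hypothetical acceptable range around the Stretch target of 450:
print(target_desirability(396, low=350, target=450, high=550))  # below target -> 0.46
print(target_desirability(450, low=350, target=450, high=550))  # at target -> 1.0
```

Dragging the handles in the Profiler corresponds to moving `low`, `target`, and `high` here.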
See the chapter “Standard Least Squares: Exploring the Prediction Equation,” p. 239 of JMP Statistics and Graphics Guide, for details about the response surface analysis tables in Figure 6.12.
The overall desirability appears to the left of the row of desirability traces. Note that in this example the desirability function is set to the target value of 450. The current predicted value of Stretch, 396, is based on the default factor settings. It is represented by the horizontal dotted line, which falls slightly below the desirability function's target value. Figure 6.13 Prediction Profiler for a Response Surface Analysis
You can adjust the factor traces by hand to change the predicted value of Stretch. Another convenient way to find good factor settings is to select Maximize Desirability from the Prediction Profiler popup menu. This command adjusts the profile traces to produce the response value closest to the specified target (the target given by the desirability function). Figure 6.13 shows the result of the most desirable settings. Changing the settings of Silica from 1.2 to 1.7, Silane from 50 to 52.7227, and Sulfur from 2.3 to 2.8 raised the predicted response from 396 to the target value of 450. A Response Surface Plot Another way to look at the response surface is to use the Contour Profiler. The Contour Profiler command in the Factor Profiling menu brings up the interactive contour profiling facility shown in Figure 6.14. It is useful for optimizing response surfaces graphically, especially when there are multiple responses. This example shows the profile for Silica and Sulfur at a fixed value of Silane. Options on the Contour Profiler title bar can be used to set the grid density, request a surface plot (mesh plot), and add contours at specified intervals, as shown in the contour plot in Figure 6.14. The sliders for each factor set values for Current X and Current Y.
Figure 6.15 shows the Contour profile when the Current X values have the most desirable settings. Figure 6.15 Contour Profiler with High and Low Limits
The Prediction Profiler and the Contour Profiler are discussed in more detail in the chapter “Standard Least Squares: Exploring the Prediction Equation,” p. 239 of JMP Statistics and Graphics Guide.
Figure 6.14 Contour Profiler for a Response Surface Analysis
Space Filling Designs
Space-filling designs are useful in situations where variability is of far less concern than the form of the model. Sensitivity studies of computer simulations are one such situation. For this case, and for any mechanistic or deterministic modeling problem, any variability is small enough to be ignored. For systems with no variability, randomization and blocking are irrelevant. Replication is undesirable because repeating the same run yields the same result. In space-filling designs, one objective is to spread the design points out to the maximum distance possible between any two points. This objective specifically prevents replicate points. A second objective is to space the points uniformly. Three design methods are implemented for these types of designs:
• Sphere Packing (emphasizes spread of points)
• Latin Hypercube (compromise between spread of points and uniform spacing)
• Uniform (mimics the uniform probability distribution)
7 Contents
Introduction to Space Filling Designs . . . . . . . . . . . . 101
Sphere-Packing Method . . . . . . . . . . . . 101
A Graphical View of the Sphere-Packing Design . . . . . . . . . . . . 101
Latin Hypercube Method . . . . . . . . . . . . 103
Understanding the Latin Hypercube Design . . . . . . . . . . . . 103
A Graphical View of the Latin Hypercube . . . . . . . . . . . . 104
Uniform Design Method . . . . . . . . . . . . 106
Comparison of Methods . . . . . . . . . . . . 106
Borehole Model Example . . . . . . . . . . . . 108
Creating a Sphere Packing Design for the Borehole Problem . . . . . . . . . . . . 108
Guidelines for the Analysis of Deterministic Data . . . . . . . . . . . . 110
Results of the Borehole Experiment . . . . . . . . . . . . 110
7 Space Filling Designs—Introduction to Space Filling Designs 101
Space-filling designs are useful for modeling systems that are deterministic or near deterministic. One example of a deterministic system is a computer simulation. Such simulations can be very complex, involving many variables with complicated interrelationships. A goal of designed experiments on these systems is to find a simpler empirical model that adequately predicts the behavior of the system over limited ranges of the factors. In experiments on systems where there is substantial random noise, the goal is to minimize the variance of prediction. In experiments on deterministic systems, there is no variance but there is bias. Bias is the difference between the approximation model and the true mathematical function. The goal of space-filling designs is to bound the bias. There are two schools of thought on how to bound the bias. One approach is to spread the design points out as far from each other as possible while staying inside the experimental boundaries. The other approach is to space the points out evenly over the region of interest. The Space Filling designer supports three design methods:
• The sphere packing method emphasizes spreading points apart.
• The uniform design method produces designs that mimic the uniform distribution.
• The Latin Hypercube method is a compromise between the other two.
Sphere-Packing Method The sphere packing design method maximizes the minimum distance between pairs of design points. The effect of this maximization is to spread the points out as much as possible inside the design region.
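One crude way to see what maximizing the minimum distance means is to search randomly over candidate designs and keep the best one. JMP's optimizer is far more sophisticated, so this Python sketch is purely illustrative:

```python
import random

def maximin_design(n_runs, n_factors, n_tries=2000, seed=1):
    """Pick the random candidate design whose minimum inter-point distance
    is largest -- a crude stand-in for a sphere-packing optimizer."""
    rng = random.Random(seed)
    best, best_dist = None, -1.0
    for _ in range(n_tries):
        pts = [[rng.random() for _ in range(n_factors)] for _ in range(n_runs)]
        # Smallest pairwise Euclidean distance in this candidate design:
        d = min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                for i, p in enumerate(pts) for q in pts[i + 1:])
        if d > best_dist:
            best, best_dist = pts, d
    return best, best_dist

design, min_dist = maximin_design(n_runs=8, n_factors=2)
print(round(min_dist, 3))  # best minimum distance found
```

Even the best of 2000 random candidates falls well short of the 0.518 minimum distance that the sphere-packing designer achieves for 8 runs in two factors.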
A Graphical View of the Sphere-Packing Design To develop some graphical intuition about this method, do the following. Select DOE > Space Filling from the main menu. In the initial platform dialog, give the two existing factors, X1 and X2, values ranging from 0 to 1 (instead of the default –1 to 1). Select Show Diagnostics from the platform menu, as shown in Figure 7.1.
Figure 7.1 Space Filling Dialog for Two Factors
Click Continue to see the design specification dialog shown on the left in Figure 7.2. In that dialog, specify a sample size of 8 and click the Sphere Packing button. The platform now creates the design and displays the design runs, along with the design diagnostics when that option is selected from the platform menu, as shown previously. Open the Design Diagnostics outline node and note that the Minimum Distance is 0.518. Figure 7.2 Space-Filling Design Dialog, Sphere-Packing Design Settings and Design Diagnostics
To help visualize this design, click Make Table. Then, select Graph > Overlay Plot. In the Graph launch dialog, specify X1 as X and X2 as Y, then click OK. When the plot appears, right-click inside the plot and select Size/Scale > Frame Size from the menu that appears. Set the frame size to be 150 by 150. (You can adjust the frame size to be anything you want, but it is important that the frame be square).
For Each Row(Circle({X1, X2}, 0.518/2))
where 0.518 is the minimum distance number you noted above. This script draws a circle centered at each design point with radius 0.259 (half the diameter, 0.518), as shown on the left in Figure 7.3. This plot shows the efficient way JMP packs the design points. Repeat the above procedure exactly, but with 10 runs instead of 8. When the plot appears, again set the frame size and create a graphics script using the minimum distance from the diagnostic report as the radius for the circle. You should see a graph similar to the one on the right in Figure 7.3. Note the irregular nature of the sphere packing. In fact, you can repeat the process to get a slightly different picture because the arrangement is dependent on the random starting point. Figure 7.3 Sphere-packing Example with 8 Runs (left) and 10 Runs (right)
Latin Hypercube Method In a Latin Hypercube each factor has as many levels as there are runs in the design. The levels are spaced evenly from the lower bound to the upper bound of the factor. Like the sphere packing method, the Latin Hypercube method chooses points to maximize the minimum distance between design points, but with a constraint. The constraint involves maintaining the even spacing between factor levels.
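The level structure is easy to mimic: each column of a Latin hypercube is a permutation of the same set of evenly spaced levels. The Python sketch below performs only the permutation step; unlike JMP, it makes no attempt to maximize the minimum distance between points:

```python
import random

def latin_hypercube(n_runs, n_factors, low=0.0, high=1.0, seed=1):
    """Each factor gets n_runs evenly spaced levels; each column is an
    independent random permutation of those levels (no maximin search,
    unlike JMP's Latin Hypercube designer, which also spreads points apart)."""
    rng = random.Random(seed)
    step = (high - low) / (n_runs - 1)
    levels = [low + i * step for i in range(n_runs)]
    cols = []
    for _ in range(n_factors):
        perm = levels[:]
        rng.shuffle(perm)
        cols.append(perm)
    # Transpose: one row per run.
    return [list(row) for row in zip(*cols)]

design = latin_hypercube(8, 2)
for run in design:
    print(run)
```

Each column uses every level exactly once, which is the defining Latin hypercube property.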
Understanding the Latin Hypercube Design To illustrate this, do the following. Select DOE > Space Filling from the main menu. In the initial platform dialog, add two factors and give all four factors values ranging from 1 to 8 (instead of the default –1 to 1). Click Continue to see the design specification dialog shown in Figure 7.4. In that dialog, specify a sample size of 8 and click the Latin Hypercube button. The platform now creates this design and displays the design runs.
Again right-click inside the frame and select Add Graphics Script. Enter the following script into the dialog box that appears.
Figure 7.4 Space-Filling Dialog for Four Factors
Figure 7.5 shows the Latin hypercube design with 8 runs. Note that each column (factor) is assigned each level only once, and each column is a different permutation of the levels. Figure 7.5 Latin Hypercube design for 8 Runs with 8 Levels
A Graphical View of the Latin Hypercube You can also visualize Latin hypercube designs in the same way as in the sphere-packing example. Begin by choosing Space Filling from the DOE menu. The initial dialog appears with two factors. As before, give the two existing factors, X1 and X2, values ranging from 0 to 1 (instead of the default –1 to 1). Select Show Diagnostics from the platform menu, as shown in Figure 7.1. Click Continue, enter 8 runs, and click Latin Hypercube. You should see factor settings and design diagnostics results similar to those in Figure 7.6.
Click Make Table. Then, select Graph > Overlay Plot. In the Graph launch dialog, specify X1 as X and X2 as Y, then click OK. When the plot appears, right-click inside the plot and select Size/Scale > Frame Size from the menu that appears. Set the frame size to be 150 by 150. (You can adjust the frame size to be anything you want, but it is important that the frame be square.) Again right-click inside the frame and select Add Graphics Script. Enter the following script into the dialog box that appears.
For Each Row(Circle({X1, X2}, 0.404/2))
where 0.404 is the minimum distance number you noted above. This script draws a circle centered at each design point with radius 0.202 (half the diameter, 0.404), as shown on the left in Figure 7.7. This plot shows the efficient way JMP packs the design points. Repeat the above procedure exactly, but with 10 runs instead of 8. When the plot appears, again set the frame size and create a graphics script using the minimum distance from the diagnostic report as the radius for the circle. You should see a graph similar to the one on the right in Figure 7.7. Note the irregular nature of the packing. In fact, you can repeat the process to get a slightly different picture because the arrangement depends on the random starting point. Figure 7.7 Comparison of Latin Hypercube Designs with 8 Runs and with 10 Runs
Figure 7.6 Latin Hypercube Design with 8 Runs
Note that the minimum distance between each pair of points in the Latin hypercube design is smaller than that for the sphere packing design. This is because the Latin hypercube design constrains the levels of each factor to be evenly spaced. The sphere packing design maximizes the minimum distance without any constraints.
Uniform Design Method The Uniform Design method minimizes the discrepancy between the design points (the empirical uniform distribution) and a theoretical uniform distribution. These designs are most useful for getting a simple and precise estimate of the integral of an unknown function. The estimate is the average of the observed responses from the experiment. Create the 8-run design as before. Choose Space Filling from the DOE menu. The initial dialog appears with two factors. Give the two existing factors, X1 and X2, values ranging from 0 to 1 (instead of the default –1 to 1). Select Show Diagnostics from the platform menu, as shown in Figure 7.1. Click Continue, then enter 8 as the sample size and choose Uniform in the Space Filling Design Methods dialog. Figure 7.8 shows the resulting runs and design diagnostics. Figure 7.8 Diagnostics for Uniform Space Filling Designs with 8 Runs
The emphasis of the Uniform Design method is not to spread out the points. Note that the minimum distances in Figure 7.8 vary substantially. Also, a uniform design does not guarantee even spacing of the factor levels. However, as the number of runs increases, running the Distribution platform on each factor should show a flat histogram.
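To illustrate the integral-estimation idea, the sketch below averages a function over design points in the unit square. Plain random points stand in for a true uniform design (which would minimize discrepancy), so this only conveys the principle:

```python
import random

# Estimate the integral of f over the unit square as the average of f at the
# design points. A true uniform design minimizes discrepancy; plain random
# points are used here only as an illustrative stand-in.
def f(x1, x2):
    return x1 * x2  # true integral over [0,1]^2 is 1/4

rng = random.Random(7)
points = [(rng.random(), rng.random()) for _ in range(2000)]
estimate = sum(f(x1, x2) for x1, x2 in points) / len(points)
print(round(estimate, 2))  # close to 0.25
```

A low-discrepancy uniform design would typically reach the same accuracy with far fewer runs than random sampling.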
Comparison of Methods To review, the objective of each space-filling design method:
• The sphere packing design method maximizes the minimum distance between design points.
• The Latin Hypercube method maximizes the minimum distance between design points but requires even spacing of the levels of each factor.
• The uniform design method minimizes the discrepancy between the design points (the empirical uniform distribution) and a theoretical uniform distribution.
Figure 7.9 shows a comparison of the design diagnostics for the three 8-run space-filling designs. Note that the discrepancy for the uniform design is the smallest (best). The discrepancy for the sphere-packing design is the largest (worst). The discrepancy for the Latin Hypercube takes an intermediate value that is closer to the optimal value. Also note that the minimum distance between pairs of points is largest (best) for the sphere-packing method. The uniform design has pairs of points that are only about half as far apart. The Latin Hypercube design behaves more like the sphere packing design in spreading the points out. For both spread and discrepancy, the Latin Hypercube design represents a healthy compromise solution. Figure 7.9 Comparison of Diagnostics for Three 8-Run Space Filling Methods
(Figure 7.9 panels, left to right: sphere packing, Latin Hypercube, uniform)
Another point of comparison is the time it takes to compute a design. The Uniform design method requires the most time to compute. Also the time increases rapidly with the number of points. For comparable problems, all the space filling design methods take longer to compute than the D-optimal designs in the Custom designer.
When the Show Diagnostics option is in effect, the Design Diagnostics panel displays the minimum distance from each point to its closest neighbor and the discrepancy value. The discrepancy is the integrated difference between the design points and the uniform distribution.
Borehole Model Example Worley (1987) presented a model of the flow of water through a borehole that is drilled from the ground surface through two aquifers. The response variable y is the flow rate through the borehole in m³/year and is determined by the equation

\[
y = \frac{2\pi T_u (H_u - H_l)}
         {\ln(r/r_w)\left[\,1 + \dfrac{2 L T_u}{\ln(r/r_w)\, r_w^2 K_w} + \dfrac{T_u}{T_l}\,\right]}
\]
There are 8 inputs to this model:
• rw = radius of borehole, 0.05 to 0.15 m
• r = radius of influence, 100 to 50,000 m
• Tu = transmissivity of upper aquifer, 63,070 to 115,600 m²/year
• Hu = potentiometric head of upper aquifer, 990 to 1100 m
• Tl = transmissivity of lower aquifer, 63.1 to 116 m²/year
• Hl = potentiometric head of lower aquifer, 700 to 820 m
• L = length of borehole, 1120 to 1680 m
• Kw = hydraulic conductivity of borehole, 9855 to 12,045 m/year
This example is atypical of most computer experiments because the response can be expressed as a simple, explicit function of the input variables. However, this simplicity is useful for explaining the design methods. The factors for this model are in the Borehole Factors.jmp data table.
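Because the response is an explicit function, it is straightforward to code. Here is a Python rendering of the flow equation, evaluated at the midpoint of each factor range (the specific call values are illustrative, not taken from the text):

```python
from math import log, pi

def borehole_flow(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """Flow rate y (m^3/year) through the borehole, per the equation above."""
    lnr = log(r / rw)
    return (2 * pi * Tu * (Hu - Hl)) / (
        lnr * (1 + (2 * L * Tu) / (lnr * rw**2 * Kw) + Tu / Tl))

# Midpoint of each factor range listed above:
y = borehole_flow(rw=0.10, r=25050, Tu=89335, Hu=1045,
                  Tl=89.55, Hl=760, L=1400, Kw=10950)
print(round(y, 1))  # roughly 70 m^3/year
```

Having the true function in code is what makes it possible, later in this chapter, to profile the prediction bias anywhere in the factor region.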
Creating a Sphere Packing Design for the Borehole Problem To begin, select DOE > Space Filling from the main menu. Then, load the factors with the Load Factors command from the platform menu. In the file selection dialog, select Borehole Factors.jmp table from the DOE Sample Data folder and click OK to see the DOE dialog in Figure 7.10.
Note: The logarithms of r and rw are used in the following discussion.
Click Continue to produce the dialog in Figure 7.11 and specify a sample size of 32 runs. Figure 7.11 Space Filling Design Method Dialog
Now, select Sphere Packing to produce the design. When the design appears, click Make Table to make a table showing the design settings for the experiment. The factor settings in the example table might not be the same as the ones you see when generating the design, because the designs are generated from a random seed. An example with data is stored in the Borehole Sphere Packing.jmp data table (see Figure 7.12). This table also has a table variable that contains a script to analyze the data, and the results of the analysis are saved as columns in the table.
Figure 7.10 Loaded Factors for Borehole Example
Figure 7.12 Borehole Sample Data
Guidelines for the Analysis of Deterministic Data It is important to remember that deterministic data have no random component. As a result, p-values from fitted statistical models do not have their usual meanings. A large F statistic (low p-value) is an indication of an effect due to a model term. However, you cannot make valid confidence intervals about the size of the effects or about predictions made using the model. Residuals from any model fit to deterministic data are not a measure of noise. Rather, a residual shows the model bias for the current model at the current point. Distinct patterns in the residuals indicate new terms to add to the model to reduce model bias.
Results of the Borehole Experiment The sphere packing example produced the sample data file Borehole Sphere Packing.jmp. A stepwise regression of the response, log y, versus the full quadratic model in the eight factors led to the prediction formula column. The prediction bias column is the difference between the true model column and the prediction formula column. Note that the prediction bias is relatively small for each of the experimental points. This is an indication that the model fits the data well.
In this case, the true model column contains a relatively simple formula, which allows profiling the prediction bias to find its value anywhere in the region of the data. To understand the prediction bias in this example, select Profiler from the Graph main menu. Complete the Profiler dialog as shown at the top in Figure 7.13. Be sure to check the Expand Intermediate Formulas box because the prediction bias formula is a function of columns that are themselves created by formulas. The profile plots at the bottom in Figure 7.13 show the prediction bias at the center of the design region. If there were no bias, the profile traces would be constant across the value range of each factor. In this example, the variables Hu and Hl show nonlinear effects. Figure 7.13 Profile of the Prediction Bias in the Borehole Sphere Packing Data
The range of the prediction bias on the data is smaller than the range of the prediction bias over the entire domain of interest. To see this, look at the distribution analysis of the prediction bias in Figure 7.14. Note that the maximum bias is 1.826 and the minimum is –0.684 (the range is 2.51).
In real-world examples, the true model is generally not available in a simple analytical form. As a result, it is impossible to know the prediction bias at points other than the observed data without doing additional runs.
Figure 7.14 Distribution of the Prediction Bias in the Borehole Sphere Packing data.
The top plot shows the maximum bias (3.02) over the entire domain of the factors. The plot at the bottom shows the comparable minimum bias (–4.82758). This gives a range of 7.84, which is more than three times the size of the range over the observed data. Figure 7.15 Prediction Plots Showing Maximum and Minimum Bias Over Factor Domains
Keep in mind that in this example the true model is given. In any meaningful application, the response at any factor setting is unknown. The prediction bias over the experimental data underestimates the bias throughout the design domain. There are two ways to assess the extent of this underestimation: • Cross validation • Verification runs
Verification runs are new runs performed at different settings to assess the lack of fit of the empirical model.
Cross validation refits the data to the model while holding back a subset of the points and looks at the error in estimating those points.
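Leave-one-out cross validation can be sketched generically: refit the model with each point held out in turn and record the prediction error at the held-out point. The straight-line fit below is just a toy stand-in for the empirical model:

```python
def loo_cv_errors(xs, ys, fit, predict):
    """Leave-one-out cross validation: refit without each point in turn and
    measure the error in predicting the held-out point."""
    errors = []
    for i in range(len(xs)):
        xs_train = xs[:i] + xs[i + 1:]
        ys_train = ys[:i] + ys[i + 1:]
        model = fit(xs_train, ys_train)
        errors.append(ys[i] - predict(model, xs[i]))
    return errors

# Toy example: a straight-line fit to nearly linear data.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def predict_line(model, x):
    slope, intercept = model
    return slope * x + intercept

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.9, 4.1, 5.9, 8.1]   # approximately y = 2x
errs = loo_cv_errors(xs, ys, fit_line, predict_line)
print([round(e, 2) for e in errs])
```

Large held-out errors relative to the in-sample residuals would signal that the empirical model underestimates the bias, which is exactly the concern raised above.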
Full Factorial Designs
In full factorial designs you perform an experimental run at every combination of the factor levels. The sample size is the product of the numbers of levels of the factors. For example, a factorial experiment with a two-level factor, a three-level factor, and a four-level factor has 2 x 3 x 4 = 24 runs. Factorial designs with only two-level factors have a sample size that is a power of two (specifically 2^f, where f is the number of factors). When there are three factors, the factorial design points are at the vertices of a cube. For more factors, the design points lie on a hypercube. Full factorial designs are the most conservative of all design types. There is little scope for ambiguity when you are willing to try all combinations of the factor settings. Unfortunately, the sample size grows exponentially in the number of factors, so full factorial designs are too expensive to run for most practical purposes.
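Enumerating a full factorial is simply the Cartesian product of the factor level sets. A sketch of the 2 x 3 x 4 example above, with hypothetical factor names and levels:

```python
from itertools import product

# Enumerate a 2x3x4 full factorial: every combination of the factor levels.
# The factor names and level values below are hypothetical.
feed_rate = ["low", "high"]          # 2-level factor
additive  = ["A", "B", "C"]          # 3-level factor
temp      = [100, 120, 140, 160]     # 4-level factor

runs = list(product(feed_rate, additive, temp))
print(len(runs))  # 2 * 3 * 4 = 24
```

Each element of `runs` is one row of the design table, which is why the sample size is exactly the product of the level counts.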
8 Contents The Factorial Dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 The Five-Factor Reactor Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
8 Full Factorial Designs—The Factorial Dialog 117
To start, select Full Factorial Design in the DOE main menu, or click the Full Factorial Design button on the JMP Starter DOE tab page. The popup menu on the right in Figure 8.1 illustrates the way to specify categorical factors with 2 to 9 levels. Add a continuous factor and two categorical factors with three and four levels, respectively. Also, change the level names (optional) to those shown at the left in Figure 8.1. Figure 8.1 Full Factorial Factor Panel
When you finish adding factors, click Continue to see a panel of output options as shown to the left in Figure 8.2. When you click Make Table, the table shown in Figure 8.2 appears. Note that the values in the Pattern column describe the run each row represents. For continuous variables, plus or minus signs represent high and low levels. Level numbers represent values of categorical variables. Figure 8.2 2x3x4 Full Factorial Design Table (callouts mark the minus and plus signs for the low and high levels of the continuous factor, and the level numbers of the categorical factors)
The Five-Factor Reactor Example Results from the reactor experiment described in the chapter “Screening Designs,” p. 69, can be found in the Reactor 32 Runs.jmp sample data table (Box, Hunter, and Hunter 1978, pp. 374–390). The
variables have the same names: Feed Rate, Catalyst, Stir Rate, Temperature, and Concentration. These are all two-level continuous factors. To create the design yourself, select Full Factorial Design from the DOE main menu (or toolbar), or click Full Factorial Design on the DOE tab page of the JMP Starter window. Do the following to complete the Response panel and the Factors panel: • Use the Load Responses command from the popup menu on the Full Factorial Design title bar and open the Reactor Response.jmp file to get the response specifications. • Likewise, use the Load Factors command and open the Reactor Factors.jmp file to get the Factors panel. You should see the completed dialog shown in Figure 8.3. Figure 8.3 Full-Factorial Example Response and Factors Panels
A full factorial design includes runs for all combinations of high and low factors for the five variables, giving 32 runs. Click Continue to see the Output Options panel on the right in Figure 8.3. When you click Make Table, the JMP table in Figure 8.4 is constructed with a run for every combination of high and low values for the five variables. Initially, the table has an empty Y column for entering response values when the experiment is complete. In Figure 8.4, assume the experiment is complete and the Y column is called Percent Reacted. The table has 32 rows, which cover all combinations of five factors with two levels each. The Reactor 32 Runs.jmp sample data file has these experimental runs and the results from the Box, Hunter, and Hunter study. Figure 8.4 shows the runs and the response data.
Begin the analysis with a quick look at the data before fitting the factorial model. The plot in Figure 8.5 shows a distribution of the response, Percent Reacted, using the Normal Quantile plot option on the Distribution command on the Analyze menu. Figure 8.5 Distribution of Response Variable for Reactor Data
Figure 8.4 2^5 Factorial Reactor Data (Reactor 32 Runs.jmp sample data)
Start the formal analysis with a stepwise regression. The data table has a script stored with it that automatically defines an analysis of the model that includes main effects and all two-factor interactions, and brings up the Stepwise control panel. To do this, choose Run Script from the Fit Model popup menu on the title bar of the Reactor 32 Runs.jmp table (Figure 8.6). The Stepwise Regression Control Panel appears with a preliminary Current Estimates report. The probability to enter a factor into the model is 0.05 (the default is 0.25), and the probability to remove a factor is 0.1. Figure 8.6 Run JSL Script for Stepwise Regression (callouts note the changes from the default settings: Prob to Enter is 0.05, Prob to Leave is 0.10, and the direction is Mixed instead of Forward or Backward)
A useful way to use the Stepwise platform is to check all the main effects in the Current Estimates table, and then use Mixed as the Direction for the stepwise process, which can include or exclude factors in the model. To do this, click the check boxes for the main effects of the factors as shown in Figure 8.7, and click Go on the Stepwise control panel. Figure 8.7 Starting Model For Stepwise Process
The Mixed stepwise procedure removes insignificant main effects and adds important interactions. The end result is shown in Figure 8.8. Note that the Feed Rate and Stir Rate factors are no longer in the model.
Click the Make Model button to generate a new model dialog. The Model Specification dialog automatically has the effects identified by the stepwise model (Figure 8.9). Figure 8.9 Model Dialog for Fitting a Prediction Model
Click Run Model to see the analysis for a candidate prediction model. The left-hand figure in Figure 8.10 shows the actual by predicted plot for the model. The predicted model covers a range of predictions from 40% to 95% Reacted. The size of the random noise as measured by the RMSE is only 3.3311%, which is more than an order of magnitude smaller than the range of predictions. This is strong evidence that the model has good predictive capability. Figure 8.10 shows a table of model coefficients and their standard errors. All effects selected by the stepwise process are highly significant.
Figure 8.8 Model After Mixed Stepwise Regression
Figure 8.10 Actual by Predicted Plot and Prediction Model Estimates
The Prediction Profiler also gives you a way to compare the factors and find optimal settings. Open the Prediction Profiler with the Profiler command on the Factor Profiling submenu (Figure 8.11) on the Response title bar. The Prediction Profiler is discussed in more detail in the chapter “Response Surface Designs,” p. 85, and in the chapter “Standard Least Squares: Exploring the Prediction Equation,” p. 239 of JMP Statistics and Graphics Guide. The top profile in Figure 8.11 shows the initial settings. An easy way to find optimal settings is to choose Desirability Functions from the popup menu on the profiler title bar. Then select Maximize Desirability to see the bottom profile in Figure 8.11. The plot of Desirability versus Percent Reacted shows that the goal is to maximize Percent Reacted. The reaction is economically infeasible unless the Percent Reacted is above 90%; therefore the Desirability for values less than 90% decreases and finally becomes zero. Desirability increases linearly as the Percent Reacted increases. The maximum Desirability is 0.945 when Catalyst and Temperature are at their highest settings and Concentration is at its lowest setting. Percent Reacted increases from 65.5 at the center of the factor ranges to 95.875 at the most desirable setting.
Figure 8.11 Initial Profiler Settings and Optimal Settings
Taguchi Designs
Quality was the watchword of the 1980s, and Genichi Taguchi was a leader in the growth of quality consciousness. One of Taguchi’s technical contributions to the field of quality control was a new approach to industrial experimentation. The purpose of the Taguchi method was to develop products that worked well in spite of natural variation in materials, operators, suppliers, and environmental change. This is robust engineering.

Much of the Taguchi method is traditional. His orthogonal arrays are two-level, three-level, and mixed-level fractional factorial designs. The unique aspects of his approach are the use of signal and noise factors, inner and outer arrays, and signal-to-noise ratios.

Dividing system variables according to their signal and noise factors is a key ingredient in robust engineering. Signal factors are system control inputs. Noise factors are variables that are typically difficult or expensive to control. The inner array is a design in the signal factors and the outer array is a design in the noise factors. A signal-to-noise ratio is a statistic calculated over an entire outer array. Its formula depends on whether the experimental goal is to maximize, minimize, or match a target value of the quality characteristic of interest. A Taguchi experiment repeats the outer array design for each run of the inner array. The response variable in the data analysis is not the raw response or quality characteristic; it is the signal-to-noise ratio.

The Taguchi designer in the DOE platform supports signal and noise factors, inner and outer arrays, and signal-to-noise ratios as Taguchi specifies.
Contents
The Taguchi Design Approach . . . 127
Taguchi Design Example . . . 127
Analyze the Byrne-Taguchi Data . . . 129
9 Taguchi Designs—The Taguchi Design Approach 127
The Taguchi method defines two types of factors: control factors and noise factors. An inner design constructed over the control factors finds optimum settings. An outer design over the noise factors looks at how the response behaves for a wide range of noise conditions. The experiment is performed on all combinations of the inner and outer design runs. A performance statistic is calculated across the outer runs for each inner run. This becomes the response for a fit across the inner design runs. Table 9.1 “Taguchi's Signal to Noise Ratios” lists the recommended performance statistics.

Table 9.1 Taguchi's Signal to Noise Ratios

Goal                          S/N Ratio Formula
nominal is best               S/N = 10 log10( Ybar^2 / s^2 )
larger-is-better (maximize)   S/N = -10 log10( (1/n) * sum_i 1/Yi^2 )
smaller-is-better (minimize)  S/N = -10 log10( (1/n) * sum_i Yi^2 )
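The three performance statistics in Table 9.1 can be written directly as functions. The sketch below follows the table's formulas; the sample data are the pull-off force values published for the first run of the Byrne-Taguchi example introduced in the next section, whose larger-is-better S/N works out to about 24.025:

```python
import numpy as np

def sn_nominal(y):
    """Nominal-is-best: 10 * log10(ybar^2 / s^2)."""
    y = np.asarray(y, float)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

def sn_larger(y):
    """Larger-is-better: -10 * log10(mean(1 / y^2))."""
    y = np.asarray(y, float)
    return -10 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller(y):
    """Smaller-is-better: -10 * log10(mean(y^2))."""
    y = np.asarray(y, float)
    return -10 * np.log10(np.mean(y ** 2))

# Pull-off force measurements over the eight outer-array runs of the
# first inner run of the Byrne-Taguchi experiment
y = [15.6, 9.5, 16.9, 19.9, 19.6, 19.6, 20.0, 19.1]
print(round(sn_larger(y), 3))
```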
Taguchi Design Example The following example is an experiment done at Baylock Manufacturing Corporation and described by Byrne and Taguchi (1986). The objective of the experiment is to find settings of predetermined control factors that simultaneously maximize the adhesiveness (pull-off force) and minimize the assembly costs of nylon tubing. The data are in the Byrne Taguchi Data.jmp data table in the Sample Data folder, but you can generate the original design table with the Taguchi designer of the JMP DOE facility. The signal and noise factors for this example appear in Table 9.2 “Definition of Adhesiveness Experiment Effects”. Table 9.2 Definition of Adhesiveness Experiment Effects Factor Name Interfer Wall IDepth Adhesive Time Temp Humidity
Type
Levels
Comment
control control control control noise noise noise
3 3 3 3 2 2 2
tubing and connector interference the wall thickness of the connector insertion depth of the tubing into the connector percent adhesive the conditioning time temperature the relative humidity
The factors for the example are in the JMP file called Byrne Taguchi Factors.jmp, found in the DOE Sample Data folder. To start this example:
1 Open the factors table.
2 Choose Taguchi from the DOE main menu or toolbar, or click the Taguchi button on the DOE tab page of the JMP Starter.
3 Select Load Factors in the platform popup menu.
The factors panel then shows the four three-level control (signal) factors and three noise factors listed in Figure 9.1.
Figure 9.1 Response, and Signal and Noise Factors for the Byrne-Taguchi Example
This example uses the designs highlighted in the design choice panel. L9-Taguchi gives the L9 orthogonal array for the inner design. The outer design has three two-level factors. A full factorial in eight runs is generated. However, it is only used as a guide to identify a new set of eight columns in the final JMP data table—one for each combination of levels in the outer design. Click Make Table to create the design table shown in Figure 9.2. The pull-off adhesive force measures are collected and entered into the new columns, shown in the bottom table of Figure 9.3. As a notational convenience, the Y column names are ‘Y’ appended with the levels (+ or –) of the noise factors for that run. For example Y--- is the column of measurements taken with the three noise factors set at their low levels.
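The inner and outer arrays described above can be sketched as follows. The L9 array shown is the standard orthogonal array for four three-level factors, and the column-naming convention mirrors the Y--- through Y+++ scheme used in the design table:

```python
import itertools
import numpy as np

# L9 orthogonal array for four three-level factors (levels coded 1..3);
# every pair of columns contains all nine level combinations exactly once.
L9 = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
])

# Outer array: full factorial in the three two-level noise factors
outer = list(itertools.product((-1, 1), repeat=3))

# Each inner run is measured once per outer-array combination, giving one
# response column per noise setting: Y---, Y--+, ..., Y+++
col_names = ["Y" + "".join("+" if v > 0 else "-" for v in combo) for combo in outer]
print(len(L9), "inner runs x", len(outer), "outer columns")
print(col_names)
```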
Figure 9.3 Complete Taguchi Design Table
The column called SN Ratio Y is the performance statistic computed with the formula shown below. In this case, it is the ‘larger-the-better’ (LTB) formula, which is –10 times the common logarithm of the average squared reciprocal:

SN Ratio Y = -10 log10( (1/8) * [ 1/Y---^2 + 1/Y--+^2 + 1/Y-+-^2 + 1/Y-++^2 + 1/Y+--^2 + 1/Y+-+^2 + 1/Y++-^2 + 1/Y+++^2 ] )

This expression is large when all of the individual Y values are large.
Analyze the Byrne-Taguchi Data The data are now ready to analyze. The Table Property called Model in the Tables panel runs a JSL script that launches the Fit Model platform shown in Figure 9.4.
Figure 9.2 Taguchi Design Before Data Entry
The default model includes the main effects of the four Signal factors. The two responses are the mean and S/N Ratio over the outer array. The goal of the analysis is to find factor settings that maximize both the mean and the S/N Ratio. Figure 9.4 Fit Model Dialog for the Byrne-Taguchi Data
The prediction profiler is a quick way to find settings that give the highest signal-to-noise ratio for this experiment. The default prediction profile has all the factors set to low levels, as shown in the top of Figure 9.5. The profile traces indicate that different settings of the first three factors would increase SN Ratio Y.

The Prediction Profiler has a popup menu with options to help find the best settings for a given Desirability Function. The Desirability Functions option adds the row of traces and column of function settings to the profiler, as shown at the bottom in Figure 9.5. The default desirability functions are set to larger-is-better, which is what you want in this experiment. See the chapter “Standard Least Squares: Perspectives on the Estimates,” in the JMP Statistics and Graphics Guide, for more details about the Prediction Profiler.

After the Desirability Functions option is in effect, you can choose Maximize Desirability, which automatically sets the prediction traces to give the best results according to the desirability functions. In this example you can see that the settings for Interfer and Wall changed from L1 to L2. The IDepth setting changed from L1 to L3. There was no change in Adhesive. These new settings increased the signal-to-noise ratio from 24.0253 to 26.9075.
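The desirability search amounts to maximizing a larger-is-better desirability over the grid of factor levels. The sketch below uses a simple linear desirability and a hypothetical additive model of SN Ratio Y; the effect values are invented, chosen only so the optimum lands at the L2/L2/L3 pattern described above (JMP estimates the real effects from the inner array and uses smooth desirability functions):

```python
import itertools

def desirability_ltb(y, low, high):
    """Larger-is-better desirability: 0 below `low`, 1 above `high`, linear between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

# Hypothetical additive effects of each factor level on SN Ratio Y
# (illustrative numbers, not the fitted Byrne-Taguchi estimates)
effects = {
    "Interfer": {"L1": 0.0, "L2": 1.2, "L3": 0.6},
    "Wall":     {"L1": 0.0, "L2": 0.8, "L3": 0.4},
    "IDepth":   {"L1": 0.0, "L2": 0.5, "L3": 0.9},
}
base = 24.0  # hypothetical S/N at the all-L1 baseline

def predict(s):
    return base + sum(effects[f][lvl] for f, lvl in zip(effects, s))

settings = list(itertools.product(["L1", "L2", "L3"], repeat=3))
best = max(settings, key=lambda s: desirability_ltb(predict(s), low=24.0, high=28.0))
print(best, round(predict(best), 2))
```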
Figure 9.5 Best Factor Settings for Byrne Taguchi Data
Mixture Designs
The properties of a mixture are almost always a function of the relative proportions of the ingredients rather than their absolute amounts. In experiments with mixtures, a factor's value is its proportion in the mixture, which falls between 0 and 1. The sum of the proportions in any mixture recipe is 1 (100%). Designs for mixture experiments are fundamentally different from those for screening. Screening experiments are orthogonal. That is, over the course of an experiment, the setting of one factor varies independently of any other factor. The interpretation of screening experiments is simple, because the effects of the factors on the response are separable. With mixtures it is impossible to vary one factor independently of all the others. When you change the proportion of one ingredient, the proportion of one or more other ingredients must also change to compensate. This simple fact has a profound effect on every aspect of experimentation with mixtures: the factor space, the design properties, and the interpretation of the results. Because the proportions sum to one, mixture designs have an interesting geometry. The feasible region for a mixture takes the form of a simplex. For example, consider three factors in a 3-D graph. The plane where the sum of the three factors is one is a triangle-shaped slice. You can rotate the plane to see the triangle face-on and see the points in the form of a ternary plot.
[Illustration: the triangular feasible region of the simplex, with vertices x1, x2, and x3.]
The design of experiments facility offers the following types of designs for mixtures:
• simplex centroid
• simplex lattice
• extreme vertices
• ABCD designs
The extreme vertices design is the most flexible, since it handles constraints on the values of the factors.
Contents
The Mixture Design Dialog . . . 135
Mixture Designs . . . 136
Simplex Centroid Design . . . 136
Simplex Lattice Design . . . 138
Extreme Vertices . . . 139
Extreme Vertices Design for Constrained Factors . . . 140
Adding Linear Constraints to Mixture Designs . . . 141
Details on Extreme Vertices Method for Linear Constraints . . . 142
Ternary and Tetrary Plots . . . 142
Fitting Mixture Designs . . . 143
Whole Model Test and Anova Report . . . 144
Response Surface Reports . . . 144
Chemical Mixture Example . . . 145
Plotting a Mixture Response Surface . . . 146
10 Mixture Designs—The Mixture Design Dialog 135
The Mixture Design command on the DOE main menu or JMP Starter DOE tab page displays the standard Add Factors panel. When you click Continue, the Mixture dialog shown in Figure 10.1 lets you select one of the following types of design:

Optimal
Choosing Optimal invokes the Custom designer with all the mixture variables already defined.

Simplex Centroid
You specify the degree up to which the factor combinations are to be made.

Simplex Lattice
You specify how many levels you want on each edge of the grid.

Extreme Vertices
You specify linear constraints or restrict the upper and lower bounds to be within the 0 to 1 range.

ABCD Design
This approach by Snee (1975) generates a screening design for mixtures.

Figure 10.1 Mixture Design Selection Dialog (annotations: enter K for Simplex Centroid; enter the number of levels for Simplex Lattice; enter the degree for Extreme Vertices)
The design table appears when you click a design type button. The following sections show examples of each mixture design type.
Mixture Designs

If the process of interest is determined by a mixture of components, the relative proportions of the ingredients, rather than the absolute amounts, need to be studied. In mixture designs, all the factors sum to 1.
Simplex Centroid Design

A simplex centroid design of degree k with nf factors is composed of mixture runs with:
• each single factor alone
• all combinations of two factors at equal levels
• all combinations of three factors at equal levels
• and so on, up to k factors at a time combined at equal levels.
A center point run with equal amounts of all the ingredients is always included. The table of runs for a design of degree 1 with three factors (left in Figure 10.2) shows runs for each single ingredient followed by the center point. The table of runs to the right is for three factors of degree 2. The first three runs are for each single ingredient, the second set shows each combination of two ingredients in equal parts, and the last run is the center point.
Figure 10.2 Three-Factor Simplex Centroid Designs of Degrees 1 and 2
To generate the set of runs in Figure 10.2, choose the Mixture Design command from the DOE menu and enter three continuous factors. Click Continue to see the Mixture Design Selection dialog shown in Figure 10.1. Enter ‘1’ for K and click Simplex Centroid to see the design on the left in Figure 10.3. Then use the Back button and enter ‘2’ for K and click Simplex Centroid to see the design on the right.
As another example, enter 5 for the number of factors and click Continue. When the Mixture Design dialog appears, the default value of K is 4, which is fine for this example. Click Simplex Centroid. When the design appears, click Make Table to see the 31-run JMP data table shown in Figure 10.4. Note that the first five runs have only one factor. The next ten runs have all the combinations of two factors. Then, there are ten runs for three-factor combinations, five runs for four-factor combinations, and (as always) the last run with all factors. Figure 10.4 List of Factor Settings for Five-Factor Simplex Centroid Design
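The run counts above follow directly from the construction: one run per subset of up to K factors, blended in equal parts, plus the overall centroid. A small sketch (function name is illustrative, not a JMP API):

```python
from itertools import combinations
from fractions import Fraction

def simplex_centroid(nf, k):
    """All blends of j factors (1 <= j <= k) in equal parts, plus the centroid."""
    runs = []
    for j in range(1, k + 1):
        for subset in combinations(range(nf), j):
            run = [Fraction(0)] * nf
            for i in subset:
                run[i] = Fraction(1, j)  # each chosen factor gets an equal share
            runs.append(tuple(run))
    centroid = tuple([Fraction(1, nf)] * nf)
    if centroid not in runs:  # avoid duplicating the center when k == nf
        runs.append(centroid)
    return runs

# Five factors, degree 4: C(5,1) + C(5,2) + C(5,3) + C(5,4) + center = 31 runs
runs = simplex_centroid(5, 4)
print(len(runs))
```

The same function reproduces the small designs of Figure 10.2: degree 1 with three factors gives 4 runs (three singles plus the center), and degree 2 gives 7.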
Figure 10.3 Create Simplex Centroid Designs of Degrees 1 and 2
Simplex Lattice Design

The simplex lattice design is a space-filling design that creates a triangular grid of runs. The design is the set of all combinations where the factors’ values are i/m, where i is an integer from 0 to m, such that the sum of the factors is 1. To create Simplex Lattice designs, specify the number of levels you want in the Mixture Design Type dialog (Figure 10.1) and click Simplex Lattice. Figure 10.5 shows the runs for three-factor simplex lattice designs of degrees 3, 4, and 5, with their corresponding geometric representations. In contrast to the simplex centroid design, the simplex lattice design does not necessarily include the center point. Figure 10.6 lists the runs for a simplex lattice of degree 3 for five effects. In the five-level example, the runs creep across the hyper-triangular region and fill the space in a grid-like manner.
Figure 10.5 Three-Factor Simplex Lattice Designs for Factor Levels 3, 4, and 5
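The lattice definition above (all points with coordinates i/m that sum to 1) translates directly into code. A sketch using exact rational arithmetic so the sum test is exact (function name is illustrative):

```python
from fractions import Fraction
from itertools import product

def simplex_lattice(nf, m):
    """All points with coordinates i/m (i = 0..m) that sum to exactly 1."""
    grid = [Fraction(i, m) for i in range(m + 1)]
    return [p for p in product(grid, repeat=nf) if sum(p) == 1]

# Three factors, m = 3 levels per edge: C(3 + 3 - 1, 3 - 1) = 10 runs
pts = simplex_lattice(3, 3)
print(len(pts))
```

In general the design has C(m + nf - 1, nf - 1) runs, which is why the grids in Figure 10.5 grow quickly with the number of levels.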
Extreme Vertices

The extreme vertices design incorporates limits on factors into the design and picks as design points the vertices and their averages formed by these limits. The additional limits are usually in the form of range constraints, upper bounds, and lower bounds on the factor values. The following example design table is for five factors with the range constraints shown here, where the ranges are smaller than the default 0 to 1 range. Click Continue and enter 4 as the Degree (Figure 10.7), then click the Extreme Vertices button. When the Display and Modify Design dialog appears (not shown here), select Sort Left to Right and click Make Table. Figure 10.8 shows a partial listing of the resulting JMP design table.
Figure 10.7 Example of Five-Factor Extreme Vertices
Figure 10.6 JMP Design Table for Simplex Lattice with Five Variables, Order (Degree) 3
Figure 10.8 JMP Design Table for Extreme Vertices with Range Constraints
Details on Extreme Vertices Method for Range Constraints

If the only constraints are range constraints, the extreme vertices design is constructed using the XVERT method developed by Snee and Marquardt (1974) and Snee (1975). After the vertices are found, a simplex centroid method generates combinations of vertices up to a specified order.

The XVERT method first creates a full 2^(nf - 1) factorial design using the given low and high values of the nf - 1 factors with the smallest ranges. Then, it computes the value of the one factor left out based on the restriction that the factors’ values must sum to 1. It keeps the point if it is in that factor’s range. If not, it increments or decrements the value to bring it within range, and decrements or increments each of the other factors in turn by the same amount, keeping the points that still satisfy the initial restrictions.

The above algorithm creates the vertices of the feasible region in the simplex defined by the factor constraints. However, Snee (1975) has shown that it can also be useful to have the centroids of the edges and faces of the feasible region. A generalized n-dimensional face of the feasible region is defined by nf - n of the boundaries, and the centroid of a face is defined to be the average of the vertices lying on it. The algorithm generates all possible combinations of the boundary conditions and then averages over the vertices generated in the first step.
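A simplified sketch of the corner-enumeration stage of XVERT follows (names and the exact adjustment rule are an approximation of the published algorithm, not JMP's implementation). The bounds used are the plasticizer bounds from the Chemical Mixture Example later in this chapter; for them, all four corners of the reduced factorial turn out to be feasible, so the region has exactly four vertices:

```python
from itertools import product

def xvert_vertices(lo, hi):
    """Sketch of XVERT for range-constrained mixtures: leave out the factor
    with the widest range, enumerate corners of the others, and solve for
    the left-out factor from sum(x) == 1."""
    nf = len(lo)
    last = max(range(nf), key=lambda i: hi[i] - lo[i])  # factor left out
    others = [i for i in range(nf) if i != last]
    vertices = set()
    for corner in product(*[(lo[i], hi[i]) for i in others]):
        x = [0.0] * nf
        for i, v in zip(others, corner):
            x[i] = v
        rest = 1.0 - sum(corner)
        if lo[last] - 1e-9 <= rest <= hi[last] + 1e-9:
            x[last] = min(max(rest, lo[last]), hi[last])
            vertices.add(tuple(round(v, 9) for v in x))
        else:
            # Clamp the left-out factor to its nearest bound and move the
            # difference onto each other factor in turn, keeping feasible points.
            clamped = min(max(rest, lo[last]), hi[last])
            diff = rest - clamped
            for j in others:
                y = list(x)
                y[last] = clamped
                y[j] = x[j] + diff
                if lo[j] - 1e-9 <= y[j] <= hi[j] + 1e-9:
                    vertices.add(tuple(round(v, 9) for v in y))
    return sorted(vertices)

# Plasticizer bounds from the Chemical Mixture Example (p1, p2, p3)
lo = [0.409, 0.0, 0.151]
hi = [0.849, 0.252, 0.274]
verts = xvert_vertices(lo, hi)
for v in verts:
    print(v)
```

Four vertices plus the four edge centroids and the overall centroid would give nine runs, which is consistent with the 9-run design mentioned in the Chemical Mixture Example.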
Extreme Vertices Design for Constrained Factors

The extreme vertices design finds the corners (vertices) of a factor space constrained by limits specified for one or more of the factors. The property that the factors must be non-negative and must add up to 1 is the basic mixture constraint that makes a triangular-shaped region. Sometimes other ingredients need range constraints that confine their values to be greater than a lower bound or less than an upper bound. Range constraints chop off parts of the triangular-shaped (simplex) region to make additional vertices. It is also possible to have a linear constraint, which defines a linear combination of factors to be greater or smaller than some constant.

The geometric shape of a region bound by linear constraints is called a simplex, and because the vertices represent extreme conditions of the operating environment, they are often the best places to use as design points in an experiment. You usually want to add points between the vertices. The average of points that share a constraint boundary is called a centroid point, and centroid points of various degrees can be added. The centroid point for two neighboring vertices joined by a line is a 2nd degree centroid because a line is two dimensional. The centroid point for vertices sharing a plane is a 3rd degree centroid because a plane is three dimensional, and so on. If you specify an extreme vertices design but give no constraints, a simplex centroid design results.

Adding Linear Constraints to Mixture Designs

Consider the classic example presented by Snee (1979) and Piepel (1988). This example has three factors, X1, X2, and X3, with five individual factor bound constraints and three additional linear constraints:

Figure 10.9 Range Constraints and Linear Constraints

Factor bounds:      X1 >= 0.1   X1 <= 0.5   X2 >= 0.1   X2 <= 0.7   X3 <= 0.7
Linear constraints: 90 <= 85*X1 + 90*X2 + 100*X3
                    85*X1 + 90*X2 + 100*X3 <= 95
                    0.4 <= 0.7*X1 + X3

You first enter the upper and lower limits in the factors panel as shown in Figure 10.10. Click Continue to see the Mixture Design dialog. The Extreme Vertices selection on the Mixture Design dialog has a Linear Constraints button to add linear constraints. Click the Linear Constraints button for each constraint you have. In this example you need three constraint columns in the Linear Constraint dialog. Figure 10.10 shows the factors panel and the constraints panels completed for each of the constraints given in Figure 10.9. After the constraints are entered, click Extreme Vertices to see the 13-run factor settings like those shown on the right in Figure 10.10.

Note that you can enter a different number of runs in the sample size text box on the Display and Modify Design panel. Then click Find Subset to generate the optimal subset having the number of runs specified. For example, if you enter 10 runs, the resulting design will be the optimal 10-run subset of the 13 current runs. This is useful when the extreme vertices design generates a large number of vertices.
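The constraint set for this example is easy to express as a feasibility check. The helper below (name hypothetical) also includes the mixture constraint itself and the implicit non-negativity of X3:

```python
def feasible(x1, x2, x3, tol=1e-9):
    """Check a candidate blend against the Snee (1979) / Piepel (1988) constraints."""
    checks = [
        abs(x1 + x2 + x3 - 1) <= tol,            # basic mixture constraint
        0.1 - tol <= x1 <= 0.5 + tol,            # factor bounds
        0.1 - tol <= x2 <= 0.7 + tol,
        -tol <= x3 <= 0.7 + tol,                 # lower bound 0 is implicit
        90 - tol <= 85 * x1 + 90 * x2 + 100 * x3 <= 95 + tol,
        0.4 - tol <= 0.7 * x1 + x3,
    ]
    return all(checks)

print(feasible(0.3, 0.4, 0.3))   # inside the shaded feasible area
print(feasible(0.5, 0.4, 0.1))   # violates the 90 <= 85*X1 + 90*X2 + 100*X3 constraint
```

A filter like this is handy for verifying that every run of a generated design really satisfies all the constraints.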
Figure 10.10 Constraints and Table of Runs for Snee (1979) Mixture Model Example
Details on Extreme Vertices Method for Linear Constraints The extreme vertices implementation for linear constraints is based on the CONSIM algorithm developed by R.E. Wheeler, described in Snee (1979) and presented by Piepel (1988) as CONVRT. The method is also described in Cornell (1990, Appendix 10a). The method combines constraints and checks to see if vertices violate them. If so, it drops the vertices and calculates new ones. The method for doing centroid points is by Piepel (1988), named CONAEV. If there are only range constraints, click the Linear Constraints button to see the results of the CONSIM method, rather than the results from the XVERT method normally used by JMP.
Ternary and Tetrary Plots

The Snee (1979) and Piepel (1988) example is best understood by the ternary plot shown in Figure 10.11. Each constraint is a line. The area that satisfies all constraints is the shaded feasible area. There are six active constraints, six vertices, and six centroid points shown on the plot, as well as two inactive (redundant) constraints.
Figure 10.11 Ternary Plot Showing Piepel Example Constraints
A mixture problem in three components can be represented in two dimensions because the third component is a linear function of the others. The ternary plot shows how close to 1 a given component is by how close the point is to the vertex of that variable in the triangle. The plot to the left in Figure 10.12 illustrates a ternary plot. The same idea in three dimensions for four components maps a mixture to points inside a tetrahedron (pyramid), as illustrated by the tetrary plot to the right in Figure 10.12.
Figure 10.12 Ternary Plot (left) and Tetrary Plot (right) for Mixture Design
[The ternary plot labels the vertices X1 = (1, 0, 0), X2 = (0, 1, 0), and X3 = (0, 0, 1), the edge midpoint (1/2, 1/2, 0), and the centroid (1/3, 1/3, 1/3).]
Fitting Mixture Designs When fitting a model for mixture designs, you must take into account that all the factors add up to a constant, and thus a traditional full linear model will not be fully estimable.
The recommended model to fit a mixture response surface is • to suppress the intercept • to include all the linear main-effect terms • to exclude all the square terms (like X1*X1) • to include all the cross terms (like X1*X2) This model is called the Scheffé polynomial (Scheffé 1958). When you create a Mixture design and save the runs in a JMP data table, the model is stored with the data table as a Table Property. This Table Property, called Model, runs the script to launch the Model Specification dialog, which is automatically filled with the saved model. In this model, the parameters are easy to interpret (Cornell 1990). The coefficients on the linear terms are the fitted response at the extreme points where the mixture is all one factor. The coefficients on the cross terms indicate the curvature across each edge of the factor space. Figure 10.13 Mixture Fit Model Dialog
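The Scheffé polynomial can be fit with ordinary no-intercept least squares once the design matrix holds the linear terms and all pairwise cross terms. A sketch on synthetic data (the mixtures and coefficients are invented; with noise-free data the fit recovers them exactly):

```python
import numpy as np
from itertools import combinations

def scheffe_design_matrix(X):
    """Columns: all linear terms plus all pairwise cross terms, no intercept."""
    n, q = X.shape
    cross = [X[:, i] * X[:, j] for i, j in combinations(range(q), 2)]
    return np.column_stack([X] + cross)

# Illustrative 3-component data generated from a known Scheffé polynomial
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(3), size=20)              # 20 random mixtures summing to 1
true = np.array([10.0, 5.0, 2.0, 8.0, -4.0, 3.0])   # b1, b2, b3, b12, b13, b23
y = scheffe_design_matrix(w) @ true

beta, *_ = np.linalg.lstsq(scheffe_design_matrix(w), y, rcond=None)
print(np.round(beta, 6))
```

As the text notes, the linear coefficients b1, b2, b3 are the fitted responses at the pure-component vertices, and the cross-term coefficients measure curvature along each edge of the simplex.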
Whole Model Test and Anova Report In the whole-model Anova table, JMP traditionally tests that all the parameters are zero except for the intercept. In a mixture model without an intercept JMP looks for a hidden intercept, in the sense that a linear combination of effects is a constant. If it finds a hidden intercept, it does the whole model test with respect to the intercept model rather than a zero-intercept model. This test is equivalent to testing that all the parameters are zero except the linear parameters, and testing that they are equal. The hidden-intercept property also causes the R-square to be reported with respect to the intercept model, rather than reported as missing.
Response Surface Reports When JMP encounters effects that are marked as response surface effects “&RS,” it creates additional reports that analyze the resulting fitted response surface. These reports were originally designed for full response surfaces, not mixture models. However, if JMP encounters a no-intercept model and finds a hidden intercept with linear response surface terms, but no square terms, then it folds its calculations, collapsing on the last response surface term to calculate critical values for the optimum. It can do this for any combination yielding a constant and involving the last response surface term.
The contour-plot feature of these reports does not fold to handle mixtures. If you want a contour plot of the surface, you can do any of the following:
• Save the model prediction formula and use the Ternary Plot platform in the Graph menu.
• Refit using a full response surface that omits the last factor.
• Use the Contour Plot platform in the Graph menu, and add points to make the plot less granular.

Chemical Mixture Example

Three plasticizers (p1, p2, and p3) comprise 79.5% of the vinyl used for automobile seat covers (Cornell, 1990). Within this 79.5%, the individual plasticizers are restricted by the following constraints: 0.409 ≤ x1 ≤ 0.849, 0 ≤ x2 ≤ 0.252, and 0.151 ≤ x3 ≤ 0.274. To create Cornell’s mixture design in JMP:
• Select Mixture Design from the DOE menu or the JMP Starter DOE tab page.
• In the Factors panel, request 3 factors. Name them p1, p2, and p3, and enter the high and low constraints as shown in Figure 10.14. Or, open the Plastifactors.jmp table in the Design Experiments sample data folder and use the Load Factors command in the menu on the Mixture Design title bar.
• Click Continue, then specify a degree of three in the Mixture Design Type dialog for an Extreme Vertices design.
• When you click Extreme Vertices, then Make Table, JMP uses the 9 factor settings to generate a JMP table.
Note: For this problem, the experimenter added an extra 5 design runs by duplicating the vertex points and center point shown highlighted in the table in Figure 10.14, giving a total of 14 rows in the design table. After the experiment is complete, the results of the experiment (thickness) are entered in the Y column. Use the Plasticizer.jmp table found in the sample data folder to see the experimental results (Y values).
Figure 10.14 Mixture Data for Analysis
To run the mixture model, do one of the following:
• Use the Table Property called Mixture RSM, which runs a script that creates the completed Model Specification dialog and runs the model.
• Choose Fit Model from the Analyze menu, select p1, p2, and p3 as mixture response surface effects, and Y as the Y variable. Then click Run Model.
When the model has run, choose Save Prediction Formula from the Save commands in the platform popup menu. The predicted values show as a new column in the data table. To see the prediction formula, open the formula for that column:
–50.1465*p1 – 282.1982*p2 – 911.6484*p3 + p1*p2*317.363 + p3*p1*1464.3298 + p3*p2*1846.2177
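Read as a Scheffé polynomial with the first cross term being p1*p2, the saved formula is an ordinary arithmetic expression. Evaluating it at the Profiler settings reported in the next section gives a value near, though not exactly at, the displayed 19.26923, because the printed settings and coefficients are rounded:

```python
def predicted_y(p1, p2, p3):
    """The saved prediction formula, with coefficients as printed in the text
    and the first cross term read as p1*p2."""
    return (-50.1465 * p1 - 282.1982 * p2 - 911.6484 * p3
            + 317.363 * p1 * p2 + 1464.3298 * p1 * p3 + 1846.2177 * p2 * p3)

# Profiler optimum reported later in the chapter: p1 = 0.6615, p2 = 0.126, p3 = 0.21225
print(round(predicted_y(0.6615, 0.126, 0.21225), 2))
```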
Note: These results correct the coefficients reported in Cornell (1990). When you fit the response surface model, the Response Surface Solution report (Figure 10.15) shows that a maximum predicted value of 19.570299 occurs at point (0.63505, 0.015568, 0.20927). You can visualize the results of a mixture design with the Profiler in the Fit Model platform, and a Ternary plot, as described in the next section. Figure 10.15 Mixture Response Surface Analysis
Plotting a Mixture Response Surface The Fit Model platform automatically displays a Prediction Profiler when the analysis emphasis is effect screening. If the Profiler is not visible, you can select the Profiler command from the Factor Profiling menu on the Response title bar to display it.
The Profiler in Figure 10.16, for the chemical mixture example, shows optimal settings of 0.6615 for p1, 0.126 for p2, and 0.21225 for p3, which give an estimated response of 19.26923. The crossed effects show as curvature in the prediction traces. When you drag one of the vertical reference lines, the other two move in the opposite direction, maintaining their ratio.
Figure 10.16 Profiler for Mixture Analysis Example

To plot a mixture response surface, choose Ternary from the Graph menu (or toolbar), or click Ternary on the Graph tab page of the JMP Starter. Specify plot variables in the Launch dialog as shown in Figure 10.17. Optionally, you can identify a contour variable if there is one. The contour variable must have a prediction formula to form the contour lines, as shown by the Ternary Plots at the bottom of Figure 10.17. The Ternary platform only shows points if there is no prediction formula. The prediction equation is often the result of using the Save Prediction Formula command after fitting the response surface mixture.
Figure 10.17 Ternary Plot of a Mixture Response Surface
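A ternary plot places a three-part mixture in the plane by expressing it as a weighted average of the triangle's vertices. A minimal coordinate transform is sketched below; the vertex placement is a common convention, not necessarily the one JMP uses internally:

```python
import math

def ternary_xy(a, b, c):
    """Map a 3-part mixture (a + b + c == 1) to 2-D plot coordinates with
    vertices X1 = (0, 0), X2 = (1, 0), X3 = (0.5, sqrt(3)/2)."""
    assert abs(a + b + c - 1) < 1e-9, "proportions must sum to 1"
    return (b + 0.5 * c, (math.sqrt(3) / 2) * c)

print(ternary_xy(1, 0, 0))              # the X1 vertex
print(ternary_xy(1 / 3, 1 / 3, 1 / 3))  # the centroid, at the triangle's center
```

The closer a point is to a vertex, the closer that component's proportion is to 1, which is exactly the reading rule described for Figure 10.12.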
Augmented Designs
It is best to treat experimentation as an iterative process. That way you can resist the temptation to assume that one successful screening experiment has optimized your process. You can also avoid disappointment if a screening experiment leaves behind some ambiguities. The Augment designer supports the following five ways to extend previous experimental work:
• Add Centerpoints Adding center points is useful to check for curvature and to reduce the prediction error in the center of the factor region.
• Replication Replication provides a direct check on the assumption that the error variance is constant. It also reduces the variability of the regression coefficients in the presence of large process or measurement variability.
• Foldover Design A foldover design removes the confounding of two-factor interactions and main effects. This is especially useful as a follow-up to saturated or near-saturated fractional factorial or Plackett-Burman designs.
• D-optimal Augmentation D-optimal augmentation is a powerful tool for sequential design. Using this feature, you can add terms to the original model and find optimal new test runs with respect to the expanded model. You can also group the two sets of experimental runs into separate blocks, which optimally blocks the second set with respect to the first.
• Enhance a Screening Design Add axial points together with center points to transform a screening design into a response surface design.
This chapter provides an overview of the interface of the Augment designer. It also presents a case study of design augmentation using the reactor example from the chapter “Screening Designs,” p. 69.
11 Contents The Augment Design Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Replicate Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Add Centerpoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fold Over . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Add Axial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Reactor Example Revisited—D-Optimal Augmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Augmented Design and its Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Analyze the Augmented Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Augment Design Interface
The augment design feature of JMP DOE gives you the ability to modify an existing design data table. If you do not have an open JMP table when you select Augment Design from the DOE menu or from the DOE tab on the JMP Starter, the File Open dialog for your computer appears, as in Figure 11.1. Select a data set that you want to augment. For this example, use the Reactor 8 Runs.jmp data in the Design Experiment sample data folder. This table was generated previously in the chapter “Screening Designs,” p. 69.

Figure 11.1 File Open Dialog to Open a Design Data Table
After the file opens, the dialogs in Figure 11.2 prompt you to identify the factors and responses you want to use for the augmented design. Select the columns that are model factors and click OK. Then select the column or columns that are responses. When you click OK again, the dialog below appears with the list of factors and factor values that were saved with the design data table. Buttons on the dialog give five choices for augmenting a design:
• Replicate
• Add Centerpoints
• Fold Over
• Add Axial
• Augment
Note: If you want the original runs and the resulting augmented runs to be identified by a block factor, first click Yes for Group New Runs into Separate Blocks? on the Augment Design dialog. The next sections describe how to use these augmentation choices.
Figure 11.2 Choose Columns for Factors and Responses
Replicate Design
The Replicate button displays the dialog shown here. Enter the number of times to perform each run. Enter two (2) in the dialog text box to specify that you want each run to appear twice in the resulting design. This is the same as one replicate. Figure 11.3 shows the Reactor data with one replicate.

Figure 11.3 Reactor Data Design Augmented With One Replicate
Add Centerpoints
When you click Add Centerpoints, a dialog appears for you to enter the number of center points you want. The table shown in Figure 11.4 is the design table for the reactor data with two center points appended to the end of the table.

Figure 11.4 Reactor Data Design Augmented With Two Center Points
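In coded units, a center point sets every factor to the midpoint of its range. The mechanics can be sketched as follows (a minimal Python illustration, not JMP's implementation; the 2³ design shown is a stand-in for any two-level design table):

```python
# Sketch (Python, not JMP): appending center points to a two-level design
# in coded units, where a center point sets every factor to 0.
runs = [[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
        [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1]]   # a 2^3 design

def add_centerpoints(design, n_center):
    """Append n_center all-zero (midpoint) runs to the design."""
    k = len(design[0])
    return design + [[0] * k for _ in range(n_center)]

augmented = add_centerpoints(runs, 2)   # two center points, as in Figure 11.4
print(len(augmented))                   # 10 runs
```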
Fold Over
When you select Fold Over, the dialog on the left in Figure 11.5 lets you choose which factors to fold. This example folds on all five factors. The default, if you choose no factors, is also to fold on all design factors. This example also includes a Block factor. When you click Make Table, the JMP table that results lists the original set of runs as block 1 and the new (foldover) runs as block 2. If you choose a subset of factors to fold over, the remaining factors are replicates of the original runs.
Note: Adding center points or replicating the design also generates an additional Block column in the JMP table.
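The de-aliasing effect of a foldover can be demonstrated in a few lines. This is a hedged sketch in Python (not JMP's implementation), using a small 4-run half fraction where the main effect of C is deliberately confounded with the A*B interaction:

```python
# Sketch (Python, not JMP): folding over a resolution III design.
# In the half fraction below, column C equals A*B, so the main effect of C
# is aliased with the A*B interaction; appending the sign-reversed runs
# breaks that alias in the combined 8-run design.
half_fraction = [[-1, -1,  1],
                 [ 1, -1, -1],
                 [-1,  1, -1],
                 [ 1,  1,  1]]          # columns A, B, C with C = A*B

def fold_over(design, factors=None):
    """Append runs with the chosen factor columns negated (default: all)."""
    factors = range(len(design[0])) if factors is None else factors
    folded = [[-x if j in factors else x for j, x in enumerate(run)]
              for run in design]
    return design + folded

combined = fold_over(half_fraction)
# In the combined design, C is orthogonal to A*B (inner product is zero):
print(sum(run[2] * run[0] * run[1] for run in combined))   # 0
```

In the original 4 runs the inner product of C with A*B is 4 (complete confounding); after the foldover it is 0, which is why a foldover is a natural follow-up to a saturated screening design.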
Figure 11.5 Listing of a Foldover Design On All Factors for the Reactor Data
Add Axial
When you click Add Axial, a dialog appears for you to enter the axial value, in units of the factors scaled from –1 to +1, and the number of center points you want. When you click OK, the augmented design includes the number of center points specified and constructs two axial points for each variable in the original design. Figure 11.6 shows the Reactor 8 Runs.jmp table augmented with two center points and two axial points for each of the five variables.

Figure 11.6 Reactor Data Design Augmented With Two Center and Ten Axial Points
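For k factors, this adds 2·k axial runs (one low, one high per factor) plus the requested center points. A minimal Python sketch of that construction (not JMP's implementation; the one-row base design is just a placeholder for the real 8-run reactor table):

```python
# Sketch (Python, not JMP): augmenting a design with axial and center
# points, as the Add Axial choice does.
def add_axial(design, axial_value, n_center):
    k = len(design[0])
    axial = []
    for j in range(k):                   # two axial runs per factor:
        for sign in (-1, 1):             # one at -axial_value, one at +axial_value
            run = [0] * k
            run[j] = sign * axial_value
            axial.append(run)
    centers = [[0] * k for _ in range(n_center)]
    return design + axial + centers

base = [[1, 1, 1, 1, 1]]                 # placeholder for the 8-run, 5-factor design
augmented = add_axial(base, 1.0, 2)
print(len(augmented) - len(base))        # 12 new runs: 10 axial + 2 center
```

With five factors and two center points, this reproduces the count in Figure 11.6: ten axial points plus two center points.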
The Reactor Example Revisited—D-Optimal Augmentation
The factors in the previous section were from the reactor example in the chapter “Screening Designs,” p. 69. This section returns to that example, which had ambiguous results. To begin, open the Reactor 8 Runs.jmp table from the Design Experiment sample data folder (if it is not already open). Then select Augment Design from the DOE menu. After you identify the factors and response and click OK, the Augment Design dialog shown in Figure 11.7 appears.
Note: You can check Group New Runs into Separate Blocks? to add a blocking factor to any design. However, the purpose of this example is to estimate all two-factor interactions in 16 runs, which can’t be done when there is an additional blocking factor in the model.

Figure 11.7 Augment Design Dialog for the Reactor Example
Now click Augment on the Augment Design dialog to see the display in Figure 11.8. This model is the one stored with the data table when it was created by the Custom designer. However, the augmented design is to have 16 runs in order to estimate all two-factor interactions.

Figure 11.8 Initial Augmented Model
To continue with the augmented reactor design, choose 2nd from the Interactions popup menu, as shown on the left in Figure 11.9. This adds all the two-factor interactions to the model. The minimum number of runs given for the specified model is 16, as shown in the Design Generation text edit box. You could change this number by clicking in the box and typing a new number.

Figure 11.9 Augmented Model with All Two-Factor Interactions
When you click Make Design, the DOE facility computes D-optimally augmented factor settings, similar to the design shown in Figure 11.10.

Figure 11.10 D-Optimally Augmented Factor Settings
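The idea behind D-optimal augmentation is to choose new runs that maximize the determinant of X′X for the expanded model. The sketch below illustrates that criterion in Python with a deliberately tiny example (two factors, an interaction term, and a greedy one-run-at-a-time search); JMP's actual algorithm, such as coordinate exchange, is more sophisticated than this:

```python
# Sketch (Python, not JMP's D-optimal machinery): greedily add the candidate
# run that maximizes det(X'X) for an expanded model -- the criterion that
# D-optimal augmentation optimizes.
from itertools import product

def model_row(a, b):
    return [1.0, a, b, a * b]        # intercept, A, B, and the A*B interaction

def det(m):
    """Determinant by Gaussian elimination (enough for these small matrices)."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))   # partial pivoting
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def xtx_det(rows):
    n = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    return det(xtx)

existing = [model_row(a, b) for a, b in [(-1, -1), (1, -1), (-1, 1), (1, 1)]]
candidates = [model_row(a, b) for a, b in product((-1.0, 0.0, 1.0), repeat=2)]

design = existing[:]
for _ in range(4):                   # add 4 D-optimally chosen runs
    design.append(max(candidates, key=lambda c: xtx_det(design + [c])))
print(xtx_det(design) > xtx_det(existing))   # True: information has increased
```

Each added run increases det(X′X), so the expanded model's coefficients can be estimated more precisely; this is the same logic the Augment designer applies to the 8-run reactor design to reach 16 runs.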
Note: The resulting design is a function of an initial random number seed. To reproduce the exact factor settings table in Figure 11.10 (or the most recent design you generated), choose Set Random Seed from the popup menu on the Augment Design title bar. A dialog shows the most recently used random number. Click OK to use that number again, or Cancel to generate a design based on a new random number. The dialog in Figure 11.11 shows the random number (12834729) used to generate the runs in Figure 11.10.

Figure 11.11 Specifying a Random Number

The Augmented Design and its Model
Figure 11.12 shows the Reactor Augment Data.jmp sample data file in the Design Experiment folder. The runs in this table correspond to the runs in the reactor example from the chapter “Full Factorial Designs,” p. 115, and are similar to the runs generated in this example. The example analysis in the next section uses this data table.

Figure 11.12 Completed Augmented Experiment (Reactor Augment Data.jmp)

Analyze the Augmented Design
To start the analysis, run the Model script stored as a table property with the data table. This table property contains the JSL commands that display the Fit Model dialog with all main effects and two-factor interactions as effects. To continue, change the fitting personality from Standard Least Squares to Stepwise, as shown in Figure 11.13.
Figure 11.13 Fit Model Dialog for Stepwise Regression on Generated Model
When you click Run Model, the stepwise regression control panel appears. Click the check boxes for all the main effect terms. Then choose Restrict from the Rules menu and make sure Prob to Enter is 0.050 and Prob to Leave is 0.100. You should see the dialog shown in Figure 11.14.

Figure 11.14 Initial Stepwise Model
Click Go to start the stepwise regression and watch it continue until all terms that meet the Prob to Enter and Prob to Leave criteria in the Stepwise Regression Control panel are entered into the model. Figure 11.15 shows the result of this example analysis. Note that Feed Rate is out of the model, while the Catalyst*Temperature, Stir Rate*Temperature, and Temperature*Concentration interactions have entered the model.

Figure 11.15 Completed Stepwise Model

After Stepwise is finished, click Make Model on the Stepwise control panel to generate the reduced model shown in Figure 11.16. You can now fit the reduced model to do additional diagnostic work, make predictions, and find the optimal factor settings.

Figure 11.16 New Prediction Model Dialog

The ANOVA and Lack of Fit tests in Figure 11.17 indicate a highly significant regression model with no evidence of lack of fit.
Figure 11.17 Prediction Model Analysis of Variance and Lack of Fit Tests
The Scaled Estimates table in Figure 11.18 shows that Catalyst has the largest main effect. However, the significant two-factor interactions are of the same order of magnitude as the main effects. This is the reason that the initial screening experiment, shown in the chapter “Screening Designs,” p. 69, had ambiguous results.

Figure 11.18 Prediction Model Estimates Plot
It is desirable to maximize the percent reaction. The prediction profile plot in Figure 11.19 shows that the maximum occurs at the high levels of Catalyst, Stir Rate, and Temperature and the low level of Concentration. When you drag the prediction traces for each factor to their extreme settings, the estimate of Percent Reacted increases from 65.375 to 95.6625.

Figure 11.19 Maximum Percent Reacted

To summarize, compare this analysis of 16 runs with the analyses of reactor data from previous chapters:
• In “Screening Designs,” p. 69, the analysis of a screening design with only 8 runs produced a model with the five main effects and two interaction effects with confounding. None of the factor effects were significant, although the Catalyst effect was large enough to encourage collecting data for further runs.
• In “Full Factorial Designs,” p. 115, a full factorial of the five two-level reactor factors (32 runs) was first subjected to a stepwise regression. This approach identified three main effects (Catalyst, Temperature, and Concentration) and two interactions (Temperature*Catalyst, Concentration*Temperature) as significant effects.
• By using a D-optimal augmentation of 8 runs to produce 8 additional runs, a stepwise analysis returned the same results as the analysis of 32 runs.
The bottom line is that only half as many runs yielded the same information. Thus, using an iterative approach to DOE can save time and money.
Prospective Power and Sample Size
Prospective analysis helps answer the question, “Will I detect the group differences I am looking for, given my proposed sample size, estimate of within-group variance, and alpha level?” In a prospective power analysis, you supply estimates of the group means and sample sizes in a data table, along with an estimate of the within-group standard deviation (σ), in the Power and Sample Size dialog. The Sample Size, Power command in the DOE menu determines how large a sample is needed to make it reasonably likely that an experiment or sample will yield a significant result, given that the true effect size is at least a certain size. The Sample Size and Power platform requires that you enter any two of three quantities (difference to detect, sample size, and power) and computes the third for the following cases:
• difference between one sample’s mean and a hypothesized value
• difference between two sample means
• differences in the means among k samples
• difference between a variance and a hypothesized value
• difference between one sample proportion and a hypothesized value
• difference between two sample proportions
• difference between counts per unit in a Poisson-distributed sample and a hypothesized value.
The Power and Sample Size calculations assume that there are equal numbers of units in each group. You can apply this platform to more general experimental designs if they are balanced and a number-of-parameters adjustment is specified.
12 Contents Prospective Power Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . One-Sample and Two-Sample Means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Single-Sample Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Power and Sample Size Animation for a Single Sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Two-Sample Means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . k-Sample Means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . One-Sample Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . One-Sample and Two-Sample Proportions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Counts per Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sigma Quality Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Prospective Power Analysis
The following five values have an important relationship in a statistical test on means:
Alpha is the significance level that prevents declaring a zero effect significant more than alpha portion of the time.
Error Standard Deviation is the unexplained random variation around the means.
Sample Size is how many experimental units (runs, or samples) are involved in the experiment.
Power is the probability of declaring a significant result.
Effect Size is how different the means are from each other or from the hypothesized value.
The Sample Size and Power platform in JMP helps estimate in advance either the sample size needed, the power expected, or the effect size expected in the experimental situations where there is a single mean comparison, a two-sample comparison, or a comparison of k sample means. The Sample Size, Power command is on the DOE main menu (or toolbar), or on the DOE tab page of the JMP Starter. When you launch this platform, the panel shown in Figure 12.1 appears with a button selection for three experimental situations. Each of these selections then displays its own dialog that prompts for estimated parameter values and the desired computation.

Figure 12.1 Sample Size and Power Choices
One-Sample and Two-Sample Means
After you click either One Sample Mean or Two Sample Means in the initial Sample Size selection list (Figure 12.1), the Power and Sample Size dialog in Figure 12.2 appears and asks for the anticipated experimental values. The values you enter depend on your initial choice. As an example, consider the two-sample situation.

Figure 12.2 Initial Power and Sample Size Dialogs for Single Mean (left) and Two Means (right)
The Two Sample Means choice in the initial Power and Sample Size dialog always requires values for Alpha and the error standard deviation (Error Std Dev), as shown here, and one or two of the other three values: Difference to detect, Sample Size, and Power. The platform then calculates the missing item. If there are two unspecified fields, the platform constructs a plot that shows the relationship between those two values:
• power as a function of sample size, given a specific effect size
• power as a function of effect size, given a sample size
• effect size as a function of sample size, for a given power.
The Power and Sample Size dialog asks for values depending on the first choice of design:
Alpha is the significance level, usually 0.05. This implies willingness to accept (if the true difference between groups is zero) that 5% (alpha) of the time a significant difference will be incorrectly declared.
Error Std Deviation is the true residual error. Even though the true error is not known, the power calculations are an exercise in probability that calculates what might happen if the true values were as specified.
Extra Params is only for multi-factor designs. Leave this field zero in simple cases. In a multi-factor balanced design, in addition to fitting the means described in the situation, there are other factors with extra parameters that can be specified here. For example, in a three-factor two-level design with all three two-factor interactions, the number of extra parameters is five: two parameters for the extra main effects, and three parameters for the interactions. In practice, it isn’t very important what values you enter here unless the experiment is in a range where there are very few degrees of freedom for error.
Difference to Detect is the smallest detectable difference (how small a difference you want to be able to declare statistically significant). For single-sample problems, this is the difference between the hypothesized value and the true value.
Sample Size is the total number of observations (runs, experimental units, or samples). Sample size is not the number per group, but the total over all groups. Computed sample size numbers can have fractional values, which you need to adjust to real units. This is usually done by increasing the estimated sample size to the smallest number evenly divisible by the number of groups.
Power is the probability of getting a statistic that will be declared statistically significant. Bigger power is better, but the cost is a larger sample size. Power is equal to alpha when the specified effect size is zero. Aim for a power of at least 0.90 or 0.95 if you can afford it. If an experiment requires considerable effort, plan so that the experimental design has the power to detect a sizable effect, when there is one.
Continue evaluates at the entered values.
Back returns to the previous dialog.
Animation Script runs a JSL script that displays an interactive plot showing power or sample size. See the upcoming section, “Power and Sample Size Animation for a Single Sample,” p. 169, for an illustration of this animation script.

Single-Sample Mean
Suppose there is a single sample and the goal is to detect a difference of 2 where the error standard deviation is 0.9, as shown in the left-hand dialog in Figure 12.3. To calculate the power when the sample size is 10, leave Power missing in the dialog and click Continue. The dialog on the right in Figure 12.3 shows the power is calculated to be 0.99998, which rounds to 1.
Figure 12.3 A One-Sample Example
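A rough normal-approximation check of this calculation can be written in a few lines of Python (a sketch, not JMP's method; JMP uses the noncentral t distribution, so the approximation runs slightly high, but at these settings both round to 1):

```python
# Sketch (Python, stdlib only): normal-approximation power for a two-sided
# one-sample test of a mean. JMP's exact noncentral-t value above is 0.99998.
from statistics import NormalDist

def one_sample_power(diff, sd, n, alpha=0.05):
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)      # two-sided critical value
    ncp = diff / (sd / n ** 0.5)           # standardized detectable difference
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

print(one_sample_power(diff=2, sd=0.9, n=10))   # essentially 1
```

Note that with a zero difference to detect, this formula returns exactly alpha, matching the statement above that power equals alpha when the specified effect size is zero.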
To see a plot of the relationship of power and sample size, leave both Sample Size and Power missing and click Continue. Double-click the horizontal axis to set any desired scale. The left-hand graph in Figure 12.4 shows a range of sample sizes for which the power varies from about 0.2 to 0.95. Change the range of the curve by changing the range of the horizontal axis. For example, the plot on the right in Figure 12.4 has the horizontal axis scaled from 1 to 8, which gives a more typical-looking power curve.

Figure 12.4 A One-Sample Example Plot
When only Sample Size is specified (Figure 12.5) and Difference to Detect and Power are left blank, a plot of power by difference appears.

Figure 12.5 Plot of Power by Difference to Detect for a Given Sample Size

Power and Sample Size Animation for a Single Sample
The Animation Script button on the Power and Sample Size dialog for the single mean displays an interactive plot that illustrates the effect that changing the sample size has on power. In the example shown in Figure 12.6, Sample Size is 10, Alpha is 0.05, and the Difference to Detect is set to 0.4. The animation begins by showing a normal curve positioned with mean at zero (representing the estimated mean and the true mean), and another with mean at 0.4 (the difference to be detected). The probability of committing a Type II error (not detecting a difference when there is a difference), often represented as β in the literature, is shaded in blue on this plot. You can drag the handles over the curves to show how their positions affect power. You can also click the values for sample size and alpha beneath the plot to change them.

Figure 12.6 Example of Animation Script to Illustrate Power
Two-Sample Means
The dialogs work similarly for two samples; the Difference to Detect is the difference between the two means. Suppose the error standard deviation is 0.9 (as before), the desired detectable difference is 1, and the sample size is 16. Leave Power blank and click Continue to see the power calculation, 0.5433, as shown in the dialog on the left in Figure 12.7. This is considerably lower than in the single-sample case because each mean has only half the sample size, and the comparison is between two random samples instead of one. To increase the power requires a larger sample. To find out how large, leave both Sample Size and Power blank and examine the resulting plot, shown on the right in Figure 12.7. The crosshair tool estimates that a sample size of about 35 is needed to obtain a power of 0.9.

Figure 12.7 Plot of Power by Sample Size to Detect for a Given Difference
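The same normal-approximation sketch extends to the two-sample case (again a Python sketch, not JMP's noncentral-t computation, so it overstates the power a little at small samples):

```python
# Sketch (Python, stdlib only): normal-approximation power for a two-sided
# two-sample test with equal group sizes; n_total is the total sample size.
from statistics import NormalDist

def two_sample_power(diff, sd, n_total, alpha=0.05):
    z = NormalDist()
    n_per_group = n_total / 2
    se = sd * (2 / n_per_group) ** 0.5     # std error of the mean difference
    z_crit = z.inv_cdf(1 - alpha / 2)
    return z.cdf(diff / se - z_crit) + z.cdf(-diff / se - z_crit)

print(two_sample_power(diff=1, sd=0.9, n_total=16))  # about 0.60 (JMP: 0.5433)
print(two_sample_power(diff=1, sd=0.9, n_total=35))  # just over 0.9
```

The approximate curve shows the same pattern as Figure 12.7: power near 0.5 to 0.6 at 16 runs, reaching 0.9 at a total sample size of about 35.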
k-Sample Means
The k-Sample Means situation can examine up to 10 means. The next example considers a situation where 4 means are expected to range from about 10 to 13, and the Error Std Dev is 0.9. When a sample size of 16 is entered, the power calculation is 0.95, as shown in the dialog on the left in Figure 12.8. As before, if both Sample Size and Power are left blank, the power and sample size calculations produce the power curve shown on the right in Figure 12.8. This confirms that a sample size of 16 looks acceptable. Notice that the difference in means is 2.236, calculated as the square root of the sum of squared deviations from the grand mean. In this case it is the square root of (–1.5)² + (–0.5)² + (0.5)² + (1.5)², which is the square root of 5.
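That effect-size computation can be sketched directly (Python, for illustration only):

```python
# Sketch (Python): the k-sample effect size reported by JMP is the square
# root of the sum of squared deviations of the group means from the grand mean.
def k_sample_effect_size(means):
    grand = sum(means) / len(means)
    return sum((m - grand) ** 2 for m in means) ** 0.5

print(k_sample_effect_size([10, 11, 12, 13]))   # sqrt(5), about 2.236
```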
One-Sample Variance
The One-Sample Variance choice on the Power and Sample Size dialog (Figure 12.1) determines the sample size for detecting a change in variance. The usual purpose of this option is to compute a sample size large enough to guarantee that the risk of accepting a false hypothesis (β) is small. In the dialog, specify a baseline variance, an alpha level, and the direction of change you want to detect. To indicate the direction of change, select either Larger or Smaller from the Guarding a change menu. The computations then address whether the true variance is larger or smaller than its hypothesized value, entered as the Baseline Variance. An example is when the variance for resistivity measurements on a lot of silicon wafers is claimed to be 100 ohm-cm and a buyer is unwilling to accept a shipment if the variance is greater than 155 ohm-cm for a particular lot. The examples throughout the rest of this chapter use engineering examples from the online manual of the National Institute of Standards and Technology (NIST). You can access the NIST manual examples at http://www.itl.nist.gov/div898/handbook. As with previous dialogs, you enter two of the items and the Power and Sample Size calculations determine the third. Suppose you want to detect an increase of 55 for a baseline variance of 100, with an alpha of 0.05 and power of 0.99. Enter these items as shown on the left in Figure 12.9. When you click Continue, the computed result shows that you need a sample size of 170.
Figure 12.8 Prospective Power for k-Means and Plot of Power by Sample Size
If you want to detect a change to a smaller variance, enter a negative amount in the Difference to Detect box.
Note: Remember to enter the variance in the Baseline Variance box, not the standard deviation.

Figure 12.9 Sample Size and Power Dialog To Compare Single-Direction One-Sample Variance
One-Sample and Two-Sample Proportions
The dialogs and computations to test power and sample sizes for proportions are similar to those for testing sample means. The dialogs are the same except that you enter a Baseline Proportion and specify either a one-sided or a two-sided test. The sampling distribution for proportions is actually binomial, but the computations to determine sample size and test proportions use a normal approximation, as indicated on the dialogs (Figure 12.10).

Figure 12.10 Initial Power and Sample Dialogs for One-Sample and Two-Sample Proportions
Testing proportions is useful in production lines, where the proportion of defects is part of process control monitoring. For example, suppose a line manager wants to detect a change in defective units that is 10% above a baseline. The current production line is running at a baseline of approximately 10% defective. The manager does not want to stop the process unless it has degenerated to greater than 20% defects (10% above the 10% baseline defective). The process is monitored with a one-sided test at 5% alpha and a 10% risk (90% power) of failing to detect a change of that magnitude. Figure 12.11 shows the entries in the Sample Size and Power dialog to detect a given difference between an observed proportion and a baseline proportion, and the computed sample size of approximately 77. To see the plot on the right in Figure 12.11, leave both Difference to Detect and Sample Size blank. Use the grabber tool (hand) to move the x-axis and show a specific range of differences and sample sizes.

Figure 12.11 Dialog To Compare One Proportion to a Baseline and Sample Size Plot

Counts per Unit
The Counts per Unit selection calculates sample size for the Poisson-distributed counts typical when you can measure more than one defect per unit. A unit can be an area, and the counts can be fractions or large numbers. Although the number of defects observed in an area of a given size is often assumed to have a Poisson distribution, the area and count are assumed to be large enough to support a normal approximation. Questions of interest are:
• Is the defect density within prescribed limits?
• Is the defect density greater than or less than a prescribed limit?
The Sample Size and Power dialog is similar to those shown previously. You enter alpha and the baseline count per unit, then enter two of the remaining fields to see the calculation of the third. The test is for a one-sided (one-tailed) change. Enter the Difference to Detect in terms of the baseline count per unit (defects per unit). The computed sample size is expressed in those units.
As an example, consider a wafer manufacturing process with a target of 4 defects per wafer, where you want to verify that a new process meets that target. Choose an alpha of 0.1 to be the chance of failing the test if the new process is as good as the target. Choose a power of 0.9, which is the chance of detecting a change larger than 2 (6 defects per wafer). In this kind of situation, alpha is sometimes called the producer’s risk and beta is called the consumer’s risk. Enter these values into the dialog as shown in Figure 12.12, and click Continue to see the computed sample size of 8.128. In other words, the process meets the target if there are fewer than 48 defects (6 defects per wafer in a sample of 8 wafers).

Figure 12.12 Dialog For Counts Per Unit Example
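The standard normal-approximation sample-size formula for a one-sided Poisson counts-per-unit test reproduces this 8.128 closely. A Python sketch (illustrative only; JMP may compute the value slightly differently):

```python
# Sketch (Python, stdlib only): normal-approximation sample size for a
# one-sided test of a Poisson count per unit, lambda0 vs lambda1 > lambda0.
from statistics import NormalDist

def counts_per_unit_n(lambda0, lambda1, alpha, power):
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha), z.inv_cdf(power)
    root = (z_a * lambda0 ** 0.5 + z_b * lambda1 ** 0.5) / (lambda1 - lambda0)
    return root ** 2

# Wafer example: target 4 defects/wafer, detect 6, alpha 0.1, power 0.9:
print(counts_per_unit_n(lambda0=4, lambda1=6, alpha=0.1, power=0.9))  # ~8.13
```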
Sigma Quality Level
The Sigma Quality Level button displays the dialog shown in Figure 12.13. You enter any two of the three quantities:
• number of defects
• number of opportunities
• sigma quality level
When you click Continue, the sigma quality calculator computes the missing quantity using the formula
Sigma Quality Level = NormalQuantile(1 – defects/opportunities) + 1.5
The dialogs at the top in Figure 12.13 give approximately 5.39 as the computed Sigma Quality Level for 50 defects in 1,000,000 opportunities. If you want to know how many defects reduce the Sigma Quality Level to “six-sigma” for 1,000,000 opportunities, enter 6 as the Sigma Quality Level and leave the Number of Defects blank. The computation shows that the Number of Defects cannot be more than approximately 3.4.
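Both directions of this calculation follow directly from the formula quoted above, including the conventional 1.5-sigma shift. A minimal Python sketch (not JMP's calculator itself):

```python
# Sketch (Python, stdlib only): the Sigma Quality Level formula and its
# inverse, using the conventional 1.5-sigma shift.
from statistics import NormalDist

def sigma_quality_level(defects, opportunities):
    return NormalDist().inv_cdf(1 - defects / opportunities) + 1.5

def defects_for_level(level, opportunities):
    return opportunities * (1 - NormalDist().cdf(level - 1.5))

print(round(sigma_quality_level(50, 1_000_000), 2))   # about 5.39
print(round(defects_for_level(6, 1_000_000), 1))      # about 3.4
```

The inverse computation recovers the well-known "3.4 defects per million opportunities" figure for a six-sigma process.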
Figure 12.13 Dialog for Sigma Quality Level Example
Note: Six Sigma® is the term trademarked by Motorola to represent its quality improvement program.
References
Atkinson, A.C. and Donev, A.N. (1992), Optimum Experimental Designs, Oxford: Clarendon Press, p. 148.
Bose, R.C. (1947), “Mathematical Theory of the Symmetrical Factorial Design,” Sankhya: The Indian Journal of Statistics, Vol. 8, Part 2, 107–166.
Box, G.E.P. and Wilson, K.B. (1951), “On the Experimental Attainment of Optimum Conditions,” Journal of the Royal Statistical Society, Series B, 13, 1–45.
Box, G.E.P. and Behnken, D.W. (1960), “Some New Three-Level Designs for the Study of Quantitative Variables,” Technometrics, 2, 455–475.
Box, G.E.P. and Meyer, R.D. (1986), “An Analysis of Unreplicated Fractional Factorials,” Technometrics, 28, 11–18.
Box, G.E.P. and Draper, N.R. (1987), Empirical Model-Building and Response Surfaces, New York: John Wiley and Sons.
Box, G.E.P. (1988), “Signal-to-Noise Ratio, Performance Criteria, and Transformations,” Technometrics, 30, 1–40.
Box, G.E.P., Hunter, W.G., and Hunter, J.S. (1978), Statistics for Experimenters, New York: John Wiley and Sons, Inc.
Byrne, D.M. and Taguchi, G. (1986), ASQC 40th Anniversary Quality Control Congress Transactions, Milwaukee, WI: American Society of Quality Control, 168–177.
Chen, J., Sun, D.X., and Wu, C.F.J. (1993), “A Catalogue of Two-Level and Three-Level Fractional Factorial Designs with Small Runs,” International Statistical Review, 61, 1, 131–145.
Cochran, W.G. and Cox, G.M. (1957), Experimental Designs, Second Edition, New York: John Wiley and Sons.
Cornell, J.A. (1990), Experiments with Mixtures, Second Edition, New York: John Wiley and Sons.
Cook, R.D. and Nachtsheim, C.J. (1990), “Letter to the Editor: Response to James M. Lucas,” Technometrics, 32, 363–364.
Daniel, C. (1959), “Use of Half-Normal Plots in Interpreting Factorial Two-Level Experiments,” Technometrics, 1, 311–314.
Daniel, C. and Wood, F. (1980), Fitting Equations to Data, Revised Edition, New York: John Wiley and Sons, Inc.
Derringer, G. and Suich, R. (1980), “Simultaneous Optimization of Several Response Variables,” Journal of Quality Technology, 12:4, 214–219.
DuMouchel, W. and Jones, B. (1994), “A Simple Bayesian Modification of D-Optimal Designs to Reduce Dependence on an Assumed Model,” Technometrics, 36, 37–47.
Haaland, P.D. (1989), Experimental Design in Biotechnology, New York: Marcel Dekker, Inc.
References
References
178 References
Hahn, G. J., Meeker, W.Q., and Feder, P. I., (1976), “The Evaluation and Comparison of Experimental Designs for Fitting Regression Relationships,” Journal of Quality Technology, Vol. 8, #3, pp. 140-157. John, P.W.M. (1972), Statistical Design and Analysis of Experiments, New York: Macmillan Publishing Company, Inc. Johnson, M.E. and Nachtsheim, C.J. (1983), “Some Guidelines for Constructing Exact D–Optimal Designs on Convex Design Spaces,” Technometrics 25, 271–277. Jones, Bradley (1991), “An Interactive Graph For Exploring Multidimensional Response Surfaces,” 1991 Joint Statistical Meetings, Atlanta, Georgia Khuri, A.I. and Cornell, J.A. (1987) Response Surfaces: Design and Analysis, New York: Marcel Dekker. Lenth, R.V. (1989), “Quick and Easy Analysis of Unreplicated Fractional Factorials,” Technometrics, 31, 469–473. Lin, D. K. J. (1993), “A New Class of Supersaturated Design,” Technometrics, 35, 28-31. Lucas, J.M., (1990), “Letter to the Editor: Comments on Cook and Nachtsheim (1989),” Technometrics, 32, 363–364. Mahalanobis, P.C. (1947), “Sankhya,” The Indian Journal of Statistics, Vol 8, Part 2, April. Myers, R.H. (1976) Response Surface Methodology, Boston: Allyn and Bacon. Myers, R.H. (1988), Response Surface Methodology, Virginia Polytechnic and State University. Meyer, R.K. and Nachtsheim, C.J. (1995), The Coordinate Exchange Algorithm for Constructing Exact Optimal Designs,” Technometrics, Vol 37, pp. 60-69. Meyer, R.D., Steinberg, D.M., and Box, G.(1996), Follow-up Designs to Resolve Confounding in Multifactor Experiments, Technometrics, 38:4, p307. Mitchell, T.J. (1974), “An algorithm for the Construction of D-Optimal Experimental Designs,” Technometrics, 16:2, pp.203-210. Morris, M.D., Mitchell, T.J., and Ylvisaker, D. (1993), “Bayesian Design and Analysis of Computer Experiments: Use of Derivatives in Surface Prediction ,” Technometrics 35:2, 243-255. Piepel, G.F. 
(1988), “Programs for Generating Extreme Vertices and Centroids of Linearly Constrained Experimental Regions,” Journal of Quality Technology 20:2, 125-139. Plackett, R.L. and Burman, J.P. (1947), “The Design of Optimum Multifactorial Experiments,” Biometrika, 33, 305–325. St. John, R.C. and Draper, N.R. (1975), “D-Optimality for Regression Designs: A Review,” Technometrics, 17 pp 15-23. Sheffé, H. (1958) Experiments with Mixtures, JRSS B 20, 344-360. Snee, R.D. and Marquardt, D.W. (1974), “Extreme Vertices Designs for Linear Mixture Models,” Technometrics, 16, 391–408. Snee, R.D. (1975), “Experimental Designs for Quadratic Models in Constrained Mixture Spaces,” Technometrics, 17:2, 149–159.
References 179
Snee, Ronald D. (1985)Computer Aided Design of Experiments - Some Practical Experiences, Journal of Quality Technology, Vol 17. No. 4 October 1985 p.231. Taguchi, G. (1976), “An Introduction to Quality Control,” Nagoya, Japan: Central Japan Quality Control Association. Welch, W.J. (1984), “Computer-Aided Design of Experiments for repsonse Estimation,” Technometrics, 26, 217–224.
References
Snee, R.D. (1979), “Experimental Designs for Mixture Systems with Multicomponent Constraints,” Commun. Statistics, A8(4), 303–326.
Index
A
ABCD designs 135
aberration designs 72
acceptable values See lower limits and upper limits
Actual-by-Predicted plots 82
adding
    center points in augment designs 149, 153–154
    factors 25, 39, 58
    linear constraints 142
    responses 7
additional runs 28
A-efficiencies 15
algorithms
    CONSIM 142
    coordinate exchange 38
aliasing effects 10, 76–77
Alpha 165–166
animation scripts 167
Anova reports 144
assigning importances (of responses) 8
augment designs
    choices provided 5
    extending experiments 149
    how to use 151
augmentation 149
axial
    points 85
    scaling, central composite designs 88
B
balanced designs 27
Bayesian D-optimal designs 53
Big Class.jmp 44
block sizes 41
blocks
    randomizing within 42
Borehole Factors.jmp 108
Borehole Sphere Packing.jmp 109
boss option 50
Bounce Data.jmp 91
Bounce Factors.jmp 91
Bounce Response.jmp 91
Box Cox transformations 82
Box-Behnken designs 85, 91
    See also response surface designs
Byrne Taguchi Data.jmp 127
Byrne Taguchi Factors.jmp 128

C
Canonical Curvature tables 94
CCD See central composite designs
center points
    augment designs 149, 153–154
    central composite designs 85
    number of 78
    response surface designs 85
    simplex centroid designs 136
central composite designs 85, 87, 89
    See also response surface designs
centroid points 141
Chakravarty 72
changing generating rules 10, 76
chemical mixture, examples 145
choosing designs 9
coded designs 10, 76
coding, column property 19
column properties
    coding 19
    constrained state 14
    design roles 14, 21
    mixture 20
    responses limits 20
combinations, disallowed 18
CONAEV method 142
confounding 76–77, 82
    resolution numbers 71
CONSIM algorithm 142
constraints
    adding 142
    disallowing combinations 18
    entering 14
    linear 141
    loading 15, 49
    saving 14
contour
    plots 145
    profilers 81, 96
control factors 4, 127
CONVRT method 142
coordinate exchange algorithms 38
Cotter designs 16, 73
counts per unit (power and sample size) 173
creating
    data tables 28
    factors tables 14
criterion, optimality 16
crossing factors 36, 49
cube plots 82
Cubic Model.jsl 34
cubic models 33
custom designs
    advantages 41
    cubic models 33
    data tables 28
    Design Generation panel 27
    examples 39, 41–51
    factors, defining 25
    how they work 38
    introduction 3, 23
    models, describing 26
    modifying a design 29
    Prediction Variance Profiler 30
    quadratic model 30
    screening 34
    steps involved in creating 25
D
data tables
    creating 28
    description 11
    response surface designs 88
defaults, number of random starts 17
defects 173
D-efficiencies 15, 60
describing models 26
design
    matrix table properties 16
    resolutions 71
    roles 14
    table variable 11
design roles 21
Design table variable 90
designers, augment 5, 151
designs
    ABCD 135
    aberration 72
    augment 149
    balanced 27
    Bayesian D-optimal 53
    Box-Behnken 85, 91
    central composite 85, 89
    coded 10, 76
    Cotter, suppressing 16
    custom See custom designs
    foldover 149, 153
    fractional factorial 71, 76
    full factorial 4, 69, 71, 115, 117
    Latin Hypercube 103
    minimum aberration 72
    mixed-level 72
    mixture 136, 145
    modifying 29
    orthogonal
        screening designs 71
        screening experiments 133
        surface designs 87
    orthogonal arrays 72
    Plackett-Burman 72
    replicating 149, 152
    response surface 85
    saturated 27
    screening 69
    selecting 9
    simplex centroid 136
    simplex lattice 135, 138
    space-filling 99–113
    Sphere Packing 108
    uniform precision 87
desirability
    functions 95, 122, 130
    maximizing 96
    traces 95
    values 8
determinants 17
diagnostics for custom designs 15
Diamond Constraints.jmp 49
Difference to Detect option 166–167, 170
disallowed combinations 18
distributions 119
DOE
    simple examples 5
    tab on the JMP Starter window 3
    utilities 12
DOE Example 1.jmp 11
DOE K Exchange Value 67
DOE Mixture Sum 67
DOE Search Points Per Factor 67
DOE Sphere Radius 18
DOE Starting Design 67
DOE Starts 67
Donev Mixture factors.jmp 46
D-optimal
    augmentation 149
    designs 53
    optimality criteria 16

E
effect
    aliasing 10, 76–77
    attributes 94
    eigenvalue 94
    eigenvector 94
    size 165
    sparsity 69, 72–73
effects
    nonestimable 71
    orthogonal 89, 125
efficiencies, D, G, and A 15
efficiency features 12
eigenvalue of effect 94
eigenvector of effect 94
equivalent solutions 38
error standard deviation 165–166
error variance 31
extra parameters 166
extreme vertices 135, 139
    finding subsets 141

F
factor constraints, example 48–51
factor design tables 35
Factor Profiling option 95–96, 122, 146
factorial designs
    fractional 71, 76
    full 4, 69, 71, 115, 117
    three-level 72
factors
    adding 25, 39, 58
    constraints 18
    control factors 4, 127
    crossing 36, 49
    defining 25
    entering 9, 25
    key factors 69
    loading 14, 46, 80
        examples 91, 128
    nonmixture 46
    points per factor 67
    saving 13, 80
    tables, creating 14
    types 14
Factors panel 6, 9
false negatives 73
finding subsets (extreme vertices) 141
fitting mixture designs 143
fitting models, surface designs 94
fixed covariates, example 43–45
flexible block size 41
folding the calculation 144
foldover designs 149, 153
fractional factorial designs 71, 76
full factorial designs 4, 69, 71, 115, 117
    Design Generation panel 27
    examples 118
functions, desirability 95, 122, 130

G
G-efficiencies 15
generating
    rules 10, 76
    seeds 15
    tables 79
global optimum 17
goal types (responses) 7
goals
    matching targets 8
    minimizing and maximizing 8
Group By command 59
grouping variables (DOE) 59
H
hidden intercepts 144
hyperspheres 18
I
identifying key factors 69
importance of responses 8
importance weight 8, 13
inner arrays, inner designs 4, 125, 127
Inscribe option 89
interaction plots 81
interactions 73
    high-order 71
intercepts, hidden 144
I-optimal
    designs 53, 55
    optimality criteria 16
J
JSL (JMP Scripting Language)
    animation scripts 167
    augmented designs 157
    creating Model Specification dialog (Model script) 79
    examples for DOE 34
    factor constraints 18
    Model script 129
    random starts 67
    searching for points per factor 67
    sphere radius 18
K
Keep the Same command 28
k-Sample Means (power and sample size) 170
L
L18 Chakravarty 72
L18 Hunter 72
L18 John 72
L36 72
L9-Taguchi 128
Label column 79
larger-the-better formulas (LTB) 129
Latin Hypercube space-filling design 99, 103–106
limits
    lower and upper 8
limits, responses 20
linear constraints 141–142
loading
    constraints 15, 49
    factors 14, 46, 80
        examples 91, 128
    responses 13, 91
local optimum 17
lower limits 8
M
matching target goals 8
matrix 16
maximizing
    desirability 96
    goals 8
means, one and two sample 166
methods, CONAEV 142
minimizing goals 8
minimum aberration designs 72
mixed-level designs 72
mixture designs 136
    compared to screening designs 133
    definition 5
    examples 145–146
    fitting 143
    linear constraints 141
    response surfaces 146
    simplex centroids 136, 138
    with nonmixture factors 46
mixture, column properties 20
Model script, Model Specification dialog 11
Model Specification dialog 80
Model table properties 79, 94
models
    cubic 33
    custom design 26
    describing 26
modifying designs 29
my constraint 18
N
N factors, adding 25
N responses, adding 7
noise factors 4, 125, 127
nonestimable effects 71
nonmixture factors 46
Normal Quantile command 119
Number of Center Points command 78
Number of Replicates command 78
number of runs 3, 27
    screening designs 71
number of starts 16
O
On Face option 89
one-sample and two-sample means 166
one-sample proportion (power and sample size) 172
one-sample variance (power and sample size) 171
optimal determinants 17
optimality criteria 16
Optimality Criterion 16, 58
order for runs 10, 78
order of runs 28
orthogonal array designs 72, 125
orthogonal designs
    screening designs 71
    screening experiments 133
    surface designs 87
Orthogonal option 89
outer arrays, outer designs 4, 125, 127–128
P
parameters, extra 166
Pattern column 11, 79, 90, 117
performance statistics 127
Plackett-Burman designs 72
Plasticizer.jmp 145
plots
    Actual-by-Predicted 82
    contour 145
    cube 82
    interaction 81
    prediction variance 30
    spinning 93
    ternary 133, 143
points
    axial 85
    center See center points
    centroid 141
    per factor 67
Poisson-distributed counts 173
potential terms (DOE) 60
power
    analyses 165
    factors in custom design 49
    in statistical tests on means 165
    one-sample and two-sample means 166–167
power and sample size calculations 163–175
    animation 169
    counts per unit 173
    k-sample means 170
    one-sample and two-sample proportions 172
    one-sample mean 167
    one-sample variance 171
    sigma quality level 174
    two-sample means 170
prediction
    profilers 31, 95, 130
    traces 95
    variance profilers 30, 35
    variances 31, 87
prediction formulas, saving 147
primary terms (Bayesian D-optimal design) 60
profilers
    contour 81, 96
    effect screening analysis 81
    mixture response surface 146
    prediction profilers 31, 95, 130
    prediction variance profilers 30, 35
properties, columns 19
proportions (power and sample size) 172
prospective power analysis 165
pseudocomponent (mixture column property) 20
Q
quadratic model 30–33
R
radius, sphere 18
random seeds
    displaying 15
randomizing
    runs 28
    starts, JSL 67
randomizing within blocks 42
range constraints 139–140
Reactor 32 Runs.jmp 117
Reactor 32.jmp 119
Reactor 8 Runs.jmp 81, 151, 155
Reactor Augment Data.jmp 157
Reactor Factors.jmp 118
Reactor Response.jmp 118
regressor columns 73
relative proportions See mixture designs
replicating designs 149, 152
    number of replicates 78
requesting additional runs 28
rescaling designs 89
resolution numbers 71
resolutions of designs 71
response limits, column property 20
response surface designs
    examples 91, 96–97
    introduction 4
    purpose 85
    reports 94
    with blocking factors 57
    with categorical factors 57
Response Surface Methodology (RSM) 4
response surfaces
    effects 144
    mixture designs 146
responses
    adding 7
    desirability values 8
    goals 6
    goals, desirability functions 95
    loading 13, 91
    lower limits 8
    saving 13
    simulated response values 15
    upper limits 8
Responses panel 6
RMSE 82, 121
robust engineering 125
roles, design 21
Rotatable option 89
RSM (Response Surface Methodology) 4
rules, changing generating 10, 76
runs
    additional 28
    order they appear in table 10, 28, 78
    requesting additional 28
    screening designs 71
S
sample means 166
Sample Size, Power command 163
sample sizes
    example comparing one proportion to baseline and sample size plot 173
    example comparing single-direction one-sample variances 171
    example with counts per unit 173
    one and two sample means 167
    prospective power analysis 165
    screening designs 115
saturated designs 27
saving
    constraints 14
    factors 13, 80
    prediction formulas 147
    responses 13
    X Matrix 16
scaling
    axial 88
    designs 89
Scheffé polynomial 144
screening designs 69
    custom designs 34
    design types 71
    dialogs 74
    examples 74–80
    introduction 4
scripts
    animation 167
    generating the analysis model
        Model script See Model table property
scripting See JSL
searching for points per factor 67
seeds
    displaying generating seeds for designs 15
selecting designs 9
setting random seeds 15
sigma quality level (power and sample size) 174
signal factors 125
signal-to-noise ratios 4, 125
simplex 133
    centroid designs 136
    lattice designs 135, 138
simulated response values 15
single-sample means (power and sample sizes) 167
solution tables 94
solutions, equivalent 38
space-filling designs 99–113
    simplex lattice 138
    simplex centroids 136
sparsity, effect 69, 72–73
sphere packing
    Borehole problem 108
    methods 101
    space-filling designs 99
sphere radius 18
spinning plots, Box-Behnken designs 93
standard deviation, error 165
star points 85
starts, number of 16, 67
statistics, performance 127
Stepwise control panels 120
subsets, finding 141
supersaturated designs 63
suppressing Cotter designs 16
surface designs See response surface designs
T
table properties, Model 79, 94
tables
    Canonical Curvature 94
    data tables, creating 28
    factor designs 35
    factors table, creating 14
    generating 79
    making in custom designs 28
    solution 94
Taguchi designs 125–131
    description 4
    examples 127
    methods 125
target values 8
ternary plots 133, 143
traces, desirability 95
trade-off in screening designs 71
transformations, Box Cox 82
tutorial examples
    augment designs 155–161
    custom designs 39, 41–51
    DOE 5–11
    full factorial designs 118
    mixture designs 145–146
    response surface designs 91, 96–97
    screening designs 74
    Taguchi designs 127
two-level categorical 9
two-level fractional factorials 71
two-level full factorials 71
two-sample and one-sample means 166, 170
two-sample proportion (power and sample size) 172
U
Uniform (space-filling design) 99, 106
uniform precision designs 87
upper limits 8
User Defined option 89
utilities commands 12
V
values
    simulated responses 15
    target 8
variables, grouping (DOE) 59
variance, error and prediction 31
variance of prediction 89
vertices
    extreme 135, 139
    extreme, finding subsets 141
W-Z
weight, importance 8, 13
whole model test 144
X Matrix, saving 16
XVERT method 140, 142