Food Chemical Risk Analysis Edited by DAVID R. TENNANT TAS International London UK
BLACKIE ACADEMIC & PROFESSIONAL An Imprint of Chapman & Hall
London • Weinheim • New York • Tokyo • Melbourne • Madras
Published by Blackie Academic and Professional, an imprint of Chapman & Hall, 2-6 Boundary Row, London SE1 8HN, UK

Chapman & Hall, 2-6 Boundary Row, London SE1 8HN, UK
Chapman & Hall GmbH, Pappelallee 3, 69469 Weinheim, Germany
Chapman & Hall USA, 115 Fifth Avenue, New York, NY 10003, USA
Chapman & Hall Japan, ITP-Japan, Kyowa Building, 3F, 2-2-1 Hirakawacho, Chiyoda-ku, Tokyo 102, Japan
DA Book (Aust.) Pty Ltd, 648 Whitehorse Road, Mitcham 3132, Victoria, Australia
Chapman & Hall India, R. Seshadri, 32 Second Main Road, CIT East, Madras 600 035, India

First edition 1997

© 1997 Chapman & Hall

Typeset in 10/12 pt Times by Florencetype Ltd, Stoodleigh, Devon, UK
Printed in Great Britain by T.J. International, Padstow, Cornwall, UK

ISBN 0 412 72310 7

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the UK Copyright Designs and Patents Act, 1988, this publication may not be reproduced, stored, or transmitted, in any form or by any means, without the prior permission in writing of the publishers, or in the case of reprographic reproduction only in accordance with the terms of the licences issued by the Copyright Licensing Agency in the UK, or in accordance with the terms of licences issued by the appropriate Reproduction Rights Organization outside the UK. Enquiries concerning reproduction outside the terms stated here should be sent to the publishers at the London address printed on this page.

The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

A catalogue record for this book is available from the British Library
Printed on acid-free text paper, manufactured in accordance with ANSI/NISO Z39.48-1992 (Permanence of Paper)
Foreword

That chemicals in food, whatever their origin, might present a risk to the consumer has long been recognised. However, early food regulations at the beginning of the century were aimed primarily at preventing adulteration and fraud rather than at consumer safety. Only in the second half of the century have the tools evolved to estimate the risks to human health from chemicals in food and to manage those risks in a meaningful way. These tools have their origins in forensic toxicology and pharmacology on the one hand and, on the other, in the emerging science of risk analysis, directed initially at identifying sources of risk and at managing and 'designing out' risk from industrial activity, and given added impetus through investment from the space programme. None of these disciplines was ideally suited to the purpose, but from these roots have emerged increasingly refined techniques directed specifically at the assessment of risk from chemicals in food, driven by the needs of regulatory authorities at the national level and by international committees such as the Joint FAO/WHO Expert Committee on Food Additives, the Joint FAO/WHO Meeting on Pesticide Residues and the Codex committees on food additives and contaminants, pesticide residues and veterinary drug residues in food.

More recent developments in understanding of the mechanisms of chemical toxicity, with major inputs from the field of molecular biology, have added further impetus to the pace of evolution of the methodology of hazard characterisation and risk assessment, and point the way to further advances that might obviate, or at least minimise, the need for very extensive and expensive studies in experimental animals. It is recognised that food chemical risk analysis and management is a multi-stage process involving hazard characterisation, risk assessment and risk management, and this is reflected in the structure of this book.
Traditionally, the hazard characterisation stage has been conducted largely in experimental animals, with only limited input from human data (e.g. pharmacokinetics) and with in vitro data making a relatively minor contribution directed at specific and limited end-points such as genotoxicity. This situation is undergoing a sea change, and it is anticipated that such data will play a more extensive and important role in risk characterisation and evaluation. A further factor influencing the pace and direction of developments in risk analysis has been increasing consumer awareness of food chemicals as a source of involuntary risk and of the ethical issues arising from toxicological studies in animals. This has led to sometimes conflicting demands for greater rigour in risk characterisation whilst
reducing the extent of testing in laboratory animals. However, movement can already be seen towards resolving the conflict through the development of increasingly sophisticated in vitro techniques in pharmacokinetics and toxicodynamics, making use of genetically modified cells carrying genes coding for human variants of the enzymes involved in the metabolism of xenobiotics. These techniques hold promise at least for prioritisation and, in the longer term, for a significant reduction in the need for experimental animal studies. The emerging techniques include those for predicting toxicity based on expert systems or on molecular modelling of potential substrate interactions with key enzymes or receptors. These also appear potentially useful in determining the need for, and extent of, animal testing required for an adequate risk analysis. However, the need to limit the amount of animal experimentation is not based solely on ethical considerations; there is increasing awareness that current animal models often are not good surrogates for humans and, where comparative data on toxicity are available, frequently appear to produce irrelevant results (e.g. rodent nephrotoxicity/carcinogenicity related to a species-specific α2u-globulin, or bladder carcinogenesis in male rats) or seriously to overestimate the risk (e.g. phthalate esters and other peroxisome proliferators). The methods of risk assessment to date have tended to concentrate on the effects of exposure to single chemicals, although the 'Group ADI' approach has gone some way towards linking together the assessment of chemicals which are similar in their chemical structure, mode of metabolism and mechanisms of toxicity. It is increasingly obvious that this does not always give an adequate characterisation of hazard and estimate of risk; both hazard and risk are modulated by other dietary components, and a more holistic, integrated approach should be aimed at in order not to underestimate or overestimate risk.
The former might compromise health while the latter would lead to unnecessary and expensive measures to reduce risk. There are two distinct paradigms used in the risk assessment stage, based on two discrete assumptions. One assumes that toxicity is thresholded; the second makes no such assumption but adopts a dose-response model in which risk reaches zero only at zero dose (equally an assumption). In engineering terms, the former assumption is analogous to the organism, like a fibre or rod, having an 'elastic limit' which, only if it is exceeded, leads to irreversible deformation and an increasing load-dependent risk of failure; the latter assumes no elasticity in the system. Homeostasis indicates that for many kinds of chemical stress organisms do have some elasticity, i.e. a reversible capacity to adapt, but this thresholded model is not generally accepted as applicable to genotoxic carcinogens, where it is assumed that any load carries a finite risk of failure. It is clear that refinement of both paradigms is needed, and this may come
from advances both in biologically based dose-response and pharmacokinetic models and from the application of more sensitive biomarkers of exposure and critical effect. Developments on these aspects also proceed apace. Since risk depends on the degree of exposure as well as on the intrinsic toxicity of chemicals in food, there has also been a need to refine the procedures for estimating intakes beyond those originally developed for nutrition research purposes. Particular attention has had to be paid to variability in patterns of food consumption in different cultures and by different age groups at different times, taking account of extreme consumers. Because of their higher caloric intake on a body weight basis, infants and children have been subject to particular scrutiny. However, the data on food intakes often remain fragmentary, making risk analysis less precise than it might be. Finally, at the risk management stage, it is clear that science is not the only input; consumer perceptions of risk and of the socially acceptable limits to risk also determine the nature of the measures required to provide an appropriate degree of assurance. This requires specialised methodology to determine consumer perceptions of risk and benefit (as in the case of saccharin in the USA, where the consumers rather than the regulators determined that the benefits outweighed the risk) and to understand the processes of risk communication. Ultimately, if the scientific appraisal of the risk is adequately communicated in an objective and unbiased manner, the social determinants of acceptability will have a major role in deciding the risk management procedures demanded. In all of the areas mentioned above there has been significant and increasingly rapid progress in providing a more secure foundation for risk analysis. It is therefore highly timely to take stock of the present situation.
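The contrast between the threshold and non-threshold paradigms discussed above can be made concrete with a small numerical sketch. The Python below is purely illustrative: the NOAEL, the conventional 100-fold safety factor and the linear slope value are assumed example numbers, not figures taken from this book.

```python
def adi_from_noael(noael_mg_per_kg_bw, safety_factor=100.0):
    """Threshold paradigm: divide the no-observed-adverse-effect level
    (NOAEL) from animal studies by a safety factor (conventionally 100:
    10 for interspecies differences x 10 for human variability) to
    derive an acceptable daily intake (ADI)."""
    return noael_mg_per_kg_bw / safety_factor

def lnt_excess_risk(dose_mg_per_kg_bw, slope_per_mg_per_kg_bw):
    """Non-threshold paradigm: risk reaches zero only at zero dose, so
    low-dose excess lifetime risk is approximated as slope * dose."""
    return slope_per_mg_per_kg_bw * dose_mg_per_kg_bw

# Illustrative values (assumptions, not data from this book):
adi = adi_from_noael(50.0)           # NOAEL 50 mg/kg bw/day -> ADI 0.5 mg/kg bw/day
risk = lnt_excess_risk(0.001, 0.02)  # dose 0.001 mg/kg bw/day, assumed slope 0.02
print(adi, risk)
```

Under the threshold model any intake below the ADI is regarded as presenting no appreciable risk, whereas under the linear no-threshold model every non-zero dose carries some calculated risk; this is why the two paradigms can lead to very different risk management decisions for the same chemical.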
This book is a comprehensive appraisal of the current state of the art of food chemical risk analysis and risk management by specialists in the various contributory fields, with a forward-looking perspective on future possibilities. As such it represents a unique compilation of great value to all who are involved in, or seek to understand, the processes of risk analysis and risk management.

R. Walker
Contributors
B.N. Ames
Division of Biochemistry and Molecular Biology, Barker Hall, University of California, Berkeley, CA, 94720, USA
D. Ball
Centre for Environmental and Risk Management, School of Environmental Sciences, University of East Anglia, Norwich, Norfolk, NR4 7TJ, UK
M. Balls
ECVAM, JRC Environment Institute, 21020 Ispra, Italy
DJ. Benford
Molecular Toxicology Research Group, School of Biological Sciences, University of Surrey, Guildford, Surrey, GU2 6SU, UK
C.L. Broadhead
FRAME, Russell & Burch House, 96-98 North Sherwood Street, Nottingham, NG1 4EE, UK
F.F. Busta
University of Minnesota, 1334 Eccles Ave., Run 225, St Paul, MN 55108-6099, USA
C.F. Chaisson
Technical Assessment Systems, Inc., The Flour Mill, 1000 Potomac Street, NW, Washington, DC 20007, USA
M.A. Cheeseman
Food and Drug Administration, Center for Food Safety and Applied Nutrition, Office of Premarket Approval, HFS-200, 200 C Street SW, Washington DC 20204, USA
R.D. Combes
FRAME, Russell & Burch House, 96-98 North Sherwood Street, Nottingham, NG1 4EE, UK
J.S. Douglass
Technical Assessment Systems, Inc., 1000 Potomac Street, NW, Washington, DC 20007, USA
L.J. Frewer
Institute for Food Research, Earley Gate, Whiteknights Road, Reading, RG6 6BZ, UK
A.C.D. Hayward
School of Environmental Sciences, University of East Anglia, Norwich, NR4 7TJ, UK
B. Heinzow
Institute of Environmental Toxicology, Fleckenstr. 2-4, D-24105 Kiel, Germany
P. Judson
'Heather Lea', Bland Hill, Norwood, Harrogate, HG3 1TE, UK
N. Lazarus
Open University, St James House, 150 London Road, East Grinstead, West Sussex, RH19 1HG, UK
D.F.V. Lewis
Molecular Toxicology Research Group, School of Biological Sciences, University of Surrey, Guildford, Surrey, GU2 5XH, UK
D.P. Lovell
BIBRA International, Woodmansterne Road, Carshalton, Surrey, SM5 4DS, UK
EJ. Machuga
Food and Drug Administration, Center for Food Safety and Applied Nutrition, Office of Premarket Approval, HFS-200, 200 C Street SW, Washington, DC 20204, USA
E.M. Mortby
Ministry of Agriculture, Fisheries and Food, Ergon House, c/o Nobel House, 17 Smith Square, London, SW1P 3JR, UK
J.A. Norman
Ministry of Agriculture, Fisheries and Food, Ergon House, c/o Nobel House, 17 Smith Square, London, SW1P 3JR, UK
M. Postle
Risk and Policy Analysts Ltd, Farthing Green House, 1 Beccles Road, Loddon, Norfolk, NR14 6LT, UK
C.J.M. Rompelberg
TNO Nutrition and Food Research Institute, PO Box 360, 3700 AJ Zeist, The Netherlands
N.R. Reed
Department of Pesticide Regulation, California Environmental Protection Agency, 1020 N Street, Sacramento, CA 95814-5624, USA
RJ. Scheuplein
The Weinberg Group Inc., 1220 Nineteenth Street, NW, Washington, DC 20036, USA
R. Shepherd
Institute for Food Research, Earley Gate, Whiteknights Road, Reading, RG6 6BZ, UK
T.H. Slone
Life Sciences Division, Ernest Orlando Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
M. Strube
TNO Nutrition and Food Research Institute, PO Box 360, 3700 AJ Zeist, The Netherlands
L. Swirsky Gold
Life Sciences Division, Ernest Orlando Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
D.R. Tennant
TAS International, Chartwell House, 38 Church Street, Malvern, Worcestershire, WR14 2AZ, UK
G. Thomas
BIBRA International, Woodmansterne Road, Carshalton, Surrey, SM5 4DS, UK
P.J. van Bladeren
TNO Nutrition and Food Research Institute, PO Box 360, 3700 AJ Zeist, The Netherlands
G. van Poppel
TNO Nutrition and Food Research Institute, PO Box 360, 3700 AJ Zeist, The Netherlands
H. Verhagen
TNO Nutrition and Food Research Institute, PO Box 360, 3700 AJ Zeist, The Netherlands
R. Walker
School of Biological Sciences, University of Surrey, Guildford, Surrey, GU2 3XN, UK
Preface
It was not so long ago that food chemicals were controlled (when they were controlled at all) through a simple system of approvals which allowed virtually limitless use of some chemicals whilst prohibiting others. Food chemical regulation has come a long way since that time, but the process of evolution is not yet complete. Two key factors have been central to this development: first, that the toxicity of any chemical is related to the dose; and second, that whilst science may provide many answers, the solution of food safety problems is essentially a socio-political process. The understanding of dose-response has been a fundamental concern of toxicologists, but exposure analysts are now entering the scene, seeking to establish the actual doses to which consumers are exposed. Meanwhile, new toxicological methods have been under development which reduce the reliance on traditional animal models. Taken together, these approaches form the core of food chemical risk assessment. Food chemical risk management used to be the domain of government regulators. Now it is increasingly regarded as a responsibility of all stakeholders in the food production and consumption process. In particular, the views and perceptions of consumers about food safety are being seen as legitimate and often crucial parts of the risk management paradigm. The purpose of this book is to fill in some of the detail around recent developments and then to anticipate the future evolution of food chemical risk analysis. Our aim has not been to provide a comprehensive analysis of every aspect of risk analysis - several of the topics covered could easily justify a separate volume. Instead, we have sought to provide an introduction to the methods presently in use, some of the current controversies, and developments near the leading edge of the discipline.
We hope that those working in the many diverse professions associated with food chemicals will find within this book an opportunity to learn more about the roles of other professionals whom they may rarely meet. We also hope that consumers will find the book a useful source of information about the safety of chemicals in food. After all, everyone who picks up this book is a consumer of food and we all have an inborn interest in the food we eat. I am very grateful for all the hard work put in by the many contributors to this book. I also appreciate deeply the patience of my employers
who allowed me to pursue this project: formerly the UK Ministry of Agriculture, Fisheries and Food, and currently TAS International.

David Tennant
July 1997
Contents

Foreword ................................................................ xv
Contributors ............................................................ xix
Preface ................................................................. xxiii

Part I. Introduction .................................................... 1

1. Food, Chemicals and Risk Analysis .................................... 3
   1.1 Introduction ..................................................... 3
   1.2 Food Chemicals ................................................... 4
   1.3 Characteristics of Food Chemicals ................................ 5
       1.3.1 Food Additives ............................................. 5
       1.3.2 Pesticide Residues ......................................... 6
       1.3.3 Veterinary Residues ........................................ 6
       1.3.4 Environmental Contaminants ................................. 6
       1.3.5 Biogenic Contaminants ...................................... 7
       1.3.6 Inherent Phytotoxins ....................................... 8
       1.3.7 Cooking and Processing Contaminants ........................ 8
       1.3.8 Food Contact Materials ..................................... 8
       1.3.9 Novel Foods and Novel Food Technologies .................... 8
       1.3.10 Beneficial Food Chemicals ................................. 9
       1.3.11 Toxicological Effects of Food Chemicals ................... 10
   1.4 Risk Analysis .................................................... 10
       1.4.1 Risk Assessment ............................................ 13
       1.4.2 Risk Management ............................................ 13
       1.4.3 Risk Communication ......................................... 14
   1.5 The Nature of Risk ............................................... 14
   1.6 Personal Decisions about Risks ................................... 15
   1.7 The Use of Risk Analysis in Food Safety .......................... 16
   1.8 Uncertainty ...................................................... 16
   1.9 Conclusion ....................................................... 17
   Further Reading ...................................................... 18

Part II. Risk Assessment ................................................ 19

2. Food Chemical Risk Assessment ........................................ 21
   2.1 Introduction ..................................................... 21
   2.2 Current Approaches to Risk Assessment ............................ 22
       2.2.1 Hazard Identification and Prioritization ................... 22
       2.2.2 Hazard Characterization .................................... 23
       2.2.3 Occurrence Information ..................................... 24
       2.2.4 Food Consumption Data ...................................... 25
       2.2.5 Intake Estimation .......................................... 25
       2.2.6 Risk Characterization ...................................... 26
   2.3 Sources of Uncertainty in Hazard Characterization ................ 26
       2.3.1 Uncertainty Analysis ....................................... 27
       2.3.2 Animal Studies ............................................. 29
       2.3.3 In Vitro Studies ........................................... 32
       2.3.4 Human Studies .............................................. 33
       2.3.5 Thresholded Toxins ......................................... 34
       2.3.6 Non-Thresholded Toxins ..................................... 36
       2.3.7 Interactions Between Food Chemicals ........................ 40
       2.3.8 Individual Susceptibility .................................. 41
   2.4 Uncertainties in Risk Characterization ........................... 42
       2.4.1 Interpretation of Hazard Evaluation ........................ 43
       2.4.2 Variations in Food Chemical Intakes ........................ 43
       2.4.3 Time Integration of Intake Estimates ....................... 44
       2.4.4 Effect of Short-Term Variations in Food Consumption on
             Estimates of Intake ........................................ 44
       2.4.5 Effect of Long-Term Variations in Food Consumption on
             Estimates of Intake ........................................ 46
       2.4.6 Toxicological Significance of Dosing Period ................ 46
       2.4.7 Corrections for Body Weight and Age ........................ 47
       2.4.8 Effect of Age on Food Chemical Intakes ..................... 48
       2.4.9 Correction Factors for Children's Intakes .................. 50
       2.4.10 Alternative Correction Factors ............................ 50
       2.4.11 Risk Characterization Developmental Needs ................. 52
   2.5 Opportunities for Development in Risk Assessment ................. 52
   2.6 Conclusion ....................................................... 53
   References ........................................................... 54

3. Quantitative Risk Assessment ......................................... 57
   3.1 Introduction ..................................................... 57
   3.2 What Is QRA? Definitions ......................................... 57
       3.2.1 Terminology: Hazard, Risk, Safety .......................... 58
       3.2.2 QRA ........................................................ 58
   3.3 QRA and Food Safety: UK and US Perspectives ...................... 59
       3.3.1 Before Delaney ............................................. 60
       3.3.2 The Delaney Clause ......................................... 60
       3.3.3 After Delaney: Diethylstilboestrol, Packaging .............. 60
       3.3.4 The 1990s and Court Rulings ................................ 61
       3.3.5 Moves to Change Delaney (Unfinished Business) .............. 62
       3.3.6 Department of Health, Committee on Carcinogenicity
             Approaches ................................................. 63
       3.3.7 EU Approaches .............................................. 63
       3.3.8 GATT, NAFTA ................................................ 64
   3.4 Advantages of QRA ................................................ 64
       3.4.1 VSD, De Minimis, 'Bright Lines' and Negligible Risk ........ 64
       3.4.2 ALARA and BATNEEC .......................................... 65
   3.5 Safety Factor Versus Mathematical Modeling ....................... 66
       3.5.1 Safety Factor .............................................. 66
       3.5.2 Mathematical Modeling ...................................... 68
   3.6 The LMS Model .................................................... 71
       3.6.1 Theory ..................................................... 71
       3.6.2 The LMS Model in Practice .................................. 72
       3.6.3 Limitations of the Mathematical Models Used in QRA ......... 73
   3.7 Developments in Modeling ......................................... 74
       3.7.1 Time-to-Tumour Models ...................................... 74
       3.7.2 Physiologically-Based Pharmacokinetic (PB-PK) Models ....... 76
       3.7.3 Biologically Based Dose-Response (BB-DR) Models ............ 77
       3.7.4 Benchmark Doses ............................................ 78
       3.7.5 Biomarkers ................................................. 79
   3.8 Future Developments in QRA ....................................... 80
       3.8.1 New EPA Guidelines ......................................... 80
       3.8.2 Linkage of PB-PK and BB-DR Models .......................... 81
   3.9 Conclusion ....................................................... 82
   References ........................................................... 85

4. Biomarkers in Epidemiological and Toxicological Nutrition Research ... 87
   4.1 Introduction ..................................................... 87
   4.2 Classification of Biomarkers ..................................... 88
   4.3 Markers of External and Internal Exposure ........................ 90
   4.4 Markers of Biologically Effective Dose ........................... 90
   4.5 Markers of Early Biological Effects .............................. 94
   4.6 Markers of Modified Structure or Function ........................ 96
   4.7 Markers of Individual Sensitivity ................................ 97
   4.8 Selection, Evaluation and Application of Biomarkers .............. 98
       4.8.1 Biological Aspects ......................................... 99
       4.8.2 Ethical Implications and Constraints ....................... 99
       4.8.3 Practical and Analytical Aspects ........................... 101
       4.8.4 Sensitivity and Specificity ................................ 102
       4.8.5 Human Variability and Study Design ......................... 103
   4.9 Conclusions ...................................................... 104
   Acknowledgement ...................................................... 105
   References ........................................................... 105

5. Expert Systems for Hazard Evaluation ................................. 109
   5.1 Introduction ..................................................... 109
   5.2 Factors Influencing Biological Activity .......................... 111
   5.3 Making Rules for Expert Systems .................................. 114
       5.3.1 Binary Trees ............................................... 115
       5.3.2 Statistical Methods ........................................ 116
       5.3.3 Probabilities .............................................. 118
       5.3.4 Knowledge Bases ............................................ 118
   5.4 Representation of Chemical Structural Information ................ 119
   5.5 Structural Descriptors Used in Expert Systems .................... 121
       5.5.1 Augmented Atoms ............................................ 123
       5.5.2 Atom and Bond Sequences .................................... 124
       5.5.3 Ring Descriptors ........................................... 124
       5.5.4 Atom Pairs ................................................. 124
       5.5.5 Three-Dimensional Descriptors .............................. 125
   5.6 The Effects of Choosing Different Types of Descriptors ........... 126
   5.7 Assessment of Hazard and Risk .................................... 128
   5.8 Some Examples of Expert Systems .................................. 128
   5.9 The Implications of Choosing Different Types of System ........... 130
   5.10 Applicability of Expert Systems to Food Chemical Hazard
        Evaluation ...................................................... 131
   References ........................................................... 132

6. Risk Assessment: Alternatives to Animal Testing ...................... 133
   6.1 Introduction ..................................................... 133
   6.2 The Three Rs Concept ............................................. 134
   6.3 Statistics for the Use of Animals in Food Safety Evaluation ...... 135
       6.3.1 UK ......................................................... 135
       6.3.2 Europe ..................................................... 136
   6.4 Legislation Relating to Food Additive Safety Assessment .......... 136
       6.4.1 UK Legislation ............................................. 136
       6.4.2 European Legislation ....................................... 137
       6.4.3 US Legislation ............................................. 138
   6.5 Tests Required for Food Safety Assessment ........................ 139
       6.5.1 Acute Oral Toxicity Tests .................................. 139
       6.5.2 Short-Term Genetic Toxicity Tests .......................... 139
       6.5.3 Metabolism and Pharmacokinetic Studies ..................... 139
       6.5.4 Immunotoxicity Tests ....................................... 140
       6.5.5 Neurotoxicity Tests ........................................ 140
       6.5.6 Reproductive and Developmental (Teratogenic) Toxicity
             Tests ...................................................... 141
       6.5.7 Carcinogenicity and Chronic Toxicity Tests ................. 141
       6.5.8 Determination of the No Observed Adverse Effect Level ...... 142
       6.5.9 Determination of the Acceptable Daily Intake ............... 142
   6.6 Problems with Animal Tests ....................................... 143
       6.6.1 Determination of the NOAEL and the ADI ..................... 143
       6.6.2 Use of High Doses .......................................... 144
   6.7 Currently Available Alternatives ................................. 144
       6.7.1 Reduction Alternatives ..................................... 145
       6.7.2 Refinement Alternatives .................................... 149
       6.7.3 Replacement Alternatives ................................... 149
   6.8 Conclusions ...................................................... 157
   References ........................................................... 159

7. Molecular Modeling ................................................... 163
   7.1 Introduction ..................................................... 163
   7.2 Chemical Safety Evaluation and Risk Assessment ................... 165
   7.3 The COMPACT Approach ............................................. 168
   7.4 Cytochromes P450 and Their Role in Metabolic Activation .......... 173
   7.5 Protein Modeling ................................................. 177
   7.6 Quantitative Structure-Activity Relationships .................... 179
   7.7 Conclusions ...................................................... 184
   Acknowledgement ...................................................... 191
   References ........................................................... 191

8. Estimation of Dietary Intake of Food Chemicals ....................... 195
   8.1 Introduction ..................................................... 195
   8.2 Intake Assessment Methods for Pesticides and Other Agricultural
       Chemicals ........................................................ 196
       8.2.1 Total Diet Studies ......................................... 196
       8.2.2 Food Grouping Model ........................................ 197
       8.2.3 Federal Biological Agency for Agricultural and Forestry
             Management ................................................. 197
       8.2.4 World Health Organization Tiered Approaches ................ 197
   8.3 Intake Assessment Methods for Food Additives ..................... 203
       8.3.1 Analysis for Additive Usage Data ........................... 203
       8.3.2 Food and Nutrition Division of the French Council of
             Public Health Method ....................................... 203
       8.3.3 Budget Method .............................................. 203
       8.3.4 Codex Proposal for Tiered Additive Intake Assessment ....... 204
   8.4 Food Consumption Data Sources for Food Chemical EDI
       Assessment ....................................................... 206
       8.4.1 Food Consumption Survey Methodology ........................ 206
       8.4.2 Validity, Reliability and Sources of Error in Food
             Consumption Survey Data .................................... 209
       8.4.3 Food Consumption Data Required for EDI Analysis ............ 211
   8.5 Future Trends in Food Chemical Risk Assessment ................... 213
       8.5.1 Probabilistic Methods in Food Chemical Intake Estimation ... 213
       8.5.2 Intake of Multiple Chemicals ............................... 214
   8.6 Uncertainty in Intake Assessment ................................. 215
   8.7 Future Needs for Dietary Intake Assessment ....................... 215
   References ........................................................... 216

9. Assessing Risks to Infants and Children .............................. 219
   9.1 Introduction ..................................................... 219
   9.2 Infants and Children – Unique Population Subgroups ............... 220
       9.2.1 Pharmacokinetics and Pharmacodynamics ...................... 221
       9.2.2 Toxicity ................................................... 223
       9.2.3 Exposures .................................................. 224
   9.3 Implications for Risk Assessment ................................. 226
       9.3.1 Toxicological Considerations ............................... 226
       9.3.2 Exposure Assessment ........................................ 229
       9.3.3 Risk Characterization ...................................... 233
   9.4 Other Considerations ............................................. 235
       9.4.1 In Utero Exposures ......................................... 236
       9.4.2 Multiple Chemical Exposures ................................ 236
   9.5 Conclusion ....................................................... 237
References .................................................................... 238
This page has been reformatted by Knovel to provide easier navigation.
xiv
Contents
10. Dietary Chemoprevention in Toxicological Perspective ..........................................................................
240
10.1 Introduction – Nutrition and Cancer ..................... 240 10.2 Risk Assessment of Carcinogens ........................ 241 10.2.1 Threshold Approach for Non-Genotoxic Carcinogens ............................................... 241 10.2.2 Non-Threshold Extrapolation for Genotoxic Carcinogens .............................. 243 10.3 Genotoxic Substances in the Diet ........................ 243 10.4 Chemopreventive Substances in the Diet ............ 10.4.1 Tiered Approach for Studying Chemopreventive Agents ........................... 10.4.2 Mechanisms of Action ................................ 10.4.3 Alteration of Biotransformation Capacity .... 10.4.4 Nutritive Dietary Chemopreventive Agents ......................................................... 10.4.5 Non-Nutritive Dietary Chemopreventive Agents ........................................................ 10.5 The Lessons of Toxicology Transposed to Chemoprevention: Four Caveats ......................... 10.5.1 A First Caveat: Assessment of Antimutagenic Potential .............................. 10.5.2 A Second Caveat: The Threshold Concept ...................................................... 10.5.3 A Third Caveat: Beware of Toxicity! ........... 10.5.4 A Fourth Caveat: (Anti)Carcinogens Are Not Always (Anti)Mutagens and Vice Versa ..........................................................
244 245 248 249 252 253 256 257 258 258
259
10.6 Feasibility of Dietary Chemoprevention in Humans ............................................................... 260 10.6.1 Evidence from Epidemiological Studies ....................................................... 260 This page has been reformatted by Knovel to provide easier navigation.
Contents
xv
10.6.2 Evidence from Experimental Studies in Humans ...................................................... 260 10.6.3 More Than One Beneficial Compound: The Matrix Approach .................................. 261 10.7 Conclusion ........................................................... 262 Acknowledgements ....................................................... 262 References .................................................................... 263 11. Prioritization of Possible Carcinogenic Hazards in Food .....................................................................................
267
11.1 Causes of Cancer ................................................ 267 11.2 Cancer Epidemiology and Diet ............................ 11.2.1 Dietary Fruits and Vegetables .................... 11.2.2 Calorie Restriction ...................................... 11.2.3 Other Aspects of Diet .................................
267 268 268 269
11.3 Human Exposures to Natural and Synthetic Chemicals ............................................................ 270 11.4 The High Carcinogenicity Rate among Chemicals Tested in Rodents .............................. 273 11.5 The Importance of Cell Division in Mutagenesis and Carcinogenesis ........................ 274 11.6 Ranking Possible Carcinogenic Hazards ............. 11.6.1 Natural Pesticides ...................................... 11.6.2 Synthetic Pesticides ................................... 11.6.3 Cooking and Preparation of Food .............. 11.6.4 Food Additives ............................................ 11.6.5 Mycotoxins ................................................. 11.6.6 Synthetic Contaminants .............................
276 280 281 281 282 283 284
11.7 Future Directions ................................................. 285 Acknowledgements ....................................................... 289 References .................................................................... 289
This page has been reformatted by Knovel to provide easier navigation.
xvi
Contents
12. Threshold of Regulation ......................................................
296
12.1 Introduction .......................................................... 296 12.2 The Threshold of Regulation in Practice .............. 304 12.3 Advantages and Effects of the Threshold of Regulation Process ............................................. 308 12.4 Future Issues ....................................................... 311 References .................................................................... 316 13. An Approach to Understanding the Role in Human Health of Non-Nutrient Chemicals in Food .........................
317
13.1 Introduction .......................................................... 317 13.2 Non-Nutrient Chemicals under Discussion ........... 319 13.3 A New Approach .................................................. 320 13.4 Factors Affecting the Action of Chemicals in Food .................................................................... 13.4.1 Bioavailability .............................................. 13.4.2 Products Entering the Circulation ............... 13.4.3 Multiple Functionality ..................................
321 321 322 322
13.5 The Approach ...................................................... 322 13.5.1 Phase I ....................................................... 324 13.5.2 Phase II ...................................................... 324 References .................................................................... 326
Part III. Risk Management ............................................. 329 14. The Philosophy of Food Chemical Risk Management .......
331
14.1 Introduction – Responsibilities and Benefits ........ 331 14.2 A New Game on a Different Playing Field ............ 332 14.3 The Emerging Role of the Risk Manager ............. 333 14.4 A Glimpse into the Deliberations of the Risk Manager .............................................................. 334 14.5 Applying the Philosophy – Using the Tools .......... 335 This page has been reformatted by Knovel to provide easier navigation.
Contents 15. Consumer Perceptions ........................................................
xvii 336
15.1 Introduction .......................................................... 336 15.2 Ranking the Risks ................................................ 338 15.3 Theories of Risk Perception ................................. 15.3.1 The Psychometric Paradigm ...................... 15.3.2 Relationship to Sociodemographic Variables .................................................... 15.3.3 The Cultural Theory of Risk ........................
344 345 348 350
15.4 Risk Debates and the Importance of Trust ........... 354 15.5 Conclusion ........................................................... 357 Acknowledgement ......................................................... 359 References .................................................................... 359 16. Decision Aids .......................................................................
362
16.1 Introduction .......................................................... 362 16.2 Risk-Benefit Analysis ........................................... 365 16.2.1 The Analytical Framework .......................... 365 16.2.2 The Scope of the Analysis .......................... 366 16.3 Assessing Impacts on Producers and Consumers .......................................................... 369 16.4 Valuing Human Health Risks ............................... 16.4.1 The Risk Assessment Process ................... 16.4.2 The Valuation Techniques .......................... 16.4.3 Other Valuation Techniques .......................
372 372 372 375
16.5 Links to the Environment ..................................... 376 16.6 Summary and Conclusion .................................... 379 References .................................................................... 379 17. Risk Evaluation, Risk Reduction and Risk Control .............
381
17.1 Introduction .......................................................... 381 17.2 Risk Evaluation .................................................... 381 17.2.1 Stakeholder Analysis .................................. 382 This page has been reformatted by Knovel to provide easier navigation.
xviii
Contents 17.2.2 17.2.3 17.2.4 17.2.5 17.2.6
Decision Analysis ....................................... Ethical and Moral Dimensions .................... Quantitative Risk Evaluation ...................... Managing Uncertainty ................................ Sensitivity Analysis .....................................
382 383 384 384 385
17.3 Risk Reduction .................................................... 385 17.3.1 Options for Food Additive Risk Reduction ................................................... 387 17.3.2 Options for Food Contaminant Risk Reduction ................................................... 389 17.4 Risk Control ......................................................... 17.4.1 Risks and Regulation ................................. 17.4.2 Less Prescriptive Control Methods ............. 17.4.3 Voluntary Agreements ................................ 17.4.4 Codes of Practice ....................................... 17.4.5 Hazard Analysis Critical Control Points ......................................................... 17.4.6 Good Manufacturing Practice and ISO 9000 ........................................................... 17.4.7 Monitoring and Surveillance .......................
390 390 391 392 392 393 396 396
17.5 Evaluating, Reducing and Controlling Risks – Getting the Balance Right .................................... 397 References .................................................................... 398 18. Risk Communication ...........................................................
399
18.1 Introduction .......................................................... 399 18.2 Aims of Risk Communication ............................... 399 18.3 Problems Associated with Risk Communication .................................................... 400 18.4 Implications of Models of Risk Perception and Psychological Theories for Communication ......... 402 18.5 Contents of the Risk Message ............................. 404 This page has been reformatted by Knovel to provide easier navigation.
Contents
xix
18.6 Information Sources ............................................ 406 18.7 Target Recipients ................................................ 408 18.8 The Role of the Media ......................................... 409 18.9 Practical Concerns in Risk Communication ......... 413 18.10 Conclusions ......................................................... 414 Acknowledgements ....................................................... 415 References .................................................................... 416 19. Regulating Food-Borne Risks .............................................
418
19.1 Introduction .......................................................... 418 19.2 History of Food Regulation .................................. 418 19.2.1 Why Are Intentional Chemical Additives Used Today? .............................................. 422 19.3 Food Regulation in the USA ................................ 19.3.1 Early Regulation ......................................... 19.3.2 Statutory Background of Current US Food Regulation ......................................... 19.3.3 The Process of Regulatory Approval .......... 19.3.4 Local Enforcement – FDA Field Offices ........................................................ 19.3.5 HACCP, GLPs and Other Prevention Systems ......................................................
422 422
19.4 Scientific Basis for Food Safety Evaluation .......... 19.4.1 Traditional Approach – the Use of Animal Data ................................................ 19.4.2 Safety Factor Versus Risk-Based Methods ...................................................... 19.4.3 Quantitative Risk Assessment of Chemical Carcinogens ............................... 19.4.4 Comparison with Other National Regulatory Systems ...................................
433
This page has been reformatted by Knovel to provide easier navigation.
425 428 430 431
433 435 437 442
xx
Contents 19.5 International Regulation of Food-Borne Substances .......................................................... 19.5.1 GATT .......................................................... 19.5.2 Codex Alimentarius Commission ............... 19.5.3 European Union .........................................
445 445 447 448
19.6 Summary ............................................................. 449 References .................................................................... 450
Part IV. Conclusion ........................................................ 453 20. Integrated Food Chemical Risk Analysis ............................
455
20.1 Introduction .......................................................... 455 20.2 Integrated Risk Assessment ................................ 20.2.1 Integrated Hazard Characterization ........... 20.2.2 Biomarkers – Integrated Indicators of Exposure and Effect ................................... 20.2.3 PB-PK Modeling – an Integrated Approach to Hazard 457 Characterization ......................................... 20.2.4 Integrated Exposure Analysis ..................... 20.2.5 Integrated Risk Characterization ................ 20.2.6 Comparative Risk Assessment ..................
456 456
20.3 Integrated Risk Management ............................... 20.3.1 The Role of Science in Risk Management .............................................. 20.3.2 Integrating Consumer Perceptions ............. 20.3.3 Integrating Risk Communication ................ 20.3.4 Regulation and Deregulation ......................
460
457
457 457 460 460
462 463 464 464
20.4 Integrating Uncertainty ........................................ 465 20.5 Conclusion ........................................................... 466 References .................................................................... 466
Index ............................................................................... 467 This page has been reformatted by Knovel to provide easier navigation.
Part One Introduction
1 Food, chemicals and risk analysis D.R. TENNANT
1.1 Introduction

We all consume many thousands of different chemicals in our food every day. Most of these chemicals are natural constituents of the food we eat. Some are present as a result of contamination from the environment, some arise during production, processing and preparation, and some are intentionally added to food. All chemicals have one characteristic in common: the potential to cause toxicological harm to consumers. Given the huge numbers of chemicals present, it is clear that the vast majority cannot be causing any actual harm; indeed, many are known to confer benefits. The purpose of risk analysis is to identify those chemicals in food which might cause harm, to analyse the potential consequences, to consider any possible benefits and to decide on any action necessary to protect consumers, whilst not unnecessarily impeding trade.

Food safety assessment and control is not a new science. Even the ancient pharaohs had primitive 'risk assessors' in the form of food-tasters. The Hebrews introduced laws of food control, some of which may have had their origins in food safety. Early European food law was established to protect consumers from fraud: from the addition of chalk dust to flour, and the use of lead salts to sweeten wine, for example. In present times, food control agencies throughout the world exist to protect consumers whilst supporting the best manufacturing practices in the food industry.

Until recently, in all but a few countries, information about the presence, effects and likely exposures of consumers to chemicals in food was scarce. Little reliable scientific information was available, so food chemical standards (where they existed) tended to be based on what industry was prepared to bear and on the absence of any obvious cases of food poisoning from chemicals. Some authorities would allow no added chemicals in food at all and set their limits for contaminants at zero.
Now much more information is available and we are facing a revolution in the traditional approaches to food safety with the importation of risk analysis techniques from other disciplines, particularly engineering. The aim of introducing such techniques is to adopt a more scientific approach to food safety which will, in turn, result in more relevance, accuracy, reproducibility and transparency. Such improvements will bring benefits to both food consumers and food producers by ensuring safety whilst facilitating trade.
1.2 Food chemicals

What are food chemicals? Strictly speaking, all food is entirely composed of food chemicals. However, the purpose of this book is to consider only those chemicals which are likely to present a toxicological hazard to consumers and so warrant risk analysis. We are not therefore generally concerned about the macro-constituents of food such as fats, carbohydrates, proteins and fibre. Instead we are interested in those substances present in foods at low concentrations (normally much less than 1%) and where there is some reason to undertake an evaluation, such as the licensing or approval of new products, or where there is evidence of the presence of potentially toxic contamination. Substances which are added intentionally to food, such as colours, sweeteners and preservatives, must usually undergo extensive evaluation prior to approval by the regulatory authorities. Pesticides and veterinary medicines which might persist as residues in food must also be evaluated as part of the licensing procedure. These examples clearly fall into the 'food chemical' category. Other chemicals found in food are not so easy to categorize (Table 1.1). Clearly, many food chemicals are substances which are not naturally present in food and which have been added directly or occur as a consequence of some human activity. However, this is not the case for certain natural contaminants such as mycotoxins like aflatoxin and ochratoxin.
Table 1.1 Food chemicals present in food

  Food additives      Colours; flavours; preservatives; processing aids
  Contaminants        Environmental contaminants; food packaging migrants;
                      processing contaminants
  Residues            Pesticides; veterinary medicines; animal feed additives
  Natural compounds   Mycotoxins; marine biotoxins; plant toxins;
                      bacterial toxins
  Adulterants         Malicious tampering
These compounds are a cause for concern because poisoning episodes in farm animals, following exposures at low levels, have identified them as potential human toxicants. There are also many normal and natural constituents of plants, such as glycoalkaloids in potatoes or cyanogenic compounds in cassava, which have the potential to cause harm and thus also warrant thorough analysis.
1.3 Characteristics of food chemicals

Chemicals are intentionally added to food because they bring some benefits. The function of additives and the need for pesticides, veterinary medicines, etc. therefore constitute an important scientific and technological dimension which needs to be taken carefully into account in risk management. Some chemicals, whether they are added or naturally occurring, also bring benefits by inhibiting toxicological processes or otherwise preventing disease. The antioxidant vitamins, including ascorbic acid (vitamin C) and α-tocopherol (vitamin E), are examples of such compounds, but many other natural and synthetic chemicals can have similar effects. This information must also form part of the risk analysis and needs to be presented to risk managers alongside information about potential toxic effects.

1.3.1 Food additives
Additives are substances added to food to modify the colour, flavour, keeping ability or other qualities of a food product. Often, additives are regarded as alien substances in food, since they are seen as the products of the chemical industry and not traditional food production. However, many substances produced in this way are actually pure analogues of naturally occurring chemicals. In recent years there has been a growing trend away from the use of synthetic additives towards the use of equivalent substances extracted from natural products. Manufacturers can then claim that their products contain 'no artificial additives'. All food additives must undergo extensive testing before they can be licensed for use in food. This applies as much to natural substances as to synthetic analogues. Licensing regimes vary from country to country but most authorities expect to be satisfied of the safety and quality of food additives, as well as agreeing that there is a genuine need, before giving approval for their use. Some 300 additives are approved for use in food in Europe. Approval governs the foods in which additives may be used and limits the levels of use in each type of food. Government and industry work together to ensure that the risks associated with food additives are minimal. However, the final risk management option lies with consumers: strict labelling requirements ensure that consumers are told what is in the food products they are buying and give them the opportunity to choose whether they wish to eat them or not.

1.3.2 Pesticide residues

Pesticides are substances applied during agricultural production to control weeds, insects, fungi and other factors which would affect the yield or quality of the crops. Some pesticides are applied after harvesting to ensure that crops do not deteriorate during storage. Pesticides are normally regulated through maximum residue levels (MRLs) which are permitted in food on sale. MRLs are not strictly 'safety limits', because they reflect good agricultural practice, i.e. the levels of use which give optimum performance in field trials. However, all MRLs are checked to ensure that they will not result in intakes of pesticide residues which would exceed acceptable levels. Like additives, pesticides must undergo extensive toxicological testing to identify acceptable levels of intake.

1.3.3 Veterinary residues

Medicines and other substances are sometimes administered to animals during production to treat or prevent disease, encourage growth or control fertility. This can result in residues in meat after the animal has been slaughtered, or in milk, eggs, etc. taken for human consumption. Veterinary medicines can be administered by injection or other treatment of individual animals but the usual route of administration is in feed. Residues are controlled by allowing a sufficiently long period between administration and slaughter for levels in tissues to fall to acceptable levels. Veterinary residues are regulated through MRLs in much the same way as pesticides.

1.3.4 Environmental contaminants

There are many chemicals present in the environment which can find their way into food. Some substances occur naturally in soils and can be taken up by plants even though they are not plant nutrients. For example, there are always traces of heavy metals such as lead, cadmium and mercury in soils, but in areas of mineralization the concentrations can be much higher. In historic mining areas old spoil tips can sometimes result in very high 'hot-spots' of metals in soils which can be taken up by crops. Another source of heavy metals in soils is the use of sewage sludge as a soil treatment. In urban areas industrial effluents in sewage can concentrate in the solid fraction, resulting in high concentrations. Fruits and vegetables are rarely a problem
regarding human poisoning because levels are generally low. However, if animals consume vegetation with high metal concentrations, or ingest small amounts of soil whilst grazing, then they can absorb heavy metals. Animals can concentrate metals, resulting in high concentrations in organs such as the liver and kidneys. Where areas of mineralization or urbanization drain into estuaries, shellfish can also concentrate heavy metals to high levels.

Organic chemicals which are industrial products or by-products can also pass into the food chain. Some pesticides, such as DDT (now banned in most countries), can accumulate in the environment and concentrate in certain foods, particularly those with a high fat content. Certain industrial chemicals, such as polychlorinated biphenyls (PCBs), can concentrate in fatty foods in a similar way. Other substances, such as dioxins, furans and polycyclic aromatic hydrocarbons (PAHs), are produced as by-products of industrial activity, particularly during combustion. All of these chemicals are generally present at very low concentrations but may sometimes occur as 'hot-spots' of localized contamination. Environmental contaminants are usually controlled through setting regulatory limits on the concentrations permissible in certain foods. Although some countries have some limits, particularly on lead and mercury, in general there are few statutory controls on environmental contaminants in food.

1.3.5 Biogenic contaminants

Bacterial toxins usually result in microbiological food poisoning, which is beyond the scope of this book. However, bacteria, fungi and other organisms which infect food can sometimes produce toxins which may persist after cooking. Fungi are responsible for the production of mycotoxins. Aflatoxins are varieties of mycotoxins produced by the fungus Aspergillus flavus. The fungus grows on the surfaces of foods such as nuts and dried figs if they are stored in warm, moist conditions.
If aflatoxins are present in animal feed they can be concentrated into milk. Other mycotoxins include ochratoxin, which occurs in cereals and can concentrate in pigs' livers, and patulin, which can indicate the presence of poor-quality fruit in apple juice. Mycotoxins can be controlled through good practice, although few regulatory limits exist apart from those for aflatoxins.

Algae can also produce toxins which can concentrate in food chains. Algal toxins are responsible for paralytic shellfish poisoning (PSP) and diarrhoetic shellfish poisoning (DSP) following the consumption of shellfish, particularly mussels, from some locations at certain times of year. The presence of PSP and DSP toxins often causes shellfisheries to be closed during the summer months.
1.3.6 Inherent phytotoxins

Plants contain many substances which are present for the benefit of the plant rather than its consumers. Some such non-nutrient chemicals can present a toxic hazard to consumers. Many of these chemicals are believed to have roles as pesticides to protect the plant. For example, the synthetic pyrethroid insecticides are based on a chemical which is found naturally in the pyrethrum plant family, where it probably lends plants some protection from insect attack. Some inherent toxicants are released by plants after they have been damaged; some of these are thought to protect the plant against fungal attack. The function of other chemicals is not known. For example, some plants contain amino acids which are not found in proteins; these substances are suspected of being neurotoxic. Although inherent plant toxicants may present a potential risk at least as great as that presented by synthetic chemicals, there are very few regulations governing their presence in food.

1.3.7 Cooking and processing contaminants

Contamination which occurs during processing and cooking can be caused by leaks of machine lubricants and coolants, absorption of material from utensils such as copper or aluminium cooking pans, or the misuse of cleaning fluids and other carelessness. The chemical composition of food can also change during cooking as a result of interactions between chemicals present in food. Mutagenic heterocyclic amines can be produced on the surface of meat when it is cooked at high temperatures by grilling, roasting, broiling, etc. These substances are amongst the most mutagenic yet discovered and could represent a significant cancer risk to high-level consumers. However, the prospects for controlling exposure to these substances are extremely limited.

1.3.8 Food contact materials

Materials which come into contact with food can sometimes release chemicals into food (such chemicals are sometimes known as indirect food additives).
Plastics usually contain substances which are designed to maintain the physical properties of containers or films, and these plasticizers can leach into the layers of food at the interface. Waxes, inks and other substances used in packaging materials can also migrate into food. The best-quality cut glass requires a high lead content, and lead from lead crystal decanters can leach into wines and spirits if they are allowed to stand for long periods.

1.3.9 Novel foods and novel food technologies

Novel foods can relate to many types of material, ranging from a selected strain of an existing food organism, a new strain selected by traditional
breeding techniques, a new strain produced by genetic modification, or an organism not consumed by humans before. The novel food could be the organism itself, be it a micro-organism, plant or animal, or it may be a product derived from such an organism. Novel foods may contain potentially toxic substances which are analogous to inherent toxicants. Particular concerns are sometimes expressed about the use of biotechnology products in food production. Novel approaches to risk assessment may need to be developed to ensure that novel foods present no greater hazards than traditional foods. Novel technologies, such as irradiation to control microbiological growth and ohmic heating, which acts by passing an electrical current through food, also have the potential to alter the chemical characteristics of food. These technologies need evaluation to ensure that any chemicals formed do not present unacceptable risks to consumers.

1.3.10 Beneficial food chemicals

In a book about food chemical risk analysis it is important not to overlook the fact that some chemicals in food bring benefits as well as presenting potential risks. The public is very familiar with the beneficial effects of chemicals: sales of vitamins, mineral supplements, trace elements, plant extracts such as garlic oil, ginseng and evening primrose oil, and animal products such as fish oils, are growing steadily. Such substances are believed to reduce the risks of certain diseases and, although much of the evidence is mingled with folklore and commercial hyperbole, there is much to justify their serious study. A detailed discussion of protective factors in the diet is given in Chapter 10. 'Whole food toxicology' (Chapter 13) integrates the beneficial and harmful effects of naturally occurring food chemicals into a single framework. Chemicals which are conventionally considered to be beneficial can sometimes also present hazards.
Vitamin A, which is essential for good eyesight and healthy mucous membranes, can cause damage to unborn children. This is why women who are pregnant or who intend to become pregnant are advised to avoid vitamin A supplements or foods such as liver which contain large amounts of the vitamin. Other chemicals, such as many of the trace elements, copper, zinc, iron, selenium, etc., which are essential for good health, can also be toxic at higher doses. On the other hand, chemicals which are added to food to improve its appearance, flavour or keeping qualities can bring with them health benefits. Some antioxidants used to prevent the chemical breakdown of food have also been shown to be involved in the prevention of human illnesses. A precursor of vitamin A (beta-carotene), vitamin C (ascorbic acid) and vitamin E (a-tocopherol) are all commonly used food additives.
In fact, most chemicals have the potential to improve health (or at least be harmless) or to cause harm and it is only the dose which determines whether benefits or adverse effects will result. This is why understanding the dose-response relationship is such an important part of risk assessment.

1.3.11 Toxicological effects of food chemicals
Chemicals found in food are known to be contributory factors in several common diseases such as cancer and heart disease. They may also be involved in other diseases such as Alzheimer's disease and parkinsonism. There are even suggestions that food chemicals might be related to the rate of ageing. However, it has proved very difficult to reach definite conclusions as to the influence that chemicals in food have when compared to other factors such as the environment, lifestyle, occupation and, possibly most important of all, genetic disposition. In the face of such uncertainty, most regulatory regimes adopt a cautionary approach, only permitting chemicals to be added to food if the risks can be shown to be very low. Chemicals contaminating food are usually kept as low as is practicable. Toxicological testing can sometimes throw light on the possible health consequences of exposure to chemicals in food. For example, high doses given to animals might cause specific effects, such as signs of liver damage or effects on enzyme systems or general effects, such as weight loss. In such cases, chemicals are not allowed in food at levels above that associated with such adverse effects. Safety factors are usually added to allow for any uncertainty. No risk assessment technique can give a guarantee of absolute safety. There is always a small residual risk associated with uncertainties in the process. However, cases where food chemicals have been directly implicated as causes of human illness are very rare indeed. Nevertheless, this cannot be used as an excuse to avoid further innovation and the development of better risk assessment techniques. It is important to distinguish between chemicals which can cause acute effects which have their effect soon after eating the food, and those causing chronic effects where exposure over a long period of time, perhaps several decades, is necessary for effects to develop. 
A flexible approach to risk assessment is needed which takes duration of exposure into account. Hazard characterisation and risk evaluation are dealt with in much greater detail in Chapter 2.

1.4 Risk analysis

Risk analysis is relatively simple in principle. It involves examining the possible causes of damage or harm (the hazard), assessing the likelihood that harm will actually be experienced by a human population and its
consequences (the risk), and, taking all other relevant social and economic factors into account, identifying the most appropriate course of action (risk management). In practice, risk analysis is extremely complex, key factors are difficult to define and often impossible to measure, and the outputs are uncertain and sometimes contentious. It is a relatively new science which is multidisciplinary in nature and broad in its applications. Whilst some aspects are relatively well established, for the most part risk analysis is in its early stages of development and is likely to change dramatically over the next few decades - particularly in the field of food chemical safety. Much of the science of risk analysis as it is applied to food chemicals is drawn from other disciplines, particularly engineering. The principles are broadly similar regardless of the application, although there may be significant differences in detail. One important difference is that engineering risk analysis can often be built upon experience - accident and failure rates associated with different operations within a particular industry, for example. When designing new equipment, whether it be a nuclear power plant, a road bridge or an aeroplane, each component can be tested to destruction in order to determine its failure characteristics. The failure rates of the different components can be combined, along with other relevant risk factors, such as the frequency of earthquakes or of lightning strikes, to construct a fault tree. The fault tree predicts the overall reliability of the complete system, and the effect on overall reliability of altering any single component can be investigated. The output is expressed in terms of the probability of a specific event occurring within a specified period of time. Where further information is required, scale models can sometimes be built to be tested under a variety of conditions. 
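The combination of component failure rates described above can be illustrated with a minimal fault-tree calculation. The sketch below is purely illustrative: the events, gate structure and probabilities are hypothetical, and real fault-tree analysis handles dependent events and common-cause failures far more carefully.

```python
# Minimal fault-tree sketch: combining independent event probabilities
# with OR/AND gates. All events and numbers are hypothetical.

def p_or(*probs):
    """Probability that AT LEAST ONE of several independent events occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def p_and(*probs):
    """Probability that ALL of several independent events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical annual failure probabilities for a simple system
pump_failure = 1e-3
valve_failure = 5e-4
backup_failure = 1e-2
lightning_strike = 1e-5

# Top event: the system fails if (pump OR valve fails) AND the backup
# also fails, or if lightning strikes.
primary_fault = p_or(pump_failure, valve_failure)
system_failure = p_or(p_and(primary_fault, backup_failure), lightning_strike)
print(f"Predicted annual system failure probability: {system_failure:.2e}")
```

The output is exactly the kind of figure the text describes: the probability of a specific event within a specified period, and the effect of improving any single component can be explored by changing one number.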
In contrast to this, chemical risk analysis is in its infancy and this means that it is impossible to use the highly quantitative techniques developed in other risk analysis fields. This does not mean that the terms and concepts of risk analysis cannot be applied to food chemicals and neither does it mean that full quantification cannot be set as a long-term aim. In the short term the risk analysis approach provides the opportunity to apply a robust framework to food chemical safety and offers the prospect of a more reliable qualitative or semi-quantitative approach. Whilst the principles and terminology used in food chemical risk analysis are based on those developed in other disciplines, including engineering and epidemiology, it has not been possible to achieve a direct adaptation and many diverse interpretations have grown up around the world. The absence of a common language of risk has presented a serious barrier to communication about risk between professionals and with the public. The United Nations FAO/WHO Codex Alimentarius Commission has tried to harmonize the terminology of food risk assessment, and the
Table 1.2 Glossary of terms used in food chemical risk analysis

Exposure assessment: The qualitative and/or quantitative evaluation of the likely intake of chemical agents via food as well as exposure via other routes if relevant.

Hazard: A chemical agent in food with the potential to cause harm.

Hazard characterization: The qualitative and/or quantitative evaluation of the nature of the adverse effects associated with chemical agents which may be present in food. For chemical agents a dose-response assessment is normally performed.

Hazard identification: The identification of known or potential adverse health effects in humans produced by chemical agents which may be present in a particular food or group of foods.

Risk: An estimate of the likelihood of the occurrence of an adverse effect, weighted for its severity, that may result from a hazard in food.

Risk analysis: The scientific evaluation of the probability of occurrence of a known or potential adverse health effect (risk assessment) in order to be able to weigh policy alternatives in the light of all available information and identify optimal control options (risk management) and to exchange information among risk assessors, risk managers and all other stakeholders.

Risk assessment: The scientific evaluation of the probability of occurrence of known or potential adverse health effects resulting from exposure to chemicals in food.
terms and definitions used in this book are based largely on those of Codex (Table 1.2). Risk analysis has been traditionally considered to comprise three distinct, but related, phases: risk assessment, risk management and risk communication (Figure 1.1). This traditional model has been criticized because it does not allow any feedback between the activities and, in particular, risk communication is represented as a one-way process. More sophisticated models for risk analysis are now emerging.
Figure 1.1 The 'traditional' approach to food chemical risk analysis: risk assessment (all relevant scientific and technical information is assessed) leads to risk management (all other relevant socio-economic information is assessed and a decision reached), which leads to risk communication (the decision is communicated to the public and other stakeholders).
It is fundamental to the understanding of risk analysis that individuals' usage of terms reflects their perceptions of risks. This is particularly important when considering lay perceptions of risk. Members of the public may not recognize many of the definitions presented in Table 1.2 or may place different interpretations on them. For example, experts sometimes use the term 'risk' to mean the likelihood that an adverse event will occur. Lay people, on the other hand, often include the severity of the adverse event within their definition. Thus if an expert states that 'the risk of cancer is small', lay people might infer that the expert finds cancer of little consequence, and serious misunderstandings can ensue. Care must therefore be taken to ensure that all parties understand the meaning which is being placed on words.

1.4.1 Risk assessment
Risk assessment brings together all the relevant scientific information about a particular food chemical. This will include any toxicological data in the hazard characterization, and information on the foods affected and likely intakes by consumers in the exposure assessment. A more detailed description of the risk assessment process is given in Chapter 2. Food chemical risk assessment rarely culminates in a probabilistic estimate of the risk of some adverse event occurring within a given period of time. Usually, the risk characterization output is an estimate of the likelihood of consumers exceeding an 'acceptable' or 'tolerable' level of exposure defined in the hazard characterization. It is sometimes possible to make probabilistic estimates of risk (see Chapter 3) but even here the risk assessment tends to be expressed in terms of the exposure which represents an 'acceptable risk' - often taken to be one person affected in one million lives.
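The risk characterization output described here - the likelihood of consumers exceeding an acceptable level of exposure - can be sketched with a simple Monte Carlo simulation. Everything in the sketch below is hypothetical: the intake distribution, the residue concentration, the body weight and the ADI are illustrative values, not data from any real assessment.

```python
# Illustrative sketch: estimating the fraction of simulated consumer-days
# on which intake exceeds an acceptable daily intake (ADI).
# All distributions and numbers are hypothetical.
import random

random.seed(1)

ADI = 0.1            # mg/kg bw/day (hypothetical)
BODY_WEIGHT = 60.0   # kg, assumed average consumer

def simulated_daily_intake():
    """Hypothetical intake model: lognormally distributed food consumption
    (g/day) multiplied by a fixed residue concentration (mg/g)."""
    consumption = random.lognormvariate(3.0, 0.8)  # g of food per day
    concentration = 0.2                            # mg chemical per g food
    return consumption * concentration / BODY_WEIGHT

n = 100_000
exceeders = sum(1 for _ in range(n) if simulated_daily_intake() > ADI)
print(f"Estimated fraction of consumer-days above the ADI: {exceeders / n:.3%}")
```

In a qualitative or semi-quantitative assessment the same question is answered by comparing a conservative point estimate of intake with the ADI rather than by simulation.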
1.4.2 Risk management
Risk management takes the information generated in the risk assessment and translates it into a policy decision. In risk management the aim is to make decisions in the context of the real world and so it is vital that social, political and economic factors are taken fully into account. It is sometimes difficult for risk assessors to understand that socially optimal decisions may depend more on political and economic factors than on scientific ones. However, in human terms the loss of large numbers of jobs in food production industries, for example, may be less socially acceptable than the risk of a very small and unpredictable amount of ill-health amongst consumers. The degree to which socio-economic factors should be taken into account in risk management in international agreements has been the subject of some controversy and will be discussed further in Chapter 14.
1.4.3 Risk communication

Providing information to the public on the nature of risks is generally regarded as the final phase of risk analysis. It is generally accepted that the public has a right to know how risk decisions have been reached, and sometimes information which has been used in risk assessment and risk management is made available. Some regulatory authorities consider that the more technical information which is passed on to the public, the more the public is likely to accept the decisions made by the regulators. The Codex definition of risk management acknowledges that there needs to be a two-way exchange of information between consumers and regulators. However, the degree to which consumers should be part of the decision-making process is under debate. Some feel that consumers should have a role in risk assessment, others feel that they should make an input to risk management, whilst others regard consumers only as recipients of information. These issues will be discussed in greater depth in later chapters.

1.5 The nature of risk

The discussion of risk in the context of food safety is a change for some food safety authorities. In the past the aim has been to ensure that food was 'absolutely safe', i.e. associated with zero risk. This seemed to be a reasonable approach when animal tests revealed no harmful consequences of exposure and analytical methods could detect no contaminants. Now, however, it is clear that there can be no human activity which is entirely free from risks. Almost every feature of life, whether it be travelling by car, undergoing medical treatment or eating a meal, has some risks and some benefits attached to it. The aim of each individual is to optimize these risks and benefits for himself or herself, family or community. In the context of food safety, zero risk is an unreasonable aim and only achievable by stopping eating and drinking altogether.
It is difficult to produce accurate figures which reflect the actual risks from everyday activities. Table 1.3 includes some estimates of the risk of death from exposure to various risk factors which have been reported in the scientific literature. None of these figures should be taken as factual, since all are based on estimates and some estimates are more reliable than others. Deaths from coal-mining, for example, are far easier to collate than predictions of deaths from routine X-ray examinations. Risks from the diet are particularly difficult to estimate. This is because almost all of the available data are based on projections from animal studies under controlled conditions and at doses very much higher than would be found in the human diet. It is also rarely possible to relate causes of death to specific dietary factors. Probabilistic risk estimates can therefore rarely be used in food chemical risk analysis.
Table 1.3 Estimated lifetime risk of death from exposure to risk factors

Lifetime risk bands (deaths per million): >100 000; 10 000-100 000; 1000-10 000; 100-1000; 10-100; 1-10; <1

Risk factors, grouped by category and listed in approximately decreasing order of risk:

Work: deep-sea fishing; coal mining; agriculture; clothes manufacture
Transport: motor accidents; air travel; rail travel; falling aircraft
Medicine: annual mammogram; oral contraception; X-rays; anaesthesia; vaccination
Lifestyle: smoking; alcohol; passive smoking; living near a nuclear reactor
Sport: hang-gliding; mountaineering; soccer; skiing
Diet: aflatoxins; benzopyrene; botulism; caffeine

From reports published in the scientific literature.
The data in Table 1.3 reveal some interesting anomalies. For example, the risk of death from skiing appears to be much lower than that from agricultural employment. This is because skiing is a pastime undertaken for only a few weeks in the typical skier's lifetime, whereas agriculture is a full-time occupation. This example shows how difficult it is to make meaningful comparisons of risks, even at the most simple level.
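The skiing-versus-agriculture anomaly comes down to simple arithmetic: a lifetime risk is roughly the per-hour risk multiplied by the hours of exposure accumulated over a lifetime. The sketch below uses entirely hypothetical rates chosen only to make the point; they are not the figures behind Table 1.3.

```python
# Worked example (hypothetical rates): an activity that is riskier per hour
# can still carry a far lower lifetime risk if exposure time is short.

HOURS_PER_YEAR_FARMING = 2000   # full-time occupation
HOURS_PER_YEAR_SKIING = 40      # roughly one week's holiday per year
YEARS = 40                      # working (or skiing) lifetime

# Hypothetical fatality rates, deaths per million hours of activity
RATE_FARMING = 0.1
RATE_SKIING = 0.5               # riskier per hour than farm work

lifetime_risk_farming = RATE_FARMING * HOURS_PER_YEAR_FARMING * YEARS
lifetime_risk_skiing = RATE_SKIING * HOURS_PER_YEAR_SKIING * YEARS

print(f"Farming: {lifetime_risk_farming:.0f} deaths per million lifetimes")
print(f"Skiing:  {lifetime_risk_skiing:.0f} deaths per million lifetimes")
```

Even with skiing five times riskier per hour, the fifty-fold difference in exposure time leaves farm work with a tenfold higher lifetime risk, which is exactly the kind of distortion the text warns against when comparing entries in a lifetime-risk table.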
1.6 Personal decisions about risks

We all cope constantly with decisions about risks which range from the very significant, such as road traffic accidents, to the very small, such as those associated with minute traces of chemicals in foods. Thus, even commonplace decisions, such as where to cross the street or what to buy at the supermarket, involve some analysis of risk. We have to make so many daily decisions about risk that most are made unconsciously. Psychological research suggests that we each have a complex mechanism for analysing risks. This is based on our knowledge of technical facts (How busy is the road?), on the relative costs and benefits of different options (Is it worth walking to the nearest crossing?) and on personal experience, values and beliefs (Do I know someone who has been hurt in a road accident? Whose fault was it?). All of these factors add up to
define an individual's perception of a particular risk. An individual's view of the importance of a risk, when assessed in this way, may be quite different to the 'real' risk - the probability of being hit by a car - and different from another person's analysis of the same risk. Similarly, people's views about food-related risks will depend very much on their own personal perspectives.
1.7 The use of risk analysis in food safety Food chemical risk analysis is a branch of chemical risk analysis. Other strands include pharmaceutical development, industrial chemicals, household and personal products and environmental chemicals. Food chemicals share with environmental chemicals the feature that exposures tend to be at very low levels, but perhaps for very long periods - up to a lifetime. This adds an additional complication to the risk assessment process which will be discussed in Chapter 2. If it were possible to estimate the true degree of risk of ill-health associated with food-related hazards, the order of priorities would probably place issues like microbiological contamination at the top and food chemicals near the bottom. In fact, some issues, such as pesticides and food additives, have continued to command considerable public attention even though they are unlikely to pose a significant health threat as compared with potentially toxic natural components or more major risk factors such as overnutrition. In the past, when governments' advice has tended to imply that all food should be absolutely safe, consumers may have been led to believe that food should be free from all risks, i.e. be associated with zero risk. As our scientific capabilities and our understanding of risk assessment have developed, we have come to see that this is an impossible aim, since there is always some small degree of risk associated with every human activity. There will always come a point where the extra benefit to be gained from reducing the level of risk any further is outweighed by the cost to society of achieving this. Risk assessment can help to ensure that the levels of risk are the lowest reasonably achievable and that controls are kept in proportion to the risk.
1.8 Uncertainty

Much of risk analysis concerns the assessment, quantification and expression of uncertainty. There are two types of uncertainty. The first relates to lack of knowledge. Experience with many chemicals is very limited, the specific elements of human metabolism and toxicity are poorly understood
and the best models available are animals whose biological characteristics may bear little resemblance to those of humans. True levels of exposure are difficult to measure and interactions with other chemicals, both natural and synthetic, unpredictable. The costs and benefits associated with chemicals in food are extremely difficult to quantify, and the needs, opinions and expectations of stakeholders difficult to assess. In the absence of such knowledge, conservative assumptions are usually employed. The second type of uncertainty derives from the essential randomness of nature. Just as human body heights vary, so does people's ability to metabolize or detoxify chemicals. All biological systems and many social factors exhibit such variability, and in risk analysis there are many sources of this random variability. It is sometimes possible to estimate typical or average values for such variables but risk analysis is very often interested in events near the extremes. These values are difficult to measure and so conservative estimates are once again applied. The use of conservative estimates leads to estimates of risk which are intentionally overcautious. Just how far above the true risk value these conservative risk figures are is often difficult to quantify. Further discussion of the problems and management of uncertainty in food chemical risk assessment will be found in Chapters 2 and 17. It is important to remember that, because of the inherent uncertainty, risk analysis can rarely be precise. There is always an element of uncertainty and the degree of uncertainty tends to become greater as the chance of an undesirable outcome becomes smaller. This is because uncertainties about food-related risks most often arise from difficulties in specifying the exact nature of very small risks and defining their magnitude. 
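The point about working near the extremes rather than with averages can be illustrated numerically: for a skewed variable such as individual intake or metabolic capacity, a conservative high percentile sits far above the average. The sketch below uses a hypothetical lognormal distribution purely for illustration.

```python
# Sketch of why risk analysis focuses on extremes: for skewed (lognormal-
# like) variability, high percentiles lie far above the mean and median.
# The distribution parameters are hypothetical.
import random

random.seed(42)
intakes = sorted(random.lognormvariate(0.0, 1.0) for _ in range(10_000))

mean_intake = sum(intakes) / len(intakes)
p50 = intakes[len(intakes) // 2]            # median
p975 = intakes[int(0.975 * len(intakes))]   # conservative high percentile

print(f"median {p50:.2f}, mean {mean_intake:.2f}, "
      f"97.5th percentile {p975:.2f}")
```

A 'conservative estimate' of the kind described in the text often amounts to substituting such a high percentile for the average, which is one reason the resulting risk figures are intentionally overcautious.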
Uncertainties are usually allowed for by the inclusion of safety factors, so that estimates of risk are always conservative and tend to err on the side of consumer safety. This means that the outputs of risk analysis are rarely accurate - they often only provide a representation of a possible worst case. Ultimately, risk analysis can only provide information to inform and support decision-making. It can never be a substitute for responsible and well-informed judgement.

1.9 Conclusion

Almost all chemicals present in food have the potential to cause toxic effects in large enough doses. In fact, there are very few examples of food chemicals causing any measurable harm. Nevertheless, governments, international organizations and food producers have a duty to introduce measures to protect consumers. Consumers are often concerned about the potential risks associated with chemicals in food and demand that action should be taken. There is a clear need to maintain a balance between
important health issues which might be neglected because of a lack of consumer concern and issues of concern to consumers which might be neglected because experts do not view them as significant hazards to health. It is therefore very important to ensure that a thorough scientific understanding of the nature of risks is developed as the basis for decision-making. It is also necessary to understand the socio-economic context and to establish a dialogue with consumers to better understand their perceptions of risks. Risk analysis techniques can provide a framework within which such activity can take place so that consumer concerns can be carefully balanced against expert judgements about the true degree of risk. Such techniques also allow other relevant factors, such as the cost and practicality of different policy options, to be taken fully into account.
Further reading

Coultate, T. (1989) Food: The Chemistry of its Components, 2nd edn. The Royal Society of Chemistry, Cambridge.
Coultate, T. and Davies, J. (1994) Food: The Definitive Guide. The Royal Society of Chemistry, Cambridge.
Rodricks, J.V. (1992) Calculated Risks (The Toxicity and Human Health Effects of Chemicals in Our Environment). Cambridge University Press.
Royal Society Study Group (1992) Risk: Analysis, Perception, Management. The Royal Society, London.
The British Medical Association (1987) Living with Risk. John Wiley & Sons, Chichester.
Part Two Risk Assessment
2 Food chemical risk assessment D.J. BENFORD and D.R. TENNANT
2.1 Introduction The aim of food chemical risk assessment is to provide advice to risk managers on the likely levels of risk associated with given levels of exposure to chemicals via food. In this chapter it is presented as a stepwise procedure, although in practice risk assessment rarely follows such formal lines (Figure 2.1). There is much to be said, however, for separating out the components of risk assessment. This increases clarity, enhances objectivity and improves the overall credibility of the assessment. One important principle is that the two main strands, hazard characterization and exposure assessment, should be isolated from each other and their outputs not brought together until the risk characterization step. This will raise objectivity by preventing the exposure analyst's judgement from becoming clouded by knowledge of the acceptable or tolerable intake and keeping the toxicologist's judgement free from knowledge of likely intakes. Until now, risk assessment of food chemicals has concentrated heavily on defining exposure or intake limits associated with negligible levels of risk, in particular the acceptable daily intake (ADI), and checking to see that no consumers exceed this limit (World Health Organization, 1987a). This chapter will identify some sources of uncertainty in this risk assessment process. The aim here is not to undermine the current approach by suggesting that it is putting consumers at risk - history has provided scant evidence of past mistakes. Where the system does err, it probably does so on the side of safety - the system is over-conservative in order to minimize the possibility of error in the light of such uncertainty. A consequence of this cautious approach is that it is often extremely difficult to obtain a balanced view of the true risks and benefits. This means that the risk management system is heavily weighted against permitting additives and towards controlling contaminants. 
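The acceptable daily intake mentioned above is conventionally derived by dividing an experimental no-effect level by default uncertainty (safety) factors. The sketch below shows that arithmetic; the tenfold defaults for animal-to-human extrapolation and for human variability are standard practice, but the NOAEL value is purely illustrative.

```python
# Sketch of the conventional NOAEL-to-ADI conversion. The 10 x 10 = 100
# default uncertainty factor is standard practice; the NOAEL here is a
# hypothetical illustration.

def adi_from_noael(noael_mg_per_kg_day,
                   interspecies_factor=10.0,   # animal -> human extrapolation
                   intraspecies_factor=10.0):  # variability between humans
    """Acceptable daily intake derived from a no observed adverse effect
    level, both expressed in mg/kg body weight per day."""
    return noael_mg_per_kg_day / (interspecies_factor * intraspecies_factor)

# Example: a NOAEL of 10 mg/kg bw/day gives an ADI of 0.1 mg/kg bw/day
adi = adi_from_noael(10.0)
print(f"ADI = {adi} mg/kg bw/day")
```

Checking that no consumers exceed this limit then reduces to comparing estimated intakes, on the same body-weight basis, against the ADI.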
Improvements in risk assessment will lead to more accuracy and in turn to a more balanced approach to risk management which will bring benefits to industry and consumers whilst maintaining the highest standards of food safety. Current practice in chemical risk assessment is built upon traditional toxicology, in particular animal-based in vivo toxicology, toxicokinetics and, to a lesser extent, epidemiology. These disciplines are largely empirical, whereas risk assessment demands a more predictive approach. In the
Figure 2.1 Framework for food chemical risk assessment: hazard identification and prioritization leads into two parallel strands - hazard characterization (supported by dose-response assessment) and intake estimation (drawing on dietary survey data and on levels and occurrence in foods) - whose outputs are brought together in the risk characterization.
future, risk assessments should provide risk managers with more information about uncertainty and the degrees of confidence that can be had in acceptable or tolerable intakes and in exposure estimates. This will require close examination of each step in the risk assessment procedure, seeking improvements and attempting to characterize and, where possible, quantify the levels of uncertainty. The following sections of this chapter describe current approaches used in food chemical risk assessment. Later sections of this chapter will discuss sources of uncertainty in the current approach, some more recent developments and opportunities for research.
2.2 Current approaches to risk assessment 2.2.1 Hazard identification and prioritization The first step in the risk assessment process is the identification and prioritization of potential hazards (Figure 2.1). Many potential hazards are drawn into the risk assessment system through applications for product approvals and licences. For such products the applicant is required to supply all the information required to complete a risk assessment. However, for some potential hazards, including natural flavouring and colouring substances, other natural constituents of food, certain contaminants and the products of interactions between food chemicals, there are often less data available on which to base decisions and set priorities for further investigations. Potential hazards might also be identified during food surveillance activities (see Chapter 17). This is particularly important for environmental contaminants where there may be no reliable means of predicting whether chemicals could be passed through the food chain. Occasionally, problems occurring overseas or reported in scientific journals can initiate risk assessment. It is also important to monitor public concerns about specific food
chemical hazards and, when appropriate, to respond by undertaking a risk assessment (see Chapter 17). After hazards have been identified and prioritized, the risk assessment procedure splits into two separate strands, the first of these being hazard characterization. 2.2.2 Hazard characterization Hazard characterization is normally conducted by expert toxicologists. All available animal, in vitro and human toxicity data are assessed in order to identify the toxicological consequences that might be expected to result from human exposure and, if possible, to postulate a biological mechanism. Having identified a toxicological endpoint, it is then necessary to characterize the dose-response relationship. There are two general models used to characterize dose-response relationships. The linear model assumes that the toxicological effect (response) is directly related to the dose (Figure 2.2). It is assumed that the dose-response curve passes directly through the origin (i.e. that no dose has no effect but even the smallest dose has some, albeit small, effect). The potency of the toxicological effect is related to the slope of the curve at any given point. This model is used to describe such endpoints as genotoxicity (direct damage to genetic material), where there is assumed to be no 'safe' dose. As we shall see later in this chapter, this model may sometimes give a misleading representation of the true picture. Many potentially toxic chemicals which enter the body are deactivated by enzyme systems located mostly in the liver. These systems have evolved
Figure 2.2 Dose-response (effect) relationships, plotting effect against dose: (a) non-thresholded toxin; (b) thresholded toxin.
over many human generations to cope with the intake of chemicals which occur naturally in food. A characteristic of these systems is that they have only a finite capacity. This means that up to a limit they are able to render chemicals harmless but there is a threshold above which they are unable to cope. The threshold dose-response model therefore shows a 'hockey-stick'-shaped curve (Figure 2.2b). Point 'x' in Figure 2.2 is the no observed adverse effect level (NOAEL) which is determined in the most sensitive mammalian species. The NOAEL is usually expressed in terms of a quantity ingested per day per unit body weight (e.g. 10 mg/kg(bw)/day). This is converted to an ADI or 'reference dose' (RfD) for humans. The ADI is also expressed on a body weight basis (e.g. 0.1 mg/kg(bw)/day). Many environmental contaminants are known to have toxicological characteristics which relate to periods of exposure much longer than 1 day. In such cases a provisional tolerable weekly intake (PTWI) may be set (e.g. 0.7 mg/kg(bw)/week). The second strand in the risk assessment framework, which runs parallel to hazard characterization, is exposure assessment. In order to estimate possible intakes of chemicals via food it is necessary to have a knowledge of the levels and occurrence of chemicals in foods and the amounts of those foods actually eaten. Intake estimation is discussed in detail in Chapter 8, and so only a brief introduction will be included here.

2.2.3 Occurrence information

Information on the levels and patterns of chemicals present in foods depends on the type of food chemical. For additives, for example, it is possible to assume that all foods which are permitted to contain an additive actually do, at the maximum permitted levels. This is likely to result in a considerable overestimate of the true levels, however, since the use of additives is often related to brand, and many products which could legally contain an additive may contain none at all.
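The worst-case assumption just described - every permitted food containing the additive at its maximum permitted level - amounts to a simple summation over the permitted foods. In the sketch below every food, permitted level and consumption figure is hypothetical.

```python
# Illustrative upper-bound ("worst case") additive intake estimate: assume
# every permitted food carries the additive at its maximum permitted level.
# All foods, levels and consumption amounts are hypothetical.

foods = {
    # food: (daily consumption in g, maximum permitted level in mg/kg)
    "soft drinks": (500, 300),
    "desserts": (150, 250),
    "confectionery": (50, 500),
}

BODY_WEIGHT = 60.0  # kg, assumed average consumer

# g * (mg/kg) / 1000 converts each food's contribution to mg/day
intake_mg = sum(g * mg_per_kg / 1000 for g, mg_per_kg in foods.values())
intake_per_kg_bw = intake_mg / BODY_WEIGHT
print(f"Upper-bound intake: {intake_mg:.1f} mg/day "
      f"({intake_per_kg_bw:.2f} mg/kg bw/day)")
```

If even this deliberately exaggerated estimate falls below the ADI, no further refinement is needed; otherwise the ingredient-list or survey data discussed next are used to sharpen the estimate.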
A better representation of the true level of exposure can be gained by gathering data on the additives listed in specific products' lists of ingredients. This is time-consuming and may still not be absolutely reliable, since the level of use cannot be assumed to be the maximum permitted and because product formulations change. For an accurate estimate of the levels of additives present in food products, it is necessary either to gain the co-operation of industry in providing the data or to conduct an analytical survey. The occurrence data selected for intake estimation will also depend on the purpose for which the risk assessment is being done. When approving pesticides, for example, it is prudent to assume that the levels present in food will be the maxima expected after following good agricultural practice, which are reflected in statutory maximum residue levels (MRLs).
Whilst this will allow for all possible situations, it is unrealistic to assume that the levels of pesticides in all foods will be at the MRLs. For a more accurate intake assessment, to assess the true level of risk to consumers, it is more appropriate to use field trials data or analytical surveillance results. Post-marketing survey data are expensive to produce and should therefore be targeted on areas where problems are most likely to occur. Post-marketing surveillance provides a feedback loop from risk management, thus providing an effective quality control mechanism. For contaminants, maximum values from surveillance data are often used in the setting of maximum tolerable levels (MTLs), so it is inappropriate to use MTLs to make accurate estimates of intake. Surveillance data must normally be used, but for many contaminants which are widespread at low levels in foods this presents a problem. It is not economically viable to analyse every food for every possible contaminant. Instead, foods are grouped with similar foods which would be expected to share similar contaminant levels. Only when the average level of a contaminant in a group is unexpectedly high would it be necessary to analyse individual group members. In some cases it may be possible to use data published in the scientific literature or mathematical modelling of transfer through food chains to conduct preliminary intake assessments. However, such methods are unreliable and should generally only be used for the purposes of hazard identification and prioritization.

2.2.4 Food consumption data

Most surveys of food consumption are conducted for purposes other than risk assessment. Many are surveys of household expenditure, whilst others are directed towards gathering nutritional information. This means that data are rarely in a form which is immediately suitable for estimating intakes of food chemicals. One particular problem is that many data are collected on a household basis.
This means that it is impossible to estimate the intake of chemicals by individuals. It is particularly important to base intake estimates on the consumption patterns of regular high-level consumers of particular foods or groups of foods, since these are the individuals most likely to have intakes above acceptable or tolerable levels. In some cases it is necessary to go further and to conduct special surveys of particular 'critical groups' such as diabetics, vegetarians and ethnic groups. 2.2.5 Intake estimation Exposure and consumption assessments are combined to create a range of estimates of intake depending on the data chosen, the time period over which the intake is averaged and the point on the distribution of values used to express the estimate. The intake estimate is usually presented in
the same format as the acceptable or tolerable intake (i.e. quantity of chemical per unit body weight per day) to facilitate a direct comparison. Intakes are usually presented for the mean consumer (i.e. excluding those who do not consume the foods in which the substance occurs) and for 'high-level' consumers. The definition of high-level consumers varies between different regulatory authorities but is normally either the 90th, 95th or 97.5th centile of the distribution of individual intake values. A high centile, rather than the maximum value, is usually chosen because maximum values are subject to great uncertainty. Furthermore, it is assumed that maximum intakes are unlikely to be maintained over long periods of time and are therefore not representative of high-level intakes. If estimates of acute intake are presented, then the maximum is normally quoted because the argument about long exposure periods no longer holds. 2.2.6
Risk characterization
In risk characterization the potential intake is compared with the toxicologically acceptable intake limit. The time period over which intake is averaged must be carefully chosen to reflect whether acute or chronic effects are being considered and the biological half-life of the substance, where this indicates the potential for accumulation. Age at exposure may also be an important factor, particularly if this occurs during life as a fetus or during infancy, when there may be particular concerns. The core of risk characterization is a comparison of estimated intakes with the toxicologically acceptable intake. This appears as a simple analysis but great care must be taken when presenting advice to risk managers. As we have seen, great uncertainties are introduced at every step in the risk assessment procedure, and risk characterizations should be seen as indicators only and all the uncertainties carefully spelled out. Risk characterizations, as they are presently derived, should never be assumed to present accurate representations of the real situation. An essential component of risk characterization should, therefore, be the analysis of uncertainty in the risk estimate. When the risk estimate is given to risk managers it must include a description of the underlying assumptions, inherent uncertainties and conservatisms. Without this information, risk managers are unable to perform their task.
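The comparison at the heart of risk characterization, together with the handling of multiple sources of variability, can be sketched as a simple Monte Carlo simulation. The distributions and the ADI value below are invented for illustration; a real assessment would fit them to survey and toxicology data.

```python
# Minimal Monte Carlo sketch of risk characterization: sample concentration,
# consumption and body weight from (invented) distributions, build an intake
# distribution, and compare its 97.5th centile with an (invented) ADI.
import random

random.seed(42)
ADI = 0.05  # mg/kg(bw)/day, illustrative only

def one_intake():
    conc = random.lognormvariate(-4.0, 0.6)    # mg chemical per kg food
    eaten = random.lognormvariate(-1.0, 0.5)   # kg food per day
    bw = max(random.gauss(70.0, 12.0), 30.0)   # kg body weight, floored
    return conc * eaten / bw                   # mg/kg(bw)/day

intakes = sorted(one_intake() for _ in range(20_000))
mean_intake = sum(intakes) / len(intakes)
p975 = intakes[int(0.975 * len(intakes))]

print(f"mean intake:     {mean_intake:.6f} mg/kg(bw)/day")
print(f"97.5th centile:  {p975:.6f} mg/kg(bw)/day")
print(f"exceeds the ADI: {p975 > ADI}")
```

Because each simulated individual combines typical and extreme values at random, the 97.5th centile of the resulting distribution is a more realistic 'high-level' estimate than multiplying worst cases together.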
2.3 Sources of uncertainty in hazard characterization There are two sources of uncertainty in risk assessment. First, there is uncertainty which is derived from the natural variability inherent in all systems. Good examples are variability in body weight and individuals'
ability to detoxify chemicals. With this type of uncertainty it is important to take steps to measure the degree of variability and ensure that values used in calculations are representative. If a conservative approach is adopted so that the most extreme values are used throughout, then unrealistically high estimates of risk will result. Methods are needed which can handle multiple sources of variability and create realistic estimates of the true upper level of risk. The second source of uncertainty derives from a lack of understanding of the system being studied. Are the animal models appropriate? Are we studying all relevant toxicological endpoints? This kind of uncertainty is much more difficult to measure and control, and in the absence of new knowledge conservative safety factors must continue to be used. 2.3.1
Uncertainty analysis
There are many different classes of food chemical that could be considered from a toxicological viewpoint, including additives, natural constituents, mycotoxins and endotoxins, contaminants arising from processing (including cooking) or packaging, and residues of agricultural chemicals and veterinary medicines. 'Nature-identical' additives and genetically engineered foodstuffs are more recent issues that present even greater challenges in risk assessment. Figure 2.3 shows a simplified scheme of the routes by which chemicals in food may have adverse effects on the mammalian organism. The ADME processes of absorption, distribution around the body, metabolism (biotransformation) and elimination are often referred to as toxicokinetics, whereas the toxicodynamic properties of a chemical are defined by its potential to interact with cell constituents or functions. Most chemicals will proceed through a number of these routes, with potential formation of many different metabolites and interaction with different tissues. As the exposure to a particular chemical increases, it becomes more likely that the detoxification and repair mechanisms will become overwhelmed and the pathways leading to toxicity may predominate. Within the definition of toxicity, there are many possible effects, including acute and chronic effects, allergenicity and cancer. The particular organ or system affected may be determined by the toxicokinetic properties, e.g. the tissue exposed to the highest concentration of the chemical or its active metabolite, or the toxicodynamic properties, e.g. interaction with a specific cellular function. Within this scheme are many factors subject to genetic control with potential to cause differences between species and individuals. In addition, the capacity for metabolism and repair may be modified by many other factors such as age, hormonal status, disease status and exposure to other chemicals in the diet or environment.
[Figure 2.3: flow diagram. Chemical in food → ingestion → either not bioavailable, modified by GI secretions or microflora (→ excretion), or absorbed intact → hepatic portal vein → liver → unchanged, stable metabolites or reactive metabolites → interaction with cell constituents/functions → repair, excretion or toxicity; via the circulatory system → extrahepatic organs/systems (further metabolic activation and/or detoxication possible) → interaction with cell constituents/functions → repair or toxicity.]

Figure 2.3 Schematic representation of hazard characterization.
Hazard assessment for food chemicals, as with any other class of chemical, would ideally be based upon a comprehensive and scientifically relevant package of toxicokinetic and toxicodynamic studies in experimental animals and in vitro systems, complemented by controlled studies in volunteers and epidemiological investigations. In practice such a complete database is never available and the nature of the data actually available tends to vary with the class of food chemical. For recently introduced food additives, there may be a relatively complete package of experimental data, although the effects seen at high doses may be irrelevant to very low-level exposure, and there are likely to be few or no human data. For some packaging constituents, there may be a large amount of data, but designed more with a view to classification regulations and worker protection than for low levels of contamination in foods. Information on mycotoxins is variable, with only a very few being well investigated. There are numerous possible natural constituents and processing products, few of which will have been defined toxicologically and many of which will not have been identified. Whilst there are good scientific arguments that some food components (such as those generated during normal mammalian intermediary metabolism) do not have toxicological implications, it cannot be assumed that natural is safe. Thus, a major issue for natural constituents is prioritization with respect to the need for toxicology profiles. An additional consideration is the requirement to avoid unnecessary animal experimentation. Each step in the risk assessment process involves assumptions that are associated with uncertainty. This may be related to many factors, from technical aspects of the animal experiments used as the primary basis for decisions, to genetic variability and external modifying factors, e.g. those relating to lifestyle, of the human population, to uncertainties over intake and bioavailability. 
According to Young (1989), 'the greater the uncertainty about a given effect, the more likely it is to be overestimated'. This is a natural result of the cautious (or conservative) approach to uncertainty, which involves worst-case assumptions at every stage. In practice the worst case is, by definition, a minority event, and a combination of all possible worst cases in a particular instance is statistically improbable. The purpose of this section is to review the main causes of uncertainty related to the stages of hazard identification and characterization. Guidelines for toxicity testing of food additives are available and will not be described in detail here (Scientific Committee for Food, 1980; World Health Organization, 1987a). 2.3.2 Animal studies The purpose of animal toxicology is to identify the nature of the toxic effects of a substance and to characterize the dose-response associated with the most critical effects.
It is often stated that the ideal species for toxicology studies is the species in which the toxicokinetics most resemble those in humans. This is not feasible, because such comparisons could only be made subsequent to extensive testing, including in humans, which may not be permitted for some chemicals. For reasons of practicality, most testing is conducted in rodents. The strain of animal is most likely to be selected on the basis of historical precedent within a given laboratory (to allow comparison with historical controls). For carcinogenicity studies, there is a need to balance the requirements of high responsiveness to the test chemical and a low incidence of spontaneous tumours. Large strain differences in sensitivity have been shown with some carcinogens, which may be attributed to differing capacities for metabolic activation and detoxication and/or to polymorphisms in key regulatory genes, and it has been proposed that use of multiple strains would be more informative without increasing overall numbers of animals used (Festing, 1995). Dose levels are selected in order to ensure that a toxic effect can be detected in a small group of animals. With relatively innocuous substances, like most food chemicals, this means giving extremely high doses, many orders of magnitude greater than actual human intake. The result may be non-specific signs of toxicity, such as a decrease in body weight gain. The endpoints of toxicity seen in the animals at very high doses are not likely to be relevant to responses at much smaller doses. This is particularly true if the compound under investigation is administered by gastric intubation rather than mixed in the diet. A single bolus administration once per day will result in initially high blood levels which will then decrease until the dose is repeated the following day, whereas dietary administration will produce lower, but more sustained blood levels.
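The kinetic difference between gavage and dietary dosing can be illustrated with a toy one-compartment model; the half-life and dose below are arbitrary.

```python
# Toy one-compartment model contrasting bolus (gavage) and dietary dosing:
# the same daily dose gives a high peak when given all at once, but lower,
# sustained levels when spread through the day. Parameter values are
# arbitrary illustrations.
import math

half_life_h = 4.0
k = math.log(2) / half_life_h      # first-order elimination rate (1/h)
daily_dose = 24.0                  # arbitrary units

def level_after_bolus(t_h):
    """Body burden t hours after a single daily bolus dose."""
    return daily_dose * math.exp(-k * t_h)

def steady_state_dietary():
    """Mean body burden if the same daily dose is taken in continuously."""
    rate_per_h = daily_dose / 24.0
    return rate_per_h / k          # classic steady state: input rate / k

peak = level_after_bolus(0.0)      # 24.0 immediately after gavage
trough = level_after_bolus(24.0)   # six half-lives later: 24/64 = 0.375
sustained = steady_state_dietary()
print(peak, round(trough, 4), round(sustained, 2))
```

The bolus curve swings from a peak of 24 to a trough near zero each day, while the dietary pattern holds a steady intermediate level, as the text describes.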
Whilst we have effective methods for detecting toxic effects in specific organs, using combinations of biochemical and morphological techniques, it is much more difficult to detect subtle interactions with systems, e.g. the immune, endocrine, reproductive, nervous and cardiovascular systems. It could be argued that such functional interactions are more likely to be low-dose phenomena than is overt organ toxicity. It could also be postulated that such effects could result in some of the unexplained syndromes of neurobehavioural problems and fatigue seen in modern Western society. The fact that we have no evidence for associating these with dietary constituents could equally mean that there is no association, or that we do not have adequate methods to detect and define the effects. Extrapolation of results of animal experiments to humans is a source of continuing debate. In some areas, such as hepatotoxicity, we have extensive knowledge of the differences between humans and experimental animals, and the ways in which effects are modulated by factors such as metabolic activation and detoxication. Knowledge of the different human
and rat complements of isozymes of, for example, cytochrome P450 is expanding rapidly, as is the understanding of the role of the different enzymes in the toxicities of various classes of chemical and regulation of those enzymes by exogenous factors. Understanding of the molecular biology of such effects facilitates development of new models to investigate chemical-enzyme interactions, such as the computer modelling technique, COMPACT, described in Chapter 7. In contrast, there is continuing uncertainty concerning the relevance to humans of animal models for effects on the reproductive system (Conning, 1990), and we do not have adequate experimental models for investigation of food intolerance. There are also some anatomical differences of relevance. For example, rats and mice have a forestomach, whereas humans do not. The relevance to humans of effects seen in the forestomach of rodents is not clear, but it would be imprudent to dismiss them without due consideration. Effects on the forestomach could be indicative of the potential to interact with the first tissue with which prolonged contact occurs. Alternatively, the oesophagus is morphologically similar to the forestomach, and could be subject to similar responses. Another extremely important factor in species extrapolation for food chemicals is the role of the gut microflora, for which the endogenous populations vary between species. The rat is a particularly poor model for humans, because significant numbers of bacteria occur in the upper intestinal tract of rats but not humans. Potential interactions may involve actions of the microflora on the chemical, i.e. biotransformation, or actions of the chemical on the microflora. Modification by a food chemical of the relative proportions of different species of microflora may be a potential source of interactions with other food constituents, including nutrients.
Particularly useful in determining whether an effect seen in animals is relevant to human exposure are investigations of mechanisms of effect, including determination of whether toxicity is due to the parent compound or to a metabolite. In practice, such detailed information is rarely available. There is also likely to be uncertainty over the shape of the dose-response curve, which is frequently based upon only three doses. Establishment of the NOAEL is dependent upon the selection of dose levels used in animal studies. A wide range of dose levels, as may result from selecting a top dose that has some effect, and a low dose that is relevant to human exposure, would produce a value for the NOAEL, but no information as to how close this value would be to an actual effect level, and hence the level of safety margin inherent in establishing the NOAEL. Crump (1984) proposed the use of a 'benchmark dose', which is mathematically calculated to produce a predetermined increase in response rate of a given effect. This approach would reduce the uncertainty relating to determination of the NOAEL but has not been widely used.
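The benchmark-dose idea can be illustrated with a one-hit dose-response model, for which the benchmark dose has a closed form. The model and its potency parameter below are assumptions for illustration, not a reproduction of Crump's (1984) procedure.

```python
# Benchmark dose sketch: under a one-hit model, extra risk over background is
# extra(d) = 1 - exp(-b * d), so the dose giving a predetermined extra risk
# (here 10%) can be solved for directly. The potency parameter b is invented.
import math

b = 0.02  # per mg/kg(bw)/day, illustrative fitted potency

def extra_risk(dose):
    """Extra risk over background under the one-hit model."""
    return 1.0 - math.exp(-b * dose)

def benchmark_dose(bmr=0.10):
    """Dose producing a benchmark response (extra risk) of bmr."""
    return -math.log(1.0 - bmr) / b

bmd10 = benchmark_dose(0.10)
print(round(bmd10, 3))                # dose at 10% extra risk
print(round(extra_risk(bmd10), 3))    # check: recovers 0.1
```

Unlike a NOAEL, which can only be one of the doses actually tested, the benchmark dose interpolates along the fitted curve, so it carries information about how steeply risk rises.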
Finally, it should be noted that all the above comments relate to studies of a single chemical in isolation. When a chemical is a constituent of a complex food matrix, its effects may be modulated by other components of the food. This issue is considered further in section 2.3.7 and in more detail in Chapter 10. 2.3.3 In vitro studies The role of in vitro genotoxicity studies in hazard identification is well established. They are used, mainly in a non-quantitative fashion, as an indication of a potential to cause inheritable changes either in the germ cells, which could lead to genetic abnormalities in the offspring, or in the somatic cells, a possible initial event in development of cancer. Although it cannot be certain that this genotoxic potential would be expressed in humans consuming low levels of a substance, it is clearly an undesirable property for a substance that is to be added to food, either directly or as a contaminant from packaging material. Interpretation of the relevance of genotoxicity becomes much more complex in the case of naturally occurring constituents and ubiquitous environmental contaminants, particularly as many foods also contain constituents that are able to modulate genotoxic activity. Many other systems have been proposed as alternatives to animals for assessing toxicity, and some of those most applicable to food chemicals are reviewed elsewhere in this book. Approaches include computer modelling, expert systems, structure-activity relationships and many different in vitro assays, ranging from undifferentiated cell lines, with basal cytotoxicity endpoints, to complex culture systems with sensitive measures of cell function as endpoints. As yet, none of these is truly validated as a replacement for animal experiments. Those tests that are closest to achieving regulatory acceptance are for assessing topical effects (skin and eye irritancy, corrosivity). These are hardly relevant for food chemicals. 
Interpretation of the results of in vitro tests for systemic toxicity is much more problematic. The value of in vitro studies in investigating mechanisms of toxicity of individual chemicals is well accepted. But, as yet, there are no recognized procedures for the application of in vitro assays in hazard identification for effects other than irritation and corrosivity. Partly, this is a problem of toxicokinetics. If we know that a compound causes a particular effect in a given tissue in vivo, then we can devise suitable models to investigate that effect in vitro. However, if we have no information on the distribution of a chemical within the organism, we cannot know whether an effect that is seen in vitro will be expressed in vivo. Ultimately, the use of physiologically based pharmacokinetic (PBPK) models may help us to identify the most appropriate cell systems for in vitro studies. In many cases, however, it is likely that we do not yet have
appropriate models. As noted above with respect to animal studies, identification and investigation of specific target organ effects is more straightforward than investigation of effects on functional systems. This is even more true in in vitro systems, because the complex interactions between different components of, for example, the immune system cannot be reproduced. Many approaches to in vitro toxicology attempt to use the results in a quantitative fashion. Correlations are produced between data generated in vitro and in vivo in experimental animals. Whilst there may be similarities in rankings for series of chemicals in some instances, this is unlikely to generate information that we can use in risk assessment. Inadequacies in the in vivo databases, and uncertainties over their relevance to humans mean that the validity of such approaches will always be subject to criticism by traditionalists. A more rational and acceptable approach may be to incorporate the use of in vitro data, in a qualitative or semi-quantitative fashion, into the overall process of hazard identification, as is currently the situation with genotoxicity assays. Until we have established sound scientific approaches to the application of in vitro assays in risk assessment of individual chemicals, it is unlikely that they will have widespread applications for food chemicals. We will need more information on the bioavailability of food constituents in order to ensure that we study a representative sample. In the near future, the most valuable use of in vitro systems in risk assessment for food chemicals is likely to be in studying mechanisms of those interactions for which there is good evidence of occurrence in humans. Obviously, it is preferable to use in vitro models that are relevant to the in vivo target organ and effect. 2.3.4 Human studies Human studies may take the form of either volunteer studies or epidemiological surveys. 
The scope for use of volunteers in hazard characterization of food chemicals is extremely limited. Studies on particular individuals may be used to investigate specific causes of food intolerance reactions, but sensitization studies in volunteers would not be considered ethically acceptable. Nevertheless, pre-market human volunteer studies of substances to be intentionally added to food would provide a logical and effective long-stop and give further reassurance about the safety-in-use of such chemicals. No medicine, no matter how innocuous, would ever be allowed onto the market without some human trials data. Food additives, on the other hand, can be added to food without any direct evidence of their effects on consumers. Epidemiological studies are relatively insensitive and only likely to be of value when large populations of exposed and unexposed individuals can be identified. Even in these circumstances it is unlikely that much
information on intake levels would be available. Attempts have been made to correlate health effects with dietary chemicals by means of epidemiology. The most significant correlation is seen with cooked meat and colon cancer (IARC, 1993), but while this could be due to heterocyclic amines, it is not possible to discount the contribution of saturated fats and/or overnutrition. There is a need to establish the role of food chemicals in human health problems in order to validate approaches to risk assessment. Investigations in individuals exhibiting genetic polymorphisms are likely to be extremely valuable in this respect. Development of biomarkers of exposure and effect, as described in Chapter 4, should ultimately allow us to establish the relationship (or lack of one) between intake of certain chemicals and causation of health problems, thereby paving the way to greater certainty in risk assessment for food chemicals. 2.3.5
Thresholded toxins
For most toxicological endpoints, it is generally agreed that toxic effects are only expressed when exposure exceeds a threshold level. Thus, with reference to Figures 2.2 and 2.3, at low levels of exposure metabolic detoxication processes and cellular repair processes will be effective, and it is only when exposure reaches such a level that these defence mechanisms are overwhelmed that toxicity will result. As outlined above, the traditional approach to risk assessment for chemicals exerting threshold effects has been to derive an ADI by applying nominal safety factors to the NOAEL to allow for uncertainty (Lu and Sielken, 1991). In the case of food chemicals, this has most commonly been a factor of 10 to allow for interspecies variation (if good human data are not available) and a further factor of 10 to allow for interindividual variation in the human population, resulting in the safety factor of 100 being applied to the NOAEL established in animal studies. The NOAEL relates to the effect exhibited at the lowest dose as identified in the most sensitive species, unless there are justifiable scientific reasons to discount the relevance to humans. An additional factor of 10 could be applied if considered necessary by the expert panels, e.g. in the case of severe, irreversible effects. Whilst there is no evidence that this approach has failed in its general purpose of protecting the consumer, it is frequently criticized as being non-scientific, and it does not allow the risk manager to make judgements on, or communicate to the public about, the risks, for example, of brief excursions above the ADI. Barnes and Dourson (1988) proposed the use of the term 'reference dose' (RfD) to which uncertainty factors should be applied up to a theoretical maximum of 10⁵. This figure relates to 10-fold factors applied where necessary for each of the following:
• variation in sensitivity among the human population;
• extrapolation from results of long-term animal studies;
• extrapolation from results of short-term animal studies;
• extrapolation from the lowest observed adverse effect level (LOAEL) if no NOAEL is available.

In addition, a modifying factor of up to 10-fold could be applied according to expert scientific judgement relating to factors not already taken account of, such as the quality of the key study. In the above approaches, the value of 10 for each factor appears to have been selected on an arbitrary basis. More recently, two alternative approaches have been suggested. Renwick (1991, 1993) subdivided each of the two 10-fold factors into two separate factors to allow for toxicokinetics (delivery of the substance to the site of toxicity) and toxicodynamics (potency or activity of the substance at the site of toxicity). Based upon observations of species and interindividual variation, he proposed default factors of 2.5 for toxicodynamics and 4.0 for toxicokinetics. These individual components could then be modified according to the quality of the available data, with the added advantage that the potential impact of additional data could be identified. A novel approach, initially proposed by Lewis et al. (1990) and subsequently modified by the Houston Regional Monitoring Corporation as described in ECETOC (1995), suggested the following algorithm:

NAELhuman = (NOAELanimal × S) / (R × H × Q1 × Q2 × Q3 × U × (C))

where:
NAELhuman = the level predicted to have no adverse effect in the human population
NOAELanimal = the no observed adverse effect level determined in a relatively small group of experimental animals
S = scaling factor (for toxicokinetic differences between species)
R = interspecies adjustment factor (toxicodynamics)
H = heterogeneity factor (for greater interindividual variation in humans than in experimental animals)
Q1 = critical human health factor (relevance to humans of the critical effect observed in animal studies)
Q2 = study duration factor (for lifelong exposure of humans)
Q3 = LOAEL-to-NOAEL factor (if appropriate)
U = uncertainty factor (to account for residual uncertainty in the data)
(C) = severity factor (depending on the severity of the critical effect; may be omitted on the grounds that it is non-scientific)
A particularly notable aspect of this approach is that the default values for S and R are unity; that is, in the absence of information to the contrary, it is assumed that humans and the animal species used to identify the
NOAEL are equally sensitive. This is clearly a less cautious approach than taken by other methods, where it is assumed that humans are more sensitive unless there is good evidence to the contrary. 2.3.6
Non-thresholded toxins
There are certain toxicological effects for which we have no scientific basis for identifying thresholds, i.e. germ cell mutagenesis, genotoxic carcinogenesis and response of presensitized individuals to sensitizers. In the absence of an assumed threshold it is not possible to identify a safe level, and it must therefore be supposed that exposure to any level is associated with some measure of risk. As noted above, it is reasonable to exclude substances with such properties from use as food additives, or in packaging materials if there is a risk that they may leach from the packaging into the food. However, it is not possible to eliminate genotoxic and sensitizing substances from our food. Sensitizers in food, and the subject of food intolerance, constitute a major food safety issue at the current time. Sensitization responses to many plant constituents can be detected in human patch tests, and it is to be expected that reactions to foodstuffs will occur, albeit in a small proportion of the population. In the absence of good animal models for the human conditions, it is difficult to establish the extent to which chemicals in food constitute a contributory factor. The main focus in this section is on the issue of food carcinogens. The number of known, naturally occurring, relatively low molecular weight compounds is assumed to be in the order of 100 000, and the actual number of such compounds in common food sources must considerably exceed this figure (Lu and Sielken, 1991). A significant number of these have been identified as being carcinogenic in experimental animals (Table 2.1). In addition, a number of carcinogenic materials have been shown to result from processing and cooking of foods. Very few are considered to be proven human carcinogens (e.g. aflatoxin B1), but this is because of the difficulty in obtaining adequate epidemiological evidence rather than the intrinsic properties of these substances.
It is highly likely that many more of the substances that occur in our diet have the potential to cause human cancer, but it is currently not possible to establish a causal relationship with the incidence of any of the cancer types that have been shown to be diet-related in epidemiological surveys. Whereas the general public is inclined to assume that synthetic food additives and contaminants such as pesticides and environmental pollutants are harmful, there is an increasing body of scientific evidence indicating that the risks due to natural chemical constituents and overnutrition are much greater than any ascribable to trace chemical constituents in the diet (Ames et al., 1990a,b; Lutz and Schlatter, 1992; Scheuplein, 1992). This issue is discussed further in Chapter 11.
In order to reconcile these divergent views, there is a need to establish which food-borne carcinogens are most likely to be associated with a significant risk of human cancer, and hence to prioritize efforts into reducing levels of such compounds, or to provide the public with advice on risks associated with particular foods. In the USA, mathematical models have been developed to estimate the probability of cancer resulting from different levels of carcinogen intake, in order to derive a 'virtually safe dose'. A cancer risk of 1 in 10⁶ is generally considered to be acceptable by the FDA (Food and Drug Administration, 1977). The mathematical models are based upon data from carcinogenicity tests conducted in experimental animals, taking into account species scaling factors. The design of carcinogenicity bioassays has been subject to much criticism recently, in that the use of very high dose levels may result in effects that are not relevant (e.g. Monro and Davies, 1993; Butterworth et al., 1995). Clearly, the outcome of a model is dependent upon the quality of the data on which it is based, and on the assumptions included in the model. The limitations of quantitative risk assessment (QRA) are discussed further in Chapter 3. Suffice it to say here that the use of such models is not widely accepted amongst expert groups in Europe. The contrasting approach is to consider the mechanism of carcinogenicity. Important considerations in the bioassay include such factors as the tumour sites, whether tumours occurred in both rat and mouse and in one or both sexes, the dose-response relationship and the relationship to toxicity in the target organ. Additional factors taken into account include information on the toxicokinetics and metabolism of the substance and the results of genotoxicity studies, and the presence in the chemical structure of moieties known to be associated with carcinogenicity.
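The 'virtually safe dose' arithmetic can be sketched with the crudest possible model, a linear no-threshold extrapolation from a single bioassay point. Real regulatory models (e.g. the linearized multistage model) are more elaborate; the bioassay numbers here are invented.

```python
# Crude "virtually safe dose" sketch: linear no-threshold extrapolation from
# a single (invented) bioassay point down to the 1-in-a-million extra risk
# level cited in the text.

bioassay_dose = 1.0         # mg/kg(bw)/day given to the animals
bioassay_extra_risk = 0.1   # extra lifetime tumour incidence at that dose

slope = bioassay_extra_risk / bioassay_dose   # extra risk per mg/kg(bw)/day
acceptable_risk = 1e-6                        # the FDA benchmark in the text
vsd = acceptable_risk / slope                 # 'virtually safe dose'
print(f"virtually safe dose: {vsd:.1e} mg/kg(bw)/day")
```

Because the acceptable risk is five orders of magnitude below the bioassay response, the computed dose lies far below anything the experiment actually tested, which is precisely why the quality of the model's low-dose assumption dominates the result.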
A view is then taken as to whether the substance acts via a genotoxic or nongenotoxic mechanism. If a plausible mechanism for a non-genotoxic effect can be proposed, it may be possible to determine a threshold level for non-genotoxic carcinogens, and they may then be evaluated as in section 2.3.5 (possibly applying an additional safety factor for the severity of the effect). Table 2.1 indicates some of the food-borne chemicals that are thought to act as non-genotoxic carcinogens (Tennant, 1993). It is also worth noting that alcoholic beverages are considered to cause cancer in humans, but, because the effect cannot be reproduced in animal studies (rodents are reluctant to drink alcohol), it is not clear whether this is due to the ethanol or other constituents. In the case of genotoxic carcinogens, it has to be assumed that there may be a risk associated with their presence in food, but that this cannot be reliably quantified. It is logical to assume that at very low levels the cellular defence and repair mechanisms will minimize this risk, whereas at higher levels of exposure it becomes more likely that these
Table 2.1 Examples of chemicals in food that have shown evidence of carcinogenicity in animal studies

Chemical class | Examples | Human food source | Major cancer type

Natural constituents
Mycotoxins | Aflatoxin B1 | Stored maize and peanuts | Liver(a)
Mycotoxins | Fumonisins | Maize | Liver
Mycotoxins | Ochratoxin | Grain and pork | Liver and kidney
Pyrrolizidine alkaloids | Monocrotaline | Plants, herbal teas | Lung
Alkenyl benzenes | Safrole | Herbs and spices | Liver
Alkenyl benzenes | Estragole | Herbs and spices | Liver
Monoterpene | d-Limonene(b) | Citrus fruit peel, essential oils | Kidney
— | Caffeic acid | Many fruits, vegetables, seasonings and beverages prepared from plant materials, coffee | Kidney and forestomach

Products of food cooking and processing
Heterocyclic amines | MeIQx | Cooked meat | Liver
Heterocyclic amines | PhIP | Cooked meat | Small and large intestine
— | Furfural(b) | Cooked meat | Liver
Polycyclic aromatic hydrocarbons | Benzo(a)pyrene | Formed in grilled and smoked meat, also a ubiquitous contaminant in vegetables, grains and vegetable oils | Multiple sites
Nitrosamines | Dimethylnitrosamine | Formed during curing, frying, salting and pickling | Multiple sites
Carbamate | Urethane | Fermentation product in alcoholic drinks, yogurt, bread, etc., also a ubiquitous contaminant | Multiple sites

Food additives
Artificial sweeteners | Saccharin(b) | Many food products | Bladder
Artificial sweeteners | Cyclamate(b) | Many food products | Lymphosarcoma, bladder
Antioxidants | BHT(b) | Many food products | Liver
Antioxidants | BHA(b) | Many food products | Forestomach
Preservative | Propionic acid | Many food products | Forestomach

Contaminants
Aldehydes | Formaldehyde(a) | Natural metabolite and ubiquitous contaminant | Gastrointestinal tumours if ingested
Dioxins and furans | TCDD(b) | In fatty foods and fish as combustion product and ubiquitous contaminant | Liver
Polychlorinated biphenyls (PCBs) | PCB(b) | Ubiquitous contaminant, especially in fatty foods and fish | Liver
Aromatic hydrocarbons | Benzene | Ubiquitous, especially in eggs, cooked meat, fruit and vegetables | Leukaemia(a)
Metals | Arsenic | Water | Skin(a)
Metals | Cadmium | Root vegetables | Dependent on exposure route
Metals | Nickel | Legumes | Lung and nose(a)
Organochlorine pesticides | Dieldrin(b) | Vegetables | Liver
Organochlorine pesticides | Aldrin(b) | Vegetables | Liver
Organochlorine pesticides | DDT(b) | Vegetables | Liver
Plasticizers | Diethylhexyl phthalate(b) | Ubiquitous contaminant | Liver

(a) Cancer site in humans.
(b) Compounds for which there is evidence of a non-genotoxic mechanism of carcinogenicity on the basis of mutagenicity data, and in some cases additional mechanistic data.
MeIQx, 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline; PhIP, 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine; BHT, butylated hydroxytoluene; BHA, butylated hydroxyanisole; TCDD, 2,3,7,8-tetrachlorodibenzo-p-dioxin.
processes will be decreasingly effective and the risk will increase with dose. Furthermore, it could be postulated that levels may exist at which the repair processes are totally effective, but this anticipated threshold effect for genotoxic carcinogens has not yet been demonstrated. On a subjective basis we can make an association between the estimated level of intake and the potency in animal studies in order to identify and prioritize those genotoxic chemicals most likely to present an appreciable risk of human cancer. There is a need to establish whether there is a true risk associated with these substances. In the specific instance of food additives there are marked differences in approach between US and European authorities. In the USA, the Delaney Clause (1958) prohibited the addition to food of any level of a carcinogen, and despite criticism that such a blanket approach is no longer appropriate (e.g. Ashby, 1994; Weisburger, 1994), and the revision of EPA risk assessment guidelines to allow for mechanistic data, the clause has not yet been amended. The approach of the Joint FAO/WHO Expert Committee on Food Additives (JECFA) is to review all data on a case-by-case basis and does not preclude acceptance of evidence for non-genotoxic mechanisms of carcinogenicity (World Health Organization, 1987a).
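One published form of the intake-versus-potency comparison mentioned above is the HERP index of Ames and colleagues, which expresses estimated human exposure as a percentage of the rodent TD50 (the chronic dose rate inducing tumours in half the test animals). A minimal sketch, with purely hypothetical illustrative numbers:

```python
def herp_percent(intake_mg_kg_day, td50_mg_kg_day):
    """HERP: Human Exposure dose / Rodent Potency dose (TD50),
    expressed as a percentage (after Ames et al., 1990)."""
    return 100.0 * intake_mg_kg_day / td50_mg_kg_day

# Hypothetical (intake, TD50) pairs in mg/kg bw/day, for illustration only.
chemicals = {
    "chemical A": (1e-4, 50.0),
    "chemical B": (1e-3, 5.0),
    "chemical C": (1e-5, 500.0),
}

# Rank chemicals by HERP, highest (greatest concern) first.
ranking = sorted(chemicals,
                 key=lambda name: herp_percent(*chemicals[name]),
                 reverse=True)
```

Such an index serves only to prioritize; it does not quantify human risk.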
2.3.7 Interactions between food chemicals

Currently accepted methods of toxicological evaluation were designed to assess the hazardous properties of individual chemicals, in order to support the introduction of new synthetic chemicals for various purposes: industrial use, pharmaceutical agents, agrochemicals, etc. Apart from the problems related to testing of relatively innocuous substances, as noted above, there are particular uncertainties associated with the potential for interactions of chemicals with other food constituents. Interactions between food chemicals may result in variations in bioavailability of a given chemical from different food matrices or in modulation of the biological effects. Chemicals in food may also interfere with absorption of nutrients or essential elements in the diet. There is a considerable amount of data from in vitro studies and animal studies showing the potential for interactions between food chemicals, mediated by effects such as induction or inhibition of drug-metabolizing enzymes. Depending upon the combinations of chemicals and the experimental protocols employed, such interactions may be synergistic or protective. For example, the synthetic antioxidant butylated hydroxytoluene (BHT) has been shown to have both protective and promoting effects on carcinogenesis (IARC, 1986). A common, but not inevitable, pattern of interaction is promotion if administered for prolonged periods after the carcinogen, and protection if given immediately prior to, or concurrently with, the carcinogen. Because animal studies frequently use high levels of the chemicals, the relevance of many of these effects to humans at realistic levels of exposure is uncertain. There is also the possibility that a protective effect in one tissue will potentiate the effect in another tissue.
Co-administration of indole-3-carbinol (a component of cruciferous vegetables) with the tobacco-specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) resulted in a decrease in tumours in a mouse lung tumour model. This effect appeared to be due to induction of NNK metabolism, which was accompanied by enhanced DNA methylation in the liver, indicating an increased potential to cause liver tumours (Morse et al., 1990). However, despite these uncertainties, there is epidemiological evidence that dietary factors can be protective against certain diet-related cancers, and such interactions warrant further investigation. Protective factors are discussed in greater detail in Chapter 10. There is also the potential for interaction of food chemicals with nutrients. Many studies have shown that dietary restriction reduces the numbers of spontaneous and chemically induced tumours seen in rodent studies, and also results in decreased metabolic activation of carcinogens and increased DNA repair capacity (Manjgaladze et al., 1993; Haley-Zitlin and Richardson, 1993). These observations lead to the conclusion that overnutrition increases susceptibility to cancer. The mechanism by
which this effect is mediated is not yet clear, but there is a potential for effects on several stages of carcinogenesis, including increasing endogenous or chemical-mediated DNA damage and a promotional effect through increasing cell turnover (Lutz and Schlatter, 1992). Our knowledge of interactions between food chemicals is restricted to a very limited number of chemicals and biological endpoints. However, there is enormous capacity for interactions, as may be illustrated using the example of the Maillard reaction products, also known as non-enzymic browning products. Reaction between the electrophilic carbonyl groups of reducing sugars and amino groups present in amino acids, peptides or proteins during heating of foods leads to a network of chemical processes resulting in the formation of low molecular weight products that are important food flavours and colouring agents (review: O'Brien and Morrissey, 1989). Hundreds of products have been identified, and many more may exist. The relative distributions of the products are affected by the composition of the food, e.g. ratios of sugars and amino acid sources, fat and water content and other factors such as pH and temperature. Controlled browning is used in caramel production, coffee roasting, chocolate manufacture, bread baking and many other processes in food technology, and the reactions are exploited in a less controlled fashion in domestic cooking to produce desired colours, aromas and flavours. Several toxicological properties have been ascribed to the Maillard reaction products, perhaps the most notable being mutagenicity, which is particularly associated with cooked meat. Some of the reaction products in cooked meat have been identified as heterocyclic amines that have been shown to be carcinogenic in animal studies. 
In view of the epidemiological evidence correlating incidence of colon cancer with consumption of well-cooked meat (IARC, 1993), it seems plausible that these Maillard reaction products could be implicated in the aetiology of human colon cancer. Studies are now focusing on a small number of the individual products, but there is very limited information on potential interactions between the different products or with other food constituents with respect to formation, bioavailability and biological effect.

2.3.8 Individual susceptibility

The final source of variation to be considered here is that relating to the heterogeneity of the human population. An increasing number of the enzymes involved in metabolic activation and detoxication are being shown to be polymorphic; that is, distinct subgroups of the population exhibit enzyme activities that differ markedly from the majority. Initially, many of these were identified by phenotypic responses to drugs or industrial chemicals. Thus individuals with low capacity for acetylation reactions were found to be more susceptible to the neurotoxicity of the
drug isoniazid, whereas fast acetylators are more likely to suffer from hepatotoxicity due to the formation of a reactive metabolite (Breckenridge and Orme, 1987). Slow acetylators are also more susceptible to arylamine-induced bladder cancer (Cartwright et al., 1987). As advances in molecular biology have enabled us to study levels of individual isozyme content, in addition to the activity expressed, many other polymorphisms have emerged, and evidence is accumulating that these may influence an individual's susceptibility to chemicals in foods, such as mycotoxins, heterocyclic amines and polycyclic aromatic amines. Polymorphisms in enzymes involved in intermediary metabolism pathways may also influence individual susceptibility through specific biochemical interactions. The specific issue of susceptibility of infants and children is discussed in Chapter 9. Interactions between dietary chemicals, as described in section 2.3.7, may contribute to differences in susceptibility of individuals with different dietary habits, such as vegans or vegetarians compared with meat eaters, high or low fat intake, malnourishment, ethnic diets, etc. High alcohol intake may result in a direct influence of the ethanol on other food chemicals (it is an inducer of a specific form of cytochrome P450), but can also be associated with compromised liver function and poor nutritional quality of the diet. Interactions with chemicals derived from sources other than the diet, including smoking, environmental pollutants, inhalation of chemicals in the workplace and at home, medications and drug abuse, offer further scope for individual differences in susceptibility. Finally, it is known that certain disease states may influence susceptibility to drugs or industrial chemicals, and it is highly likely that this would also apply to chemicals in food.
Within the scope of this chapter, it is not possible to give a comprehensive review of these modulating factors, but it should be noted that there is insufficient information available to establish whether the safety factor of 10, commonly applied for interindividual variation, is adequate.
2.4 Uncertainties in risk characterization

It is important that risk characterization is as accurate as possible, since any error could result in a cost to the consumer, either from the presence of unacceptably high levels of chemicals in food or from increased prices and loss of choice if a particular food or additive is restricted. Large safety factors are included in present procedures, which means that most of the time the regulatory process is over-conservative. This excess conservatism can impose a considerable burden on the food industry when seeking approval for new products. However, the danger remains that in exceptional cases the procedure might underestimate the true level of risk to consumers.
2.4.1 Interpretation of hazard evaluation

The ADI was defined by JECFA (World Health Organization, 1987a) as 'an estimate of the amount of a food additive, expressed on a body weight basis, that can be ingested daily over a lifetime without appreciable health risk'. This definition of the ADI could be understood to mean one of two things: that intakes should be below the ADI every day of a lifetime; or that intakes should be below the ADI on average over a lifetime. If intakes of food chemicals occurred at the same level every day, then these two interpretations would produce identical results. Unfortunately, this is unlikely ever to be the case and intakes can normally be expected to fluctuate. JECFA advised that there was no need for concern about short-term intakes above the ADI, provided the average intake over longer periods did not exceed it (World Health Organization, 1987a). This, however, presents problems for exposure analysts who estimate intakes for comparison with the ADI: for how long can excursions above the ADI be tolerated, and by how much can the ADI be exceeded? In other words, over what period should average intakes stay below the ADI? JECFA recognized the need for a slightly different approach for certain contaminants, such as heavy metals, which accumulate within the body and where contamination of particular foods may considerably increase daily intakes. In such cases the tolerable intake is usually expressed on a weekly basis (the provisional tolerable weekly intake or PTWI) (World Health Organization, 1972). A similar ambiguity exists for the PTWI as for the ADI.

2.4.2 Variations in food chemical intakes
Quantities of food consumed and the composition of the diet may vary considerably, even for the same individual. Short-term fluctuations relate to the desire to seek variety or may reflect changes in habits on different days of the week. This means that the same foods (other than dietary staples) are rarely selected on consecutive days. Medium-term variations may relate to the changing seasons, ill-health or changes in diet intended to induce weight loss. Long-term changes reflect the processes of childhood, development, maturation and ageing and trends in the availability and popularity of different types of food. In addition, the pattern of chemical intakes will also be affected by the occurrence of contaminants, changes in the food industry's use of additives or agricultural use of pesticides. For example, if a food which is only rarely consumed, such as shellfish, contains particularly high levels of a contaminant, then intakes of that contaminant may be generally very low but peak well above the PTWI on occasions when shellfish are consumed. Intakes of food chemicals are likely, therefore, to change over time and the risk characterization process should take account of this. As food
consumption data become more widely available and methods for analysing them become more sophisticated, exposure analysts will be able to base estimates of intake on time periods varying from a single meal to averages over a lifetime. Since the time period on which the intake estimate is based can have a dramatic effect on the result, exposure analysts need to know over what time interval intakes should be averaged to be consistent with other parts of the risk assessment process.

2.4.3 Time integration of intake estimates
Since many foods are eaten infrequently, estimated consumption figures will be affected by the time interval over which consumption is studied. For example, the apparent average consumption for a 'consumer' (an individual who is observed to eat the food at least once during a survey) who eats a 200 g product once every 3 months could be 200 g/day, 30 g/day, 7 g/day or 2 g/day, depending on whether the survey period was a day, a week, a month or a year. Between a year and a lifetime there would be little difference unless the food is only consumed during certain periods of life. A corollary of this effect is that the proportion of people who consume the food increases as the study period increases, and this in turn affects upper centile estimates of consumption. The use of different averaging periods can therefore have a significant effect on estimates of intake and on the proportion of the population classified as consumers. In the UK the possibility of using time periods other than a day (or a week) in the hazard evaluation was accepted when the quantitative approach was first described (Rubery et al., 1990), and advisory committees have taken the view in the past, based on the toxicology of particular compounds, that short-term excursions above ADIs are acceptable as long as average intakes are within the ADI. Renwick and Walker (1993) have attempted to quantify this approach and defined criteria for assessing the significance of the magnitude and duration of excursions above the ADI. These criteria require a detailed knowledge of the toxicokinetics of each substance.

2.4.4 Effect of short-term variations in food consumption on estimates of intake

Table 2.2 gives a comparison of some adult food consumption data based on a sampling period of either one random day or one week from individuals in a survey conducted in the UK by the Ministry of Agriculture, Fisheries and Food and the Department of Health.
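The dependence of apparent consumption on the survey window, as in the 200 g example of section 2.4.3, can be sketched as follows (a toy calculation, not a survey method):

```python
def apparent_daily_intake(portion_g, occasions_observed, period_days):
    """Apparent average daily consumption (g/day) for an observed
    consumer: total amount eaten in the window divided by its length."""
    return portion_g * occasions_observed / period_days

# A 200 g product eaten once every 3 months (about 4 times a year),
# observed through survey windows of different lengths.
surveys = {
    "day":   apparent_daily_intake(200, 1, 1),    # 200 g/day
    "week":  apparent_daily_intake(200, 1, 7),    # ~29 g/day
    "month": apparent_daily_intake(200, 1, 30),   # ~6.7 g/day
    "year":  apparent_daily_intake(200, 4, 365),  # ~2.2 g/day
}
```

The same consumer thus appears to eat between roughly 2 and 200 g/day depending solely on the averaging period chosen.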
Table 2.2 Estimates of consumption of four common foods based on one random day and one week (Gregory et al., 1990)

Food             Sampling period  Proportion      Population mean  Consumers mean  97.5%-ile
                                  consuming (%)   (g/day)          (g/day)         (g/day)
Yoghurt          1 day            9.5             13               137             288
                 Week             26.8            12               43              140
Potato           1 day            67.5            133              197             496
                 Week             98.0            126              129             310
Liver            1 day            4.4             5                110             387
                 Week             23.8            4                17              55
Banana           1 day            9.7             10               101             236
                 Week             31.6            9                29              100
All above foods  1 day            75.1            161              214             529
                 Week             99.4            151              152             331

1 day = consumption on one randomly selected day. Week = average daily consumption over one week. Population mean = average of all individuals in survey. Consumers mean = average of only those recorded as consuming the specific food. 97.5%-ile = 97.5th centile of distribution of amounts consumed by consumers.
Note: The same foods are seldom eaten on consecutive days. If the sampling period in a food consumption survey is only one day, fewer individuals will be observed to consume a particular food than if a one-week sampling period had been used. Although the average amount of each food consumed by the entire population is similar for both time periods, the intake of those observed to consume on one day will be significantly higher than their intake averaged over one week.

The proportion consuming increases as the sampling period increases, since more consumers will eat foods at some time during one week than on a random day. This is most pronounced for the less frequently consumed foods like liver. For the population mean consumption of each food, the sampling interval makes very little difference to the estimate. This is because the larger quantities consumed (by a smaller proportion of the population) on one day are cancelled out by the larger population of non-consumers. Only when those who were recorded as consuming during the survey are considered separately do differences begin to emerge. As expected, the consumption by consumers on one random day is significantly higher than the consumption by consumers averaged over one week. Differences between one random day and one week appear to be slightly less pronounced for high-level consumers. This may be because there is an upper limit on what can be consumed in one day, which has less of an impact on high-level weekly consumption, which may occur over several days. The effect is less pronounced for all foods combined in this case because the data are heavily weighted towards the food consumed in greatest amounts, in this case potato. In a study of artificial sweetener intakes in Germany, Bar and Bierman (1992) found that women, children (< 9 years) and adolescents (10-19 years) were over-represented in a subgroup of consumers with high sweetener intakes when expressed on a daily basis. The 7-day average intakes
of this subgroup were found to be well below the maximum daily intake, and those whose intakes had been above the ADI for saccharin and cyclamate on individual days had 7-day average intakes below the respective ADIs.

2.4.5 Effect of long-term variations in food consumption on estimates of intake
Long-term variations in food consumption and chemical intake patterns relate mainly to ageing, and in particular to periods in childhood when food consumption may be higher on a body weight basis than for adults. The estimated intakes of an additive for various age groups, if the averaging period were a week or a lifetime, are given in Table 2.3. The interpretation of these data is dependent on the value of the ADI for this hypothetical substance. If the ADI were 40 mg/kg(bw)/day there would be few concerns about consumers exceeding this level. If the ADI were 30 mg/kg(bw)/day, then intakes of young children who were high-level consumers could exceed the ADI on a weekly basis. Average lifetime intakes would, however, remain well below the hypothetical ADI. Only if the ADI were below 13 mg/kg(bw)/day would there be concerns if intakes were averaged over a lifetime.

Table 2.3 High-level intakes of an additive when averaged over different time intervals

                  High-level intake estimate (mg/kg/day)
Age group         Week      Lifetime
6-12 months       37        12
1½-4½ years       35(a)     12
10-15 years       13        12
16-64+ years      8         12

(a) Four days.
Note: Patterns of food consumption vary considerably over a lifetime and, as a consequence, intakes of chemicals such as food additives can also vary. This variation is revealed when a short survey period such as 1 week is employed. However, if intakes are integrated over a lifetime then a different figure will be produced.

2.4.6 Toxicological significance of dosing period

The specification of time intervals over which intakes should be averaged must take into account many factors, including the dynamics of absorption, distribution, metabolism and excretion of the substance, and turnover and repair rates of potentially affected cells. Renwick and Walker (1993) have highlighted the considerable complexities underlying the interpretation of toxicological data. In most cases such detailed data are unavailable
and default intervals may need to be defined. For example, in situations where the NOAEL is based on a non-specific systemic effect in animals, such as weight loss, a prudent default interval would take into account the time needed for weight loss to develop as well as the uncertainties of extrapolation from animal data to humans. Where the NOAEL relates to a specific acute effect such as teratogenicity, it may be necessary to base the intake estimate on a period as short as a single eating occasion. For this risk characterization, only a range of portion sizes is required. For intermediate substances which are known to have longer half-lives, the period might relate to the dose at which metabolism or elimination processes become saturated (Renwick and Walker, 1993) or the rate of turnover and repair for potentially affected cells. Such substances would need to be evaluated on a case-by-case basis. Where risk is related to cumulative dose, as for genotoxic carcinogens, lifetime might prove to be the most appropriate time-averaging period. However, the estimation of lifetime intakes is difficult because it is impossible to predict the lifetime pattern of consumption for any individual. The best that can be done is to integrate data collected from different age groups to estimate the lifetime average daily dose, or to use computer modelling. This will not take into account changes in the availability of certain foods and food components or shifting patterns of food choice.

2.4.7 Corrections for body weight and age
JECFA usually presents its advice in the form of ADIs for additives or PTWIs for contaminants (World Health Organization, 1972, 1987a). ADIs and PTWIs are usually expressed in terms of an amount per unit body weight per day or week (e.g. 10 mg/kg(bw)/day or 70 mg/kg(bw)/week). Exposure analysts are sometimes able to express estimates of intake on an individual body weight basis and, where these are not available, use default body weights such as 60 kg for adults and 14 kg for children. Correction for body weight can result in considerable differences in the observed patterns of food consumption, particularly when different age groups are compared (Figures 2.4 and 2.5). A consequence of this is that children's intakes of food chemicals may be more likely to exceed ADIs. In a study of artificial sweetener intakes (including table-top use) in Germany, Bar and Bierman (1992) found that women, children (< 9 years) and adolescents (10-19 years) were over-represented in a group of consumers with high daily sweetener intakes when expressed on a body weight basis. Estimates of intake by children which apparently exceed ADIs or PTWIs are likely to trigger calls for regulatory action on the substances concerned. However, the full meaning of children's food chemical intakes, whether corrected or uncorrected for body weight, is unclear. As more
data on children's food consumption become available, more apparently high intakes by children may be revealed. The significance of such intake estimates needs careful evaluation before the use of substances in food is unnecessarily restricted.

2.4.8 Effects of age on food chemical intakes

The ADI is defined to cover the entire lifetime, and if children are known to be more susceptible, then additional safety factors can be introduced (World Health Organization, 1987b). When the NOAEL is based on lifetime studies in the animal species under test, this will include the low-body-weight/high-food-consumption period which is a feature of all animals' growth. Since the NOAEL therefore takes into account the low body weight effect which causes intakes to be higher during childhood, it could be argued that separate intake studies to account for high intakes
Figure 2.4 Uncorrected mean consumption of foods by adults, schoolchildren and pre-school children (Acheson, 1989; Gregory et al., 1990, 1995). [Bar chart comparing mean consumption (g/day) of bread/cereals, milk/dairy, meat, potato, other vegetables, breakfast cereal, cakes/cookies and soft drinks for the three groups.]
by children due to their low body weight are unnecessary, and risk assessment should be based on adult diets alone. Unfortunately, not all NOAELs are based on lifetime studies. Luijckx et al. (1994) consider that extra uncertainty factors to allow for higher intakes by children are not necessary for ADIs based on long-term toxicity studies of compounds that do not accumulate in the body and are not carcinogenic. They observed a two-fold increase in food consumption by young rats which is similar to the higher average energy, nutrient and water needs of young humans. However, children are not simply small adults and they do not consume the same diets as adults. Milk, dairy products and soft drinks are typical foods that are consumed in larger amounts by children than might be extrapolated from adult consumption of these foods. Ice-cream, yoghurt, milk puddings and soft drinks are also important dietary sources of food additives. Some infants and young children may consume 10 times as much of these products as adults, when expressed by body weight. If such foods
Figure 2.5 Mean consumption of foods by adults, schoolchildren and pre-school children corrected for body weight (Acheson, 1989; Gregory et al., 1990, 1995). [Bar chart comparing mean consumption (g/kg bw/day) of bread/cereals, milk/dairy, meat, potato, other vegetables, breakfast cereal, cakes/cookies and soft drinks for the three groups.]
also contain food additives, this could result in intakes of food additives being higher in children than in adults by a similar factor. In such cases the factor of 2 incorporated in the ADI would not allow for higher intakes by children. It is therefore always necessary to consider data on children's food consumption patterns and chemical intakes in risk assessment. It is less clear how these data should be presented and, in particular, whether and how correction factors should be applied.

2.4.9 Correction factors for children's intakes

The risk to smaller individuals receiving the same quantity of a chemical as larger individuals is greater because they receive a larger effective dose. Some correction for body size is therefore required. Expressing ADIs and PTWIs on a body weight basis provides a convenient method for scaling from animal feeding regimes to human food consumption. However, body weight may not always be the most appropriate factor to use for correcting estimates of intake. For example, a heavy adult and a slim adult of the same height would be expected to have similar organ weights in some instances, despite their different body weights. It could be argued that if both consumed a similar quantity of a food containing a food chemical, then each would receive a similar average organ dose. However, after body weight correction the heavier individual would appear to have a much lower intake than the lighter individual. As a consequence, heavy adults might have higher organ doses than slim adults before reaching the ADI. Similarly, a slim adult and a heavy child could have similar intakes on a body weight basis, whilst the risk to the child could be greater because of the smaller organ size. Clearly, such an approach is a simplification, as the dose to a particular organ would depend on the toxicokinetics of a given chemical, e.g. whether it accumulated in fat.
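The contrast drawn above between body-weight-corrected intake and average organ dose can be made concrete with a toy calculation (all figures hypothetical):

```python
def intake_per_kg_bw(intake_mg, body_weight_kg):
    """Intake expressed per unit body weight, as for comparison with an ADI."""
    return intake_mg / body_weight_kg

def mean_organ_dose(intake_mg, organ_weight_kg):
    """Crude average organ dose, ignoring toxicokinetics entirely."""
    return intake_mg / organ_weight_kg

intake = 60.0                      # mg/day of a food chemical (hypothetical)
heavy_bw, slim_bw = 100.0, 60.0    # two adults of the same height
liver = 1.5                        # kg, assumed similar for both adults

heavy_per_kg = intake_per_kg_bw(intake, heavy_bw)  # 0.6 mg/kg bw/day
slim_per_kg = intake_per_kg_bw(intake, slim_bw)    # 1.0 mg/kg bw/day
organ = mean_organ_dose(intake, liver)             # identical for both
```

The heavier adult appears to have the lower intake on a body weight basis, yet both receive the same crude organ dose, which is the simplification the text describes.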
In comparing animals that differ in size as much as small rodents and humans, intakes of therapeutic agents on a body weight basis have been found to be poor predictors of therapeutic effect (National Research Council, 1993). This is because the ability of individuals to metabolize exogenous substances is not related to body weight. It has been suggested that there are circumstances where the ability of an individual to detoxify an exogenous chemical is related to that individual's intake of energy (National Research Council, 1993). Body weight is therefore not the only available correction factor, and there is a need to consider alternative correction factors for food chemical risk assessment.

2.4.10 Alternative correction factors

If the ADI or PTWI is intended to take account of the normal high food consumption per unit body weight of children in relation to adults,
then the ideal correction factor would normalize children's total food consumption to make it comparable to that of adults whilst allowing foods which are consumed in relatively large amounts by children to stand out. Correction by body weight may not be ideal in this context, because it tends to exaggerate the ratio between children's and adults' total food consumption. Body weight correction might therefore be over-conservative, and in some cases alternatives may be more appropriate.

Lean body mass. Organ weight is related to lean body mass, and lean body mass is more closely associated with height than is body weight. Neville and Holder (1995) have shown that lean body mass (and hence organ weight) is proportional to height squared:

LBM = cH²

where
LBM = lean body mass
c = constant of proportionality
H = height

Lean body mass correction is likely to give higher relative intakes for obese people than would body weight correction. However, when organ dose is a more important factor than total body dose, lean body mass might provide a more useful, relevant and less conservative correction factor than body weight. The tissue distribution and target organ for a particular chemical would be important factors in deciding whether this approach would be appropriate.

Caloric requirements. Energy requirement has been related to metabolic rate and proposed as an appropriate correction factor in some circumstances (National Research Council, 1993). The proposal is that individuals with a greater turnover of energy are better able to metabolize, and thus detoxify, food chemicals. However, this will depend on the mechanism of toxicity of specific chemicals, since greater metabolic capacity could result in either more activation or more detoxification. Children, by virtue of their low energy requirement, could therefore be either more or less vulnerable. Energy requirement can be estimated from energy intake, although this will tend to overlook the consequences of overeating.
Where estimates of food consumption are based on comprehensive dietary surveys, it may be possible to make direct corrections for energy intakes of individuals from the survey data, in the same way that body weight corrections are made. Where data on energy intakes are not available, they can be estimated from body surface area (National Research Council, 1993). Body surface area shows a log-linear relationship to body weight, so that energy intake can be estimated from body weight:

Energy intake = a × wt^x

where
a = constant of proportionality
x = power function of weight (usually 2/3 or 3/4)
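The three candidate correction factors can be compared in a short sketch. Note that the constant of proportionality c for lean body mass, the energy coefficients a and x, and the example subjects are all hypothetical placeholders chosen for illustration; the text itself leaves c and a unspecified.

```python
# Sketch comparing three intake correction factors discussed above.
# All numerical constants and example subjects are hypothetical.

def per_body_weight(intake_mg, weight_kg):
    """Conventional correction: mg of chemical per kg body weight."""
    return intake_mg / weight_kg

def per_lean_body_mass(intake_mg, height_m, c=13.0):
    """Lean body mass correction, LBM = c * H^2 (Neville and Holder, 1995).
    The proportionality constant c is illustrative only."""
    lbm_kg = c * height_m ** 2
    return intake_mg / lbm_kg

def per_energy_intake(intake_mg, weight_kg, a=70.0, x=0.75):
    """Energy correction estimated from body weight, E = a * wt^x,
    with x commonly taken as 2/3 or 3/4 (National Research Council, 1993)."""
    energy_kcal = a * weight_kg ** x
    return intake_mg / energy_kcal

# A hypothetical heavy adult and a child consuming the same 10 mg of a chemical:
adult = dict(intake_mg=10.0, weight_kg=90.0, height_m=1.75)
child = dict(intake_mg=10.0, weight_kg=15.0, height_m=1.00)

for label, s in [("adult", adult), ("child", child)]:
    print(label,
          per_body_weight(s["intake_mg"], s["weight_kg"]),
          per_lean_body_mass(s["intake_mg"], s["height_m"]),
          per_energy_intake(s["intake_mg"], s["weight_kg"]))
```

Running this shows the effect described in the text: the child's intake per kg body weight is several times the adult's, whereas correction by lean body mass or energy intake narrows the apparent gap.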
Where metabolic rate is a more important factor than total body dose, energy intake might provide a more useful and relevant correction factor than body weight.

2.4.11 Risk characterization: developmental needs

Food chemical intakes fluctuate over short and long time periods. This means that estimates of intake can be highly dependent on the time interval selected for averaging. If estimates of intake are to be accurately compared with ADIs or PTWIs, it is essential to know over what time period intakes should be averaged in order to generate a meaningful result. Ideally, the time period should relate to the physicochemical and toxicological characteristics of individual chemicals, but in many cases default values may need to be used. Without clear information about the time period to which ADIs, PTWIs, etc. relate, it is impossible to have confidence in the accuracy and relevance of the risk assessment process.

Infants and children differ both qualitatively and quantitatively from adults in their consumption of foods. There are certain foods for which the levels of consumption during childhood exceed those which could be accounted for by the low-body-weight effect alone. This means that risk assessment needs to take account not only of absolute differences in intake between adults and children but also of the additional intake which is not captured by the body weight effect and which is related to the consumption of different foods. Because intakes by children and infants cannot necessarily be directly compared with adults' intakes, it is necessary to find ways of taking children's intakes into account in risk assessment without adding excessive conservatism to the system. Body weight is the conventional factor used to correct intakes, but in some circumstances its use may overestimate intakes by children and underestimate toxicologically significant intakes for heavier individuals.
Energy requirement and lean body mass may be more appropriate correction factors than body weight because they relate to biologically relevant factors - metabolic rate and organ size. Both methods could ensure that the relatively high consumption of certain foods by children is taken into account in the risk assessment process in a logical way and without excessive conservatism. Further work is needed to identify those situations where alternative correction factors should be considered.
2.5 Opportunities for development in risk assessment

If chemical risk assessment is to continue to evolve as a science, new techniques must be developed which are capable of predicting the nature and probability of adverse outcomes and estimating the degree of confidence
that risk managers can have in those predictions. Some areas of uncertainty will never be eliminated; for example, animal studies will generate quantitatively different results even if repeated in the same laboratory with the same protocol, same animal strain, same batch of chemical, etc. However, this type of variability can be measured and its magnitude evaluated in the risk characterization.

Similarly, methods are now becoming available to allow sources of variability between individuals to be investigated and quantified. Research into human polymorphisms in the enzymes responsible for activation and detoxification is an area of much current interest. It is likely that polymorphisms will be found increasingly in the factors which govern an individual's response to different toxic stimuli, such as receptors and enzymes of intermediary metabolism. With increased understanding of the regulation of genes involved in the modulation and expression of toxicity, it will be possible to develop rapid screens to assess complex interactions between food chemicals, leading to greater understanding of the magnitude of variability due to different dietary habits and other environmental factors. A final stage in this process is the need for research into the maximum variability likely to result from combinations of individual genetic and environmental factors.

Other areas of uncertainty cannot be quantified but might eventually be eliminated. These relate to the relevance of the dosing regimes, endpoints and high-dose to low-dose extrapolations that we currently use in experimental toxicology. Fundamental research into mechanisms of toxicity is required in order to establish methods that can be trusted to identify potential causes of human disease, particularly in the areas of neurotoxicity, reproductive effects and cancer.
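The kind of quantified variability discussed in this section is typically explored with probabilistic (Monte Carlo) exposure simulation: instead of a single conservative point estimate, distributions are propagated and percentiles reported. The following is a toy sketch only; all distributions and parameter values are hypothetical.

```python
# Toy Monte Carlo uncertainty analysis of a dietary exposure estimate:
# propagate hypothetical distributions for residue concentration, food
# consumption and body weight, then report percentiles of the result.
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def simulate_intake(n=10_000):
    intakes = []
    for _ in range(n):
        concentration = random.lognormvariate(0.0, 0.5)   # mg chemical / kg food
        consumption = random.lognormvariate(-1.0, 0.4)    # kg food / day
        body_weight = random.gauss(70.0, 12.0)            # kg
        intakes.append(concentration * consumption / body_weight)
    intakes.sort()
    return intakes

intakes = simulate_intake()
median = intakes[len(intakes) // 2]
p97_5 = intakes[int(len(intakes) * 0.975)]
print(f"median {median:.4f}, 97.5th percentile {p97_5:.4f} mg/kg bw/day")
```

The output distribution, rather than a single worst-case number, is what would be presented to risk managers in the uncertainty-analysis approach described here.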
2.6 Conclusion

Current approaches to food chemical risk assessment have served the regulatory process well in the past. However, as more information on uncertainties in the process comes to light, the need for the methodology to evolve becomes more pressing. Generally, current methods can be considered over-conservative, but new evidence is emerging that in special circumstances, such as when acute exposures are critical, the current approach may underestimate the true level of risk.

Each step in the risk assessment process introduces uncertainties into the analysis. Some of this uncertainty derives from variability in natural data, such as body weights or individuals' genetic susceptibility to toxic effects. There is also a measurement error associated with any constant or variable. The traditional approach has been to include nominal safety factors to account for some uncertainty (e.g. animal to human
extrapolation and inter-individual variability), to always use conservative estimates (e.g. of exposure) or to disregard uncertainty by using arbitrary values (e.g. the assumption that absorption by humans is the same as for the animal model). This means that the degree of uncertainty is often concealed inadvertently from risk managers, who are then unable to take it into account. This practice also introduces the possibility that conservative assumptions are used at every stage, so that the final estimates of hazard, exposure or risk far exceed the bounds of possibility.

Since there are many sources of uncertainty, it is very difficult to assess the overall uncertainty and express it to risk managers. Uncertainty analysis techniques are being developed which allow uncertainty to be estimated and presented to risk managers in a comprehensible way. Our inability to quantify the degree of uncertainty associated with risk estimates means that we may be forced to adopt unnecessarily conservative measures. Better information on uncertainty would therefore allow risk managers to exercise more discretion and judgement in their balancing of risks against benefits. If the probability of an adverse effect is extremely remote whilst the cost to society of introducing control measures is high, then risk managers may consider that stringent controls are not appropriate. On the other hand, if the probability of an adverse effect is high, then risk managers will be better able to justify control measures.

Later chapters of this book set out some novel approaches to risk assessment. Some of these could be applied now, whereas others are at a more developmental stage. Opportunities to capitalize on these new technologies must be grasped if chemical risk assessment is to evolve into the next millennium.
References

Acheson, D. (Chairman) (1989) The Diets of British Schoolchildren. Department of Health Report on Health and Social Subjects 36. HMSO, London.
Ames, B.N., Profet, M. and Gold, L.S. (1990a) Dietary pesticides (99.99% all natural). Proceedings of the National Academy of Sciences of the USA, 87, 7777-7781.
Ames, B.N., Profet, M. and Gold, L.S. (1990b) Nature's chemicals and synthetic chemicals: comparative toxicology. Proceedings of the National Academy of Sciences of the USA, 87, 7782-7786.
Ashby, J. (1994) Change the rules for food additives. Nature, 368(6472), 582.
Bär, A. and Biermann, Ch. (1992) Intake of intense sweeteners in Germany. Zeitschrift für Ernährungswissenschaft, 31, 25-39.
Barnes, D.G. and Dourson, M. (1988) Reference dose (RfD) for the establishment of an acceptable daily intake. Regulatory Toxicology and Pharmacology, 8, 471-486.
Breckenridge, A. and Orme, M.L'E. (1987) Principles of clinical pharmacology and therapeutics. In: Weatherall, D.J., Ledingham, J.G.G. and Warrell, D.A. (eds) Oxford Textbook of Medicine, 2nd edn, Vol. 1. Oxford University Press, Oxford, p. 77.
Butterworth, B.E., Conolly, R.B. and Morgan, K.T. (1995) A strategy for establishing mode of action of chemical carcinogens as a guide for approaches to risk assessments. Cancer Letters, 93, 129-146.
Cartwright, R.A., Rodgers, M.J., Barham-Hall, D. et al. (1987) Role of N-acetyltransferase phenotypes in bladder carcinogenesis: a pharmacogenetic epidemiological approach to bladder cancer. Lancet, ii, 842-846.
Conning, D.M. (1990) Strategies for toxicity testing of food chemicals and components. Food and Chemical Toxicology, 28, 735-738.
Crump, K.S. (1984) A new method for determining allowable daily intakes. Fundamental and Applied Toxicology, 4, 854-871.
Delaney Clause (1958) Food Additives Amendment to the Federal Food, Drug and Cosmetic Act (1958), 21 U.S.C.S. 348.
ECETOC (1995) Assessment Factors in Human Health Risk Assessment. Technical Report No. 68. ECETOC, Brussels.
Festing, M.F. (1995) Use of a multistrain assay could improve the NTP carcinogenesis bioassay. Environmental Health Perspectives, 103, 44-52.
Food and Drug Administration (1977) Chemical compounds in food-producing animals: criteria and procedures for evaluating assays for carcinogenic residues in edible products of animals. Federal Register, 42(35), 10412-10437.
Gregory, J., Tyler, H. and Wiseman, M. (1990) The Dietary and Nutritional Survey of British Adults. HMSO, London.
Gregory, J.R., Collins, D.L., Davies, P.S.W. et al. (1995) National Diet and Nutrition Survey: Children aged 1½ to 4½ years. HMSO, London.
Haley-Zitlin, V. and Richardson, A. (1993) Effect of dietary restriction on DNA repair and DNA damage. Mutation Research, 295, 237-245.
IARC (1986) Some naturally occurring and synthetic food components, furocoumarins and ultraviolet radiation. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans, Vol. 40. IARC, Lyon.
IARC (1993) Some naturally occurring substances: food items and constituents, heterocyclic aromatic amines and mycotoxins. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans, Vol. 56. IARC, Lyon.
Lewis, S.C., Lynch, J.R. and Nikiforov, A.I. (1990) A new approach to deriving community exposure guidelines from 'no-observed-adverse-effect levels'. Regulatory Toxicology and Pharmacology, 11, 314-330.
Lu, F.C. and Sielken, R.L. (1991) Assessment of safety/risk of chemicals: inception and evolution of the ADI and dose-response modelling procedures. Toxicology Letters, 59, 5-40.
Luijckx, N.B., Rao, G.N., McConnell, E.E. et al. (1994) The intake of chemicals related to age in long-term toxicity studies - considerations for risk assessment. Regulatory Toxicology and Pharmacology, 20, 96-104.
Lutz, W.K. and Schlatter, J. (1992) Chemical carcinogens and overnutrition in diet-related cancer. Carcinogenesis, 13, 2211-2216.
Manjgaladze, M., Chen, S., Frame, L.T. et al. (1993) Effects of caloric restriction on rodent drug and carcinogen metabolising enzymes: implications for mutagenesis and cancer. Mutation Research, 295, 201-222.
Monro, A. and Davies, T.S. (1993) High dose levels are not necessary in rodent studies to detect human carcinogens. Cancer Letters, 75, 183-194.
Morse, M.A., Lagreca, S.D., Amin, S.G. and Chung, F.-L. (1990) Effects of indole-3-carbinol on lung tumourigenesis and DNA methylation induced by 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) and on the metabolism and disposition of NNK in A/J mice. Cancer Research, 50, 2613-2617.
National Research Council (1993) Pesticides in the Diets of Infants and Children. National Academy Press, Washington DC.
Neville, A.M. and Holder, R.L. (1995) Body mass index: a measure of fatness or leanness? British Journal of Nutrition, 73, 507-516.
O'Brien, J. and Morrissey, P.A. (1989) Nutritional and toxicological aspects of the Maillard browning reaction in foods. Critical Reviews in Food Science and Nutrition, 28, 211-248.
Renwick, A.G. (1991) Safety factors and establishment of acceptable daily intakes. Food Additives and Contaminants, 8, 135-150.
Renwick, A.G. (1993) Data-derived safety factors for the evaluation of food additives and environmental contaminants. Food Additives and Contaminants, 10, 275-305.
Renwick, A.G. and Walker, R. (1993) An analysis of the risk of exceeding the acceptable or tolerable daily intake. Regulatory Toxicology and Pharmacology, 18, 463-480.
Rubery, E.D., Barlow, S.M. and Steadman, J.H. (1990) Criteria for setting quantitative estimates of acceptable intakes of chemicals in food in the UK. Food Additives and Contaminants, 7(3), 287-302.
Scheuplein, R.J. (1992) Perspectives on toxicological risk. Critical Reviews in Food Science and Nutrition, 32, 105-121.
Scientific Committee for Food (1980) Guidelines for the safety assessment of food additives. Reports of the Scientific Committee for Food, 10th series, EUR 6892, Commission of the European Communities, Luxembourg, pp. 5-21.
Tennant, R.W. (1993) A perspective on nonmutagenic mechanisms in carcinogenesis. Environmental Health Perspectives, 101(Suppl. 1), 231-236.
Weisburger, J.H. (1994) Does the Delaney Clause of the US Food and Drug laws prevent human cancers? Fundamental and Applied Toxicology, 22, 483-493.
World Health Organization (1972) Evaluation of certain food additives and the contaminants mercury, lead and cadmium. World Health Organization Technical Report Series No. 505. World Health Organization, Geneva.
World Health Organization (1987a) Principles for the safety assessment of food additives and contaminants in food. Environmental Health Criteria 70. World Health Organization, Geneva.
World Health Organization (1987b) Principles for evaluating health risks from chemicals during infancy and early childhood: the need for a special approach. Environmental Health Criteria 59. World Health Organization, Geneva.
Young, F.E. (1989) Weighing food safety risk. FDA Consumer, September, p. 8.
3 Quantitative risk assessment

D.P. LOVELL and G. THOMAS
3.1 Introduction

The objective of this chapter is to put the use of quantitative risk assessment (QRA) in the toxicological assessment of food chemicals into perspective. The use of QRA will be contrasted with other approaches which derive numerical safety standards or guidance values. The chapter aims to provide a historical perspective on the development of QRA methodology, a non-mathematical overview of the properties and limitations of the various mathematical models and a discussion of possible developments in the future.
3.2 What is QRA? Definitions

QRA is a relatively recent development in the field of toxicology. Earlier editions of the standard toxicology textbooks did not treat risk assessment as a separate topic. Only in later editions in the 1980s did risk assessment and QRA, in particular, form a distinct part of the sections relating to regulatory toxicology. Differences in definitions arise partly from the different perspectives in two influential reports produced in the early 1980s by the US National Academy of Sciences (NAS) (National Academy of Science/National Research Council, 1983) and by the UK Royal Society (Royal Society, 1983). The US NAS developed a prescriptive way of subdividing the process of risk assessment into four stages: hazard identification, dose-response relationships, exposure assessment and risk characterization. A fifth stage involved the transfer of the findings from the risk assessment to the risk manager. The NAS report envisaged a very clear distinction between risk assessment and risk management. The Royal Society report defined the terms and stages differently (Figure 3.1). In general, the US terminology will be used in this chapter because many of the issues relating to QRA relevant to food chemicals arise in the context of the US regulatory environment. The use of the US definitions should not, however, be taken to mean that the US approach is to be preferred or that it can necessarily be transplanted to a UK or European regulatory environment.
    NAS/NRC (1983)                     Royal Society (1983)
    --------------                     --------------------
    Risk assessment:                   Risk estimation:
      Hazard identification             Identification of outcomes
      Dose-response relationship        Estimation of the magnitude of the
      Exposure assessment                 associated consequences of outcomes
      Risk characterization             Estimation of the probabilities of
                                          these outcomes
                                       Risk evaluation
                                       (risk estimation + risk evaluation
                                         = risk assessment)
    Risk management                    Risk management

Figure 3.1 Relationship between the stages in risk assessment and risk management as described by the US NAS/NRC (NAS/NRC, 1983) and the UK Royal Society (Royal Society, 1983). The figure illustrates both the differences in terminology between the two reports and the relationships between the different stages.
3.2.1 Terminology: hazard, risk, safety
In the context of risk assessment, the definitions of risk and hazard are much more specific than the everyday use of the words. Hazard is defined as 'the inherent properties of a chemical substance or mixture which make it capable of causing adverse effects in man or the environment when a particular degree of exposure occurs'. Risk is 'the predicted or actual frequency of occurrence of an adverse effect of a chemical substance or mixture from a given exposure to humans or the environment' (Richardson, 1985; Lovell, 1986). Terms involving the word 'safe' are particularly difficult because they rely upon the commonsense usage of 'safe', meaning free from danger. However, the concept of being totally free from danger or absolute safety is not generally recognized in risk assessment. (A readable general discussion of the concepts underlying risk can be found in the British Medical Association (1990).)
3.2.2 QRA
Definitions of QRA vary considerably and this can lead to confusion. In the context of toxicological studies, QRA has become associated, although not exclusively, with the methodology for extrapolating from the results of animal studies carried out at high doses to estimate the potential risks to the human population at much lower doses. Limiting the definition of QRA to just the fitting of mathematical models would imply
that the use of the safety factor approach (described later) would not be considered QRA. However, this approach was specifically developed to provide a quantitative assessment as opposed to the previous qualitative assessment.
3.3 QRA and food safety: UK and US perspectives

The major developed countries usually differentiate between the carcinogenic and non-carcinogenic properties of the chemicals they regulate. In general, non-carcinogenic chemicals have been regulated by approaches based upon the determination of acceptable or tolerable daily intakes (ADIs or TDIs) by applying safety factors to a no observed effect level (NOEL) determined in toxicological studies (Lu and Sielken, 1991). Underlying the approach is the belief that there is a threshold below which the toxic events will not occur. Major differences occur, however, in how countries regulate chemicals determined to be carcinogens.

The use of QRA with food chemicals has largely been restricted to North America, particularly the USA. The methodology is not accepted by all international regulatory authorities, despite considerable promotion by its proponents. In Europe there has been a reluctance to use QRA methods, and instead there has been a concentration on more qualitative assessments. In some cases a safety factor approach is used when it is believed that the chemical is a non-genotoxic carcinogen and there is a threshold below which the chemical does not cause cancer. Jasanoff's comment in 1986 still holds for the UK and probably a number of other countries:

American agencies seem prepared to use mathematical models to estimate risk in situations where British experts either view the risks as insignificant or the technology of extrapolation as unproven. From the standpoint of the British regulatory process, the discussion of quantitative risk assessment remains largely academic... (Jasanoff, 1986)
QRA has, therefore, come to dominate US thinking regarding the regulation of carcinogens where it is assumed that thresholds are absent. (In general, the safety factor approach has been accepted for the regulation of non-carcinogenic compounds using toxicological endpoints assumed to have thresholds, although suggestions for the extension of QRA to this area have been made.) This is partly because of the success of the arguments of proponents for the approach but mainly as a consequence of the legal framework surrounding the US regulatory system. Several US laws and regulations now have a requirement for some form of formal risk assessment.
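The safety factor approach contrasted with QRA throughout this section can be sketched in a few lines. The NOEL value below is hypothetical; the 10 × 10 default reflects the conventional nominal factors for animal-to-human extrapolation and inter-individual variability mentioned elsewhere in this book.

```python
# Minimal sketch of the safety (uncertainty) factor approach:
# ADI = NOEL / (interspecies factor * intraspecies factor).
# The conventional default product is 10 * 10 = 100.

def adi_mg_per_kg(noel_mg_per_kg, interspecies=10, intraspecies=10):
    """ADI in mg/kg bw/day from a NOEL in mg/kg bw/day."""
    return noel_mg_per_kg / (interspecies * intraspecies)

# Hypothetical NOEL of 50 mg/kg bw/day from a chronic animal study:
print(adi_mg_per_kg(50.0))  # -> 0.5
```

Unlike QRA, this calculation assigns no probability to the outcome; it simply assumes a threshold and divides it down, which is why it yields an 'acceptable intake' rather than a risk estimate.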
3.3.1 Before Delaney

The development of QRA in the toxicological assessment of food chemicals in the USA derives from the history of specific legislative pressures on the regulatory processes in the USA. Initially, regulation of chemicals had been based upon qualitative assessments of the risks they posed, such as whether or not they were carcinogenic. The safety factor method was developed by Lehman and Fitzhugh (1954) to provide a quantitative approach for the regulation of non-carcinogenic food chemicals to replace this qualitative approach. Several chemicals were also being identified as carcinogens in the primitive carcinogenicity tests being carried out, although it was still thought that carcinogenicity was a relatively rare phenomenon.

3.3.2 The Delaney Clause
Widespread concerns about these carcinogens led in 1958 to the incorporation of the Food Additives Amendment into the 1938 US Federal Food, Drug, and Cosmetic Act (FFDCA). One amendment was based on a proposal from the floor of the House of Representatives by representative James J. Delaney. It was incorporated, after limited discussion, into the Act and has since been known as the Delaney Clause. The Clause has been widely interpreted as meaning that no chemical that causes cancer in laboratory animals can be added to the food supply. The approach can be termed a zero-risk philosophy in that the Clause appears to imply that there is no safe dose of an animal carcinogen and that there are no permitted levels of such a chemical in food. The full text of the key part of the Clause is as follows:

no additive shall be deemed safe if it is found to induce cancer when ingested by man or animal, or if it is found, after tests which are appropriate for the evaluation of the safety of food additives, to induce cancer in animals or man.
It is interesting to note that many of the issues surrounding the problems of how to define 'safe', how to accommodate new information and how to ensure interpretation in 'the light of reason' were addressed in the reports and discussions of the 1958 and 1960 Amendments to the FFDCA. The term 'safe' was not defined specifically in the 1958 Amendments; however, the legislative history shows that it was clearly taken to mean the 'reasonable certainty of no harm'. As part of the General Safety Clause of the FFDCA, there is the statement that the Clause 'does not - and cannot - require proof beyond any reasonable doubt that no harm will result under any conceivable circumstances'.

3.3.3 After Delaney: diethylstilboestrol, packaging

Complications arose almost immediately for a number of reasons. First, carcinogenicity was recognized as a property of many chemicals rather
than a rare phenomenon, as the number and the power of the tests to detect carcinogenicity increased. Second, increasingly small quantities of chemical, such as PPM (parts per million) or even PPT (parts per trillion), could be detected as analytical methods improved (Flamm, 1986). Third, problems arose when contaminants of a food additive rather than the additive itself were determined to be carcinogenic. A spur to the development of QRA was the case of residues in food of the carcinogenic veterinary drug diethylstilboestrol (DES) administered to cattle in the 1960s. (This was before DES was banned from use in animals by the FDA in 1984 when it was identified as a human carcinogen.) The DES Proviso of 1962 was made to the FFDCA as an attempt to allow the continued use of carcinogenic drugs in animals provided that there were no detectable residues in the edible portions of the carcass using analytical methods prescribed by the US Secretary of the Department of Health and Human Services. However, attempts to designate analytical methods which could show exposures had not occurred or were sufficiently low to provide minimal risk were unsuccessful. A series of court cases resulted in clarifications of the interpretation of the Delaney Clause so that it did not apply to carcinogenic contaminants such as some packaging materials or to carcinogenic impurities in a non-carcinogenic food additive. Specific congressional action mandated the continued use of saccharin after it had been determined to cause bladder tumours in rats administered high doses. This action required the risk management action of ensuring that packets of saccharin carried a specific warning. The Delaney Clause only applies, however, to chemicals that have been deliberately added to food. The Clause consequently does not apply to natural ingredients or constituents of food, or to natural or accidental contaminants such as carcinogenic fungal contaminants like the aflatoxins. 
These and carcinogenic impurities of food additives which in their pure form are not carcinogenic can then be regulated under the general safety clause of the FFDCA, where a risk-benefit approach is permitted, because they are not being added deliberately.

3.3.4 The 1990s and court rulings
The US FDA has at various times tried to circumvent the problems posed by the Delaney Clause by applying QRA and arguing that some levels of risk were so low as to be considered 'tolerable'. The concept of small but 'tolerable' risks is based upon the de minimis principle. This approach accepts that there are some levels of risk that are too small to be considered relevant for concern in everyday life. Subsequently, some US courts have proposed that the legal concept of de minimis was appropriate for the consideration of risks from exposures to chemicals. Based upon the loose translation of the phrase 'de minimis non curat lex' as 'the law
does not concern itself with trifles', both the EPA and FDA adopted the de minimis approach to try to help them make what they considered to be reasonable decisions concerning carcinogens. Hopper and Oehme (1989) summarized the position as follows: 'the de minimis principle allows the regulatory agencies to set exposure levels that will result in cancer, but to do so at such an insignificant rate as to be acceptable and in the best interest of the public'. However, various challenges to this approach led to US courts essentially ruling that the Delaney Clause should remain in place. The courts indicated that they were very sympathetic to the argument based upon de minimis. Some, indeed, had supported the use of the de minimis rule in previous cases, except in those where the US Congress had been 'extraordinarily rigid'. However, with respect to the Delaney Clause cases they were forced to base their decision on the very clear and unequivocal interpretation required by the Clause. The courts clearly found this interpretation unsatisfactory and gave a clear message that they felt they were bound by the extremely clear mandate by Congress which could only be altered by elected senators and representatives. The courts agreed with the agencies' arguments, saying that requiring rigid interpretation of Delaney could 'lead to regulation that not only is "absurd or futile" in some general cost-benefit sense, but also is directly contrary to the primary legislative goal [of protection of the public health]' (italics in original). The problem for the courts was that despite real evidence of an appreciation of the problems associated with a zero-risk stance, the clear meaning of the statutory language and legislative history (the congressional hearings and floor arguments) supported Delaney's strict interpretation. The courts felt that they had no room for manoeuvre and had to pass responsibility back to Congress to change the law.

3.3.5 Moves to change Delaney (unfinished business)
The USA, therefore, continued to have problems with the interpretation of the Delaney Clause. One ruling, for instance, was that the Delaney Clause has to be strictly applied to pesticides entering processed food. The maximum levels of pesticides in plant products are established by the EPA, with the FDA monitoring adherence to these levels. Under Section 408 of the FFDCA, the EPA determines appropriate tolerance levels in or on agricultural commodities by considering both health effects and the value of the use, i.e. a risk-benefit approach. However, Section 409, which applies only to processed foods, includes the Delaney Clause and required the EPA to consider only pesticide risk and not any offsetting benefits. This was leading to the EPA revoking or banning pesticides which occur on or in raw products and might be concentrated by the processing of food. This has led to complexities such as a debate about whether washing
a lettuce was a form of food processing, whether dried fruit had been processed, and the setting of tolerance levels for a pesticide if it was to be used on salad tomatoes but a ban on its use if the tomatoes were to be canned. The continued debate on the Delaney Clause and de minimis in the USA has resulted in various attempts to overcome the problems caused by the Clause. A plethora of bills has been discussed in Congress, many suggesting the use of risk assessment and management processes based upon risk-benefit analyses. The US Congress recently (in August 1996) passed the Food Quality Protection Act (FQPA). This removed the specific use of the Delaney Clause of the FFDCA which had been interpreted as requiring the revoking of tolerances for food residues of pesticides where there was evidence of carcinogenicity. A number of the attempts to replace this part of the Delaney Clause had relied on the development of explicit acceptable or tolerable specific risk levels, such as a numerical risk of 10⁻⁶ determined by QRA. These are referred to as 'bright lines' estimates (Rosenthal et al., 1992). 3.3.6
Department of Health, Committee on Carcinogenicity approaches
In contrast to the US approach, the UK Department of Health (DoH) Committee on Carcinogenicity (CoC) has stated that a safety factor approach could be applied to non-genotoxic carcinogens provided that the underlying mechanisms are understood (Department of Health, 1991). It does not back the use of QRAs for chemical carcinogens, preferring a case-by-case weight-of-evidence qualitative method (Department of Health, 1991). This avoids the use of QRA methods to identify exposure levels which are virtually safe. Rubery et al. (1990), in discussing the criteria for setting quantitative estimates of acceptable intakes of chemicals in the UK, state that the UK Committee on Toxicity believes that genotoxic carcinogens should not be deliberately added to the diet. In the case of contaminants which are genotoxic carcinogens, tolerable intakes are set which are likely to be at the lowest level achievable; this level will often be related to the limit of detection of a method of chemical assay for the contaminant.

3.3.7 EU approaches

The European Commission has recently developed a risk assessment directive but this made no attempt to apply QRA approaches. Differences in terminology and definitions between the European Union (EU) and US approaches to risk assessment remain a source of potential confusion. At least two EU countries, The Netherlands and Denmark, have used QRA approaches in their risk assessment procedures.
3.3.8 GATT, NAFTA

Important in the future may be the consequences for international trade of different methods of assessing risks. Differences in the tolerable or acceptable levels of chemicals in food could cause disputes over restrictions on food imports by different countries or trading blocs. Opponents in the USA of reforms to the Delaney Clause have argued that agreements such as the General Agreement on Tariffs and Trade (GATT) and the North American Free Trade Agreement (NAFTA) would lead to the import of food with higher levels of chemicals than at present permitted in the USA. The counter-argument has been made that the Sanitary and Phytosanitary (SPS) Agreement in GATT means that approaches such as the Delaney Clause could still be applied in the USA. Aspects of QRA are likely, however, to play an increasing role in issues relating to the trade in food.
3.4 Advantages of QRA

A series of advantages has been propounded for QRA, as follows. QRA provides an objective numerical value which can be used for ranking risks, for setting priorities, and for assessing the implications of actions and inactions. It is argued that it allows a formalization of a risk-benefit approach and provides a means of overcoming the problem of there being no such thing as zero risk. It is suggested that the results of QRA can be made understandable to the non-scientist, thereby helping the communication of risk to the general public and improving their perception of different risks. 3.4.1
VSD, de minimis, 'bright lines' and negligible risk
A central concept in the development of QRA is the virtually (or virtual) safe dose (VSD). Mantel and Bryan (1961) promoted the use of extrapolation using a mathematical model to give levels of exposure which could be considered 'virtually safe' or to pose 'acceptable' or at least 'tolerable' levels of non-zero risk. They argued that such methods, while not guaranteeing absolute safety, would satisfy, in their words, 'an arbitrary definition of "virtual safety"'. Their arbitrary definition of 'virtual safety' consisted of a 1 in 100 million (1/10⁸) level of permissible risk, a 99% confidence level and a probit slope of 1. The probability of carcinogenicity would then be low enough to be considered virtually safe and the dose incurring such a low level of risk could be considered the VSD. They pointed out that 'other arbitrary definitions of "virtual safety" may be employed as conditions require'.
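The mechanics of such an extrapolation can be sketched as follows. The observed bioassay point (10% tumour incidence at 50 mg/kg/day) and the helper name are illustrative assumptions, not real data, and the single-point 'fit' is a simplification: Mantel and Bryan's actual procedure extrapolates from an upper confidence limit on the observed response.

```python
from statistics import NormalDist

# A sketch of Mantel-Bryan-style probit extrapolation, with the slope of
# 1 probit per log10 dose unit and the 1/10^8 permissible risk from the
# definition quoted above. The observed point is hypothetical.
norm = NormalDist()

def mantel_bryan_vsd(observed_dose, observed_risk, target_risk=1e-8, slope=1.0):
    """Slide down a probit line of the given slope from the observed
    response to the dose associated with the target risk."""
    z_obs = norm.inv_cdf(observed_risk)
    z_target = norm.inv_cdf(target_risk)
    return observed_dose * 10 ** ((z_target - z_obs) / slope)

# Hypothetical observation: 10% tumour incidence at 50 mg/kg/day.
vsd = mantel_bryan_vsd(50.0, 0.10)
print(f"VSD for a 1/10^8 risk: {vsd:.2e} mg/kg/day")
```

With these numbers the VSD falls several orders of magnitude below the observed dose; a shallower probit slope would push it lower still.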
The FDA subsequently revised the VSD in 1977 to be based upon a level of one extra cancer per million (10⁻⁶) people exposed for a lifetime instead of Mantel and Bryan's original concept of an annual risk. The VSD is, therefore, now defined as the dose which in a worst-case scenario would cause one extra case of cancer among a million people exposed for a lifetime to that dose level. This is often qualified by some phrase such as 'this represents an upper limit on the estimated risk and the true risk may be much lower or even zero'. It is not always appreciated that the VSD may considerably overestimate the true risk because of the assumptions made. Consequently, the use of the VSD as an estimate of the true risk associated with an activity is commonly misunderstood and can affect the communication of the risk to a wider audience. The concept of a 10⁻⁶ risk as acceptable resulted from studies of what risks the general public would tolerate. The 10⁻⁶ figure is not a definitive value applicable in all circumstances: occupational exposures where higher risk levels occur may be considered tolerable. The VSD is also called a risk-specific dose (RsD) by organizations such as the EPA. This is to provide a more neutral, less emotionally charged term, avoiding the connotations associated with the word 'safe'. The basis for the 10⁻⁶ must be clearly defined. It is also important to make explicit whether the 10⁻⁶ represents the whole population or a set of susceptible individuals. In all cases it should actually refer to a million people being exposed rather than to a smaller subgroup receiving the exposure within a larger group of a million. It should also be clear whether it refers to cases of cancer or deaths from cancer and whether it refers to a lifetime or annual risk.
A risk defined only by a number would be about two orders of magnitude higher as an annual than as a lifetime risk (based on an average lifespan of 70-plus years), assuming that the risk is uniform over time. In the USA, with a population of about 250 million and an average lifespan of 73 years, a 10⁻⁶ lifetime risk is equivalent to about 3.4 extra deaths per year if the whole population were exposed. In the UK, this would be equivalent to 0.8 extra deaths per year and in the EU (based on a population of about 370 million) 5.1 extra deaths per year. Putting this into perspective, about 450 000 US citizens die of cancer out of the annual US death toll of 2.5 million. In the UK, about 575 000 people (1% of the population of about 55 million) die each year (British Medical Association, 1990).
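The arithmetic behind these figures can be sketched as follows; the population and lifespan values are those quoted in the text, and the helper name is illustrative.

```python
# Back-of-envelope conversion: a 1e-6 lifetime risk across a population,
# divided by the average lifespan, gives the expected number of extra
# deaths per year (assuming the risk is uniform over time).
def extra_deaths_per_year(population, lifetime_risk=1e-6, lifespan_years=73):
    return population * lifetime_risk / lifespan_years

print(round(extra_deaths_per_year(250e6), 1))                    # USA: 3.4
print(round(extra_deaths_per_year(55e6, lifespan_years=70), 1))  # UK: 0.8
print(round(extra_deaths_per_year(370e6), 1))                    # EU: 5.1
```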
3.4.2 ALARA and BATNEEC

An alternative approach to managing risks, rather than setting specific risk levels, is to base the management on the concept of ALARA (as low as reasonably achievable) or the more stringent concept of BATNEEC
(best available technology not entailing excessive costs). In both cases the approach involves the person or organization introducing the risk showing that it is not seriously out of proportion to the benefits achieved. Clearly, terms like 'reasonably achievable' or 'excessive costs' are subjective and open to different interpretations by different people. The UK's approach of reducing levels of genotoxic carcinogens (and of non-genotoxic carcinogens where the mechanism of action is not adequately understood) found in food to the lowest level achievable is one example of these approaches.
3.5 Safety factor versus mathematical modelling

3.5.1 Safety factor

Non-carcinogenic chemicals in the food supply have traditionally been regulated using the safety factor approach developed by Lehman and Fitzhugh (1954). This approach establishes intakes such as ADIs and TDIs which can be considered 'safe'. An important concept is that these ADIs are considered to carry a 'margin of safety' and are defined as 'the daily intake of a chemical which, during an entire lifetime, appears to be without appreciable risk on the basis of all known facts at the time' (emphasis added) (World Health Organization, 1962). In this context 'without appreciable risk' is taken to mean 'that there is practical certainty that injury will not result even during a lifetime exposure' (World Health Organization, 1962; quoted by Lu and Sielken, 1991). This definition of concentrations of food chemicals in terms of safety rather than risk can be a source of some confusion. However, as discussed earlier, in most regulations 'safe' is usually considered to mean relative rather than absolute safety. A definition of safety used by the UK CoC is that 'safety is the converse of risk and, in common usage, refers to a situation of minimal risk' (Department of Health, 1991). The ADI or TDI has traditionally been derived by identifying a no observed effect level (NOEL) in an animal toxicity study. This is the highest dose level at which no changes are seen in treated animals compared with concurrent control animals in the study. The concept of a NOEL has been refined over time by recognizing that exposure to dose levels of a chemical can cause changes in the exposed compared with the control animals but that these changes are not detrimental or adverse. The highest such dose level is called the no observed adverse effect level (NOAEL) or, sometimes, the no adverse effect level (NAEL) (Berry, 1988).
The NOEL or NOAEL is then divided by a safety factor to produce a concentration which is considered to be below the threshold which would pose measurable or appreciable human risk. The safety factor allows for
uncertainty in the interpretation of the toxicity data with respect to effects on humans. Some organizations prefer to set a single variable factor. Others divide a safety factor of, for instance, 100 into two factors of 10: first, to account for possible differences in species susceptibility; and second, for interindividual variability in the human population (Weil, 1972; Lu and Sielken, 1991). The safety factor may be reduced if there are sound and reassuring human data. Alternatively, it may be increased if the toxic effects seen in animals are severe and irreversible or if there are concerns regarding the quality or completeness of the available data. The actual value used is determined on a case-by-case basis with expert judgement. Further refinement of the numerical values of the safety factors based upon consideration of the pharmacokinetic and pharmacodynamic data has been suggested by Renwick (1991). Four main criticisms of the safety factor approach have been made by advocates of a mathematical modelling approach: first, that the approach presumes there is a threshold below which no adverse effects occur; second, that sample sizes are not considered, so that good experiments may be penalized compared with poor experiments; third, that the dose-response relationship is not considered because only the NOAEL is used; and fourth, that new information may challenge the considerations used to determine the NOAEL. Proponents of the approach argue, however, that the flexibility in the choice of safety factor adequately reflects the degree of confidence in the data. The ADI or TDI has been renamed the reference dose (RfD), and safety factors uncertainty factors, by the US EPA (Barnes and Dourson, 1988).
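The derivation can be sketched as follows; the NOAEL value and the helper name are hypothetical, and the default 100-fold factor is the 10 × 10 split (interspecies × interindividual) described above.

```python
# A sketch of the traditional safety-factor derivation of an ADI or TDI:
# divide the NOAEL by a composite safety (uncertainty) factor. The `extra`
# factor stands in for case-by-case expert judgement, e.g. for severe,
# irreversible effects or deficiencies in the database.
def adi_from_noael(noael_mg_per_kg, interspecies=10, interindividual=10, extra=1):
    """Return an ADI in mg/kg body weight/day."""
    return noael_mg_per_kg / (interspecies * interindividual * extra)

print(adi_from_noael(5.0))            # default 100-fold factor -> 0.05
print(adi_from_noael(5.0, extra=10))  # extra 10-fold factor -> 0.005
```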
The RfD is defined by the EPA as 'an estimate (with uncertainty spanning perhaps an order of magnitude) of a daily exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime' (Johannsen, 1990). This change in terminology was intended to provide more neutral terms without the connotations of approval and endorsement that words such as acceptable, tolerable or safety imply. The EPA believed that this change in nomenclature would promote greater consistency in its assessment of non-carcinogenic effects and provide a clearer separation of the risk assessment and management roles. Whether or not the safety factor approach is a form of QRA depends upon how QRA is defined. It has, for instance, also been called numerical risk assessment, while the UK's DoH refers to the derivation of ADIs as a quantitative approach rather than a qualitative approach (Rubery et al., 1990). It represents an alternative methodology for deriving numerical values which can be used in the management of risk. A key concept is that it is based upon a different philosophy from that underlying the mathematical modelling applied to carcinogenic endpoints, in that there is an assumption of a threshold dose below which no toxic
effects will occur. In contrast, most of the commonly used mathematical models are based upon the assumption that there is some finite probability of cancer as a consequence of any level of exposure.

3.5.2 Mathematical modelling

History: Mantel-Bryan. Mantel and Bryan's (1961) pioneering use of mathematical modelling to try to circumvent the problems caused by the Delaney Clause was based on the Probit model (a mathematical model used in the LD50 test for acute toxicity) being fitted to data observed in a long-term rodent cancer bioassay (LTRCB). These studies usually consist of a number of treated groups of rodents and a control group. The data are the number of animals with tumours out of the number exposed. An equation fitting the model to the data is then extrapolated to low doses of the chemical to derive a VSD which, it is considered, would pose little risk. It is this dose-response modelling aspect of risk assessment that many toxicologists think of as QRA. However, the results obtained at the time using Mantel and Bryan's approach were considered by some to be too 'liberal'. Consequently, a range of different and competing mathematical models was proposed in the 1970s for fitting to LTRCB data.

Models. One complication of a discussion of mathematical models is that classification into distinct groupings is difficult because of the considerable overlap between the different approaches. One separation, however, is into mechanistic models and tolerance models. Mechanistic models assume that there is some mechanistic basis to the process of carcinogenesis. The mathematical models are based upon the concepts of cancer resulting from a series of hits or, alternatively, of cells passing through a series of stages until they become malignant. The simplest of these models, the one-hit or one-stage, can be considered to be related either to a single 'hit' occurring for a tumour to develop or to a cell passing through a single stage.
The multi-hit model assumes that a number of 'hits' is required, while the multi-stage model assumes that the cell has to pass through a series of stages before a tumour occurs. In the multi-hit model, the order in which the hits occur is not important, while the multi-stage model assumes that there is a progression in the order of the stages. The one-hit model, derived from radiation studies, was initially preferred over the Probit-based approach when the latter was judged to be not conservative enough. The one-hit model appeared consistent with the theory of a single molecule of a chemical being sufficient to cause cancer. It provided conservative estimates of risk but had the disadvantage of
being a poor fit to many sets of data. The multi-hit model was for a short time favoured by the Scientific Committee of the Food Safety Council (Food Safety Council, 1980) as a suitable model. It fell out of favour, though, following criticism of its mathematical properties and because its biological interpretation in terms of a series of hits was compromised by the realization that the best estimate of the number of hits was not necessarily an integer and that fractions of hits were possible. The multi-stage model took over from the one-hit model as the model favoured by most US regulatory agencies. The multi-stage model provided a better fit to the observed data than the one-hit model. Arguments for the use of the multi-stage model were put forward by Anderson of the US EPA's Carcinogenicity Assessment Group (CAG) (Anderson et al., 1983). These included the following: first, most agents that cause cancer also irreversibly damaged DNA; second, a large proportion of carcinogens were mutagens; third, there was an expectation that quantal responses characteristic of mutagens were likely to be associated with a linear non-threshold dose-response relationship; fourth, experimental evidence from radiation and a variety of chemicals suggested that such a model was appropriate; fifth, linear no-threshold low dose-response relationships were consistent with the relatively few epidemiological studies of cancer response; and sixth, initiation and promotion experiments provided evidence consistent with a linear non-threshold hypothesis. Anderson, however, attenuated the arguments for the use of the model by stating that 'there is no really solid scientific basis for any mathematical extrapolation model relating carcinogenic exposures to cancer risks of the extremely low levels of concentration that must be dealt with in evaluating environmental hazards'. Issues surrounding the practical use of the multi-stage model will be discussed later.
Tolerance models are based upon the assumption that a population of exposed individuals has a distribution of tolerances. Tolerance is equivalent to an individual threshold such that at a particular dose a proportion of individuals whose tolerances have been exceeded will develop a tumour while those individuals whose tolerances have not been exceeded will not. Such tolerance models underlie the Probit model used by Mantel and Bryan and the analogous Logit model. Models based upon the Weibull distribution also assume a tolerance distribution. This has been widely used for modelling, for instance, the life of electrical components. These are extremely versatile models capable of fitting a wide range of data sets. The one-hit or one-stage model is, in fact, a special case of the Weibull, illustrating the overlap between the two classes of quantal models. Both the mechanistic and tolerance models appear, therefore, to have some biological justification. The mechanistic models initially seem to relate to possible mechanisms of toxic events such as chemicals 'hitting'
and interacting with sensitive targets such as DNA. Similarly, the multi-stage model of a progression of cell changes appears analogous to the stages of initiation, promotion and progression associated with carcinogenesis. The tolerance models might be related to the interindividual differences in susceptibility to toxicological effects seen in a population. However, these apparent similarities do not represent a true reflection of the underlying complexity of carcinogenicity. The models, in practice, bear little resemblance to the actual biological models of carcinogenicity. They should only be considered as a convenient way of deriving mathematical functions which fit the experimentally derived dose-response data. The estimates of the mathematical parameters have no simple biological interpretation and no quantal model has any better credentials in terms of its biological properties than any other. The properties of these quantal models were investigated in detail in the early 1980s. In general, most of the models provide a good fit to the observed data but the estimates of the VSDs associated with specific low levels of risk derived by extrapolation differed between the models, in some cases by orders of magnitude. The Food Safety Council (1980) showed that the models had fairly predictable properties. The ordering of the low-dose extrapolations was consistent over data sets which were either linear or showed sub-linear (convex) dose-response relationships. The VSDs produced by fitting the models to the different data sets were, in general, in the following order: one-hit < multi-stage < Weibull < multi-hit < Logit < Probit. In some rare cases where the dose-response relationship was supra-linear, such as data on vinyl chloride, the order was reversed. The implication is that the estimate of risk associated with a chemical will be determined by the choice of model used.
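The divergence can be illustrated with a toy calculation. The single observed point (10% incidence at 50 mg/kg/day), the probit slope of 1 and the single-point 'fits' are all illustrative assumptions, not a real fitting procedure; they serve only to show that models agreeing in the observed range separate sharply on extrapolation.

```python
import math
from statistics import NormalDist

# Extrapolate the same hypothetical observation under two quantal models.
obs_dose, obs_risk, target = 50.0, 0.10, 1e-6

# One-hit model: P(d) = 1 - exp(-lam*d); solve lam from the observed point.
lam = -math.log(1.0 - obs_risk) / obs_dose
vsd_one_hit = -math.log(1.0 - target) / lam

# Probit model (slope 1 probit per log10 dose): shift down on the probit scale.
norm = NormalDist()
vsd_probit = obs_dose * 10 ** (norm.inv_cdf(target) - norm.inv_cdf(obs_risk))

print(f"one-hit VSD: {vsd_one_hit:.3g} mg/kg/day")
print(f"Probit  VSD: {vsd_probit:.3g} mg/kg/day")
```

As the ordering quoted above predicts, the one-hit VSD comes out well below the Probit VSD for the same data.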
A consistent choice of the multi-stage model will give very much lower and, hence, more conservative VSDs compared with choosing the Probit model. This consistent ordering of VSDs from the models also means that comparisons should not be made between estimates of risks from chemicals when different models have been used. It would be unwise, for instance, to compare estimates of risk obtained with the multi-stage model with those obtained from a Weibull model if the aim was to determine priorities for action on the basis of the estimates of risk presented. It is also wrong to base the choice of model on goodness-of-fit criteria alone. Selecting the Probit model for a specific chemical because it gave the best fit, for instance, could produce estimates of risk orders of magnitude lower than those from another model which fitted the data almost as well. In general, the only model which had a consistently poor fit to many data sets was the one-hit model. It was, however, widely used by the EPA in the late 1970s and early 1980s because it could be fitted to data with
only a limited number of groups. Many of the QRAs carried out using this approach and used for comparisons between chemicals were still being listed in the late 1980s.

EPA 1986 guidelines and LMS. The most widely used mathematical model has been the linearized multi-stage (LMS) model. It was used originally in a review by the US National Academy of Sciences of the risks associated with chemicals in relation to the US Safe Drinking Water Act. The EPA adopted the multi-stage model in 1980 following problems associated with the one-hit model. It became the 'default' model for the US EPA in 1986 when the EPA stated in its Guidelines on Carcinogenic Risk Assessment that the LMS should be used (US Environmental Protection Agency, 1986). The EPA stated that it would accept QRAs carried out using other models but would expect such methods to be justified. Extensive revisions to these guidelines (US Environmental Protection Agency, 1996) are now being finalized following public comment, with the intention of publishing the guidelines in their final form in 1997 or 1998. Two relevant quotes from the 1986 guidelines are: 'In the absence of adequate information to the contrary, the linearized multistage procedure will be employed' and 'considerable uncertainty will remain concerning responses at low doses; therefore in most cases an upper-bound risk estimate using the linearized multistage model should be presented' (US Environmental Protection Agency, 1986).

3.6 The LMS model

3.6.1
Theory
The LMS is a refinement of the multi-stage model. The model now widely used is based upon a formulation of the multi-stage model into a polynomial form by Crump et al. (1976). A property of the multi-stage model is that at low dose there is an assumption of a linear dose-response relationship irrespective of the shape of the response in the observed range. The LMS model described by Crump (1984a) fits the LTRCB data and uses the upper 95% confidence limit of the linear term of the polynomial, q1, for low-dose prediction. This term, q1*, is used in a simple model of the dose-response relationship which is approximately linear at low doses. This, in practice, produces a value which is the largest linear term that is consistent with the dose-response data. Estimates of risk produced using this term are referred to as the upper confidence level (UCL) risks. The EPA describes q1* as 'a plausible upper limit to the risk that is consistent with some proposed mechanism of carcinogenesis'. The EPA recognizes that 'the true value of the risk is unknown, and may be as low as zero' (US Environmental Protection Agency, 1986).
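In symbols, the polynomial formulation can be written as follows (a standard presentation of the Crump form, with notation following the text):

```latex
P(d) = 1 - \exp\!\bigl[-(q_0 + q_1 d + q_2 d^2 + \cdots + q_k d^k)\bigr],
\qquad q_i \ge 0
% extra risk over background, approximately linear at low dose:
\frac{P(d) - P(0)}{1 - P(0)} \approx q_1 d \quad (\text{small } d),
\qquad \text{upper bound: } q_1^{*}\, d
```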
At low doses, the extra lifetime risk of developing cancer is approximately equal to q1* times the administered dose (d). 3.6.2
The LMS model in practice
The LMS model can be fitted by a number of computer programs which provide maximum likelihood estimates (MLE) of the various parameters, q0, q1, q2, etc., in the model. The MLEs are those values which provide the best correspondence or 'fit' between the predicted values based upon the LMS model and the observed data in the LTRCB (Figure 3.2). The MLE can be considered as the central or point estimate for the parameter. This implies that the 'true' parameter value is equally likely to be greater or smaller than the MLE estimate. Ideally, a measure of the range or spread of the possible estimates should also be given. One approach is to give a confidence interval within which the true parameter value should lie. The upper 95% confidence level (95% UCL) of the parameter q1, denoted q1*, is usually quoted for the LMS. In some cases the statistical algorithm used will set some of the values of the qi parameters (including the linear term, q1) to zero to get the best fit to the observed data. A goodness-of-fit test measures how well the model fits the data.
Figure 3.2 An illustration of the fitting of a linearized multi-stage (LMS) model to data from a long-term rodent cancer bioassay (LTRCB), plotting the proportion of tumour-bearing animals, P(d), against the administered dose (mg/kg/day). Doses of 0, 50, 100 and 200 mg/kg/day gave tumour incidences of 2/50, 5/50, 7/50 and 20/50 respectively. The LMS model provides estimates of the parameters q0, q1, q2, q3 and q1* associated with the model of 4.357 × 10⁻², 8.729 × 10⁻⁴, 0, 3.583 × 10⁻⁸ and 2.214 × 10⁻³ respectively.
The extrapolation of the mathematical models to low doses is carried out to determine the risks associated with specific low dose levels or to determine the VSD associated with a specific risk. This is illustrated in Figure 3.3. The VSD associated with a 10⁻⁶ increased risk over background is estimated by drawing a horizontal line from the 10⁻⁶ extra-risk point on the risk axis to the mathematical function extrapolated down into the low-dose region, and then dropping a perpendicular to the dose axis; the point where the perpendicular meets the dose axis is the VSD. The risk associated with a specific dose is calculated in a similar fashion by reading off the extra risk associated with that dose from the particular extrapolated line. A potentially confusing feature is that when the 95% UCL on q1, q1*, is used to produce a VSD for a specific risk, the value obtained is referred to as the lower confidence level (LCL) VSD. This is illustrated in Figure 3.3. 3.6.3
Limitations of the mathematical models used in QRA
The LMS model has the advantage of being able to fit a wide range of data sets. In the early 1980s it was the most mathematically tractable model and also appeared to be biologically plausible. This justified its widespread adoption. However, the approach has a number of limitations (Lovell and Thomas, 1996).
Figure 3.3 An illustration of the low-dose extrapolation associated with the LMS model used in Figure 3.2. The maximum likelihood estimate (MLE) virtually safe dose (VSD) is 1.1456 × 10⁻³ mg/kg/day (from 10⁻⁶/8.729 × 10⁻⁴), while the 95% lower confidence limit (95% LCL) VSD is 4.516 × 10⁻⁴ mg/kg/day (from 10⁻⁶/2.214 × 10⁻³). The slopes q1 and q1* represent the MLE or 'best' estimate and the upper 95% confidence limit (95% UCL) or 'worst-case' upper-bound estimates of a linear dose-response relationship at low dose. Dashed lines illustrate the extrapolation associated with estimating a dose associated with a one in a million (10⁻⁶) increase in cancer risk.
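The arithmetic behind these extrapolations is simple once q1 and q1* are in hand; the following sketch reproduces the values quoted for Figures 3.2 and 3.3.

```python
# At low dose the extra risk is ~ slope * dose, so the dose giving a 1e-6
# extra risk is simply 1e-6 divided by the slope estimate.
q1 = 8.729e-4        # MLE of the linear term (per mg/kg/day)
q1_star = 2.214e-3   # 95% UCL on q1

target_risk = 1e-6
mle_vsd = target_risk / q1        # ~1.1456e-3 mg/kg/day
lcl_vsd = target_risk / q1_star   # ~4.517e-4 mg/kg/day

print(f"MLE VSD:     {mle_vsd:.4e} mg/kg/day")
print(f"95% LCL VSD: {lcl_vsd:.4e} mg/kg/day")
```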
Estimates of the parameters, particularly q1, the linear term, are unstable, with switches from q1 having a positive non-zero value to q1 being set equal to zero as a consequence of small changes in the data. This instability in the values of q1 led to the US EPA using the considerably more conservative 95% UCL, q1*. However, in comparisons across data sets, the values of q1* can be similar despite large differences in the values of q1. This raises questions about the justification underlying this approach. The value of q1 obtained also does not necessarily reflect either the statistical significance of the experimental results or the biological interpretation of the data. Data where there is no statistical significance can have higher q1 values than results where the biological interpretation is clearly positive. The values of q1* are very similar for a given experiment irrespective of the biological results, even when the interpretation of some of the results is negative. The value of q1* is, thus, insensitive to changes in the data. The value of q1* appears to be more closely related to the value of the highest dose level used in the study than to the results observed with the biological data. There is a strong correlation between the maximum tolerated dose (MTD), the TD50, the LD50 and q1*. The LMS estimate may, therefore, be considered as an expensive measure of acute and subchronic toxicity. There is considerable subjectivity in the choice of data (e.g. study, species, sex, site, lesion) to include in the QRA process. This choice of the data, in effect, determines the q1* value rather than the overall interpretation of the study. The relationship between q1* and the MTD means that the LMS approach is equivalent to dividing the MTD by a very large and ill-defined number of the order of 500 000, i.e. VSD is approximately MTD/500 000 (Figure 3.4). No general distinction can, therefore, be made between the quantal models on the grounds of their ability to fit the observed data.
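As a rough numerical check of this rule of thumb, the values quoted in the Figure 3.4 example can be compared directly (the figures are taken from the text; the variable names are illustrative).

```python
# Because q1* tracks the highest dose tested, the LMS 95% LCL VSD for a
# 1e-6 risk approximates MTD/500 000.
mtd = 750.0          # highest dose tested, mg/kg/day
q1_star = 6.601e-4   # 95% UCL slope from the MSTAGE fit quoted in Figure 3.4

vsd_lms = 1e-6 / q1_star    # ~1.515e-3 mg/kg/day
vsd_rule = mtd / 500_000    # 1.5e-3 mg/kg/day

print(f"LMS VSD:    {vsd_lms:.3e} mg/kg/day")
print(f"MTD/500000: {vsd_rule:.3e} mg/kg/day")
```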
No model has any advantage over any other on the basis of any intrinsic biological plausibility. The choice of a model such as the multi-stage model by the EPA carries the implicit assumption that its choice will provide conservative VSDs or, alternatively, that it is risk averse. 3.7 Developments in modelling 3.7.1
Time-to-tumour models
One development of the quantal models was based upon a realization that the results of the LTRCB could be biased by survival differences between
Figure 3.4 An illustration of a set of data from a long-term rodent cancer bioassay showing the approximation of the virtually safe dose (VSD) to the maximum tolerated dose (MTD)/500 000. Doses of 0, 15, 75, 150 and 750 mg/kg/day gave 3/40, 3/40, 3/40, 8/40 and 14/47 animals affected. Prediction: VSD = MTD/500 000 = 1.5 µg/kg/day. Using the MSTAGE computer program: q1 = 3.997 × 10⁻⁴, q1* = 6.601 × 10⁻⁴, VSD = 10⁻⁶/q1* = 1.515 × 10⁻³ mg/kg/day = 1.515 µg/kg/day.
the control and treated groups. Statistical methods were developed which used extra information such as the age of the animal when it died and whether, if it carried a tumour, this had killed it or was only detected incidentally at post-mortem after the animal had died of another cause. Time-to-tumour models were developed using the log-normal, Weibull and multi-stage models, such as the Hartley-Sielken and the Armitage-Doll models. A complication is that these models require more parameters than the quantal models. However, only a few experiments, such as the 2-AAF (2-acetylaminofluorene) ED01 (Staffa and Mehlman, 1980) and nitrosamine (Peto et al., 1991) studies, had sufficient dose groups for the models to be adequately tested. Standard LTRCBs do not have extra sets of animals which have to be killed at intervals during the study to provide the extra biological information needed to explain the relationship between dose and age-specific tumour rates. Problems also arise in determining whether the tumour killed the animal (i.e. was fatal) or was incidental to the death of the animal. The models may be useful where interim kills at specific intervals have been carried out in the study in order to follow the time course of tumour development. They will not be of use with existing LTRCB data sets where time-to-tumour data are not always available. More recent studies, though, usually include such data. In practice, though, even when suitable data have been available in the form of extra dose groups or serial kills, the models have not been successful at solving the problems associated with low-dose extrapolation. The risk estimates obtained still differ by orders of magnitude.
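A minimal sketch of such a model, with entirely hypothetical parameter values, shows how tumour probability depends jointly on dose and time; real analyses additionally have to classify each tumour as fatal or incidental.

```python
import math

# A Weibull-type time-to-tumour model of the kind described above:
# P(t, d) = 1 - exp(-(a + b*d) * t**k). All parameters are hypothetical.
def p_tumour_by(t_weeks, dose, a=1e-8, b=2e-9, k=3.0):
    """Probability of a tumour by time t (weeks) at dose d (mg/kg/day)."""
    return 1.0 - math.exp(-(a + b * dose) * t_weeks ** k)

# Probability of a tumour by 104 weeks (a typical study length) rises with dose:
for dose in (0.0, 50.0, 200.0):
    print(dose, round(p_tumour_by(104, dose), 3))
```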
3.7.2 Physiologically-based pharmacokinetic (PB-PK) models
Physiologically based pharmacokinetic models (PB-PK) have been developed as quantitative descriptions of the time courses of chemicals and their metabolites in different species of animals exposed to a chemical by different routes over a large range of exposure conditions. Predictions from these models have then been used to prepare quantitative estimates of the carcinogenic risks associated with a chemical. The models provide estimates of the internal dose as opposed to the administered dose. Proponents for PB-PK models stress that they allow a better understanding of interspecies differences in pharmacokinetics and should aid extrapolation. PB-PK models are formulations of the administration, distribution, metabolism and elimination (ADME) of a compound within the body. Combinations of tissue and organs are aggregated into compartments based upon the similarity of blood flows, solubility of chemicals and their metabolic capability. These compartments, which are considered physiologically realistic, are connected by a network of arterial and venous blood flows. Models often consist of five or six compartments, such as the liver as the principal metabolic organ, fat, bone marrow, muscle and the combination of brain, heart, kidney and viscera. Other models might be based upon combinations of slowly perfused tissues such as muscle and skin and richly perfused tissues such as the viscera. More complex models can be developed to investigate more complex cases, e.g. the lactating rat and her pup. The models can be formulated to account for different routes of administration such as inhalation, intravenous, gavage or incorporation in the drinking water. The models make a series of assumptions about the metabolism of a chemical, such as whether it follows Michaelis-Menten kinetics, i.e. is a saturable process dependent upon the concentration of the compound in the blood being in equilibrium with the concentration in liver tissue. 
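The compartmental scheme just described can be sketched numerically. The sketch below is a deliberately minimal two-compartment model (blood plus a metabolizing liver) with Michaelis-Menten elimination, integrated by a simple Euler scheme; the volumes, flows and kinetic constants are illustrative assumptions, not measured values:

```python
# Minimal two-compartment PB-PK sketch: blood and liver, with saturable
# (Michaelis-Menten) metabolism in the liver. Illustrative parameters only.
def simulate(dose_mg, hours, dt=0.001):
    v_blood, v_liver = 5.0, 1.5        # compartment volumes (L)
    q = 90.0                           # blood flow to liver (L/h)
    vmax, km = 30.0, 0.5               # metabolism: mg/h and mg/L
    c_blood, c_liver = dose_mg / v_blood, 0.0   # i.v. bolus into blood
    t = 0.0
    while t < hours:
        exchange = q * (c_blood - c_liver)        # mg/h, blood -> liver
        metab = vmax * c_liver / (km + c_liver)   # mg/h, Michaelis-Menten
        c_blood += dt * (-exchange) / v_blood
        c_liver += dt * (exchange - metab) / v_liver
        t += dt
    return c_blood

# At high doses metabolism saturates, so a proportionally larger fraction
# of the dose remains in the blood after the same time:
low, high = simulate(1.0, 2.0), simulate(100.0, 2.0)
print(low / 1.0, high / 100.0)   # remaining blood conc. per unit dose
```

This non-proportionality between administered dose and internal dose at high doses is precisely the behaviour that makes PB-PK estimates of the delivered dose attractive for QRA.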
The rates of change of chemical concentrations within the compartments are described by differential equations using specialized computer software. There is, however, a degree of subjectivity in the development of an appropriate model, both in the choice of system to model, and the assumptions of the properties of the various compartments and their interrelationships. The models have the potential to identify the actual dose received at the target organ more realistically. They are capable of taking into account the qualitatively different mechanisms of metabolism that may occur at the high doses administered in some toxicological experiments, such as the MTDs administered in the rodent cancer bioassay. These better estimates of the biological dose or dose which actually reaches the target organ may replace the administered dose in applications of QRA
modelling such as the LMS. They also offer the possibility of providing a more effective extrapolation from the animal experimental exposure to the potential human exposure than the existing safety factor or interspecific scaling factors at present favoured by regulatory bodies such as the EPA and FDA. The PB-PK models, however, require more data than are normally collected in the course of routine toxicological studies. Such data include studies on the time courses and distributions of marker compounds in the animal. Some of these data can be collected by supplementary experiments, from historical data or from the literature. PB-PK models do not, however, solve the problem of low-dose extrapolation, because the difficulties associated with models such as the LMS remain even if the doses used in the model are considered to be closer to the actual target dose. The use of PB-PK models will probably result in less conservative estimates of risk. A strong biological justification can be made for their use to overcome the effects of unrealistically high-dose levels overwhelming the normal metabolic pathways. However, considerable confidence will have to be built up in both the methodology and the actual application of the approach for regulatory authorities to accept more 'liberal' estimates of the risks associated with exposure to a chemical. A further consequence is that regulatory agencies will be faced with an increasing expectation from submitters that a 'case-by-case' approach will be taken to risk assessment when this is backed up by appropriate mechanistic studies. (Chapter 2 discusses PB-PK modelling in detail.)

3.7.3 Biologically based dose-response (BB-DR) models

Biological models are likely to become of increasing importance in QRA in the future. These models are considered by some to be more plausible than the existing mathematically based approaches. The most familiar model of this family at present is the Moolgavkar-Venzon-Knudson (MVK) model.
This model is based upon a biological model of the mechanism for the occurrence of the childhood cancer retinoblastoma. This occurs in two forms: a familial and a sporadic form. Study of these cancers through the 1970s led Knudson to develop a hypothesis that the cancer was caused by mutations in the two copies of a gene at a specific locus: sporadic cases resulted from two mutations or 'hits' in the somatic cell; in the inherited form an affected child inherited one copy of the mutated gene from a parent followed by a mutation in the other copy in a somatic cell during development. Such a model explained the occurrence of the different types of the disease and the dominant inheritance of the inherited form. Moolgavkar and co-workers provided a biological model describing the hypothetical mechanism in a series of papers. The model involved the
transition from a normal cell to a malignant cell in two stages or steps. The rate of moving from one stage to another was a feature of the model as was the rate of the normal and intermediate cell either dividing or dying. The model was derived from earlier models developed by Armitage and Doll for fitting the age-specific incidence of different cancers. These earlier models were also the precursors of the multi-stage model from which the LMS default model used by the EPA was derived. The MVK models were effective at fitting a number of different data sets, while developments in molecular biology showed that the model hypothesized by Knudson as early as 1971 for retinoblastoma was, in fact, correct. The models were further developed to explain other findings, and the implications of the model for classes of compounds that could be called initiators, completers and promoters were worked out. Moolgavkar and co-workers described how each such class of compound might affect the results of such a model. A more general model for use in QRA modelling was developed and incorporated into some computer packages. Moolgavkar, however, warned that this model may not be appropriate for use with the cancer incidence rates seen in studies such as the LTRCB. The formulation developed for this modified model also requires estimates for some of its critical parameters. Although such data can be obtained from separate experiments, they are not collected on a routine basis in the LTRCB. It is likely that the use of such models will need to be validated for a number of compounds before regulatory agencies will accept the method uncritically. A further complication is that although retinoblastoma may be a good model of some inherited cancers, it is probably an oversimplification of other types of cancer even when these have a heritable component. 
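The behaviour of the two-stage model can be illustrated with a commonly quoted approximation to its hazard function, in which the hazard depends on the two mutation rates, the number of normal cells at risk and the net clonal expansion rate of intermediate cells. The parameter values below are illustrative assumptions only:

```python
import math

def mvk_hazard(t, mu1=1e-7, mu2=1e-7, X=1e7, alpha=0.1, beta=0.09):
    """Approximate hazard of the two-stage (MVK) model.

    X normal cells mutate to intermediate cells at rate mu1; an
    intermediate clone expands at net rate (alpha - beta), where alpha
    and beta are the cell division and death rates, and intermediate
    cells convert to malignant cells at rate mu2. Approximate form:
        h(t) ~ mu1 * mu2 * X * (exp((alpha-beta)*t) - 1) / (alpha-beta)
    Illustrative parameters, not fitted values.
    """
    g = alpha - beta                   # net clonal expansion rate
    return mu1 * mu2 * X * (math.exp(g * t) - 1.0) / g

# With expanding intermediate clones the hazard rises faster than
# linearly with age, mimicking observed age-specific incidence curves:
print(round(mvk_hazard(20.0) / mvk_hazard(10.0), 3))
```

The clonal-expansion term is what distinguishes this family from the purely mutational Armitage-Doll formulation: promoters can act on alpha and beta without being mutagenic at all.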
It seems likely that the biological events which explain the production of tumours will be more complex than those envisaged by the MVK model. However, the underlying genetic model, in which mutations are either inherited or arise de novo and interact with various environmental influences, is stimulating considerable interest among biostatisticians. The statistical methods they develop are likely to be of importance in toxicology in the future.

3.7.4 Benchmark doses

Attempts are being made to develop a common harmonized procedure for estimating tolerable levels of exposure to minimize risks from exposure to hazardous chemicals. Crump (1984b) has proposed a method called the benchmark dose approach as an alternative to the NOAEL as the starting point for quantitative assessment in the safety factor approach. The precise definition of what a 'benchmark dose' actually is has yet to be fully worked out, although it involves identifying the lower statistical
Figure 3.5 An illustration of the benchmark dose approach. A mathematical model is fitted to the experimentally derived data. An upper confidence limit is derived for the fitted dose-response relationship. The upper confidence limit on the dose estimated to cause a 10% incidence over background is used as a starting point for an extrapolation to zero. A safety factor or uncertainty factor (UF) is used to derive a dose level which is considered 'safe'. [Axes: dose (x) against abnormal responses (y); curves show the dose-response fitted to the experimental data and the upper confidence limit on estimated risk; marked doses: ED10, LED10 and LED10/UF.]
confidence limit for a dose corresponding to a specific increase in effect over the biological level near the lower limit of the experimental range (e.g. 1% or 10% increase in effect) (Figure 3.5). The confidence interval is derived from the use of a mathematical model applied to the dose-response data. A safety factor or similar margin of safety can then be applied to determine the equivalent of the ADI or TDI. Supporters of the benchmark approach argue that it is a promising alternative to the NOAEL because it uses all the available dose-response data, it reflects the dose-response curve, it explicitly indicates risk at doses at or below the benchmark dose, it can use data sets where no NOAEL can be determined and it can be used to integrate toxicological and carcinogenic risk assessment methodologies. The method has not yet, however, received acceptance by regulatory authorities and still requires validation.

3.7.5 Biomarkers
New methodologies in molecular toxicology based upon the use of biomarkers are providing powerful tools for researching problems in risk assessment. The term biomarker refers to a wide range of alterations occurring at the biochemical, cellular or molecular level on the continuum between exposure and disease, which can be measured by assays performed on body fluids, cells or tissues (Perera et al., 1991).
Three types of biomarkers can be distinguished (Schulte, 1989): measures of exposure to a chemical; indications of interactions with biological material with potential toxicological implications; and markers of susceptibility such as polymorphisms for drug-metabolizing enzymes (Gonzalez and Gelboin, 1993). Biomarkers can be used to identify the mechanisms involved at low-dose exposures in both in vivo and in vitro systems. They can indicate that a mechanism, such as a particular metabolic pathway, relevant to the toxicological event occurs in animals but does not occur in humans. Alternatively, they can provide quantitative data which can be included in a BB-DR model. Many biomarkers can be measured quantitatively, increasing the statistical power of any experiment using them. This sensitivity, in statistical terms, means that small but important differences can be detected between control and treated groups in carefully designed experiments. This property can be used in appropriate experimental designs to investigate and characterize the dose-response relationships between low-level exposures and biological responses. It is important, though, that such biomarkers are relevant and valid. The relationship between the biomarker and the toxicological endpoint may, in fact, be non-linear; for instance, the dose-response relationship for DNA adducts produced by a chemical may be linear but the relationship between dose and tumour production may be curvilinear (Gupta et al., 1993).
3.8 Future developments in QRA

3.8.1 New EPA guidelines

The US EPA has been refining its carcinogen risk assessment guidelines and a draft version of the proposed guidelines was released for public comment in 1996 (US Environmental Protection Agency, 1996). These Draft Revisions show that the agency plans a more flexible approach giving more emphasis to mechanistic studies of the mode of action of the chemical. In practice, the existing methods are likely to remain the basis for risk assessment in the foreseeable future, with the new guidelines continuing to stress the use of default approaches. However, they will accept that relevant mechanistic information should be given more prominence in the EPA's assessments. More stress will be placed upon non-carcinogenic endpoints in risk assessments, with an attempt to break down the dichotomy between how the EPA estimates cancer risks and other health risks. Less emphasis will be given to the LMS model, although any low-dose extrapolation method, such as the benchmark dose or other non-parametric approaches, is likely to remain conservative. Greater emphasis will be put
on risk characterization. There is also considered to be a need to make proper use of numerical distributions in human exposure assessment. The EPA is also likely to put more emphasis into the development of receptor-based models as part of evolutionary as opposed to revolutionary approaches to the development of BB-DR models and their linkage to PB-PK models. These biologically based methods will be used to put previous estimates into perspective.

3.8.2 Linkage of PB-PK and BB-DR models

The major potential advance in the development of more accurate QRA estimates is the linkage of PB-PK and BB-DR models. One potential obstacle to this linkage is the need to identify the pathway between the biomarker and the toxicological event. Attempts to couple pharmacokinetic and pharmacodynamic models involve the development of a number of linked stages (Figure 3.6). The objective may, for instance, be to develop a model for more accurate QRA estimates of food chemicals. A PB-PK model of the pharmacokinetics of food chemicals would need to be developed which explained their ADME. The objective of this model would be to provide a more realistic estimate of the dose of the chemical that reaches the target organ.

Figure 3.6 Diagrammatic representation of the linkage of physiologically based pharmacokinetic (PB-PK) and pharmacodynamic (BB-DR) models to derive more accurate estimates of the dose reaching the 'target' organ and its relationship to toxic events. Such a scenario should lead to a more relevant approach to extrapolation of the risks to the human population from the levels of the chemical the population is actually exposed to. [The diagram links: external dose → PB-PK (rodent) → target organ dose → BB-DR model → rodent VSD (10⁻⁶) → inter-species conversion factor → human VSD (10⁻⁶) → BB-DR model (human) → maximum allowable delivered dose → PB-PK (human) → maximum allowed external dose.]
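The interspecies conversion step in the Figure 3.6 scheme can be sketched numerically. The sketch below assumes body-weight-to-the-3/4-power allometric scaling (one convention used by US regulatory agencies for cross-species dose conversion); the rodent VSD and body weights are illustrative values:

```python
# Numerical sketch of the inter-species conversion factor in Figure 3.6,
# assuming dose rates scale with body weight to the 3/4 power, so that
# per-kg doses scale by (bw_rodent / bw_human) ** (1/4). Illustrative only.
def human_equivalent_dose(rodent_dose, bw_rodent=0.35, bw_human=70.0):
    """Convert a rodent dose (mg/kg/day) to a human-equivalent dose."""
    return rodent_dose * (bw_rodent / bw_human) ** 0.25

rodent_vsd = 1.5e-3   # mg/kg/day at the 1e-6 risk level (illustrative)
print(f"human VSD ~ {human_equivalent_dose(rodent_vsd):.2e} mg/kg/day")
```

Because rats are far smaller than humans, the scaled human-equivalent dose is several-fold lower than the rodent dose, i.e. the conversion is conservative in the protective direction.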
Simultaneously, there would need to be the development of BB-DR models of the pharmacodynamics of food chemicals through biomarkers which would relate the dose delivered at the target organ to toxicological effects. This would then allow the identification of an accurate estimate of risk in the rodent from exposure to the chemical. The VSD, for instance, in rodents could then be converted to a VSD in humans by taking into account relevant aspects of interspecies scaling. This VSD could then be related back to the likely VSD dose at the target site in humans, using an appropriate BB-DR model for humans, followed by the use of a PB-PK model, suitably modified for human exposures, to arrive at a maximum allowable administered dose in humans concomitant with the acceptable or tolerable risk levels associated with the VSD. Potential developments in this scenario include the need to have suitable experimental data to develop and validate models in, for instance, the rat and mouse. Approaches would need to be developed for the coupling of PB-PK and BB-DR models and the potential to refine the model by the replacement of in vivo by in vitro systems. A theoretical example of the scheme for linking the administered dose to the toxicological endpoint is shown in Figure 3.7. The dose-response relationship linking the administered dose to a biomarker is characterized. In this hypothetical example, increasing the dose lowers the response of the biomarker (Figure 3.7a). The dose-response is represented by some function of dose. The relationship between the initial biomarker and a second biomarker on the pathway to the toxicological insult is characterized (Figure 3.7b), as is that between a second and third biomarker (Figure 3.7c). Finally, in this example, the relationship between this third biomarker and the toxic endpoint is described (Figure 3.7d). 
The full pathway is then represented as a series of linked relationships relating the administered dose to the toxic endpoint which is capable of reflecting more realistically the non-linear nature of the dose-response relationships and providing more accurate estimates of the potential risks from low-level exposures (Figure 3.7e). The potential exists to include in the models estimates derived from in vitro methods which may provide a more generic approach capable of being more widely applicable.
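The linked pathway just described amounts to function composition: each stage's dose-response is a separately characterized relationship, and the toxic endpoint is their composition TE = w(h(g(f(dose)))). A sketch with purely illustrative functional forms (chosen, as in the text's hypothetical example, so that the first biomarker falls as dose increases):

```python
import math

def f(dose):   # dose -> biomarker 1 (response falls with increasing dose)
    return 1.0 / (1.0 + 0.2 * dose)

def g(b1):     # biomarker 1 -> biomarker 2
    return 5.0 * (1.0 - b1)

def h(b2):     # biomarker 2 -> biomarker 3 (saturating)
    return b2 / (1.0 + b2)

def w(b3):     # biomarker 3 -> probability of the toxic endpoint
    return 1.0 - math.exp(-2.0 * b3)

def toxic_endpoint(dose):
    """Composition of the linked stage-wise relationships."""
    return w(h(g(f(dose))))

for d in (0.0, 1.0, 10.0):
    print(d, round(toxic_endpoint(d), 4))
```

Even though each stage here is simple, the composed curve is strongly non-linear in dose, which is exactly the property the linked-model approach is meant to capture in place of default linear extrapolation.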
3.9 Conclusion

There is a continuing scientific and regulatory interest in the development of scientifically valid methods for the quantification of the risk to humans from exposure to chemicals. It is clear, however, that the area of QRA is undergoing considerable change. The majority of this change relates to attempts to adapt the existing US system of QRA to provide a more
Figure 3.7 An illustration of the linking of models using biomarkers to derive more realistic and accurate QRA estimates. The diagram shows a series of graphs portraying hypothetical relationships linking the administered dose of a compound through its effects on a series of biomarkers to the toxic endpoint. [Panels: (a) Biomarker 1 = f(dose); (b) Biomarker 2 = g(Biomarker 1); (c) Biomarker 3 = h(Biomarker 2); (d) toxic endpoint TE = w(Biomarker 3); combined, TE = w(h(g(f(dose)))).]
realistic framework to meet the pressures of the regulatory and legislative systems within that country. It is by no means certain that the outcome will be appropriate for other countries and other systems. One complication in the use of QRA is that risk assessment is a wide discipline and covers areas which, while superficially similar, have important differences. QRA approaches have, for instance, been developed for assessing the risks of accidents in the chemical and transport industry,
where fault trees with relatively good probabilistic data are available on the failure rates of key stages in a process. Similarly, realistic estimates of risk from certain types of activities or occupations can be obtained when actuarial-type data exist. In both cases, relative risks can be compared and management options evaluated. In the case of the carcinogenic risks associated with chemicals, the probabilities are derived from extrapolation from high-dose studies in rodents. The risks associated with specific exposures are based upon a large number of assumptions, often deliberately chosen to represent worst-possible cases. As a consequence, the dose defined as associated with a 1 in 1 million increased cancer incidence is based upon a series of conjectures rather than empirical probabilities. Direct comparisons, therefore, between the risks from chemical exposures and the risks from occupational, transport or engineering hazards are potentially misleading. The statistical limitations of the existing quantal models, particularly the LMS model, are relatively well known to the 'modelling community'. This has led to the development of more sophisticated models such as the PB-PK and MVK models. Linking these models together represents a potentially powerful method for estimation of risks to the human population from low-level exposure to chemicals. These would then provide more accurate estimates in case-by-case assessments of risk for specific compounds. However, they are unlikely to provide an overall 'generic' approach to the problem of QRA. Model-free non-parametric and benchmark dose approaches represent ways of providing risk estimates as conservative as those provided by the LMS model but avoiding the criticism relating to the lack of biological relevance of the mathematical model used. Future development in QRA will probably have to put more emphasis on the accuracy rather than the precision of risk estimates. 
It may, in practice, be more useful to have coarser divisions than precise numbers. Accurately distinguishing between very high, high, medium, low and very low risks (for instance) may be of more practical value than the precise 10⁻⁶ risk from particular chemicals, which is very dependent upon a range of contentious assumptions. An important aspect of any innovative approach to the assessment of risk is the need to evaluate the potential approaches and validate their performance. Such approaches will not be easy, as the obvious validation against existing high-dose animal studies will not necessarily be relevant. Investigation of the reliability of the proposed methods and the relevance of the mechanisms underlying the systems to the toxicological events being studied will continue to be of importance. In this context the relevance of the mechanistic study will be how it relates to the objective of identifying potential hazards to humans and the risk of low-dose exposure rather than an explanation of the effects seen in high-dose animal studies.
There will be a continuing trend towards an increasing use of numerical information in risk assessment. There are trends to make more use of mechanistic information in the development of risk estimates based upon QRAs and to provide more generic and less accurate risk estimates by applying simpler approaches than those used in the existing mathematical models. QRA is likely to remain a major area of development because of the increasing interest, particularly in the USA, in the balancing of the costs of risks and benefits. The case for quantifying risk remains persuasive and offers many advantages for the appropriate management of risks. However, the goal of producing numbers should never detract from the objective of generating and using biological data to get appropriate information which can be used to evaluate the risk to the human population from exposure to a chemical. Future developments of QRA such as PB-PK and biologically based modelling will help in this endeavour but will in the short to medium term act as a complement to the use of expert toxicological knowledge rather than replace it.
References

Anderson, E.L. and the Carcinogen Assessment Group of the US Environmental Protection Agency (1983) Quantitative approaches in use to assess cancer risk. Risk Analysis, 3, 277-295.
Barnes, D.G. and Dourson, M. (1988) Reference dose (RfD): description and use in health risk assessments. Regulatory Toxicology and Pharmacology, 8, 471-486.
Berry, C.L. (1988) The no-effect level and optimal use of toxicity data. Regulatory Toxicology and Pharmacology, 8, 385-388.
British Medical Association (1990) The BMA Guide to Living with Risk (Henderson, M., ed.). Penguin Books, Harmondsworth.
Crump, K.S. (1984a) An improved procedure for low-dose carcinogenic assessment from animal data. Journal of Environmental Pathology, Toxicology and Oncology, 6, 339-348.
Crump, K.S. (1984b) A new method for determining allowable daily intakes. Fundamental and Applied Toxicology, 4, 854-871.
Crump, K.S., Hoel, D.G., Langley, C.H. and Peto, R. (1976) Fundamental carcinogenic processes and their implications for low-dose risk assessment. Cancer Research, 36, 2973-2979.
Department of Health (1991) Guidelines for the Evaluation of Chemicals for Carcinogenicity. Report on Health and Social Subjects No. 42. HMSO, London.
Flamm, G.W. (1986) Risk assessment policy in the United States. In: Oftedal, P. and Brogger, A. (eds) Risk and Reason: Risk Assessment in Relation to Environmental Mutagens and Carcinogens. Alan R. Liss, Inc., New York, pp. 141-149.
Food Safety Council (1980) Quantitative risk assessment. Food and Cosmetic Toxicology, 18, 711-734.
Gonzalez, F.J. and Gelboin, H.V. (1993) Role of human cytochrome P-450s in risk assessment and susceptibility to environmentally based disease. Journal of Toxicology and Environmental Health, 40, 298-308.
Gupta, K.P., van Golen, K.L., Putman, K.L. and Randerath, K. (1993) Formation and persistence of safrole-DNA adducts over a 10,000-fold dose range in mouse liver. Carcinogenesis, 14, 1517-1521.
Hopper, L.D. and Oehme, F.W.
(1989) Chemical risk assessment: a review. Veterinary and Human Toxicology, 31, 543-554.
Jasanoff, S. (1986) Comparative risk assessment - the lessons of cultural variation. In: Richardson, M. (ed.) Toxic Hazard Assessment of Chemicals. The Royal Society of Chemistry, London, pp. 259-281.
Johannsen, F.R. (1990) Risk assessment of carcinogenic and non-carcinogenic chemicals. Critical Reviews in Toxicology, 20, 341-367.
Lehman, A.J. and Fitzhugh, O.G. (1954) 100-fold margin of safety. Quarterly Bulletin of the Association of Food and Drug Officials of the United States, 18, 33-35.
Lorentzen, R.J. (1984) FDA procedures for carcinogenic risk assessment. Food Technology, 28, 108-111.
Lovell, D.P. (1986) Risk assessment - general principles. In: Richardson, M. (ed.) Toxic Hazard Assessment of Chemicals. Royal Society of Chemistry, London, pp. 207-222.
Lovell, D.P. and Thomas, G. (1996) Quantitative risk assessment and the limitations of the linearized multistage model. Human and Experimental Toxicology, 15, 87-104.
Lu, F.C. and Sielken, R.L. Jr (1991) Assessment of safety/risk of chemicals: inception and evolution of the ADI and dose-response modelling procedures. Toxicology Letters, 59, 5-40.
Mantel, N. and Bryan, W.R. (1961) 'Safety' testing of carcinogenic agents. Journal of the National Cancer Institute, 27, 455-470.
National Academy of Science/National Research Council (1983) Risk Assessment in the Federal Government: Managing the Process. National Academy Press, Washington.
Perera, F., Mayer, J., Santella, R.M. et al. (1991) Biologic markers in risk assessment for environmental carcinogens. Environmental Health Perspectives, 90, 247-254.
Peto, R., Gray, R., Brantom, P. and Grasso, P. (1991) Effects on 4080 rats of chronic ingestion of N-nitrosodiethylamine or N-nitrosodimethylamine: a detailed dose-response study. Cancer Research, 51, 6415-6451.
Renwick, A.G. (1991) Safety factors and the establishment of acceptable daily intakes. Food Additives and Contaminants, 8, 135-149.
Richardson, M.L. (ed.) (1985) Toxic Hazard Assessment of Chemicals.
Royal Society of Chemistry, London.
Rosenthal, A., Gray, G.M. and Graham, J.D. (1992) Legislating acceptable cancer risk from exposure to toxic chemicals. Ecology Law Quarterly, 19, 269-363.
Royal Society (1983) Risk Assessment - A Study Group Report. The Royal Society, London.
Royal Society (1992) Risk: Analysis, Perception and Management. Report of a Royal Society Study Group. The Royal Society, London.
Rubery, E.D., Barlow, S.E. and Steadman, J.H. (1990) Criteria for setting estimates of acceptable intakes of chemicals in food in the UK. Food Additives and Contaminants, 7, 287-302.
Schulte, P.A. (1989) A conceptual framework for the validation and use of biologic markers. Environmental Research, 48, 129-144.
Staffa, J.A. and Mehlman, M.A. (eds) (1980) Innovations in cancer risk assessment (ED01 study). Journal of Environmental Pathology and Toxicology, 3, 1-249.
US Environmental Protection Agency (1986) Guidelines for carcinogen risk assessment. Federal Register, 51, 33992-34003.
US Environmental Protection Agency (1996) Proposed guidelines for carcinogen risk assessment. EPA/600/P-92/0036, April.
Weil, C.S. (1972) Statistics vs safety factors and scientific judgement in the evaluation of safety for man. Toxicology and Applied Pharmacology, 21, 454-463.
World Health Organization (1962) Principles governing consumer safety in relation to pesticide residues. Report of a Joint FAO/WHO Meeting of Experts on Pesticide Residues. WHO Technical Report Series 240.
4 Biomarkers in epidemiological and toxicological nutrition research

G. van POPPEL, H. VERHAGEN and B. HEINZOW
4.1 Introduction

Toxicological risk assessment has long been based on animal experiments and in vitro studies. Apart from the question of whether or to what extent these data can be extrapolated to humans, many of these (animal) models are not suitable for studying the effects of the low doses to which humans are frequently exposed (Henderson et al., 1989). Moreover, these models do not account for the large individual variation in sensitivity among human beings. Because of the limitations inherent in both animal experiments and in vitro studies, as well as ethical issues connected with animal experiments, interest arose many years ago in exploring exposure, early health effects and variation in sensitivity in humans based on parameters that act as indicators of effects of various xenobiotic substances in the human body ('biomarkers') (Henderson et al., 1989; Harris, 1989; Shields and Harris, 1991; Hulka et al., 1990). The term biomarker is used in a broad sense to describe parameters reflecting an interaction between a biological system and a potential hazard of a chemical, biological or physical nature. The measured response may be functional, physiological or biochemical, at a cellular or molecular level. Biomarkers are used to assess exposure and the risk of possible health-related outcomes of exposure in environmental epidemiology. The concept of 'biomarker' covers, as such, the continuum between external exposure and the clinical manifestation of a disease, such as cancer. Practically, it has proven useful to distinguish three main types: biomarkers of exposure, of effect and of susceptibility. A biomarker, therefore, can give an impression of the internal burden of a certain substance or of a (subclinical) effect, ideally before the disease becomes manifest. Through measurement of carefully selected biomarkers, the positive or negative effects of substances can thus be assessed in human beings.
Sensitivity screening seeks to determine whether an individual possesses certain inherited or acquired traits that may predispose to an increased risk of disease if exposed. This makes biomarkers an excellent approach for experimental studies with volunteers, for epidemiological studies and for the refinement of the risk assessment process. Their primary purpose is still the prevention of disease (Rylander, 1995).
Biomarkers have predominantly been applied thus far in occupational toxicology (Schulte, 1993, 1995). However, human experimental and epidemiological studies with biomarkers could also play an important role in the toxicological assessment of diet. In contrast to occupational toxicology, dietary exposure is likely to be heterogeneous, given the numerous constituents of the diet. Both negative effects (e.g. carcinogens in foods) and positive effects (e.g. protection by antioxidant vitamins) will play their roles. Biomarkers appear to offer good prospects of examining these interactions and combined effects in human studies. However, they have hardly been applied to dietary assessment to date. This chapter starts with an elaboration on the concept of biomarkers. On the aetiological path between exposure and disease, various types of biomarkers on different levels can be distinguished. Next, the various types of biomarkers are discussed in more detail and classified, in particular according to their application in environmental and occupational toxicology. Finally, the possibilities and limitations of the application of biomarkers in toxicological nutrition research are discussed.
4.2 Classification of biomarkers

Figure 4.1 shows a framework for the application and classification of biomarkers. This scheme was modified after the example of the National Research Council's Committee on Biological Markers in Environmental Health (Committee on Biological Markers, National Research Council, 1987) and comprises six steps, from exposure (external dose) to disease. Individual differences in sensitivity play a role in all transitional stages. The thinking behind this scheme can be exemplified by the relation between smoking and lung cancer. External exposure can be measured by recording the number of cigarettes smoked or, in the case of passive exposure, by measuring cigarette smoke components in the surrounding air. Internal exposure to cigarette smoke can be assessed by measuring the nicotine metabolite cotinine in plasma or urine (Jarvis et al., 1987). The biologically effective dose represents a presumably relevant interaction between a substance and a body component. It can be defined as the amount of material interacting with subcellular, cellular and tissue targets or with an established surrogate. This point can be illustrated by DNA adducts of benzo[a]pyrene diol epoxide (BPDE) (van Schooten et al., 1990): the highly reactive BPDE is formed enzymically in the body from the non-reactive benzo[a]pyrene (B[a]P), one of the polycyclic aromatic hydrocarbons (PAHs) that can be found in cigarette smoke (Gelboin, 1980). Examples of early biological effects in smokers are the increased frequency of sister chromatid exchanges (SCEs) in cultured lymphocytes
[Figure 4.1 depicts the continuum: exposure → internal dose → biologically effective dose → early biological effects → altered structure/function → disease, with diet and individual susceptibility acting on every transition.]

Figure 4.1 Framework for a classification of biomarkers in which dietary factors are taken into account. (Redrawn from Rylander, 1995.)
(International Agency for Research on Cancer, 1986) and the increased number of micronuclei in bronchial sputum cells (Fontham et al., 1986). Examples of markers of modified structure or function are metaplastic and dysplastic changes in bronchial sputum cells (Saccomanno et al., 1974); severe dysplastic changes have been found to be predictive of the development of lung cancer (Risse, 1987). Over the whole path from exposure to disease, not only the level of exposure but also the genotypically or phenotypically determined individual sensitivity plays a role. A sensitivity/susceptibility marker identifies an individual's inherited or acquired trait that may predispose to an increased risk of developing a disease. This individual sensitivity is determined by various factors, including age, gender, differences in kinetics and metabolism, DNA repair capacity and immune response. For example, there are genetically determined differences in biotransformation enzymes such as aryl hydrocarbon hydroxylase (cytochrome P450IA1) (Gelboin, 1983) and glutathione S-transferase (GST) (Seidegard et al., 1986). Genetically determined differences in these enzymes have been reported to be associated with the risk for lung cancer among smokers (Gelboin, 1983; Seidegard et al., 1986). We have observed in our studies elevated SCE frequencies in smokers with a genetically determined GST-μ deficiency (Van Poppel et al., 1992a). Additionally, the important role of lifestyle, mainly dietary factors, is accounted for in Figure 4.1. From a conceptual point of view, it is part of the individual sensitivity: along the whole path from exposure to disease, dietary factors may play an important modifying role, adding to or protecting from exposure. However, dietary components, such as B[a]P in burnt food products, may also contribute to exposure or be the very source of exposure.
In the example of smoking/lung cancer, the protective influence of dietary factors is illustrated by the fact that many epidemiological studies have found a high intake of β-carotene to be associated with a lower risk for lung cancer among smokers (Willett, 1990).
We have found in our studies that supplementation with β-carotene leads to a decrease in the number of micronuclei in bronchial sputum cells of smokers (Van Poppel et al., 1992).
4.3 Markers of external and internal exposure

External exposure of an individual to environmental, occupational and dietary factors can be assessed by measuring components in food, air and water or through questionnaires, recording time of exposure, etc. Assessment of external exposure suffers from a number of uncertainties, including those related to individual variation in bioavailability, kinetics and metabolism. That is why biomarkers of internal exposure are increasingly being used. Biomarkers of internal exposure are commonly seen as a more accurate way of examining exposure. These markers include chemicals and their metabolites in blood, urine, hair, fat and exhaled air. For example, urinary levels of food additives (Verhagen and Kleinjans, 1991) and blood lead levels (Agency for Toxic Substances and Disease Registry and US Environmental Protection Agency, 1990) can be seen as biomarkers of internal exposure (reviews: Fennel, 1990; Henderson et al., 1989). There are also biomarkers of exposure for a large number of nutrients, such as vitamin levels in blood and composition of fatty acids in fat biopsies. (See Kok and van't Veer (1991), Hunter (1990) and Riboli and Saracci (1987) for overviews of these food-specific biomarkers.)
4.4 Markers of biologically effective dose

Although markers of biologically effective dose are a step nearer to the final effect (disease; see Figure 4.1), biomarkers of this type are used much less frequently. These markers integrate internal exposure and metabolism of the compound in the body (Figure 4.2). This is exemplified by the measurement of binding of chemicals to macromolecules such as DNA and proteins. Some authors point out that protein adducts should be seen primarily as markers of exposure, whereas DNA adducts indicate exposure of the critical target modified by repair mechanisms. In principle, DNA adducts can be established with various methods in any DNA-containing cell (review: Bartsch and Hemminki, 1988). Also, DNA adducts eliminated by DNA repair can be determined in urine or plasma (Table 4.1). The most sensitive method is the 32P-post-labelling (Randerath) method, which exploits the difference in the ability of nucleotides with and without adducts to be phosphorylated. These nucleotides can subsequently be separated chromatographically and quantified. Although this method is
[Figure 4.2 traces exposure through absorption to internal dose, then through metabolism (bioactivation, or detoxification and excretion) to binding to macromolecules (protein, RNA, DNA) and receptors, and on to mutation, initiation, promotion and cancer.]

Figure 4.2 Internal exposure to and metabolism of a compound in the body.
very sensitive in general, it can only be used to demonstrate the presence of 'large' adducts. The method is very time-consuming, and the amount of radioactivity required limits the number of analyses that can be done by one technician. Therefore, it is not very likely that the use of this method as a biomarker will expand enormously. In addition to the 32P-post-labelling method, there are immunochemical methods for measuring DNA adducts. In these methods, a specific DNA adduct must first be synthesized and (monoclonal or polyclonal) antibodies must be raised in experimental animals. Radioactive or immunofluorescence techniques, for example, can then be used to quantify the numbers of DNA adducts. Immunochemical methods can also be used to establish the presence of smaller DNA adducts, such as O6- and N7-methylguanine produced by methylating agents (e.g. nitrosamines). Immunochemical methods are sensitive (sometimes as sensitive as the 32P-post-labelling method) and specific, although the antibodies may cross-react.

Table 4.1 Markers of exposure and effects of reactive compounds

Biochemical markers: protein adducts (Hb, albumin); DNA adducts; urinary DNA adducts
Biological markers: CA, SCE, DIC, micronuclei; mutation assays (HPRT, thymidine kinase); gene expression assays (α-fetoprotein)

CA, chromosome aberration; SCE, sister chromatid exchange; DIC, dicentrics.

In principle, these methods are suitable for large
numbers of analyses, which is an essential trait for epidemiological studies. However, immunochemical methods have the drawback that the production of suitable antibodies is very time-consuming. Finally, the physicochemical determination of some small DNA adducts, such as 8-hydroxy-2′-deoxyguanosine (8-oxo-dG), resulting from oxidative DNA damage, has attracted much interest. Although the sensitivity that can be obtained with these methods is relatively low (about one adduct per 10⁶ bases), the endogenous oxidative DNA damage caused by reactive oxygen species is probably very large, which probably makes this parameter useful and relevant (Fraga et al., 1990). Besides DNA adducts, protein adducts are used to estimate a compound's biologically effective dose. Binding to haemoglobin is held to reflect binding to macromolecules such as DNA and protein in cells. An advantage of the use of protein adducts is that these adducts are not repaired and hence provide a temporal integration of the biologically effective dose. Both chromatographic and immunochemical techniques have been used for measuring protein adducts. The methods of analysis and applications of protein adducts have been reviewed by ECETOC (1990). As for other biomarkers, knowledge of their kinetics of formation and decay, and hence of the applicable time window, is essential for interpretation (Table 4.2).

Table 4.2 Kinetic characterization of selected biomarkers

Biomarker        Interindividual variation    Applicable time post-exposure
Translocation    Low                          0 to lifetime
Dicentrics       Low                          0-6 months
Micronuclei      High                         0-6 months
HPRT             Medium                       1 month to 1 year
GPA              High                         6 months to lifetime?
TCR              High                         1 month to 2 years
HLA              ?                            1 month to 2 years
SCE              ?                            0-6 months
DNA adducts      ?                            0-6 months
Protein adduct   ?                            0-6 months

HPRT, hypoxanthine phosphoribosyltransferase assay; GPA, glycophorin-A somatic mutation assay; TCR, T-cell antigen receptor mutation assay; HLA, human leukocyte antigen mutation assay; SCE, sister chromatid exchange. Data from Straume and Lucas (1995).

In addition to the measurement of DNA and protein adducts in body tissues, adducts or their degradation products can be determined in urine; this yields an alternative temporal integration of the biologically effective dose. Interpretation of the results, however, is hampered by the fact that the products measured are those to which the body has indeed been exposed but which have subsequently been excreted successfully. Recently, we have shown that the levels of 8-oxo-dG in 24-h urine samples are decreased upon consumption of Brussels sprouts (Verhagen et al., 1995) but not following β-carotene supplementation (Van Poppel et al., 1995). Besides the measurement of specific degradation products, non-specific determinations are also done in urine to examine the biologically effective dose. Examples of the latter category are the excretion of mercapturic acids, formed by coupling of electrophilic compounds to glutathione and subsequent metabolism (Van Doorn et al., 1982), and determination of the mutagenicity of urine in the Ames test (Willems et al., 1989; Rahn et al., 1991). Further, faecal mutagenicity (Willems et al., 1989) and the lytic activity of faecal water (Lapre and van der Meer, 1992) may be considered as markers of the biologically effective dose. A biologically effective dose can also be determined for nutrients. Plasma retinol, for example, is considered to reflect the integration of dietary intake of both retinol and β-carotene, in which process the body converts part of the β-carotene into retinol. It should be noted, however, that at adequate retinol and β-carotene intake levels, plasma β-carotene levels, but not plasma retinol levels, will increase further with an increased β-carotene intake (Figure 4.3). For some other vitamins, too, markers exist for establishing the biologically effective dose. These markers are predominantly the 'functional parameters' of nutritional status. Examples are the aspartate aminotransferase activation coefficient used to assess vitamin B6 status and selenium-dependent glutathione peroxidase activity. These functional parameters are reviewed by Livingston (1989). It should be kept in mind that functional parameters might show considerable interindividual (genetic) variation.
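The applicable-time windows in Table 4.2 lend themselves to a simple lookup when planning a sampling scheme. The sketch below is illustrative only: the dictionary, function name and `LIFETIME` sentinel are our own, the month values approximate the table's wording, and entries marked '?' in the table are encoded by their stated window alone.

```python
# Applicable post-exposure windows from Table 4.2, expressed in months.
LIFETIME = 1200  # our own sentinel: roughly 100 years

APPLICABLE_WINDOW = {
    "translocation":  (0, LIFETIME),
    "dicentrics":     (0, 6),
    "micronuclei":    (0, 6),
    "HPRT":           (1, 12),
    "GPA":            (6, LIFETIME),
    "TCR":            (1, 24),
    "HLA":            (1, 24),
    "SCE":            (0, 6),
    "DNA adducts":    (0, 6),
    "protein adduct": (0, 6),
}

def applicable(biomarker: str, months_post_exposure: float) -> bool:
    """True if sampling at this time falls inside the marker's window."""
    low, high = APPLICABLE_WINDOW[biomarker]
    return low <= months_post_exposure <= high

# Sampling three months after exposure: SCE is still informative,
# but one year after exposure the dicentrics window has closed.
print(applicable("SCE", 3))          # True
print(applicable("dicentrics", 12))  # False
```

Such a check makes explicit, for example, that dicentrics sampled a year after exposure are no longer informative, whereas translocations remain measurable over a lifetime.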
[Figure 4.3 plots biomarker level against β-carotene intake: the plasma β-carotene curve keeps rising with intake, while the plasma retinol curve plateaus.]

Figure 4.3 Nutrient intake and biomarker level (Verhagen and Kleinjans, 1991).
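The qualitative message of Figure 4.3 can be caricatured with two toy response functions: plasma retinol saturates towards a homeostatic cap, while plasma β-carotene keeps rising with intake. All constants here are invented for illustration and carry no physiological meaning.

```python
def plasma_retinol(intake, cap=2.0, half_sat=0.5):
    """Saturating (homeostatically capped) response to intake."""
    return cap * intake / (half_sat + intake)

def plasma_beta_carotene(intake, slope=0.8):
    """Roughly proportional response: keeps rising with intake."""
    return slope * intake

# Doubling intake from 5 to 10 arbitrary units doubles plasma beta-carotene...
print(plasma_beta_carotene(10) / plasma_beta_carotene(5))  # 2.0
# ...but barely moves plasma retinol, which sits near its cap of 2.0.
print(round(plasma_retinol(5), 2), round(plasma_retinol(10), 2))
```

The contrast illustrates why, at adequate intakes, plasma β-carotene is the more responsive biomarker of β-carotene intake.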
4.5 Markers of early biological effects

Traditionally, epidemiological studies have focused on observable disease as the outcome of concern, but this approach may not allow for the timely assessment of, for example, carcinogenic hazards because of long latency periods. A potential solution to this dilemma is the use of biomarkers of early effect. Markers of biologically effective dose reflect a presumably biologically relevant integration of internal exposure and metabolism without physiological or aetiological consequences being known. Although it seems plausible, for example, that a larger number of DNA adducts will eventually result in a higher risk for aetiologically relevant mutations, such a relationship cannot be established with certainty: the DNA damage can be repaired, after all. Adducts are a step towards mutation but not mutation per se, and thus only comprise an indicator of mutagen exposure, whereas gene-mutational assays are indicators of mutational exposure and effect (Compton et al., 1991). Markers of early biological effects (one step nearer to the final effect, disease) are assumed to provide more insight into aetiologically relevant changes. Such markers could reflect the consequences of DNA damage. Changes in accepted clinico-chemical parameters could be used as markers too. The plasma cholesterol level is probably the most often tested marker and the best-known example of a biomarker of early biological effects. It provides exactly what biomarkers are meant to provide: at the population level, it indicates an elevated risk of a clinically measurable effect (cardiovascular disease) without the individuals under survey having complaints or being ill. Functional tests (e.g. of blood coagulation or immune response) can help to obtain insight into the early biological effects of exposure to a certain substance. A number of accepted markers of early biological effects will be discussed below.
For estimating mutagenic exposures and cancer risk, several mostly non-specific parameters are used. Genetic monitoring consists of cytogenetic monitoring (chromosomal changes) and non-cytogenetic monitoring (DNA changes). Chromosomal changes such as the frequency of chromosome aberrations (CAs), dicentrics, micronuclei or SCEs and translocations are assessed in cultured peripheral lymphocytes. Dicentric chromosomes are validated for estimating radiation dose (biodosimetry); however, sensitivity at low exposure is lacking and the marker is unspecific. The use of modern techniques like the polymerase chain reaction may soon spawn a new generation of sensitive and specific biomarkers of DNA damage. Damaging effects in cancer genes might be regarded as molecular manifestations of disease, and such biomarkers can be seen as preceding pathogenesis.
Mutational spectra are defined as the set of mutations found in a defined DNA sequence (base-pair substitutions, insertions, deletions, larger chromosomal changes and rearrangements). Of most interest are mutations in loci that are possibly related to the generation of malignant disease. Reporter genes are surrogate DNA targets: the assumption is that these genes, which are not in the pathway of malignant disease itself, undergo genetic alterations that reflect the ability of an exposure to induce damage in genes that are involved in carcinogenesis. Such a gene is the HPRT gene in lymphoblastoid cell lines and peripheral lymphocytes. In some studies the number of hypoxanthine-guanine phosphoribosyltransferase (HPRT) mutants is determined (Tates et al., 1991a,b); however, this parameter is probably much less sensitive and hence less useful as a biomarker. In vitro, the SCE parameter appears to be sensitive to a large number of carcinogens (Wolff, 1979). In vivo, the SCE frequency appears to be elevated in cases of occupational exposure to various substances, among cigarette smokers and among specific groups of cancer patients (Das, 1988); among smokers, an increase of 10-50% is commonly found (International Agency for Research on Cancer, 1986). The intra- and interindividual variance of the SCE parameter is large, probably because many factors, including time of day, play a role (Perry and Thomson, 1984). SCE results from different laboratories are hard to compare (Van Poppel et al., 1989). Even results obtained at different times in the same laboratory cannot be compared indiscriminately (Van Poppel et al., 1992b). An overview of factors playing a role in SCE determinations has been presented by Das (1988), and Wilcosky and Rynard (1990) have discussed the applicability of SCE determinations in epidemiology.
SCE can be used as an exposure marker but cannot predict an individual's risk; the interpretation is most useful in assessing population risks (Hagmar et al., 1994), since environmental factors and variation within individuals may bias or confound the results. The general consensus is that interpretation of results on an individual basis is as yet unjustified, and it is not possible to quantify the health risk associated with an increased frequency of SCE. The frequency of micronuclei in cells (as determined in vivo or in vitro) is also often used as a biomarker of early effects. Micronuclei are small DNA fragments present in isolation in the cytoplasm. Micronuclei are formed at mitosis if chromosome fragments or intact chromosomes are not incorporated into the daughter nuclei. This may be a consequence of chromosomal breaks or of defects in the nuclear spindle apparatus, and thus reflects DNA damage prior to or during mitosis. Micronuclei can be determined in cultured lymphocytes (in general, in binucleated cells obtained via the cytokinesis-block method with cytochalasin B) and in various types of mucosal cells, e.g. from the respiratory tract, the oral cavity (buccal cells) or the bladder (urine) (Stich and Rosin, 1984). Stich and Rosin (1984) have reviewed the factors leading to an elevated
frequency of micronuclei in humans. Vine (1990) has discussed the applications of this parameter in epidemiology. Stich has found a remarkable resemblance between the causes of elevated counts of micronuclei in buccal mucosa cells and the causes of oral cancer. By analogy, the frequency of micronuclei in sputum cells and bronchial brushings has been found to be elevated in smokers (Fontham et al., 1986). Intra- and interindividual variation in micronuclei counts is large, in particular if the origin of the cells is hard to identify or to standardize, as is the case for sputum cells (Van Poppel et al., 1992). The discussion of the effects of frequencies of SCEs and micronuclei should be seen as exemplary. The same holds for the determination of CAs and HPRT mutant frequencies in the field of genetic toxicology, and also for parameters in many other fields. For example, the effects of changes in enzyme activities on clinico-chemical indices, on biotransformation capacity (see also below) or on immune status (Hagmar et al., 1997) can be seen as early effects. Markers for immune status and immune response (Riethmueller et al., 1987; Zbinden, 1987) encompass, first of all, the determination of haematological indices and the quantification of lymphocyte subsets. Further, the non-specific cellular branch of the immune system can be tested by measuring the proliferative capacity of B- and T-cells after stimulation with various mitogens. Both specific and non-specific humoral factors can be measured, such as immunoglobulins, cytokines and complement factors. For all these parameters it is important to note that the expression and persistence related to exposure, and hence the time window for application, might vary.

4.6 Markers of modified structure or function

In contrast to markers of early biological effects, markers of modified structure or function represent an early stage of disease or are indicative of the development of disease.
An elevated L-aspartate aminotransferase activity in plasma is an indicator of liver damage, e.g. in alcoholics. In various types of cancer, metaplastic changes, e.g. in the lung (sputum cells) (Seidegard et al., 1986) or in the cervix (de Vet, 1990), are considered as such early signs. Metaplasia is established microscopically as a certain degree of loss of morphological differentiation of cells. Such markers also encompass the presence of particular tumour proteins (e.g. a-fetoprotein) in plasma and are used as diagnostic criteria for a disease. Oncogenic activation (Taylor, 1989) or mutations in tumour suppressor genes (Shields and Harris, 1991) can also be classified as markers in this category. In order to determine these markers, invasive techniques are frequently needed. Moreover, the duration between exposure and the development of such an early stage of disease is usually long. Therefore, these markers (including tumour markers) are not discussed here in more detail.
4.7 Markers of individual sensitivity

Individual sensitivity plays a role in each step of the process leading to the development of a disease (Figure 4.1). Differences in individual sensitivity are commonly the result of differences in physiology or metabolism acquired in the course of one's life (phenotypic differences) which, in their turn, may be the result of external factors such as nutrition or physical activity. In addition, genetic factors play an important role (genotypic differences). It is often not known to what extent individual differences have a phenotypic or a genotypic basis. Table 4.3 summarizes some traits of increased sensitivity. In cancer epidemiology, markers for genetically determined biotransformation enzymes have gained interest in recent years. The influence of enzyme polymorphism on the cytogenetic toxicity of methyl bromide, ethylene oxide and dichloromethane in vitro suggests that this may be an important factor in individual susceptibility and perhaps carcinogenic effects in humans (Hallier et al., 1993). Metabolism of xenobiotics encompasses both oxidative phase I metabolism (cytochrome P450 isozymes in particular) and phase II conjugation reactions (Mulder, 1990). During phase I metabolism, reactive intermediates are formed which are subsequently detoxified by phase II conjugation reactions (Noach et al., 1987). A classic example is provided by differences in conversion rate because of a genetically determined polymorphism of the phase II N-acetyltransferases. Upon exposure to an aromatic amine, 'slow acetylators' are prone, in particular, to kidney tumours, and 'fast acetylators' to liver tumours (Noach et al., 1987). Two genetically determined phase I enzymes, aryl hydrocarbon hydroxylase (AHH, cytochrome P450IA1) and debrisoquine hydroxylase (cytochrome P450IID6), reportedly are correlated with cancer risk (Gelboin, 1983; Gough et al., 1990).
A correlation with cancer risk has also been reported for GST isozyme μ of the phase II system (Seidegard et al., 1986).

Table 4.3 Traits of increased susceptibility

Genetic: ataxia telangiectasia; xeroderma pigmentosum; enzyme polymorphism; α1-antitrypsin; glucose-6-phosphate dehydrogenase; sickle cell anaemia; thalassaemia; porphyria
Other: deficient diet; induced enzymes; allergy

Besides the hereditary markers of individual sensitivity mentioned, there are differences in kinetics and metabolism for which the role of a genetic component has not been established. Many, but not all, of the phase I and phase II enzymes can be induced by external exposure (e.g. Brussels sprouts (Bogaards et al., 1994; Nijhoff et al., 1995a,b), oral contraceptives (Miners et al., 1983)). Many studies have focused on finding the most appropriate model substances for the various isozymes. An impression of an individual's phase I and phase II metabolic capacity can be obtained, for example, by using the model substances antipyrine and paracetamol (acetaminophen) (Verhagen et al., 1989). In erythrocytes and leukocytes the isozyme pattern and the activity of GST isozymes can be determined (Beckett et al., 1990). In saliva the rate of conversion of exogenous nitrate to the toxic nitrite (relevant to the formation of carcinogenic nitrosamines) can be measured (Bos et al., 1988). Excretion of nitrosoproline (upon an oral dose of proline) is an index of the endogenous formation of nitrosamines (Moller et al., 1991). DNA repair capacity, e.g. in leukocytes, can be considered a biomarker of individual sensitivity. However, this biomarker is infrequently used. Both genotypic (e.g. relatives of cancer patients) and phenotypic variations in DNA repair capacity (e.g. smokers) have been reported (Oesch et al., 1987; Pero et al., 1989; Celotti et al., 1989; Verhoeven et al., 1996). Recently we have shown that consumption of Brussels sprouts by human volunteers results in an elevated level of GST in plasma, peripheral lymphocytes, colorectal cells and bladder cells, most markedly in males (Bogaards et al., 1994; Nijhoff et al., 1995a,b). The most promising phase II biotransformation-inducing compounds are glucosinolates found in cruciferous vegetables.
Indeed, cruciferous vegetables and their active ingredients (glucosinolates and breakdown products) are frequently associated with chemoprevention against cancer (Verhoeven et al., 1996, 1997).
4.8 Selection, evaluation and application of biomarkers

In the above discussion, a number of biomarkers were placed in a framework. However, this categorization is not an invariable one, and every classification is to some extent artificial. For example, some scientists may be inclined to consider DNA adducts as early biological effects rather than as markers of biologically effective dose. In measuring a variety of enzyme activities or the immune response, one may question whether a biological effect or a marker of individual sensitivity is involved. The theorized biological mechanism, and the evidence for its validity, therefore determine the choice and interpretation of biomarkers. In addition to biological aspects in the choice and interpretation of
biomarkers, ethical, practical and analytical considerations play a role, as well as aspects such as sensitivity, specificity and individual variation in humans. For every parameter separately, one could develop a survey covering questions related to the pros and cons of the use of the biomarker in question with regard to predictive value, inter- and intra-run variability, inter- and intra-laboratory variation, standardization, and so on. Many laboratory and epidemiological studies will be needed to find adequate answers to all these questions.

4.8.1 Biological aspects

An important factor in the choice of biomarkers is the mechanism assumed to operate between exposure and disease. Biological insights must also contribute to an estimation of the time between exposure and the moment at which the (alteration in the) biomarker in question becomes manifest. The persistence of the marker is also a relevant aspect. If measurements take place shortly after exposure, the biomarker may not have changed yet, whereas too late a time of measurement will produce either no relation at all between exposure and biomarker or a strongly diluted one. It is also important to consider whether a marker provides an integration of exposure. It is conceivable, for example, that the number of DNA adducts in lymphocytes rises rapidly immediately upon exposure, but that these adducts are eliminated just as rapidly by repair enzymes. Measurement of adducts excreted in urine as a result of excision repair, or of haemoglobin adducts, can provide a better integration. On the other hand, the adduct peak immediately upon exposure and the roles of repair enzymes and dietary factors may constitute the very topic of interest. Finally, in applying some types of biomarker, one should ask what the biological relevance is of the tissue in which the biomarker is measured.
For example, measurements in lymphocytes may be of limited relevance if biological processes in the liver or the lung constitute the matter of interest. Figure 4.4 presents an overview of organs, tissues and matrices that can be used for biomarker research. In human studies, measurements are not done in the target tissue (e.g. liver, lung, intestines) but elsewhere in the body (e.g. blood cells, urine), at realistic doses. In contrast, in animal experiments the biomarkers can readily be measured in the target tissue, but usually at relatively high doses.

4.8.2 Ethical implications and constraints

For human studies one usually has to turn to tissues that can be sampled by non-invasive methods. Not only invasiveness, but also acceptance plays a role. For example, collection of faeces is non-invasive but is likely to
[Figure 4.4 maps exposure routes (ingestion, inhalation, dermal absorption) to the organs and media available for biological monitoring, e.g. exhaled air, sweat, saliva, buccal cells, mucus, skin, nails, hair, blood, urine, faeces, bile, milk, sperm, fat biopsy and cerebrospinal fluid.]

Figure 4.4 Organs and media for the determination of biomarkers.
meet less acceptance among volunteers than venipuncture. From an ethical point of view, human studies (studies with volunteers or epidemiological studies) can only be conducted if the exposure to the chemicals studied is 'natural' (e.g. dietary or occupational exposure) or if sufficient information on the exposure under survey is available from animal experiments. A major goal of biomarker use is prevention and, in cases of presumed critical events in a disease pathway, identification of individuals at risk to allow for more effective intervention. Biomarkers, especially in the occupational setting, cannot be used without considering some important ethical implications; although the discussion is not finally resolved, it seems appropriate to regard some genetic biomarkers as different from other medical tests. Since testing might enable identification of individuals predisposed to a certain disease or at greater risk of adverse health effects related to a specific exposure, we are faced with important but conflicting decisions. The possible 'labelling' of individuals with heightened risk has economic and social implications and the potential for insidious discrimination. The information gained through biomarkers must be handled in such a way as to protect the individuals from both harmful exposure and serious social consequences. The basic measures include: informed consent of participants, voluntary participation, confidentiality, conveyance of the uncertainties, and an outline of interventions (e.g. removal from work).
Table 4.4 Components of appropriate use of biomarkers in epidemiology and nutrition research

1. Practical methods and analytical reproducibility
2. Validity, specificity, sensitivity
3. Reference ranges
4. Understanding of kinetics (absorption, distribution, elimination)
5. Understanding of the relationship between exposure and effect
6. Valid disease prediction potential
7. Knowledge of confounding variables
8. Ethical consideration of use and consequences
9. Prevention of premature application under public pressure
Since some biomarkers are still in the validation stage, reports and interpretations must be made with caution. Information on the results and their meaning should be outlined beforehand, as well as how and whether to report on a group or individual basis. Otherwise, unnecessary concern and anxiety might occur, resulting in a change in the life of a participant. The medical community should resist the temptation to apply biomarkers prematurely. For social, legal and ethical reasons, biomarkers should only be used when the interpretation of the results and the likely consequences can be foreseen (Table 4.4).

4.8.3 Practical and analytical aspects

Besides acceptability to volunteers, considerations connected with sample processing and storage may play a part. For functional tests and SCE determinations, for example, live blood cells are required, thus imposing specific conditions on the conduct of the study. To determine micronuclei in sputum cells or bladder cells no viable cells are required, but sputum cells must be collected in a fixative and bladder cells must be isolated from urine within a brief period of time. Other indices, such as the blood vitamin C level, are highly unstable and sensitive to oxidation, and require rapid processing of blood samples and immediate storage at -80°C. Some analytical aspects also play a role in the selection of biomarkers. Conducting a conceptually simple determination, e.g. of polychlorinated biphenyl (PCB) levels in adipose tissue, can be a huge problem, given the complexity of the matrix and the required sensitivity of the test. Standardization and the inclusion of blind controls will reduce the variance of the determination and improve insight into the nature of that variance. In some cases, however, standardization with a pool of blind samples is impossible because the determination is done in viable biological material.
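When standardization with pooled blind samples is impossible, one practical safeguard is to allocate samples to laboratory runs at random, so that run-to-run drift is not confounded with the study groups being compared. A minimal sketch follows; the allocation scheme, sample labels and function name are our own illustration, not a prescription from the chapter.

```python
import random

def assign_to_runs(sample_ids, run_size, seed=1):
    """Shuffle all samples once, then cut the shuffled list into
    consecutive runs, so study groups end up interleaved across runs."""
    rng = random.Random(seed)  # fixed seed keeps the design reproducible
    ids = list(sample_ids)
    rng.shuffle(ids)
    return [ids[i:i + run_size] for i in range(0, len(ids), run_size)]

# Hypothetical study: 10 exposed and 10 control samples, runs of 5.
samples = [f"exposed-{i}" for i in range(10)] + [f"control-{i}" for i in range(10)]
runs = assign_to_runs(samples, run_size=5)
print(len(runs))  # 4
```

A balanced (blocked) allocation, with a fixed quota of each group per run, is an equally valid refinement of the same idea.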
In such cases it is even more important to order the different laboratory runs in such a way that known or unknown sources of variance do not distort the outcomes. The time required and costs of the determination will be other important factors in the choice of a
parameter. In many cases the disadvantages of a limited sensitivity and specificity of a biomarker have to be weighed against the advantages of a lower burden in terms of time and money: a less laborious biomarker allows us to examine more subjects using equal laboratory capacity and financial resources.

4.8.4 Sensitivity and specificity

In evaluating and selecting biomarkers, the sensitivity and specificity of the marker play an important role. In epidemiological studies, sensitivity is commonly defined as the proportion of a population with a certain characteristic (e.g. disease or exposure) that is classified correctly on the basis of measurements as subjects with that characteristic. In the scheme of Figure 4.5, therefore, sensitivity can be quantified as A/(A + C). A high sensitivity implies a low proportion of false negatives (category C). However, the test is only useful if the proportion of false positives (positive test results in the absence of disease or exposure, category B) is small as well. In other words, the test has to be highly specific as well, meaning that a large proportion of subjects without disease or exposure are correctly classified as such; D/(B + D) must be high. The demonstration of the presence or absence of DNA adducts, for example, may be a sensitive measure of exposure to some carcinogen, but the test will have a low specificity if other carcinogens produce similar DNA adducts which are also demonstrated by the assay. In studies in which biomarkers are applied, the quantification described above usually falls short because both the level of the biomarker and the underlying biological processes are considered to be a continuum (see Figure 4.3). Although the underlying view is the same as for Figure 4.5, the sensitivity of biomarkers is considered to be the extent to which the
[Figure 4.5 is a 2 × 2 table of test result (positive/negative) against reality (characteristic present/absent): A = true positives, B = false positives, C = false negatives, D = true negatives.]
Figure 4.5 Sensitivity and specificity of a test (Hulka et al., 1990).
assay is able to demonstrate or discriminate differences in exposure. For example, protein adducts of a certain substance may be more sensitive to exposure than DNA adducts, because protein adducts produce a temporal integration and hence are more capable of registering individual differences in long-term exposure. The concept of sensitivity is used in the same way to express the extent to which a marker can demonstrate a specific change in a biological process. By analogy, the concept of specificity is used to express to what extent a biomarker reacts exclusively to one specific type of exposure or one specific change of function. Summarizing, it can be said that the usefulness of a biomarker depends strongly on the exposure and disease under study and the mechanism hypothesized. Further, the applicability of tests in practice depends on the extent of exposure and the prevalence of disorders in the study population. Because quantitative aspects cannot be mapped in practice as simply as schematized in Figure 4.5, the choice of markers will usually be based on qualitative considerations: as compared with chromosomal aberrations, SCEs react to a broader variety of xenobiotics (i.e. lower specificity) but at 10- to 100-fold lower concentrations (i.e. higher sensitivity).

4.8.5 Human variability and study design
A challenge in understanding and interpreting biomarkers is the inherent variability across the population associated with genetic factors, age, neuropsychological influences, lifestyle and exposures. A biomarker is useful only if it shows some extent of variability in response to endogenous and exogenous factors. Many biomarkers score high on sensitivity but unfortunately rather low on specificity; often, larger differences exist within the control group than between the control group and the exposed group. Evaluation must be performed in the light of the degree of interindividual variability and the known dynamics of the markers in relation to exposure. A knowledge of normal background variability is a prerequisite. For an 'ideal' marker with a specificity of 100% and a sensitivity of 100%, both the intra- and the interindividual variance are determined by variation in exposure or variation in the biological function reflected in the marker. A first hitch in this attractively straightforward concept can be seen from Figure 4.1: not only exposure, but also individual sensitivity and, in our opinion, dietary factors, play important roles. The contributions of genotype, phenotype and diet/lifestyle complicate the picture but make it interesting at the same time, because the roles of these different sources of variation can be explored separately. Depending on the research question, however, many sources of variation will be experienced as a nuisance, in particular when the contribution of these sources is larger than the factor under investigation and when the origin becomes uncertain. For many applications, a marker with a relatively small interindividual variance will be an
attractive choice. A small interindividual variance implies that a small number of measurements, or even a single one, will suffice to obtain a reliable impression of the level of the marker on an individual level. A large interindividual variance, however, offers the opportunity to relate individual differences to factors relevant to the research question. To achieve a small intraindividual variance, effects of analytical sources of variance must first be excluded. Besides standardization in the laboratory and in the stage of sample processing, intraindividual variation can be reduced by standardizing experimental conditions. An example is blood sampling at a fixed time while the subject is in a fasting state. However, standardization of conditions will be more practicable in an experimental setting than in epidemiological studies. Not only standardization but also the choice of a proper study design may be a remedy against distortion of results by unwanted sources of variation. A homogeneous study population can be obtained by excluding subjects on the basis of factors supposed to be of influence. Further, the comparability of study populations can be improved through pre-stratification or matching, techniques that are also applied to distribute variation in sample processing or in assays evenly over the study groups. Finally, an even distribution of sources of variation, even if their nature is not known, can be attained through randomization of study groups. Provided the appropriate study design has been selected, a known or unknown variation in a research parameter will not distort the results. It is possible, however, that the 'random noise' in the marker selected is so strong that the effects supposed to be relevant are likely to be missed in a study of limited size. Therefore, insights into the nature of intra- and interindividual variance are essential to calculate the required size of the study groups.
Such 'power calculations' provide insights into the discriminative power of the study (the chance that a real effect is found and the chance that a non-existing effect is erroneously found). Obviously, the discriminative power must be high (a level of 90-95% is often chosen) and the chance of spurious findings must be small (usually 5% or 10%).
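The kind of power calculation described can be sketched numerically. The following is only an illustration (not taken from the chapter) of a standard normal-approximation sample-size formula for comparing two group means; the function name and parameters are hypothetical:

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sd, alpha=0.05, power=0.90):
    """Approximate subjects needed per group to detect a true difference
    `delta` between two group means with common standard deviation `sd`,
    at two-sided significance `alpha` and the chosen discriminative power."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # guards against spurious findings
    z_beta = z(power)            # guards against missing a real effect
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
    return math.ceil(n)

# A marker whose interindividual spread is twice the expected effect
# needs roughly 85 subjects per group at 90% power:
print(sample_size_per_group(delta=0.5, sd=1.0))  # → 85
```

Note how the required group size grows with the square of the ratio of marker variance to effect size, which is why the 'random noise' in a marker so directly limits a study's discriminative power.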
4.9 Conclusions

The area of biomarker research has attracted much interest in recent years. Obviously, the use of biomarkers adds to existing knowledge, may in the future link a particular exposure more precisely to its effects, and may place risk assessment deliberations on a firmer scientific basis. Biomarkers have been used predominantly thus far in environmental and occupational toxicology studies, but could also play an important role in nutrition research. Although the discriminative power is not known for a large number of biomarkers, studies based on these markers may provide
insights into biological mechanisms in humans and allow short-term effects to be evaluated. Biomarker studies appear to offer opportunities, in particular, for exploring the interactions between positive and negative effects of dietary components. The application of biomarkers requires a thorough knowledge of the aetiological mechanism of interest as well as the presumed position of the biomarker in that mechanism and its fate in an organism (absorption, distribution, biotransformation, excretion). This knowledge will often be based on in vitro studies and animal experiments. The marker must be acceptable from both an ethical and a practical point of view, and standardization or controlling for analytical variance must be possible. To evaluate biomarkers for all these aspects, human experiments with patients or volunteers seem to be indicated prior to applying them in studies of a larger scale. Moreover, such small-scale experiments may provide insights into intra- and interindividual variance which should be the basis for estimations of population size in larger-scale epidemiological studies. Finally, a careful choice of the study design is a prerequisite for making valid statements based on biomarker studies. If all these conditions are met, biomarkers appear to offer good prospects for applications in epidemiological and toxicological nutrition research.

Acknowledgement

The authors are indebted to Mr D.G. van der Heij for his contribution.

References

ATSDR/USEPA (1990) Toxicological profile for lead. Agency for Toxic Substances and Disease Registry and US Environmental Protection Agency, Office of Toxic Substances, Washington DC, USA. Bartsch, H. and Hemminki, K. (eds) Methods for detecting DNA damaging agents in humans: applications in cancer epidemiology and prevention. IARC Scientific Publication 89. International Agency for Research on Cancer, Lyon. Beckett, GJ., Howie, A.F., Hussey, A.J., et al.
(1990) Radioimmunoassay measurements of the human glutathione S-transferases. In: Hayes, J.D., Pickett, C.B., and Mantle, TJ. (eds) Glutathione S-transferase and Drug Resistance. Taylor and Francis, London, pp. 399-408. Bogaards, J.J.P., Verhagen, H., Willems, M.I. et al. (1994) Consumption of Brussels sprouts results in elevated α-class glutathione S-transferase levels in human blood plasma. Carcinogenesis, 15, 1073-1075. Bos, P.J.M., Bovens, M., Hulshof, K.F.A.M. and Wedel, M. (1988) Nitraat-nitriet conversie in de mondholte; een epidemiologisch onderzoek. TNO Rapport V 88.357. CIVO Instituten TNO, Zeist. Celotti, L., Furlan, D., Ferraro, P. and Levis, A.G. (1989) DNA repair and replication in lymphocytes from smokers exposed in vitro to UV light. Mutagenesis, 4, 82-86. Committee on Biological Markers, National Research Council (1987) Biological markers in environmental health research. Environmental Health Perspectives, 77, 3-9. Compton, P.J.E., Hooper, K. and Smith, M.T. (1991) Human somatic mutation assays as biomarkers of carcinogenesis. Environmental Health Perspectives, 94, 135-141.
Gough, A.C., Miles, J.S., Spurr, N.K. et al. (1990) Identification of the primary gene defect at the cytochrome P450 CYP2D locus. Nature, 347, 773-776. Das, B.C. (1988) Factors that influence formation of sister chromatid exchanges in human lymphocytes. CRC Critical Reviews in Toxicology, 19, 43-86. de Vet, H.C.W. (1990) The role of beta carotene in cancer prevention. Epidemiological studies on cervical dysplasia. Thesis, Rijksuniversiteit Limburg, Maastricht. ECETOC (1990) DNA and protein adducts: evaluation of their use in exposure monitoring and risk assessment. Monograph no. 13. European Chemical Industry Ecology and Toxicology Centre, Brussels. Fennell, T.R. (1990) Biological markers of exposure to chemical carcinogens. CIIT Activities, 10, 1-7. Fontham, E., Correa, P., Rodriguez, E. and Lin, Y. (1986) Validation of smoking history with the micronuclei test. In: Hoffmann, D. and Harris, C.C. (eds) Mechanisms in tobacco carcinogenesis. Banbury Report 23. Cold Spring Harbor Laboratory, New York, 113-118. Fraga, C.G., Shigenaga, M.K., Park, J.W. et al. (1990) Oxidative damage to DNA during aging: 8-hydroxy-2'-deoxyguanosine in rat organ DNA and urine. Proceedings of the National Academy of Sciences of the USA, 87, 4533-4537. Gelboin, H.V. (1980) Benzo[a]pyrene metabolism, activation, and carcinogenesis: role and regulation of mixed function oxidases and related enzymes. Physiological Reviews, 60, 1107-1166. Gelboin, H.V. (1983) Carcinogens, drugs, and cytochromes P-450. New England Journal of Medicine, 309, 105-107. Hagmar, L., Brogger, A., Hansteen, I.-L. et al. (1994) Cancer risk in humans predicted by increased levels of chromosome aberration in lymphocytes: Nordic Study Group on the Health Risk of Chromosome Damage. Cancer Research, 54, 2919-2922. Hagmar, L., Hallberg, T., Leja, M. et al. (1995) High consumption of fatty fish from the Baltic Sea associates with changes in human lymphocyte subsets levels. Toxicology Letters, 77, 335-342.
Hallier, E., Langhof, T., Dannappel, D. et al. (1993) Polymorphism of glutathione conjugation of methyl bromide, ethylene oxide, and dichloromethane in human blood: influence on the induction of sister chromatid exchanges (SCE) in lymphocytes. Archives of Toxicology, 67, 173-178. Harris, C.C. (1989) Interindividual variation among humans in carcinogen metabolism, DNA adduct formation and DNA repair. Carcinogenesis, 10, 1563-1565. Henderson, R.F., Bechtold, W.E., Bond, J.A. and Sun, J.D. (1989) The use of biological markers in toxicology. Critical Reviews in Toxicology, 20, 65-82. Hulka, B.S., Wilcosky, T.C. and Griffith, J.D. (eds) (1990) Biological Markers in Epidemiology, 1st edn. Oxford University Press, New York, Oxford. Hunter, D. (1990) Biochemical indicators of dietary intake. In: Willett, W.C. (ed.) Nutritional Epidemiology. Oxford University Press, New York. International Agency for Research on Cancer (1986) Tobacco smoking. Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans, Vol. 38. World Health Organization/International Agency for Research on Cancer, Lyon. Jarvis, MJ., Tunstall-Pedoe, H., Feyerabend, C. and Salojee, Y. (1987) Comparison of tests used to distinguish smokers from nonsmokers. American Journal of Public Health, 77, 1435-1438. Kok, FJ. and van't Veer, P. (eds) (1991) Biomarkers of dietary exposure: Proceedings of the 3rd Meeting on Nutritional Epidemiology. Smith Gordon and Company, London. Lapre, J.A. and van der Meer, R. (1992) Diet induced increase of colonic bile acids stimulates lytic activity of fecal water and proliferation of colonic cells. Carcinogenesis, 13, 41-44. Livingston, G.R. (ed.) (1989) Nutritional Status Assessment of the Individual. Food and Nutrition Press Inc., Trumbull, Connecticut. Miners, J.O., Attwood, J. and Birkett, DJ. (1983) Influence of sex and oral contraceptive steroids on paracetamol metabolism. British Journal of Clinical Pharmacology, 16, 503-509.
Moller, H., Landt, J., Pedersen, E. et al. (1991) Urinary excretion of N-nitrosoproline in relation to consumption of raw and cooked vegetables in a Danish rural population. In: O'Neill, I.K., Chen, J. and Bartsch, H. (eds) Relevance to Human Cancer of N-nitroso Compounds. IARC Scientific Publication No. 105, pp. 168-171. International Agency for Research on Cancer, Lyon.
Mulder, GJ. (1990) Conjugation Reactions in Drug Metabolism; an Integrated Approach. Taylor & Francis, London. Nijhoff, W.A., Mulder, T.P.J., Verhagen, H. et al. (1995a) Effects of consumption of Brussels sprouts on plasma and urinary glutathione S-transferase class-α and -π in humans. Carcinogenesis, 16, 955-957. Nijhoff, W.A., Nagengast, F.M., Grubben, M.J.A.L. et al. (1995b) Effects of consumption of Brussels sprouts on intestinal and lymphocytic glutathione and glutathione S-transferases in humans. Carcinogenesis, 16, 2125-2128. Noach, E.L., Henderson, P.Th. and Breimer, D.D. (1987) Inleiding Tot de Algemene Farmacologie. Samson Stafleu, Alphen aan den Rijn. Oesch, F., Aulmann, W., Platt, K.L. and Doerjer, G. (1987) Individual differences in DNA repair capacities in man. Archives of Toxicology, Suppl. 10, 172-179. Pero, R.W., Johnson, D.B., Markowitz, M. et al. (1989) DNA repair synthesis in individuals with and without a family history of cancer. Carcinogenesis, 10, 693-697. Perry, P.E. and Thomson, EJ. (1984) The methodology of sister chromatid exchanges. In: Kilbey, BJ., Legator, M., Nichols, W. and Ramel, C. (eds) Handbook of Mutagenicity Test Procedures, 2nd edn. Elsevier Science Publishers, Amsterdam. Rahn, C.A., Howard, G., Riccio, E. and Doolittle, DJ. (1991) Correlations between urinary nicotine or cotinine and urinary mutagenicity in smokers on controlled diets. Environmental and Molecular Mutagenesis, 17, 244-252. Riboli, R.M.H. and Saracci, R. (1987) Biological markers of diet. Cancer Surveys, 6, 685-718. Riethmueller, G., Ziegler-Heitbrock, H.W.L. and Rieber, E.P. (1987) Monitoring the human immune system. In: Berlin, A., Dean, J., Draper, M.H. et al. (eds) Immunotoxicology. Nijhoff Publishers, Dordrecht, pp. 98-103. Risse, E.K.J. (1987) Accuracy of sputum cytology in lung cancer diagnosis. Thesis, Katholieke Universiteit Nijmegen. Rylander, G. (1995) Genes and agents, how to prioritize to prevent disease.
Archives of Environmental Health, 50, 333-334. Saccomano, G., Archer, V.E., Auerbach, O. et al. (1974) Development of carcinoma of the lung as reflected in exfoliated cells. Cancer (Philadelphia), 33, 256-270. Schulte, P.A. (1993) A conceptual and historical framework for molecular epidemiology. Molecular Epidemiology, 3, 44. Schulte, P.A. (1995) Opportunities for the development and use of biomarkers. Toxicology Letters, 77, 25-29. Seidegard, J., Pero, R.W., Miller, D.G. and Beattie, EJ. (1986) A glutathione transferase in human leukocytes as a marker for the susceptibility to lung cancer. Carcinogenesis, 7, 751-753. Shields, P.G. and Harris C.C. (1991) Molecular epidemiology and the genetics of environmental cancer. Journal of the American Medical Association, 266, 681-687. Stich, H.F. and Rosin, M.P. (1984) Micronuclei in exfoliated human cells as a tool for studies in cancer risk and cancer intervention. Cancer Letters, 22, 241-253. Straume, T. and Lucas, NJ. (1995) Validation studies for monitoring of workers using molecular cytogenetics. In: Mendelson, M.L., Peeters, J.P. and Normandy, MJ. (eds) Biomarkers and Occupational Health. Joseph Henry Press, Washington DC. Tates, A.D., van Dam, FJ., van Mossel, H. et al. (1991a) Use of the clonal assay for the measurement of frequencies of HPRT mutants in T-lymphocytes from five control populations. Mutation Research, 253, 199-213. Tates, A.D., Grummt, T., Törnqvist, M. et al. (1991b) Biological and chemical monitoring of occupational exposure to ethylene oxide. Mutation Research, 250, 483-497. Taylor, J.A. (1989) Oncogenes and their application in epidemiologic studies. American Journal of Epidemiology, 130, 6-13. Van Doorn, R., Bos, R.P., Brouns, R.M.E. and Henderson, P.Th. (1982) Is de bepaling van thioethers in urinemonsters bruikbaar bij de biologische monitoring van genotoxische belasting door de arbeidsomgeving? Tijdschrift voor sociale geneeskunde, 60, 30-34. Van Poppel, G., de Vogel, N., van Bladeren, PJ.
and Kok, FJ. (1992a) Increased cytogenetic damage in smokers deficient in glutathione S-transferase isozyme. Carcinogenesis, 13, 303-305.
Van Poppel, G., Kok, F.J., Duijzings, P. and de Vogel, N. (1992b) No influence of beta-carotene on smoking induced DNA damage as reflected by sister chromatid exchanges. International Journal of Cancer, 51, 355-358. Van Poppel, G., Poulsen, H., Loft, S. and Verhagen, H. (1995) No influence of beta-carotene on oxidative DNA damage in male smokers. Journal of the National Cancer Institute, 87, 310-311. Van Poppel, G., Kok, FJ. and Hermus, RJJ. (1997) Beta carotene supplementation in smokers reduces the frequency of micronuclei in sputum. British Journal of Cancer, 66, 1164-1168. Van Poppel, G., Gorgels, W.J.M.J., de Vogel, N. and Stenhuis, W.H. (1989) Genotoxiciteits parameters bij niet-rokers en passief-rokers. TNO Rapport V89-607. CIVO Instituten TNO, Zeist. van Schooten, FJ., van Leeuwen, F.E., Hillebrand, MJ.X. et al. (1990) Determination of benzo[a]pyrene diol epoxide-DNA adducts in white blood cell DNA from coke-oven workers: the impact of smoking. Journal of the National Cancer Institute, 82, 927-933. Verhagen, H. and Kleinjans, J.C.S. (1991) Some comments on the dietary intake of butylated hydroxytoluene - rejoinder. Food and Chemical Toxicology, 29, 74-75. Verhagen, H., Maas, L.M., Beckers, R.H.G. et al. (1989) Effect of subacute oral intake of the food antioxidant butylated hydroxyanisole on clinical parameters and phase-I and -II biotransformation capacity in man. Human Toxicology, 8, 451-459. Verhagen, H., Poulsen, H.E. and Loft, S. (1995) Reduction of oxidative DNA-damage in humans by Brussels sprouts. Carcinogenesis, 16, 969-970. Verhoeven, D.T.H., Goldbohm, R.A., Van Poppel, G. et al. (1996) Epidemiological studies on Brassica vegetables and cancer risk. Cancer Epidemiology, Biomarkers and Prevention, 5, 733-748. Verhoeven, D.T.H., Verhagen, H., Goldbohm, R.A. et al. (1997) A review of mechanisms underlying anticarcinogenicity by Brassica vegetables. Chemico-Biological Interactions, in press. Vine, M.F. (1990) Micronuclei. In: Hulka, B.S., Wilcosky, T.C.
and Griffith, J.D. (eds) Biological Markers in Epidemiology, 1st edn., Oxford University Press, New York. Wilcosky, T.C. and Rynard, S.M. (1990) Sister chromatid exchange. In: Hulka, B.S., Wilcosky, T.C. and Griffith, J.D. (eds) Biological Markers in Epidemiology, 1st edn., Oxford University Press, New York. Willems, M.I., de Raat, W.K., Wesstra, J.A. et al. (1989) Urinary and faecal mutagenicity in car mechanics exposed to diesel exhaust and in unexposed office workers. Mutation Research, 222, 375-391. Willett, W.C. (1990) Vitamin A and lung cancer. Nutrition Reviews, 48, 201-211. Wolff, S. (1979) Sister chromatid exchange: the most sensitive mammalian system for determining the effect of mutagenic carcinogens. In: Berg, K. (ed.) Genetic Damage in Man Caused by Environmental Agents. Academic Press, New York, pp. 229-246. Zbinden, L.C. (1987) A toxicologist's view of immunotoxicology. In: Berlin, A., Dean, J., Draper, M.H. et al. (eds) Immunotoxicology. Nijhoff Publishers, Dordrecht, pp. 1-9.
5 Expert systems for hazard evaluation

P. JUDSON
5.1 Introduction

Computers are often seen as very powerful, but very stupid, 'number-crunchers' - because they work so much more quickly than human beings, will continue diligently with mindless chores, and, given appropriate precautions, are less prone to random errors, they can do calculations that are beyond practical human ability. But there are other activities to which computers are well suited, one of them being the application of logical reasoning. Reasoning is an aspect of intelligence, and so this use of computing contributes to research into the creation of artificial intelligence. One area of human intelligence which has received a lot of attention is how experts use general knowledge to solve specific problems. The most popular example is the way in which a doctor diagnoses an illness by using his or her training and experience to interpret symptoms. Various conditions might cause headache, such as migraine attack, brain tumour, having had one too many whiskies, and so on. Confronted with a patient complaining of headache, a doctor might ask 'did you go to a party last night?' Given the answer 'yes,' she might make a judgement that drink was the most likely cause of the problem. That is not to say that more serious explanations would be ruled out, but only that it was a reasonable preliminary hypothesis. Further questions might follow, about frequency and severity of headaches, and so on, and the doctor might decide that monitoring was needed before a firm decision could be made. Expert systems are intended to emulate the ways in which human experts solve problems. Although they may be intended to behave like experts, they are not normally seen as potential replacements for human beings.
Their more likely application is as tools in the hands of experts, giving the kind of help that specialists share through lunch-time conversations or more formal meetings - even within a specialized field, the amount of information that might be relevant to every problem is more than an individual can be confident of remembering, or even of ever having known. Research has broadly been in parallel streams, leading to systems that recognize patterns in sets of data and use them to make predictions, and knowledge-based systems, which base their predictions on existing human
knowledge. Some workers use the term 'expert system' only for knowledge-based systems, but in this chapter 'expert system' is used in its broader sense. There are many views on what constitutes intelligent behaviour, but it can be argued that an intelligent machine would have to be able to solve problems by learning, reasoning and being creative. To this might be added that being able to reason implies being able to explain the reasoning. In practice it is hard to meet all of these requirements together, and most systems concentrate on one or two of them. Some systems are good at learning but do not reason and are not creative. Others - knowledge-based systems, for example - do not learn for themselves, depending on knowledge provided by human experts, but they are good at explaining their reasoning. Whether any current expert systems are truly creative is a matter for conjecture. Expert systems use generalized information to make predictions, rather than simply recalling facts. As an example, vinegar turns litmus paper red. This fact could be stored in a database, and a database management system could recall it if a user asked questions such as 'what colour does litmus paper turn on contact with vinegar?' or 'what turns litmus paper red?' Its answer to the second question, 'vinegar does,' would be totally unimaginative, and the system would be unable to answer related questions, such as 'what colour does litmus paper turn on contact with hydrochloric acid?' A knowledge base is analogous to a database, but it contains generalized information, rather than specific items of data. So, for example, it might contain the information that 'acids turn litmus paper red'. In the case of the question about hydrochloric acid, a knowledge-based system might recognize the word 'acid' and give the correct answer straight away. In the case of the question 'what turns litmus paper red?', it would be able to say 'any acid'. 
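The contrast between recalling specific facts and applying a general rule can be sketched in a few lines of code. This is only an illustration (all names and rules here are invented, and real systems recognize chemical structures rather than words):

```python
# Specific facts, as a plain database would hold them (illustrative only).
DATABASE = {"vinegar": "acid", "hydrochloric acid": "acid", "lye": "alkali"}

# Generalized information, as a knowledge base would hold it:
# 'acids turn litmus paper red', 'alkalis turn it blue'.
KNOWLEDGE = {"acid": "red", "alkali": "blue"}

def litmus_colour(substance):
    """Predict the litmus colour by reasoning from the general rule."""
    for substance_class, colour in KNOWLEDGE.items():
        if substance_class in substance:        # class named in the question
            return colour
    substance_class = DATABASE.get(substance)   # otherwise consult the database
    return KNOWLEDGE.get(substance_class, "unknown")

print(litmus_colour("hydrochloric acid"))  # 'acid' recognized in the name
print(litmus_colour("vinegar"))            # classified via the database lookup
```

The fallback to the database in the sketch mirrors the behaviour described in the text: when the class of the substance is not apparent from the question itself, a knowledge-based system consults stored facts (or the user) before applying its general rule.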
The question about vinegar would be harder, since it is not apparent from the question that vinegar is an acid, but a well-designed knowledge-based system would call up a database to see if vinegar was listed as an acid, and/or would ask the user about it. This example hinges on the use of the word 'acid'. Most expert systems in chemistry (knowledge-based and otherwise) work at a higher level of generalization than dependence on chemical nomenclature would allow, by recognizing chemical features in structural diagrams. Given the structural diagram for the acetic acid in vinegar, such a system might recognize the significance of the carboxyl group it contains, or calculate the pKa of the substance and use it to work out the expected colour of an indicator (litmus) that turns from blue to red when the pH falls below pH 7. This chapter first describes factors influencing biological activity which are relevant to the design of an expert system to predict toxicity, followed by the different technical approaches to creating or gathering generalized information. The ways in which chemical structures are represented by
scientists and in computer systems are described and then the practical implications of choosing between them are discussed. A few comments follow on the potential for using computers to go beyond hazard assessment and support risk assessment. Some examples of existing systems to support toxicological hazard assessment are listed as examples. The chapter ends with observations on the implications of choosing different types of systems and, finally, on the applicability of expert systems to hazard assessment for food chemicals.
5.2 Factors influencing biological activity

Toxic substances exert their effects in a number of ways. Some are very specific, interfering directly with biological molecules - often called a lock-and-key mechanism of action. Others, because of their physicochemical properties, cause more general, non-specific disruption of biological processes. Very frequently, substances that would be harmless in themselves are converted into harmful ones by metabolic processes. A well-known example of a lock-and-key mechanism is that of the organophosphorus insecticides and war gases which mimic acetylcholine (Figure 5.1). Acetylcholine conveys messages between nerve cells where they meet in synapses. A signal passing down a nerve causes release of acetylcholine, which, on reaching the other nerve, triggers a signal in it. Having done its job, the acetylcholine must be removed so that the system is ready for the next message, and the enzyme acetylcholinesterase does this by catalysing the hydrolysis of an ester bond, converting acetylcholine to acetic acid and choline. The conversion involves passing through a reaction intermediate of particular shape and size, carrying a characteristic distribution of electronic charges. The enzyme promotes the reaction by having complementary structural and electronic features, so that the intermediate is stabilized and the energy barrier to the reaction is lower than it would otherwise be (Figure 5.2). Acetylcholine also fits into the binding site of this enzyme, though less well than the intermediate. The reaction products fit less well than either, and leave the site easily. Thus the enzyme
Figure 5.1 Acetylcholine.
Figure 5.2 The enzymic hydrolysis of acetylcholine (the intermediate is stabilized by the enzyme).
creates an imbalance in favour of the steady conversion of acetylcholine into acetic acid and choline. Externally, the organophosphorus toxins bear a close resemblance to the reaction intermediate (Figure 5.3), but, unlike the intermediate, they are stable compounds. As a result, they bind strongly to acetylcholinesterase and remain there unchanged, blocking access to acetylcholine. The process of acetylcholine removal by hydrolysis is disabled, nerve communication is disrupted, and the nervous system fails. The lock-and-key mechanism of biological action is typically very specific, activity being confined to interfering with one site of one enzyme or other specialized molecular structure, with minimal side-effects. That is not to say, of course, that no clinical side-effects should be expected - interfering with an enzyme may lead to secondary side-effects. The point is that the active agent would not usually be expected to interfere directly with other processes. Active substances are often extremely potent - as the target enzyme binds the active agent so strongly, it effectively extracts it from the surroundings, and a dose of only just enough of the agent to bind with the relatively small amount of the enzyme in a living system
Figure 5.3 A generalized structure for a phosphorus compound resembling the intermediate in hydrolysis of acetylcholine (the intermediate is shown alongside its phosphorus analogue).
is needed. For these reasons, most research into the rational design of drugs and pesticides centres on the study of lock-and-key mechanisms. The field is vast, and, although knowledge is increasing rapidly, it remains confined to a relatively few cases that are socially and commercially important. The complete description of the essential features for activity of a drug (i.e. a substance having desirable biological activity) is commonly called a pharmacophore and that of a toxin (i.e. a substance having undesirable biological activity) a toxophore or toxicophore. In contrast, strong acids, for example, by lowering the pH of cell contents and/or the surrounding environment, disturb many metabolic reaction rates, destroy proteins by straightforward chemical hydrolysis, and so on. There is nothing of the lock-and-key kind about the way in which they act - all that is required is that they have the right physicochemical properties to lower pH to the level at which these damaging processes take place. Skin sensitizers are typically substances that react with functional groups in skin proteins. On first contact with skin, the sensitizer, by reacting with proteins, creates what appear to the immune system to be new, foreign proteins. Antibodies are produced and the next time the skin is exposed to the same substance, some of the same protein derivatives are generated and an allergic response ensues. For such activity, it is not necessary for the sensitizer to react by a lock-and-key mechanism. It is sufficient for it to be of the right order of reactivity to attack any or many of the functional groups in a variety of proteins. Different computational approaches may be better suited to modelling either non-specific or lock-and-key activity. The organophosphorus toxins discussed above also provide an example of how metabolism influences biological activity. Although the war gases work more or less as described, most of the insecticides are more subtle.
Acetylcholine, being an ester, contains an oxygen atom which is part of a carbonyl group. The oxygen atom that mimics it in the toxins is typically part of a phosphate or phosphonate group. But many of the insecticides contain instead a thiophosphate or thiophosphonate group. That is, an oxygen atom essential to the effective mimicking of the carbonyl group in acetylcholine is replaced with a sulphur atom. The substances may inhibit the enzyme but they are, as might be expected, poor inhibitors. Their usefulness as insecticides comes from the fact that another enzyme converts thiophosphates and similar substances to phosphates and their equivalents. The relatively harmless thio-compounds are converted into their toxic oxygen analogues by metabolic activation (Figure 5.4). Conveniently, mammals carry out the conversion less speedily and effectively than insects, with competing conversions leading to substances that are easy to excrete and thus safely disposed of.
Figure 5.4 Metabolic conversion of a thiophosphorus compound to a toxic oxophosphorus compound.
Metabolic activation of toxins and metabolic detoxification are extremely common (indeed, almost all food contains natural substances that would cause severe damage if they were not prevented from reaching other parts of the body by metabolic detoxification in the liver), and computer systems must be able to take account of them to be useful. There is another factor that controls whether a substance that is inherently capable of toxicity shows its effect in a complete living system. A substance may powerfully inhibit an enzyme in a test-tube experiment, and it may be resistant to metabolic deactivation, but it will not be active in a whole animal if it does not reach the place where the enzyme is found. A host of things can interfere with the transport of sufficient quantities of a substance to the site of action for it to be toxic. In the context of food chemicals, no activity will be observed if the food is not eaten, or if it is eaten only in small quantities (the critical amount being dependent on the nature of the toxin). The fact that this is obvious does not make it any less important, although most computer systems assume that it is a consideration to be left to the user. Assuming that the toxin is ingested, whether it reaches the site of action depends on its ability to pass from cell to cell, and across membranes inside cells. Such ability can be predicted quite well from properties that are easy to measure or to compute, such as water solubility, fat/water partition coefficient (usually assumed to be closely related to the octanol/water partition coefficient, log P), pKa, and molecular volume.

In summary, computer systems for predicting toxicity may be suitable for specific or non-specific modes of action, or both. They need to take account of metabolic processes and the ability of substances to reach their sites of action.

5.3 Making rules for expert systems

An expert system solves problems by applying generalized information.
As mentioned in section 5.1, some systems analyse data and build rules for themselves (a process usually termed rule induction), while others use knowledge bases written by human experts.
Systems that build their own rules to associate toxicity with chemical structure generate them by analysing data for sets of active and inactive compounds. No method of analysis can be completely reliable, being limited by the quality and availability of data as well as the kinds of fragments chosen to represent the components of the substances of interest. The aim of an expert system is to suggest what may be likely, rather than to state certainties. Three automatic methods are described below, based on building a binary tree, analysing data statistically, or using probability theory, followed by some comments on knowledge bases written by experts.

5.3.1 Binary trees

There are several ways of building and using binary trees, illustrated by the following description of one of them (Figure 5.5). Suppose there is a set of compounds, some of which are active and some inactive in a toxicity test, and that it is confidently believed that the active compounds have a common mode of action. For the sake of simplicity, suppose that the structures of all the compounds are made up of different combinations of just three fragments, a, b and c. As a first step, divide all the possible combinations of a, b and c into two subsets, one composed of all the structures containing fragment a, the other of all those not containing it. In this discussion the new nodes will be described as being at level 1 in the tree, and referred to as nodes 1.1 and 1.2, respectively. Now divide node 1.1 on the basis of the presence or absence of fragment b, to create second-level nodes 2.1 and 2.2, and do the same for node 1.2 to create nodes 2.3 and 2.4. Repeating the process for the four
Figure 5.5 A simple binary tree for a set of substances containing fragments a, b and c.
nodes at level 2 on the basis of the presence or absence of fragment c, creates a third level with eight nodes - one for every theoretically possible combination of the three fragments. If activity is associated with a particular pattern of features, all of the active compounds should now be found clustered together at the corresponding node on the tree. To take the simplest possible example, if activity depends only on the presence of a, all of the structures associated with node 1.1 will be active and all of those associated with 1.2 will be inactive. Active compounds will appear at other nodes (2.1, 2.2, 3.1, 3.2, 3.3 and 3.4), but level 1 by itself can give the complete information. Applying these observations to the analysis of a real, large set of structures with many more features, it is possible to work progressively down the tree, finding likely groups of features associated with activity. Analyses are not restricted to looking for a single cause of toxicity - if there are different groups of features that can lead independently to activity, separate groups of active compounds should appear at corresponding nodes. Real sets of compounds do not include all those that are theoretically possible. Indeed, there would be no point in constructing a rule to predict toxicities within a set of compounds for all of which the toxicities are already known! But completeness is not necessary if a degree of uncertainty in the conclusion is acceptable. For example, if activity depends solely on the presence of a, any pair of compounds, one containing and one not containing a, and with no differences with regard to b and c, will show whether a is necessary for activity. That may be useful information in itself. Questions that would be unanswered are whether a alone confers activity, or only when associated with the presence (or absence) of b and c that obtains in the test compounds, and whether some other combination of b and c might confer activity independently. 
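The partitioning just described can be sketched in a few lines of code. The data below are invented for illustration: each compound is represented simply as the set of fragments it contains, the full set of eight combinations of a, b and c is enumerated, and activity is assumed to depend only on the presence of fragment a, as in the example in the text.

```python
from itertools import product

# Hypothetical data: every combination of fragments a, b, c, with activity
# assumed to depend solely on the presence of 'a' (as in the text's example).
compounds = [
    (frozenset(f), 'a' in f)
    for f in (''.join(c for c, keep in zip('abc', bits) if keep)
              for bits in product([True, False], repeat=3))
]

def split(subset, fragment):
    """Divide a node into two child nodes on presence/absence of a fragment."""
    with_f = [(s, act) for s, act in subset if fragment in s]
    without_f = [(s, act) for s, act in subset if fragment not in s]
    return with_f, without_f

node_1_1, node_1_2 = split(compounds, 'a')  # level 1 of the tree

# All active compounds cluster at node 1.1; node 1.2 is wholly inactive,
# so level 1 by itself gives the complete information.
assert all(act for _, act in node_1_1)
assert not any(act for _, act in node_1_2)
```

Splitting node_1_1 and node_1_2 further on b and then c would reproduce the full three-level tree of Figure 5.5; for this data set the extra levels add no discriminating power.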
For a real set, given the nature of biological activity, it can typically be assumed that groups of just two or three features really are responsible for activity, and so there is less subjective uncertainty about drawing conclusions from a limited data set. Real data sets invariably contain errors, particularly where studies of biological activity are concerned because of the difficulties of reproducing biological tests. This means that, in practice, a clean separation of active and inactive compounds as just described is rarely possible. The algorithms in systems depending on binary trees are modified to tolerate some deviation from exactness.

5.3.2 Statistical methods

Counting the frequency of occurrence of all features in all structures in the active set in comparison with the inactive set is used by some systems for rule induction. Given that a small set of structural features is
responsible for some property of a substance (in this case, toxicity through a particular mechanism), it follows that those features might be expected to occur more frequently in a group of active compounds than in a group of inactive ones. To take the simplest example, if a single feature is alone responsible for activity, it should be found in all the structures of active compounds and none of the structures of inactive ones. The extension to cases depending on several fragments is not so straightforward. If two features were essential for activity, it would still be true that every active compound must contain both, but up to half of the inactive compounds might contain one feature and half the other. Although the distinction between active and inactive compounds would not be as absolute as in the simple case of a single feature, each feature would be twice as common in the active set as in the inactive set. If, on the other hand, either of two features alone led to activity, both would be completely absent from the inactive set but it is not clear what their levels of incidence in the active set would be. In the extreme case, if one occurred in all the active structures, the other one might not occur at all. However, at least one must be there, and either one, if present at all, would be more common in the active set than in the inactive set, where its incidence is zero. One of them, at least, would be discoverable from simple statistics. Sometimes a particular feature may prevent activity, rather than cause it (for example, a critically located bulky group may prevent a molecule from entering the site of action in an enzyme). It is, in effect, a reason for inactivity; the arguments made above apply in reverse, and, to discover it, it would be necessary to look for features more common in the inactive than in the active set. 
Thus differences in frequency of occurrence of fragments in the active and inactive sets can be expected, but there is no preset threshold ratio that would discriminate in all cases. By adjusting parameters such as a cut-off value for the minimum proportion of active compounds that must contain a feature for it to be classed as necessary for activity and/or one for a minimum ratio between frequency of occurrence in the active and inactive sets, the sensitivity of the rule induction process is adjusted until features more common in the active set are detected. This also makes it possible to balance tolerance of errors, such as the false labelling of an inactive compound as active, against oversensitivity to chance variations in frequency of occurrence of features between the active and inactive sets and the presence of molecules containing the features normally associated with activity and yet inactive. If some molecules like this are found in the inactive set, they can subsequently be compared with the active compounds, reversing the analysis to look for additional features that might be responsible for their inactivity.
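The frequency comparison with adjustable cut-offs can be sketched as follows. The fragment sets, the cut-off values and the function name are all invented for illustration; real systems use far larger data sets and more elaborate statistics.

```python
from collections import Counter

# Hypothetical fragment sets for active and inactive compounds.
actives   = [{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}, {'a'}]
inactives = [{'b'}, {'c'}, {'b', 'c'}, {'d'}]

def fragment_frequencies(compounds):
    """Fraction of compounds in a set that contain each fragment."""
    counts = Counter()
    for frags in compounds:
        counts.update(frags)
    return {f: n / len(compounds) for f, n in counts.items()}

def candidate_toxicophores(actives, inactives,
                           min_active_fraction=0.9, min_ratio=2.0):
    """Flag fragments much more common in the active set than the inactive set.

    The two cut-off parameters tune the sensitivity of the rule induction,
    balancing error tolerance against oversensitivity to chance variation."""
    fa = fragment_frequencies(actives)
    fi = fragment_frequencies(inactives)
    eps = 1e-9  # avoids division by zero when a fragment is absent from inactives
    return [f for f, freq in fa.items()
            if freq >= min_active_fraction
            and freq / (fi.get(f, 0) + eps) >= min_ratio]

print(candidate_toxicophores(actives, inactives))
```

With this data, 'a' occurs in every active compound and no inactive one, so it is the only fragment flagged; lowering min_active_fraction would admit fragments such as 'b' and 'c' that occur by chance in both sets.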
5.3.3 Probabilities

Suppose that out of 100 compounds containing a given substructural feature, 20 are active in a given toxicological test. It is tempting to estimate the likelihood of toxicity of a novel compound containing the same feature to be 20 in 100 (i.e. as having a probability of 0.2). A similar argument could be applied to a second feature, and a third, and so on. Suppose that the probability for the second feature was 0.6. Arithmetic would conclude that the probability of activity for a compound containing both features would be 0.68 (0.2 + 0.6 - 0.2 x 0.6). Note that the probability relates to the likelihood of activity, and not to its likely consequences. If, for example, the prediction were for carcinogenicity, it would suggest that the substance had a 0.68 probability of being a carcinogen, not that a person exposed to it had a 0.68 probability of developing cancer. Note also that it is not a prediction about the potency of the substance.

The apparent logic of this method of analysis and its ability to give numerical guidance are attractive and it is used in some systems. However, in the view of the author, structure-activity correlation based on classical probabilistic mathematics is of doubtful scientific validity. The mathematics are rigorous for predictions about chance events, but the links between chemical structure and biological activity are not matters of chance. There is no a priori reason to expect the probability of activity arising from the presence of a single feature to be predictable from the frequency of activity of compounds containing it, or to expect the rules for combining probabilities to hold.

5.3.4 Knowledge bases

Not all systems for predicting toxicological hazard depend on automatic rule induction. An alternative is for human experts to encode the knowledge that they already have.
Quite a lot is known about important toxicophores, and an experienced toxicologist with an encyclopaedic memory might be expected to recognize any one of them in the structure of a novel molecule. In reality, of course, few people know about, and always remember, everything. In addition, the presence of a toxicophore in a given structure is not always immediately obvious to the human eye, whereas computers are good at finding them. The purpose of a knowledge-based system is to provide a useful source of reference for the human expert, or other sufficiently informed user. A program, often called the inference engine, uses information stored in a separate knowledge base to reason about problems. The knowledge is normally recorded in a specially-designed language suited to its scientific purpose. For example, systems handling knowledge about relationships
between chemical structures and biological activity have languages that include ways of representing chemical structures and substructures. An advantage of a knowledge-based system is that its contents are written, and can be reviewed, by human experts. Two benefits arise from this: users have the reassurance that the advice the computer gives is based on the best judgements of experts in the field rather than on an automated computer analysis of data; and the basis of the expert judgements can be explained to the user in terms that allow him or her to make a personal evaluation, whereas it may be difficult to take a view on the outcome of an automated analysis. The corresponding disadvantage is that a knowledge base is, by definition, limited to the current scope of human knowledge. That does not mean that a knowledge-based system cannot create new ideas based on existing knowledge - only that it cannot create new knowledge. Knowledge-based systems are not restricted to predictions based directly on correlation of chemical features with biological activity. If a human expert can see a connection, then it can be written into a knowledge base. For example, a group of substances might be safe in themselves, but an expert might realize that a particular method of manufacture could lead to contamination with trace amounts of dioxins. A rule could be written to that effect, so that a user asking about potential hazards associated with a new substance of that kind would be warned to check for the presence of traces of dioxins. It was thought, early in their development, that building and maintaining adequate knowledge bases would be too big a job for them to be commercially useful. In practice, this has turned out not to be a serious problem.
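A minimal sketch of the idea of expert-written rules matched by an inference engine is shown below. The rule contents, the fact keys and the function name are all invented for illustration; real knowledge bases use specially-designed languages with full substructure matching, not flat key-value facts.

```python
# A toy knowledge base: rules written by a human expert, matched against
# facts about a query substance. All rule content here is hypothetical,
# loosely echoing the dioxin-contamination example in the text.
RULES = [
    {"if": {"class": "chlorophenoxy derivative",
            "process": "high-temperature alkaline synthesis"},
     "then": "Check for trace contamination with dioxins."},
    {"if": {"contains": "thiophosphate group"},
     "then": "Consider metabolic activation to the oxygen analogue."},
]

def advise(facts):
    """Return the advice from every rule whose conditions all hold."""
    return [rule["then"] for rule in RULES
            if all(facts.get(k) == v for k, v in rule["if"].items())]

query = {"class": "chlorophenoxy derivative",
         "process": "high-temperature alkaline synthesis"}
print(advise(query))  # the dioxin warning fires
```

Because the rules are explicit, an expert can review each one and the system can show the user exactly why a warning was raised, which is the advantage over automated analyses discussed above.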
5.4 Representation of chemical structural information

Expert systems, like human experts, need suitable descriptions of the structures of molecules with which to reason. Molecular modelling provides three-dimensional information about chemical structures, although there are difficulties, of which the most challenging arises from the flexibility of molecules. How can you compare the shapes of perhaps several hundred molecules to see what they have in common, when each one might adopt any of tens of thousands of shapes? For a more complete discussion of these points, see Chapter 7. Alternatively, the standard chemical structural diagram used by chemists provides topological information. In mathematical terms, a chemical structural diagram is a graph. If a diagram shows a nitrogen atom and a carbon atom with a pair of closely spaced straight lines between them, it does not mean, as a geographical map might, that in the real molecule
those atoms are to be found at places computable from the x and y coordinates used in the picture. The essential information conveyed by the diagram to a chemist is that there is a nitrogen atom connected through a double bond to a carbon atom, which makes it possible to reason about the chemical behaviour of the substance that is represented, whereas someone seeing the diagram simply as a two-dimensional picture could learn how to redraw it, but gain little other information from it. Much research on the computer prediction of biological activity grew out of work on chemical structure database management systems which interpret structural diagrams as a chemist would. Typically, they store the topological information derived from diagrams in connection tables listing atom types, bond types, and connectivities. A simple example of a connection table for acetamide (Figure 5.6) is given in Table 5.1. There are many other ways of tabulating the same information. Chemical diagrams contain some non-topological information which is also stored by most systems; for example, dashed and wedged bond symbols carry chemically important information about spatial relationships between parts of a molecule. Some systems store two-dimensional coordinates so that the original diagram can be re-created exactly as it was drawn, and three-dimensional coordinates for cases where they are known, as well as the connection tables. The information contained in a connection table is often all that is needed to reason about laboratory reactions and non-specific biological interactions. Although it may not be immediately apparent, it can also be used for reasoning about lock-and-key interactions, because topological information about molecules implies distance ranges. Bond lengths and
Figure 5.6 The structure of acetamide with atoms and bonds numbered to correspond to the connection table (Table 5.1).

Table 5.1 An example of a simple connection table for acetamide

Atom number   Atom type   Connected to:   By bond numbers:
1             C           2               1
2             C           1, 3, 4         1, 2, 3
3             O           2               2
4             N           2               3

Bond number   Bond type
1             single
2             double
3             single
non-torsional angles vary little, and predictably. So, for example, to state that two atoms are connected via three single bonds through intermediate tetrahedral carbon atoms is equivalent to saying that they are separated approximately by between 2.6 and 3.8 Å through space.

Truly three-dimensional approaches have to take account of molecular flexibility. Usually, analyses are based on distance ranges, and rules are applied to restrict the number of potential conformations of each molecule. Favoured torsional angles can be predicted. For example, the rules would argue against torsional angles corresponding to the so-called eclipsed arrangement of the atoms, because the atoms would be close enough to begin to repel each other. So, for example, a molecule might contain one pair of atoms separated by between 2.6 and 3.8 Å, another, in a rather flexible part, by between 3.5 and 10.0 Å, and so on. Thus modelling based on topological information is quite close to three-dimensional modelling. The actual distance ranges differ between the two approaches, but they are conceptually similar. In practice, given the current state of knowledge, the same uncertainties about conformation arise when working with three-dimensional information as with topological information and there may be little to choose between them.
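A connection table such as Table 5.1 maps naturally onto simple data structures. The sketch below is one possible in-memory representation (not that of any particular system), with hydrogens implicit as in the text.

```python
# Table 5.1 as a simple data structure: acetamide, hydrogens implicit.
atoms = {1: 'C', 2: 'C', 3: 'O', 4: 'N'}
bonds = {1: (1, 2, 'single'),   # bond number: (atom, atom, bond type)
         2: (2, 3, 'double'),
         3: (2, 4, 'single')}

def neighbours(atom):
    """Atoms connected to the given atom, with the connecting bond type."""
    out = []
    for a, b, order in bonds.values():
        if atom == a:
            out.append((b, order))
        elif atom == b:
            out.append((a, order))
    return out

# The carbonyl carbon (atom 2) is bonded to the methyl carbon, the oxygen
# and the nitrogen.
print(neighbours(2))  # [(1, 'single'), (3, 'double'), (4, 'single')]
```

From a structure like this, everything discussed in the following sections - implicit hydrogen counts, augmented atoms, sequences and atom pairs - can be derived algorithmically.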
5.5 Structural descriptors used in expert systems

To be able to do any kind of statistical analysis, or to describe a connection between structure and activity in a knowledge base or database, it is necessary to have ways of recognizing and identifying substructural features. Chemists make use of the concept of functional groups - substructural units whose chemical behaviour can be predicted, no matter in what structure they are found. Thus when a chemist sees the fragment C-C(=O)-O-C in a formula he or she recognizes it to be an ester group and can make predictions about its reactivity. But do functional groups provide a suitable basis for understanding biological activity? Do mechanisms in biological chemistry parallel laboratory chemistry, and how relevant are functional groups to the ways in which enzymes influence the behaviour of molecules (and vice versa) without reacting chemically with them? The concept of the functional group is rather imprecise. There is a standard set of functional groups that every chemist knows about, but any new arrangement of a group of elements within a molecule might have its own characteristic behaviour. Such new arrangements are recognized as unique by chemists and treated like functional groups for the purposes of mechanistic reasoning, but they are rarely called functional groups and given formal names.
This ad hoc way of naming functional groups does not lend itself to computerization. Some toxicology systems do use large predefined sets of functional-group-like fragments, but four alternative kinds of fragments have been described which can be generated algorithmically from connection tables: augmented atoms, atom and bond sequences, ring descriptors, and atom pairs. They are unbiased in the sense that the algorithms make no prior assumptions about implications for reactivity (or other properties), and they have been found useful for similarity searching in databases of chemical structures (Downs, 1995). However, some kinds of fragments may model the processes of toxicological activity better than others. Database systems use codes for predefined fragment types as search keys for chemical substructure searching. The codes for the fragments in each structure in a database are used as keys to locate structures which are sufficiently like a user's query to need to be compared in detail with it. The keys are designed for best performance in reducing sets of structures for detailed matching to the smallest size in the shortest time with realistic use of computer storage and memory space. They are not optimized for studies relating to biological activity, but they are easily available from database systems. Database search keys can be used directly for structure-activity studies, but only in favourable cases. Toxicological expert systems that depend on predefined fragments normally use different definitions for them from those used for database search keys. For computational purposes, it is usual to generate integer identifiers for fragments, rather than textual names. Some workers refer to the set of integers as 'descriptors', while others refer to the fragments themselves as 'descriptors'. It is usually clear from the context which meaning is intended. In this chapter, 'descriptor' will be used in the latter sense.
Whichever usage you prefer, notice that a descriptor is not necessarily the identifier for, or the exact description of, a single fragment - identifiers for small groups of related fragments may also be included, for reasons that will be discussed later (chemists similarly use collective names, such as 'halide' for the set of bromide, chloride, fluoride and iodide). Before further discussion, a technique for reducing computational memory and processing usage needs to be mentioned. Most organic compounds contain large numbers of hydrogen atoms. With certain exceptions, which can be ignored for the moment, it is trivially easy to work out how many hydrogen atoms are attached to every other atom in a molecule if you only know what those other atoms are and how they are interconnected. Chemists do not draw most hydrogen atoms in structural diagrams, and computer systems usually do not include them in connection tables and other files. That does not mean that hydrogen atoms are ignored - their presence is implied by the properties of the atoms to which they are attached.
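The implicit-hydrogen calculation hinted at above can be made concrete: for a neutral molecule the number of hydrogens on an atom is its standard valence minus the orders of its explicit bonds. The sketch below ignores the exceptions (charged atoms, radicals, unusual valence states) that real systems must handle.

```python
# Implicit hydrogen counts for a neutral organic molecule: standard valence
# minus the sum of explicit bond orders. Charges, radicals and unusual
# valences are deliberately ignored in this sketch.
VALENCE = {'C': 4, 'N': 3, 'O': 2}
BOND_ORDER = {'single': 1, 'double': 2, 'triple': 3}

def implicit_hydrogens(atom_type, bond_types):
    return VALENCE[atom_type] - sum(BOND_ORDER[b] for b in bond_types)

# Acetamide (Table 5.1): CH3-C(=O)-NH2
print(implicit_hydrogens('C', ['single']))                      # 3 (methyl C)
print(implicit_hydrogens('C', ['single', 'double', 'single']))  # 0 (carbonyl C)
print(implicit_hydrogens('O', ['double']))                      # 0
print(implicit_hydrogens('N', ['single']))                      # 2 (amide N)
```

This is why connection tables can safely omit hydrogens: the counts are fully recoverable from the heavy atoms and their bonds.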
5.5.1 Augmented atoms

Perhaps the simplest possible set of descriptors for a chemical structure would comprise the atoms and bonds it contained. Thus (Figure 5.7), leaving out hydrogen atoms and the bonds to them, the atom descriptors for ethanol would be C and O, and there would be one bond descriptor, 'single'. Those for dimethylether would be the same. The atom descriptors for acetamide would be C, N and O and the bond descriptors would be 'single' and 'double'. These descriptors would be of little use, because they carry no information about the shape or topology of the molecules, and they do not even distinguish between ethanol and dimethylether. Being able to distinguish between instances of the same atom type in different environments would be an improvement, and this is what augmented atoms are for. Ethanol contains carbon atoms in two different environments. One of them is connected to three hydrogen atoms and another carbon atom, while the second is connected to two hydrogen atoms, another carbon atom, and an oxygen atom. They might be called atom types C1 and C2. There is just one oxygen, connected to one hydrogen atom and one carbon atom, which could be called O1. Dimethylether contains two carbon atoms of a new type, C3, connected to three hydrogen atoms and an oxygen atom, and it contains an oxygen atom connected to two carbon atoms, O2. Acetamide contains a carbon atom of type C1. In addition, it contains a carbon atom of a new type, C4, connected through single bonds to another carbon atom and a nitrogen atom, and through a double bond to an oxygen atom. It contains an oxygen atom of a new type, O3, having only one neighbour, a carbon atom, connected through a double bond, and it contains a nitrogen atom, N1. Descriptors of this kind carry information about an atom plus information about its immediate neighbours, and they are termed augmented atoms.
While the descriptor 'C' merely conveys the information that a molecule contains a carbon atom, the descriptor 'C1' conveys the information that a molecule contains a carbon atom connected to one other carbon atom and three hydrogen atoms. Using augmented atoms makes it possible to distinguish between ethanol (C1, C2, O1) and dimethylether (C3, O2), as well as distinguishing them from acetamide (C1, C4, O3, N1).
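First-level augmented atoms can be generated mechanically from a connection table: label each heavy atom with its element plus the sorted list of (neighbour element, bond type) pairs. The representation below is one possible encoding, not that of any particular system; '-' stands for a single bond.

```python
# First-level augmented atoms: each heavy atom labelled by its element plus
# the sorted multiset of (neighbour element, bond type) pairs.
def augmented_atoms(atoms, bonds):
    env = {i: [] for i in atoms}
    for a, b, order in bonds:
        env[a].append((atoms[b], order))
        env[b].append((atoms[a], order))
    return sorted((atoms[i],) + tuple(sorted(neigh))
                  for i, neigh in env.items())

# Hydrogens are implicit, as in the text; '-' denotes a single bond.
ethanol       = ({1: 'C', 2: 'C', 3: 'O'}, [(1, 2, '-'), (2, 3, '-')])
dimethylether = ({1: 'C', 2: 'O', 3: 'C'}, [(1, 2, '-'), (2, 3, '-')])

# Bare atom lists (C, O for both) cannot tell these isomers apart, but
# their augmented-atom sets differ.
print(augmented_atoms(*ethanol) == augmented_atoms(*dimethylether))  # False
```

Extending env to next-door-but-one neighbours would give second-level (beta shell) augmented atoms in the same way.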
Figure 5.7 The structures of ethanol, dimethylether and acetamide.
The process that has just been described was based on looking at each atom and its immediate neighbours (sometimes referred to as the alpha shell, or first-level shell). It can be extended to take into account the next-door-but-one neighbours (the beta shell, or second-level shell), or as many levels beyond that as the size of a molecule allows. Different computer systems use augmented atoms extending to different levels, and use a variety of ways of representing them, and they may or may not take bond types into account, but the principles remain the same.

5.5.2 Atom and bond sequences
Generating atom and/or bond sequences is a completely different way of fragmenting molecules. The complete set of atom sequences is a list of all linear chains of atoms (of all lengths) that can be found in a molecule. So, for ethanol (ignoring hydrogen) they are C.C, C.O and C.C.O; for dimethylether they are C.O and C.O.C (O.C being topologically the same as C.O); for acetamide they are C.C, C.O, C.N, C.C.O, C.C.N and O.C.N. There is only one bond sequence for ethanol, -.-, and for dimethylether it is the same. The bond sequences for acetamide are -.- and =.-. In some systems, the two types of sequence are combined. So, for example, the atom-and-bond sequences (or 'chains') for acetamide are C-C, C=O, C-N, C-C-O, C-C-N and O=C-N.

5.5.3 Ring descriptors
Rings of atoms are very common in organic molecules. They can be classified into many different kinds because, although the most common are five- and six-membered, they contain a variety of atom and bond types, and rings may be bridged, or two or more may be fused together. The presence of rings in structures has a big effect on the chemical and physical properties of substances, and algorithms for detecting rings in structures from their connection tables are used in almost all systems that handle chemical structures. They are included in the sets of descriptors used by some expert systems for predicting biological activity.

5.5.4 Atom pairs

The concept of lock-and-key interactions with biological molecules (see section 5.2) implies that, for activity, molecules must contain binding centres at suitable distances from one another, but that how they are interconnected is unimportant. One way of trying to model this more closely is to use atom pairs. At its simplest, the topological descriptor for an atom pair defines two atom types and the distance between them expressed as a number of bonds (or number of intervening atoms, which implies the
Table 5.2 Different types of descriptors for ethanol, dimethylether and acetamide

Descriptor type                  Ethanol               Dimethylether      Acetamide
Atoms                            C, O                  C, O               C, N, O
Augmented atoms (first level)    C1, C2, O1            C3, O2             C1, C4, O3, N1
Atom sequences                   C.C, C.O, C.C.O       C.O, C.O.C         C.C, C.O, C.N, C.C.O, C.C.N, O.C.N
Bond sequences                   -.-                   -.-                -.-, =.-
Atom-and-bond sequences          C-C, C-O, C-C-O       C-O, C-O-C         C-C, C=O, C-N, C-C-O, C-C-N, O=C-N
Atom pairs                       C.1.C, C.1.O, C.2.O   C.1.O, C.2.C       C.1.C, C.1.O, C.1.N, C.2.O, C.2.N, N.2.O
bond count). In Table 5.2, distances are represented as bond counts (e.g. C.2.O means 'a carbon atom connected to an oxygen atom via two bonds'). In practice, systems based on the concept of atom pairs may actually use augmented atoms of some kind, so that they are able to identify pairs such as 'a nitrogen atom carrying two hydrogen atoms connected through four bonds to an oxygen atom double-bonded to a carbon atom', rather than merely 'a nitrogen atom connected through four bonds to an oxygen atom'.

5.5.5 Three-dimensional descriptors

The simplest three-dimensional descriptors are pairs of atoms with the distances between them expressed as actual through-space distances. The total of all such pairs can be extremely large in practice because of the need to allow for conformational flexibility, even if rules are used to limit the possibilities to the most likely ones. Lock-and-key interactions depend on the complementarity of binding centres in active molecules and binding sites in proteins, so, instead of using atom pairs, it is possible to base reasoning on binding centres. There are fewer types of binding centre than of elements, and fewer centres than atoms in a molecule. It has been suggested that for many analyses it is sufficient to consider five kinds of binding centre (Martin et al., 1988; Davies and Briant, 1995) - hydrogen bond donors, hydrogen bond acceptors, aromatic π-systems, charged centres and lipophilic regions. Many atoms in a given molecule may collectively constitute one such centre, and some have no relevance at all, being, in effect, spacers between the centres. Using molecular modelling methods to locate binding centres and the distances between them, binding-centre pairs can be generated for expert systems. Numbers can still be large, but computation is possible in favourable cases.
Binding effectively to a biological site normally depends on the presence of three or four binding centres, not just two - i.e. on triangles or tetrahedra of centres, rather than pairs of them. Given all the binding-centre pairs in a molecule, or set of molecules, it is possible to compute all the possible triangles and/or tetrahedra that might be constructed from them, and to regard them as potential toxicophores. For a real set of molecules, the number of such toxicophores is less than the number of centre pairs - three atom pairs make only one triangle, for example, and the variations in distance ranges for different molecules in an active set may rule out many triangles and tetrahedra on the basis of geometrical rules. These potential toxicophores can be used as descriptors for computer analyses.
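The enumeration of candidate triangles from binding-centre pairs can be sketched as follows. The centre names and distance ranges are hypothetical placeholders for values that would come from molecular modelling; the geometrical-consistency checks mentioned above are omitted.

```python
from itertools import combinations

# Hypothetical binding centres with through-space distance ranges (in Å)
# for each pair, as might come from a conformational analysis.
centres = ['donor', 'acceptor', 'aromatic', 'lipophilic']
ranges = {frozenset(p): (2.6, 3.8) for p in combinations(centres, 2)}
ranges[frozenset({'donor', 'lipophilic'})] = (3.5, 10.0)  # a flexible region

def triangles():
    """Every three-centre combination, carrying its three distance ranges -
    candidate toxicophores for subsequent analysis."""
    out = []
    for tri in combinations(centres, 3):
        sides = [ranges[frozenset(p)] for p in combinations(tri, 2)]
        out.append((tri, sides))
    return out

# Four centres give six pairs but only four triangles, illustrating why the
# number of candidate toxicophores is smaller than the number of pairs.
print(len(ranges), len(triangles()))  # 6 4
```

A real system would then discard triangles whose three ranges cannot be satisfied simultaneously (the triangle inequality applied to the range limits), shrinking the candidate set further.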
5.6 The effects of choosing different types of descriptors

Database search keys can be used as descriptors for systems to predict biological activity, but the keys are designed for a quite different purpose and do not perform well. Most systems for predicting toxicity use their own descriptors, but there are differing views on the best types to choose. Compromises have to be made between notions of the ideal models of toxicological behaviour and what is practical, given the limitations of computer power and our current state of knowledge. Some systems use large predefined sets of descriptors for substructural fragments (such as functional groups), ring systems, etc. Augmented atoms may also be used. Because of the way in which such fragments are defined, no information, or very little, is available about their relative positions in molecules. There is unlikely to be a single descriptor that happens to relate to a complete toxicophore. Typically, a computer analysis will find several descriptors, each defining subsections of the toxicophore. The resultant set of descriptors can be used to recognize active molecules (if they contain the toxicophore, they must contain the set of fragments). Inactive compounds containing the fragments not joined together in the right way to make the toxicophore will be mistakenly flagged as toxic as well, but it can be hoped that they will not be too common. Attempts have been made to use quantitative techniques which assume that the contributions made to activity by different fragments are independent both of each other and of the relative positions of the fragments in molecules. Even in the case of non-specific toxicity these assumptions will rarely be valid. For example, acetic acid is corrosive, chloroacetic acid more so. In a simplistic system assuming purely additive effects, the conclusion might be reached that the chloro group itself contributes the increased corrosive effect, and thus that all chlorine-containing compounds
must be corrosive. A more subtle error is that even the conclusion that all chlorinated carboxylic acids will be more corrosive than their unchlorinated analogues is incorrect. It is not true because the location of the chloro group alpha to the carboxyl group is crucial. The method of analysis cannot take that into account. Of course, in real systems a pair of groups that were so common, so close together, would constitute a fragment with its own descriptor, but this simplified example illustrates the inherent problems.

Using atom and bond sequence descriptors avoids some of the problems associated with independent fragments. If all the sequence descriptors for a molecule are generated automatically, then many will overlap with each other, and the set of descriptors associated with a particular toxicophore will include those that link together all its extremities. The descriptor set for chloroacetic acid would include Cl-C-C(=O) and Cl-C-C-O, for example, which would prevent subsequent matching to molecules only containing a chlorine atom more widely separated from the carboxyl group. Systems based on the use of sequence descriptors do not actually use all possible sequences because the number of them would be impractically large. Chain lengths are limited to a pre-determined maximum number of bonds. As a result, cases would be expected in which the systems failed to discriminate between different widely-separated arrangements of the same fragments. Failures are rare in practice as long as the restriction on length is not set too low. A disadvantage with atom and bond sequences is that each one defines exactly all the atoms (and/or bonds) in a chain. For lock-and-key activity, the biological requirement is normally, more simply, for a pair of features at a certain approximate distance apart - what is contained in the intervening space is of secondary, or no, importance.
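A minimal sketch of sequence-descriptor generation, assuming a hand-encoded graph for chloroacetic acid (hydrogens omitted). The encoding and function name are invented for illustration; real systems use richer atom typing, but the bounded walk over the molecular graph is the essential idea.

```python
def sequences(atoms, bonds, max_bonds=3):
    """Generate linear atom-and-bond sequence descriptors up to a
    pre-determined maximum number of bonds. `atoms` maps atom index
    -> element symbol; `bonds` maps (i, j) -> bond symbol
    ('-' single, '=' double)."""
    adj = {i: [] for i in atoms}
    for (i, j), sym in bonds.items():
        adj[i].append((j, sym))
        adj[j].append((i, sym))

    found = set()

    def walk(path, text):
        if len(path) > 1:
            found.add(text)
        if len(path) - 1 == max_bonds:   # chain length limit reached
            return
        for nxt, sym in adj[path[-1]]:
            if nxt not in path:          # no revisiting atoms
                walk(path + [nxt], text + sym + atoms[nxt])

    for start in atoms:
        walk([start], atoms[start])
    return found

# Chloroacetic acid: Cl-CH2-C(=O)-OH (hydrogens omitted)
atoms = {0: 'Cl', 1: 'C', 2: 'C', 3: 'O', 4: 'O'}
bonds = {(0, 1): '-', (1, 2): '-', (2, 3): '=', (2, 4): '-'}
descs = sequences(atoms, bonds)
print('Cl-C-C=O' in descs)  # True: the sequence links the chlorine to the carbonyl
```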
As a result, when diverse sets of structures are analysed, the use of sequences may fail to find the features responsible for activity because intervening atoms differ from molecule to molecule. Using atom pair descriptors avoids this limitation. Predictions based solely on the presence of sets of descriptors without interrelating them to define complete toxicophores lead to false warnings for inactive compounds that contain the features associated with activity, but not arranged in the right way to make a toxicophore. This does not mean that the predictions are not useful - it depends upon the degree of error that is acceptable. Some topologically-based systems and most molecular modelling methods provide ways of building complete toxicophores which are more accurate models (some researchers have used 'toxicophore' simply to mean the sets of independent fragments responsible for activity but this usage is potentially misleading - here, as elsewhere in this chapter, it means the complete description of a single, complex fragment).
5.7 Assessment of hazard and risk

The systems discussed in this chapter assess toxicological hazard. To assess the risks associated with predicted hazards, even qualitatively, many other factors must be taken into account. Some expert systems provide help with some of the additional factors, but none provides full support for risk assessment. Substances which are inherently toxic will not be harmful if they do not reach their site of action at high enough concentration. In practice, for food chemicals this means being eaten, surviving digestive processes, being absorbed, passing through the liver unmodified (the liver also creates toxins from food chemicals), being transported through aqueous and fatty media to the target organ, and, typically, passing through membranes to reach the protein to which they bind. Expert systems can take some account of digestive and liver metabolic processes. For example, a knowledge-based system can warn of the potential carcinogenicity of polyaromatic hydrocarbons without needing to 'know' that the active carcinogens are really epoxides created from them in the liver. There are knowledge-based systems for the prediction of metabolism. However, limitations on current human knowledge make it very hard to decide when a given metabolic process will not take place, and there are no systems that provide sufficiently reliable or comprehensive information for automatic risk assessment. Transport through the body to the site of action depends mainly on physicochemical properties of a substance, such as fat solubility, water solubility, and its partition ratio between fat and water. In practice, it is generally possible to make good predictions on the basis of the fat/water partition ratio, and a laboratory method which measures the partition ratio between n-octanol and water is regarded as a good model for behaviour in vivo.
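The fragment-additive approach used by log P calculators can be caricatured in a few lines. The fragment values below are invented for illustration only; real programs such as ClogP use large calibrated tables plus correction factors for interacting fragments.

```python
# Toy fragment contributions to log P (values invented for illustration)
FRAGMENT_LOGP = {
    'CH3': 0.5,    # lipophilic methyl
    'CH2': 0.5,
    'OH': -1.1,    # hydrophilic hydroxyl
    'C6H5': 1.9,   # phenyl ring
}

def estimate_logp(fragments):
    """Sum fragment contributions to estimate the octanol/water
    partition coefficient (log P)."""
    return sum(FRAGMENT_LOGP[f] for f in fragments)

# Toluene-like vs benzyl-alcohol-like fragment lists
print(round(estimate_logp(['C6H5', 'CH3']), 2))        # 2.4: lipophilic
print(round(estimate_logp(['C6H5', 'CH2', 'OH']), 2))  # 1.3: the OH lowers log P
```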
The octanol/water partition coefficient is usually referred to as log P, and computer methods are available for calculating it, e.g. ClogP (Leo, 1993). The calculations are reasonably reliable and the circumstances under which they fail are well understood. Almost all expert systems for assessing toxicological hazard are able to take log P into account to provide guidance on the likelihood that potential toxicity will be expressed. Other aspects of risk assessment are not covered by the toxicological expert systems discussed in this chapter, although support of risk assessment, as distinct from hazard assessment, is the subject of some research (e.g. Krause et al., 1993/4).

5.8 Some examples of expert systems

Only a few expert systems relating to toxicological hazard are listed here, chosen because they illustrate different approaches discussed in this
chapter. Inclusion or omission of a system is not intended to imply anything about its virtues or failings. All the systems are described elsewhere and most are commercially available, and so only an outline is given in each case. Many papers have been published about some of the systems and only selected ones are included in the references, through which others may be traced. Several systems use methods based on the work of Corwin Hansch and co-workers (Hansch, 1969) for modelling quantitative structure-activity relationships (QSAR). QSAR is a major field in itself, covered briefly in Chapter 7. TopKat (Gombar et al., 1991) was one of the earliest systems to become available. It uses ideas from the Hansch approach to modelling quantitative structure-activity relationships and has a library of several thousand predefined fragment descriptors. The program suppliers build and distribute models for various substance types and endpoints, and the system is not normally used by customers to search for new rules or models. CASETOX was the first system based on atom and bond sequence descriptors, and MULTI-CASE is a more advanced version of the program (Klopman and Rosenkranz, 1994). Among other things, in addition to the standard, linear fragments, MULTI-CASE is able to make use of information about branching. Analysis is statistical, and Hansch methods provide quantitative predictions. HazardExpert (Smithing and Darvas, 1992) uses classical methods to derive the probability of activity of a query substance from probabilities of activity associated with the features that it contains. Physicochemical properties are also taken into account. REX (Judson, 1994) uses atom pairs, and includes ways of linking together the members of sets associated with activity to build full toxicophores suitable for entering into knowledge-based systems.
Chem-X (Chemical Design Ltd, Roundway House, Cromwell Park, Chipping Norton, Oxfordshire OX7 5SR, UK), a molecular modelling system, includes modules for generating three-dimensional toxicophores containing three or four protein-binding centres, and for mapping them onto novel structures. Customers run their own analyses to discover toxicophores. DEREK (Ridings et al., 1996) is a knowledge-based system with a knowledge base developed and maintained by a consortium of chemical companies, universities and government bodies. It bases predictions on descriptions of full toxicophores, and provides references to published information about precedents. Tools for adding to the knowledge base are supplied to customers. META (Klopman et al., 1994; Talafous et al., 1994) and MetabolExpert (Darvas, 1987) are knowledge-based systems for predicting metabolism which may be used in conjunction with toxicity prediction systems.
Clementine (Integral Solutions Ltd, Berk House, Basing View, Basingstoke, Hampshire RG21 4RG, UK) is a tool for data mining, which can be used for analyses based on combinations of methods, including decision trees and neural networks. Research in the StAR project (Krause et al., 1993/4) used the logic of argumentation (Krause et al., 1995) for a knowledge-based system to support assessment of some aspects of toxicological risk, as well as hazard.
5.9 The implications of choosing different types of system

The types of computer system discussed in this chapter fall broadly into two categories - those with automated methods for extracting and generalizing information from sets of data ('rule induction'), and knowledge-based systems which require human input of the generalized information that they use. Summarizing some of the points made in section 5.3, there are a few key advantages and disadvantages of these alternatives. Knowledge-based systems, being driven by information that is compiled by human experts making their best interpretation of what is currently known, may be perceived as more trustworthy than systems based on automatic analysis of data. In addition, they can be expected to provide end-users with better explanations of their reasoning, helping users to make judgements about its quality and validity. You can argue with a person, or a machine (in the sense of holding reasoned dialogue, rather than contention), if they say something like:

I think this will be toxic because it contains this and that structural feature, which Smith and Jones reported to be associated with hepatotoxicity. So did Brown. In addition, my experience is that most compounds with the physical properties that this compound has do reach the liver and tend to accumulate there.
You can take a view both on whether you think the structural feature in the substance under consideration really is like the one the researchers described, and on whether you think they were right to conclude that it was responsible for the activity. You may even have views on their experimental competence to have determined that the substances themselves were hepatotoxic. It is harder to argue about, and take a view on, advice of the form: I think this will be toxic because it scored 83 out of 100 in a similarity test comparing it with a model derived from 417 hepatotoxins.
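A toy sketch of how the first, reasoned style of advice can be produced: a rule ties a structural feature to an endpoint, supporting references and a log P threshold, and the output carries its own justification. The rule content, feature names, references and threshold here are all invented.

```python
# Illustrative knowledge base: one rule (all content invented)
RULES = [
    {'toxicophore': {'aromatic amine'},
     'endpoint': 'hepatotoxicity',
     'references': ['Smith and Jones (illustrative)', 'Brown (illustrative)'],
     'min_logp': 1.0},   # lipophilic enough to reach and accumulate in the liver
]

def predict(features, logp):
    """Return human-readable warnings, each with its justification."""
    reports = []
    for rule in RULES:
        if rule['toxicophore'] <= features and logp >= rule['min_logp']:
            reports.append(
                f"Possible {rule['endpoint']}: contains "
                f"{', '.join(sorted(rule['toxicophore']))} "
                f"(see {'; '.join(rule['references'])}); "
                f"log P {logp} suggests the liver is reached.")
    return reports

for line in predict({'aromatic amine', 'ester'}, 2.3):
    print(line)
```

The output can be argued with in exactly the way described above: each clause of the warning points at a checkable claim.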
This does not mean, of course, that knowledge-based prediction is necessarily more likely to be correct. The point is only that advice in this reasoned form may be more useful in practice. It is certainly likely to be more appealing
to someone who is skilled in making judgements about toxicity, and looking to the computer for useful support, rather than pronouncements. In different circumstances, it might be an advantage to have a system giving information that is not open to subjective interpretation and debate. But for human decision-making such as risk assessment, in which reasoning plays a central role, 'black box' information, however reliable and objective, may not be very helpful. The problem with knowledge-based systems is that someone has to provide the generalized knowledge in the first place. Human beings find it difficult to interpret and to generalize from masses of data, especially if there is a high incidence of misleading or erroneous data. This is precisely what systems with rule induction can be good at doing. In the author's view, the two kinds of systems will be used increasingly to do the jobs for which they are best suited - human experts will use rule induction systems to discover and highlight possible toxicophores, which they will explore and rationalize in order to expand knowledge; the knowledge will be documented and made searchable by being entered into knowledge-based systems.
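The division of labour suggested above can be sketched: a naive rule-induction step proposes descriptors that are over-represented among active molecules, for a human expert to rationalize before entry into a knowledge base. The descriptor sets, smoothing and threshold are invented for illustration; real induction systems are far more sophisticated.

```python
from collections import Counter

def induce_candidate_toxicophores(actives, inactives, min_ratio=3.0):
    """Naive rule induction: flag descriptors that occur proportionally
    far more often in active molecules than in inactive ones, as
    candidates for a human expert to examine and document."""
    act = Counter(d for mol in actives for d in set(mol))
    inact = Counter(d for mol in inactives for d in set(mol))
    candidates = []
    for desc, n in act.items():
        act_rate = n / len(actives)
        # Add-one smoothing so unseen descriptors do not divide by zero
        inact_rate = (inact.get(desc, 0) + 1) / (len(inactives) + 1)
        if act_rate / inact_rate >= min_ratio:
            candidates.append(desc)
    return sorted(candidates)

# Invented descriptor sets for a handful of molecules
actives = [{'NO2', 'ring'}, {'NO2', 'OH'}, {'NO2', 'ring', 'Cl'}]
inactives = [{'ring'}, {'OH', 'Cl'}, {'ring', 'OH'}]
print(induce_candidate_toxicophores(actives, inactives))  # ['NO2']
```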
5.10 Applicability of expert systems to food chemical hazard evaluation

The methods described in this chapter, and the computer systems that use them, have largely been developed by scientists involved in drug and agrochemical research. Drugs comprise a special set of chemicals in at least two respects - by definition, they are biologically active in humans, and they are administered in very specific, controlled doses to selected individuals, often by injection. Studies relating to agrochemicals may be less narrowly confined than those relating to drugs, but they are also specialized. Has research been biased in ways that make its results less useful for predictions about food chemicals? Little has been published on this subject. An evaluation of the DEREK system showed a bias in its knowledge-base content, in that certain well-known toxins of limited relevance to the drug industry were not covered in sufficient detail for food chemical hazard assessment (e.g. the polyaromatic hydrocarbon carcinogens), but it was possible to write adequate supplementary rules, suggesting that the basic design of the system had not been limited by its origins. Within the chemical industry there is increasing recognition that expert systems, particularly knowledge-based systems, can be useful for a variety of assessment tasks, and there is no reason to think that this should not apply to food chemical hazard, and ultimately risk, assessment.
References

Darvas, F. (1987) MetabolExpert: an expert system for predicting metabolism of substances. In: Kaiser, K.L.E. (ed.) QSAR in Environmental Toxicology, Proceedings of International Workshop 1986. Reidel, Dordrecht, pp. 71-81.
Davies, K. and Briant, C. (1995) Combinatorial chemistry library design using pharmacophore diversity. Web Journal of Science and Computers, 1(1).
Downs, G.M. and Willett, P. (1995) Similarity searching in databases of chemical structures. Reviews in Computational Chemistry, 7, 1-66.
Gombar, V.K., Enslein, K., Hart, J.B. et al. (1991) Estimation of maximum tolerated dose for long-term bioassays from acute lethal dose and structure by QSAR. Risk Analysis, 11, 509-517.
Hansch, C. (1969) A quantitative approach to biochemical structure-activity relationships. Accounts of Chemical Research, 2, 232-239.
Judson, P.N. (1994) Rule induction for systems predicting biological activity. Journal of Chemical Information and Computer Sciences, 34, 148-153.
Klopman, G. and Rosenkranz, H.S. (1994) Approaches to SAR in carcinogens and mutagens. Prediction of carcinogenicity and mutagenicity using MULTI-CASE. Mutation Research, 305(1), 33-46.
Klopman, G., Dimayuga, M. and Talafous, J. (1994) META. 1. A program for the evaluation of metabolic transformations of chemicals. Journal of Chemical Information and Computer Sciences, 34, 1320-1325.
Krause, P.J., Fox, J. and Judson, P.N. (1993/4) An argumentation-based approach to risk assessment. IMA Journal of Mathematics Applied in Business and Industry, 5, 249-263.
Krause, P.J., Ambler, S.J., Elvang-Gøransson, M. and Fox, J. (1995) A logic of argumentation for uncertain reasoning. Computational Intelligence, 11(1), 113-131.
Leo, A.J. (1993) Calculating log Poct from structure. Chemical Reviews, 93, 1281-1306.
Martin, Y.C., Danaher, E.B., May, C.S. and Weininger, D. (1988) MENTHOR: a database system for the storage and retrieval of three-dimensional molecular structures and associated data searchable by substructural, biological, physical or geometric properties. Journal of Computer-Aided Molecular Design, 2(1), 15-29.
Ridings, J.E., Barratt, M.D., Cary, R. et al. (1996) Computer prediction of possible toxic action from chemical structure: an update on the DEREK system. Toxicology, 106, 267-279.
Smithing, M.P. and Darvas, F. (1992) HazardExpert: an expert system for predicting chemical toxicity. In: Finley, J.W., Robinson, S.F. and Armstrong, D.J. (eds) Food Safety Assessment, American Chemical Society, Washington, pp. 191-200.
Talafous, J., Sayre, L.M., Mieyal, J.J. and Klopman, G. (1994) META. 2. A dictionary model of mammalian xenobiotic metabolism. Journal of Chemical Information and Computer Sciences, 34, 1326-1333.
6 Risk assessment: alternatives to animal testing

C.L. BROADHEAD, R.D. COMBES and M. BALLS
6.1 Introduction

All new chemicals manufactured specifically for use in food are required to be evaluated for safety. Such chemicals which are intentionally incorporated into foods are additives, and this chapter focuses on their safety assessment. Each proposed food additive is subjected to various in vivo and in vitro assays, which may include tests for genotoxicity, acute and subchronic toxicity, carcinogenicity and teratogenicity (Chapter 2). In the UK, on the basis of the results obtained and the estimated daily intake, the additive is assigned an acceptable daily intake (ADI) value as an indication of the levels which may be consumed without the likelihood of adverse effects. The public is becoming increasingly concerned about the use of large numbers of animals in the safety testing of what are perceived to be 'luxury' products, e.g. cosmetics and toiletries. Intense public pressure has forced cosmetics and toiletries manufacturers to seek alternative methods of testing their ingredients and products. This demand has been recognized by the EU, and the Cosmetics Directive (76/768/EEC) (European Economic Community, 1976) has been amended to include the statement: 'testing on animals of ingredients or combinations of ingredients should be banned as from 1 January 1998; ... that date should be postponed where alternative methods of testing have not been scientifically validated' (European Economic Community, 1993). The numbers of animals used in regulatory food toxicity testing are greater than the numbers used in the testing of cosmetics and toiletries. Therefore, the public may soon refocus its attention on the use of animals in food toxicity testing, and demand that food additives also be tested by alternative methods which can reduce the numbers of animals, refine the experimental procedures used and, in some cases, totally replace the use of animals.
As very few, if any, serious instances of toxicity have arisen due to the consumption of specific food additives, it might be claimed that the current methods for food additive safety assessment are reliable. However, food additives pose a number of special problems for toxicity testing. Foods are complex mixtures of many potentially toxic compounds and the testing of a single additive is unlikely to predict its effects when combined with other food constituents. Humans are subjected to highly variable, repeated
low-dose exposures to food additives over long periods of time. This means that it is difficult to attribute a disease or toxicological effect to any specific food component. In addition, the possible long-term effects of food additives cannot be accurately predicted in 2-year animal bioassays. Finally, many of the animal tests used have not been adequately validated for the specific purpose of food toxicity testing. Therefore, the data they provide may be scientifically inadequate, and the subsequent risk assessments based on such data are therefore of dubious value. Thus, it is not unreasonable to assume that current animal studies for the purpose of food additive safety assessment are far from ideal. The main issue that needs to be addressed is how to develop methods for assessing the effects of long-term, low-dose (chronic) exposure to food substances, by using refined procedures and minimum numbers of animals in experiments which are only carried out when absolutely necessary. The eventual goal should be the development of assays which can completely replace the need for animal studies in food additive safety evaluation.
6.2 The Three Rs concept

In 1959, Russell and Burch pioneered the concept of the Three Rs - reduction, refinement and replacement - in their book, The Principles of Humane Experimental Technique (Russell and Burch, 1959). The UK Animals (Scientific Procedures) Act 1986 contains a commitment to replacement alternatives in the following important clause:

5(5). The Secretary of State shall not grant a project licence unless he is satisfied that the applicant has given adequate consideration to the feasibility of achieving the purpose of the programme to be specified in the licence by means not involving the use of protected animals.
In Europe, Directive 86/609/EEC (European Economic Community, 1986) states that: 7.2. An experiment shall not be performed if another scientifically satisfactory method of obtaining the result sought, not entailing the use of an animal, is reasonably and practicably available.
Thus, the concept of alternatives is promoted via UK and European legislation and the use of animals has fallen significantly since the mid-1970s. However, there is still considerable room for progress and improvement. At a workshop (Balls et al., 1995), the current status of the Three Rs was discussed and recommendations were made which aim to achieve greater acceptance of the concept of humane experimental technique and the more active implementation of alternatives. This chapter aims to highlight the areas within food safety assessment where there is potential, both in
the short term and in the longer term, for the implementation of alternative approaches, with the aim of reducing the numbers of animals and refining the experimental procedures used, and, ultimately, replacing the use of animals with alternative methods.
6.3 Statistics for the use of animals in food safety evaluation
6.3.1 UK
The term 'food additives', as used in the Home Office Statistics of Scientific Procedures on Living Animals (Home Office, 1995) refers to substances deliberately added to food, but the statistics do not include studies on the nutritive value of food, accidental contamination or infection of food, or medicines administered to animals or humans in food. In 1995, the number of procedures for the safety evaluation of food additives (6272 procedures) was 3.2 times higher than the number of procedures conducted for cosmetics toxicity testing (1935 procedures), and 3.6 times higher than for household substances (1738 procedures) (Home Office, 1996; Figure 6.1). Taking into account the pressure from the public to reduce toxicity testing of cosmetics and toiletries in animals, food additive safety evaluation is therefore a potential cause for public concern. The species of animals used for the safety evaluation of food additives between 1990 and 1995 are shown in Table 6.1. It is not possible to determine from the Home Office statistics what proportions of the total procedures were used for each type of toxicity test and which species of animals were used for each test.
Figure 6.1 Numbers of animals used for the safety evaluation of food additives and cosmetics and toiletries in the UK between 1989 and 1995. (Statistics of Scientific Procedures on Living Animals, Great Britain, published annually by HMSO, London.)
Table 6.1 Safety evaluation of food additives in the UK by species of animal (no. of procedures)

Species        1995    1994    1993    1992    1991    1990
Mouse           153     228      64     150     339     386
Rat            5183    7520    7432    4052   10287   10250
Rabbit          200      24      29     144       3     100
Guinea-pig       56     376      60      91      86      18
Dog               7      18      32      12     105      64
Monkey           24      60       -       -      35       4
Bird            649       -       -    1685       -       -
Fish              -       -       -       -       -       -
Total          6272    8226    7617    6134   10855   10822

Data from Statistics of Scientific Procedures on Living Animals, Great Britain, published annually by HMSO, London.
6.3.2 Europe
In May 1994, the EC published the first report on the statistics of the numbers of animals used in 1991 for experimental and other scientific purposes (Commission of the European Communities, 1994). However, an overall picture of the numbers of animals used in Europe in 1991 cannot be obtained, due to problems with harmonization of the data gathered (Straughan, 1994). In addition, Belgium and Luxembourg did not supply any statistics, and data submitted by certain member states were incomplete. Some member states submitted data for years other than 1991, and the tables submitted by some countries differed from the models adopted by the Council of Europe. It should also be noted that the national figures for some countries do not include data from all laboratories, since they were unable to complete the tables, or failed to return them. Despite these inconsistencies, it is possible to determine the approximate numbers of animals used in safety evaluations of food additives in some member states (Table 6.2).
6.4 Legislation relating to food additive safety assessment

6.4.1 UK legislation
UK Ministers are advised on additives by two committees of independent experts: the Food Advisory Committee (FAC), which is responsible for establishing a need for a certain food additive, and the Committee on Toxicity of Chemicals in Food, Consumer Products and the Environment (COT), which establishes whether a substance is sufficiently safe to be acceptable. The COT published general guidelines for toxicity testing in 1982 (Committee
Table 6.2 Numbers of animals used for the safety evaluation of food additives in Europe

Country          Numbers of animals
UK^a                         6272
Germany                         -
Spain                          60
France^b                     3529
Greece                          0
Netherlands^a                2991
Sweden^a                        0
Portugal^c                    280

^a 1995; ^b 1990; ^c 1992. Note that data for the numbers of animals used for the safety evaluation of food additives are not gathered separately in Germany, where 70 456 animals were used for the purposes of testing substances used in industry, households, cosmetics and toiletries, and food. Data from Commission of the European Communities (1994), and Home Office (1995).
on Toxicity of Chemicals in Food, Consumer Products and the Environment, 1982). In contrast to the US Food and Drug Administration (FDA) guidelines, there are no regulatory requirements for specific tests to be carried out on food additives. The toxicity tests are carried out in accordance with Organization for Economic Co-operation and Development (OECD) guidelines (Organization for Economic Co-operation and Development, 1994). In the case of novel foods, a toxicological assessment must be carried out on any chemical for which no toxicological data, or only incomplete data, are currently available. The minimum testing requirements are a 90-day oral study, normally in the rat, and a battery of in vitro and in vivo mutagenicity screening tests (Advisory Committee on Novel Foods and Processes, 1991). If the extent of human exposure to the chemical is likely to be widespread and the intake significant, then further toxicological studies will also be required, such as tests for chronic toxicity/carcinogenicity, embryotoxicity and reproductive toxicity. The studies are carried out in accordance with COT (Committee on Toxicity of Chemicals in Food, Consumer Products and the Environment, 1982) guidelines.

6.4.2 European legislation

In 1980, the EU Scientific Committee for Food (SCF) published guidelines for toxicity testing of additives (Commission of the European Communities, 1980). As in the UK, there are no prescribed requirements for testing. However, in general, certain tests are required to be carried out for the evaluation of new food additives or the re-evaluation of
existing additives (Table 6.3). When evaluating a food additive, the SCF accepts studies carried out according to OECD protocols.

6.4.3 US legislation
The primary agency concerned with the regulation of food additives in the USA became known in 1930 as the Food and Drug Administration (FDA). In 1982, the FDA published Toxicological Principles for the Safety Assessment of Direct Food Additives and Color Additives Used in Food. A draft revision of this (the so-called Redbook II) was written in 1993 (Food and Drug Administration, 1993). The FDA states that the amount of testing required for a substance should reflect the 'level of concern' that a substance poses in terms of safety. The draft Redbook II defines three levels of safety concern, based on anticipated human exposure to the substance and its chemical structure. The three structure categories adopted were low (I), intermediate (II) or high (III) toxic potential. The tests that the FDA will generally require for substances at each concern level were stated (Table 6.3). In addition to the conventional types of toxicity tests recommended in Redbook I, the revised version included new or significantly expanded sections, such as tests for metabolism and pharmacokinetics, immunotoxicity, neurobehavioural toxicity, alternatives to whole animal testing, emerging issues in toxicity testing, pathology and statistical considerations, human studies, epidemiological studies and carcinogenic risk assessment.

Table 6.3 Summary of the toxicity tests recommended by the US Food and Drug Administration and the EU Scientific Committee for Food

                                                      FDA Concern Level
Test                                                  I      II     III     SCF^a
Short-term genotoxicity                               X      X      X       X
Acute oral toxicity                                   X^b    X      X       X
Metabolism and pharmacokinetics                       -      X      X       X
Short-term rodent toxicity                            X^b    -      -       -
Subchronic rodent toxicity                            -      X^b    X^b     X
Subchronic non-rodent toxicity                        -      X^b    X^b     -
Reproductive toxicity                                 -      X      X       X
One-year non-rodent toxicity                          -      -      X       -
Rodent carcinogenicity                                -      -      X^c     -
Combined rodent carcinogenicity and chronic toxicity  -      -      X^c,d   X

^a For flavouring substances, food processing aids and food packaging components, the toxicological data required may be modified appropriately.
^b Including neurotoxicity and immunotoxicity screens.
^c Combined study may be performed as a separate study.
^d An in utero phase is recommended for one of the two recommended carcinogenicity studies with rodents.
(Commission of the European Communities, 1980; Food and Drug Administration, 1993)
6.5 Tests required for food safety assessment

Detailed descriptions of the tests required for food safety assessment are given in Chapter 2. This section serves to highlight important differences between international test guidelines which have implications for the numbers of animals used and the severity of the experimental procedures.

6.5.1 Acute oral toxicity tests

The FDA requires acute oral toxicity data, including neurotoxicity and immunotoxicity screens, for substances in Concern Level I. However, the FDA does not recommend that petitioners determine the precise LD50 value for food additives. Instead, several alternatives are suggested: the limit test, the dose-probing test, the up-and-down test and the pyramiding test (Food and Drug Administration, 1993). These modified LD50 tests reduce the number of animals used, although most still require some animals to be killed. The OECD recommends the use of the limit test (Organization for Economic Co-operation and Development, 1987) and the fixed-dose procedure (FDP), which has a non-lethal endpoint (Organization for Economic Co-operation and Development, 1992). It is notable that the OECD guidelines accept the use of the FDP, while the FDA guidelines do not; no published critique suggests that the FDP is unacceptable on a scientific basis. In theory, tests carried out in compliance with OECD guidelines should be universally accepted, as OECD member countries are bound by the OECD statement on mutual acceptance of data. However, some countries are still reluctant to accept data on substances tested using OECD guideline methods (Animal Procedures Committee, 1994), and this unsatisfactory situation must be rectified as soon as possible.

6.5.2 Short-term genetic toxicity tests
Both the FDA and the SCF recommend a battery of short-term in vitro and in vivo genetic tests. Tests which have been suggested for screening include the Ames test, the detection of gene mutation in mammalian cells in vitro, and the concurrent determination of micronuclei and chromosomal aberrations in vivo.

6.5.3 Metabolism and pharmacokinetic studies

The FDA recommends that oral dosing studies for substances classified in Concern Levels II and III should be carried out in two rodent species and one non-rodent species. The SCF recommends the studies for all substances to be tested. The FDA also recommends that in vitro studies of metabolism are carried out before in vivo studies, to screen for dose dependencies and to provide more accurate descriptions of the enzyme kinetics or other processes underlying dose dependencies observed in the whole animal. In addition, the likely major metabolites of the test substance can be identified.

6.5.4 Immunotoxicity tests

In the Redbook II, the recommended minimum set of toxicity tests for food additives was augmented to include screens for immunotoxicity and neurotoxicity with rodents at each concern level. Two types of immunotoxicity testing procedure are defined. Type 1 tests (basic and expanded) are those assays that do not require the study animals to be treated with an agent that presents an immunological challenge. Primary indicators of immune toxicity are derived from basic and expanded Type 1 tests. Type 2 tests include injections of, or exposures to, test antigens, vaccines, infectious agents or tumour cells. Type 1 basic tests do not require manipulation of animals, and can, therefore, be included in the short-term or subchronic study without the need for additional animals. However, because Type 2 tests require treatment of animals with an immunological challenge, additional animals must be included in the study design. If a substance provides evidence of immunotoxicity in a basic Type 1 test, further testing (expanded Type 1 tests or Type 2 tests) may be recommended; such decisions will be made on a case-by-case basis and will take into account the concern level of the test substance. These immunotoxicity tests are designed primarily to characterize the mechanism of an effect on the immune system, rather than to detect induction of allergies and pseudo-allergies, phenomena more likely to be caused by food ingredients.
Moreover, the proposed methods for detecting immunotoxic effects by using animals have not been sufficiently well validated for routine use, and may therefore yield equivocal data.

6.5.5 Neurotoxicity tests
The FDA recommends that a basic neurotoxicity screen should include an examination of tissue samples and an examination of animals to detect behavioural changes. Substances which produce possible neurotoxic effects will be subjected to further characterization. This assessment will include a battery of behavioural and physiological tests and further neuropathological investigations. The FDA recommends that cross-species comparisons of the neurotoxic potential of the test compound should also be carried out in a non-rodent species, e.g. the dog.
6.5.6 Reproductive and developmental (teratogenic) toxicity tests
The rat is the preferred species for reproductive studies. A minimum of two generations, with one litter per generation, is generally required for reproduction studies, although this may be expanded if significant toxicity is observed. A minimum of three dose levels, usually in 30 animals/sex/group, should be administered 2 weeks before mating, through mating and pregnancy, and to the weaning of the litter. Reproductive and developmental studies may be combined in one or two species, as required by the FDA and SCF, respectively. The rat and the rabbit are the preferred species for developmental studies. The FDA specifies that each test and control group must consist of at least 20 pregnant rats or 12 pregnant rabbits, at or near term, with at least three dose levels administered. The question of the number of generations necessary for reproductive studies has been discussed by Christian (1986). Based on a review of published studies, she reported that most adult primary reproductive effects were seen in the first generation. She concluded that a one-generation study would be sufficient for the evaluation of a compound which was not accumulated in the tissues. However, in a workshop sponsored by the Environmental Protection Agency (Francis and Kimmel, 1988), panel members concluded that a one-generation study is insufficient to identify all potential reproductive toxicants, and a two-generation study is necessary for an adequate assessment. A thorough review should be carried out of reports of food-related reproductive toxicity studies, both for general and bioaccumulating substances, to determine the proportion of food additives which produced positive results only in the second generation. The results of such a review would indicate the usefulness of two-generation reproductive studies for food additive safety assessment. 
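The minimum design figures quoted above can be captured in a small validity check. The function and constant names are illustrative helpers; the numbers themselves (at least 20 pregnant rats or 12 pregnant rabbits per group, and at least three dose levels) are the FDA minima stated in the text.

```python
# FDA minima for developmental studies, as quoted in the text.
# The checking function itself is an illustrative helper, not FDA code.
MIN_PREGNANT_ANIMALS = {"rat": 20, "rabbit": 12}
MIN_DOSE_LEVELS = 3

def design_meets_fda_minimum(species: str, animals_per_group: int,
                             dose_levels: int) -> bool:
    """Check a developmental-study design against the stated FDA minima."""
    return (animals_per_group >= MIN_PREGNANT_ANIMALS[species]
            and dose_levels >= MIN_DOSE_LEVELS)
```

For example, a rabbit study with 10 animals per group would fail the check, whatever the number of dose levels.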
6.5.7 Carcinogenicity and chronic toxicity tests

Table 6.4 shows the tests required by the FDA and the SCF for the safety assessment of food additives. The FDA does not require petitioners to carry out chronic or carcinogenicity studies for additives classed as Concern Levels I and II. In contrast, the SCF requires a combined chronic/carcinogenicity study in rats and mice for all food additives. The highest administered dose is the maximum tolerated dose (MTD), and the lowest dose should show no signs of toxicity, although it should be no lower than 10% of the highest dose used. The SCF states that, in cases where the substance is relatively non-toxic, the highest dose should not exceed 5% of the diet.
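The dose-spacing constraints above can be illustrated with a short helper: the top dose is the MTD and the lowest dose should be no lower than 10% of it. Placing the middle dose at the geometric mean is an assumption for illustration, not an SCF prescription.

```python
import math

def chronic_study_doses(mtd: float) -> list[float]:
    """Three dose levels for a chronic/carcinogenicity study.

    Top dose = MTD; bottom dose = 10% of the MTD (the stated floor);
    middle dose = geometric mean of the two (an illustrative choice,
    not an SCF rule).
    """
    high = mtd
    low = 0.1 * mtd
    mid = math.sqrt(low * high)
    return [low, mid, high]
```

For an MTD of 1000 mg/kg diet this gives dose levels of 100, roughly 316, and 1000 mg/kg diet, and the lowest dose satisfies the 10% floor by construction.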
Table 6.4 Summary of the US Food and Drug Administration and EU Scientific Committee for Food guidelines for the conduct of 1-year and chronic toxicity and carcinogenicity studies

Toxicity study                          FDA Concern Level III       SCF All substances

One-year study in non-rodents
  Species                               Usually dogs
  No. of animals                        4/sex/group
  Exposure time                         12 months
  No. of dose levels                    3 + 1 control

Carcinogenicity study in rodents
  Species                               Preferably mice
  No. of animals                        50/sex/group
  Exposure time                         24 months
  No. of dose levels                    3 + 1 control

Combined chronic/carcinogenicity study in rodents
  Species                               Preferably rats             Rats and mice
  No. of animals                        50/sex/group                50/sex/group
  Exposure time                         24 months                   24 months
  No. of dose levels                    3 + 1 control               3 + 1 control

Commission of the European Communities (1980) and Food and Drug Administration (1993).
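The group sizes in Table 6.4 translate directly into minimum animal numbers (two sexes, three dose groups plus one control group); the arithmetic can be checked in a few lines. The function name is an illustrative helper.

```python
# Minimum animal numbers implied by Table 6.4:
# animals = (animals per sex per group) x (2 sexes) x (3 doses + 1 control)
def minimum_animals(per_sex_per_group: int, sexes: int = 2,
                    groups: int = 4) -> int:
    return per_sex_per_group * sexes * groups

dogs = minimum_animals(4)    # one-year study in non-rodents
mice = minimum_animals(50)   # carcinogenicity study in rodents
rats = minimum_animals(50)   # combined chronic/carcinogenicity study
```

This reproduces the figures of 32 dogs, 400 mice and 400 rats quoted for a Concern Level III additive.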
Based on the figures given in Table 6.4, a Concern Level III food additive would have to be tested using a minimum of 32 dogs, 400 mice and 400 rats to satisfy the FDA's requirements, and a minimum of 400 mice and 400 rats to satisfy the SCF's requirements. Furthermore, although the FDA does not require a Concern Level II substance to be tested for potential carcinogenicity, if this substance is to be marketed in Europe at least 400 mice and 400 rats would be required for carcinogenicity testing.

6.5.8 Determination of the no observed adverse effect level

The determination of the no observed adverse effect level (NOAEL) for each effect of every chemical is the initial stage in the setting of the ADI. Data are usually obtained from chronic and reproductive studies. Where several studies in different species are available, data are derived from the most sensitive species and/or the lowest NOAEL. However, the NOAEL used depends on the relevance of the effect to humans: the lowest NOAEL may not be used if the effect can be demonstrated to be irrelevant to toxicity in humans, i.e. the lowest NOAEL for an effect of relevance to human toxicity is usually used.

6.5.9 Determination of the acceptable daily intake

The ADI for human exposure is derived from animal studies by dividing the NOAEL by a safety factor. The safety factor, usually 100, is calculated as the product of a 10-fold factor to allow for species differences
between the test animal species and humans, and a 10-fold factor to allow for interindividual differences. The latter factor should allow for differences in exposure between individuals and individual variation in metabolism and pharmacokinetics. However, the 100-fold safety factor may be modified under certain conditions. For example, it may be reduced if human toxicology data are available, or increased if there is uncertainty about the NOAEL or concern about serious effects at doses above the NOAEL. In addition, the value of the ADI must take into account the population group likely to be most highly exposed or most susceptible. The derivation of the safety factor, whose purpose is to compensate for the low power of animal studies for detecting toxic effects at low doses, is highly subjective, and this approach makes a number of assumptions which are not necessarily true (Balls and Fentem, 1992).

Genotoxic carcinogens can act at extremely low concentrations, without any discernible threshold, and it is therefore not possible to set an ADI for them. It is possible that genotoxic carcinogens have a threshold value below which their activity is not expressed, although it is impossible to establish the absence of any effect at low doses. It is for this reason that researchers feel justified in administering high doses of a substance in rodent bioassays.

Renwick (1993) proposed a scheme for the derivation of the ADI based on a scientific judgement of the toxicity of the compound. Information from the total database available is weighted into factors for adequacy of the database, toxicodynamics, toxicokinetics, human variability in sensitivity and disposition, and various other considerations. The final safety factor is derived by multiplying the individual factors. This approach makes explicit why a substance was given a certain ADI value, and provides a way to incorporate appropriate data into safety evaluation as toxicological science develops.
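Both the conventional 100-fold factor and a Renwick-style scheme of multiplied subfactors reduce to the same arithmetic, which can be sketched as follows. The NOAEL and subfactor values in the example are hypothetical, chosen for illustration only.

```python
import math

def acceptable_daily_intake(noael: float,
                            factors=(10.0, 10.0)) -> float:
    """ADI = NOAEL / product of safety factors.

    The default (10, 10) is the conventional interspecies x
    interindividual 100-fold factor; a Renwick-style scheme simply
    multiplies a larger set of data-derived subfactors instead.
    """
    return noael / math.prod(factors)

# Conventional: hypothetical NOAEL of 100 mg/kg bw/day -> ADI of 1 mg/kg bw/day
adi_default = acceptable_daily_intake(100.0)

# Renwick-style, with hypothetical subfactors (values for illustration only)
adi_renwick = acceptable_daily_intake(100.0, factors=(4.0, 2.5, 2.0))
```

The composite factor, however it is subdivided, remains a single divisor applied to the NOAEL; Renwick's contribution is to make each component of that divisor explicit and data-driven.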
Although the methodology suggested by Renwick has yet to be widely accepted, it attempts to provide a more rigorous, science-based and transparent procedure for setting safety factors. It is not, however, applicable in all circumstances.

A comparison between the estimated daily intake (EDI) and the ADI provides the measure of safety which constitutes the basis for regulatory decisions about the foods, and the concentrations, in which the additive may be used. For example, if the ADI is calculated to be less than, or equal to, the EDI, then the levels of the additive in foods must be reduced.

6.6 Problems with animal tests

6.6.1 Determination of the NOAEL and the ADI

The overall conclusion of a toxicology assessment is the derivation of the ADI value. Whilst it seems that this risk assessment system is satisfactory for the protection of humans against most acute adverse reactions, we have no clear evidence that the risk assessment process is safeguarding us from the occurrence of chronic effects, which can involve many years between exposure and the manifestation of toxicity. There are many inherent problems associated with using animal tests for food components consumed at repeated, low concentrations. Many of these problems could be alleviated if more emphasis were placed on studying mechanisms of toxicity and applying knowledge gained from pharmacokinetics, identification of target organs, likely levels of compound/metabolites at target sites, modelling and in vitro studies.

6.6.2 Use of high doses
The toxicity of many compounds can be affected by enzymic processes of metabolism and elimination. Activation, detoxification and elimination systems can be saturated under high-dose conditions, and high-dose-specific toxicity may be attributed to toxic mechanisms which are seen only under conditions of saturated metabolism or elimination. For example, in the case of ortho-phenylphenol (OPP), rats fed high doses developed malignant bladder tumours, whereas lower doses provided no evidence of bladder tumours (Hiraga and Fujii, 1981). Metabolic studies showed that OPP was completely metabolized to sulphate and glucuronide conjugates at low doses, while at higher doses increasing amounts of the chemical escaped conjugation and were subjected to mixed-function oxidase metabolism to yield reactive quinone metabolites (Reitz et al., 1984). In this case, the high doses administered to the rats were much higher than the expected human exposure, and the data were, therefore, irrelevant to human hazard and risk assessment. In addition, the administration of high dose levels of food constituents in animal experiments is unlikely to be indicative of the predicted human exposure to the substances. This is particularly true in the case of food additives, where most sections of the population consume low doses of a given compound over long periods of time. Thus, the use of high doses in carcinogenicity bioassays is almost certainly irrelevant to the human situation, and the data produced are almost certainly misleading.

6.7 Currently available alternatives

The current scale of use of animals in food toxicity testing cannot be justified on either scientific or ethical grounds. Many reduction and refinement alternatives exist and more are being developed, but these alternative strategies are not being used effectively. There are currently only a few validated methods which can completely replace the use of animals in any food toxicity test.
At the present time, in vitro tests are successfully being used for mutagenicity testing (Combes, 1995) and for the ranking of different
chemicals according to their acute cytotoxicities (Balls and Fentem, 1992; Garle et al., 1994). Combined with a knowledge of the chemical structure and physicochemical properties of the test compound, these tests can be used as general predictors of the likelihood of the compound being genotoxic or causing cytotoxic effects at low concentrations. Other in vitro tests for the investigation of specific activities associated with toxicity have been developed, although all in vitro systems have a number of limitations:

• They lack the integrated systemic mechanisms of absorption, distribution, metabolism and excretion.
• They lack the complex, interactive effects of the immune, blood, endocrine and nervous systems.
• Models are not yet available for all tissues and organs.
• The nature of the test compound may complicate the interpretation of the in vitro studies.
Attempts are being made to overcome some of these limitations, e.g. via the ERGATT/CFN integrated toxicity testing scheme (ECITTS) (Walum et al., 1992) and two ECVAM workshops, on integrated testing (Barratt et al., 1995) and on the incorporation of biokinetic factors into in vitro studies (Blaauboer et al., 1996). In any case, despite these limitations, in vitro systems have several advantages which encourage their development and utilization, particularly for food safety assessment:

• Validated in vitro test systems can provide toxicity information in a cost-effective and time-saving manner.
• Information generated from in vitro systems can be used to increase the efficiency of whole animal studies and decrease the number of animals used in toxicity testing.
• In vitro systems possess several toxicity endpoints and are ideal for investigations of the molecular and cellular mechanisms of toxicity, as well as for target organ and target species toxicity studies.
• Human tissues can be used in in vitro systems; the use of human cells obviates the need for cross-species extrapolation.
• In vitro systems can be more easily used to assess the interactions of the individual components of a food.
The following section describes both currently available and potential alternatives which could lead to the implementation of the Three Rs in food safety assessment.

6.7.1 Reduction alternatives

Harmonization of guidelines. Manufacturers of a food additive must meet the conditions of any regulatory requirement that applies where they intend to transport and/or market their product. This might mean that food additive producers duplicate some toxicity tests in order to meet various regulatory authority requirements. In some cases, where national and international regulatory authorities have different requirements for the safety assessment of food additives, a food additive manufacturer may seek to avoid duplication of toxicity studies by meeting the most demanding of these different requirements. In the first situation, more animals than necessary are used; in the second, the procedures applied can be more severe than is scientifically justifiable. Increased harmonization of international regulations would lead to a reduction in the numbers of animals used, for several reasons. First, it would reduce the likelihood that testing would need to be repeated to satisfy more than one regulatory agency; multiplicity of testing still occurs, despite the OECD Agreement on the Mutual Acceptance of Data (Animal Procedures Committee, 1994). Second, there is the possibility that consensus would result in fewer animals being required. For example, it may be that only one sex, fewer treatment groups, fewer repeat tests and smaller group sizes would be deemed necessary. In addition, the severity of testing would be lessened by agreeing to shorten the length of studies and, if rationalization were combined with harmonization, to reduce the maximum dose levels required. Many of the differences between the FDA and the SCF requirements cannot be justified scientifically or statistically, and lead to an unnecessarily large use of animals. The Animal Procedures Committee recently recommended that the Home Secretary should ask the principal UK and overseas regulatory bodies to provide a formal statement re-confirming their commitment to the OECD Agreement on the Mutual Acceptance of Data (Animal Procedures Committee, 1994).

Protocols for carcinogenicity bioassays

Reduction of the study design. Reduced protocols for the rodent carcinogenicity bioassay were advocated at an EPA/NIEHS workshop held in 1994 (Lai et al., 1994).
It was suggested that a two-species/one-sex protocol might be appropriate for confirming negative or strong positive carcinogenicity anticipated from other information, e.g. responses in the Salmonella mutagenicity assay and structure-activity considerations. It has been shown that up to 92% of compounds in the National Toxicology Program (NTP) database, classified as carcinogens on the basis of the full two-species/two-sex protocol, would have been predicted by a two-species/one-sex protocol (Gold and Stone, 1993). A case can also be made for a single-species study.

The detection of non-genotoxic carcinogens. Of 301 carcinogens tested in the US NTP, over 36% were classed as non-genotoxic (Ashby and Purchase, 1993). Non-genotoxic carcinogens are negative in the Salmonella mutagenicity assay, show evidence of threshold dosage effects and are usually species/tissue specific (Ashby and Tennant, 1988; Ashby and Purchase, 1992; Rosenkranz and Klopman, 1993). The possible mechanisms of action of certain non-genotoxins, known as peroxisome proliferators, have recently been reviewed (Ashby et al., 1994). As the mechanisms of action of non-genotoxins are not yet fully understood, no satisfactory battery of short-term in vitro tests has been developed to permit early identification of such chemicals. The detection of non-genotoxic carcinogens in vitro is problematic, due to the possible involvement of several intracellular targets (Tennant, 1993). There is therefore a need to develop a battery of short-term tests, encompassing the necessary wide spectrum of endpoints, to detect and characterize non-genotoxic carcinogens (Combes, 1995). Such a battery will have to cover at least the following phenomena, thought to be relevant for detecting such carcinogenesis: cell transformation, peroxisome proliferation, hepatomegaly, cell proliferation, hyperplasia, altered intercellular communication, changes in immunoglobulins, and spindle disruption (Ashby, 1992). The Fund for the Replacement of Animals in Medical Experiments (FRAME) Toxicity Committee reported that test systems for non-genotoxic carcinogens are urgently needed and that research in this area should be encouraged (Fund for the Replacement of Animals in Medical Experiments, 1991). This viewpoint is endorsed here.

Human studies. The use of pre-marketing clinical studies is a key issue facing the food industry. Guidelines for the conduct of human clinical studies of foods and food ingredients were included for the first time in the draft of the FDA's Redbook II. Although clinical studies are not required by the FDA, it is recommended that such studies are conducted after adequate toxicity tests in animals.
The use of clinical studies is intended to address aspects of toxicity which cannot be adequately assessed by non-human experiments. In particular, the FDA acknowledges that clinical studies may be necessary in the case of food additives which are intended to substitute for major nutrients such as fat and sugar. The FDA recognizes that it may not be possible to administer high doses of such additives to animals. Human studies could therefore prove useful in providing convincing evidence of food safety. The Redbook II does not, however, indicate the potential role of human studies in the safety evaluation of novel foods, which present problems for testing in animals. Nevertheless, the introduction of 'post-marketing' surveillance would contribute an additional aspect to the safety evaluation of novel foods. Reporting of observed adverse effects of novel foods in consumers would permit the identification of possible toxic and/or intolerance reactions not observed in animal studies.
Use of transgenic animals. The number of scientific procedures involving the use of transgenic animals in the UK has dramatically increased from 48 255 in 1990 to 215 293 in 1995. This upward trend will undoubtedly continue. The advent of transgenic technology may result in a reduction in the numbers of animals used in toxicity testing in the longer term. Various rodent strains have been developed for in vivo mutagenesis testing. Muta™Mouse and Big Blue™ contain a lac Z gene and a lac I gene from E. coli, respectively (Gossen et al, 1989; Kohler et al, 1991). These have a defined base sequence to act as the target, and such transgenic systems would seem to be particularly suitable for toxicity testing. In particular, the principle of being able to investigate gene mutation in any tissue following in vivo exposure could be an important development. In the USA, several transgenic mouse models are currently being investigated by the NTP with respect to their usefulness in evaluating the potential carcinogenicity and mutagenicity of chemicals (Stokes, 1994). Carcinogenicity studies employing these models are completed in 6 months or less, compared with the standard 2-year bioassay, and involve fewer animals per dose group. In addition to their use in investigating the effects of genotoxic agents, transgenic rodents have also been used to evaluate the mechanisms of activity of non-genotoxins (Lefevre et al., 1994). Methyl clofenapate (MCP) was inactive in both in vitro and in vivo short-term assays, and no evidence of mutagenesis was detected in the livers of Muta™Mouse and Big Blue™. However, administration of nine daily doses of MCP at cancer bioassay dose levels doubled the weight of the liver of non-transgenic Muta™Mice and Big Blue™, as well as leading to a dramatic proliferation of peroxisomes in the livers of each strain. These combined observations suggested a non-genotoxic mechanism of action of the hepatic carcinogenicity of MCP. 
Regulatory acceptance of the use of transgenic rodents for carcinogenicity testing has a number of important implications for animal welfare. The total numbers of animals required would be significantly reduced, as there would be less need for short-term in vivo assays. If the number of in vitro systems was increased to allow for the more accurate prediction of genotoxicity, a decision as to whether to continue toxicity testing with a carcinogenicity bioassay in a transgenic species could be made on the basis of the results of the battery of in vitro tests. Any such decision should take into account the necessity for further testing of a particular food constituent which may have resulted in one or more positive effects in the in vitro assays. In the majority of cases, a positive in vitro result would provide sufficient information about the potential genotoxicity of a substance, and testing would proceed no further. In general, only a substance which produced no genotoxic effects at all in in vitro assays should be tested in an in vivo bioassay.
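The decision strategy described above, in which testing proceeds to a transgenic rodent bioassay only when the full in vitro battery is negative, can be paraphrased as a simple decision rule. The function and the assay names in the example are illustrative.

```python
def next_step(in_vitro_results: dict[str, bool]) -> str:
    """Decide the follow-up from a battery of in vitro genotoxicity
    results (True = positive). Mirrors the strategy described in the
    text: a positive in vitro result usually provides sufficient
    evidence, so testing stops; only an all-negative battery proceeds
    to an in vivo bioassay in a transgenic rodent."""
    if any(in_vitro_results.values()):
        return "stop: sufficient evidence of potential genotoxicity"
    return "proceed to transgenic rodent carcinogenicity bioassay"
```

This rule is deliberately conservative about animal use: the in vivo stage is reserved for substances the in vitro battery cannot resolve.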
Although there are other advantages of using transgenic animals, such as the ability to test topically applied substances, the development of transgenic animals raises many scientific and ethical issues which need to be addressed. For example, it requires the use of large numbers of animals in the short term (Moore and Mepham, 1995). It is vital that transgenic models should not be considered as the only, or the final, solution to the many problems associated with toxicity testing in animals. Serious consideration needs to be given to whether efforts and resources should be redirected to the development of in vitro systems which should, ultimately, replace in vivo methods completely. Such in vitro systems could involve human material, and could thus be a means of overcoming doubts about the relevance of tests on laboratory animals to carcinogenicity in humans.

6.7.2 Refinement alternatives

Group housing. The FDA's Redbook II recommends single housing of both large and small laboratory animals. The scientific argument advanced in favour of this requirement relates to the apparent difficulty of determining whether body weight losses are due to actual toxicity or to reduced feeding as a consequence of unpalatability. In an open letter to toxicologists, Zbinden (cited in Anon., 1993) said that single-caging of laboratory animals 'Provides no advantage whatsoever for safety studies with large animals and minimal benefits, if any, for the assessment of rodent experiments . . . it is ethologically wrong, ethically unacceptable, and adds to the cost, without providing significant increase of the scientific information'. In a recent study, the effects of individual housing on body weight, survival and tumour incidence in B6C3F1 mice were examined.
The results indicated a marked increase in liver tumour incidence and body weight in both sexes when animals were housed individually (Haseman et al., 1994). The effects of individual housing on tumour incidence may, therefore, have implications for chronic toxicity studies. The improved analysis of food and water intake obtained from single-housed animals does not outweigh the benefits of the improved health and well-being of group-housed animals. It is difficult to argue a case for single housing in food additive testing studies.

6.7.3 Replacement alternatives

Foods contain a complex mixture of naturally occurring chemicals and food additives which have the potential to induce a toxic reaction. The assessment of the many possible synergistic and/or antagonistic effects of complex mixtures of compounds is almost impossible using animal studies.
There is, therefore, a need for the development of rapid, low-cost, in vitro test methods to screen for the biological properties of food contaminants, naturally occurring substances and food additives. The advantages of in vitro models over in vivo systems include their high sensitivity, which is important in the assessment of the toxicity of substances which occur at low levels, such as food contaminants. It was for this reason that 19 organizations from 11 EU or EU-affiliated countries initially participated in a FLAIR Concerted Action Programme, called 'In vitro toxicological studies and real time analysis in food'. In the published proceedings of workshops held in 1991 (FLAIR, 1991), many in vitro assays were presented for the assessment of the potential toxicity of contaminants, including the isolated perfused gut, various cell lines, isolated hepatocytes and cell fractions. Such in vitro systems could also be used for the assessment of food additive toxicity.

Genotoxicity testing. The potential benefits of in vitro methods as replacements for various in vivo toxicity testing protocols have been assessed (Gray, 1995), and it was concluded that the greatest benefit from the development of suitable in vitro methods would be achieved in the area of carcinogenicity testing, where both the numbers of animals used and the financial costs are high. The development of in vitro carcinogenicity screens would significantly reduce the numbers of animals used. In addition, the efficiency of compound selection and problem-solving, and the elucidation of the mechanisms of action of carcinogens, would be improved. Genotoxic carcinogens are currently identified by using the Salmonella/microsome mutagenicity assay, or bacterial or mammalian cell cultures supplemented with exogenous activation systems.
These tests tend to be oversensitive, because the bacteria, some of which are deficient in DNA repair, possess defective cell envelopes which enhance the uptake of test chemicals (Gatehouse et al, 1990). However, as was noted earlier, these tests are used only as a prescreen for detecting potential carcinogens. In order to determine the effect of a chemical over a lifetime of exposure, regulatory authorities recommend the use of in vivo animal bioassays. However, the relevance of in vivo animal bioassays and in vitro systems in genotoxicity testing is limited, since they do not necessarily have humanspecific metabolism and they do not measure endpoints closely related to pathological processes in humans. Furthermore, the testing of the effects of chronic low-dose exposure to a substance is problematical in in vivo and in vitro systems. To overcome some of the problems associated with the deficiencies of in vitro systems, many cell lines are being produced which express various cloned cytochrome P450 isozymes (Combes, 1992). For example, human P450 cDNAs have been expressed in human b-lymphoblastoid AHH-I cells (Crespi et al., 1990a,b). AHH-I cells, expressing
various P450 enzymes, are sensitive to mutations induced by benzo[a]pyrene, N-nitrosodimethylamine and aflatoxin B1, all well-characterized genotoxic carcinogens. Other types of improved cell lines are also being generated, e.g. by introducing shuttle vectors as target DNA sequences for genotoxicity studies (Yagi et al., 1994), and also by producing cells possessing stably integrated copies of a lambda bacteriophage shuttle vector, containing the lacI gene, which can be used as a target base sequence for detecting mutations, in a similar way to that employed when transgenic rodent strains are used. The pivotal role of oncogenes and tumour suppressor genes in the process of carcinogenesis is now recognized, and assays are being developed which involve the activation of various proto-oncogenes, such as c-myc and c-fos, or the inactivation of tumour suppressor genes, especially p53, as additional methods to cell transformation assays, for the characterization of potential carcinogens and tumour promoters (Skouv et al., 1995). Several phenomena, such as chromosomal deletion and insertion, point mutation, chromosomal translocation and gene amplification, are thought to be involved in altering the functions of these genes (Scrable et al., 1990). Some of these events can be caused by direct effects on targets other than DNA, and such assays may therefore prove to be useful for detecting non-genotoxic carcinogens.

In vitro gut systems. The gut microflora play an important role in many diverse responses and functions, e.g. metabolism, physiology, nutritional status and toxicology (Rowland, 1988). It is therefore particularly important to study the effects of food on the composition and activity of the gut microflora of humans, and, conversely, the role of the microflora in the metabolism of food. There are three types of in vitro models of human colonic microflora: static culture, continuous culture and semi-continuous culture (Rumney and Rowland, 1992).
Each model has a number of advantages and disadvantages. The static systems are suitable only for very short periods of incubation, while the more complex systems require considerable technical and scientific expertise and are more costly (Rumney and Rowland, 1992). A major advantage of the continuous-flow models is the maintenance of a stable population which can be used to study the effects of diet, drugs and toxic chemicals on the flora. In addition, continuous-culture models of human gut microflora are an ideal way of studying the effects of a compound over extended periods of time (Mallett et al., 1985). The applications of in vitro models include: investigations of the metabolism of compounds (e.g. N-nitroso compounds) by the gut bacteria, studies to determine whether exposure of the flora to a foreign compound (e.g. cyclamate) over extended periods results in an increased rate of metabolism of the xenobiotic, and investigation of the effects of food
components (e.g. fibre) on gut bacteria metabolism (e.g. nitrate reduction). In vitro models could also be potentially useful in studies on the effects of novel foods or food components on bacterial metabolism and fermentation products.

Teratogenicity testing. Developmental toxicants are likely to act via various mechanisms. This precludes the use of a single in vitro assay, which might produce false-negative results. Several in vitro methods have been introduced for teratogenicity screening. However, when submammalian species, e.g. Hydra (Johnson et al., 1982), are used, prediction of human risk is difficult. Furthermore, mammalian systems such as whole embryo culture (Steele et al., 1983) and rodent limb bud culture (Guntakatta et al., 1984) still require a considerable number of experimental animals. The use of permanent mammalian cell lines, such as embryonic stem cell (ESC) lines, is a promising way to establish an assay for teratogenicity testing in vitro (Heuer et al., 1994). Under appropriate culture conditions, ESCs differentiate into different cell types (Doetschman et al., 1985; Wobus et al., 1995). The effects of chemicals on cell development can, therefore, be investigated. Further research is needed to establish the potential usefulness of ESC lines in teratogenicity testing.

Neurotoxicity testing. In view of the complexity of the nervous system, particularly the central nervous system (CNS), in vitro models increasingly have a role to play in elucidating both the potential for, and mechanisms of, neurotoxic insults that result in neurobehavioural malfunction in laboratory animals and humans. In the 1983 FRAME report (Fund for the Replacement of Animals in Medical Experiments, 1983), emphasis was placed on the development of suitable organotypic in vitro tissue culture models of the nervous system. At the present time, no single in vitro neurotoxicity test or package of tests has been adequately evaluated and validated.
However, much progress has been made, both in mechanistic neurotoxicology and in devising potential prescreening strategies. Dispersed cell cultures, explant cultures, reaggregate cultures, whole organ cultures, whole embryo models and cell lines are currently being used for neurotoxicological investigations (review: Atterwill et al., 1991). Recently, a multicentre pre-validation study of the first tier of the stepwise in vitro model using cell lines and primary organotypic cultures has taken place (Atterwill et al., 1993). Various other models are also being investigated (Atterwill et al., 1994). There is considerable scope for the application of an in vitro battery of tests for use in a tiered-testing approach to neurotoxicity studies. The function of the CNS depends on the regulation of the uptake and release of most substances across the blood-brain barrier (BBB). Disruption of the BBB, which is localized mainly in the endothelial cells
of the brain capillaries, can result in several pathological conditions. Primary cultured brain endothelial cells have been proposed as in vitro models of the BBB. However, evidence suggests that these cells lose the differentiation characteristics of the in vivo BBB. Recently, a BBB coculture model has been developed with bovine brain capillary endothelial cells and newborn rat astrocytes (Joly et al., 1995). This model mimics the in vivo situation and is a promising tool for studies of the neurotoxic potential of food additives.

Pharmacokinetics and metabolism studies. Pharmacokinetic studies should be performed in animals, early in the process of the evaluation of the toxicity of a chemical. The objective is to determine, first, whether a chemical is absorbed by the gastrointestinal tract, and, if so, the levels of metabolites formed, and which target organs are potential sites for toxicity. These studies are usually conducted in rodents, whose metabolic capacities can differ significantly, both qualitatively and quantitatively, from those of humans. One of the biggest problems for in vitro approaches is to determine systemic toxicity and potential target organs. Once these have been defined, the susceptibility of the relevant cells in culture to the test compound can be investigated, and such information can be used in conjunction with knowledge of likely metabolites, obtained from other studies with hepatocytes, and other cell cultures, as appropriate. Freshly isolated and cultured hepatocytes, derived from different species, including humans, have been widely employed for studying the biotransformation of chemicals. A number of recommendations from the first ECVAM workshop (Blaauboer et al., 1994) aimed to facilitate the further implementation of in vitro systems employing hepatocytes.
It was recommended that the maintenance of hepatocyte-specific functions during long-term culture should be explored further, in particular in co-cultures and in three-dimensional hepatocyte culture systems. Human bronchial and liver epithelial cell lines have been produced by cell immortalization and cDNA transfection. These cells retain phase II enzyme activity and cytochrome P450 inducibility (A. Pfeiffer, personal communication). The objective of a 3-year EC project, started in December 1993, is to develop in vitro assays which use immortalized human cell lines for food safety evaluation. The metabolic capacities of the cell lines derived from human target organs will be characterized and, if necessary, corrected by genetic manipulation. It is hoped that, as the cell lines will be well defined and not limited by short lifespans, endpoints such as cytotoxicity, genotoxicity and transformation (including promotion and progression by the use of cell lines at different stages of progression to malignancy) will be developed. At the present time, an immortalized human keratinocyte cell line (HaCaT), a human buccal mucosa cell line (SVpgC2a) and a human bronchial epithelial cell line have been analysed
for the presence of cytochrome P450-dependent monooxygenase activities (V.A. Baker, personal communication). These cell lines represent a significant advance in in vitro toxicology for a number of reasons. First, cells from various organs could be immortalized and used to assess target organ toxicity. Second, the use of human cells obviates the need for cross-species extrapolation. Third, cell culture systems expressing human in vivo metabolic capacities would represent a significant step forward, as metabolic transformation of test chemicals could proceed intracellularly, rather than outside the cell. Therefore, active metabolites can be generated closer to the DNA, thereby increasing the sensitivity of the assay. Finally, the increased longevity of the cells provides a potential in vitro method for the study of chronic toxicity, which is difficult to determine in normal cell lines. Considerable progress towards modelling absorption, distribution, metabolism and excretion (ADME) is being achieved using physiologically based pharmacokinetic (PB-PK) approaches. PB-PK modelling refers to the development of mathematical descriptions of the uptake and disposition of chemicals based on quantitative interrelationships among the critical biological determinants of these processes. The disposition of a substance and its metabolites can be predicted by integrating three types of information: (1) species-specific physiological parameters (e.g. organ volumes and perfusion rates); (2) partition coefficients for the chemical; and (3) species-specific metabolic parameters (e.g. Km and Vmax). Metabolic parameters can be estimated using cultured cells or from in vivo studies, partition coefficients can be measured by vial equilibration techniques, and the physiological information is available from the literature. On the basis of this information, PB-PK models can predict tissue exposure to a substance. 
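The three types of information listed above can be combined in even a very reduced PB-PK calculation. The sketch below integrates a single well-stirred liver compartment with saturable (Michaelis-Menten) metabolism; the one-compartment simplification and every parameter value are illustrative assumptions, not part of any validated model described in the text.

```python
# Minimal one-compartment PB-PK sketch: a bolus dose is placed in a
# well-stirred "liver" volume and removed by Michaelis-Menten
# metabolism (Km, Vmax), integrated by simple Euler steps.
# All parameter values are invented for illustration.

def simulate_liver_exposure(dose_mg, v_liver_l=1.5,
                            vmax_mg_h=10.0, km_mg_l=2.0,
                            dt_h=0.01, t_end_h=24.0):
    """Return (times, concentrations) after a bolus dose (mg)."""
    amount = dose_mg                      # mg in the compartment
    t = 0.0
    times, concs = [], []
    while t <= t_end_h:
        conc = amount / v_liver_l         # mg/l
        times.append(t)
        concs.append(conc)
        # saturable metabolic loss (mg/h)
        rate = vmax_mg_h * conc / (km_mg_l + conc)
        amount = max(amount - rate * dt_h, 0.0)
        t += dt_h
    return times, concs

times, concs = simulate_liver_exposure(dose_mg=50.0)
print(round(concs[0], 2))   # initial concentration = dose / volume -> 33.33
```

A real PB-PK model would link several such compartments by blood flows and use measured partition coefficients, but the predicted tissue concentration-time course is obtained in essentially this way.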
In addition, with limited animal experimentation, PB-PK models can be used for extrapolation of the kinetic behaviour of chemicals from high dose to low dose, from one exposure route to another, and, importantly, from test animal species to humans. Inter-route extrapolation for food additives is necessary because, although toxicity testing is carried out orally, people involved in the production and processing of food additives can be exposed by inhalation or skin contact. At an ECVAM workshop (Barratt et al., 1995) the development of integrated testing strategies for predicting the systemic toxicity of chemicals was recommended. It was noted that PB-PK modelling has an important role to play, and could be used in combination with structure-activity relationship predictions and in vitro systems. A multicentre collaborative research project was established with the purpose of developing integrated in vitro toxicity testing (Walum et al., 1992). The strategy included a number of non-animal methods which could provide information on biokinetics. At another ECVAM workshop (Blaauboer et al., 1996), proposals were made for conducting biokinetic studies in vitro and integrating them with other in vitro studies. Such
integrated testing strategies have considerable potential to reduce, refine and possibly replace conventional animal procedures for predicting systemic toxicity, including ADME studies.

Acute oral toxicity testing. Acute systemic toxicity can be caused by various mechanisms. Such potentially complex effects in animals effectively preclude the development of a single in vitro assay to predict in vivo acute toxicity. However, the individual toxic mechanisms of a compound can be modelled in a battery of in vitro assays, whose endpoints of toxicity indicate specific adverse effects. Seibert et al. (1994) reported that a battery of five in vitro assays, employing bovine sperm, Balb/c 3T3 cells, rat hepatocytes, rat muscle cells and co-cultures of the latter two, could collectively measure the following endpoints: cytostasis, cytolethality, hepatotoxicity/metabolism, inhibition of contraction and membrane damage. Using a tiered-testing approach, chemicals are classified as very toxic, toxic or harmful. Only if the results indicate the lowest toxicity class (i.e. unclassified) should a fixed-dose procedure be conducted. This approach could be used to evaluate the effects of high doses of single or complex mixtures of compounds for short periods of time. The assays overcome some of the problems associated with in vivo acute toxicity assays, such as the costly and time-consuming efforts involved, and the difficulties of feeding animals with high doses of a possibly unpalatable substance. In addition, in vitro systems allow the elucidation of the specific mechanisms of toxicity.

Cytotoxicity testing. A recent paper described the use of three cell lines for the safety assessment of food contaminants (De Angelis et al., 1994). The cell lines Hep-2 (human larynx carcinoma), Caco-2 (human colon adenocarcinoma) and V79 (Chinese hamster lung fibroblasts) were exposed to furazolidone (FZ), a widely used veterinary drug, for up to 24 h.
The endpoints measured were cell death, cell proliferation and inhibition of DNA synthesis. In all the cell lines, 5 μg/ml FZ caused a marked decrease in cell viability and cell proliferation. In V79 cells, the decrease in cell number was accompanied by increased lactate dehydrogenase leakage due to membrane damage. Exposure of Caco-2 and Hep-2 cells caused an increase in oxygen consumption. The results demonstrated that cell lines derived from human tissues can be valuable tools in the investigation of tissue-specific metabolic capacities and mechanisms of toxicity of food contaminants. The main advantages of the use of cell lines are that they provide rapid, low-cost systems which allow strict control of the experimental conditions and, as a consequence, reproducible results (Balls and Fentem, 1992). In addition, the effects of various mixtures of compounds can be tested. However, in vitro cytotoxicity assays must measure endpoints relevant to the in vivo situation. The development of such assays relies on the
elucidation of the mechanisms of toxicity, including the identification of specific tissue, organ and systemic targets and responses. The use of cell lines in a prescreening phase could substantially reduce the numbers of animals used and contribute to studies of the mechanisms of action of cytotoxicants.

Computer models. Several expert systems and structure-activity approaches have been proposed for predicting toxicity, including COMPACT, Hazardexpert and DEREK. As these are discussed in detail in Chapter 5, only short descriptions will be given here. The COMPACT methodology for predicting chemical toxicity mediated by cytochrome P450-dependent activation involves a determination of the molecular and electronic structures of the chemicals, followed by a comparison between specific structural parameters and previously evaluated criteria for specificity towards cytochromes P4501 and P4502E (Lewis et al., 1994). The Hazardexpert program is used to determine structurally alerting substructures known to give rise to various forms of toxicity. The logarithms of the octanol/water partition coefficients (log P) and pKa values are calculated, and possible metabolites of a parent structure are predicted. Deductive estimation of risk from existing knowledge (DEREK) is a knowledge-based expert system which permits the interfacing of on-screen structural information, drawn by a toxicologist, with a toxicity rulebase. Toxicophores in chemicals of interest can be identified and qualitative activity predictions of novel structures can be made (Sanderson and Earnshaw, 1991). DEREK has been assessed for its potential as a screen for the prediction of the genotoxicities and carcinogenicities of some chemicals found in foods (Long and Combes, 1995). A set of rules was produced, which covered a wide spectrum of structures and identified structurally alerting toxicophores. The results indicated that DEREK has potential as a screen for detecting the genotoxicity of food chemicals.
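The structural-alert idea underlying such rulebases can be caricatured very simply: scan a structure for substructures previously associated with toxicity and report the matching alerts. The toy below matches string patterns in a SMILES notation; real systems such as DEREK use proper substructure matching against a curated rulebase, so the patterns and alert descriptions here are purely illustrative assumptions.

```python
# Toy structural-alert screen: flag a SMILES string if it contains a
# substring crudely associated with a known toxicophore. Patterns and
# alert texts are invented for illustration; real systems use genuine
# substructure (e.g. SMARTS) matching, not substring tests.

ALERTS = {
    "N(=O)": "nitro group (potential genotoxicity)",
    "N=N": "azo linkage (potential genotoxicity)",
    "NN": "hydrazine-like moiety (potential carcinogenicity)",
}

def screen(smiles):
    """Return the alert descriptions triggered by a SMILES string."""
    return [desc for pattern, desc in ALERTS.items() if pattern in smiles]

# 4-nitroaniline, SMILES Nc1ccc(cc1)N(=O)=O, triggers the nitro alert
print(screen("Nc1ccc(cc1)N(=O)=O"))
```

The output of a real rulebase is likewise qualitative: a list of identified toxicophores, to be interpreted by a toxicologist rather than treated as a verdict.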
Recently, a comparison was made between a number of predictive systems, including bacterial mutagenicity, structure alert and chronic toxicity (Tennant et al., 1990), COMPACT, Hazardexpert and DEREK, and the outcome of the two-species rodent carcinogenicity bioassay for 44 chemicals conducted according to the NTP protocol (Lewis, 1994). The combination of the Tennant et al. (1990) method and COMPACT resulted in a 95% concordance with the in vivo data. The best prediction by an individual method was 83%, for the method of Tennant et al. (1990). In a comparison between the COMPACT and Hazardexpert evaluations of 80 chemicals, previously tested in the NTP/NCI rodent bioassay (Brown et al., 1994), Hazardexpert showed a low (51%) concordance in
identifying positive rodent carcinogens. However, it correctly identified 81% of non-carcinogens. In contrast, the COMPACT procedure was able to predict both carcinogens (71%) and non-carcinogens (67%) with a similar level of accuracy. Comparisons of the results of the COMPACT predictions obtained by Lewis (1994) and Brown et al. (1994) show that they were similar: sensitivity 82% and 70%, specificity 57% and 67%, and predictive value 78% and 85%, respectively. It appears that it is possible to make reasonable predictions of the outcome of the rodent bioassay on the basis of previous experience and short-term test data.
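The performance measures quoted above all derive from a simple 2x2 comparison of predictions against bioassay outcomes, and can be computed as follows. The counts in the example are invented for illustration, not taken from the cited studies.

```python
# Standard screening-performance measures from a 2x2 comparison of
# predicted vs observed carcinogenicity. tp/fp/tn/fn counts below are
# illustrative, not data from Lewis (1994) or Brown et al. (1994).

def performance(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)        # carcinogens correctly called
    specificity = tn / (tn + fp)        # non-carcinogens correctly called
    predictive_value = tp / (tp + fp)   # fraction of positive calls correct
    concordance = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, predictive_value, concordance

sens, spec, ppv, conc = performance(tp=28, fp=8, tn=12, fn=6)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"predictive value={ppv:.2f} concordance={conc:.2f}")
```

Comparing systems on the same chemicals therefore requires reporting all of these measures, since a screen can buy sensitivity at the cost of specificity and vice versa.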
6.8 Conclusions

To date, the use of in vitro techniques in food safety assessment has been viewed mainly as a prescreening process which can reduce the numbers of animals used in the initial phases of food additive toxicity testing. At the present time, there are no validated non-animal methods which can completely replace the use of animals in any toxicity testing procedures. In the short term, there is the potential for a number of modifications which would result in a decrease in the numbers of animals used and a refinement of the procedures applied to the animals used. In the longer term, the replacement of the use of animals in food safety assessment will depend on the validation of individual alternative methods, or batteries of non-animal methods. There is an urgent need for the development of tissue-specific assays which are based on the measurement of defined endpoints which are relevant to the in vivo effects of the compound and the target organ. Whilst it is often difficult to identify such endpoints, advances in cellular and molecular biology are providing opportunities to achieve this goal. Further work requires adequate resourcing of the development of alternatives based on fundamental research into mechanisms of toxicity. There are currently no suitable in vitro or in vivo methods available, or published guidelines specifically recommended, for the prediction of the toxicity of novel, genetically engineered foods. The safety assessment of novel foods created by new chemical, physical or biotechnological methods poses a number of problems. Biotechnology-derived food additives of large molecular weight might be incompatible with in vitro assays for genotoxicity and cytotoxicity, and in vitro tests cannot predict the potential systemic toxicities of novel foods, which may be the only adverse effects elicited.
In addition, current testing protocols using animals might not be technically feasible in the case of novel foods which constitute more than 1% of the expected human diet. Suitable in vitro test systems need to be developed to enable an accurate prediction of the potential toxicity of novel foods.
Figure 6.2 outlines a potential tiered-testing strategy. The initial stages of the strategy involve the use of PB-PK modelling to identify the metabolites produced and their potential target organs. PB-PK modelling would also assess the concentrations likely to be encountered in humans. This is particularly important as the demonstration of the effects of a substance at high concentrations in an in vitro assay might not necessarily imply that the same effects would be observed in vivo. After identification of the potential target organs, a battery of cell lines could be used to assess the genotoxicity and tissue-specific toxicity of the substance. Short-term and long-term genotoxicity and non-genotoxicity studies would be carried out concurrently. Should any toxic effects be observed, the substance would be rejected. If no cytotoxic or genotoxic effects were observed at relevant concentrations, further testing would continue for neurotoxicity, immunotoxicity, reproductive toxicity and acute toxicity. The current use of animal tests in food additive safety assessment is irredeemably flawed, because of species differences and because of the use of relatively short-term, high-dose regimes which attempt to evaluate the long-term effects of much smaller doses in humans. The challenge for those involved in the development of alternative test methods is to find ways of integrating approaches involving computer modelling, expert systems, quantitative structure-activity relationships and in vitro tests of various kinds into a relevant and reliable way of assessing potential hazard and risk to humans.

[Figure 6.2 is a flow diagram: PB-PK modelling and an in vitro gut system (establishing ADME, systemic effects, major active/detoxified metabolites, target organs and effects on gut flora) lead into genotoxicity and cytotoxicity screening (structural alerts, computer modelling, in vitro studies), then acute toxicity, neurotoxicity, reproductive toxicity and immunotoxicity, and finally chronic toxicity (in vitro cells, modified animal bioassays).]

Figure 6.2 A possible tiered-testing approach to the safety evaluation of food additives.
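The tiered flow of Figure 6.2 can be caricatured as a simple decision function: a substance advances stage by stage and is rejected at the first stage showing a toxic effect at relevant concentrations. The stage names follow the figure; the result format (a mapping of stage to a yes/no "toxic effect observed" flag) is an assumption made for illustration.

```python
# Sketch of the tiered-testing decision logic of Figure 6.2. Stages are
# evaluated in order; a toxic finding at any stage rejects the substance.
# The results data structure is invented for illustration.

STAGES = [
    "PB-PK modelling / in vitro gut system",
    "genotoxicity and cytotoxicity",
    "acute, neuro-, reproductive and immunotoxicity",
    "chronic toxicity",
]

def evaluate(results):
    """results maps stage name -> True if a toxic effect was observed."""
    for stage in STAGES:
        if results.get(stage, False):
            return f"rejected at: {stage}"
    return "no adverse effects at relevant concentrations"

print(evaluate({"genotoxicity and cytotoxicity": True}))
```

In practice each stage would itself be a battery of assays with defined endpoints, and "relevant concentrations" would come from the PB-PK stage rather than being a simple flag.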
References

Advisory Committee on Novel Foods and Processes (1991) Guidelines on the assessment of novel foods and processes. Department of Health Reports on Health and Social Subjects, 38. HMSO, London. Animal Procedures Committee (1994) Report to the Home Secretary on Regulatory Toxicity. Home Office Publications, London. Anon. (1993) Zbinden criticises the FDA. FRAME News, 33, 8. Ashby, J. (1992) Prediction of non-genotoxic carcinogenesis. Toxicology Letters, 64/65, 605-612. Ashby, J. and Purchase, I.F.H. (1992) Non-genotoxic carcinogens - an extension of the perspective provided by Perera. Environmental Health Perspectives, 98, 223-226. Ashby, J. and Purchase, I.F.H. (1993) Will all chemicals be carcinogenic to rodents when adequately evaluated? Mutagenesis, 8, 489-493. Ashby, J. and Tennant, R.W. (1988) Chemical structure: Salmonella mutagenicity and extent of carcinogenicity as indicators of genotoxic carcinogenesis among 222 chemicals tested in rodents by the US NCI/NTP. Mutation Research, 204, 17-115. Ashby, J., Brady, A., Elcombe, C.R. et al. (1994) Mechanistically-based human hazard assessment of peroxisome proliferator-induced hepatocarcinogenesis. Human and Experimental Toxicology, 13(S2). Atterwill, C.K., Simpson, M.G., Evans, R.J. et al. (1991) Alternative methods and their application in neurotoxicity testing. In: Balls, M., Bridges, J. and Southee, J. (eds) Animals and Alternatives in Toxicology: Present Status and Future Prospects. Macmillan Press, London, pp. 121-152. Atterwill, C.K., Davenport-Jones, J., Goonetilleke, S. et al. (1993) New models for the in vitro assessment of neurotoxicity in the nervous system and the preliminary validation stages of a 'tiered-test' model. Toxicology in Vitro, 7, 569-580. Atterwill, C.K., Bruinink, A., Drejer, J. et al. (1994) In vitro neurotoxicity testing. The report and recommendation of ECVAM workshop 3. Alternatives to Laboratory Animals, 22, 350-362. Balls, M. and Fentem, J.H.
(1992) The use of basal cytotoxicity and target organ toxicity tests in hazard identification and risk assessment. Alternatives to Laboratory Animals, 20, 368-388. Balls, M., Goldberg, A.M., Fentem, J.H. et al. (1995) The Three Rs: the way forward. Alternatives to Laboratory Animals, 23, 838-866. Barratt, M.D., Castell, J.V., Chamberlain, M. et al. (1995) The integrated use of alternative approaches for predicting toxic hazard. The report and recommendations of ECVAM workshop 8. Alternatives to Laboratory Animals, 23, 410-429. Blaauboer, B.J., Boobis, A.R., Castell, J.V. et al. (1994) The practical applicability of hepatocyte cultures in routine testing. The report and recommendations of ECVAM workshop 1. Alternatives to Laboratory Animals, 22, 231-241. Blaauboer, B.J., Bayliss, M.K., Castell, J.V. et al. (1996) The use of biokinetics and in vitro methods in toxicological risk evaluation. The report and recommendations of ECVAM workshop 15. Alternatives to Laboratory Animals, 24, 473-497. Brown, S.J., Raja, A.A. and Lewis, D.F.V. (1994) A comparison between COMPACT and Hazardexpert evaluations for 80 chemicals tested by the NTP/NCI rodent bioassay. Alternatives to Laboratory Animals, 22, 482-500. Christian, M.S. (1986) A review of multigeneration studies. Journal of the American College of Toxicology, 5, 161-180. Combes, R.D. (1992) The in vivo relevance of in vitro genotoxicity assays incorporating enzyme activation systems. In: Gibson, G.G. (ed.) Progress in Drug Metabolism, Vol. 13. Taylor and Francis, London, pp. 295-321. Combes, R.D. (1995) Regulatory genotoxicity testing: a critical appraisal. Alternatives to Laboratory Animals, 23, 352-379. Commission of the European Communities (1980) Guidelines for the safety assessment of food additives. Reports of the Scientific Committee for Food (Tenth series) (EUR 6892). CEC, Luxembourg, pp. 5-21.
Commission of the European Communities (1994) First report from the Commission to the Council and the European Parliament on the statistics on the number of animals used for experimental and other scientific purposes, COM (94), 195 final, 27 May. CEC, Brussels. Committee on Toxicity of Chemicals in Food, Consumer Products and the Environment (1982) Guidelines for the testing of chemicals for toxicity. Department of Health Reports on Health and Social Subjects, 27. HMSO, London. Crespi, C.L., Steimel, D.T., Aoyama, T. et al. (1990a) Stable expression of human cytochrome P4501A2 cDNA in a human lymphoblastoid cell line: role of the enzyme in the metabolic activation of aflatoxin B1. Molecular Carcinogenesis, 3, 5-8. Crespi, C.L., Langenbach, R. and Penman, B.W. (1990b) The development of a panel of human cell lines expressing specific human cytochrome P450 cDNAs. Progress in Clinical and Biological Research, 340B, 97-106. De Angelis, I., Hoogenboom, L.A.P., Huveneers-Oorsprong, M.B.M. et al. (1994) Established cell lines for safety assessment of food contaminants: differing furazolidone toxicity to V79, HEp-2 and Caco-2 cells. Food and Chemical Toxicology, 32, 481-488. Doetschman, T.C., Eistetter, H.R., Schmidt, W. and Kemler, R. (1985) The in vitro development of blastocyst-derived embryonic stem cell lines: formation of visceral yolk sac, blood islands and myocardium. Journal of Embryology and Experimental Morphology, 87, 27-45. European Economic Community (1976) Council Directive 76/768/EEC of 27 July 1976 on the approximation of the laws of the Member States relating to cosmetic products. Official Journal of the European Communities, L262, 169-172. European Economic Community (1986) Council Directive 86/609/EEC of 24 November 1986 on the approximation of laws, regulations and administrative provisions of the Member States regarding the protection of animals used for experimental and other scientific purposes.
Official Journal of the European Communities, L358, 1-29. European Economic Community (1993) Council Directive 93/35/EEC of 14 June 1993 amending for the sixth time Directive 76/768/EEC on the approximation of the laws of the Member States relating to cosmetic products. Official Journal of the European Communities, L151, 32-36. FLAIR (1991) Concerted Action no. 8. In vitro toxicological studies and real time analysis in food. In: Hoogenboom, L.A.P. and Broex, N.J.G. (eds) Proceedings of the workshops held in Berlin, 8-9 March 1991, and Swansea, 21-22 March 1991. Rikilt, Wageningen, The Netherlands. Food and Drug Administration (1993) Draft: Toxicological Principles for the Safety Assessment of Direct Food Additives and Color Additives Used in Food. US Food and Drug Administration Center for Food Safety and Applied Nutrition, Washington, DC. Francis, E.Z. and Kimmel, G.L. (1988) Proceedings of the workshop on one- vs two-generation reproductive effects studies. Journal of the American College of Toxicology, 7, 911-925. Fund for the Replacement of Animals in Medical Experiments (1983) Report of the FRAME Toxicity Committee. Alternatives to Laboratory Animals, 10, 4-43. Fund for the Replacement of Animals in Medical Experiments (1991) Animals and alternatives in toxicology: present status and future prospects (the Second Report of the FRAME Toxicity Committee). Alternatives to Laboratory Animals, 19, 116-138. Garle, M.J., Fentem, J.H. and Fry, J.R. (1994) In vitro cytotoxicity tests for the prediction of acute toxicity in vivo. Toxicology in Vitro, 8, 1303-1312. Gatehouse, D., Rowland, I.R., Wilcox, P. et al. (1990) Bacterial mutation assays. In: Kirkland, D.J. (ed.) Basic Mutagenicity Tests: UKEMS Recommended Procedures, UKEMS Subcommittee Report on Guidelines for Mutagenicity Testing, Part I (Revised). Cambridge University Press, Cambridge, pp. 13-61. Gold, L.S. and Stone, T.H.
(1993) Prediction of carcinogenicity from two versus four sex-species groups in the Carcinogenic Potency Database. Journal of Toxicology and Environmental Health, 39, 143-157. Gossen, J.A., De Leeuw, W.J.F., Tan, C.H.T. et al. (1989) Efficient rescue of integrated shuttle vectors from transgenic mice. A model for studying mutations in vivo. Proceedings of the National Academy of Sciences of the USA, 86, 7971-7975.
Gray, T.J.B. (1995) Safety evaluation in vitro: the changing environment. Meeting report. In Vitro Toxicology Society Newsletter, 1, 2-3. Guntakatta, M., Matthew, E.J. and Rundell, J.O. (1984) Development of a mouse embryo limb bud cell culture system for the estimation of chemical teratogenic potential. Teratogenesis, Carcinogenesis and Mutagenesis, 4, 349-364. Haseman, J.K., Bourbina, J. and Eustis, S.L. (1994) Effect of individual housing and other experimental design factors on tumor incidence in B6C3F1 mice. Fundamental and Applied Toxicology, 23, 44-52. Heuer, J., Graever, I., Pohl, I. and Spielmann, H. (1994) An in vitro embryotoxicity assay using the differentiation of embryonic mouse stem cells into haematopoietic cells. Toxicology in Vitro, 8, 585-587. Hiraga, K. and Fujii, T. (1981) Induction of tumours of the urinary system in F-344 rats by dietary administration of sodium o-phenyl phenate. Food and Cosmetic Toxicology, 19, 303-310. Home Office (1994) Statistics of Scientific Procedures on Living Animals, Great Britain. Cm 3516. HMSO, London. Home Office (1996) Statistics of Scientific Procedures on Living Animals, Great Britain. Cm 3516. HMSO, London. Johnson, E.M., Gorman, R.M., Gabel, B.E.G. and George, M.E. (1982) The Hydra attenuata system for detection of teratogenic hazards. Teratogenesis, Carcinogenesis and Mutagenesis, 2, 263-276. Joly, B., Fardel, O., Cecchelli, R. et al. (1995) Selective drug transport and P-glycoprotein activity in an in vitro blood-brain barrier model. Toxicology in Vitro, 9, 357-364. Kohler, S.W., Provost, G.S., Fieck, A. et al. (1991) Spectra of spontaneous and mutagen-induced mutations in the lacI gene in transgenic mice. Proceedings of the National Academy of Sciences of the USA, 88, 7958-7962. Lai, D.Y., Baetcke, K.P., Vu, V.T. et al. (1994) Evaluation of reduced protocols for carcinogenicity testing of chemicals: report of a joint EPA/NIEHS workshop. Regulatory Toxicology and Pharmacology, 19, 183-201.
Lefevre, P.A., Tinwell, G., Galloway, S.M. et al. (1994) Evaluation of the genetic toxicity of the peroxisome proliferator and carcinogen methyl clofenapate, including assays using Muta™Mouse and Big Blue™ transgenic mice. Human and Experimental Toxicology, 13, 764-775.
Lewis, D.F.V. (1994) Comparison between rodent carcinogenicity test results of 44 chemicals and a number of predictive systems. Regulatory Toxicology and Pharmacology, 20, 215-222.
Lewis, D.F.V., Moereels, H., Lake, B.G. et al. (1994) Molecular modelling of enzymes and receptors involved in carcinogenesis: QSARs and COMPACT-3D. Drug Metabolism Reviews, 26, 261-285.
Long, A. and Combes, R.D. (1995) Using DEREK to predict the activity of some carcinogens/mutagens found in foods. Toxicology in Vitro, 9, 563-569.
Mallett, A.K., Rowland, I.R., Bearne, C.A. et al. (1985) Metabolic adaptation of rat faecal microflora to cyclamate in vitro. Food and Chemical Toxicology, 23, 1029.
Moore, C.J. and Mepham, T.B. (1995) Transgenesis and animal welfare. Alternatives to Laboratory Animals, 23, 380-397.
Organization for Economic Co-operation and Development (1987) Acute oral toxicity. Guidelines for Testing of Chemicals No. 401. OECD, Paris.
Organization for Economic Co-operation and Development (1992) Acute oral toxicity: fixed dose method. Guidelines for Testing of Chemicals No. 420. OECD, Paris.
Organization for Economic Co-operation and Development (1994) Guidelines for Testing of Chemicals. OECD, Paris.
Reitz, R.H., Fox, T.R. and Quast, J.F. (1984) Biochemical factors involved in the effects of orthophenylphenol (OPP) and sodium orthophenylphenate (SOPP) on the urinary tract of male F-344 rats. Toxicology and Applied Pharmacology, 73, 345-349.
Renwick, A.G. (1993) Data-derived safety factors for the evaluation of food additives and environmental contaminants. Food Additives and Contaminants, 10, 275-305.
Rosenkranz, H.S. and Klopman, G.
(1993) Structural relationships between mutagenicity, maximum tolerated dose, and carcinogenicity in rodents. Environmental and Molecular Mutagenesis, 21, 193-206.
Rowland, I.R. (1988) Interactions of the gut microflora and host in toxicology. Toxicology and Pathology, 16, 147-150.
Rumney, C.J. and Rowland, I.R. (1992) In vivo and in vitro models of the human colonic flora. Critical Reviews in Food Science and Nutrition, 31, 299-331.
Russell, W.M.S. and Burch, R.L. (1959) The Principles of Humane Experimental Technique. Methuen, London.
Sanderson, D.M. and Earnshaw, C.G. (1991) Computer prediction of possible toxic action from chemical structure: the DEREK system. Human and Experimental Toxicology, 10, 261-273.
Scrable, H.J., Sapienza, C. and Cavenee, W.K. (1990) Genetic and epigenetic losses of heterozygosity in cancer predisposition and progression. Advances in Cancer Research, 54, 25-62.
Seibert, H., Gulden, M. and Voss, J.-U. (1994) An in vitro toxicity testing strategy for the classification and labelling of chemicals according to their potential acute lethal potency. Toxicology in Vitro, 8, 847-850.
Skouv, J., Rasmussen, E.S., Frandsen, H. et al. (1995) Activation of c-myc and c-fos protooncogenes by the reducing agent DTT (dithiothreitol) in human cells. Alternatives to Laboratory Animals, 23, 497-503.
Steele, C.E., New, D.A.T., Ashford, A. and Copping, G.P. (1983) Teratogenic action of hypolipidemic agents: an in vitro study with postimplantation rat embryos. Teratology, 28, 229-236.
Stokes, W.S. (1994) Alternative test method development at the National Toxicology Program. In: Proceedings of the Toxicology Forum Winter Meeting, Washington, DC. Toxicology Forum, Washington, DC, pp. 302-312.
Straughan, D.W. (1994) First European Commission report on statistics of animal use. Alternatives to Laboratory Animals, 22, 289-292.
Tennant, R.W. (1993) A perspective on nonmutagenic mechanisms in carcinogenesis. Environmental Health Perspectives Supplements, 101, 231-236.
Tennant, R.W., Spalding, J.W., Stasiewicz, S. and Ashby, J.
(1990) Prediction of the outcome of rodent carcinogenicity bioassays currently being conducted on 44 chemicals by the US NTP. Mutagenesis, 5, 3-14.
Walum, E., Balls, M., Bianchi, V. et al. (1992) ECITTS: an integrated approach to the application of in vitro test systems to the hazard assessment of chemicals. Alternatives to Laboratory Animals, 20, 406-428.
Wobus, A.M., Rohwedel, J., Maltsev, V. and Hescheler, J. (1995) In vitro cellular models for cardiac development and pharmacotoxicology. Toxicology in Vitro, 9, 477-488.
Yagi, T., Sato, M., Nishigori, C. and Takebe, H. (1994) Similarity in the molecular profile of mutations induced by UV light in shuttle vector plasmids propagated in mouse and human cells. Mutagenesis, 9, 73-77.
7 Molecular modelling

D.F.V. LEWIS
7.1 Introduction

Molecular graphics systems originated from the requirement to produce visual images of the three-dimensional structures obtained from X-ray crystallographic determinations of organic, inorganic and biological chemicals, including macromolecules. Recent advances in computer technology have facilitated the development of fully integrated systems which can both display colour representations of molecular structures in many different formats and enable these often striking images to be manipulated in real time. Furthermore, the powerful techniques of molecular modelling provide the means to construct and engineer even quite complex molecular structures according to the laws of physics and chemistry, such that their properties may be investigated. It is also possible to perform calculations which determine the distribution of electrons within a given molecule, such that its chemical reactivity can be assessed.

As far as biological applications are concerned, the integrated nature of current molecular modelling systems facilitates the exploration of interactions between, for example, small effector molecules and large biomolecular entities, such as proteins, receptors, antibodies, enzymes and nucleic acids, especially where these structures are known from X-ray crystallographic studies. The energies of these drug-receptor or enzyme-substrate interactions can be measured, following energy calculations on the various species involved, and this can lead to estimations of, for example, a compound's biological potency or specificity towards a particular biostructure, thus leading to an evaluation of the likely consequences of such interactions, which may be toxicological or pharmacological in nature. One of the aims of science is to provide rationalizations for the occurrence of a particular effect; for if we can find explanations of such phenomena, then there is a possibility of being prepared for future recurrences.
Molecular modelling enables one to find answers to such questions regarding why a particular biological effect is observed, and to give a rational explanation of the likely mechanism at the molecular, or even submolecular, level. Therefore, an understanding of the molecular mechanisms of toxicity is important in providing a means of predicting the possible biological effects of an unknown compound. The molecular
structure of the chemical thus enables one to make certain predictions regarding its potential toxicity, even towards different species, including humans. Moreover, protein modelling provides a powerful complement to, and is a natural consequence of, the information derived from the newly emerging techniques of molecular biology, such as site-directed mutagenesis, complementary DNA sequencing and heterologous expression systems. In fact, it would be better to use the term 'molecular structural studies' rather than 'molecular modelling' to describe the current state of the art, as this area is now steadily moving from the purely theoretical to being generally regarded as a parallel experimental technique alongside the physical methods of X-ray crystallography, NMR and other types of spectroscopy, and a number of other physicochemical measurement procedures. Molecular modelling systems utilize the known structural information from these physical techniques, in the form of chemical bond lengths, angles and atomic masses, and then use the fundamental laws of physics to carry out investigations on molecular conformations, molecular mechanics and molecular dynamics, and also facilitate the analysis of molecular structure by derivation of molecular geometric parameters. Furthermore, the formulation and solution of the Schrödinger wave equation for the hydrogen atom has led to the development of electronic structure calculations via molecular orbital (MO) methods, both semiempirical and ab initio, which produce energy values and other data that agree closely with experimentally determined parameters from photoelectron spectra and other physical techniques. Molecular modelling can, therefore, enable calculation of the three-dimensional structure of a given chemical, together with an estimation of its electronic distribution.
The technique, consequently, has many applications in both the chemical and biological sciences where such factors are important to chemical reactivity and biological activity. In the biological area, molecular modelling is closely associated with the complementary procedure of quantitative structure-activity relationship (QSAR) analysis. The biological activity of a chemical is due to its structure, i.e. the three-dimensional arrangement of the constituent atoms and their associated electrons (Lewis, 1995a). The goal of QSAR and molecular modelling is to find which combination of structural parameters is able to describe, as fully as possible, the particular form of bioactivity in question, including toxicity (Kubinyi, 1990). In fact, QSAR is able to utilize descriptors other than the purely molecular or electronic structural ones, such as physicochemical properties, substituent parameters and topological values (Lewis, 1990). As the techniques of molecular modelling enable the rapid calculation of molecular and electronic structural properties which may then be used to generate QSARs, the two fields overlap considerably.
Molecular modelling can, in theory, provide all of the data required to explain fully the structural reasons for bioactivity and toxicity. However, it is likely that, in many instances, only a relatively small amount of structural information is necessary; sometimes only the two-dimensional representation of the chemical structure is required and, under these circumstances, modelling is unnecessarily complicated. Nevertheless, there are many occasions where only the use of molecular modelling in three dimensions is able to explain a biological phenomenon and, frequently, this is due to the topographical match between the substrate and an enzyme, or between a ligand and a receptor protein: this situation is commonly referred to as 'molecular recognition', and appears to govern many biological phenomena, including antigen-antibody interactions, for example. Under such circumstances, the factors governing activity are likely to be stereo-electronic in nature and, therefore, a knowledge of the three-dimensional molecular structure, including its stereochemistry, is important, together with information relating to electronic distribution in the molecule and energies of the relevant MOs associated with reactivity (Lewis, 1995a). These are commonly known as frontier orbitals, namely, the highest occupied MO (HOMO) and lowest unoccupied MO (LUMO), which are generally associated with nucleophilic and electrophilic characteristics, respectively. 
7.2 Chemical safety evaluation and risk assessment

Because humankind's so-called civilized, technological society has progressed faster than the rate of natural evolution of the genes encoding proteins, the enzymes, which may have originally evolved to detoxify potentially harmful natural plant products, are now associated with the metabolism of other, essentially artificial, foreign compounds, such as environmental pollutants, industrial solvents, food pyrolysis products, pharmaceuticals, agrochemicals and food additives, sometimes with toxic consequences. It is now known that the cytochromes P450 are the key enzymes of phase I metabolism, with a pivotal role in chemical toxicity, being able both to detoxify and to activate foreign compounds (Ioannides et al., 1993a; Parke and Lewis, 1992). (The majority of foreign compounds are metabolized via phase I and phase II stages which involve different enzymes. Phase I is the first stage; it generally involves oxygenation and, primarily, this occurs via P450 isozymes.) Although a number of these enzymes have been associated with the metabolic activation of pro-carcinogens (P4501) and with the production of other reactive intermediates, such as oxygen radicals (P4502E), there are other mechanisms of toxicity which do not involve these enzymes. However, these are relatively minor and may not necessarily represent a
significant hazard to humans, although they have been shown to be involved in rodent carcinogenicity, such as via peroxisome proliferation (Lake, 1995) and β-lyase cleavage of cysteine conjugates (Dekant et al., 1989). The safety evaluation of chemicals destined for human exposure constitutes an area of intense scientific activity, and large organizations have been set up to regulate and test the increasing numbers of chemicals that are being introduced annually. In addition to regulation in the food, pharmaceutical, agrochemical and other industrial chemical areas, environmental protection agencies monitor the potential hazard, and assess the likely risk, of new chemicals in the environment (Wilbourn and Vainio, 1994). The risk assessment and hazard identification of chemicals have, for many years, involved the use of large numbers of laboratory-bred (and maintained) rodents and other animals as surrogates for humans. It is now well established that there are significant species differences in toxic activation pathways and protective mechanisms, which demonstrate the inadequacy and scientific invalidity of many animal studies (Gori, 1992). For example, there are several hundred known rodent carcinogens (Ames and Gold, 1990) but only about 20 known human carcinogens, although this discrepancy may be partially due to the different ways in which evaluation studies are conducted, to the genetic diversity of human populations, and to their differing habits and varying lifestyles. Due, in part, to the situation outlined above, and also because of the increasing costs of long-term animal studies, a number of short-term in vitro test procedures have been developed to provide an early indication of potential toxicity. Probably the best known, and most widely used, of these is the Ames test for bacterial mutagenicity, which assesses the genotoxic potential of a chemical by its effect on bacterial DNA.
Of course, the species differences are likely to be much greater between bacteria and humans than between other mammalia and Homo sapiens, although the constitutive DNA base pairs and polydeoxyribose will be, essentially, the same. Originally demonstrated to be highly accurate in predicting carcinogenicity, the Ames test was subsequently found to show significant shortcomings, mainly due to the lack of the necessary enzyme activation systems in bacteria which were known to be present in mammalia, particularly the P450 enzymes. Consequently, modifications to the Ames test were developed which utilized microsomal fractions from rat liver pretreated with known P450 inducers, such as phenobarbital or Aroclor 1254, in what is known as the S9 mix (Maron and Ames, 1983). The concordance between the Ames test and rodent carcinogenicity has been shown to be about 55%, which suggests that there may be a significant number of non-genotoxic mechanisms of carcinogenicity in rodent test species. In fact, a recent analysis has shown that bacterial mutagenicity
gives a correlation with rodent carcinogenicity that is as low as 39% in 105 compounds tested (Bogen, 1995). However, it should be remembered that the Ames test is designed to assess the ability of a chemical to cause mutations in bacterial systems rather than identify mammalian genotoxicity, although one might expect that there is some degree of similarity between the two endpoints. In order to rationalize the considerable discrepancies between the results of bacterial mutagenicity and rodent carcinogenicity, and between rodent and human carcinogenicity, it is necessary to consider both the different mechanisms for detoxication and activation between species, and also the effects of dietary and other factors which distinguish experimental animals from humans. Mammals, and humans in particular, are well equipped to deal with cellular damage caused by certain chemicals, by the use of a number of biological defence mechanisms against reactive oxygen species (ROS), including the utilization of glutathione (GSH) and other antioxidants (e.g. vitamins A and C), together with the enzymes superoxide dismutase (SOD) and catalase, which catalyse the conversion of superoxide (a common ROS produced from atmospheric oxygen) to peroxide (via SOD), and peroxide to water (via catalase). It has been shown, for example, that overexpression of SOD and catalase provides an extension of lifespan in Drosophila (Orr and Sohal, 1994). One possible reason why there are so many rodent carcinogens (>200) but relatively few human carcinogens (~20) could be that the smaller-sized rodent species (e.g. rat and mouse), which are commonly used for chemical safety evaluation, are highly susceptible to oxygen radical toxicity (Parke, 1987, 1994; Parke and Ioannides, 1990).
It is probably a combination of relatively low body weight, which is inversely proportional to rate of metabolism (Martin and Palumbi, 1993), and employment of cellular GSH as a radical scavenger, which leads to the greater sensitivity of rodent species towards ROS-generating chemicals that are likely to represent a significantly lower risk to humans, who use epoxide hydrolase (EH) rather than GSH as a response to ROS (Lorenz et al., 1984). Depletion of GSH levels is likely to give rise to ROS-mediated toxicity, including carcinogenesis, although there are many other dietary factors involved (Parke and Ioannides, 1994). For example, folic acid, N-acetylcysteine (which is a precursor of GSH) and vitamins A, C and E have all been shown to act as protective agents against potentially carcinogenic species (Ioannides and Lewis, 1995; Parke and Ioannides, 1994). In fact, β-carotene (the precursor of vitamin A) exhibits a very high propensity for epoxide formation (Traylor and Xu, 1988) and, consequently, is likely to prevent oxidative damage to tissue from ROS. Furthermore, agents such as diallyl sulphide (present in garlic), which inhibit P4502E (a potential source of ROS), are likely to represent important dietary constituents for protection against ROS-mediated toxicity.
Moreover, the largely restricted and controlled diet of experimental animals, coupled with their inbred susceptibility to tumour formation, is understandably going to give rise to a higher incidence of cancer compared with the generally unrestricted diet and genetic diversity of humans, who are not normally exposed to potentially harmful chemicals at dose levels equivalent to the maximum tolerated doses used in animal studies. Additionally, there is a lack of correlation between the two rodent test species used in carcinogenicity bioassays, and also between sexes of the same rodent species (DiCarlo, 1984). The currently accepted definition of a carcinogen involves either clear or some evidence of neoplasia in any one of the four segments (two species, two sexes) of the rodent bioassay, even without dose dependency, and at any tissue site. This protocol tends to give rise to a relatively large number of chemicals being regarded as positive in the rodent assay, when there may be no evidence of carcinogenicity from human epidemiological studies. A further difficulty is the assumption that an equivocal result in the rodent carcinogenicity bioassay represents a negative response, whereas there is a significant number of equivocal human carcinogens resulting either from inadequate studies or opposing results in two separate evaluations (Ennever et al., 1987).
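The concordance figures quoted in this section (55%, 39%) are simple agreement rates between two binary classifications. As a minimal sketch, such figures can be computed from a 2 x 2 contingency table comparing a short-term test against a reference bioassay outcome; the counts used below are illustrative only and are not taken from any of the studies cited:

```python
def concordance_stats(tp, fp, fn, tn):
    """Agreement metrics for a 2x2 table: tp/fn are carcinogens that the
    short-term test did/did not flag; tn/fp are non-carcinogens that it
    did/did not clear."""
    total = tp + fp + fn + tn
    return {
        "concordance": (tp + tn) / total,   # overall agreement rate
        "sensitivity": tp / (tp + fn),      # carcinogens correctly flagged
        "specificity": tn / (tn + fp),      # non-carcinogens correctly cleared
    }

# Hypothetical counts for a set of 105 compounds (illustrative only)
stats = concordance_stats(tp=25, fp=30, fn=34, tn=16)
print(stats)
```

Note that a high concordance can mask a poor sensitivity or specificity when positives and negatives are unevenly represented, which is why the 50:50 sample sets discussed later in the chapter are a more demanding benchmark.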
7.3 The COMPACT approach

Against this backdrop of potential difficulties associated with carcinogenicity studies, various groups world-wide have developed a number of alternatives to the rodent bioassay, some of which use computer technology (Lewis, 1992a; Phillips and Anderson, 1993; Wang and Milne, 1993; Waters et al., 1994). The major ones currently being investigated include ADAPT, CASE, COMPACT, DEREK, HazardExpert, OncoLogic, TOPKAT, and a QSAR system developed by Benigni and co-workers to predict ke values (where ke is the rate constant for electron uptake), which are related to electrophilic potential. Although there is some degree of molecular modelling involved in production of the TOPKAT training set and in other QSAR techniques, the only procedure which utilizes molecular modelling entirely is COMPACT (Lewis et al., 1995a; Ioannides et al., 1995; Parke et al., 1988, 1990) and, unlike most other systems, it does not require the purchase of specific software. Consequently, anyone who has access to any of the standard molecular modelling systems (e.g. Sybyl, Insight/Discover, COSMIC, HyperChem, MolIdea, Chem-X, MacroModel, CAChe, CHARMm/Quanta and Nemesis) can carry out COMPACT evaluations, and details of the methodology and results have been extensively published (Lewis et al., 1993, 1995a; Lewis, 1992a,b, 1994, 1995a; Brown et al., 1994). COMPACT (computer-optimized molecular
parametric analysis of chemical toxicity) is, therefore, relatively easy to perform and fairly straightforward to execute, depending on the type of hardware and software used to generate the parameters required. Essentially, COMPACT is a form of QSAR or discriminant analysis which is based on the structural requirements for chemicals to exhibit specificity for one or more of the cytochromes P450 associated with the metabolic activation of carcinogens (which is discussed in the following section). The technique has been modified recently such that MO calculations via the AM1 procedure can be employed, as this is more readily available (in MOPAC) than the CNDO/2 method originally used. In this case, the expression for calculation of the COMPACT radius (CR) (the COMPACT radius is a combination of molecular shape/planarity and electronic activation energy parameters which provides an indication of a chemical's likely metabolic activation via P4501 interactions) is given by:

CR = √[(area/depth² - 10)² + (ΔE - 4)²]

where area/depth² is the quotient of molecular area and the square of molecular depth, and ΔE is the difference between the frontier orbital energies, i.e. ΔE = E(LUMO) - E(HOMO). Based on extensive studies of known P450 substrates and inducers, it has been found that a CR value of 8 (± 0.5) represents the boundary defining P4501 specificity, whereas CR values greater than this demarcation line tend to be associated with chemicals exhibiting specificity for other P450s. Furthermore, there is a good correlation (r = 0.92) between the magnitude of CR and P4501 induction potential for structurally diverse P4501 inducers. The relevant data used to derive this QSAR are shown in Table 7.1 and a plot of the correlation is presented as Figure 7.1.
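The CR expression and the 8 ± 0.5 demarcation described above translate directly into a short computation. A minimal sketch follows; the area, depth and orbital-energy inputs at the bottom are illustrative values, not measurements from any modelling package:

```python
import math

def compact_radius(area, depth, e_homo, e_lumo):
    """COMPACT radius: distance of a compound from the reference point
    (area/depth^2 = 10, delta_E = 4 eV) in the planarity/activation-energy
    plane, as defined in the text."""
    planarity = area / depth ** 2      # molecular area over depth squared
    delta_e = e_lumo - e_homo          # frontier orbital gap (eV)
    return math.sqrt((planarity - 10.0) ** 2 + (delta_e - 4.0) ** 2)

def p450_specificity(cr, boundary=8.0, width=0.5):
    """Classify by the CR demarcation line (8 +/- 0.5) given in the text."""
    if cr < boundary - width:
        return "P4501 (potential metabolic activation)"
    if cr > boundary + width:
        return "other P450s"
    return "equivocal"

# Illustrative inputs only: a flat molecule with a moderate frontier gap
cr = compact_radius(area=120.0, depth=4.0, e_homo=-9.0, e_lumo=-1.5)
print(round(cr, 3), "->", p450_specificity(cr))
```

For these inputs the planarity term is 7.5 and the gap 7.5 eV, giving a CR of about 4.3, which falls on the P4501 side of the demarcation line.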
For conformationally flexible molecules there are potential difficulties in the evaluation of overall molecular planarity (area/depth²), as different results will be obtained depending on the structure used to calculate the relevant dimensional parameters. To overcome this problem, it is suggested that the values for the two extremes of planarity (i.e. an upper and lower limit of area/depth²) are calculated, such that estimates of P450 specificity for each conformer can be made. A mean of these two limiting values can be taken as a guide, but it is possible that different conformations of the same molecule can fit the active sites of more than one type of P450. (The complementary fit between a substrate and enzyme can be visualized as a key fitting exactly into a lock. In reality, however, the fitting process tends to allow some degree of conformational flexibility in both substrate and enzyme.) It is generally found that compounds of this nature (i.e. conformationally flexible and of mixed P450 specificity) may be only weakly carcinogenic because of these factors, and cimetidine represents one particular example of a chemical which possesses such characteristics. Most molecular modelling systems are able to perform automated conformational searches, although this can be rather time-consuming where there are many rotatable bonds in the structure. Fortunately, the ΔE and molecular diameter values do not appear to vary significantly for different conformations, and it is usually a relatively simple procedure to ascertain minimum energy conformers and their extrema. Many molecular modelling packages include a facility for the computation of solvent-accessible molecular surfaces via the Connolly method, and the volumes of such surfaces can be used to estimate molecular diameters by a straightforward geometric transformation.

Table 7.1 COMPACT and P4501 induction data for six chemicals

Compound                  COMPACT radius^a    Log induction potential^b
1. 3-Methylcholanthrene   3.7534              2.6021
2. β-Naphthylamine        4.8207              1.8751
3. 2-Aminofluorene        6.2591              1.2041
4. α-Naphthylamine        6.5377              1.0000
5. Benoxaprofen           8.4532              0.3010
6. Cimetidine             9.1337              -0.0969

^a COMPACT radius is given by the expression CR = √[(area/depth² - 10)² + (ΔE - 4)²], where area/depth² is the molecular planarity and ΔE is the activation energy, E(LUMO) - E(HOMO).
^b Induction potential is the ratio of P4501 induction and dose concentration (Ioannides and Parke, 1993). Figure 7.1 shows a plot of these data according to the equation: log induction = -0.48 CR (± 0.02) + 4.27; n = 6; s = 0.1157; r = 0.995; F = 274.78, where n = number of observations, s = standard error, r = correlation coefficient, and F = variance ratio.

Figure 7.1 A plot showing the correlation between the COMPACT radius and P4501 induction potential for six chemicals (data shown in Table 7.1 and points numbered accordingly). [Axes: log induction potential versus COMPACT radius.]

Individually, the COMPACT parameters area/depth² and ΔE have been found to correlate with P4501 induction and aryl hydrocarbon (Ah) receptor binding affinity in polychlorinated biphenyls (Parke et al., 1986) and with P4501 specificity in methylene dioxybenzenes (Lewis et al., 1995b), whereas ΔE values correlate with the carcinogenicity of nitrosamines (Parke et al., 1988) and with the mutagenicity of structurally diverse cooked food mutagens (Lewis et al., 1995c). The frontier orbital energies, which make up the ΔE parameter, can also be shown to explain P4501 potency differences in coumarin derivatives (Lewis et al., 1994a), and can also rationalize the carcinogenicity and mutagenicity variations of methyl benzanthracenes (Lewis and Parke, 1995). These and many other related QSAR analyses have been collated in a recent explanatory review on the importance of frontier orbitals in toxicity (Lewis, 1995a). COMPACT has been validated against rodent carcinogenicity data produced by the National Toxicology Program (NTP) of the US National Cancer Institute (NCI) and has been shown to produce concordances as high as 93% for chemicals which are positive in the rodent test (Lewis et al., 1993). However, when sample sets of roughly 50:50 positives and negatives are evaluated, the concordances with COMPACT are somewhat lower, at around 72% (Brown et al., 1994; Lewis et al., 1995a). Nevertheless, consideration of structural alert by the use of the HazardExpert procedure (Smithing and Darvas, 1992) tends to improve the overall concordances to about 86% (Lewis, 1994b; Lewis et al., 1995a).
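The regression quoted in the footnote to Table 7.1 can be reproduced by ordinary least squares on the six (CR, log induction potential) pairs. A sketch using only the tabulated values:

```python
import math

# (COMPACT radius, log induction potential) pairs from Table 7.1
data = [(3.7534, 2.6021), (4.8207, 1.8751), (6.2591, 1.2041),
        (6.5377, 1.0000), (8.4532, 0.3010), (9.1337, -0.0969)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
sxx = sum((x - mx) ** 2 for x, _ in data)          # sum of squares in CR
syy = sum((y - my) ** 2 for _, y in data)          # sum of squares in log induction
sxy = sum((x - mx) * (y - my) for x, y in data)    # cross product

slope = sxy / sxx                  # about -0.48
intercept = my - slope * mx        # about 4.27
r = sxy / math.sqrt(sxx * syy)     # about -0.995

print(f"log induction = {slope:.2f} CR + {intercept:.2f}, r = {r:.3f}")
```

The fitted slope, intercept and correlation coefficient agree with the values given in the table footnote (-0.48, 4.27 and |r| = 0.995).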
Although it is sometimes difficult to make decisions regarding equivocal results, there is an indication that many equivocal rodent carcinogens cluster about the COMPACT curve and, therefore, it is possible to describe an equivocal region of the COMPACT plot as a curved area centred on the actual demarcation line, which is probably a more realistic approach, as the true situation is likely to be a continuum, as has been outlined by Ashby (1994). In addition to the evaluation of P4501 specificity, COMPACT is also able to give an indication of potential toxicity mediated by P4502E by the use of a third structural parameter, molecular diameter (Lewis et al., 1994b, 1995a). Figure 7.2 shows a COMPACT-3D plot of the three parameters area/depth², ΔE and diameter for over 40 chemicals, where it can be seen that points corresponding to different compounds tend to form clusters associated with specificities for different P450s. It was found that inclusion of the third parameter (molecular diameter) to evaluate P4502E specificity and activation improves the concordance between COMPACT and rodent carcinogenicity from 64% (for P4501) to 72% (for P4501 and P4502E activation) (Lewis et al., 1995a). However, in order to provide an
improved concordance with carcinogenicity, it is recommended that the hydrophobicity parameter, log P (the logarithm of the octan-1-ol/water partition coefficient), is used in conjunction with COMPACT. It is clear that many of the false positives predicted by COMPACT are likely to be readily metabolized via phase II conjugation, as they are relatively polar molecules. Consequently, the use of log P as a screen will eliminate the majority of these; some type of structural alert system should also be used as a complement to COMPACT, as the former will identify direct-acting carcinogens, for example, which would not normally be picked up by COMPACT because this is designed to identify metabolic activation via P450. However, in order to conduct analyses with several descriptor variables as outlined above, it would be necessary to utilize either principal components analysis (PCA) or neural network systems. To date, over 2000 chemicals have been investigated using COMPACT, including about 400 food flavours and related compounds. The results of COMPACT evaluations on terpenoids (Lewis et al., 1994c) and cooked food mutagens (Lewis et al., 1995c) have been published recently, and there are plans to publish additional findings in the near future. Furthermore, the COMPACT research programme has been expanded to encompass molecular modelling of the P450 enzymes themselves (Lewis, 1995b), together with investigations of other proteins associated with P450 induction, peroxisome proliferation (Lewis and Lake, 1993) and oestrogenic responses (Lewis et al., 1995d).

Figure 7.2 A COMPACT-3D plot of the three descriptor variables, area/depth² (a/d²), ΔE and molecular diameter, for over 40 chemicals of known P450 specificity. The 3D plot shows that the COMPACT descriptors can differentiate between chemicals exhibiting specificity towards P4501 (▲), P4502E (+), and other P450s (•) generally associated with detoxifying pathways, such as P4502B and P4502C.
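The screening strategy described above (a COMPACT prediction, a log P filter to remove polar false positives that would be cleared by phase II conjugation, and a structural-alert check for direct-acting carcinogens) amounts to a simple decision cascade. A hedged sketch follows; the log P cut-off and the decision labels are placeholders for illustration, not values given in the text:

```python
def tiered_screen(compact_positive, log_p, has_structural_alert,
                  log_p_cutoff=1.0):
    """Combine the three screens described in the text.

    compact_positive: COMPACT predicts P4501/P4502E-mediated activation
    log_p: octan-1-ol/water partition coefficient
    has_structural_alert: flagged by a structural-alert system
    log_p_cutoff: hypothetical polarity threshold (placeholder value)
    """
    if has_structural_alert:
        # Direct-acting carcinogens are caught by the alert system, not by
        # COMPACT, which models metabolic activation only.
        return "potential direct-acting carcinogen"
    if compact_positive and log_p < log_p_cutoff:
        # Polar compounds are likely to be cleared by phase II conjugation.
        return "probable false positive (readily conjugated)"
    if compact_positive:
        return "potential carcinogen via P450 activation"
    return "no alert"

print(tiered_screen(compact_positive=True, log_p=3.2,
                    has_structural_alert=False))
```

As the text notes, combining several descriptor variables more formally would call for principal components analysis or a neural network rather than a fixed cascade of this kind.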
7.4 Cytochromes P450 and their role in metabolic activation

The cytochromes P450 are ubiquitous enzymes of phase I metabolism that are present in most forms of life, and are thought to have evolved from an ancestral haemoprotein about 3.5 billion years ago (Nelson et al., 1993). These enzymes metabolize over 90% of all foreign compounds known, and play a pivotal role in toxicity (Lewis, 1996). Consequently, any predictive method needs to take into account the crucial importance of P450-mediated metabolism in the toxicology of xenobiotics. As far as the microsomal systems are concerned, there is tight coupling between the P450s and many of the phase II enzymes, such as epoxide hydrolase (EH) and the conjugases, on the endoplasmic reticular membrane, which is an essentially hydrophobic environment. Consequently, the lipophilic parameters π and log P are important measures of potential membrane binding and interaction with various P450s which, when combined with COMPACT, tend to give a good overall correlation with carcinogenicity; although, as mentioned previously, structural alert should also be considered. In mammals, the main organ of metabolism is the liver, and it is perhaps not surprising that this contains by far the largest proportion of P450 relative to other tissues. In particular, P450s are concentrated within the membrane of the endoplasmic reticulum (ER) of hepatocytes, where they appear to exist as macromolecular clusters or hetero-oligomers comprising up to about six or more P450s surrounding a central reductase flavoprotein, which supplies the two reducing equivalents required for P450-catalysed reactions. These macromolecular complexes (i.e. several P450s and reductase) 'trawl' the ER membrane for potential substrates of the optimum molecular structure to fit the appropriate complementary active site of the relevant P450 enzyme.
The hydrophobic nature of the phospholipid bilayer and other components which constitute the ER membrane ensures that the chemical's partition coefficient between lipid and aqueous phases (roughly equivalent to the octan-1-ol/water partition coefficient) is an important determinant of absorption, interaction with P450 and subsequent metabolism. On binding, the orientation of P450 in the membrane phospholipid is altered from about 0° to 45° or more, possibly due to the change in the centre of gravity of the P450 and its overall conformation. This tilting effect brings about the interaction with reductase via electrostatic ion-pairing such that the transfer of a single electron occurs, leaving the haem
iron of P450 primed for oxygenation. Thus, the binding of an appropriate substrate 'triggers' the entire P450 catalytic cycle for monooxygenase activity, which can be represented by the equation:

    RH + O2 + 2H+ + 2e-  --(P450)-->  ROH + H2O

where RH is the substrate and ROH the metabolite. Substrate binding to P450 brings about a desolvation of the enzyme's active site, and the resultant entropy change makes a major contribution to the binding free energy and overall thermodynamics of the P450 cycle. This key role of substrate hydrophobicity explains, at least to some extent, the success of the Hansch approach as applied to P450-mediated activity and substrate binding, where the log P parameter has been shown to be important (Hansch and Zhang, 1993). Furthermore, the loss of protein-bound water molecules, including one which ligates the haem iron at the distal site, brings about a change in the ferric iron spin-state equilibrium in favour of the high-spin form. Reduction of ferric P450 to the ferrous state, which remains high-spin, considerably enhances the enzyme's affinity for oxygen binding, which probably occurs via a spin-spin coupling interaction, as molecular oxygen is naturally in the triplet high-spin state. It is generally accepted that the cysteinate fifth ligand in P450 pushes the equilibrium between Fe(II)O2 and Fe(III)O2- towards the ferric-superoxide form, especially as superoxide can be detected in P450-mediated oxygenations under certain circumstances, and there is firm evidence for the presence of superoxide in P450 under catalytic conditions. The highly reactive superoxide ion has a high affinity for protons, and the resultant radical species O2H• is likely to be stabilized by the ferrihaem of P450 at this stage of the cycle. However, following the second reduction, the complex rapidly decomposes to form the oxygenated substrate and a molecule of water, leaving the haem iron in the ferric state, so that the catalytic cycle can begin again.
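The overall monooxygenase stoichiometry above is easy to sanity-check mechanically. A minimal sketch, in which 'R' is a pseudo-element standing for the rest of the substrate and 'q' tracks net charge, verifies that both mass and charge balance:

```python
from collections import Counter

# Bookkeeping check of the P450 monooxygenase stoichiometry quoted above:
#   RH + O2 + 2H+ + 2e-  ->  ROH + H2O
# 'R' is a pseudo-element for the rest of the substrate; 'q' is net charge.
SPECIES = {
    "RH":  {"R": 1, "H": 1},
    "O2":  {"O": 2},
    "H+":  {"H": 1, "q": 1},
    "e-":  {"q": -1},
    "ROH": {"R": 1, "O": 1, "H": 1},
    "H2O": {"H": 2, "O": 1},
}

def totals(side):
    """Sum atom and charge counts over (species, stoichiometric coeff) pairs."""
    out = Counter()
    for name, coeff in side:
        for el, n in SPECIES[name].items():
            out[el] += coeff * n
    return out

def balanced(lhs, rhs):
    """Compare counts, treating missing keys as zero."""
    return all(lhs[k] == rhs[k] for k in set(lhs) | set(rhs))

left = totals([("RH", 1), ("O2", 1), ("H+", 2), ("e-", 2)])
right = totals([("ROH", 1), ("H2O", 1)])
assert balanced(left, right)  # mass and charge both balance
```

The two reducing equivalents (2e-) supplied by the reductase flavoprotein appear explicitly on the left-hand side, cancelling the charge of the two protons.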
The mechanistic details and active oxygen species in the later stages of the P450 catalytic cycle are incompletely understood, but there is evidence for the presence of hydrogen peroxide, which could homolytically cleave to form hydroxyl radicals, although an iron oxene intermediate has also been postulated. Nevertheless, because of the diversity of P450-catalysed reactions, it would appear that substrate oxygenation via either a positive or a negative oxygen species can occur, although an oxygen radical mechanism could also provide an explanation of the experimental findings. It is likely that both of the substrate frontier orbitals will be important for P450 oxygenations, as these will relate to electrophilicity and nucleophilicity. Furthermore, in the case of epoxidation, for example, it is known that the rate constant for the formation of alkene epoxides (a P450-mediated reaction) is proportional to the ionization energy (Traylor and Xu, 1988), which is equivalent to the energy of the highest occupied frontier orbital,
E(HOMO). However, when the epoxide (which is highly electrophilic) interacts with DNA, the nucleophilic DNA bases (especially guanine) will donate electrons to the lowest unoccupied frontier orbital of the epoxide. Consequently, both frontier orbital energies are important and such an analysis provides some rationalization for the use of the ΔE parameter, which is the difference between the frontier orbital energies, in COMPACT (Lewis, 1995a). There is clear evidence for the role of certain P450 enzymes in the metabolic activation of pro-carcinogens (Guengerich, 1988, 1994; Guengerich and Shimada, 1991), especially those of P450 family 1 and subfamily 2E (Table 7.2). In fact, it is known that human forms of these isozymes activate many of the chemicals which have been shown to be carcinogenic in rodents (Gonzalez and Gelboin, 1994). In order to understand these findings more completely, three-dimensional models of many mammalian P450s
Table 7.2 Summary of inducible families of hepatic cytochrome P450 proteins involved in xenobiotic metabolism

Family 1, subfamily A. Substrates: essentially planar molecules (PAHs). Role: metabolism almost always leads to the formation of reactive intermediates.

Family 2, subfamily A. Substrates: endogenous steroids. Role: some orthologues are involved in coumarin metabolism.

Family 2, subfamily B. Substrates: non-planar molecules of broad structural classes. Role: metabolism leads primarily to deactivation, with a few exceptions, e.g. cyclophosphamide.

Family 2, subfamily C. Substrates: non-planar molecules of which some are carboxylic acids and amides. Role: shows genetic polymorphism in humans but is not generally involved in bioactivation.

Family 2, subfamily D. Substrates: basic molecules containing an ionizable nitrogen atom about 5-7 Å from the site of metabolism. Role: shows genetic polymorphism in humans but is not involved in bioactivation.

Family 2, subfamily E. Substrates: small-sized molecules of broad structural classes. Role: metabolizes many low molecular weight organic solvents; involved in the activation of short-chain dialkylnitrosamines and halothanes; acts as an oxygen radical generator.

Family 3, subfamily A. Substrates: large-sized molecules of broad structural classes. Role: deactivates many high molecular weight drugs; participates in the activation of aflatoxin B1 and 6-aminochrysene.

Family 4, subfamily A. Substrates: long-chain fatty acids. Role: few exogenous substrates, e.g. valproic acid and MEHP; it is, however, readily induced by peroxisomal proliferators (mostly propionic and phenoxy acid derivatives), which are epigenetic carcinogens in rodents.

PAH, polycyclic aromatic hydrocarbons; MEHP, mono-(2-ethyl)hexylphthalate.
(including human isoforms) have been generated, based on a novel multiple sequence alignment with a bacterial P450 of known crystal structure (Lewis, 1995b). To date, each of these enzyme models provides rationalizations of known P450 substrate specificity (Ioannides et al., 1993b; Lewis et al., 1994d) and, furthermore, gives confirmation of the original COMPACT approach. For example, Figure 7.3 shows how the substrate caffeine can fit into the active site of human P4501A2, which metabolizes the chemical. Although similar, the rat form of this enzyme displays a number of key differences which help to explain the subtle variations in caffeine metabolism between the two species, and similar findings have been observed with other substrates, including carcinogens and mutagens. Likewise, there are species differences in the P4502A and P4502E isozymes which explain the sometimes important substrate metabolism variations in these enzyme subfamilies, both of which are known to activate carcinogens. Consequently,
Figure 7.3 The putative active site of P4501A2 showing how the specific marker substrate, caffeine, can be orientated via hydrogen-bonding interactions (—) with key amino acid residues such that metabolism in the known position is possible. In addition to the two hydrogen bond donor residues, Thr87 and Asn82, complementary interactions between the coplanar aromatic rings of Phe181 and Tyr437, and those of the relatively planar substrate, assist in defining the specificity of the enzyme. The haem group with a bound oxygen atom is shown at the bottom of the figure, with the conserved threonine residue, Thr268, which mediates in the oxygenation mechanism, positioned between the haem and substrate.
the use of molecular modelling to derive three-dimensional structural models of P450 enzymes can both aid chemical safety evaluation and assist in explaining the many examples of species differences in metabolism.
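The COMPACT approach referred to throughout this section combines molecular shape (planarity) with the frontier-orbital gap ΔE = E(LUMO) − E(HOMO). The toy screen below shows the form such a two-parameter filter takes; the cut-off values and the example numbers are invented for illustration and are NOT the published COMPACT thresholds:

```python
# A toy two-parameter screen in the spirit of COMPACT, combining a
# molecular planarity index (area/depth^2) with the frontier-orbital gap
# dE = E(LUMO) - E(HOMO). Cut-offs are invented, not published values.
def compact_like_flag(area, depth, e_homo, e_lumo,
                      shape_cut=8.0, gap_cut=9.0):
    shape = area / depth ** 2   # planar molecules score high
    gap = e_lumo - e_homo       # small gap -> chemically reactive
    return shape > shape_cut and gap < gap_cut

# A flat, conjugated, PAH-like molecule (hypothetical values, A^2/A/eV):
assert compact_like_flag(area=90.0, depth=1.7, e_homo=-8.2, e_lumo=-0.4)
# A globular, saturated molecule:
assert not compact_like_flag(area=60.0, depth=4.0, e_homo=-10.5, e_lumo=1.0)
```

The design point is simply that planar, reactive molecules resemble CYP1-family substrates; a production screen would take both descriptors from MO calculations rather than hand-entered values.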
7.5 Protein modelling

In addition to the generation of P450 models, it is possible to apply the same techniques of protein modelling to investigate the interactions between other enzymes and their substrates, and between receptors and their ligands, some of which have a bearing on toxicity evaluation studies. For example, the enzyme cysteine conjugate β-lyase has been shown to catalyse a crucial step in the toxic activation of halogenated alkenes (Dekant et al., 1989). Apparently, the initial P4502E-mediated phase I metabolism of these chemicals leads to cysteine conjugation, which would normally be expected to represent a detoxification pathway. However, in certain tissues the β-lyase enzyme is able to cleave these cysteine conjugates to form reactive intermediates, such as acyl halides, which give rise to toxicity, including carcinogenicity (Dekant et al., 1989). It has been possible to model both rat and human β-lyase following protein sequence alignment with homologous proteins for which the crystal structures are known. Although this work is at an early stage, it is possible to demonstrate how the known cysteine conjugates can fit the β-lyase active site (Figure 7.4) such that metabolic activation to the toxic species can occur. It is hoped that similar techniques will facilitate the modelling of other toxicologically important enzymes, such as EH. It is known that certain members of the steroid hormone receptor superfamily have relevance to the potentially toxic effects of a number of chemicals, such as peroxisome proliferators and oestrogenic compounds. Consequently, modelling of the relevant receptor proteins can enable predictions to be made regarding the potency of oestrogenic chemicals and those associated with peroxisome proliferation.
This is of relevance to food chemical risk analysis, as it has been shown recently that a number of flavonoids (Obermeier et al., 1995) and phthalates are oestrogenic, whereas terpenoids found in natural food flavours, and a variety of packaging migrants, such as phthalate and adipate esters, possess peroxisome-proliferating activities (Walker, 1993; Conning, 1995). Because of our interest in these areas, we have constructed three-dimensional models of both the mouse and human peroxisome proliferator-activated receptors (Lewis and Lake, 1993; Lewis et al., 1994b) and of the human oestrogen receptor (Lewis et al., 1995d). In the former, it has been found that calculated interaction energies between structurally diverse peroxisome proliferators and the ligand-binding domain of the mouse peroxisome proliferator-activated receptor (PPAR) show a parallelism with relative potency for peroxisome
Figure 7.4 The putative active site of human cysteine conjugate β-lyase showing a possible mode of interaction between a typical substrate (in this case, the cysteine conjugate of dichloroethene) and a number of key complementary amino acid residues which may be involved in substrate binding and metabolic activation of the cysteine conjugate. An intermediate stage in the reaction is depicted, where the cysteine conjugate forms a covalent complex with the bound cofactor, pyridoxal monophosphate (PMP).
proliferation (Lewis and Lake, 1993). Furthermore, other epigenetic carcinogens may be identifiable via their calculated binding interaction energies with the Ah receptor, which has been sequenced recently. There is considerable evidence demonstrating the involvement of the Ah receptor in carcinogenesis, and it is also known that ligand binding is associated with induction of P4501 (Ioannides and Parke, 1993; Hankinson, 1995). Therefore, modelling of the Ah receptor may assist in the screening of non-genotoxic, epigenetic carcinogens, such as 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), which are potent inducers of P4501 and also exhibit high binding affinities towards the Ah receptor. Other proteins of potential interest to the food industry where molecular modelling may prove to be important in assessing a number of different types of interactions include serum albumins, myosin and other muscle proteins, milk proteins such as α-lactalbumin and β-lactoglobulin, and antibodies; some of these have either been modelled or have had their structures determined by X-ray crystallography, thus facilitating investigation of their possible roles in various food processes, and in other areas of relevance to the food industry.
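The parallelism noted above between calculated receptor interaction energies and relative biological potency is, in effect, a rank correlation. A minimal sketch, with invented energies and potencies (not the published PPAR data):

```python
# Hedged sketch: expressing an energy/potency parallelism as a Spearman
# rank correlation. Energies and potencies below are invented.
def spearman_rho(xs, ys):
    """Spearman rank correlation; assumes no ties (true of this toy data)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# More negative interaction energy (tighter binding) pairs with higher
# relative potency, giving a perfect inverse rank correlation here.
energy = [-42.0, -35.5, -30.1, -25.8]   # kcal/mol, invented
potency = [100.0, 60.0, 25.0, 10.0]     # relative potency, invented
rho = spearman_rho(energy, potency)
```

A rank statistic is deliberately modest: it claims only that tighter-binding ligands tend to be more potent, without asserting a linear energy-potency relationship.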
7.6 Quantitative structure-activity relationships
QSARs constitute a vast area of scientific activity covering many different forms of biological activities and properties, including toxicity of various types (Hermens and Opperhuizen, 1991; Borman, 1990; Hansch, 1993; Purchase et al., 1990; Kubinyi, 1990; Ghauri et al., 1992; Roberts and Basketter, 1990; Dearden et al., 1994). Although not essential, a molecular modelling capability facilitates QSAR studies (Ramiller, 1984; Livingstone, 1994), especially where there are relatively transparent interfaces between separate modules for QSAR analysis and structure calculation within the same fully integrated software package, which can include access to relevant databases, such as CLOGP, Chemical Abstracts and the Cambridge crystallographic databank. The number of new chemical entities is increasing rapidly year on year (Figure 7.5). There is, therefore, a need for the rapid testing of these compounds for potential hazard and risk to humans and other animals (Ashby, 1994). The conventional methods of testing, which use experimental animals, are becoming expensive and there are many instances where the species differences are sufficiently marked to demonstrate that the use of animals as surrogates for humans is scientifically flawed (Gori, 1992). It is self-evident, however, that the biological activity of a chemical is due to various features of its molecular structure (Lewis, 1995a; Vogel and Ashby, 1994) and, therefore, in theory it should be capable of calculation from first principles given the chemical formula of the compound (Waters et al., 1994). Probably the most extensively used physicochemical parameter in QSAR analyses is log P, where P is the octanol/water partition coefficient (Lewis, 1990). There is overwhelming evidence for the
Figure 7.5 Increase in numbers of new chemical entities (NCEs) (in millions) discovered or manufactured over a 60-year period from 1930 to 1990.
importance of this descriptor in explaining potency differences in many series of chemicals, as exemplified by the pioneering work of Hansch (1993). As this factor describes the balance between a compound's lipid solubility and its aqueous solubility, it is clear that the value of log P will determine the extent to which a substance will be absorbed in various biophase compartments, and represent a measure of its relative ease of transport across biological barriers, such as cell membranes. Consequently, log P is often related to the absorption or clearance of compounds and Figure 7.6 shows an example of the correlation between half-life and log P for arylalkylamines. However, it is unlikely that log P will exhibit much specificity towards particular biological endpoints, as it is a factor in almost all forms of activity, including toxicity (Dearden et al., 1994). Although somewhat different from both molecular modelling and QSAR, physiologically based pharmacokinetic (PB-PK) analysis is another form of modelling which is of potential importance in risk assessment. A recent publication has demonstrated the use of this technique in predicting the absorption of halogenated alkanes in the lung tissue of small rodents (Loizou et al., 1994). It is apparent that log P is a major factor in describing the lung absorption characteristics of these haloalkanes, and we have shown that the calculated molecular polarizabilities of these compounds give a high correlation (r = 0.86) with log P (Lewis, 1995a). However, for
Figure 7.6 Correlation (r = 0.99) between half-lives in rat brain of 2-aminotetralin analogues and their log P values. Compounds are: (1) amphetamine, (2) 2-aminoindane, (3) 2-aminotetralin, (4) 2-aminobenzocycloheptane. As there are only four points, this relationship cannot be used predictively, however (data from Jenner and Testa, 1980).
larger numbers of structurally diverse chemicals, log P appears to be related to a combination of polarizability, dipole moment and energy of the highest occupied frontier orbital (Lewis, 1989), all of which can be calculated using molecular modelling techniques. Hansch and co-workers are in the process of compiling a database of QSARs (Hansch, 1993) such that investigators can readily access the appropriate QSAR expressions which describe the variation in the relevant biological activity of interest for a particular series of chemicals, thus aiding the assessment of potential bioactivity in novel compounds. Although primarily concerned with the employment of MO parameters in QSAR (Lewis, 1990), we have also noted that log P can often improve the degree of correlation obtained using other structural descriptors. For example, in a series of structurally diverse peroxisome proliferators, the inclusion of log P values increases the correlation between PPAR binding interaction energy and relative potency for peroxisome proliferation (Lewis and Lake, 1993), although the pKa of the ligand also appears to be important. In polyaromatic hydrocarbons, a combination of log P and dipole moment gives a good correlation with carcinogenic potency (Lewis, 1995a) whereas, in a series of phthalate esters, the hydrophobic substituent parameter, π, is able to correlate with peroxisome-proliferating activity (Lake et al., 1986). Furthermore, we have found that the binding of aliphatic primary amines to P4502B can be explained via the use of a quadratic expression in log P (Lewis, 1995a), although the partial atomic charge on the amine nitrogen is also an important factor. Many examples of the use of log P, sometimes in combination with other descriptors, in the P450 field can be found in a recent review by Hansch and Zhang (1993), although, as this parameter is related to lipophilicity in general, log P is not very specific for describing different P450 isozyme interactions.
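As an illustration of the quadratic-in-log-P form mentioned above, the sketch below fits activity = a(log P)² + b(log P) + c to invented binding data (not the Lewis, 1995a, values) by plain least squares, and recovers the optimum log P from the fitted coefficients:

```python
# Least-squares fit of a quadratic in log P to hypothetical P4502B
# amine-binding data (all values invented). Pure stdlib: the 3x3 normal
# equations are solved by Gaussian elimination with partial pivoting.
def fit_quadratic(xs, ys):
    s = [sum(x ** k for x in xs) for k in range(5)]              # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[i + j] for j in range(3)] for i in range(3)]         # normal matrix
    b = t[:]
    for col in range(3):                                         # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                          # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    c0, c1, c2 = coef            # activity = c0 + c1*x + c2*x^2
    return c2, c1, c0

log_p = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
binding = [1.2, 2.0, 2.5, 2.6, 2.4, 1.9]      # parabolic trend, invented
a, b_, c = fit_quadratic(log_p, binding)
optimum = -b_ / (2 * a)          # log P at maximal binding
assert a < 0                     # downward parabola: an optimum log P exists
```

The negative leading coefficient is the point: unlike a linear Hansch term, the quadratic form captures an optimal lipophilicity beyond which binding falls off again.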
For describing the interaction between various epoxides and the phase II enzyme, EH, the log P values of these substrates improve the original correlation (r = 0.95) with molecular electrostatic potential (MEP) obtained from molecular modelling and MO calculation (Politzer and Laurence, 1984). Consequently, it is found that an expression of the type

    log A = 0.11 log P + 11.21 Es/Vmin + 1.12

where A is the EH activity, P is the octanol/water partition coefficient, Es is the Taft steric parameter and Vmin is the MEP minimum, provides a means of assessing the likely substrate-binding affinity of structurally unrelated epoxides to the epoxide hydrolase (EH) enzyme. Electrostatic potential energy calculations are also useful for evaluating the sites of electrophilic attack on DNA bases, and base pairs, by alkylating agents, such as methyl nitrosourea (Lewis and Griffiths, 1987), where there
appears to be a good correlation between electrophilic superdelocalizability and DNA alkylation. Moreover, the positions of electrostatic isopotential (EIP) minima and maxima can indicate the sites of metabolism in different compounds, and aflatoxin is one example where each known position of metabolism (Eaton and Gallagher, 1994) is matched precisely by EIP energy maxima and minima (Figure 7.7). Similar results have been obtained for caffeine and some of its related heterocyclic amines (Sanz et al., 1994) and use of EIP calculations in other areas has been reviewed relatively recently (Politzer and Murray, 1991). There is often some degree of interrelationship between electronic parameters such as EIPs, superdelocalizabilities and frontier orbital electron densities, and it depends on the particular circumstance under investigation which of these descriptors provides the best correlation with activity. Ackland has shown, for example, that the position of metabolism in substrates of P4502D6 is related to the frontier orbital electron density on the relevant aromatic carbon atom (Ackland, 1993), and it has been demonstrated that similar parameters partially govern the rate of P4502B-mediated hydroxylation of toluene derivatives in the rat (Lewis et al.,
Figure 7.7 An electrostatic isopotential (EIP) energy contour surface of the carcinogen aflatoxin (contour levels at -5 and +5 kcal/mol), showing the lobes of positive and negative potential energy, together with their respective maxima and minima. Molecular modelling of aflatoxin interactions with the P450 enzymes involved in its activation indicates that the EIP minima relate to hydrogen-bonding possibilities with active site amino acid residues. These and other interactions orientate the substrate for oxygenation at the experimentally observed positions, including the metabolic activation step which leads to the formation of the 2,3-epoxide.
1995e). Furthermore, the frontier orbital electron populations in the HOMO and LUMO of benzo(a)pyrene-7,8-diol-9,10-epoxide and the DNA base guanine, respectively, appear to govern the interaction (Figure 7.8) between the electrophilic diolepoxide and nucleophilic attack by guanine, which leads to the formation of the known DNA adduct. It would appear that the molecular dipole moment also has a bearing on the genotoxic potential of certain chemicals (Lewis et al., 1995c). In particular, this electronic parameter can differentiate between mutagens and non-mutagens for many structurally diverse chemicals, even when their molecular planarities and activation energies (ΔE values) are similar. Presumably, there is a dipolar component governing the interaction between intercalating agents and DNA base pairs, as there is between the individual base pairs themselves. The interbase-pair stacking angle
Figure 7.8 The interaction complex between benzo(a)pyrene-7,8-diol-9,10-epoxide (left) and the DNA base, guanine. The lobes of electron density in the HOMO and LUMO frontier orbitals indicate that a charge-transfer complexation process governs the interaction whereby electron donation from the HOMO on guanine (nucleophile) to the LUMO on the diolepoxide (electrophile) is likely to lead to the formation of the covalent adduct between the ultimate carcinogenic metabolite of benzo(a)pyrene and guanine.
between guanine-cytosine and adenine-thymine pairs of 35° provides the maximum interaction between their MO-calculated dipole moment vectors, and is in close agreement with the known stacking angle of 36° in the crystal structure of DNA. Consequently, planar molecules possessing high dipole moments are likely to exhibit greater mutagenicity than those with negligible dipoles, due to the difference in their ability to intercalate and interact with the DNA base pairs. For example, the strongly mutagenic heterocyclic amines, formed during the cooking of meat products, all possess high dipole moments, whereas non-mutagens or antimutagens, such as anthraflavic acid and ellagic acid, have dipole moments close to zero (Lewis et al., 1995c). In the case of the pro-carcinogen benzo(a)pyrene, metabolic activation via P4501 and EH to form the diolepoxide considerably increases the dipole moment such that interaction with DNA is likely to be more favourable. It is well established that interaction with DNA is a key stage in mutagenesis and carcinogenesis, and potentially toxic chemical agents may be either direct-acting electrophiles (Figure 7.9) or are metabolically activated to electrophilic species (or other reactive intermediates) via one or more of the xenobiotic-metabolizing enzymes, such as P450, EH and β-lyase. A number of such pathways for the activation of toxic chemicals are shown in Figure 7.9. It is possible to utilize this type of information for the formulation of structural alert systems, although it would be preferable to model candidate structures within the active sites of the relevant enzymes (or ligand-binding sites on their respective receptor proteins, in the case of non-genotoxic agents).
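The dipole-moment arguments above require only partial atomic charges and coordinates, which any MO package supplies. A minimal sketch (the diatomic charges and geometry are invented; the conversion 1 e·Å = 4.803 D is standard):

```python
import math

# Molecular dipole moment magnitude from partial atomic charges (in e)
# and Cartesian coordinates (in Angstroms). For a neutral molecule the
# result is independent of the choice of origin.
def dipole_debye(charges, coords):
    assert abs(sum(charges)) < 1e-9, "neutral molecule assumed"
    mu = [sum(q * r[i] for q, r in zip(charges, coords)) for i in range(3)]
    return 4.803 * math.sqrt(sum(c * c for c in mu))  # 1 e*A = 4.803 D

# Toy diatomic: charges of +/-0.4 e separated by 1.0 Angstrom.
mu = dipole_debye([0.4, -0.4], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
```

In the screening context described above, a planar molecule whose computed dipole is large would be flagged as a potential intercalator, whereas a near-zero dipole (as quoted for anthraflavic and ellagic acids) would not.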
However, in addition to the molecular parameters mentioned previously, Rosenkranz and Klopman (1995) have formulated an electrophilic parameter, calculated from frontier orbital energies, that appears to be discriminatory towards a large number of structurally diverse mutagens and non-mutagens. Molecular modelling thus explains many of the biological activities of different chemicals, either qualitatively or quantitatively, when combined with the techniques of QSAR, and it may be possible to undertake the safety evaluation of new chemical entities via some combination of a variety of short-term test procedures, such as those outlined in Figure 7.10.
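As a worked example of applying fitted expressions of this kind in a screening setting, the EH substrate-binding equation quoted earlier in this section can be evaluated directly; the candidate-epoxide input values below are invented for illustration:

```python
# Direct evaluation of the EH substrate-binding QSAR quoted in the text:
#   log A = 0.11*log P + 11.21*(Es/Vmin) + 1.12
# log_p: octanol/water partition coefficient; taft_es: Taft steric
# parameter; v_min: MEP minimum (kcal/mol). Inputs below are invented.
def log_eh_activity(log_p, taft_es, v_min):
    return 0.11 * log_p + 11.21 * (taft_es / v_min) + 1.12

value = log_eh_activity(log_p=1.5, taft_es=-1.2, v_min=-40.0)
```

Note that both Es and Vmin are typically negative, so their ratio contributes positively; the expression is only meaningful within the descriptor ranges of the epoxides it was fitted to.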
7.7 Conclusions

There are many areas of risk assessment and chemical safety evaluation where the various techniques of molecular modelling can have an application, including those of relevance to the food industry. Molecular modelling is extensively used by the pharmaceutical industry for the design and
Figure 7.9 A compilation of direct-acting agents and those associated with metabolic activation via cytochromes P450. The list is intended to be representative rather than exhaustive (Guengerich, 1987). [The reaction schemes are flattened in this reproduction; the recoverable pathways include: (1) alkyl nitrosamines and nitrosoureas → alkyl diazohydroxide → diazonium ion; (2) benzo(a)pyrene → (CYP1) benzo(a)pyrene-7,8-epoxide → (epoxide hydrolase, EH) benzo(a)pyrene-7,8-diol → benzo(a)pyrene-7,8-diol-9,10-epoxide; (3) 2-acetylaminofluorene (2-AAF) → (CYP1A2) 2-AAF N-hydroxide → nitrenium ion; (4) aflatoxin B1 → (CYP3A) aflatoxin B1-2,3-epoxide; (5) phthalate diesters → phthalate monoesters → (CYP4) ω/ω-1-hydroxylated products; (6) pyrrolizidine alkaloids → (CYP3A) pyrrolic intermediates; (7) heterocyclic amines → (CYP1A2) N-hydroxides → nitrenium ions; (8) benzene → (CYP2E) benzene epoxide → phenol, mucondialdehyde; (9) polyaromatic hydrocarbons possessing bay regions → (CYP1) epoxides → diolepoxides → triol carbonium ions; (10) aromatic amines → (CYP1A2) N-hydroxides → nitrenium ions; (11) dichloromethane → (dehalogenation) carbene; (12) cyclophosphamide → 4-hydroxycyclophosphamide → (ring opening) aldophosphamide → acrolein + phosphamide mustard; (13) nitriles; (14) safrole → hydroxysafrole → hydroxysafrole epoxide; (15) p-xylene → p-methylbenzyl alcohol → p-methylbenzaldehyde; (16) pulegone → menthofuran; (17) phenyldimethyltriazene → demethylation products; (18) urethane; (19) haem alkylation; (20) dibromoethane; (21) chloroform; (22) tetrachloromethane; (23) paracetamol; (24) halothane (low vs. normal oxygen tension); (25) bromobenzene → (CYP and other isozymes) epoxide; (26) chloramphenicol; (27) thioacetamide; (28) trichloroethane; (29) vinylidene chloride; (30) hydralazine; (31) parathion → paraoxon; (32)-(33) aldrin → dieldrin; (34) isoniazid (amidase, acetyltransferase); (35) procarbazine (MAO); (36) thiobenzamide → sulphine; (37) carbon disulphide → COS; (38) phenacetin; (39) furosemide; (40) iproniazid.]
development of novel therapeutic agents, and protein modelling of the G-protein-coupled receptors represents a major area of interest and intense scientific activity. Also, there have been numerous examples of the successful application of QSAR techniques to therapeutic activity correlation and prediction, which have produced new pharmaceutical agents. Although the powerful combinative techniques of molecular modelling and QSAR have had relatively less application in drug metabolism and toxicity evaluation, it would appear that modelling of human enzymes, such as the cytochromes P450 and β-lyase, and proteins, such as the human oestrogen receptor and human peroxisome receptor, enables one to address, partially at least, the problem of species differences in risk assessment. These new developments in our understanding of the structural basis of biological activity therefore give an important edge to molecular modelling as a valid alternative to the use of animals as human surrogates in risk assessment.
Figure 7.10 A structural approach to toxicity evaluation via a decision tree method which utilizes both molecular modelling and other short-term test procedures. ENACT, enzyme activation in chemical toxicity. [The flowchart is flattened in this reproduction; its logic is as follows. Input the molecular formula and calculate MW, log P and pKa, screening for structural alerts. If the compound is related to known direct-acting toxicants and is likely to be highly toxic, conduct other in vitro and in vivo tests; if it is not likely to be highly toxic, substituents probably decrease the toxicity of the structural alert moiety, and a QSAR analysis is performed. Next, calculate the COMPACT parameters (a/d² and ΔE) and the molecular diameter. If the compound is a likely substrate of CYP1 or 2E, or an inducer of CYPs, perform an ENACT determination; otherwise the compound is not a strong inducer of CYP enzymes. Finally, calculate the molecular structure and EIP contour surface. If the compound is a likely peroxisome proliferator or a likely inducer of CYP4, perform a PPAR receptor interaction determination; otherwise the compound is not likely to present significant toxicity in humans and is unlikely to possess overt toxicity.]
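The decision tree of Figure 7.10 can be sketched as nested conditions. The predicate arguments below are placeholders for the user's own structural-alert, COMPACT and EIP tools; the control flow mirrors the figure, not any published software:

```python
# Minimal sketch of the Figure 7.10 decision-tree logic. The boolean
# inputs stand in for the screening calculations described in the text.
def toxicity_screen(is_direct_acting_alert, is_cyp1_or_2e_substrate,
                    is_cyp_inducer, is_peroxisome_proliferator):
    if is_direct_acting_alert:
        return "conduct further in vitro and in vivo tests"
    if is_cyp1_or_2e_substrate or is_cyp_inducer:
        return "perform ENACT determination"
    if is_peroxisome_proliferator:
        return "perform PPAR receptor interaction determination"
    return "unlikely to present significant toxicity in humans"

assert toxicity_screen(False, False, False, False) == \
    "unlikely to present significant toxicity in humans"
```

The value of the tree structure is economy: the cheaper calculated screens (structural alerts, COMPACT) are applied before the more expensive receptor-interaction determinations.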
Acknowledgement The financial support of Glaxo Research and Development Ltd is gratefully acknowledged.
References

Ackland, M.J. (1993) Correlation between site specificity and electrophilic frontier values in the metabolic hydroxylation of aromatic substrates: a molecular modelling study. Xenobiotica, 23, 1135-1144. Ames, B.N. and Gold, L.S. (1990) Too many rodent carcinogens: mitogenesis increases mutagenesis. Science, 249, 970-971. Ashby, J. (1994) Two million rodent carcinogens? The role of SAR and QSAR in their detection. Mutation Research, 305, 3-12. Bogen, K.T. (1995) Improved prediction of carcinogenic potencies from mutagenic potencies for chemicals positive in rodents and the Ames test. Environmental and Molecular Mutagenesis, 25, 37-49. Borman, S. (1990) New QSAR techniques eyed for environmental assessments. Chemical and Engineering News, 68, 20-23. Brown, S.J., Raja, A.A. and Lewis, D.F.V. (1994) A comparison between COMPACT and HazardExpert evaluations for 80 chemicals tested by the NTP/NCI rodent bioassay. Alternatives to Laboratory Animals, 22, 482-500. Conning, D.M. (1995) Toxicology of food and food additives. In: Ballantyne, B., Marrs, T. and Turner, P. (eds) General and Applied Toxicology, abridged edition. Macmillan, London, pp. 1213-1241. Dearden, J.C., Calow, P. and Watts, C. (1994) A predictable response? Chemistry in Britain, 30, 823-826. Dekant, W., Vamvakas, S. and Anders, M.W. (1989) Bioactivation of nephrotoxic haloalkenes by glutathione conjugation: formation of toxic and mutagenic intermediates by cysteine conjugate β-lyase. Drug Metabolism Reviews, 20, 43-83. DiCarlo, F.J. (1984) Carcinogenicity bioassay data: correlation by species and sex. Drug Metabolism Reviews, 15, 409-413. Eaton, D.L. and Gallagher, E.P. (1994) Mechanisms of aflatoxin carcinogenesis. Annual Review of Pharmacology and Toxicology, 34, 135-172. Ennever, F.K., Noonan, T.J. and Rosenkranz, H.S. (1987) The predictivity of animal bioassays and short-term genotoxicity tests for carcinogenicity and non-carcinogenicity to humans. Mutagenesis, 2, 73-78.
Ghauri, F.Y., Blackledge, C.A., Glen, R.C. et al. (1992) Quantitative structure-metabolism relationships for substituted benzoic acids in the rat. Biochemical Pharmacology, 44, 1935-1946. Gonzalez, F.J. and Gelboin, H.V. (1994) Role of human cytochromes P450 in the metabolic activation of chemical carcinogens and toxins. Drug Metabolism Reviews, 26, 165-183. Gori, G.B. (1992) Cancer risk assessment: the science that is not. Regulatory Toxicology and Pharmacology, 16, 10-20. Guengerich, F.P. (1987) Mammalian Cytochromes P-450. CRC Press, Boca Raton, Florida. Guengerich, F.P. (1988) Roles of cytochrome P-450 enzymes in chemical carcinogenesis and cancer chemotherapy. Cancer Research, 48, 2946-2954. Guengerich, F.P. (1994) Catalytic selectivity of human cytochrome P450 enzymes: relevance to drug metabolism and toxicity. Toxicology Letters, 70, 133-138. Guengerich, F.P. and Shimada, T. (1991) Oxidation of toxic and carcinogenic chemicals by human cytochrome P-450 enzymes. Chemical Research in Toxicology, 4, 391-407. Hankinson, O. (1995) The aryl hydrocarbon receptor complex. Annual Review of Pharmacology and Toxicology, 35, 307-340. Hansch, C. (1993) Quantitative structure-activity relationships and the unnamed science. Accounts of Chemical Research, 26, 147-153.
Hansch, C. and Zhang, L. (1993) Quantitative structure-activity relationships of cytochrome P-450. Drug Metabolism Reviews, 25, 1-48. Hermens, J.L.M. and Opperhuizen, A. (1991) QSAR in Environmental Toxicology - IV. Elsevier, Amsterdam. Ioannides, C. and Lewis, D.F.V. (1995) Drugs, Diet and Disease, Vol. 1: Mechanistic Approaches to Cancer. Ellis Horwood, Chichester. Ioannides, C. and Parke, D.V. (1993) Induction of cytochrome P4501 as an indicator of potential chemical carcinogenesis. Drug Metabolism Reviews, 25, 485-501. Ioannides, C., Ayrton, A.D., Lewis, D.F.V. and Walker, R. (1993a) Modulation of cytochromes P450 and chemical toxicity by food constituents. In: Parke, D.V., Ioannides, C. and Walker, R. (eds) Food, Nutrition and Chemical Toxicity. Smith-Gordon, London, pp. 301-310. Ioannides, C., Cheung, Y.-L., Wilson, J.P. et al. (1993b) The mutagenicity and interactions of 2- and 4-acetylamino fluorene with cytochrome P450 and the aromatic hydrocarbon receptor may explain the difference in their carcinogenic potency. Chemical Research in Toxicology, 6, 535-541. Ioannides, C., Lewis, D.F.V. and Parke, D.V. (1995) Mechanisms of chemical carcinogenesis and molecular parametric analysis in the safety evaluation of chemicals. In: Ioannides, C. and Lewis, D.F.V. (eds) Drugs, Diet and Disease, Vol. 1: Mechanistic Approaches to Cancer. Ellis Horwood, Chichester, pp. 1-46. Jenner, P. And Testa, B. (eds) (1980) Drug Metabolism. Dekker, New York, p. 59. Kubinyi, H. (1990) Quantitative structure-activity relationships (QSAR) and molecular modelling in cancer research. Journal of Cancer Research and Clinical Oncology, 116, 529-537. Lake, E.G. (1995) Mechanisms of hepatocarcinogenicity of peroxisome proliferating drugs and chemicals, Annual Review of Pharmacology and Toxicology, 35, 483-507. Lake, E.G., Lewis, D.F.V., Gray, T.J.B. et al. 
(1986) Structure activity studies on the induction of peroxisomal enzyme activities by a series of phthalate monoesters in primary rat hepatocytes. Archives of Toxicology, Suppl. 9, 386-389. Lewis, D.F.V. (1989) The calculation of molar polarizabilities by the CNDO/2 method: correlation with the hydrophobic parameter, log P. Journal of Computational Chemistry, 10, 145-151. Lewis, D.F.V. (1990) MO-QSARs: a review of molecular orbital-generated quantitative structure-activity relationships. Progress in Drug Metabolism 12, 205-255. Lewis, D.F.V. (1992a) Computer-assisted methods in the evaluation of chemical toxicity. Reviews in Computational Chemistry, 3, 173-222. Lewis, D.F.V. (1992b) Computer modelling of cytochromes P-450 and their substrates: a rational approach to the prediction of carcinogenicity. Frontiers in Biotransformation, 7, 90-136. Lewis, D.F.V. (1994a) Molecular structural studies in the rationalization of xenobiotic metabolism and toxicity. Toxicology and Ecotoxicology News, 1, 108-112. Lewis, D.F.V. (1994b) Comparison between rodent carcinogenicity test results of 44 chemical and a number of predictive systems. Regulatory Toxicology and Pharmacology, 20, 215-222. Lewis, D.F.V. (1995a) COMPACT and the importance of frontier orbitals in toxicity. Toxicology Modelling, 1, 85-97. Lewis, D.F.V. (1995b) Three-dimensional models of human and other mammalian microsomal P450s constructed from an alignment with P450102 (P450bm3). Xenobiotica, 25, 333-366. Lewis, D.F.V. (1996) Cytochromes P450: Structure, Function and Mechanism. Taylor and Francis, London. Lewis, D.F.V. and Griffiths, U.S. (1987) Molecular electrostatic potential energies and methylation of DNA bases: a molecular orbital-generated quantitative structure-activity relationship. Xenobiotica, 17, 769-776. Lewis, D.F.V. and Lake, E.G. 
(1993) The interaction of some peroxisome proliferators with the putative mouse liver peroxisome proliferator activated receptor (ppar): a molecular modelling and quantitative structure-activity relationship study. Xenobiotica, 23, 79-96.
Lewis, D.F.V. and Parke, D.V. (1995) The genotoxicity of benzanthracenes: a quantitative structure-activity study. Mutation Research, 328, 207-214. Lewis, D.F.V., Ioannides, C. and Parke, D.V. (1993) Validation of a novel molecular orbital approach (COMPACT) to the safety evaluation of chemicals by comparison with Salmonella mutagenicity and rodent carcinogenicity data evaluated by the US NCI/NTP. Mutation Research, 291, 61-77. Lewis, D.F.V., Lake, E.G., Ioannides, C. and Parke, D.V. (1994a) Inhibition of rat hepatic aryl hydrocarbon hydroxylase activity by a series of 7-hydroxy coumarins: QSAR studies. Xenobiotica, 9, 829-838. Lewis, D.F.V., Moereels, H., Lake, E.G. et al (1994b) Molecular modelling of enzymes and receptors involved in carcinogenesis: QSARs and COMPACT-3D. Drug Metabolism Reviews, 26, 261-285. Lewis, D.F.V., Ioannides, C., Walker, R. and Parke, D.V. (1994c) The safety evaluation of food chemicals by COMPACT I. A study of some acyclic terpenes. Food and Chemical Toxicity, 32, 1053-1059. Lewis, D.F.V., Ioannides, C. and Parke, D.V. (1994d) Molecular modelling of cytochrome P4501A1: a putative access channel explains activity differences between structurally related benzo[a]pyrene and benzo[e]pyrene, and between 2-acetylaminofluorene and 4acetylaminofluorene. Toxicology Letters, 71, 235-243. Lewis, D.F.V., Ioannides, C. and Parke, D.V. (1995a) A retrospective evaluation of the outcome of rodent carcinogenicity testing from the NTP rodent bioassay results of 40 chemicals. Environmental Health Perspectives, 103, 178-184. Lewis, D.F.V., Ioannides, C. and Parke, D.V. (1995b) Computer graphics analysis of the interaction of alkoxy methylenedioxybenzenes with cytochromes P4501. Toxicology Letters, 76, 39-45. Lewis, D.F.V., Ioannides, C., Walker, R. and Parke, D.V. (1995c) Quantitative structure-activity relationships within a series of food mutagens. Food Additives and Contaminants, 12, 715-724. Lewis, D.F.V., Parker, M.G. and King, R.J.B. 
(1995d) Molecular modelling of the human estrogen receptor and ligand interactions based on site-directed mutagenesis and amino acid sequence homology. Journal of Steroid Biochemistry and Molecular Biology, 25, 55-65. Lewis, D.F.V., Ioannides, C. and Parke, D.V. (1995e) A quantitative structure-activity relationship study on a series of 10 para-substituted toluenes binding to cytochrome P4502B4 (CYP2B4), and their hydroxylation rates. Biochemical Pharmacology, 50, 619-625. Livingstone, DJ. (1994) Computational techniques for the prediction of toxicity. Toxicology in Vitro, 8, 873-877. Loizou, G.D., Urban, G., Dekant, W. and Anders, M.W. (1994) Gas uptake pharmacokinetics of 2,2-dichloro-l,l,l-trifluorethane (HCFC-123). Drug Metabolism and Disposition, 22, 511-517. Lorenz, J., Glatt, H.R., Fleischmann, R. et al. (1984) Drug metabolism in man and its relationship to that in three rodent species: monooxygenase, epoxide hydrolase and glutathione S-transferase activities in subcellular fractions of lung and liver. Biochemical Medicine, 32, 43-56. Maron, D.M. and Ames, B.N. (1983) Revised methods for the salmonella mutagenicity test. Mutation Research, 113, 173-215. Martin, A.P. and Palumbi, S.R. (1993) Body size, metabolic rate, generation time, and the molecular clock. Proceedings of the National Academy of Sciences of the USA, 90, 4087-4091. Nelson, D.R., Kamataki, T., Waxman, DJ. et al. (1993) The P450 superfamily: update on new sequences, gene mapping, accession numbers, early trivial names of enzymes and nomenclature. DNA and Cell Biology, 12, 1-51. Obermeier, M.T., White, R.E. and Yang, C.S. (1995) Effects of bioflavonoids on hepatic P450 activities, Xenobiotica. 25, 575-584. Orr, W.C. and Sohal, R.S. (1994) Extension of life-span by overexpression of superoxide dismutase and catalase in Drosophila melanogaster. Science, 263, 1128-1130. Parke, D.V. (1987) Activation mechanisms to chemical toxicity. Archives of Toxicology, 60, 5-15.
Parke, D.V. (1994) The cytochromes P450 and mechanisms of chemical carcinogenesis. Environmental Health Perspectives, 102, 852-853. Parke, D.V. and Ioannides, C. (1990) Role of cytochromes P-450 in mouse liver tumour production. In: Stevenson, D.E., McClain, R.M., Popp, J.A. et al (eds) Mouse Liver Carcinogenesis: Mechanisms and Species Comparisons. Alan R. Liss, New York, pp. 215-230. Parke, D.V. and Ioannides, C. (1994) The effects of nutrition on chemical toxicity. Drug Metabolism Reviews, 26, 739-765. Parke, D.V. and Lewis, D.F.V. (1992) Safety aspects of food preservatives. Food Additives and Contaminants, 9, 561-577. Parke, D.V., Ioannides, C. and Lewis, D.F.V. (1986) Structure activity models for toxicity testing. In: Hodel, C.M. (ed.) Toxicology in Europe in the Year 2000, FEST supplement. Elsevier, Amsterdam, pp. 14-19. Parke, D.V., Lewis, D.F.V. and Ioannides, C. (1988) Chemical procedures for the evaluation of chemical safety. In: Richardson, M.L. (ed.) Risk Assessment of Chemicals in the Environment. Royal Society of Chemistry, London, pp. 45-72. Parke, D.V., Ioannides, C. and Lewis, D.F.V. (1990) Computer modelling and in vitro tests in the safety evaluation of chemicals: strategic applications. Toxicology in Vitro, 4, 680-685. Phillips, J.C. and Anderson, D. (1993) Predictive toxicology. Occupational Health Review, January/February, 27-30. Politzer, P. and Laurence, P.R. (1984) Relationships between the electrostatic potential, epoxide hydrase inhibition and carcinogenicity for some hydrocarbon and halogenated epoxides. Carcinogenesis, 5, 845-848. Politzer, P. and Murray, J.S. (1991) Molecular electrostatic potentials and chemical reactivity. Reviews in Computational Chemistry, 2, 273-312. Purchase, R., Phillips, J. and Lake, B. (1990) Structure-activity techniques in toxicology. Food and Chemical Toxicology, 28, 459-466. Ramiller, N. (1984) Computer-assisted studies in structure-activity relationships. American Laboratory, 16, 78-83. Roberts, D.W. 
and Basketter, D.A. (1990) A quantitative structure-activity/dose relationship for contact allergenic potential of alkyl group transfer agents. Toxicology in Vitro, 4, 686-687. Rozenkranz, H.S. and Klopman, G. (1995) Relationships between electronegativity and genotoxicity. Mutation Research, 328, 215-227. Sanz, F., Lopez-de-Brinas, E., Rodriguez, J. and Manaut, F. (1994) Theoretical study on the metabolism of caffeine by cytochrome P-450 1A2 and its inhibition. Quantitative Structure-Activity Relationships, 13, 281-284. Smithing, M.P. and Darvas, F. (1992) HazardExpert: an expert system for predicting chemical toxicity. In: Finley, J.W., Robinson, S.F. and Armstrong, DJ. (eds) Food Safety Assessment. American Chemical Society, Washington, pp. 191-200. Traylor, T.G. and Xu, F. (1988) Model reactions related to cytochrome P-450. Effects of alkene structure on the rates of epoxide formation. Journal of the American Chemical Society, 110, 1933-1958. Vogel, E.W. and Ashby, J. (1994) Structure-activity relationships: experimental approaches. In: Tardiff, R.G., Lohman, P.H.M. and Wogan, G.N. (eds) Methods to Assess DNA Damage and Repair: Interspecies Comparisons. Wiley, Chichester, pp. 231-254. Walker, R. (1993) Food toxicity. In: Garrow, J.S. and James, W.P.T. (eds) Human Nutrition and Dietetics, 9th edn. Churchill Livingstone, Edinburgh, pp. 354-367. Wang, S. and Milne, G.W.A. (1993) Applications of computers to toxicological research. Chemical Research in Toxicology, 6, 748-753. Waters, M.D., Richard, A.M., Rabinowitz, J.R. et al. (1994) Structure-activity relationships: computerized systems. In: Tardiff, R.G., Lohman, P.H.M. and Wogan, G.N. (eds) Methods to Assess DNA Damage and Repair: Interspecies Comparisons. Wiley, Chichester, pp. 201-229. Wilbourn, J. and Vainio, H. (1994) Identification of cancer risks - qualitative aspects. In: Richardson, M. (ed.) Chemical Safety: International Reference Manual. VCH, Weinheim, pp. 241-258.
8 Estimation of dietary intake of food chemicals

J.S. DOUGLASS and D.R. TENNANT
8.1 Introduction

The accuracy of the assessed risk from intake of a chemical in the food supply depends to a great extent on the accuracy of the dietary intake data on which the assessment is based. Until recently, international efforts to develop food chemical risk assessment methodology have focused more on toxicological evaluation than on accurate estimation of dietary intake. Although internationally recognized acceptable daily intakes (ADIs) have been established by the Joint FAO/WHO Meeting on Pesticide Residues (JMPR) and the Joint FAO/WHO Expert Committee on Food Additives and Contaminants (JECFA), current methods for determining the margin of safety between population intakes and the ADIs vary from country to country. Differences in intake assessment methodology may result in technical barriers to trade under recently established General Agreement on Tariffs and Trade (GATT) Sanitary and Phytosanitary Measures criteria. Development of standard methodology, therefore, has become an international priority.

All food chemical intake assessment methods are rooted in the following expressions:

Intake from a food = food chemical concentration x food consumption
Total intake = sum of intakes from all foods containing the chemical

The specific methods used in food chemical intake assessment may depend on the degree of estimation accuracy required, the type of chemical (e.g. pesticide or food additive), the data available for use in the analysis, and a variety of other factors. Available methods range from crude 'screens' based on theoretical concentration and consumption data to sophisticated methods utilizing statistical distributions of analytical chemical concentration data and food consumption data obtained from carefully designed nationwide surveys based on probability samples of target populations.
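These two expressions can be illustrated with a short sketch. The foods, concentration values and consumption figures below are hypothetical placeholders, not data from any survey.

```python
# Hedged sketch of the basic intake expressions; all figures are
# hypothetical illustrations, not measured data.

# Chemical concentration in each food (mg chemical per kg food)
concentration = {"apple": 0.5, "bread": 0.2, "milk": 0.05}

# Daily consumption of each food (kg food per person per day)
consumption = {"apple": 0.10, "bread": 0.15, "milk": 0.25}

# Intake from a food = concentration x consumption (mg/person/day)
intake_per_food = {food: concentration[food] * consumption[food]
                   for food in concentration}

# Total intake = sum of intakes from all foods containing the chemical
total_intake = sum(intake_per_food.values())  # mg/person/day
```

Any real assessment would replace these placeholders with analytical concentration data and survey-derived consumption figures, as discussed in the remainder of this chapter.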
Because most food chemicals are thought to be consumed at acceptable levels, it may not always be cost-effective to begin the intake assessment process with a sophisticated analysis. 'Tiered' approaches allow prioritization of food chemicals for detailed assessment. In a tiered approach, food
chemical intake is first assessed using screening methods to produce worst-case estimates, sacrificing estimation accuracy for simplicity and speed. If results of screening analyses indicate that population intake levels may be unacceptable, a progression of upper-tier methods may be used to produce intake estimates with progressively greater accuracy. The tiered approach to food chemical intake assessment has won general acceptance internationally, but the specific methods to be incorporated into tiered assessment systems have been the subject of much debate. This debate has resulted in part from national preferences for specific assessment methods. However, the major obstacle to development of internationally relevant food chemical intake assessment methodology has been the great disparity among nations in the quality of data, particularly food consumption data, available for use in intake assessment.
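The tiered logic just described can be sketched as a simple escalation loop: a cheap worst-case screen is run first, and more refined (and costly) estimates are attempted only when the screen exceeds the ADI. The tier functions and all numeric values here are hypothetical.

```python
# Hedged sketch of a tiered intake assessment: screen first, refine only
# where the screen fails. Tier functions and numbers are hypothetical.

def tier1_screen():
    # Worst-case (TMDI-style) estimate, mg/kg bw/day
    return 0.08

def tier2_refined():
    # More realistic estimate using corrected concentration data
    return 0.03

def assess(adi, tiers):
    """Return (estimate, tier_used): stop at the first tier whose
    estimate falls at or below the ADI, escalating otherwise."""
    estimate = None
    for level, tier in enumerate(tiers, start=1):
        estimate = tier()
        if estimate <= adi:
            return estimate, level
    return estimate, len(tiers)  # still above the ADI after the last tier

estimate, tier_used = assess(adi=0.05, tiers=[tier1_screen, tier2_refined])
```

Here the Tier 1 screen (0.08) exceeds the hypothetical ADI (0.05), so the assessment escalates and accepts the Tier 2 estimate.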
8.2 Intake assessment methods for pesticides and other agricultural chemicals

8.2.1 Total diet studies
Average population intakes of food chemicals can be estimated roughly by conducting a total diet study. In total diet studies, representative samples of widely consumed foods in the food supply are collected and analyzed for the constituent(s) of interest. The accuracy of population intakes estimated using total diet study results depends on the extent to which the foods analyzed represent important dietary sources of the chemical. An example is the US FDA Total Diet Study (TDS), conducted on a yearly basis since 1961 (Food and Drug Administration Pesticide Program, 1996). Although not statistically based, the TDS does yield data useful in assessing food chemical intake. Samples of 265 foods chosen to represent the US food supply are collected four times each year, from three cities in each of four US regions. Samples of individual foods from these three cities are composited for analysis. All composited samples are shipped to the FDA laboratory in Kansas City, Missouri, for analysis. The FDA uses TDS results mainly for identifying trends in concentrations of pesticide residues, contaminants and nutrients in the food supply and for identifying trends in population intakes of these substances based on summarized food consumption data (Pennington, 1992). Because the TDS uses only a few hundred foods to represent thousands of foods, it is not appropriate to make extrapolations from the amounts of a contaminant in the sampled foods to the amounts consumed by individuals. However, concentration data on the foods sampled can be used as reference points in intake assessment.
Several other countries conduct total diet studies, but the rationales and methods for conducting them vary from country to country. While the US TDS is based on analysis of individual food items, studies performed elsewhere are based on analysis of food composites. Total diet studies in some countries are performed using a 'duplicate portion' approach, in which all foods representing the national diet are processed into a single composite for analysis.

8.2.2 Food grouping model

The Pesticide Safety Directorate of the UK Ministry of Agriculture, Fisheries and Food (1995) developed a pesticide intake screening method based on the premise that an estimate of the theoretical maximum daily intake (TMDI) of a chemical can be obtained using national or Codex maximum residue limits (MRLs) for the pesticide together with intake data for widely defined commodity-based food groups. The two highest 97.5th percentile food group intakes are added to the sum of the mean population intakes from the remaining groups. Tables of mean and 97.5th percentile food consumption have been published for use with this simple screen.

8.2.3 Federal Biological Agency for Agricultural and Forestry Management (BBA)

The method used by the German BBA recommends estimation of food chemical intakes based on potential consumption by females 4-6 years of age, as this population group is said to have the highest food consumption per kg body weight (Federal Biological Agency for Agricultural and Forestry Management (BBA), Federal Republic of Germany, 1993). Intake by women 36-50 years of age is to be substituted in calculations involving chemicals in coffee, tea, wine, beer and other foods not generally consumed by children.

8.2.4 World Health Organization (WHO) tiered approaches
WHO tiered approach - 1989. The Joint UNEP/FAO/WHO Food Contamination Monitoring Programme, in collaboration with the Codex Committee on Pesticide Residues, prepared guidelines for estimating pesticide residue intake (World Health Organization, 1989). These procedures are outlined in Tables 8.1 and 8.2. Four tiers are proposed for intake estimation, beginning with screening calculations and proceeding towards increasingly realistic predictions of intake.
Table 8.1 Options for the prediction of dietary intake of pesticide residues

Tier   Type of estimate
4      Measured pesticide residue intake
3      'Best estimate' - estimated daily intake (EDI)
2      'Intermediate estimate' - estimated maximum daily intake (EMDI)
1      'Crude estimate' - theoretical maximum daily intake (TMDI)

Predictions become increasingly realistic from tier 1 to tier 4.

Source: World Health Organization, Joint UNEP/FAO/WHO Food Contamination Monitoring Programme and Codex Committee on Pesticide Residues (1989).
Table 8.2 Outline of proposed procedures for predicting pesticide residue intake

TMDI(a)
  Residue level: Codex or national MRL.
  Food consumption: hypothetical global or national diet; all commodities with a Codex or national MRL.

EMDI(a)
  Residue level: Codex or national MRL, with corrections for (i) edible portion and (ii) losses on storage, processing and cooking.
  Food consumption: 'cultural' or national diet; all commodities with a Codex or national MRL.

EDI(b)
  Residue level: known residue level, with corrections for (i) edible portion and (ii) losses on storage, processing and cooking.
  Food consumption: national diet; known uses of the pesticide, taking account of (i) range of commodities, (ii) proportion of crop treated and (iii) home-grown and imported crops.

(a) May be estimated at either the national or international level.
(b) Can be estimated only at the national level.

Source: World Health Organization, Joint UNEP/FAO/WHO Food Contamination Monitoring Programme and Codex Committee on Pesticide Residues (1989).
Tier 1 of WHO approach: calculation of theoretical maximum daily intake (TMDI). The residue concentration data and food consumption data used in WHO TMDI analyses are highly theoretical, leading to very conservative estimates of pesticide residue intake.

PESTICIDE RESIDUE CONCENTRATION DATA
The pesticide residue concentration data used in TMDI calculations are MRLs issued by the Codex Committee on Pesticide Residues or by national authorities. In calculating the TMDI for a pesticide residue, it is assumed that 100% of each relevant crop is treated with the pesticide. In fact, the
proportion of crop treated with a particular chemical generally is far less than 100%. In addition, it is extremely unlikely that any pesticide would be present at the maximum allowed level on a crop, even if the crop is treated at the maximum allowed rate, at the maximum allowed frequency and with the shortest pre-harvest interval. In calculating the TMDI, it is also assumed that the pesticide is present at maximum levels in foods as consumed. However, pesticide residues are generally reduced during food processing. In a study of pesticides in processed foods (Chin, 1991), 81.2% of 85 000 raw and finished products had no detectable residues, and 93% of 20 310 processed food products had no detectable residues. Commercial washing appears to be the major cause of the decrease in residues during processing (Elkins, 1989). It has been theorized that processes promoting hydrolysis contribute to degradation of residues, further decreasing total content (Chin, 1991). Cooking may reduce the residues in foods but may increase the content of harmful metabolites (Tomerlin and Engler, 1991). It should be noted that, in some cases, residues concentrate during food processing. This is most likely to occur when processing results in the loss of moisture, leaving a smaller mass of product containing the same total amount of residue.

FOOD CONSUMPTION DATA
Food consumption data used in TMDI calculations are national food balance sheet data or 'global diet' data based on food balance sheets (Food and Agriculture Organization of the United Nations, 1994). Food balance sheets describe the supply of staple foods in countries around the world, derived using the following equation:

Food availability = (food production + imports + beginning inventory) - (exports + ending inventory + non-food uses)

(Non-food uses include animal feed, pet food, seed and industrial use.) Availability is determined at different points for different foods or commodities.
Mean per capita availability of a food or commodity is calculated by dividing total availability of the food by the country's total population. Waste at the household and individual levels is not considered in food availability calculations, and pesticide residue intakes based on food balance sheet data are therefore likely to be overestimates. WHO 'cultural' or 'regional' diets for the Middle East, the Far East, Africa, Latin America and Europe were developed by grouping food balance sheet data from countries with similar food supply patterns. The 'global diet' was derived by selecting the highest per capita supply level found for each commodity among the regional diets and then normalizing these maximum values to a total daily consumption of 1.5 kg of solid food per person, excluding the liquid content of juices or milk, to represent global per capita consumption (World Health Organization, Joint UNEP/FAO/WHO Food Contamination Monitoring Programme and Codex Committee on Pesticide Residues, 1989).
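The food balance sheet and 'global diet' arithmetic described above can be sketched as follows. All commodity figures, the population size and the regional diet values are hypothetical illustrations.

```python
# Hedged sketch of the food balance sheet arithmetic; all figures are
# hypothetical (commodity quantities in thousand tonnes per year).

def availability(production, imports, beginning_inventory,
                 exports, ending_inventory, non_food_uses):
    """Food availability = (production + imports + beginning inventory)
    - (exports + ending inventory + non-food uses)."""
    return (production + imports + beginning_inventory) - (
        exports + ending_inventory + non_food_uses)

total_kt = availability(production=500, imports=120, beginning_inventory=30,
                        exports=80, ending_inventory=25, non_food_uses=45)

# Mean per capita availability = total availability / total population
population = 60_000_000
per_capita_kg_per_year = total_kt * 1_000_000 / population  # kt -> kg

# 'Global diet' construction: take the highest per capita supply of each
# commodity across regional diets, then scale so that solid food totals
# 1.5 kg per person per day.
regional = {"wheat": [0.30, 0.25],            # kg/person/day by region
            "rice": [0.10, 0.40],
            "potato": [0.20, 0.15]}
maxima = {c: max(values) for c, values in regional.items()}
scale = 1.5 / sum(maxima.values())
global_diet = {c: v * scale for c, v in maxima.items()}
```

Note that waste after the point of availability is ignored, which is one reason intakes derived this way tend to be overestimates.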
Estimates of pesticide residue intake, produced by multiplying MRLs (or tolerance levels) by food balance sheet (or global diet) intakes for each food consumed, are converted from mg/person to mg/kg body weight to allow comparison with ADIs, using 60 kg as an assumed average body weight. Because TMDIs are expected to be overestimates of intake, it is assumed that population intake of a pesticide is not a health or safety hazard if the TMDI is below the ADI. Population intakes of chemicals whose TMDI is greater than the ADI are re-evaluated using the Tier 2 calculation of an estimated maximum daily intake (EMDI).

Tier 2 of WHO approach: calculation of estimated maximum daily intake (EMDI)

PESTICIDE RESIDUE CONCENTRATION DATA
In calculating the EMDI, MRLs or tolerances are corrected to reflect the edible portion of each food and to account for reduction or concentration of residues during processing and preparation. The correction factors to be used are derived from information provided by WHO. The EMDI is a more realistic estimate of intake than the TMDI, but remains an overestimate because it is assumed that 100% of the crop is treated and that pre-processing residues are at MRLs or tolerance levels.

FOOD CONSUMPTION DATA
The EMDI may be based on 'cultural diet' data or on national food balance sheet data. If both the TMDI and the EMDI for a pesticide residue exceed the ADI, intake is recalculated to produce an EDI.

Tier 3 of WHO approach: calculation of estimated daily intake (EDI)

PESTICIDE RESIDUE CONCENTRATION DATA
WHO specifications for EDI calculation require consideration of the following factors in compiling pesticide residue concentration data:
- known uses of the pesticide
- known residue levels
- proportion of crop treated
- ratio of home-grown to imported food
- reduction in the level of pesticide during storage, processing and cooking

The most comprehensive sources of data on pesticide residue concentrations in food are manufacturers' data and government monitoring or surveillance programs. National regulatory approval requirements for pesticides generally include field trial data documenting the extent to which the pesticide and pesticide by-products remain on the crop after
harvest. Because the purpose is to determine the maximum residue concentration resulting from legal use of the product, field trials are conducted under extreme conditions of pesticide use, i.e. at the maximum application rate, the maximum application frequency and the minimum pre-harvest interval. In estimating the EDI using field trial data, correction factors may be developed to adjust for regional and seasonal differences in use, yielding 'anticipated residue' concentration data for raw agricultural commodities. Pesticide manufacturers in the USA and other countries must conduct food processing studies and must document the extent to which pesticides proposed for use on crops destined for animal feeds are incorporated into muscle meat, organ meat, milk and eggs. Animal feeding studies are required for all pesticides used on animal feeds. Animals are given feed containing the pesticide at MRL or expected levels for 30 days and are then sacrificed; edible animal parts are analyzed for residue content. Most countries conduct various types of monitoring or surveillance for toxic chemicals in commodities or in foods, and these data may be used in EDI analyses. Monitoring and surveillance studies are conducted to assess compliance with state, federal or international regulations governing pesticide use. At the US federal level, the Department of Agriculture monitors residue levels in domestic and imported meat and poultry products, and the Food and Drug Administration (FDA) monitors residue levels in all other foods. California, Florida and a number of other states have pesticide monitoring programs; a national database, FOODCONTAM, incorporates data from monitoring programs in 10 states (Minyard and Roberts, 1991). Depending on the specific US monitoring program, foods or commodities may be sampled at the point of entry to the country, at the farm gate, at the food processing plant, or at the retail level.
To conserve resources, studies are often conducted on target samples suspected to be out of compliance. Data on such samples cannot be considered representative of the food supply, but often these are the only data available for residue levels of specific pesticides on certain crops. It should be noted, however, that the majority of FDA and FOODCONTAM samples have not contained detectable residues (Food and Drug Administration Pesticide Program, 1996).

FOOD CONSUMPTION DATA
WHO specifications for EDI calculation require 'data on food consumption, including that of subgroups of the population'. Food consumption is defined as 'an estimate of the daily average per capita quantity of a food or group of foods consumed by a specified population'. Guidelines on sources of appropriate food consumption data are not provided.
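An EDI-style calculation combines measured residue levels with the correction factors listed above. The sketch below is a minimal illustration; the factor names and every numeric value are hypothetical, not WHO-specified parameters.

```python
# Hedged sketch of an EDI-style contribution from one food; all numbers
# are hypothetical illustrations.

def edi_for_food(residue_mg_per_kg, consumption_kg_per_day,
                 edible_fraction, processing_retention,
                 crop_treated_fraction):
    """Estimated daily intake contribution from one food (mg/person/day),
    correcting the measured residue for edible portion, losses during
    storage/processing/cooking, and the proportion of crop treated."""
    corrected_residue = (residue_mg_per_kg * edible_fraction *
                         processing_retention * crop_treated_fraction)
    return corrected_residue * consumption_kg_per_day

# Example: residue of 0.4 mg/kg, 0.2 kg/day consumed, 80% edible portion,
# 50% of the residue surviving processing, 25% of the crop treated.
contribution = edi_for_food(0.4, 0.2, 0.80, 0.50, 0.25)
```

Summing such contributions over all relevant foods, and dividing by body weight, yields an EDI that can be compared with the ADI.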
Tier 4 of WHO approach: direct analysis. Tier 4 of the WHO approach represents measured pesticide residue intake, presumably through laboratory analysis of duplicate portions of foods consumed by representative samples of the population. In theory, duplicate portion studies based on actual consumption by individuals in a population could be conducted to provide pesticide intake data, but such studies are rarely conducted outside clinical settings, owing to the high costs involved and to the great potential for respondent bias in providing food for analysis.

Proposed revision of WHO tiered approach. The WHO approach outlined above provides detailed specifications for the pesticide residue concentration data to be used in EDI assessment, but provides little guidance regarding the use of food consumption data. In theory, food consumption data used in EDI assessments should be obtained from food consumption surveys of individuals in relevant populations rather than from food supply data. However, nationwide food consumption surveys of individuals have been conducted in few countries, and the methods used have varied significantly. A discussion of this issue at the 1995 joint FAO/WHO consultation on dietary intake of pesticide residues resulted in a recommendation that all countries should conduct appropriate food consumption studies to obtain better population and subpopulation intake estimates. A proposed revision of the tiered approach includes assessment of national estimated dietary intakes (Figure 8.1).

Figure 8.1 Scheme for the assessment of dietary intake of pesticide residues for chronic hazards (World Health Organization, 1995). The scheme proceeds from evaluation of data, establishment of the ADI and proposal of MRLs, through TMDI calculation and comparison of the TMDI with the ADI, to refined estimates using all available relevant data: the IEDI at the international level and the NEDI at the national level, each compared with the ADI. ADI, acceptable daily intake; TMDI, theoretical maximum daily intake; IEDI, international estimated daily intake; NEDI, national estimated daily intake.
8.3 Intake assessment methods for food additives

8.3.1 Analysis of additive usage data

Rough per capita food additive intake estimates may be produced from production and usage figures supplied by additive manufacturers and food manufacturers, if these data are available:

Per capita additive intake = (Production + Imports - Exports) / Population

These estimates are useful as screens for prioritizing the need for more detailed intake assessments.

8.3.2 Food and Nutrition Division of the French Council of Public Health method

The food additive screening method used by the Food and Nutrition Division of the French Council of Public Health is a food grouping method focusing on intake from the 'main vector' for intake of the additive (Verger, 1995). The TMDI is calculated by estimating 95th percentile consumption of foods in the main vector and adding per capita intakes of the chemical from other vectors. The 95th percentile consumption level is estimated as three times the mean intake by consumers.

8.3.3 Budget method

The budget method was developed by the National Food Administration of Denmark to convert food additive ADIs into 'ceilings of use' calculated on the basis of maximum intakes of food and beverages potentially containing the additives (Hansen, 1966, 1979). In budget calculations for additives used in both solid foods and beverages, the ADI is split into two fractions: the proportion allocated to food and the proportion allocated to beverages are decided upon arbitrarily to accommodate technological requirements. An adaptation of the budget method has been proposed as an initial screening step in determining whether the additive uses listed in the Codex General Standard for Food Additives (GSFA) pose any risk to public safety. Use of the Danish budget method as proposed by Codex is intended to yield a TMDI.
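The budget method's conversion of an ADI into ceilings of use can be sketched as follows. The ADI value, the 50/50 split and the physiological intake figures below are hypothetical placeholders, not the Danish method's actual constants.

```python
# Hedged sketch of the budget method's 'ceiling of use' idea: the ADI is
# split between solid food and beverages, and each fraction is converted
# into a maximum permitted additive level. All numbers are hypothetical.

def ceiling_of_use(adi_mg_per_kg_bw, adi_fraction,
                   daily_intake_kg_per_kg_bw):
    """Maximum additive level (mg per kg of food or beverage) such that
    the allocated ADI fraction is not exceeded even at maximal intake."""
    return (adi_mg_per_kg_bw * adi_fraction) / daily_intake_kg_per_kg_bw

adi = 5.0  # mg/kg bw/day (hypothetical)

# Hypothetical maximal physiological intakes per kg body weight per day
food_ceiling = ceiling_of_use(adi, adi_fraction=0.5,
                              daily_intake_kg_per_kg_bw=0.025)  # solid food
drink_ceiling = ceiling_of_use(adi, adi_fraction=0.5,
                               daily_intake_kg_per_kg_bw=0.100)  # beverages
```

Because the liquid intake assumed per kg body weight is larger than the solid food intake, the beverage ceiling comes out lower than the food ceiling for the same ADI fraction.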
The budget method provides a basis for simple, inexpensive prediction of additive intake because it relies on knowledge of physiological requirements
for energy and liquid and on assumptions regarding the energy density of food rather than on food consumption survey data. The TMDI is calculated assuming that all foods contributing to the energy intake and all beverages contributing to the liquid intake will contain the additive at maximum permitted use levels. Under the Codex proposal, an additive is said to 'pass' the budget method screen if the calculated TMDI is lower than the additive's corresponding ADI. Examination of the budget method (Douglass et al., 1997) showed assumptions regarding energy intake, beverage intake and soft drink intake of the general population to be overestimates of actual average intake, based on analysis of food consumption survey data from the UK, former West Germany and the USA. The budget method assumption regarding the energy density of foods was also found to be an overestimate. Budget method TMDIs were, in each of two cases studied, substantially larger than survey-based per capita additive intake estimates, providing evidence to confirm that the budget method overestimates additive intake. On the limited evidence obtained, the method was judged conservative, with quite small potential for Type II (false-negative) error, indicating that it may be an appropriate first screening method for assigning monitoring priority. It was noted, however, that until the method has been evaluated further it will not be appropriate for assigning monitoring priority for additives where there may be concerns about short-term exposure or where the proportion of consumers is low.

8.3.4 Codex proposal for tiered additive intake assessment
A tiered food additive intake assessment system was proposed for adoption by the Codex Alimentarius Commission Committee on Food Additives and Contaminants (CX/FAC/96/6) for determining whether the additive uses listed in the GSFA pose any risk to public safety. An adaptation of the budget method developed by the Danish National Food Administration for determining safe use levels for food additives (described above) is proposed as the Tier 1 screening step in the Codex proposal. A 'reverse budget method' is proposed for use as Tier 2. The Tier 3 method proposed is a modified version of the UK food grouping method described in section 8.2.2. Where results of the first three tiers indicate potential health and safety concerns, a Tier 4 EDI analysis based on food consumption survey data is to be performed.

Tier 2 of Codex additive intake approach: reverse budget method. The 'reverse budget method' is not, as the Tier 2 designation might suggest, a more sophisticated method than the budget method. It is actually an alternative to the budget method, proposed for use with additives with limited food applications. Reverse budget method estimates are estimates of the amount of food containing the chemical that would have to be consumed for the ADI to be exceeded. Interpretation of results requires judgement as to whether these amounts are unrealistically high.

Tier 3 of Codex additive intake approach: food grouping model. A modification of the food grouping model developed in the UK for pesticide residue intake assessment (described in section 8.2.2) is proposed for use in estimating intakes of additives for which budget method TMDI estimates have been greater than the ADIs. This method is based on the premise that an estimate of the TMDI of an additive can be obtained using regulatory maximum additive use limits and intake data for widely defined food groups, adding the two highest 97.5th percentile food group intakes to the sum of the mean population intakes from the remaining groups. While the food groups used in the food grouping model for pesticides are commodity-based groups, the categories used in the modified version for additives represent different types of processed food. It is likely that a food additive intake estimation method based on the food grouping model would allow TMDI estimation with greater accuracy than would the budget method, which does not use additive concentrations in target foods as the basis for intake estimates. However, the potential for over- or underestimation of additive intakes using the food grouping model has not been studied to date.

Tier 4 of Codex additive intake approach: EDI estimation. The most accurate food additive intake estimates for a population are produced using additive concentration data based on laboratory analysis of foods representative of those consumed by the population.
Where such data are not available, market intelligence and technical information can be used to adjust regulatory maximum use levels to estimates of amounts required to achieve technical effects in foods actually containing the additive. Introduction of new food additives into the food supply may result in changes in food consumption patterns of the population, particularly when the additive is a new sweetener or other macronutrient substitute. EDI assessment should be conducted before and after such additives are introduced into the food supply of a population. The timeliness of the food consumption data used is an important consideration in post-approval monitoring when availability of the additive has the potential to change food consumption patterns. Food consumption data sources appropriate for use in EDI assessment of additives are discussed in the following sections.
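The screening calculations described in this section can be sketched in code. The function names, parameter units and all numeric values below are illustrative assumptions, not figures from the Codex proposal or the chapter; in particular, the Codex Tier 1 screen also apportions the ADI between solid food and beverages, which is collapsed here into a single comparison.

```python
def per_capita_intake(production_mg, imports_mg, exports_mg, population):
    """Section 8.3.1: rough per capita intake from disappearance data,
    (production + imports - exports) / population."""
    return (production_mg + imports_mg - exports_mg) / population

def french_vector_tmdi(mean_main_vector_g, conc_mg_per_g, other_vectors_mg):
    """Section 8.3.2: TMDI from the 'main vector', taking the 95th
    percentile as three times mean consumer intake, plus per capita
    intakes of the chemical from other vectors."""
    p95_consumption = 3.0 * mean_main_vector_g
    return p95_consumption * conc_mg_per_g + other_vectors_mg

def budget_screen(mpl_food_mg_per_kg, mpl_bev_mg_per_l,
                  food_kg_per_day, bev_l_per_day,
                  adi_mg_per_kg_bw, body_weight_kg):
    """Tier 1 budget screen: assume all food and all beverages carry
    the additive at maximum permitted levels; 'pass' if the TMDI does
    not exceed the ADI for the assumed body weight."""
    tmdi_mg = (mpl_food_mg_per_kg * food_kg_per_day
               + mpl_bev_mg_per_l * bev_l_per_day)
    return tmdi_mg <= adi_mg_per_kg_bw * body_weight_kg

def reverse_budget(adi_mg_per_kg_bw, body_weight_kg, conc_mg_per_kg_food):
    """Tier 2 reverse budget: kg of additive-containing food that would
    have to be eaten per day for intake to reach the ADI."""
    return adi_mg_per_kg_bw * body_weight_kg / conc_mg_per_kg_food
```

For example, with a hypothetical ADI of 5 mg/kg bw, a 60 kg consumer and a use level of 100 mg/kg food, `reverse_budget` returns 3 kg of food per day; the analyst must then judge whether such a quantity is realistic.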
8.4 Food consumption data sources for food chemical EDI assessment

Household budget surveys have been conducted by many countries to provide food supply data at the household rather than national level. However, the general limitations of food supply data apply to household budget data. Waste at the household and individual levels is not considered. Food consumed outside the home generally is not considered. In addition, because individual users of foods cannot be distinguished from non-users, individual variations in intake cannot be assessed, nor can intake by potentially sensitive subpopulations be estimated. Although the most accurate EDI assessments are based on data from nationwide food consumption surveys of households or individuals, such surveys have been conducted in few countries, and the methods used have varied significantly depending on the purpose of the survey and on the resources available. Information from an inventory of the food consumption databases in the European Union Member States is shown in Table 8.3 as an example of the variety of methods used to collect food consumption data. It is unlikely that the 1995 WHO call for countries to produce food consumption survey data appropriate for use in food chemical intake assessment will also result in standardization of the procedures used to collect such data. The immediate goals for standardization of food consumption data used in intake assessment must be to identify appropriate uses for data collected using different methodologies and to develop methods for international comparison of these data. For example, in some cases, identification of similarities in household food budgetary patterns may provide justification for limited, carefully circumscribed use of food consumption survey data collected in one country for EDI assessments in other countries.
Mathematical models may be developed for determination of foods consumed by individual family members by assigning age- and gender-specific coefficients and for assessment of edible matter in foods purchased (Trichopoulou, 1992).

8.4.1 Food consumption survey methodology

Methods used for collecting data on food consumption by households or individuals may be categorized as retrospective or prospective. Retrospective methods focus on food consumed during a time period which has already passed. Commonly used retrospective methods include 24-h or other short-term recalls, food frequencies, and diet histories. Recall methods require that survey respondents identify and quantify foods and beverages consumed during a specific period, usually the
Table 8.3 Food consumption survey databases in European Union member states

Country | Year | Typology | Sampling unit | Survey method
Austria | Since 1947 | Foods available | | Calculation of per capita consumption
Austria | 1991-1993 | Foods consumed | Individual | 24-h recall, weighed 7-day record
Austria | 1993-1995 | Foods consumed | Individual | Weighed 7- and 3-day record
Austria | 1995+ | Foods consumed | Individual | Two weighed 7-day records, three 24-h recalls
Belgium | 1992 | Foods consumed | Individual | Weighed 7-day record
Belgium | 1994 | Foods consumed | Individual | 24-h recall, weighed 7-day record
Belgium | 1992-1993 | Foods consumed | Individual | 24-h recall
Belgium | 1994-1995 | Foods consumed | Individual | Weighed 7-day record
Denmark | 1987/88 | Budgets | Household | Purchase records
Denmark | 1980/85 | Foods consumed | Individual | 24-h records
Denmark | 1972/91 | Purchase records | Household | Purchase diary
Denmark | 1990 | Foods consumed | Schoolchildren, 6-12 years | 24-h record, preceded
Denmark | 1986/89/91 | Foods consumed | Males only, 45-64 years | Dietary history
Denmark | 1993-1994 | Foods consumed | Individuals, 35-59 years | 3-day food records
Finland | 1985 | Foods consumed | Individual, 15-80 years | 24-h record
Finland | 1995 | Foods consumed | Individual, 1-80 years | 7-day record, preceded
Finland | 1981 | Foods purchased | Household | Purchase records
Finland | 1987 | Foods consumed | Household | Purchase records
France | 1982 | Foods consumed | Individual, 25-64 years | 3-day dietary record (estimated)
France | 1992 | Foods consumed | Individual, 25-64 years | 3-day dietary record (estimated)
France | 1984/87/88 | Foods consumed | Individual, 9th grade | 24-h recall
France | 1985-1988 | Foods consumed | Individual, men 50-69 years | Dietary history
France | 1963-1991 | Purchases, gifts | Household | 7-day purchase records
France | Continuous | Purchases, gifts | Household, except single men living alone | Fifty-two 7-day purchase records, interview
France | 1988 | Foods consumed | Household | 1-day recall, interview (frequency, amount estimated)
France | 1993-1994 | Foods consumed out of home | Individual | 7-day diary
Germany | 1985-1989 | Foods consumed | Individual | 7-day dietary record
Greece | 1981/82 | Foods purchased | Household | 7-day purchase records
Greece | 1987/88 | Foods purchased | Household | 7-day purchase records
Greece | 1994 | Foods purchased | Household |
Ireland | 1990 | Foods consumed | Individual | Dietary history
Italy | 1980 | Foods consumed | Household | Inventory, individual intake
Netherlands | 1987/88 | Foods consumed | Individual | 2-day record
Netherlands | 1992 | Foods consumed | Individual | 2-day record
Portugal | 1980 | Foods consumed | Individual | 1-day record
Portugal | 1988/89 | Foods consumed | Individual, elderly | Dietary history
Portugal | 1993/94 | Foods consumed | Individual, elderly | Dietary history
Spain | 1964/65 | Foods consumed | Household | Purchase records
Spain | 1980/81 | Foods consumed | Household | Purchase records
Spain | 1989/93 | Foods consumed | Household | Purchase records
Spain | 1989/93 | Foods consumed | Individual, elderly | 3-day dietary recall, frequency
Spain | 1991 | Foods consumed | Household | Inventory weighing
Spain | 1989 | Foods consumed | Individual | Record
Spain | 1987 | Foods consumed | Individual | Dietary history, precise weighing
Spain | 1991 | Foods consumed | Individual | Record
Spain | 1984 | Foods consumed | Individual | Precise weighing, inventory weighing
Spain | 1990 | Foods consumed | Individual | Record
Spain | 1991/92 | Foods consumed | Household | Purchase records
Sweden | 1989 | Foods consumed | Individual | Food record (preceded)
UK | 1981/82 | Purchase records | Household | 7-day diary
UK | 1986/87 | Foods consumed | Individual, 16-64 years | 7-day weighed diary
UK | 1985/86 | Foods consumed | Individual, 6-12 months | 7-day diary records
UK | 1992/93 | Foods consumed | Individual, 1½-4½ years | 4-day weighed diary
UK | 1983 | Foods consumed | Individual, 10-15 years | 7-day weighed diary
Eurostat | Continuous | Budgets | Household | Purchase records

Source: A. Møller, personal communication.
preceding day. Pictures, household measures or two- or three-dimensional food models may be used to help respondents quantify the food consumed. To aid respondents' memories, the interviewer may 'probe' for certain foods or beverages that are frequently forgotten; however, such probing has also been shown to introduce bias by encouraging respondents to report items not actually consumed. Recall interviews are relatively easy to conduct, require a minimum of time (about 20 min or less for a 24-h recall), and can provide high-quality food consumption data for populations with low literacy (Block, 1989; Dwyer, 1988). Many nationwide food consumption surveys have been conducted using short-term recall methods. Use of a food frequency questionnaire (FFQ), or checklist, allows determination of the frequency of consumption of a limited number of foods, usually fewer than 100. FFQ food lists are in many cases developed to allow collection of data relevant to a very specific nutrition-related issue. Respondents (usually individuals rather than households) are asked how many times a day, week or month each food on the list is usually consumed. Semi-quantitative FFQs allow estimation of amounts consumed by asking subjects to indicate whether their usual portion size is small, medium or large compared to a stated 'medium' portion. The size of the
medium portion is usually based on mean intakes of large populations but may be standardized for various age/sex groups. The diet history is used to obtain information from individuals about the usual pattern of eating over an extended period of time (Burke, 1947; Hankin, 1989). It is used primarily in epidemiological research. The diet history is a more in-depth and time-consuming procedure than the recall, record and FFQ methods. A recall or FFQ may be included as a diet history component. In prospective food consumption studies, survey participants are asked to provide information on foods as the foods are consumed. Prospective data useful in intake assessment may be obtained from respondents using food diaries or food records. Respondent households or individuals are asked to keep a record of foods and beverages as they are consumed during a specific period. Quantities of foods and beverages consumed are entered in the record, usually after weighing, measuring or recording package sizes. Occasionally, subjects are asked to photograph foods before consumption to aid researchers in identification of foods consumed. In general, data from 24-h and other short-term recalls and from food diaries, which collect detailed information on the kinds and quantities of foods consumed, are the most accurate and flexible data to use in assessment of intake of food chemicals. Data from these surveys can be used to estimate either acute or chronic intake; averages and distributions can be calculated; and intake estimates can be calculated for subpopulations based on age, sex, ethnic background, socio-economic status, and other demographic variables, provided that such information is collected for each individual.

8.4.2 Validity, reliability and sources of error in food consumption survey data
It is difficult to measure the extent to which food consumption surveys capture data reflective of actual dietary intake. Surreptitious observation of actual food consumption has been used to validate 24-h recall, diary and FFQ methods for obtaining these data from survey respondents (Baranowski et al., 1986; Gersovitz et al., 1978; Greger and Etnyre, 1978; Madden et al., 1976; Samuelsen, 1970; Stunkard and Waxman, 1981). Biological markers in urine, feces, blood and other tissues have also been used to validate survey methods (Bingham and Cummings, 1985; Block and Hartman, 1989). However, the validity of a survey method for obtaining accurate food consumption data is usually tested against another common survey method. The FFQ, for example, has been validated by comparison with results from food records (Block, 1989; Pietinen et al., 1988; Willett et al., 1988). Correlations between results obtained by different methods have usually been better for groups than for individuals.
The reliability of a method for yielding reproducible results depends somewhat on the number of days of dietary intake data collected for each individual in the population. The number of days of food consumption data required for reliable estimation of population intakes is related to each subject's day-to-day variation in diet (intraindividual variation) and the degree to which subjects differ from each other in their diets (interindividual variation) (Basiotis et al, 1987; Nelson et al, 1989). When intraindividual variation is small relative to interindividual variation, population intakes can be reliably estimated with consumption data from a smaller number of days than should be obtained if both types of variation are large. The number of intake days required for reliable estimation of intake is lower for a chemical widely distributed in the food supply than for a chemical with limited applications. In assessing food intake, it is generally accepted that the mean intake of a population may be reasonably estimated using a 1-day recall or diary if the number of subjects is sufficiently large. However, the percentage of the population estimated to be at risk of toxic effects from a chemical will be higher when food intake is assessed using a 1-day recall than with a multi-day record or dietary history. This is because extreme levels of intake (e.g. 90th or 95th percentiles) are invariably higher for a single day than they are for multiple days. In addition, large intraindividual variation associated with 1-day surveys may limit the power to detect differences between different population groups (Liu et al., 1978; Beaton et al, 1979; van Staveren et al, 1985). Errors in individual food consumption surveys may be due to chance or to measurement factors. 
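The point made above, that upper-percentile intakes from a 1-day survey invariably exceed those from a multi-day record, can be illustrated with a small simulation. The population parameters (usual intakes of mean 50 with interindividual s.d. 10 and day-to-day, intraindividual s.d. 25, in arbitrary units) are invented for illustration.

```python
import random
import statistics

random.seed(42)  # reproducible illustrative run

# Hypothetical population: each person has a usual mean intake
# (interindividual variation) plus day-to-day noise around it
# (intraindividual variation).
people = [random.gauss(50.0, 10.0) for _ in range(2000)]

# One observed day per person vs the mean of seven observed days.
one_day = [max(0.0, random.gauss(mu, 25.0)) for mu in people]
seven_day = [
    statistics.mean(max(0.0, random.gauss(mu, 25.0)) for _ in range(7))
    for mu in people
]

def percentile(values, p):
    ordered = sorted(values)
    return ordered[int(p * (len(ordered) - 1))]

# Day-to-day noise inflates the upper tail of the 1-day survey, so its
# 95th percentile exceeds that of the 7-day averages, even though both
# designs estimate essentially the same population mean.
p95_one = percentile(one_day, 0.95)
p95_seven = percentile(seven_day, 0.95)
```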
Data variability due to chance may be related to the survey sample; any sample randomly drawn from a population will differ from any other sample, with the degree of difference depending upon the size of the sample and the homogeneity of the population from which it was drawn. Errors due to chance also arise from data collection at different times of the day, on different days of the week, or at different seasons of the year. Measurement error may be introduced by the survey instrument, the interviewer or the respondent. The instrument may bias results if questions are not clear, if probes 'lead the subject' to give a desired answer, if questions are culture-specific, or if questions do not follow a logical sequence. For self-administered questionnaires, responses will be influenced by the readability level, the use of abbreviations or unfamiliar jargon, clarity of instructions, and amount of space provided for answers. Interviewer bias may be introduced if interviewers make the respondent uncomfortable, are judgemental, or do not use a standard method and/or standard probes. Respondents may introduce bias if they omit reporting foods they actually ate because they are reluctant to report certain foods or beverages
(alcoholic beverages are a good example) or if they are forgetful. Alternatively, they may report the food but understate the quantity consumed. Foods consumed away from home, particularly on occasions when the focus of attention is on the event rather than on the food, are especially difficult for people to remember. Quantities may be underestimated for similar reasons. Foods and beverages that were not consumed may be reported as consumed because of faulty memories, desire to impress the interviewer, or confusion with similar foods. Measurement errors also include errors in coding due to unclear handwritten records or to erroneous data entry. When using food consumption data from surveys that have already been conducted, it is important to be aware of the potential for error when making decisions based on those data. When designing food consumption research, the potential for error should be minimized by standardizing and testing all instruments for validity and reliability.

8.4.3 Food consumption data required for EDI analysis

The specific type of food consumption data most appropriate for an EDI analysis varies depending on the specific food use of the chemical (pesticide or food additive), the toxicological characteristics of the chemical, and the population for which intake is to be assessed.

Chemical use. Data requirements on forms of foods consumed in EDI assessment of pesticides and other agricultural chemicals differ from data requirements for EDI assessment of food additives. Accurate EDI assessment of food additives requires availability of detailed consumption data on processed foods. Food balance sheets report availability of raw agricultural commodities and therefore cannot be used directly even in rough assessments of food additive intake.
Pesticides are applied to raw agricultural commodities, and food balance sheet data can be used in screening assessments for pesticides as described above, but commodity waste and pesticide reduction or concentration due to food processing cannot be quantified with any accuracy. Limitations in household budget data are similar. Food consumption data useful for EDI assessments for pesticides may be obtained from surveys based on short-term recalls, food records or food diaries, provided that respondents describe foods consumed in detail sufficient for development of 'recipes' breaking foods consumed into raw agricultural commodities, with notation of the degree of processing in the forms consumed. Technical Assessment Systems, Inc. (TAS) has developed such recipes for the 1988-1994 US National Health and Nutrition Examination Survey (NHANES III) and all US Department of Agriculture food consumption surveys conducted in 1977 and later years. Table 8.4 displays a chocolate chip cookie recipe developed as part of the TAS system to allow EDI assessment for pesticides based on intakes of raw agricultural commodities.

Table 8.4 Chocolate chip cookie 'recipe' of component raw agricultural commodities (TAS, Inc.)
Food Code: 5320605 Cookie, rich, chocolate chip, with chocolate filling (14 ingredients)

Ingredient name | Percentage of total
Beet sugar | 7.00
Cane sugar | 8.00
Corn sugar | 15.00
Chocolate | 5.00
Cottonseed oil | 10.70
Soybean oil | 16.00
Eggs - whole | 2.00
Milk sugar (lactose) | 0.37
Milk-based water | 0.02
Milk-fat solids | 0.01
Milk-nonfat solids | 0.11
Water - tap | 4.00
No pesticide registration | 0.80
Wheat flour | 28.00
Other | 2.99
Total | 100.00

Toxicological characteristics of the chemical. Accurate EDI assessment of chemicals with acute toxic effects must be based on detailed data on food consumption of individuals, obtained using food records or short-term recall methodology. These data are needed for characterization of intakes from combinations of specific foods consumed during a very short period of time. A carefully tailored FFQ might provide useful data for assessment of chronic intake if the chemical in question is concentrated in only a few foods and if the food frequency instrument has been designed to target those foods. Information from FFQs cannot be used to estimate intake of acutely toxic chemicals, since data are collected on single food items or types, not on food combinations eaten at the same time. However, FFQ data could be useful in quantifying chronic upper-level intakes of food chemicals.

Population for which intake is to be estimated. EDI assessments must make use of detailed food consumption data allowing calculation of intake distributions if there are subpopulations at special risk or if upper-limit estimates of population intake are required. However, summarized per
capita food consumption data may be used in EDI assessments if the goal is to characterize average lifetime intake of chemicals present in foods consumed by the overall population. Useful data may be obtained from results of surveys collecting individual diet histories or food frequencies if consumption is reported for specific foods containing the chemical. However, diet history data are usually based on a limited number of individuals, and may not be appropriate for use in intake assessment. Household-based survey data may be used if waste and foods consumed outside the home are accounted for.
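The recipe-based decomposition illustrated in Table 8.4 can be sketched as follows. The recipe is abbreviated to four of the cookie's commodities, and the residue concentrations are invented for illustration; only the recipe fractions come from the table.

```python
# Abbreviated recipe fractions from Table 8.4 (fraction of cookie weight).
cookie_recipe = {
    "wheat flour": 0.28,
    "soybean oil": 0.16,
    "corn sugar": 0.15,
    "cottonseed oil": 0.107,
}

# Hypothetical pesticide residue concentrations (mg chemical/kg commodity).
residue_mg_per_kg = {
    "wheat flour": 0.02,
    "soybean oil": 0.05,
    "corn sugar": 0.0,
    "cottonseed oil": 0.01,
}

def intake_from_food(food_g_per_day, recipe, residues):
    """Convert consumption of a processed food into commodity-level
    chemical intake (mg/day) by applying recipe fractions to the
    amount of food eaten."""
    kg_food = food_g_per_day / 1000.0
    return sum(kg_food * fraction * residues[commodity]
               for commodity, fraction in recipe.items())

# e.g. a hypothetical 30 g of cookies per day
daily_mg = intake_from_food(30.0, cookie_recipe, residue_mg_per_kg)
```

Summing such contributions over every food a respondent reports yields the commodity-level intake needed for a pesticide EDI.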
8.5 Future trends in food chemical risk assessment

In 1993, the National Research Council of the US National Academy of Sciences recommended that methods for assessing pesticide intake for infants, young children and other potentially vulnerable populations include examination of intake distributions and assessment of potential risks from combined intake of chemicals with similar toxic effects (National Research Council, 1993). Since that time, new methods for generating intake distributions and combined chemical intake data have been developed.

8.5.1 Probabilistic methods in food chemical intake estimation

Probabilistic methods allow more complete characterization of food chemical intake patterns than is possible using means or other representative statistics. These methods combine distributions of food chemical concentration and food consumption (Figure 8.2). Frequency of occurrence in a distribution is taken to be equivalent to probability of occurrence.

Figure 8.2 Probabilistic estimation of food chemical intake: a consumption distribution is combined with a residue distribution to yield an intake distribution.

The most widely used probabilistic method for food chemical intake estimation is Monte Carlo analysis. In Monte Carlo analyses, actual or hypothetical distributions are generated from available food consumption and residue concentration data. In generating distributions based on limited data, it is assumed that the residue and consumption data distributions each belong to a parametric family (e.g. normal or lognormal). A value is selected at random from the food consumption distribution curve and multiplied by a value drawn at random from the chemical concentration distribution curve. The process is repeated thousands of times so that an intake distribution is generated. Monte Carlo analysis usually requires repeated sampling from hypothetical parametric distributions, and the representativeness of results therefore depends on how close the theoretical distributions are to the true consumption and residue distributions. An adaptation of the Monte Carlo method uses observed consumption and concentration distributions instead of simulated ones (National Research Council, 1993). A further variant is to multiply the food intake for the first individual in the distribution by a value drawn at random from the theoretical or observed concentration distribution. This is repeated several thousand times for the first individual, then for the second individual, and so on for all individuals until an intake distribution is generated.

The uncertainty in probabilistic estimates of intake can be quantified, as confidence intervals related to uncertainty in the measurements can be calculated for the parameters of all frequency distributions. Confidence intervals of the mean and standard deviation for a normal distribution can be used to estimate confidence intervals for any percentile of the distribution (Frey, 1993). The confidence intervals for the results of a multiple input model such as Monte Carlo analysis would be based on the joint confidence interval for all of the input distributions.

8.5.2 Intake of multiple chemicals

Because chemicals with similar toxic effects may have different potencies, residues of chemicals cannot simply be summed for intake assessment. In calculating a combined EDI for two chemicals, concentrations are standardized to a common potency by applying a toxicity equivalency factor to residue levels for one of the chemicals. For example, the concentration of Chemical A in apples is 24 ppb, and the concentration of Chemical B is 13 ppb (Table 8.5). Both chemicals
Table 8.5 Intake assessment for multiple chemicals

Chemical | Detected concentration in apples (ppb) | Acceptable daily intake (ADI) (mg/kg body weight) | Toxic equivalency factor | Standardized residue concentration in apples (ppb)
A | 24 | 0.0013 | 1 | 24
B | 13 | 0.0030 | 0.433 | 5.63

Estimation of combined intake of Chemical A and Chemical B from apples. These chemicals inhibit the same enzyme, but Chemical A is more potent than Chemical B. Residues of Chemical B can be 'standardized' to Chemical A based on relative potency to allow assessment of combined intake. Toxic equivalency quotient (combined concentration expressed as Chemical A): 29.63 ppb.
are inhibitors of the same enzyme, but the ADI for Chemical A is lower than that for Chemical B. One unit of Chemical A is equivalent to 0.433 units of chemical B based on the ratios of the ADIs. Therefore, the 13 ppb of Chemical B is standardized to 5.63 ppb. The standardized residues can then be summed to produce a toxic equivalency quotient (29.63 ppb), a combined concentration expressed in terms of Chemical A.
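The Monte Carlo approach of section 8.5.1 and the toxic-equivalency standardization of Table 8.5 can be sketched together in code. The lognormal parameters for consumption and residue are invented for illustration; only the Table 8.5 concentrations and ADIs are taken from the text.

```python
import random

random.seed(7)  # reproducible illustrative run

def monte_carlo_intake(n=10000):
    """Section 8.5.1: draw consumption (g/day) and residue (mg/kg) at
    random from assumed parametric distributions and multiply,
    yielding an intake distribution (mg/day)."""
    intakes = []
    for _ in range(n):
        consumption_g = random.lognormvariate(3.0, 0.5)   # median ~20 g/day (assumed)
        residue_mg_kg = random.lognormvariate(-2.0, 0.8)  # median ~0.14 mg/kg (assumed)
        intakes.append(consumption_g / 1000.0 * residue_mg_kg)
    return sorted(intakes)

intakes = monte_carlo_intake()
median_intake = intakes[len(intakes) // 2]
p95_intake = intakes[int(0.95 * (len(intakes) - 1))]

def toxic_equivalency_quotient(concentrations_ppb, adis):
    """Section 8.5.2: standardize residues to the most potent chemical
    (lowest ADI) via ADI ratios, then sum to a combined concentration."""
    reference_adi = min(adis.values())
    teq = 0.0
    for chem, conc in concentrations_ppb.items():
        tef = reference_adi / adis[chem]  # potency relative to reference
        teq += conc * tef
    return teq

# Reproduces Table 8.5: 24 ppb of A plus 13 ppb of B -> ~29.63 ppb as A
teq = toxic_equivalency_quotient({"A": 24.0, "B": 13.0},
                                 {"A": 0.0013, "B": 0.0030})
```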
8.6 Uncertainty in intake assessment

The uncertainty associated with intake estimates should be evaluated and presented in all food chemical intake assessments. Uncertainty can be characterized qualitatively, i.e. by describing the thought processes used to select or reject specific data, or quantitatively, i.e. as ranges of intake (US Environmental Protection Agency, 1992). Uncertainty in EDI assessment, for example, may result from missing or incomplete data, measurement error, sampling error, use of surrogate data, gaps in the scientific theory used to make predictions, and the degree to which the theory or model represents the situation being assessed. Analysis of uncertainty provides decision makers with information concerning potential variability in intake estimates and the effects of data gaps on intake estimates.
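One common quantitative treatment of such uncertainty is a bootstrap confidence interval on an intake percentile: the survey data are resampled with replacement and the spread of the re-estimated percentile is reported as a range. The sample below is simulated for illustration, not data from the chapter.

```python
import random

random.seed(11)

# Hypothetical survey sample of daily intakes (mg/day), mean ~0.5 mg/day.
sample = [random.expovariate(1.0 / 0.5) for _ in range(500)]

def percentile(values, p):
    ordered = sorted(values)
    return ordered[int(p * (len(ordered) - 1))]

def bootstrap_ci(data, p=0.95, reps=1000, alpha=0.05):
    """Resample the survey data with replacement and report an
    empirical (1 - alpha) confidence interval for the p-th
    percentile of intake."""
    estimates = []
    for _ in range(reps):
        resample = [random.choice(data) for _ in range(len(data))]
        estimates.append(percentile(resample, p))
    estimates.sort()
    low = estimates[int(alpha / 2 * (reps - 1))]
    high = estimates[int((1 - alpha / 2) * (reps - 1))]
    return low, high

low, high = bootstrap_ci(sample)
point = percentile(sample, 0.95)  # point estimate, bracketed by (low, high)
```

Reporting the interval (low, high) alongside the point estimate gives decision makers an explicit range of intake rather than a single number.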
8.7 Future needs for dietary intake assessment Food chemical intake assessments are most accurate and precise when based on good-quality food consumption data. The best data represent food intakes by individuals. Using such data, it is possible to estimate food chemical intakes for specific subpopulations, such as children, and for different time intervals from a meal or a day up to a lifetime.
At present, useful data on food intakes by individuals are limited. Most countries collect data on households rather than individuals, with the focus more often on food expenditures than on amounts consumed. These data may be useful for monitoring trends, but are of limited value for dietary risk assessment. Data on food consumption by individuals are available for a few countries, but in some cases are too old to be used in characterizing current dietary patterns. Information on processing, packaging and preparation of foods consumed is important for food chemical intake assessment, but may not be available from surveys conducted to evaluate nutritional status. In addition, the survey protocols used to obtain the available data have differed, and results from different countries are therefore not always directly comparable. Differences in intake assessment methodology due to differences in the quality of data available may result in technical barriers to trade under GATT Sanitary and Phytosanitary Measures criteria. National governments interested in maintaining or improving international trade status must commit the resources required for regular food consumption surveys and must participate in international development of standard methodology for generating these data. Reliable food consumption surveys are extremely expensive to undertake. However, there are many potential 'customers', including nutritionists, risk analysts and market researchers; data are needed by governments, health researchers, the food industry and the food chemical industry. Accurate assessment of risk from intake of food chemicals depends on accurate estimation of intake. The science is presently undermined by the paucity of good-quality food consumption data. International efforts must be co-ordinated to generate these data.
References

Baranowski, T., Dworkin, R., Henske, J.C. et al. (1986) The accuracy of children's self-reports of diet: Family Health Project. Journal of the American Dietetic Association, 86, 1380.
Basiotis, P.P., Welsh, S.O., Cronin, J. et al. (1987) Number of days of food intake records required to estimate individual and group nutrient intakes with defined confidence. Journal of Nutrition, 117, 1638.
Beaton, G.H., Milner, J., Corey, P. et al. (1979) Sources of variance in 24-hour dietary recall data: implications for nutrition study design and interpretation. American Journal of Clinical Nutrition, 32, 2456.
Bingham, S. and Cummings, J.H. (1985) Urine nitrogen as an independent validatory measure of dietary intake: a study of nitrogen balance in individuals consuming their normal diet. American Journal of Clinical Nutrition, 42, 1276.
Block, G. (1989) Human dietary assessment: methods and issues. Preventive Medicine, 18, 653.
Block, G. and Hartman, A.M. (1989) Issues in reproducibility and validity of dietary studies. American Journal of Clinical Nutrition, 50, 1133.
Burke, B.S. (1947) The dietary history as a tool in research. Journal of the American Dietetic Association, 23, 1041.
Chin, H.B. (1991) The effect of processing on residues in foods: the food processing industry's residue database. In: Tweedy, B.G., Dishburger, H.J., Ballantine, L.G. and McCarthy, J. (eds) Pesticide Residues and Food Safety: A Harvest of Viewpoints. American Chemical Society, Washington, DC, p. 175.
Douglass, J.S., Barraj, L.M., Tennant, D.R., Long, W.R. and Chaisson, C.R. (1997) Evaluation of the budget method for screening food additive intakes. Food Additives and Contaminants. In press.
Dwyer, J.T. (1988) Assessment of dietary intake. In: Shils, M.E. and Young, V.R. (eds) Modern Nutrition in Health and Disease, 7th edn. Lea and Febiger, Philadelphia.
Elkins, E.R. (1989) Effect of commercial processing on pesticide residues in selected fruits and vegetables. Journal of the Association of Official Analytical Chemists, 72, 533.
Federal Biological Agency for Agricultural and Forestry Management (BBA), Federal Republic of Germany (1993) Guidelines for Testing Pesticides in the Approval Process. Part IV, pp. 3-7: Testing the residue behaviour - estimating the intake of pesticide residues via food consumption.
Food and Agriculture Organization of the United Nations (1994) AGROSTAT: Food Balance Sheets 1961-1993 (computer version). FAO, Rome.
Food and Drug Administration Pesticide Program (1996) Residues in Foods 1995. US Food and Drug Administration, Washington, DC.
Frey, H.C. (1993) Separating variability and uncertainty in exposure assessment: motivations and method. In: Proceedings of the 86th Annual Meeting of the Air and Waste Management Association, Denver, Colorado.
Gersovitz, M., Madden, J.P. and Smiciklas-Wright, H. (1978) Validity of the 24-hour dietary recall and seven-day record for group comparisons. Journal of the American Dietetic Association, 73, 48.
Greger, J.L. and Etnyre, G.M. (1978) Validity of 24-hour recalls by adolescent females. American Journal of Public Health, 68, 70.
Hankin, J.H. (1989) Development of a diet history questionnaire for studies of older persons. American Journal of Clinical Nutrition, 50, 1121.
Hansen, S.C. (1966) Acceptable daily intake of food additives and ceiling on levels of use. Food and Cosmetics Toxicology, 4, 427.
Hansen, S.C. (1979) Conditions for use of food additives based on a budget for an acceptable daily intake. Journal of Food Protection, 42, 427.
Liu, K., Stamler, J., Dyer, A. et al. (1978) Statistical methods to assess and minimise the role of intra-individual variability in obscuring the relationship between dietary lipids and serum cholesterol. Journal of Chronic Diseases, 31, 399.
Madden, J.P., Goodman, S.J. and Guthrie, H.A. (1976) Validity of the 24-hr recall. Journal of the American Dietetic Association, 68, 143.
Minyard, J.P. Jr and Roberts, W.E. (1991) FOODCONTAM: a state data resource on toxic chemicals in foods. In: Tweedy, B.G., Dishburger, H.J., Ballantine, L.G. and McCarthy, J. (eds) Pesticide Residues and Food Safety: A Harvest of Viewpoints. American Chemical Society, Washington, DC, p. 151.
National Research Council (1993) Pesticides in the Diets of Infants and Children. Committee on Pesticides in the Diets of Infants and Children, Board on Agriculture and Board on Environmental Studies and Toxicology, Commission on Life Sciences, National Academy Press, Washington, DC.
Nelson, M., Black, A.E., Morris, J.A. and Cole, T.J. (1989) Between- and within-subject variation in nutrient intake from infancy to old age: estimating the number of days required to rank dietary intakes with desired precision. American Journal of Clinical Nutrition, 50, 155.
Pennington, J.A.T. (1992) The 1990 revision of the Total Diet Study. Journal of Nutrition Education, 24, 173.
Pesticide Safety Directorate, Ministry of Agriculture, Fisheries and Food (1995) UK Methods for the Estimation of Dietary Intakes of Pesticide Residues, July.
Pietinen, P., Hartman, A.M., Haapa, E. et al. (1988) Reproducibility and validity of dietary assessment instruments, II. A qualitative food frequency questionnaire. American Journal of Epidemiology, 128, 667.
Samuelson, G. (1970) An epidemiological study of child health and nutrition in a northern Swedish county, 2. Methodological study of the recall technique. Nutrition and Metabolism, 12, 321.
Stunkard, A.J. and Waxman, M. (1981) Accuracy of self-reports of food intake. Journal of the American Dietetic Association, 79, 547.
Tomerlin, J.R. and Engler, R. (1991) Estimation of dietary exposure to pesticides using the dietary risk evaluation system. In: Tweedy, B.G., Dishburger, H.J., Ballantine, L.G. and McCarthy, J. (eds) Pesticide Residues and Food Safety: A Harvest of Viewpoints. American Chemical Society, Washington, DC, p. 192.
Trichopoulou, A. (1992) Monitoring food intake in Europe: a food data bank based on household budget surveys. European Journal of Clinical Nutrition, 46(Suppl. 5), 53-58.
US Environmental Protection Agency (1992) Guidelines for exposure assessment; notice. Federal Register, 57, 11888.
van Staveren, W.A., de Boer, J.O. and Burema, J. (1985) Validity and reproducibility of a dietary history method estimating the usual food intake during one month. American Journal of Clinical Nutrition, 42, 554.
Verger, P. (1995) One example of utilisation of the 'French Approach'. Paper presented at the ILSI Europe Workshop on Food Additive Intake, 29-30 March, Brussels, Belgium.
Willett, W.C., Sampson, L., Browne, M.L. et al. (1988) The use of a self-administered questionnaire to assess diet four years in the past. American Journal of Epidemiology, 127, 188.
World Health Organization (1995) Recommendations for the revision of guidelines for predicting dietary intake of pesticide residues. Report of a FAO/WHO consultation. WHO/FNU/FOS/95.11. World Health Organization, Geneva.
World Health Organization, Joint UNEP/FAO/WHO Food Contamination Monitoring Programme and Codex Committee on Pesticide Residues (1989) Guidelines for Predicting Dietary Intake of Pesticide Residues. World Health Organization, Geneva.
9 Assessing risks to infants and children

N.R. REED
9.1 Introduction

The approach to assessing risk to infants and children has been one of the key issues in risk assessment over the last decade. Occasional reports of unexpected adverse effects in these younger subpopulations from exposure to pharmaceutical agents or environmental chemicals have heightened awareness that the risks to younger subpopulations can differ substantially from those to adults. The need for a special approach to account for the unique characteristics of the younger subpopulations in risk assessment was clearly outlined in a joint report by the International Programme on Chemical Safety (IPCS) and the Commission of the European Communities (CEC) (World Health Organization, 1986). The similarities and differences between children and adults were further documented in a conference sponsored by the International Life Sciences Institute (ILSI) and the US Environmental Protection Agency (USEPA) (Guzelian et al., 1992).

Pesticides have been the focus of food safety evaluations because of their purposeful and widespread use. Designed to be poisonous, pesticides used on agricultural crops and foodstuffs have the potential to affect human health adversely if not appropriately controlled. However, pesticides are not the only chemicals present in foods with the potential to cause adverse effects. The presence of other chemicals in foods can also render them unsafe for consumption. Food additives (e.g. preservatives, supplements, stabilizers), therapeutic drugs used on livestock, naturally occurring toxicants, mycotoxins produced by molds growing in or on foods, and enterotoxins (e.g. Clostridium botulinum toxin) have all been subjected to food safety investigations. Concerns have been raised over whether regulations concerning food chemicals based on the current approach to risk assessment are sufficient to protect the health of infants and children (National Research Council, 1993).
Concerns for the young relate not just to body size but also to varying sensitivity to risk agents. Infants and children differ from adults physiologically and developmentally. Rapid growth and functional development mark the first 2-3 years of life. These developmental changes affect the ways in which the body handles and responds to xenobiotics. Young individuals also differ from adults in the pattern
of exposure to chemical toxicants in the environment and in food. With regard to exposure to food chemicals, infants and children generally have higher exposures because of their higher food intake rates per unit body weight. The younger subpopulations may also have preferences for certain foods or forms of food. It is apparent that the human adult model is inadequate for the evaluation of risk in infants and children.

The safety evaluation of food chemicals in infants and children should not focus only on the potential harm from direct exposure to foods containing chemical residues, nor only on exposure to a single chemical. Developmental effects resulting from in utero exposures and the reality of concomitant exposure to more than one chemical in foods are two aspects that are important to the overall evaluation of food safety for infants and children and could have significant impacts on the health of these younger subpopulations. This chapter begins with a description of the unique characteristics of infants and children. The implications of these characteristics for risk assessment then follow. The two important subjects, in utero exposure and multiple chemical exposures, are presented last.
9.2 Infants and children - unique population subgroups

In the literature, young individuals undergoing developmental changes are categorized into different groups based on somewhat different ranges of age. In this chapter, 'infants' refers to individuals from birth up to 1 year of age, and 'children' includes individuals beyond 1 year and up to 12 years old. Infants and children are distinctly different from adults in terms of physiology, development and size. The significance of these differences for the evaluation of risks from food chemicals is two-fold: one lies in differential sensitivity in response to xenobiotics and the other in different levels of exposure to food chemicals. The issue of differential sensitivity pertains both to pharmacokinetic and pharmacodynamic characteristics and to the manifestation of toxicity. The issue of exposure pertains to both the amount and the pattern of food intake.

The unique ways in which infants and children handle and respond to xenobiotics are well recognized in the field of therapeutics. Incidents of unexpected toxicological response have occurred in the pediatric population to drugs that had been tested only in adults. Concern about untoward toxicity in the pediatric population prompted the issuance of warnings for drugs that have not been tested in the young subpopulation. Thus, over three decades ago, it was realized that infants and children had become 'therapeutic orphans', deprived of therapeutic drugs that may be beneficial to them (Yaffe and Aranda, 1992). This awareness has led to a greater effort toward pediatric drug monitoring and research
on how these unique characteristics of the young affect sensitivity to some therapeutic drugs. The knowledge gained from the field of therapeutics is also valuable for evaluating the potential risk to infants and children from chemicals in the environment.

9.2.1 Pharmacokinetics and pharmacodynamics
The toxicological response to a xenobiotic chemical is a function of both the pharmacokinetics and the pharmacodynamics of the chemical in the individual. Pharmacokinetics describes the entry of a chemical into the body, its movement and biotransformation pathways within the body, and its eventual elimination from the body. The pharmacokinetic process determines the delivery of a chemical or its metabolically transformed products (i.e. activated forms) to the sites of toxicological action. Pharmacodynamics describes the biochemical and physiological interactions of a chemical at these target sites (e.g. receptor binding and/or responses). The interaction determines the toxicological outcome of a chemical and is specific to the mechanism of action of the chemical or of a group of chemicals with a similar mechanism of action.

The components of pharmacokinetics (absorption, distribution, metabolism, elimination) for a chemical in food can be simply illustrated by the dietary exposure pathway in Figure 9.1. Chemical residues in foods become available to the body through dietary intake. In order for a chemical or its toxic metabolite(s) to exert systemic toxicity in a tissue, organ or system, it must first be absorbed into the body. The amount, rate and site of absorption are dependent on the properties of the chemical and the absorption capability of the gastrointestinal tract.

[Figure 9.1 is a flow diagram: chemicals in food → intake → GI tract → absorption → systemic circulation (with the liver and enterohepatic circulation) → distribution to remaining tissues and organs → metabolism → elimination via feces, urine, expired air and secretions (milk, sweat, saliva).]
Figure 9.1 Pharmacokinetic pathway of chemicals in foods.

After entering the portal blood, the chemical is available for systemic circulation and is distributed to various tissues and organs. The chemical may be differentially distributed to a specific tissue or organ where it is metabolized or accumulated. The metabolic processes convert the parent chemical either to toxicologically active metabolite(s) and/or to detoxified product(s) ready for elimination. The chemical and its metabolites are eliminated in feces, urine, expired air or secretions (e.g. milk, sweat, saliva).

The biochemical and physiological changes occurring during early stages of life affect the pharmacokinetic capabilities and patterns in infants and children. These changes within the first several years after birth have been documented in pediatric therapeutics (Radde, 1985; Cohen, 1987; Blumer and Reed, 1992; Kauffman, 1992a) and are summarized in Table 9.1. The physiological changes occurring during rapid growth and development, as shown in Table 9.1, collectively affect the pharmacokinetic processes of a chemical. The following examples illustrate some of the ways in which these age-specific characteristics may affect each individual phase
Table 9.1 Biochemical and physiological changes in infants and children

Absorption
Gastric pH: pH 6-8 at birth; pH 1-3 within 1-2 days; pH at 2-6 months tends to be lower than in older children and adults
Gastric acid concentration: low at birth; increases dramatically within 24 h; higher within the first 10 days; decreases to a lower level at 20 days
Gastric acid secretion: lower in young infants; approaches adult level at 3 months
Gastric emptying time: slower in neonates; may reach adult level at 6-8 months
GI motility: slow and irregular motility in neonates
GI microflora: rapidly colonized after birth; high in neonates

Distribution
Total body water: 60-75% of body weight within 1 year; 55-60% in adults
Extracellular water: 40% of body weight in neonates; 26-30% at 1 year; 20% in adults
Total body fat: proportionally higher in infants
Total plasma protein: lower in neonates/infants than in children/adults; alteration in amount and composition of plasma proteins in neonates/infants
Volume of distribution: dependent on chemical

Metabolism
Biotransformation: alternative pathways in neonates; rates are lower in newborns
Phase I: pronounced interindividual variation
Phase II: glucuronidation deficient at birth, reaching adult level by 3-4 weeks; sulfation active in neonates and young children

Elimination
Glomerular filtration rate: low in neonates; increases rapidly during the first year
Renal tubular secretion: matures later than glomerular function; reaches adult level by 1 year
Renal function and clearance: greater in older infants and young children than in older children and adults

Data taken from Radde (1985), Cohen (1987), Blumer and Reed (1992) and Kauffman (1992a).
of the pharmacokinetic process. In the absorption phase, for instance, the slower gastric emptying time in neonates may increase the absorption of chemicals that are absorbed in the stomach but delay the absorption of chemicals that are absorbed in the small intestine. One factor that may affect the subsequent distribution of a chemical is the volume of body water. The relatively greater volume of extracellular water in neonates may lower the concentration of a chemical reaching the site of action if the chemical is distributed through the extracellular water (Cohen, 1987). In the metabolism phase, the preference for sulfation over glucuronidation as a phase II conjugation pathway in infants and children may not affect the overall clearance of a chemical but may interact with other metabolic pathways in a way that affects the overall manifestation of toxicity (e.g. toxicity from acetaminophen overdose) (Kauffman, 1992b). In the elimination phase, the lower glomerular filtration rate would result in generally slower clearance in neonates of chemicals that depend on renal function for elimination.

In drug therapy, pharmacokinetics and pharmacodynamics, to the extent that these are applicable, are considered in the selection of drugs and the prescription of dosing regimens (e.g. dosage, dosing frequency). Similarly, in risk analysis, these factors have been used in physiologically based pharmacokinetic models for estimating biologically effective dose levels in animals (in laboratory studies) and/or in humans (human exposures). The model estimates can then be used to make better inter- and/or intraspecies adjustments of dose-response relationships. Compared with pharmacokinetic data, pharmacodynamic data are less amenable to broad-based application, since they are usually specific to a chemical and must be based on its mechanism of action.

9.2.2 Toxicity
Many factors can potentially affect the overall sensitivity of an individual to the toxicity of a chemical. Among these are age, gender, genetic predisposition, nutritional status, disease state and concomitant exposures. With regard to age, infants and children are often identified as comprising the more sensitive population subgroup. For example, compared with the case in adults, the developing nervous system in children appears to be more sensitive to the neurotoxicity of lead exposure. Central nervous system impairment, manifested in deficits of neurobehavioral function, was detected in children at or below 10 µg/dl of lead in blood, while the same effects were not evident in adults below 40 µg/dl (Davis, 1990). Another example is the greater sensitivity of infants to methemoglobinemia caused by inorganic nitrate in their drinking water. The higher sensitivity is mainly due to the greater susceptibility of fetal hemoglobin to oxidation, the lack of enzymes to reduce methemoglobin,
and the conversion of nitrate to nitrite by bacteria that thrive in the upper small intestine as a result of the lower stomach acidity of infants (Levine, 1990).

The unique physiological and developmental characteristics of infants and children, however, do not always make them more sensitive to harm from chemical agents. For example, infants and children tend to be less sensitive to the ototoxicity and renal toxicity of aminoglycoside antibiotics. This may be partly due to the lower accumulation of aminoglycosides in renal tubular epithelial cells (Kauffman, 1992a). Another example can be found in the lower risk of hepatotoxicity in infants and children from acetaminophen overdose. In adults, hepatotoxicity from overdose correlates with the formation of a highly reactive intermediate from an oxidative metabolic pathway. The greater sulfation capacity in children appears to reduce the oxidative metabolism and thereby protects against hepatotoxicity (Kauffman, 1992b).

In considering age-specific sensitivity, it is also appropriate to note that other population subgroups may be more sensitive to certain chemicals. For example, the elderly may be more sensitive owing to declining metabolic and renal clearance capabilities (Levine, 1990). The increasing number of drugs that the elderly tend to require may lead to adverse drug-chemical interactions (United States Environmental Protection Agency, 1994) or compromise the ability of the body to handle additional chemical burdens.

Besides the issue of sensitivity, another consideration for infants and children is that some adverse effects incurred during a developmental stage may not be apparent at the time of exposure but could be manifested later in life. Moreover, damage received during developmental periods could result in permanent impairment for a large portion of a lifetime.

9.2.3 Exposures

Food consumption patterns of infants and children are different from those of adults.
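The body-weight framing used throughout this section can be made concrete with a short sketch: an infant eating far less food than an adult in absolute terms may still have a much higher intake per unit body weight. The body weights and food amounts below are round illustrative assumptions, not survey values.

```python
# Why per-kilogram intake matters: normalize absolute daily food intake to
# body weight. All figures below are hypothetical, for illustration only.

def intake_g_per_kg(food_g_per_day, body_weight_kg):
    """Food intake normalized to body weight (g/kg bw/day)."""
    return food_g_per_day / body_weight_kg

infant = intake_g_per_kg(food_g_per_day=150.0, body_weight_kg=8.0)
adult = intake_g_per_kg(food_g_per_day=120.0, body_weight_kg=70.0)

print(round(infant, 1))          # g/kg bw/day for the infant
print(round(adult, 1))           # g/kg bw/day for the adult
print(round(infant / adult, 1))  # fold-difference on a body-weight basis
```

Here the infant consumes only slightly more food in absolute grams, yet the intake per kilogram of body weight is roughly an order of magnitude higher, which is why consumption tables in this chapter are expressed in g/kg/day.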
Infants and children are unique not only in the amount of food they consume but also in the types and forms of food consumed. Table 9.2 provides some examples of the pattern of food consumption in various age groups. The data presented are based on the National Food Consumption Survey (NFCS) conducted by the United States Department of Agriculture (USDA) in 1987-88 (3 days of consumption data from approximately 10 000 individuals). The consumption rate shown in Table 9.2 is the average rate for foods that the surveyed individuals consumed on the days of surveying. These rates do not account for the days on which these foods are not consumed. Thus, they are not reflective of a long-term consumption pattern, since an individual usually does not consume these foods on all days for a prolonged period of time. The foods shown in Table 9.2 are among the top 20 commodities that have been determined by the US Food
Table 9.2 Food consumption rates by age group in a single day of eating in the USA(a)

Mean consumption rate (g/kg/day)(b)

Age group      Apples and juice   Peaches   Potatoes   Tomatoes   Green beans
< 1 year            17.7            6.4       2.0        4.0         9.4
1-6 years           10.0            1.9       3.8        6.9         2.2
7-12 years           3.4            1.0       2.5        4.3         2.8
13-19 years          1.8            0.6       1.8        3.0         1.1
20+ years            1.7            0.6       1.6        2.4         0.9

(a) Consumption rates reflect the rate of food intake on the days on which a food was consumed. Data based on the National Food Consumption Survey conducted by the United States Department of Agriculture in 1987-88.
(b) Grams of food per kg body weight per day.
and Drug Administration (FDA) as being the most frequently consumed fruits and vegetables (Food and Drug Administration, 1992). As shown in Table 9.2, the average consumption of potatoes varies approximately one- to two-fold among the age groups. On the other hand, the average consumption rate of apples and apple juice by infants is approximately 10-fold higher than that for adults. A further comparison of the forms of apple consumed revealed that approximately two-thirds of the difference in apple consumption by infants is due to the higher consumption of apple juice (canned, frozen), while only one-third is attributable to the consumption of apples (raw, baked, cooked, fried, canned, frozen).

Food consumption patterns also vary with other demographic variables such as race or ethnic background, season and geographic location. Using the same NFCS data from the USDA, the variation by race or ethnic background can be illustrated with the consumption of rice in the USA (Table 9.3). In this example, the US population is divided into four groups: 'Hispanics', 'Whites', 'Blacks' and 'Others'. Asians are included in the subpopulation of 'Others'. As shown in Table 9.3, consumption rates among children 1-12 years old varied by approximately three- to four-fold between the 'non-Hispanic Whites' and the 'non-Hispanic Others'. These data serve to illustrate that, when data are available, such demographic variables can be used for fine-tuning the estimates of exposure for infants and children.

It is appropriate to note that nursing infants are unique with regard to dietary exposure patterns. In addition to exposure to chemicals directly through the ingestion of table foods, they may also be exposed to the same chemicals through mother's milk. As outlined in Figure 9.1, lactation is a potential route of elimination for some chemicals, especially those that are highly lipid soluble. Environmental chemicals present in the diet (e.g. DDT, polybrominated biphenyls) have been detected in human milk (Berlin, 1992). The level of chemicals in human
Table 9.3 Rice consumption rates by ethnic background in a single day of eating in the USA(a)

Mean consumption rate (g/kg/day)(b)

                          Non-Hispanics
Age          Hispanics    Whites    Blacks    Others
< 1 year       0.9(c)       1.6      5.1(c)    2.1(c)
1-6 years      2.5          0.9      1.6       3.9
7-12 years     1.2          0.7      0.9       2.1
13-19 years    1.2          0.6      0.9       1.6
20+ years      1.1          0.5      0.9       0.6

(a) Consumption rates reflect the rate of food intake on the days on which a food was consumed. Data based on the National Food Consumption Survey (NFCS) conducted by the United States Department of Agriculture (USDA) in 1987-88.
(b) Grams of food per kg body weight per day.
(c) Small sample size in the survey (< 30).
milk can potentially be high enough to cause toxicity in infants. Incidents of poisoning have been reported among nursing infants whose mothers accidentally consumed grains treated with the fungicides hexachlorobenzene and methylmercury (World Health Organization, 1986). Although information on the levels of chemicals in human milk is rarely available, when there is an indication that a chemical or its toxic metabolite(s) may be present at significant levels in human milk, this component of dietary exposure should also be considered in the overall estimation of food chemical exposure for infants.

9.3 Implications for risk assessment

The differences between the young subpopulation and adults warrant special considerations for this subpopulation. The difficulty lies in the limitations of the data. Safety evaluation of food chemicals in infants and children can only be carried out to the extent that data are available. The implications for risk assessment are presented in relation to the four components of risk assessment. The first two components, hazard identification and dose-response assessment, are included in section 9.3.1.

9.3.1 Toxicological considerations
The inherent hazard posed by a food chemical can be identified in humans based on illness reports, epidemiological data and laboratory exposure studies. However, these data are generally limited. For obvious ethical reasons, laboratory exposure studies are few and the sample size is small. Epidemiological studies usually lack sufficient documentation on the level
of exposures. They are often compromised by many confounding factors and lack the statistical power to detect a low level of adverse effects. For these reasons, the toxicity of a chemical is identified largely from studies in laboratory animals. Pesticides as a group have the largest toxicological database. This is because a battery of toxicity tests in laboratory animals is usually required for the registration or approval of chemicals that are used on food or feed. These requirements are in general the same for the USA and the member nations of the Organization for Economic Cooperation and Development (OECD) and the European Union (EU) (General Accounting Office, 1993). A description of these toxicological studies can be found in the 1990 report on the toxicological methodology used by the Joint FAO/WHO Meeting on Pesticide Residues (World Health Organization, 1990).

Available data. There is currently no requirement for testing all aspects of toxicity (e.g. acute and subchronic toxicity, neurotoxicity) in young animals. However, some information on the toxicity to young laboratory animals can be obtained from multi-generation reproduction studies and chronic and/or oncogenicity studies. In these studies, neonatal and/or young laboratory animals are included as part of the study protocol. A brief discussion of the endpoints of toxicity and the limitations of these studies for the evaluation of risk to infants and children is presented. Developmental toxicities from prenatal exposures are discussed in section 9.4.

Reproduction toxicity studies for food chemicals typically entail exposing male and female laboratory animals, usually rats, to diets that contain the test chemical for approximately 8-10 weeks before mating. The treatment is continued throughout gestation and the production of two F1 litters (i.e. F1a and F1b). The reproduction cycle is repeated two or three times in multi-generation studies.
These studies are designed to provide information about the toxic effects on reproductive functions and outcomes. Although they also provide some information about toxicities resulting from prenatal and postnatal exposures, the current toxicity evaluation protocol is largely limited to the designated purpose of identifying endpoints of reproductive toxicity. Oncogenicity studies for food chemicals typically entail exposing male and female rodents (i.e. rats or mice) to diets that contain the test chemical for a large part of their lifetime (18 months to 2 years), starting at approximately 6-8 weeks of age. These studies are designed to test the potential for the development of neoplastic lesions or tumors within a lifespan. Although the administration of the test agent begins early in life, the starting point (i.e. 6-8 weeks of age) is considered to be approaching sexual maturity for rodents (Jacoby and Fox, 1984; Kohn and Barthold, 1984). Thus, the effects of a chemical during the stages of rapid growth and development leading up to sexual maturity are not included in the
test. Moreover, since these studies focus on toxicity after prolonged exposure, subtle changes, such as functional and neurobehavioral alterations, are generally not evaluated.

Data needs. Of particular concern are three areas of toxicity for which data are lacking for a more thorough assessment of risks in the younger subpopulations. These are neurotoxicity, immunotoxicity and the sensitivity to oncogenic effects during early life stages. Neurotoxicity of food chemicals is of great concern because some chemicals in food are known to have the potential to cause neurotoxicity in humans (United States Environmental Protection Agency, 1994). Pesticides, such as organophosphates, carbamates and organochlorines, are found to have structural and functional toxic effects on both the central and peripheral nervous systems. Naturally occurring chemicals in food also pose neurotoxicological problems. Examples of these chemicals are mycotoxins that cause ergotism and neurotoxic alkaloids from weeds (e.g. morning glories, jimson weed) inadvertently included in field crops during harvest (United States Environmental Protection Agency, 1994). Infants and children are generally considered high-risk subpopulations for neurotoxicological effects. One reason for this is that the metabolic capabilities of the young are still undergoing development. For example, in rats, age-related differences in oxidative detoxification ability appear to contribute to the higher sensitivity of the young to methyl parathion. The higher sensitivity in young rats is indicated by the lower LD50 values: the LD50 in 1-day-old rats is approximately 8-10-fold lower than the value for adult rats, and the LD50 for weanling rats is approximately two-fold lower than the value for adults (Brodeur and DuBois, 1963; Benke and Murphy, 1975).
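The age-related LD50 comparison above amounts to a simple ratio, sometimes used as a crude age-sensitivity factor. A minimal sketch follows; the LD50 values are hypothetical placeholders chosen only to be consistent with the fold-differences quoted in the text, not measured data for methyl parathion.

```python
# Crude age-sensitivity factor from acute LD50 data: the ratio of the adult
# LD50 to the juvenile LD50 (> 1 means juveniles are more sensitive).
# LD50 values below are hypothetical, for illustration only.

def sensitivity_factor(adult_ld50, juvenile_ld50):
    """Fold-difference in acute sensitivity relative to adults."""
    return adult_ld50 / juvenile_ld50

adult_ld50_mg_per_kg = 24.0     # hypothetical adult rat oral LD50
neonate_ld50_mg_per_kg = 3.0    # hypothetical 1-day-old rat LD50
weanling_ld50_mg_per_kg = 12.0  # hypothetical weanling rat LD50

print(sensitivity_factor(adult_ld50_mg_per_kg, neonate_ld50_mg_per_kg))   # 8.0
print(sensitivity_factor(adult_ld50_mg_per_kg, weanling_ld50_mg_per_kg))  # 2.0
```

Such single-endpoint ratios are only a rough screen; as the surrounding discussion notes, pharmacokinetic maturity, barrier development and the timing of exposure all bear on the true sensitivity of the young.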
In addition to the pharmacokinetic considerations, another cause for concern in the younger subpopulations is that chemicals can come in contact with the nervous system more readily when the blood-brain and blood-nerve barriers are incomplete during the early stages of life (United States Environmental Protection Agency, 1994). A third reason for concern is that damage to the nervous system while it is undergoing development and differentiation may have broader health implications. In light of the potential presence of neurotoxic chemicals in food, there is an obvious need for obtaining data to address the risk of neurotoxicity in infants and children. Another area of toxicological concern is immunotoxicity, an area of toxicology for which data are rarely available. Effects of xenobiotics on the immune system could be manifested by immunosuppression, hypersensitivity or autoimmunity. An altered defense mechanism against pathogens and neoplasia (e.g. lymphoma, leukemia) has been shown to be associated with the use of immunosuppressive drugs (Descotes and Vial, 1994; National Research Council, 1993). Immunotoxicity in young
subpopulations may be different from the toxicity in adults because the immune system is not fully developed until adolescence (National Research Council, 1993). Other than the dermal sensitization test, there is currently no general requirement for testing immunotoxicity for all pesticides. Neither is there a general requirement for conducting tests in young animals. There has been increasing discussion in the field of risk assessment regarding the need to obtain immunotoxicity data on chemicals in the environment. At the same time, it is also important that some issues of uncertainty be investigated regarding the use of animal studies in the assessment of risks in humans. One area of uncertainty is in defining the most suitable model for predicting the immunotoxicity in humans. The other area of uncertainty lies in the biological significance of some of the endpoints detected in immunotoxicity studies (Selgrade et al., 1995). Assays at the molecular or cellular levels usually produce the most sensitive endpoints. However, the significance of these endpoints to the overall expression of effects at the organ or tissue level is often difficult to define. Establishing quantitative relationships between these endpoints and the immune response at the tissue/organ or organism level (e.g. susceptibility to pathogens or neoplastic growth) is essential for a meaningful use of these endpoints in risk assessment. The third area of concern is the sensitivity to oncogenic effects during early life periods and the sensitivity specific to the age of initial exposures. An answer cannot be found through the typical lifetime oncogenicity studies as described in the previous section. Studies for elucidating the differential sensitivity during in utero and postnatal developmental periods would require special protocols. They may be lifetime studies that include exposures during these windows of development.
They may also be studies in which animals are treated exclusively during these windows, with the oncogenic outcomes evaluated at the end of the lifespan. McConnell (1992) conducted a literature-based investigation comparing the oncogenic potentials of more than 30 chemicals for which the above types of studies are available. It was concluded that the inclusion of perinatal exposures tends to increase the incidence and decrease the latency of tumor occurrence but does not uncover carcinogens or types of tumors that are not detected with the current protocol. However, without chemical-specific data, common weighting factors of sensitivity during early stages of life cannot be established for all chemicals.

9.3.2 Exposure assessment

Compared to the many uncertainties and information gaps concerning the sensitivity of infants and children, the exposure component of the risk assessment is an area with sufficient data to enable a reasonable analysis with respect to age. The exposure to food chemicals is a product of the
rate of food consumption over a specified period of exposure (e.g. a few days, a season or a year) and the concentration of chemical residue in foods. The exposure is generally expressed as milligrams of chemical per kilogram body weight per day (mg/kg/day).

Dietary exposure = Food consumption rate x Residue level

The computation of exposures to a chemical from consuming one food item is a rather simple task. However, it is important to point out that a food-borne chemical can potentially be present in more than one food. For instance, a pesticide can be used on many commodities. Food additives can also be present in more than one food item. Consequently, one can potentially be exposed to a chemical through eating a number of foods. As the number of food items increases, the computation of exposures becomes considerably more complex. The multiple iterations of the above equation at different residue and consumption levels for each individual in a population would require the use of a computer program. The safety evaluation of food chemicals should typically include several exposure scenarios: acute (within a few days), subchronic (seasonal or within a few months), chronic (1 to a few years) and lifetime. The exposure scenarios other than a lifetime are important but often neglected. It is essential that both components of the risk assessment, the toxicity and the exposure, be considered in determining what exposure scenarios should be evaluated. The acute exposure captures the potential risk for episodically high exposures (e.g. high end of exposures from a single day of food consumption). It is crucial for the safety evaluation of chemicals that are acutely toxic (e.g. neurotoxicity of organophosphates). The subchronic exposure reflects the higher consumption rate of foods that are consumed seasonally (e.g. summer fruits). The exposure from these foods would be higher during the season of consumption than if the exposure were averaged over a year.
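The per-person computation implied by the equation above can be sketched in a few lines. The food names, consumption amounts, residue levels and body weight below are purely illustrative, not values drawn from any survey cited in this chapter.

```python
# Hypothetical single-person dietary exposure, summed over several foods.
# All food names, amounts and residue levels are illustrative only.

foods = {
    # food: (consumption, g/day; residue, mg/kg of food)
    "apple juice": (250.0, 0.02),
    "apples":      (120.0, 0.05),
    "carrots":     (40.0,  0.01),
}
body_weight_kg = 10.0  # e.g. an infant

# mg of chemical ingested per day: (g food / 1000 -> kg food) x (mg/kg food)
daily_intake_mg = sum(
    (grams / 1000.0) * residue for grams, residue in foods.values()
)
exposure = daily_intake_mg / body_weight_kg  # mg/kg body weight/day
print(f"{exposure:.5f} mg/kg/day")
```

Iterating this calculation over each individual's consumption record, and over the residue level of each food, is what makes the population-wide assessment a task for a computer program rather than a hand calculation.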
This scenario is important for the evaluation of chemicals for which subchronic toxicity is of concern. The chronic exposure is important for evaluating the safety of chemicals that have the potential for adverse effects after a duration of exposure that is shorter than a lifetime (e.g. a few years). In all three of these scenarios, the exposure for each age group in a population should be addressed separately. Infants and children, because of their higher food consumption rates, are generally expected to have higher risk due to the higher exposure levels. In addition, it may also be necessary to address the exposure of women of childbearing age. This exposure scenario is particularly important for chemicals with demonstrated potential for developmental toxicity through in utero exposures.

Food consumption rate. The traditional human adult model used in exposure assessment is an individual of 60-70 kg body weight, breathing 20 m3
of air and drinking 2 litres of water a day, and consuming foods weighing about 2.5% of the body weight (World Health Organization, 1990). These parameters represent an approximation of an average person from 18 years of age to the expected lifespan of 70 years. Since adulthood constitutes approximately 75% (52 years in 70 years) of an average person's lifetime, an argument can be made that the exposure of an adult is a sufficient approximation of the lifetime average exposures of a person. For an exposure period shorter than a lifetime, the food consumption rates specific to infants and children should be applied. As pointed out previously, taking into account other demographic variables, such as season, race or ethnic background, and geographic location, provides further refinement in the evaluation of risk for infants and children.

Residue level. As with the toxicological data, pesticides comprise the single group of food chemicals for which residue data are most abundant. Two types of residue data have been used by the Department of Pesticide Regulation (DPR) within the Cal/EPA and the USEPA in the evaluation of the safety of pesticides in foods. The maximum residue limit (MRL) or tolerance has been used for estimating the theoretical maximum exposure. There may be a possibility that one may consume a single food or commodity at this level in one sitting or for a few days (i.e. acute exposures). However, when considering exposures from multiple foods, this approach tends to yield unrealistically high exposure estimates. This is especially true if there is evidence showing that chemical residues in foods ready for consumption rarely reach MRLs or tolerances. Such is the case with samples that were analyzed under the USDA Pesticide Data Program (PDP) (United States Department of Agriculture, 1995). In 1993, 7328 samples from 12 commodities originating in 38 states and 15 foreign countries were analyzed for residues of 58 pesticides.
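The time-weighting argument above (adulthood as roughly 75% of a 70-year lifespan) can be made concrete with a short calculation. The stage durations follow the 18 + 52 = 70 year split in the text; the stage-specific exposure values are hypothetical, chosen only to show how a higher childhood exposure pulls the lifetime average above the adult-only figure.

```python
# Time-weighted lifetime average exposure from stage-specific exposures.
# Durations follow the 18 + 52 = 70 year split; exposures are hypothetical.
stages = [
    ("infant/child (0-18 y)", 18, 0.004),  # higher intake per kg body weight
    ("adult (18-70 y)",       52, 0.001),
]
lifetime_years = sum(years for _, years, _ in stages)  # 70
lifetime_avg = sum(years * exp for _, years, exp in stages) / lifetime_years
print(f"lifetime average: {lifetime_avg:.5f} mg/kg/day")
```

Even with adulthood dominating the lifespan, the lifetime average here exceeds the adult-only value, which is why consumption rates specific to infants and children matter for any exposure period shorter than a lifetime.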
At the 90th percentile of the residue distribution, only 12% of the commodities contained residues exceeding 10% of the tolerance. A similar pattern has been demonstrated in the monitoring programs of the DPR within the Cal/EPA. Approximately 10 000 samples are collected each year in the two DPR programs (i.e. the Marketplace Surveillance and Priority Pesticide Programs). The yearly data from 1991 to 1993 showed that less than 13% of the samples contained residues above 10% of tolerances (California Environmental Protection Agency, 1993, 1994b, 1995). The alternative to the use of MRLs is to use data from residue-monitoring or field studies. With sufficient sample sizes, proper representation in sampling and adequate detection limits, monitoring data provide a more realistic assessment of human exposures, especially for estimating long-term exposures. However, as with the food consumption data, the availability and the extent of pesticide residue monitoring vary from country to country. The recent General Accounting Office (GAO) survey
showed that OECD nations' residue survey programs generally targeted imported foods, with less emphasis being given to exported or domestically grown foods (General Accounting Office, 1993). Programs in the USA (e.g. federal programs under the FDA and USDA, and some state programs) appear to be relatively more extensive. The fresh-produce-monitoring programs in the State of California, in particular, have substantially large sample sizes (approximately 10 000 samples per year). These data are routinely used in dietary exposure assessments by the DPR (California Environmental Protection Agency, 1994a). In general, the upper statistical bound or the highest residue level from the monitoring data is used by the DPR in estimating the acute dietary exposures. For evaluating the safety of pesticides in the diets of infants and children, it is also important to note that infants generally have a higher consumption of processed foods (e.g. baby food, canned juice). Food processing may increase the residue level through loss of water or concentration. Food processing can also convert chemicals to degradation product(s) that can be more or less toxic than the parent compound. Processing can reduce the residue level of a chemical through dilution or partitioning (e.g. into milk fat). Simple food preparation (e.g. peeling, washing) can reduce the total residue level or remove inedible portions that may contain a greater concentration of residues. When possible, these factors should be taken into account in the overall estimation of dietary exposures for all population subgroups, including infants and children (California Environmental Protection Agency, 1994a).

Dietary exposure assessment. Two approaches, point estimate and distributional, can be used to characterize the dietary exposures to chemicals.
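The tolerance-exceedance summaries quoted above for the PDP and DPR programs amount to a simple computation over a set of monitoring samples. The tolerance and the sample residues below are synthetic illustrations, not data from either program.

```python
# Fraction of monitoring samples with residues above 10% of the tolerance,
# the summary statistic quoted for the PDP and DPR programs above.
# The tolerance and sample residues (mg/kg) are synthetic illustrations.
tolerance = 1.0
samples = [0.0, 0.01, 0.02, 0.05, 0.08, 0.12, 0.30, 0.02, 0.0, 0.04]

frac_over_10pct = sum(r > 0.1 * tolerance for r in samples) / len(samples)
print(f"{100 * frac_over_10pct:.0f}% of samples exceed 10% of tolerance")
```

When most samples fall well below the tolerance, as here, exposure estimates built on the MRL itself overstate the residues actually reaching consumers.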
In the point estimate approach, the population distribution of exposures is computed based on the distribution of consumption rates in a population while holding the residue level at a fixed value (i.e. point estimate). The point estimate could be the MRL or tolerance. It could also be the high end or the central tendency of the residue profile. As discussed above, using a data point from the residue profile, rather than the MRL, would yield a more realistic exposure profile. A single estimated exposure may subsequently be chosen from the exposure profile to characterize the exposures. For example, the current practice of the DPR is to characterize the acute exposure of each population subgroup (a total of up to 16 groups based on age, ethnicity and season) at its 95th percentile of exposure among individuals who consume the foods under analysis. The chronic exposure is based on the average exposures of each population subgroup, including both days on which the foods are consumed and days on which they are not. It is apparent that the point estimate approach is somewhat limited in providing a close representation of the actual exposure profile for a population. This is because the residue
level in foods ready for consumption is likely to vary from day to day, depending on the source or batch of food. The residue levels of a number of foods consumed are also different for each individual. The alternative is the distributional approach using a stochastic analysis (e.g. Monte Carlo simulation) that takes into account the entire distribution of the residue levels (National Research Council, 1993). This approach has the advantage over the point estimate in avoiding the repeated use of high values (e.g. using MRLs for residue level for all foods) that produces unrealistic exposure estimates. It also includes the plausible high ends of exposure which may otherwise be omitted when a central tendency of a residue profile is used in the point estimate approach. The distributional approach yields a realistic profile of human exposures. However, its application requires a large database on consumption and residue distributions. It also involves extensive computer programming and computation time.

9.3.3 Risk characterization

It is essential that all available data are utilized in obtaining the most realistic estimates of risk for infants and children. Erroneous conclusions from either over- or underestimation of risk could have undesirable effects on these younger subpopulations. An underestimation of risk could lead to inadequate protection of infants and children from the risk of food chemicals. On the other hand, an overestimation of risk could result in unnecessarily stringent regulations that may ultimately reduce the availability of food or cause foods to be less affordable. These results can negatively affect the health of infants and children. In risk assessment, it is often necessary to extrapolate toxicological data from laboratory animals to humans. Unless otherwise indicated, it is assumed that the mechanism of action of a particular chemical operating in animals is also applicable to humans.
As mentioned earlier, adjustment of the biologically effective dose may be possible between animals and humans when there are sufficient data for physiologically based pharmacokinetic modeling. However, it is not generally known how the dose-response relationship differs between animals and humans. For a non-oncogenic endpoint, it is generally assumed that humans can be as much as 10-fold more sensitive than laboratory animals. It is further assumed that highly sensitive individuals can be up to 10-fold more sensitive than an average individual (World Health Organization, 1994). Thus, an uncertainty factor (UF) of 10 is applied for each of the interspecies and interindividual extrapolations of the dose-response relationship. Alternatively, a quantitative approach may be taken for oncogenic effects, especially with chemicals for which there is sufficient weight of evidence for oncogenic potential in humans. The oncogenic potential of a chemical is determined based on an evaluation of the
overall available evidence. This includes not only the observations made directly in humans and from bioassays in laboratory animals, but also any supporting data from which the oncogenic potential can be inferred. With regard to the evidence in animal bioassays, greater weight is given to chemicals demonstrated to cause tumors that are malignant, in more than one tissue/organ site, in more than one test species/strain and/or gender, and in bioassays conducted by more than one group of investigators. With regard to the supporting data, greater weight is given to chemicals demonstrated to be genotoxic and/or to have positive structure-activity relationships to other chemical(s) with known oncogenic potential. The quantitative approach assumes that a biological threshold does not exist for oncogenic effects. Any incremental increase of dose would result in a proportional increase in the probability of risk. In this approach, the dose-response relationship is described mathematically by fitting a mathematical model to a set of tumor incidence data. The dose level in animals is scaled to humans, assuming the interspecies equivalence of dose based on body weight to the three-fourths power (United States Environmental Protection Agency, 1992). The upper bound of the slope at the low-dose range (i.e. Q1*) of the mathematical model is generally considered to be adequate to account for the sensitivity in a human population. As discussed earlier, there is a lack of information on the comparative sensitivity between young and mature laboratory animals. A definitive conclusion cannot be drawn regarding the adequacy of the combined 100-fold (two 10-fold factors) UF in accounting for toxicity that has not been tested in young animals. There is also no general information regarding how well young laboratory animals model the response of young humans.
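The two default conventions described above, the combined 100-fold uncertainty factor for non-oncogenic endpoints and the body-weight^(3/4) dose scaling for oncogenic endpoints, can be sketched numerically. All dose and body-weight values below are illustrative.

```python
# Non-oncogenic endpoint: divide the animal NOAEL by two 10-fold
# uncertainty factors (interspecies x interindividual).
noael = 5.0                    # mg/kg/day, hypothetical animal NOAEL
uf = 10 * 10                   # combined 100-fold UF
acceptable_intake = noael / uf # 0.05 mg/kg/day

# Oncogenic endpoint: scale a per-kg animal dose to a human-equivalent
# per-kg dose.  If total dose is equivalent on a body weight^(3/4) basis,
# dose per kg scales as body weight^(-1/4), giving the factor below.
bw_animal, bw_human = 0.35, 70.0   # kg; rat vs adult human, illustrative
animal_dose = 1.0                  # mg/kg/day
human_equivalent_dose = animal_dose * (bw_animal / bw_human) ** 0.25
print(acceptable_intake, human_equivalent_dose)
```

For these numbers the human-equivalent dose is roughly a quarter of the animal dose per kilogram, reflecting the slower per-kg metabolism assumed for the larger species.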
It is believed that there are more similarities in the toxicological responses between adult animals and humans than between young animals and humans (National Research Council, 1993). The developmental ages during the early periods of life differ between young animals and young humans. Laboratory animals are less mature at birth than humans but have a faster rate of development after birth (World Health Organization, 1986; National Research Council, 1993). These differences may result in significant differences in the pharmacokinetics and pharmacodynamics between young animals and young humans and introduce more potential uncertainty in the interspecies extrapolation of toxicological data (National Research Council, 1993; United States Environmental Protection Agency, 1994). The National Research Council (NRC), in its review of pesticide risk assessment in the diets of infants and children, recommended areas for improving the assessment of food safety for the younger subpopulations (National Research Council, 1993). The direction is clearly to obtain more comprehensive information about the sensitivity, toxicity and exposures of pesticides in these younger subpopulations. Recently, the Cal/EPA Pesticide Exposure to Children Committee
(PECC) evaluated the adequacy of the current risk assessment methodology as described in this chapter for the assessment of the safety of pesticides in foods for infants and children (California Environmental Protection Agency, 1994a). The methodology includes the use of all the available toxicological data, the use of the high end of exposures (e.g. 95th percentile of exposures) of each population subgroup (including infants and children) and the application of the current assumptions for characterizing the risks based on oncogenic and non-oncogenic endpoints. The PECC concluded that, given the available databases (e.g. toxicology, residue, consumption), the methodology is adequate for protecting infants and children from the risk of pesticides in foods (California Environmental Protection Agency, 1994a). This same methodology can also be used for the safety evaluation of other chemicals in foods for infants and children. As more data on sensitivities of infants and children become available, especially in the area of developmental neurotoxicity, immunotoxicity and oncogenic susceptibility during the early stages of life, they should be used to provide a more accurate evaluation of the risk of food chemicals to these subpopulations.
9.4 Other considerations

Two additional issues pertinent to the overall evaluation of the food chemical safety of infants and children are presented. The first issue is the developmental effects from in utero exposures. The development of an individual is a continuum that starts at the point of conception. Therefore, developmental effects through in utero exposures can affect the health of an individual after birth and/or influence the response of infants and children to further exposures to xenobiotics subsequent to birth. The second issue is the exposure to a multiplicity of chemicals. Although the risk of food chemical exposures is often evaluated for each individual chemical, in reality humans can be exposed to many chemicals in food. The pertinence of this issue is not limited to a particular population subgroup and it is certainly relevant to the evaluation of the overall safety of food for these younger subpopulations.

9.4.1 In utero exposures

The toxicity of chemicals to the developing fetus has been routinely studied for pesticides. Developmental toxicity studies typically entail exposing pregnant laboratory animals (e.g. rats, rabbits) to the test chemical orally during the period of organogenesis. The experiment is terminated 1 day before parturition and the fetuses are examined for structural abnormalities. These studies are designed to identify potential structural abnormalities in fetuses arising during the prenatal developmental period. The scheduled termination of pregnancy, however, does not permit the evaluation of developmental effects that may be manifested later in life. Historical incidents of health effects detected later in life due to in utero exposures give rise to concerns about the lack of postnatal toxicity evaluation. One well-known example is the occurrence of genital tract abnormalities and cancer in young women who were exposed to diethylstilbestrol in utero (Poskanzer and Herbst, 1977). Without including a thorough postnatal evaluation of functional and developmental effects throughout the maturation period, the current developmental toxicity study protocol is insufficient to address all aspects of developmental effects. More recently, the importance of testing for developmental neurotoxicity has also been recognized. Developmental neurotoxicity studies in animals investigate the various aspects of neurotoxicity (e.g. functional, behavioral, histopathological) manifested in neonates and young animals that have received exposures in utero and through the mother's milk. Currently, developmental neurotoxicity studies may be required for pesticides that are demonstrated to have neurotoxic potential in mammals (e.g. rodents). As these data become available, the problems of in utero exposures can be more thoroughly addressed.

9.4.2 Multiple chemical exposures

It is apparent that humans are likely to be exposed to more than one chemical in foods. However, most food safety evaluations generally address only the risk of a single chemical. The approach to addressing the risk of concomitant exposures to a number of chemicals has been a subject of much discussion (National Research Council, 1993). The difficulties in addressing the issue of multiple chemical exposures lie in the lack of information on how chemicals may interact in the body and how to realistically assess the exposure.
The common approach to assessing the toxicity of a mixture of chemicals is the use of the toxicity equivalence factor (TEF). The TEF is an index of the comparative toxicity of one chemical to the lead or prototype chemical that has a common mechanism of action or a demonstrated structure-activity relationship. Without information that would indicate otherwise, the general underlying assumption of the TEF approach is that the overall effects of all the chemicals under consideration are additive. The TEF approach has been used in assessing the risk of exposures to mixtures of polychlorinated dibenzodioxin (PCDD) and dibenzofuran (PCDF) congeners in the environment (United States Environmental Protection Agency, 1989). These congeners are expected to have the same mechanism of action (e.g. receptor binding). The TEFs for the congeners are developed based on an extensive evaluation of
human data, carcinogenicity and reproductive studies, and in vitro tests. The TEF approach has also been used for other groups of chemicals (e.g. polychlorinated biphenyls) and is suggested for use in the risk evaluation of more than one organophosphate pesticide in foods (National Research Council, 1993). One of the limitations of the TEF approach is that it makes no provision for chemical interactions other than additivity (i.e. synergistic or antagonistic interactions). Uncertainties also exist concerning whether a TEF developed based on one type of toxicity (e.g. LD50) would be adequate for use in addressing another type of toxicity (e.g. oncogenicity) of the same group of chemicals when the mechanisms of the two types of toxicity are not clearly known to be the same. The greater difficulty in evaluating the risk of chemical mixtures in food is in the definition of an exposure scenario. It is nearly impossible to realistically define the number of chemicals and their respective concentrations in foods. For pesticides with known application patterns, it may be possible to simulate an exposure scenario. However, the complexity of the possible combinations of scenarios and the implications of the model outcome are still largely undefined. This is an area that requires more research and clarification. Meanwhile, the reality of exposure to multiple chemicals should be considered in the overall evaluation of food safety for all population subgroups, including infants and children.
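The additive TEF calculation described above reduces to weighting each congener's concentration by its toxicity relative to the prototype chemical (TEF = 1.0) and summing. The congener names, TEFs and concentrations below are hypothetical; only the arithmetic follows the approach in the text.

```python
# Additive TEF calculation: each congener's concentration is weighted by
# its toxicity relative to the prototype chemical (TEF = 1.0), then summed.
# Congener names, TEFs and concentrations are hypothetical.
tef = {"congener_A": 1.0, "congener_B": 0.1, "congener_C": 0.01}
conc = {"congener_A": 0.5, "congener_B": 4.0, "congener_C": 20.0}  # pg/g

toxic_equivalents = sum(tef[c] * conc[c] for c in tef)  # pg TEQ/g
print(f"{toxic_equivalents:.2f} pg TEQ/g")
```

The resulting toxic-equivalent concentration is then compared against the toxicity criteria established for the prototype chemical, which is exactly where the approach's additivity assumption carries all the weight.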
9.5 Conclusion

Are current risk assessment practices adequate for protecting infants and children from the risk of chemicals in foods? Is the current uncertainty factor of 10 adequate for the inter-individual differences in sensitivity? These are difficult questions. Admittedly, data gaps exist regarding the sensitivity of the young subpopulations and for some specific toxicity endpoints. Greater assurance of protecting the young can only be achieved through further research and data collection. Further toxicity testing should include studies on the relative sensitivity of young and maturing animals, particularly in areas for which toxicological data are lacking. These areas include neurotoxicity, immunotoxicity, and sensitivity to oncogenic effects. In addition, the effects of in utero exposures to chemicals should be studied beyond the postnatal stage and throughout the maturation period. Basic research and data compilation should be undertaken to collect information on age-specific physiological parameters in laboratory animals and in humans. This information would serve to bridge the gap between the species and allow for a more accurate prediction of risk to humans. Young subpopulations tend to have higher exposures due to the greater amount of food consumed per unit body weight. However, data are lacking
on the residue levels of chemicals in foods and forms of foods that are most consumed by the young subpopulations, and for a comprehensive characterization of the consumption pattern. The benefit and practicality of establishing a residue data repository for a better characterization of dietary exposures should be evaluated. Guidance for evaluating the risk of exposures during the period of growth and development should be established. This includes the incorporation of physiological parameters, to the extent possible, in modeling the dose-response relationships. As data on inter-individual sensitivity become available, the conventional use of uncertainty factors and the interspecies extrapolation factor should be evaluated for their adequacy to protect sensitive subpopulations against the risk of chemical exposures from foods. Research is needed to characterize the extent of exposures to food chemicals through human milk. The approach to estimating the overall risk of exposures to more than one chemical should be based on realistic and defined patterns of co-existence of chemicals in food.
References

Benke, G.M. and Murphy, S.D. (1975) The influence of age on the toxicity and metabolism of methyl parathion and parathion in male and female rats. Toxicology and Applied Pharmacology, 31, 254-269.
Berlin, C.M. Jr (1992) The excretion of drugs and chemicals in human milk. In: Yaffe, S.J. and Aranda, J.V. (eds) Pediatric Pharmacology, Therapeutic Principles in Practice. Saunders Company, Philadelphia, pp. 205-211.
Blumer, J.L. and Reed, M.D. (1992) Principles of neonatal pharmacology. In: Yaffe, S.J. and Aranda, J.V. (eds) Pediatric Pharmacology, Therapeutic Principles in Practice. Saunders Company, Philadelphia, pp. 164-177.
Brodeur, J. and DuBois, K.P. (1963) Comparison of acute toxicity of anticholinesterase insecticides to weanling and adult male rats. Proceedings of the Society for Experimental Biology and Medicine, 114, 509-511.
California Environmental Protection Agency (1993) Residues in Fresh Produce - 1991. Cal/EPA, Department of Pesticide Regulation, Sacramento, California.
California Environmental Protection Agency (1994a) A Joint Review of Existing Federal and State Pesticide Registration and Food Safety Programs, A Report to the California Legislature by the Pesticide Exposure to Children Committee (PECC). Cal/EPA, Department of Pesticide Regulation, Sacramento, California.
California Environmental Protection Agency (1994b) Residues in Fresh Produce - 1992. Cal/EPA, Department of Pesticide Regulation, Sacramento, California.
California Environmental Protection Agency (1995) Residues in Fresh Produce - 1993. Cal/EPA, Department of Pesticide Regulation, Sacramento, California.
Cohen, M.S. (1987) Special aspects of perinatal and pediatric pharmacology. In: Katzung, B.G. (ed.) Basic and Clinical Pharmacology, 3rd edn. Appleton and Lange, Los Altos, California.
Davis, J.M. (1990) Risk assessment of the developmental neurotoxicity of lead. Neurotoxicology, 11, 285-292.
Descotes, J. and Vial, T.H.
(1994) Immunotoxic effects of xenobiotics in humans: a review of current evidence. Toxicology in Vitro, 8, 963-966.
Food and Drug Administration (1992) Food labeling. Federal Register, 59(45), 8174-8175.
General Accounting Office (1993) Pesticides, A Comparative Study of Industrialized Nations' Regulatory Systems. United States General Accounting Office, Washington, DC.
Guzelian, P.S., Henry, C.J. and Olin, S.S. (eds) (1992) Similarities and Differences between Children and Adults, Implications for Risk Assessment. International Life Sciences Institute Press, Washington, DC.
Jacoby, R.O. and Fox, J.G. (1984) Biology and diseases of mice. In: Fox, J.G., Cohen, B.J. and Loew, F.M. (eds) Laboratory Animal Medicine. Academic Press, Inc., San Francisco, pp. 31-90.
Kauffman, R.E. (1992a) Drug therapeutics in the infant and child. In: Yaffe, S.J. and Aranda, J.V. (eds) Pediatric Pharmacology, Therapeutic Principles in Practice. Saunders Company, Philadelphia, pp. 212-219.
Kauffman, R.E. (1992b) Acute acetaminophen overdose: an example of reduced toxicity related to developmental differences in drug metabolism. In: Guzelian, P.S., Henry, C.J. and Olin, S.S. (eds) Similarities and Differences between Children and Adults, Implications for Risk Assessment. International Life Sciences Institute Press, Washington, DC, pp. 97-103.
Kohn, D.F. and Barthold, S.W. (1984) Biology and diseases of rats. In: Fox, J.G., Cohen, B.J. and Loew, F.M. (eds) Laboratory Animal Medicine. Academic Press, Inc., San Francisco, pp. 91-120.
Levine, R.R. (1990) Pharmacology: Drug Actions and Reactions, 4th edn. Little, Brown and Company, Boston.
McConnell, E.E. (1992) Comparative responses in carcinogenesis bioassays as a function of age at first exposure. In: Guzelian, P.S., Henry, C.J. and Olin, S.S. (eds) Similarities and Differences between Children and Adults, Implications for Risk Assessment. International Life Sciences Institute Press, Washington, DC, pp. 66-78.
National Research Council (1993) Pesticides in the Diets of Infants and Children. National Academy Press, Washington, DC.
Poskanzer, D. and Herbst, A. (1977) Epidemiology of vaginal adenosis and adenocarcinoma associated with exposure to stilbestrol in utero. Cancer, 39, 1892-1895.
Radde, I.C. (1985) Mechanisms of drug absorption and their development. In: MacLeod, S.M. and Radde, I.C.
(eds) Textbook of Pediatric Clinical Pharmacology. PSG Publishing Company, Inc. Littleton, Massachusetts, pp. 17-31. Selgrade, M.K., Cooper, K.D., Delvin, R.B. et al. (1995) Immunotoxicity - bridging the gap between animal research and human health effects. Fundamental and Applied Toxicology, 24, 13-21. United States Department of Agriculture (1995) Pesticide Data Program, Annual Summary Calendar Year 1993. USDA Agricultural Marketing Service, Washington, DC. United States Environmental Protection Agency (1989) Interim Procedures for Estimating Risks Associated with Exposures to Mixtures or Chlorinated Dibenzo-p-Dioxins and Dibenzofurans (CDDs and CDFs) and 1989 update. PB 90-145756. USEPA, Washington, DC. United States Environmental Protection Agency (1992) Draft report: a cross-species scaling factor for carcinogen risk assessment based on equivalence of mg/kg3/4/day; notice. Federal Register, 57(109), 24152-24173. United States Environmental Protection Agency (1994) Final report: principles of neurotoxicity risk assessment; notice. Federal Register, 59(158), 42360-42404. Yaffe, SJ. and Aranda, J.V. (1992) Introduction and historical perspectives. In: Yaffe, SJ. and Aranda, J.V. (eds) Pediatric Pharmacology, Therapeutic Principles in Practice. Saunders Company, Philadelphia, pp. 3-9. World Health Organization (1986) Principles for evaluating health risks from chemicals during infancy and early childhood: the need for a special approach. International Programme on Chemical Safety Environmental Health Criteria 59. Geneva, World Health Organization. World Health Organization (1990) Principles for the toxicological assessment of pesticide residues in food. International Programme on Chemical Safety Environmental Health Criteria 104. Geneva, World Health Organization. World Health Organization (1994) Assessing human health risks of chemicals: derivation of guidance values for health-based exposure limits. 
International Programme on Chemical Safety Environmental Health Criteria 170. Geneva, World Health Organization.
10 Dietary chemoprevention in toxicological perspective

H. VERHAGEN, C.J.M. ROMPELBERG, M. STRUBE, G. van POPPEL and P.J. van BLADEREN
10.1 Introduction - nutrition and cancer
Nutrition is essential to support life, but at the same time it can paradoxically be considered a main cause of cancer. As concerns the latter, Doll and Peto (1981) estimated that in the USA approximately 30% of cancer deaths were attributable to diet. Indeed, on the one hand, food contains a wide variety of mutagens and/or carcinogens, some of which occur naturally and others that may be introduced during the preparation of food (Pariza et al., 1990; Wakabayashi et al., 1991), whereas, on the other hand, the human diet also contains a number of compounds that protect against cancer (Birt and Bresnick, 1991; Stich, 1991; Dragsted et al., 1993; Verhagen et al., 1993). This agrees closely with epidemiological findings of negative associations between cancer and the consumption of fibre-containing foods, fresh fruits, vegetables, vitamins and minerals (Archer, 1988; Birt and Bresnick, 1991; Steinmetz and Potter, 1991a,b). Both of these qualitative findings carry a sense of truth, and it appears possible to decrease or increase our cancer risk by taking the appropriate dietary measures. Many compounds of dietary origin have been claimed to have chemopreventive potential, and chemoprevention of cancer is therefore an area of great scientific, public and economic interest. Nowadays, many 'functional foods', 'designer foods' and 'nutraceuticals' are being developed and brought to market (Caragay, 1992; Blenford, 1994); these may become a new generation of foods that protect humans against cancer and other degenerative diseases.

In this chapter a risk assessment of genotoxic and non-genotoxic carcinogens is given, followed by a short survey of genotoxic, carcinogenic and chemopreventive dietary constituents. These categories of bioactive dietary constituents are then discussed in the light of what has been learned from the established sciences of pharmacology and toxicology. The main focus of the chapter is on mechanisms of action, and on tiered test strategies to discover truly beneficial compounds. Finally, a series of caveats is given that should be taken into account when making a health claim for a particular food (ingredient). Despite these caveats, it will be shown that chemoprevention in humans is feasible under normal dietary conditions.
10.2 Risk assessment of carcinogens
Risk assessment of carcinogenic substances is often based on the underlying mechanism: a distinction is made between carcinogens with (1) a stochastic and (2) a non-stochastic mode of action. Stochastically acting carcinogens are capable of inducing irreversible structural changes in DNA with a self-replicating effect (i.e. they are genotoxic). These carcinogens are considered to have no threshold dose for their initiating effect (i.e. they are complete carcinogens). In contrast, carcinogens acting by a non-stochastic mechanism have a mode of action that is regarded as reversible, implying a threshold dose at and below which no carcinogenic potential exists. Examples of the latter are tumour promoters and co-carcinogens acting through hormonal disturbance, non-specific microsomal enzyme induction, or suppression (or overstimulation) of the immune system. The terms 'genotoxic' and 'non-genotoxic' are used to distinguish these two classes of carcinogens. Often the term 'mutagenic' is used instead of 'genotoxic', although strictly speaking this is not correct: mutagenicity refers to a structural modification of the DNA that cannot be repaired correctly, whereas genotoxicity is a somewhat broader term that also includes effects such as DNA binding, DNA repair and DNA breakage, which do not necessarily lead to non-repairable DNA lesions. For toxic effects both 'genotoxicity' and 'mutagenicity' are used, whereas in chemoprevention the term 'antimutagenicity' prevails over 'antigenotoxicity'. The decision on whether a carcinogen is capable of initiation, i.e. is genotoxic, is considered crucial for risk assessment, since genotoxicity is regarded as an intrinsic property of chemicals that may be relevant at all exposure levels. It should be stressed, however, that any risk extrapolation procedure leads to an 'estimate' of cancer risk that cannot be verified: cancer risk assessment is not as scientific an exercise as one would like it to be.
In the quantitative risk evaluation, the differences in mechanism are accounted for. For stochastically acting compounds a non-threshold extrapolation method is used to estimate the cancer risk associated with a certain dose of the carcinogen. For non-stochastically acting compounds the NOAEL-SF (no observed adverse effect level - safety factor) approach is appropriate for estimating safe doses for human exposure.

10.2.1 Threshold approach for non-genotoxic carcinogens
In health risk assessment, safe levels for human exposure to chemicals are derived from dose-response data. It is assumed, and generally accepted, that each compound has a threshold dose at and below which no toxic effect will occur. This basic principle of toxicology was introduced by the godfather of toxicology, Paracelsus, in the 16th century. Toxicity is an intrinsic property of each chemical, and so every compound is toxic at some dose. In toxicology this threshold is referred to as the 'no observed adverse effect level' (NOAEL); only when this threshold dose is exceeded may toxicity become manifest. The NOAEL is commonly determined in studies with experimental animals: several doses are tested, ranging from clearly toxic doses down to doses with no apparent toxicity, the NOAEL. It is assumed that the established NOAEL in mg/kg body weight in animals is also a NOAEL in humans. Subsequently, a potentially safe level for human exposure is calculated by dividing the NOAEL by a 'safety factor' (SF), e.g. 100, to account for possible intra- and interspecies differences. This NOAEL-SF approach is common practice in toxicology in general. It is applicable for determining safe levels of human exposure both to non-carcinogens and to non-genotoxic carcinogens. For instance, the artificial sweetener and non-genotoxic (bladder) carcinogen sodium saccharin is allowed for food use: an acceptable daily intake (ADI) has been calculated by the NOAEL-SF approach (ADI = NOAEL/SF).

The NOAEL-SF approach can easily be applied to synthetic compounds. It is more difficult for naturally occurring substances in the diet, however, because dietary exposure is largely unavoidable. For non-nutrients, often little or no toxicity data are available to establish a NOAEL, while for nutrients there is another important factor: nutrients are necessary ingredients of the diet, and a certain amount of each nutrient is needed to sustain life. Therefore, for nutrients 'recommended daily allowances' (RDA) have been set, and the NOAEL-SF approach is generally unrealistic.
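As a minimal numerical illustration of the ADI = NOAEL/SF arithmetic described above (the NOAEL value below is invented for the example, not a regulatory figure):

```python
def acceptable_daily_intake(noael: float, safety_factor: float = 100.0) -> float:
    """ADI = NOAEL / SF, both in mg/kg body weight per day.

    The conventional SF of 100 is the product of a factor of 10 for
    interspecies differences and a factor of 10 for intraspecies
    (human) variability.
    """
    return noael / safety_factor

# Hypothetical NOAEL of 500 mg/kg bw/day from a chronic animal study:
adi = acceptable_daily_intake(500.0)
print(f"ADI = {adi} mg/kg bw/day")  # 5.0 mg/kg bw/day
```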
In some instances the margin of safety (beware: a 'margin of safety' is fundamentally different from a 'safety factor'!) between the RDA and the dose that elicits toxicity on chronic exposure may be very small (Table 10.1; Feron et al., 1990). Moreover, the NOAEL lies between this chronic toxic dose and the RDA, implying a very small SF.

Table 10.1 Recommended daily allowance versus chronic toxic dose, and the margin of safety for some nutrients

Nutrient          Required(a)   Toxic(a)   Margin of safety
Nicotinic acid    20            1000       50
Vitamin A         1.5           27         18
Selenium          0.05-0.15     1.5        10-30
Vitamin D         0.01          0.05       5
Fluorine          1             5          5
Sodium chloride   5000          10 000     2

(a) In mg per day per person (60 kg); these figures are rough estimates based on data from various sources; in particular, the figures for selenium and sodium chloride may have to be adjusted in view of recent data and their interpretation.
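The margin-of-safety column in Table 10.1 is simply the chronic toxic dose divided by the required amount; a quick sketch over the single-valued rows of the table reproduces it:

```python
# (required, toxic) in mg per day per 60 kg person, from Table 10.1
nutrients = {
    "nicotinic acid":  (20.0, 1000.0),
    "vitamin A":       (1.5, 27.0),
    "vitamin D":       (0.01, 0.05),
    "fluorine":        (1.0, 5.0),
    "sodium chloride": (5000.0, 10000.0),
}

# Margin of safety = chronic toxic dose / required daily amount
margins = {name: toxic / required for name, (required, toxic) in nutrients.items()}
for name, margin in margins.items():
    print(f"{name}: margin of safety = {margin:.0f}")
# reproduces the table values: 50, 18, 5, 5 and 2
```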
10.2.2 Non-threshold extrapolation for genotoxic carcinogens

The dose resulting in an acceptable risk level for a genotoxic carcinogen, generally one fatality in 1 000 000 over a lifetime, is often referred to as the 'virtually safe dose' (VSD). The major problem here lies in choosing an appropriate mathematical model to fit the experimentally established dose-response curve and to extrapolate to a dose that would produce a response of, for instance, 1 in 1 000 000 in the treated animals. Several mathematical models have been developed for estimating the cancer risks of exposure levels well below the levels for which test data are available. These models can be categorized into (1) tolerance distribution models and (2) mechanistic models (e.g. the one-hit model). The various models usually fit the observed data at high doses (resulting in high tumour incidences) equally well, but they can predict very different potential risks at low doses. The concept underlying the most conservative model, the one-hit model, is that a tumour can be induced by a single molecule of a carcinogen; this model is essentially equivalent to assuming that the dose-response curve is linear in the low-dose region. Evidence for linearity of the dose-response relationship at low doses, with no indication of any 'threshold', was obtained in a large chronic study in which 4080 rats were exposed to N-nitrosodiethylamine or N-nitrosodimethylamine and developed liver neoplasms (Peto et al., 1991). The one-hit method estimates the probability of cancer development, P(d), as a function of the dose d by linear extrapolation through the origin or the intercept (the background tumour incidence), using dose-response data or the lowest tumorigenic dose whenever possible. It is based on the equation P(d) = 1 - exp(-βd) (where β is a constant), which at low doses is approximated by P(d) ≈ βd.
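The one-hit extrapolation can be sketched numerically as follows; the bioassay data point is invented purely for illustration. β is fitted from a single observed dose-incidence pair, and the VSD is then obtained by inverting P(d) = 1 - exp(-βd) at the target lifetime risk, which at a risk of 1 in 1 000 000 agrees to within rounding with the linear approximation P(d) ≈ βd:

```python
import math

def fit_beta(dose: float, incidence: float) -> float:
    """One-hit model: P(d) = 1 - exp(-beta*d), so beta = -ln(1 - P)/d."""
    return -math.log(1.0 - incidence) / dose

def virtually_safe_dose(beta: float, risk: float = 1e-6) -> float:
    """Dose at which the one-hit model predicts the target extra risk."""
    return -math.log(1.0 - risk) / beta

# Hypothetical bioassay point: 50% tumour incidence at 10 mg/kg bw/day.
beta = fit_beta(10.0, 0.5)        # ~0.0693 per (mg/kg bw/day)
vsd = virtually_safe_dose(beta)   # exact inversion of the one-hit curve
vsd_lin = 1e-6 / beta             # low-dose linear approximation
print(f"beta = {beta:.4f}, VSD = {vsd:.3e} mg/kg bw/day")
```

At such small risks the exact inversion and the linear approximation differ only in the seventh significant digit, which is why the one-hit model is described as linear in the low-dose region.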
10.3 Genotoxic substances in the diet
Naturally occurring genotoxic agents in foods can be subdivided into three main classes: genotoxins of fungal origin, genotoxins of plant origin, and genotoxins formed during the preparation of foods (Wakabayashi et al., 1991). Mycotoxins are highly toxic compounds produced by fungi. Aflatoxins, for instance, are produced by Aspergillus flavus; biotransformation to an epoxide is required for their genotoxicity to become manifest. Aflatoxin B1, one of the various chemical congeners, has been classified by the IARC as a human carcinogen on the basis of the available toxicological and epidemiological information. Other examples of genotoxic mycotoxins are sterigmatocystin, zearalenone, ochratoxin A, fusarin C, and the trichothecene compound T-2 toxin.
Genotoxic substances naturally occurring in plants cover a large variety of chemicals, such as pyrrolizidine alkaloids (e.g. petasitenine in coltsfoot, symphytine in comfrey), aquilide A (bracken fern), hydrazine derivatives (edible mushrooms) and cycasin (a glucoside of the methylating agent methylazoxymethanol). Alkenylbenzenes and aldehydes are two groups of important flavouring substances. Alkenylbenzenes such as safrole (oil of sassafras) and estragole (oils of tarragon and sweet basil) are genotoxic and carcinogenic via reactive sulphate conjugates of their 1'-hydroxymetabolites; in contrast, their chemical congeners eugenol (oil of cloves) and trans-anethole (oil of anise) are not genotoxic and may even have anticarcinogenic potential (Rompelberg et al., 1993, 1995). Several aldehydes may constitute a dietary risk (acetaldehyde, crotonaldehyde, furfural), but for most dietary aldehydes no relevant data are available on either carcinogenicity or genotoxicity (Feron et al., 1991). Genotoxic and carcinogenic nitroso compounds (e.g. nitrosamines) are formed from the reaction of nitrosating agents (e.g. nitrite/nitrous acid) with nitrosatable compounds (e.g. the alkaloid gramine in malt, piperine in pepper, tyramine in soybean fermentation products, and indole compounds in cruciferous vegetables and in fava beans). Genotoxins can also be formed during the processing of foods. Nitrosamines, such as dimethylnitrosamine and N-nitrosopyrrolidine, have been detected in fried meat and fish. Heating and pyrolysis of many crude foods may result in the formation of genotoxic and carcinogenic polycyclic aromatic hydrocarbons (e.g. benzo[a]pyrene) and nitroarenes. Genotoxic carbonyl compounds such as glyoxal and methylglyoxal are found in coffee and several alcoholic beverages, as well as in bread, toast and soybean products.
The cooking and processing of meat-containing, protein-rich foods generates a number of (potent) genotoxic and carcinogenic heterocyclic amines, such as the quinolines 2-amino-3-methylimidazo[4,5-f]quinoline (IQ) and 2-amino-3,4-dimethylimidazo[4,5-f]quinoline (MeIQ), the quinoxalines 2-amino-3-methylimidazo[4,5-f]quinoxaline (IQx), 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx) and 2-amino-3,4,8-trimethylimidazo[4,5-f]quinoxaline (4,8-DiMeIQx), and the pyridines 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) and 2-amino-1,6-dimethylimidazopyridine (DMIP) (Skog, 1993).
10.4 Chemopreventive substances in the diet
Cancer chemoprevention can be defined as 'prevention of cancer by the administration of one or more chemical entities, either as individual drugs or as naturally occurring constituents of the diet' (Morse and Stoner, 1993). Two terms frequently used in connection with chemoprevention
Figure 10.1 Nutrition is essential for survival (a conditio sine qua non). Appropriate dietary measures can modulate our cancer risk downward by either decreasing the load of adverse substances and/or increasing the load of beneficial substances.
are 'antimutagen' and 'anticarcinogen'. The word antimutagen is the older term and is now used for factors that reduce the rates of spontaneous or induced mutagenesis by various modes of action. Kada et al. (1986) made a distinction among categories of antimutagens and introduced the terms 'desmutagen' and 'bioantimutagen'. Kada defined desmutagens as 'factors that act directly on mutagens or their precursors and inactivate them'; they act outside the cell. Bioantimutagens are defined as 'factors that act on repair and replication processes of the damaged DNA, resulting in decreases in mutation frequency', and act inside the cell. Crabtree (1947) defined an anticarcinogen as 'any factor which delays or prevents the emergence of malignant characters in any tissue of any species or organism'. Both synthetic and naturally occurring substances may possess chemopreventive potential. Potential chemopreventive agents of natural origin are to be found among both nutrients and non-nutrients. The public demand for an 'additive-free' and 'natural' diet directs the main interest towards naturally occurring chemopreventive agents.

10.4.1 Tiered approach for studying chemopreventive agents
The genotoxicity of a compound is generally tested using a tiered approach: short-term in vitro tests with prokaryotic or eukaryotic cell systems are performed first, e.g. the Ames test (Organization for Economic Cooperation and Development, 1983a, 1995a; Gatehouse et al., 1994), followed by short-term in vivo tests in experimental animals, e.g. the bone marrow micronucleus test (Organization for Economic Cooperation and Development, 1983b, 1995b; Hayashi et al., 1994). Depending on the results of the short-term genotoxicity tests, a long-term in vivo study in experimental animals (Organization for Economic Cooperation and Development, 1981) may be performed, in which the carcinogenic potential of a compound is established by lifetime exposure of experimental animals to various dose levels of the test compound up to some level of toxicity (Figure 10.2). Beyond these experimental studies, the most valuable data on carcinogenicity of, or chemoprevention by, dietary constituents and foods in humans come from studies in humans. These can be performed in two ways. The first involves epidemiology based on dietary questionnaires or biomarkers. The second involves the rapidly evolving area of experimental 'biomarker research' in humans, which links the sciences of toxicology and epidemiology (van Poppel et al., 1992c; Verhagen et al., 1993).

Figure 10.2 Tiered approach for studying mutagenicity, carcinogenicity and chemoprevention. For the assessment of genotoxic/carcinogenic potential the tiers run from short-term in vitro tests (prokaryotic or eukaryotic cell systems), via short-term and long-term in vivo tests in experimental animals, to short-term in vivo studies in man (biomarkers) and long-term in vivo studies in man (epidemiology). For chemoprevention the same tiers apply, preceded by rapid chemico-analytical methods (e.g. for assessing antioxidant potential), with the ultimate aim of preventing human cancer.

A biomarker is defined as a parameter at the biochemical, physiological, enzymic or cellular level that reflects some phase between external exposure and the eventual effect (disease), and includes factors that may modify the transitions between those phases (individual susceptibility, nutrition) (Figure 10.3). For the appropriate application of biomarkers it is necessary to have knowledge of the ethical and practical aspects of studies in humans, of the underlying biological mechanisms of chemoprevention, and of the intra- and interindividual variation of the selected biomarkers. If these requirements are met, there are good possibilities for the application of biomarkers in well-chosen study designs. All the experimental test systems for genotoxicity in vitro and in vivo and for carcinogenicity in vivo can equally well be applied to determine the chemopreventive properties of compounds, by studying the effect of the compound on the response to established genotoxic or carcinogenic agents (Figure 10.2). For the study of chemoprevention, however, it is not a prerequisite to start with in vitro screening and only perform in vivo or epidemiological studies thereafter. Indeed, epidemiological observation may yield valuable data on which compounds in our diet could be chemopreventive. In fact, this also holds true for carcinogenic agents, as is well illustrated by the role epidemiology played in the discovery of the (non-genotoxic!) carcinogen asbestos. In addition, for the assessment of chemopreventive agents some very rapid chemico-analytical methods are available, such as those for assessing antioxidant potential. Moreover, with respect to beneficial effects, it is possible to actively perform studies with humans who are voluntarily exposed to the compound or foodstuff under investigation.
In contrast, such volunteer studies cannot be performed when examining the effects of genotoxic or carcinogenic agents: these can only be studied in, for example, smokers or people who are occupationally exposed.
Figure 10.3 Classification of biomarkers from external exposure to disease (exposure → internal dose → biologically effective dose → early response → altered structure/function → disease), including the modulatory effects of individual susceptibility as well as the point at which chemopreventive substances (anticarcinogens) act.
10.4.2 Mechanisms of action

Carcinogenesis is a multi-stage process. In the simplest model of carcinogenesis the process is assumed to occur in two stages: initiation and promotion/progression. Initiation is the primary event, in which cellular DNA undergoes damage that remains unrepaired or becomes misrepaired. The resulting somatic mutation is reproduced at mitosis, giving rise to a clonal population of 'initiated cells'. Initiated cells do not inevitably give rise to a tumour; this requires 'promotion', a process which facilitates their further transformation to an invasive state ('progression'). Compounds which function as promoters are often mitogenic (rather than genotoxic), and may interfere with the expression of genes controlling differentiation, growth and immunomodulation. The mechanisms of chemopreventive agents are multiple (De Flora and Ramel, 1988; De Flora et al., 1993; Dragsted et al., 1993). The multi-stage nature of carcinogenesis raises the possibility of intervention at each stage of the process, and hence many modes of action for chemopreventive agents. Furthermore, the beneficial activity of these agents may depend on many unrelated factors and conditions; the effect could be the result of a single event or of several factors acting in concert. As a consequence there are many different classifications of the mechanisms of chemopreventive agents, e.g. those postulated by Hastings et al. (1976), Wattenberg (1985), Kada et al. (1986), Hartman and Shankel (1990), De Flora et al. (1993) and von Borstel and Hennig (1993). For a detailed description of the possible mechanisms of inhibition, the reader is referred to the reviews of De Flora and Ramel (1988), Kuroda (1990) and De Flora et al. (1993). The use of these different classifications makes the field of chemoprevention unnecessarily complicated.
To shed some light on this, an overview is given of the most commonly used classifications in chemoprevention and their mutual connections (Figure 10.4). It should be emphasized that the choice among the classifications in Figure 10.4 depends merely on the test system used: with short-term genotoxicity tests only antimutagenesis can be studied, whereas in long-term in vivo studies with experimental animals chemoprevention itself can be studied. We therefore consider any such classification artificial; what matters is the mechanism underlying a compound's chemopreventive action. Moreover, knowledge of this mechanism can enable one to judge whether a compound is suitable for cancer prevention in the general population or for cancer therapy: a compound that inhibits the formation of electrophilic intermediates is primarily suitable for cancer prevention in the general population, while a compound that prevents metastases is more suitable for cancer therapy.
10.4.3 Alteration of biotransformation capacity

An important mechanism underlying chemoprevention is alteration of biotransformation capacity. Organisms are exposed to a large number of xenobiotic compounds such as drugs, pesticides and natural food constituents. To deal with these usually lipophilic substances, a range of phase 1 and phase 2 biotransformation enzyme systems is available. In phase 1 a xenobiotic compound undergoes a functional transformation by oxidation, reduction or hydrolysis; of these, oxidation is the dominant reaction, catalysed by the cytochrome P450 mixed-function oxidase system. In phase 2 the xenobiotic or its metabolite is conjugated to an endogenous molecule; phase 2 can be divided into conjugations of electrophiles, catalysed by, for example, the glutathione S-transferases (GST) and epoxide hydrolase, and conjugations of nucleophiles, catalysed by sulphotransferases and glucuronyl transferases. The net result of biotransformation is a much more hydrophilic derivative, which can be excreted in urine or, via the bile, in the faeces.

Almost all of the xenobiotics to which humans are exposed, including the carcinogens, need metabolic activation, mostly by phase 1 enzymes. The reactive intermediates formed during metabolism are responsible for binding to cellular macromolecules such as DNA. In general, other biotransformation enzymes, mostly phase 2 enzymes, can detoxify these metabolites. Thus the concentration of the ultimate carcinogen, or toxicant in general, is the result of a delicate balance between the rate of activation and the rate of detoxication. Although the process of carcinogenesis is much more complex, interindividual differences in susceptibility are certainly also a result of interindividual differences in this balance between metabolic activation and detoxication. Differences in biotransformation enzyme levels between individuals can be of genetic or of environmental origin.
Inherited differences in biotransformation enzymes are a fact of life that cannot be altered. For instance, for GST class μ isozymes a clear polymorphism has been observed in humans: the GST μ isozyme was found to be expressed in only 60% of the samples analysed. As to acquired differences, nutrition plays an important role. In contrast to most micronutrients and macronutrients, non-nutritive dietary constituents are known to have striking effects on enzyme activity as well as on isozyme patterns. For instance, cytochrome P450 isozymes appear to be readily inducible; induction rates can be an order of magnitude or more. The best studied examples of non-nutritive dietary constituents inducing cytochrome P450 are the glucobrassicin products indole-3-carbinol, indole-3-acetonitrile and indole-3-carboxyaldehyde, which induce both hepatic and intestinal cytochrome P450 in rats. Of these
Figure 10.4 General scheme describing the multi-stage process of chemical carcinogenesis and overview of the most commonly used classifications in the field of chemoprevention.

The process of chemical carcinogenesis runs from exposure to the carcinogen/mutagen, via metabolism (detoxification and excretion), formation of electrophilic intermediates, covalent binding to DNA, RNA and proteins, and DNA repair, to either a normal cell or a permanent DNA lesion which upon DNA replication yields an 'initiated cell', followed by preneoplastic cells, neoplastic cells and metastases.

Classification of antimutagenesis and anticarcinogenesis, modified from De Flora and Ramel (1988) and De Flora et al. (1993):
1. Inhibition of uptake
2. Modification of transmembrane transport
3. Stimulation of trapping and detoxification in non-target cells
4. Inhibition of endogenous formation (inhibition of the nitrosation reaction; modification of the microbial intestinal flora)
5. Modulation of metabolism (inhibition of activation of promutagens/procarcinogens; induction of detoxifying mechanisms; stimulation of activation, coordinated with detoxification and blocking of reactive metabolites)
6. Blocking of, or competition with, reactive molecules (reaction of nucleophiles with electrophiles; scavenging of reactive oxygen species; protection of nucleophilic sites of DNA)
7. Modulation of DNA repair or replication (increase of the fidelity of DNA replication; stimulation of repair and/or reversion of DNA damage; inhibition of error-prone repair pathways)
8. Inhibition of cell replication
9. Modulation of tumour promotion and tumour progression (inhibition of genotoxic effects, see 1-7; scavenging of free radicals; inhibition of proteases; control of gene expression; inhibition of cell replication; protection of intercellular communication; induction of cell differentiation; modulation of signal transduction; inhibition of DNA repair leading to death of damaged cells; effects on growth factors and hormones; effects on the immune system; inhibition of neovascularization)
10. Modulation of invasion and metastases (inhibition of proteases; induction of cell differentiation; inhibition of neovascularization; effects on cell-adhesion molecules; modulation of interaction with the extracellular matrix)

Classification of anticarcinogenesis according to Wattenberg (1985):
- Inhibitors preventing formation of carcinogens
- Blocking agents (prevent carcinogenic agents from reaching or reacting with critical target sites in the tissues)
- Suppressing agents (suppress the expression of neoplasia in cells previously exposed to doses of a carcinogenic agent that would cause cancer)

Classification of antimutagenesis according to Kada et al. (1986):
- Desmutagens (act directly on mutagens or their precursors and inactivate them); synonyms: 'countermutagen' (Hastings et al., 1976), 'interceptor' (Hartman and Shankel, 1990)
- Bioantimutagens (act on repair and replication processes of the damaged DNA, resulting in decreases in mutation frequency); synonym: 'fidelogen' (von Borstel and Hennig, 1993)
three, indole-3-carbinol is the most potent, but an acidic environment, such as in the stomach, gives rise to the formation of even more potent dimer and trimer condensation products. For GST the relative degree of induction is small compared to that of the cytochrome P450s, but in view of the relatively large amounts of GST present in most cells this may still be quite significant. GST activity can be induced by, for example, the synthetic phenolic antioxidant butylated hydroxyanisole (BHA), eugenol (cloves), trans-anethole (anise, fennel) and Brussels sprouts (Bogaards et al., 1990, 1994; Verhagen, 1993). Thus each individual has his or her own inherited and/or acquired isozyme pattern for the various drug-metabolizing enzymes, which leads to different responses to adverse or beneficial xenobiotics or dietary constituents. For instance, a substance may increase the level of one cytochrome P450 isozyme and decrease the level of another. Although it is an oversimplification of this complex area, it is frequently stated that a preferential induction of phase 2 'detoxication' enzymes is indicative of beneficial effects.
10.4.4 Nutritive dietary chemopreventive agents

Epidemiological studies have revealed that a number of micronutrients (e.g. vitamins C and E, selenium, calcium) may have cancer-preventive properties (Table 10.2). Most of these compounds are antioxidants, which could explain their mode of action. Studies have shown that for these nutrients the incidence of certain forms of cancer is highest in groups of people with a low dietary intake. However, no definite conclusion can be reached, since epidemiological studies cannot resolve whether a protective effect of, for instance, fruits and vegetables should be attributed to vitamin C or to other minor dietary constituents that are as yet not included in food composition tables.
Table 10.2 Nutritive chemopreventive agents: major food sources and proposed mode of action

Chemopreventive agent   Major food sources                      Proposed mode of action
Vitamin C               (Citrus) fruits, vegetables             Antioxidant
Vitamin E               Vegetable oils, whole meal              Antioxidant
Selenium                Meat (products), eggs, dairy products   Antioxidant
Calcium                 Dairy products                          Binding of bile acids and fatty acids
10.4.5 Non-nutritive dietary chemopreventive agents
Over the last decades much attention has been focused on chemopreventive agents in the diet, and such agents are found in all categories of food. An absolute classification of all known non-nutritive chemopreventive agents is very difficult, because the precise mechanism(s) of action are not known for many compounds. Non-nutritive chemopreventive agents, their primary sources and the possible mechanisms of their preventive action are outlined in Table 10.3, and a more detailed discussion is given in the following pages. For further information the reader is referred to the reviews by Steinmetz and Potter (1991b), Bertram and Frank (1993), and Watzl and Leitzmann (1995).
Table 10.3 Non-nutritive chemopreventive substances: sources and possible mechanisms in chemoprevention

Chemopreventive agent      Primary sources
Carotenoids                Fruits, vegetables, cereal
Chlorophyllin              Leafy vegetables
Coumarins                  Vegetables, citrus fruits
Diallyl sulphides          Onion, garlic
Dietary fibre              Fruits, vegetables, legumes, seeds
Flavonoids                 Fruits, vegetables, tea
Indoles                    Cruciferous vegetables
Monoterpenes               Citrus fruits
Organic isothiocyanates    Cruciferous vegetables
Phenolic acids             Fruits, vegetables, nuts, tea, coffee
Phytic acid                Legumes, cereals
Plant sterols              Vegetables
Protease inhibitors        Seeds, legumes, grains

aPossible mechanism(s): 1, prevention of formation/uptake of carcinogens; 2, scavenging effect on the (activated) carcinogens; 3, shielding of nucleophilic sites in DNA; 4, inhibition of DNA-carcinogen complex; 5, modifying effect on the activities of xenobiotic-metabolizing enzymes; 6, modifying effect on the activities of other enzymes; 7, antioxidative activity; 8, other mechanisms; see text
Carotenoids. Vegetables and fruits are rich in carotenoids and are the most important contributors of carotenoids in the typical human diet (Mangels et al., 1993). Carotenoids show a yellow to orange coloration. So far, almost 600 different carotenoids have been identified and described (Gerster, 1993) and the number is still increasing. The carotenoids can be divided into two groups: the carotenes, which are hydrocarbons (C40H56; e.g. α- and β-carotene), and their oxygenated derivatives, the xanthophylls (e.g. lutein and canthaxanthin). Most of the 600 described carotenoids belong to the group of xanthophylls. The chemopreventive action of carotenoids (e.g. β-carotene, lycopene, lutein, canthaxanthin) may be caused by antioxidant properties, modulation of xenobiotic-metabolizing enzymes, immunomodulating effects, and the ability to increase gap-junctional communication (Krinsky, 1991; Gerster, 1993; Astorg et al., 1994; Khachik et al., 1995; Zhang et al., 1995).
Chlorophyllin. Chlorophyllin is a copper derivative of chlorophyll, the ubiquitous pigment in green plants; it is therefore of interest because of its relative abundance in the diet. The effects of chlorophyllin in the prevention of cancer may be caused by antioxidative properties, modulation of xenobiotic-metabolizing enzymes, and an inhibitory effect on the binding of carcinogens to DNA (Bronzetti et al., 1990; Dashwood et al., 1991; Breinholt et al., 1995).
Coumarins. Coumarins are found in vegetables, citrus fruits, nuts, beans and grains. The protective mechanism of dietary coumarins may involve modulation of xenobiotic-metabolizing enzymes and of phospholipid metabolism (Sparnins et al., 1982; Nishino et al., 1990).
Diallyl sulphides. Diallyl sulphide and diallyl disulphide are oil-soluble constituents of garlic and onion, and both compounds have been found to modulate xenobiotic-metabolizing enzymes (Sparnins et al., 1988; Wargovich et al., 1988; You et al., 1989; Haber et al., 1995). Other Allium vegetables, e.g.
chives, may also be important contributors to the human intake of allyl sulphides.
Dietary fibre. The bran layers of grains, fruit skins, legumes, seeds and berries are among the richest sources of fibre (Steinmetz and Potter, 1991b). Insoluble fibre in the form of cellulose and hemicellulose has been shown to inhibit the induction of colon cancer, and several mechanisms have been proposed; for example, insoluble fibre may adsorb carcinogens, and it also tends to increase faecal bulk and decrease intestinal transit time (Bingham, 1990; Steinmetz and Potter, 1991b).
Flavonoids. The flavonoids form a very large group of compounds; more than 2000 of these are known, and nearly 500 are known to occur in their
free form (Strube et al., 1993). Flavonoids are found in many green plants, fruits, vegetables and cereals, and in beverages like tea, coffee, beer, fruit juices and wine (Hertog et al., 1992; Strube et al., 1993). Because the formation of flavonoids normally depends on light, the outer layers of fruits and vegetables are the richest sources. A growing interest in potential chemopreventive agents among the flavonoids, especially the catechins (primarily the four major catechins in green tea) and the flavonols, has emerged. Several mechanisms may be involved in the chemopreventive action of flavonoids: antioxidant properties, modulation of xenobiotic-metabolizing enzymes, interaction with ultimate carcinogenic metabolites (a scavenging effect), inhibitory effects on the binding of carcinogens to DNA, immunomodulating effects, inhibition of the arachidonic acid cascade, inhibition of the ornithine decarboxylase (ODC) and cyclooxygenase activities induced by phorbol esters and irradiation, inhibition of protein kinase C and of cellular proliferation, and enhancement of gap-junction intercellular communication (Khan et al., 1988; Middleton and Kandaswami, 1992; Strube et al., 1993; Stoner and Mukhtar, 1995).
Indoles. Indoles are formed by the hydrolysis of the glucosinolates known as glucobrassicins (McDanell et al., 1988). Glucobrassicins are found in cruciferous vegetables; they are known to be constituents of Brussels sprouts, cabbage, kale, cauliflower, broccoli, kohlrabi, rutabaga and turnips (Steinmetz and Potter, 1991b).
Mechanisms involved in the chemopreventive action of indoles may be antioxidative effects, modulation of the xenobiotic-metabolizing enzymes, or effects on the binding of carcinogens to DNA; furthermore, it is believed that indoles affect the development of hormone-related cancers through a modulation of oestrogen metabolism (an increase in hepatic oestradiol 2-hydroxylation) (Shertzer et al., 1986; McDanell et al., 1988; Michnovicz and Bradlow, 1990; Vang et al., 1990; Jellinck et al., 1993; Verhoeven et al., 1997).
Monoterpenes. Monoterpenes, found in a wide variety of plants, are major components of plant essential oils. Monoterpenes, including the limonenes, e.g. d-limonene, have shown chemopreventive effects in several studies. Mechanisms involved in this action seem to be: modulation of xenobiotic-metabolizing enzymes, inhibition of DNA-carcinogen binding, selective inhibition of the post-translational isoprenylation of p21ras and other small G proteins, inhibition of ubiquinone (CoQ) synthesis, inhibition of cell proliferation, and induction of the mannose 6-phosphate/insulin-like growth factor II receptor and transforming growth factor-β (Gould, 1995).
Organic isothiocyanates. Isothiocyanates are produced by enzymic hydrolysis of glucosinolates, which are a group of secondary products
commonly, but not exclusively, found in cruciferous vegetables. The mechanisms involved in the chemopreventive action of isothiocyanates may be modulation of xenobiotic-metabolizing enzymes and inhibition of DNA-carcinogen binding (Nordic Council of Ministers, 1994).
Phenolic acids. Phenolic acids are widely distributed in the plant kingdom, and phenolic acids with chemopreventive effects are frequently found in fruits, vegetables, and several kinds of beverages, like tea, coffee, juice, beer and wine. Several mechanisms may be involved in the chemopreventive action of phenolic acids: antioxidant properties, prevention of the formation of carcinogens, modulation of xenobiotic-metabolizing enzymes, interaction with ultimate carcinogenic metabolites (a scavenging effect), an inhibitory effect on the binding of carcinogens to DNA, and inhibition of the ODC activity induced by phorbol esters (Strube et al., 1993; Stoner and Mukhtar, 1995).
Phytic acid. Phytic acid (inositol hexaphosphate) is an abundant plant constituent, comprising 1-5% by weight of plant foodstuffs. In general, legumes, cereals, and fruits and vegetables rich in fibre are the main sources of phytic acid (Ruggeri et al., 1994). The effects of phytic acid in the prevention of cancer may be caused by antioxidative properties, inhibition of cell proliferation and immunomodulating properties (Baten et al., 1989; Empson et al., 1991; Shamsuddin et al., 1992; Sakamoto et al., 1993).
Plant sterols. Vegetables are rich in plant sterols, including β-sitosterol, campesterol and stigmasterol, which make up about 20% of the sterols in most diets. The chemopreventive action of plant sterols may be caused by antioxidant activity and modulation of xenobiotic-metabolizing enzymes; moreover, because of their structural similarity to cholesterol, plant sterols may also affect cellular membranes (Steinmetz and Potter, 1991b; Bertram and Frank, 1993).
Protease inhibitors. Protease inhibitors are widely distributed in plants.
Seeds and legumes are especially rich sources. Soybeans contain at least five types of protease inhibitors. Postulated mechanisms for protease inhibitors involve antioxidant activity and an effect on the proteases produced by neoplastic cells (Steinmetz and Potter, 1991b; Bertram and Frank, 1993).
10.5 The lessons of toxicology transposed to chemoprevention: four caveats
Dietary chemoprevention is an area of steadily growing interest from consumers, authorities and industry. Many a food or dietary constituent
has been claimed to have chemopreventive potential. However, such claims should be viewed carefully in the light of what we have learned in the past from the established sciences of pharmacology and toxicology. When carefully considered, a food or ingredient may not prove to be effective or suitable as a chemopreventive agent; below, a series of caveats is given that should be taken into account when making a health claim. Despite these caveats, there is proof that dietary chemoprevention is feasible in humans under normal dietary conditions.
10.5.1 A first caveat: assessment of antimutagenic potential
Genotoxicity, the potential to alter or damage DNA, is an intrinsic property of a chemical. In toxicology, the assessment of genotoxic potential commonly follows a tiered approach (see Figure 10.2). However, the biological relevance of a genotoxic potential established in vitro has to be verified in in vivo test systems (e.g. the erythrocyte micronucleus test, the liver UDS (unscheduled DNA synthesis) test). When comparing in vitro and in vivo test data, there are several factors to take into account: in in vitro test systems, much higher concentrations can be reached at the target cells than in in vivo systems, and in vivo test data are only appropriate when there are indications for exposure of the target cells to the test substance. The same arguments apply for potential antigenotoxic compounds. Ferguson (1994) rightly stated that an established antimutagenic response in vitro should be verified in vivo, taking into account the limitations of target cell concentrations and exposure of target cells; viz., if no antimutagenic potential in vivo is evident, a classification as an antimutagenic substance is not appropriate (Verhagen and Feron, 1994).
Genotoxicity established in vivo can be overruled if there are no indications for carcinogenicity in long-term animal studies. Because tumour formation is an in vivo event, an anticarcinogenic potential can only be assessed in in vivo test systems. Testing for genotoxicity in vitro and in vivo is aimed at preventing humans from getting cancer or heritable diseases. Experiments with animals and studies conducted in vitro can overcome neither major differences in dose (high doses in animals versus low doses in humans) nor interindividual variations in humans (in contrast to humans, experimental animals form a relatively homogeneous population). Occasionally, positive evidence for carcinogenicity resulting from human exposure to genotoxic agents is obtained in epidemiological studies (e.g. cigarette smoke, vinyl chloride, aflatoxin B1).
Ideally, no indications for human carcinogenicity will be found in epidemiological studies, indicating adequate control of genotoxic exposure. In contrast, with chemopreventive agents one can make that final step: the definitive proof of a putative beneficial effect can only come from studies in humans. Thus, one should take into account that if no antimutagenic potential in vivo is evident, a classification as an antimutagenic substance is no longer appropriate (Verhagen and Feron, 1994).
10.5.2 A second caveat: the threshold concept
An established chemopreventive potential is necessarily a non-stochastic event; it cannot be assumed that in theory one molecule can prevent genotoxicity or carcinogenicity. Health risk assessment for non-stochastic events proceeds via the establishment of safe levels of human exposure on the basis of a NOAEL: the threshold principle. The same principle applies for the establishment of beneficial effect levels. Thus, there will be a threshold for a chemopreventive effect to become manifest: a 'lowest beneficial effect level' (LBEL). Hence, exposure to putative beneficial substances below the LBEL necessarily remains without effect. This is far from being a new concept; for drugs, too, a high enough dose is needed to produce the desired beneficial effect (e.g. to cure a disease).
10.5.3 A third caveat: beware of toxicity!
The threshold concept underlies both health risk assessment (the NOAEL for non-carcinogens and for non-genotoxic carcinogens) and the assessment of beneficial potential (the LBEL). For putative chemopreventive substances, the toxicological and beneficial endpoints should be considered together in a single evaluation. A beneficial effect is thus only valuable in the absence of toxicity: the LBEL should be well below the safe human dose (determined by the NOAEL-SF approach). In practice this means that the beneficial effects should be evident at (much) lower dose levels than those at which toxicity is expected (Figure 10.5). This again is not new, and the parallel with medicines also holds here. However, with drugs toxic side-effects may be unavoidable; in such cases the necessity for therapy outweighs the concomitant toxicity. In the case of dietary chemopreventive agents it is not acceptable to have toxicity at beneficial dose levels. This aspect can be well illustrated by referring to the nutritive dietary chemopreventive agents, for which the 'margin of safety' (which is not a safety factor!) is sometimes very small (Table 10.1).
Figure 10.5 Theoretical and simplified dose-effect relationships for desired effects (e.g. nutrients, medicines, dietary chemopreventive agents) and for toxic effects (e.g. side-effects of medicines). In general, the curves are not parallel. The curve for toxicity may be at lower dose levels than for beneficial effects. The curves may be (partly) overlapping or cross over. In toxicology, the 'no observed adverse effect level' (NOAEL) is divided by a 'safety factor' (SF) to obtain an 'acceptable daily intake' (ADI) for humans. Below the 'lowest beneficial effect level' (LBEL) there is no beneficial effect whatsoever (e.g. for chemopreventive agents, medicines), whereas for nutrients there will be a deficiency. For nutrients the 'recommended daily allowance' (RDA) is the dose that is sufficient for 95% of the population. The difference between the RDA and the NOAEL is the 'margin of safety' (which is not the SF!).
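The dose relationships described above can be made concrete with a small numerical sketch. The values below are purely hypothetical, chosen only to illustrate the NOAEL-SF arithmetic and the requirement that the LBEL fall below the safe human dose; they are not data from this chapter.

```python
# Hypothetical illustration of the NOAEL-SF approach and the LBEL criterion.
# All dose values are invented for the example (mg/kg body weight per day).

def adi_from_noael(noael, safety_factor=100):
    """Derive an acceptable daily intake (ADI) by dividing the NOAEL by a
    safety factor (conventionally 10 for interspecies differences x 10 for
    interindividual variation = 100)."""
    return noael / safety_factor

def beneficial_without_toxicity(lbel, noael, safety_factor=100):
    """A chemopreventive claim is only tenable if the lowest beneficial
    effect level (LBEL) lies below the safe human dose (the ADI)."""
    return lbel < adi_from_noael(noael, safety_factor)

noael = 50.0                          # hypothetical NOAEL from an animal study
print(adi_from_noael(noael))          # 50 / 100 = 0.5 mg/kg/day
print(beneficial_without_toxicity(lbel=0.2, noael=noael))  # True: benefit below ADI
print(beneficial_without_toxicity(lbel=5.0, noael=noael))  # False: toxicity precedes benefit
```

The second call illustrates the caveat in the text: when the LBEL exceeds the ADI, beneficial dose levels cannot be reached without toxicity, which is unacceptable for a dietary chemopreventive agent.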
10.5.4 A fourth caveat: (anti)carcinogens are not always (anti)mutagens and vice versa
In the early 1970s the general toxicological view was that 'carcinogens are mutagens'. Toxicologists thought that carcinogens could be identified by performing short-term genotoxicity tests in vitro and in vivo. Indeed, initially there was a steadily growing overlap between these two categories of compounds, especially when the use of liver homogenate, 'S9', was introduced in in vitro assays. In later years the overlap decreased again; carcinogens were sometimes, but not always, mutagens and vice versa. Recent data indicate, for the most predictive of the short-term genotoxicity assays, the 'Ames test', a concordance of around 66%. One of the main reasons for this is that nowadays the rodent carcinogenicity assays are overly sensitive because of the necessity to test at a 'maximum tolerated dose', thereby rendering almost every second compound a carcinogen (Ames and Gold, 1990). In this way, many a 'carcinogen' is in fact a non-genotoxic carcinogen (and thus in fact a non-carcinogen) for which a threshold dose can be set.
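The concordance figure of around 66% cited above can be illustrated with a toy two-by-two comparison of a short-term assay against rodent carcinogenicity outcomes; the counts below are invented for the example and are not the actual validation data.

```python
# Toy concordance calculation between a short-term mutagenicity assay and
# rodent carcinogenicity results. All counts are hypothetical.

def concordance(both_positive, both_negative, assay_only, rodent_only):
    """Fraction of compounds on which the two test systems agree
    (positive in both, or negative in both)."""
    agree = both_positive + both_negative
    total = agree + assay_only + rodent_only
    return agree / total

# e.g. 40 compounds positive in both systems, 26 negative in both,
# and 34 discordant (14 mutagenic-only + 20 carcinogenic-only)
print(round(concordance(40, 26, 14, 20), 2))  # 0.66
```

The discordant cells are where the caveat bites: 'rodent_only' compounds are carcinogens missed by the mutagenicity assay (e.g. non-genotoxic carcinogens), and 'assay_only' compounds are mutagens that do not prove carcinogenic in vivo.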
Fully in line with these developments, it can be argued that anticarcinogens are not always antimutagens and vice versa. However, by analogy with the assessment of potential carcinogens by performing short-term mutagenicity tests, one may discover anticarcinogens by starting with short-term in vitro assays. Also, in toxicology long-lasting and costly carcinogenicity studies are performed only at a late stage.
10.6 Feasibility of dietary chemoprevention in humans
Given the four caveats for chemoprevention described above, one might wonder whether it is feasible for these desired effects to occur in humans. Indeed, all the drawbacks may seem to preclude actual effects in humans of putative beneficial compounds or foods. However, there are sufficient indications to underpin the feasibility in humans. As stated before, the most valuable data on the chemopreventive effects of dietary constituents and foods in humans may come from studies in humans: either epidemiological studies based on dietary questionnaires or biomarkers, or experimental 'biomarker research'. Finally, a novel approach for practical chemoprevention testing is given: the combined intake of putative beneficial substances ('the matrix approach'). In fact, the epidemiological finding of negative associations between the development of cancer and the consumption of fruits and vegetables (a rather undefined matrix) is a matrix approach in every sense.
10.6.1 Evidence from epidemiological studies
Epidemiological studies have indicated that several dietary habits are associated with a decreased risk of cancer, demonstrating that chemoprevention does actually occur in humans. Negative associations with cancer incidence have been reported for, for example, fibre, fruits and vegetables. In fact, such epidemiological findings have triggered the onset of experimental chemoprevention studies in vitro and in vivo.
10.6.2 Evidence from experimental studies in humans
Recently we have conducted three experimental studies with human volunteers using biomarkers of chemopreventive potential. A randomized, double-blind, placebo-controlled intervention trial was performed to study the effect of 14 weeks of β-carotene supplementation (20 mg/day) on biomarkers of DNA damage in heavy smokers. The biomarkers of DNA damage determined were the frequency of sister chromatid exchanges (SCE) in cultured lymphocytes and the frequency of micronuclei in expectorated sputum cells (van Poppel et al., 1992a,b).
Plasma β-carotene levels increased 13-fold in the treatment group (n = 73) during intervention and remained stable in the placebo group (n = 70). Initial SCE levels were similar in the treatment and placebo groups. During the intervention, both groups showed an almost identical decrease, and at the end of the intervention period there was again no difference in SCE levels between the treatment and the placebo groups. Initial micronuclei counts were somewhat higher in the treatment group than in the placebo group. During intervention, the treatment group showed a sharp decrease in micronuclei (47%), whereas the placebo group showed a non-significant decrease (16%). These results indicate that β-carotene reduces smoking-induced DNA damage in the epithelial cells of the respiratory tract but not in cultured lymphocytes. However, it should be noted that these results contrast with recent findings from a Finnish study, in which no cancer-preventive effect could be shown for β-carotene (The α-Tocopherol, β-Carotene Cancer Prevention Study Group, 1994). Glucosinolates are present at high levels in cruciferous vegetables; Brussels sprouts have an especially high content. After 1 or 2 weeks of consumption of Brussels sprouts, an increased level of GST isozymes in plasma, lymphocytes, bladder and intestinal cells was observed, while the daily consumption of Brussels sprouts was not associated with adverse health effects, as apparent from a variety of clinico-chemical parameters for renal, liver, thyroid and anticoagulant functioning (Bogaards et al., 1994; Nijhoff et al., 1995a,b). In addition, a significantly lower excretion of oxidative DNA adducts into urine was observed upon consumption of Brussels sprouts, suggesting that these cruciferous vegetables may also decrease oxidative DNA damage (Verhagen et al., 1995). These data indicate that it is indeed feasible in humans to obtain a potential beneficial effect in the absence of adverse effects.
Thus, it is possible to obtain short-term experimental evidence in humans to underpin what epidemiological studies indicate, i.e. the prevention of cancer through a diet including fruits and vegetables in general, and with cruciferous plants in particular. 10.6.3 More than one beneficial compound: the matrix approach Humans are simultaneously exposed to a huge number of chemicals. There is uncertainty as to how the combined toxicity of these chemicals should be assessed and how combined toxicity should be taken into account in setting standards for individual compounds. This is mainly due to an almost complete lack of data on prolonged, repeated exposure to relevant combinations of three or more compounds, and of data on possible interactions at non-toxic concentrations of the individual chemicals: this branch of toxicology is designated the 'toxicology of the 1990s and beyond'
(Feron et al., 1995a,b). These facts of toxicology in general apply to genetic toxicology and to chemoprevention as well. Indeed, with the possible exception of some nutrients such as antioxidant vitamins, it is unlikely that single compounds are or may be consumed in sufficient quantity to elicit the desired effects. In contrast, a combination of beneficial substances in a matrix may result in beneficial effects in humans under physiological conditions. Moreover, spreading the beneficial effects over a number of substances may actually reduce the non-desirable side-effects due to putative toxicity. The findings that 'fruits and vegetables', in fact a highly undefined matrix, have shown beneficial effects in epidemiological studies, together with the beneficial effects of Brussels sprouts found in our biomarker studies (without toxicity), indicate the practical feasibility of the matrix approach.
10.7 Conclusion Our diet contains a multitude of (anti)genotoxic and (anti)carcinogenic compounds. Toxicology has a whole range of methods to test substances for toxic effects as well as for potential genotoxicity and carcinogenicity, mostly in a tiered approach. There are various health assessment methods to determine (virtually) safe levels of human exposure to carcinogens and non-carcinogens. These merits of toxicology can easily be transposed to the assessment of beneficial effects such as chemoprevention. There are various ways of studying the chemopreventive potential of selected compounds or of whole foods, ranging from short-term assays to bioassays in humans. The latter, in particular, may provide data on the actual beneficial effects towards humans, provided they are well conducted. Moreover, human studies indicate the feasibility of chemoprevention under physiological conditions. In future, sufficient data may become available to indicate appropriate dietary measures to modulate our cancer risk downward. Acknowledgements This work was supported, in part, by the European Union (EC Contract No. ERB4050PL040536), and by the Utrecht Toxicological Center (UTOX).
References
Ames, B.N. and Gold, L.S. (1990) Chemical carcinogenesis: too many rodent carcinogens. Proceedings of the National Academy of Sciences of the USA, 87, 7772-7776.
Archer, V.E. (1988) Cooking methods, carcinogens, and diet-cancer studies. Nutrition and Cancer, 11, 75-79.
Astorg, P., Gradelet, S., Leclerc, J. et al. (1994) Effects of β-carotene and canthaxanthin on liver xenobiotic-metabolizing enzymes in the rat. Food and Chemical Toxicology, 32, 735-742.
Baten, A., Ullah, A., Tomazic, V.J. and Shamsuddin, A.M. (1989) Inositol-phosphate-induced enhancement of natural killer cell activity correlates with tumour suppression. Carcinogenesis, 10, 1595-1598.
Bertram, B. and Frank, N. (1993) Inhibition of chemical carcinogenesis. Environmental Carcinogen and Ecotoxicology Reviews, 11, 1-71.
Bingham, S.A. (1990) Mechanisms and experimental and epidemiological evidence relating dietary fibre (non-starch polysaccharides) and starch to protection against large bowel cancer. Proceedings of the Nutrition Society, 49, 153-171.
Birt, D.F. and Bresnick, E. (1991) Chemoprevention by nonnutrient components of vegetables and fruits. In: Alfin-Slater, R.B. and Kritchewsky, D. (eds) Cancer and Nutrition. Plenum Press, New York, pp. 221-260.
Blenford, D.E. (1994) Food for health. The market. IFI, No. 4, 9-13.
Bogaards, J.J., van Ommen, B., Falke, H.E. et al. (1990) Glutathione S-transferase subunit induction patterns of Brussels sprouts, allyl isothiocyanate and goitrin in rat liver and small intestinal mucosa: a new approach for the identification of inducing xenobiotics. Food and Chemical Toxicology, 28, 81-88.
Bogaards, J.J.P., Verhagen, H., Willems, M.I. et al. (1994) Consumption of Brussels sprouts results in elevated α-class glutathione S-transferase levels in human blood plasma. Carcinogenesis, 15, 1073-1075.
Breinholt, V., Hendricks, J., Pereira, C. et al. (1995) Dietary chlorophyllin is a potent inhibitor of aflatoxin B1 hepatocarcinogenesis in rainbow trout.
Cancer Research, 55, 57-62.
Bronzetti, G., Galli, A. and Croce, D.D.M. (1990) Anti-mutagenic effects of chlorophyllin. Basic Life Sciences, 52, 463-468.
Caragay, A.B. (1992) Cancer-preventive foods and ingredients. Food Technology, April, 65-68.
Crabtree, H.G. (1947) Anti-carcinogenesis. British Medical Bulletin, 4, 345-348.
Dashwood, H., Breinholt, V. and Bailey, G.S. (1991) Chemopreventive properties of chlorophyllin: inhibition of aflatoxin B1 (AFB1)-DNA binding in vivo and anti-mutagenic activity against AFB1 and two heterocyclic amines in the Salmonella mutagenicity assay. Carcinogenesis, 12, 939-942.
De Flora, S. and Ramel, C. (1988) Mechanisms of inhibitors of mutagenesis and carcinogenesis. Classification and overview. Mutation Research, 202, 285-306.
De Flora, S., Izzotti, A. and Bennicelli, C. (1993) Mechanisms of antimutagenesis and anticarcinogenesis: role in primary prevention. In: Bronzetti, G., Hayatsu, H., De Flora, S. et al. (eds) Antimutagenesis and Anticarcinogenesis Mechanisms III. Plenum Press, New York, pp. 1-16.
Doll, R. and Peto, R. (1981) The causes of cancer: quantitative estimates of avoidable risks of cancer in the United States today. Journal of the National Cancer Institute, 66, 1191-1308.
Dragsted, L.O., Strube, M. and Larsen, J.C. (1993) Cancer-protective factors in fruits and vegetables: biochemical and biological background. Pharmacology and Toxicology, 72 (Suppl. 1), 116-135.
Empson, K.L., Labuza, T.P. and Graf, E. (1991) Phytic acid as a food antioxidant. Journal of Food Science, 56, 560-563.
Ferguson, L.R. (1994) Antimutagens as cancer chemopreventive agents in the diet. Mutation Research, 307, 395-410.
Feron, V.J., van Bladeren, P.J. and Hermus, R.J.J. (1990) A viewpoint on the extrapolation of toxicological data from animals to man. Food and Chemical Toxicology, 28, 783-788.
Feron, V.J., Til, H.P., de Vrijer, F. et al. (1991) Aldehydes: occurrence, carcinogenic potential, mechanism of action and risk assessment. Mutation Research, 259, 363-385.
Feron, V.J., Groten, J.P., Jonker, D. et al. (1995a) Toxicology of chemical mixtures: challenges for today and the future. Toxicology Letters, 105, 415-427.
Feron, V.J., Groten, J.P., van Zorge, J.A. et al. (1995b) Toxicity studies in rats of simple mixtures of chemicals with the same or different target organs. Toxicology Letters, 82/83, 505-512.
Gatehouse, D., Haworth, S., Cebula, T. et al. (1994) Recommendations for the performance of bacterial mutation assays. Mutation Research, 312, 217-233.
Gerster, H. (1993) Anticarcinogenic effect of common carotenoids. International Journal of Vitamin and Nutrition Research, 63, 93-121.
Gould, M.N. (1995) Prevention and therapy of mammary cancer by monoterpenes. Journal of Cellular Biochemistry, 19, Suppl. 22, 139-144.
Haber, D., Siess, M.H., Canivenc-Lavier, M.C. et al. (1995) Differential effects of dietary diallyl sulfide and diallyl disulfide on rat intestinal and hepatic drug-metabolizing enzymes. Journal of Toxicology and Environmental Health, 44, 423-434.
Hartman, P.E. and Shankel, D.M. (1990) Antimutagens and anticarcinogens: a survey of putative interceptor molecules. Environmental and Molecular Mutagenesis, 15, 145-182.
Hastings, P.J., Quah, S.K. and von Borstel, R.C. (1976) Spontaneous mutation by mutagenic repair of spontaneous lesions in DNA. Nature, 264, 719-722.
Hayashi, M., Tice, R.R., MacGregor, J.T. et al. (1994) In vivo rodent erythrocyte micronucleus assay. Mutation Research, 312, 293-304.
Hertog, M.G.L., Hollman, P.C.H. and Katan, M.B. (1992) Content of potentially anticarcinogenic flavonoids of 28 vegetables and 9 fruits commonly consumed in the Netherlands. Journal of Agricultural and Food Chemistry, 40, 2379-2383.
Jellinck, P.H., Forkert, P.G., Riddick, D.S.
et al. (1993) Ah receptor binding properties of indole carbinols and induction of hepatic oestradiol hydroxylation. Biochemical Pharmacology, 45, 1129-1136.
Kada, T., Inoue, T., Ohta, T. and Shirasu, Y. (1986) Antimutagens and their modes of action. In: Shankel, D.M., Hartman, P.E., Kada, T. and Hollaender, A. (eds) Antimutagenesis and Anticarcinogenesis Mechanisms. Plenum Press, New York, pp. 181-196.
Khachik, F., Beecher, G.R. and Smith, J.C. Jr (1995) Lutein, lycopene, and their oxidative metabolites in chemoprevention of cancer. Journal of Cellular Biochemistry, 19, Suppl. 22, 236-246.
Khan, W.A., Wang, Z.Y., Athar, M. et al. (1988) Inhibition of the skin tumorigenicity of (±)-7-beta,8-alpha-dihydroxy-9-alpha,10-alpha-epoxy-7,8,9,10-tetrahydrobenzo[a]pyrene by tannic acid, green tea polyphenols and quercetin in Sencar mice. Cancer Letters, 42, 7-12.
Krinsky, N.I. (1991) Effects of carotenoids in cellular and animal systems. American Journal of Clinical Nutrition, 53, 234S-246S.
Kuroda, Y. (1990) Antimutagenesis studies in Japan. In: Kuroda, Y., Shankel, D.M. and Waters, M.D. (eds) Antimutagenesis and Anticarcinogenesis Mechanisms II. Plenum Press, New York, pp. 1-22.
Mangels, A.R., Holden, J.M., Beecher, G.R. et al. (1993) Carotenoid content of fruits and vegetables: an evaluation of analytical data. Journal of the American Dietetic Association, 93, 284-296.
McDanell, R., McLean, A.E. and Hanley, A.B. (1988) Chemical and biological properties of indole glucosinolates (glucobrassicins): a review. Food and Chemical Toxicology, 26, 59-70.
Michnovicz, J.J. and Bradlow, H.L. (1990) Induction of oestradiol metabolism by dietary indole-3-carbinol in humans. Journal of the National Cancer Institute, 82, 947-949.
Middleton, E. Jr and Kandaswami, C. (1992) Effects of flavonoids on immune and inflammatory cell functions. Biochemical Pharmacology, 43, 1167-1179.
Morse, M.A. and Stoner, G.D. (1993) Cancer chemoprevention: principles and prospects. Carcinogenesis, 14, 1737-1746.
Nijhoff, W.A., Mulder, T.P., Verhagen, H. et al. (1995a) Effects of consumption of Brussels sprouts on plasma and urinary glutathione S-transferase class-alpha and -pi in humans. Carcinogenesis, 16, 955-957.
Nijhoff, W.A., Mulder, T.P., Verhagen, H. et al. (1995b) Effects of consumption of Brussels sprouts on intestinal and lymphocytic glutathione S-transferases in humans. Carcinogenesis, 16(9), 2125-2128.
Nishino, H., Okuyama, T., Tanakata, M. et al. (1990) Studies on the anti-tumour-promoting activity of naturally occurring substances. IV. Pd-II [(+)anomalin, (+)praeruptorin B], a seselin-type coumarin, inhibits the promotion of skin tumour formation by 12-O-tetradecanoylphorbol-13-acetate in 7,12-dimethylbenz[a]anthracene-initiated mice. Carcinogenesis, 11, 1557-1561.
Nordic Council of Ministers (1994) Naturally Occurring Antitumourigens. II. Organic Isothiocyanates. Nordic Council of Ministers, Copenhagen, TemaNord, p. 539.
Organization for Economic Cooperation and Development (1981) Carcinogenicity studies. OECD Guideline for Testing of Chemicals No. 451. OECD, Paris.
Organization for Economic Cooperation and Development (1983a) Salmonella typhimurium, reverse mutation assay. OECD Guideline for Testing of Chemicals No. 471. OECD, Paris.
Organization for Economic Cooperation and Development (1983b) Micronucleus test. OECD Guideline for Testing of Chemicals No. 474. OECD, Paris.
Organization for Economic Cooperation and Development (1995a) Proposal for replacement of guidelines 471 and 472. Bacterial Reversion Mutation Test. OECD Guidelines for Testing of Chemicals. Revised draft document (September 1995). OECD, Paris.
Organization for Economic Cooperation and Development (1995b) Proposal for replacement of guideline 474. Mammalian Erythrocyte Micronucleus Test. OECD Guidelines for Testing of Chemicals. Revised draft document (September 1995). OECD, Paris.
Pariza, M.W., Felton, J.S., Aeschbacher, H.U. and Sato, S. (eds) (1990) Mutagens and Carcinogens in the Diet. Wiley-Liss, New York.
Peto, R., Gray, R., Brantom, P. et al. (1991) Effects on 4080 rats of chronic ingestion of N-nitrosodiethylamine: a detailed dose-response study. Cancer Research, 51, 6415-6451.
Rompelberg, C.J.M., Verhagen, H. and van Bladeren, P.J. (1993) Effects of the naturally occurring alkenylbenzenes eugenol and trans-anethole on drug-metabolizing enzymes in the rat liver. Food and Chemical Toxicology, 31, 637-645.
Rompelberg, C.J.M., Steinhuis, W.H., de Vogel, N. et al. (1995) Antimutagenicity of eugenol in the rodent bone marrow micronucleus test. Mutation Research, 346, 69-75.
Ruggeri, S., De Santis, N. and Carnovale, E. (1994) Intake and sources of phytic acid in Italian diets. In: Kozlowska, H., Fornal, J. and Zdunczyk, Z. (eds) 'Bioactive Substances in Food of Plant Origin', Vol. 2, Phytates. Proceedings of the International Conference Euro Food Tox IV. Centre for Agrotechnology and Veterinary Sciences, Olsztyn, Poland, pp. 355-359.
Sakamoto, K., Venkatraman, G. and Shamsuddin, A.M. (1993) Growth inhibition and differentiation of HT-29 cells in vitro by inositol hexaphosphate (phytic acid). Carcinogenesis, 14, 1815-1819.
Shamsuddin, A.M., Baten, A. and Lalwani, N.D. (1992) Effects of inositol hexaphosphate on growth and differentiation in K-562 erythroleukemia cell line. Cancer Letters, 10, 195-202.
Shertzer, H.G., Niemi, M.P. and Tabor, M.W. (1986) Indole-3-carbinol inhibits lipid peroxidation in cell-free systems. Advances in Experimental Medicine and Biology, 197, 347-356.
Skog, K. (1993) Cooking procedures and food mutagens: a literature review. Food and Chemical Toxicology, 31, 655-675.
Sparnins, L.V., Venegas, P.L. and Wattenberg, L.W. (1982) Glutathione S-transferase activity: enhancement by compounds inhibiting chemical carcinogenesis and by dietary constituents. Journal of the National Cancer Institute, 68, 493-496.
Sparnins, L.V., Barany, G. and Wattenberg, L.W. (1988) Effects of organosulphur compounds from garlic and onions on benzo[a]pyrene-induced neoplasia and glutathione S-transferase activity in the mouse. Carcinogenesis, 9, 131-134.
Steinmetz, K.A. and Potter, J.D. (1991a) Vegetables, fruit, and cancer. I. Epidemiology. Cancer Causes and Control, 2, 325-357.
Steinmetz, K.A. and Potter, J.D. (1991b) Vegetables, fruit, and cancer. II. Mechanisms. Cancer Causes and Control, 2, 427-442.
Stich, H.F. (1991) The beneficial and hazardous effects of simple phenolic compounds. Mutation Research, 259, 307-324.
Stoner, G.D. and Mukhtar, H. (1995) Polyphenols as cancer chemopreventive agents. Journal of Cellular Biochemistry, 19, Suppl. 22, 169-180.
Strube, M., Dragsted, L.O. and Larsen, J.C. (1993) Naturally occurring antitumourigens. I. Plant phenols. The Nordic Council of Ministers. Nordiske Seminar og Arbejdsrapporter, Food, p. 605.
The α-Tocopherol, β-Carotene Cancer Prevention Study Group (1994) The effect of vitamin E and β-carotene on the incidence of lung cancer and other cancers in male smokers. New England Journal of Medicine, 330, 1029-1035.
van Poppel, G., Kok, F.J. and Hermus, R.J.J. (1992a) β-Carotene supplementation in smokers reduces the frequency of micronuclei in sputum. British Journal of Cancer, 66, 1164-1168.
van Poppel, G., Kok, F.J., Duijzings, P. and de Vogel, N. (1992b) No influence of β-carotene on smoking-induced DNA damage as reflected by sister chromatid exchanges. International Journal of Cancer, 51, 335-358.
van Poppel, G., Verhagen, H. and van't Veer, P. (1992c) Biomerkers in epidemiologisch en toxicologisch voedingsonderzoek [Biomarkers in epidemiological and toxicological nutrition research]. Voeding, 53, 222-229.
Vang, O., Jensen, M.B. and Autrup, H. (1990) Induction of cytochrome P450IA1 in rat colon and liver by indole-3-carbinol and 5,6-benzoflavone. Carcinogenesis, 11, 1259-1263.
Verhagen, H. (1993) Genetic toxicology and nutrition. Eurotox Newsletter, 3, 53-55.
Verhagen, H. and Feron, V.J. (1994) Cancer prevention by natural food constituents - the lessons of toxicology transposed to antigenotoxicity and anticarcinogenicity. In: Kozlowska, H., Fornal, J. and Zdunczyk, Z. (eds) Bioactive Substances in Food of Plant Origin, Vol. 2, Dietary Cancer Prevention. Proceedings of the International Conference Euro Food Tox IV. Centre for Agrotechnology and Veterinary Sciences, Olsztyn, Poland, pp. 463-478.
Verhagen, H., van Poppel, G., Willems, M.I. et al. (1993) Cancer prevention by natural food constituents. IFI, 1/2, 22-29.
Verhagen, H., Poulsen, H.E., Loft, S. et al. (1995) Reduction of oxidative DNA-damage in humans by Brussels sprouts. Carcinogenesis, 16, 969-970.
Verhoeven, D.T.H., Verhagen, H., Goldbohm, R.A. et al. (1997) Cruciferous vegetables, glucosinolates and anticarcinogenesis: a review. Part 2. Mechanisms. Cancer Epidemiology, Biomarkers and Prevention (in press).
Von Borstel, R.C. and Hennig, U.G.G. (1993) Spontaneous mutations and fidelogens. In: Bronzetti, G., Hayatsu, H., De Flora, S. et al. (eds) Antimutagenesis and Anticarcinogenesis Mechanisms III. Plenum Press, New York, pp. 479-488.
Wakabayashi, K., Sugimura, T. and Nagao, M. (1991) Mutagens in foods. In: Li, A.P. and Heflich, R.H. (eds) Genetic Toxicology. CRC Press, Boca Raton, pp. 303-338.
Wargovich, M.J., Woods, C., Eng, V.W.S. et al. (1988) Chemoprevention of N-nitrosomethylbenzylamine-induced esophageal cancer in rats by the naturally occurring thioether, diallyl sulfide. Cancer Research, 48, 6872-6875.
Wattenberg, L.W. (1985) Chemoprevention of cancer. Cancer Research, 45, 1-8.
Wattenberg, L.W. (1992) Inhibition of carcinogenesis by minor dietary constituents. Cancer Research, 52 (Suppl.), 2085s-2091s.
Watzl, B. and Leitzmann, C. (1995) Bioaktive Substanzen in Lebensmitteln. Hippokrates Verlag, Stuttgart.
You, W.C., Blot, W.J., Chang, Y.S. et al. (1989) Allium vegetables and reduced risk of stomach cancer. Journal of the National Cancer Institute, 81, 162-164.
Zhang, L.X., Acevedo, P., Guo, H. and Bertram, J.S. (1995) Upregulation of gap junctional communication and connexin43 gene expression by carotenoids in human dermal fibroblasts but not in human keratinocytes. Molecular Carcinogenesis, 12, 50-58.
11 Prioritization of possible carcinogenic hazards in food

L. SWIRSKY GOLD, T.H. SLONE and B.N. AMES
11.1
Causes of cancer
Epidemiological studies have identified several factors that are likely to have a major effect on reducing rates of cancer: reduction of smoking, increased consumption of fruits and vegetables, and control of infections. Other factors include avoidance of intense sun exposure, increased physical activity, reduction of high occupational exposures, and reduced consumption of alcohol and possibly red meat. Risks of many forms of cancer can already be lowered, and the potential for further risk reduction is great. In the USA, death rates for all cancers combined are decreasing if lung cancer - 90% of which is due to smoking - is excluded from the analysis (Ames et al., 1995). The focus of this chapter is prioritization of possible cancer hazards in the diet.
11.2
Cancer epidemiology and diet
Doll and Peto (1981) estimated that 35% of cancer was due to dietary factors, with a plausible contribution ranging from 10% to 70%. Our review of the more recent epidemiological literature generally supports the earlier estimate, with a slightly narrower range of 20-40% (Ames et al., 1995). Current research on diet and cancer is slowly clarifying specific risk factors. New data have most strongly emphasized the inadequate consumption of protective factors rather than the excessive intake of harmful factors. The estimate for the contribution of dietary factors has been narrowed slightly downward, largely because the large international contrasts in colon cancer rates are probably due, in addition to diet, to differences in physical activity, which is inversely related to colon cancer risk in many studies (Gerhardsson et al., 1988; Slattery et al., 1988; Thun et al., 1992). For breast cancer, the Doll and Peto estimate of a 50% dietary contribution is still plausible, although that may not be avoidable in a practical sense if rapid growth rate is the most important underlying nutritional factor.
11.2.1 Dietary fruits and vegetables

Adequate consumption of fruits and vegetables is associated with a lowered risk of degenerative diseases such as cancer (Ames et al., 1993a). A review of nearly 200 studies in the epidemiological literature showed that lack of adequate consumption of fruits and vegetables is consistently related to cancer (Block et al., 1992; Hill et al., 1994; Steinmetz and Potter, 1991). The quarter of the population with the lowest dietary intake of fruits and vegetables has roughly twice the cancer rate for many types of cancer (lung, larynx, oral cavity, esophagus, stomach, colon and rectum, bladder, pancreas, cervix and ovary) compared with the quarter with the highest consumption of those foods. The protective effect of consuming fruits and vegetables is weaker and less consistent for hormonally related cancers, such as breast cancer. Laboratory studies suggest that antioxidants such as vitamins C and E and carotenoids in fruits and vegetables account for a good part of their beneficial effect (Ames et al., 1993a). Present epidemiological evidence regarding the role of greater antioxidant consumption in human cancer prevention is inconsistent. Nevertheless, biochemical data indicate the need for further investigation of the wide variety of potentially effective antioxidants, both natural and synthetic. Evidence supporting this need includes the enormous endogenous oxidative damage to DNA, proteins and lipids (Ames et al., 1993a), as well as indirect evidence such as heightened oxidative damage to human sperm DNA when dietary ascorbate is insufficient (Fraga et al., 1991). A wide array of micronutrients and other compounds in fruits and vegetables, in addition to antioxidants, may contribute to the reduction of cancer. Folic acid may be particularly important.
Low folic acid intake causes chromosome breaks in rodents (MacGregor et al., 1990) and in humans (Blount et al., 1997; Everson et al., 1988), and increases tumor incidence in some rodent models (Bendich and Butterworth, 1991). Folic acid is essential for the synthesis of DNA.

11.2.2
Calorie restriction
In rodents, a calorie-restricted diet markedly decreases tumor incidence and increases lifespan compared with ad libitum feeding (Hart et al., 1995; Pariza and Boutwell, 1987; Roe, 1989; Roe et al., 1991). Protein restriction appears to have a similar effect in rodents to calorie restriction, although research on protein restriction is less extensive (Youngman et al., 1992). The mechanisms underlying the marked effect of dietary restriction on aging and cancer are becoming clearer; the effect may, in good part, be due to reduced oxidative damage and reduced rates of cell division. Although epidemiological evidence on restriction in humans is sparse, two types of epidemiological evidence support the possible importance of growth in the incidence of human cancer: studies indicating higher
rates of breast and other cancers among taller persons (Hunter and Willett, 1993; Swanson et al., 1988), and studies of Japanese women (who are now taller and menstruate earlier) indicating increased breast cancer rates. Also, many of the variations in breast cancer rates among countries, and trends over time within countries, are compatible with changes in growth rates and attained adult height (Willett and Stampfer, 1990).

11.2.3 Other aspects of diet

Although epidemiological studies most clearly support the benefits of fruits and vegetables in the prevention of cancer, strong international correlations suggest that animal (but not vegetable) fat and red meat may increase the incidence of cancers of the breast, colon and prostate (Armstrong and Doll, 1975). However, large prospective studies have consistently shown either a weak association or a lack of association between fat intake and breast cancer (Hunter and Willett, 1993). Consumption of animal fat and red meat has been associated with risk of colon cancer in many case-control and cohort studies; the association with meat consumption appears more consistent (Giovannucci et al., 1994; Goldbohm et al., 1994; Willett and Stampfer, 1990). Consumption of animal fat and red meat has also been associated with risk of prostate cancer (Giovannucci et al., 1994; Hunter and Willett, 1993; Le Marchand et al., 1994; Swanson et al., 1988). Mechanisms for those associations are not clear, but they may include the effects of dietary fats on endogenous hormone levels (Henderson et al., 1991), the local effects of bile acids on the colonic mucosa, the effects of carcinogens produced in the cooking of meat, and excessive iron intake. Alcoholic beverages cause inflammation and cirrhosis of the liver, and liver cancer (International Agency for Research on Cancer, 1988).
Alcohol is an important cause of oral and esophageal cancer, is synergistic with smoking (International Agency for Research on Cancer, 1988), and possibly contributes to colorectal cancer (Freudenheim et al., 1991; Giovannucci et al., 1995). Epidemiological studies do not support the idea that synthetic industrial chemicals are causing a significant amount of human cancer. Although some epidemiological studies find an association between cancer and low levels of industrial pollutants, the associations are usually weak, the results are usually conflicting, and the studies do not correct for diet, which is a potentially large confounding factor. Outside the workplace, the levels of exposure to synthetic pollutants are low and rarely seem plausible as a causal factor when compared to the wide variety of naturally occurring chemicals to which all people are exposed (see below) (Gold et al., 1992a). Mechanistic studies of carcinogenesis indicate an important role of endogenous oxidative damage to DNA that is balanced by elaborate repair
and defense processes, some of which are dietary protective agents. Also important is the rate of cell division (which is influenced by hormones, growth, cytotoxicity and inflammation), since this determines the probability of converting DNA lesions to mutations. These mechanisms may underlie many epidemiological observations.
11.3
Human exposures to natural and synthetic chemicals
Current regulatory policy to reduce cancer risk is based on the idea that chemicals which induce tumors in rodent cancer tests are potential human carcinogens; however, the chemicals tested for carcinogenicity in rodents have been primarily synthetic (Ames and Gold, 1990; Gold et al., 1984, 1986, 1987, 1990, 1993, 1995, 1997a). The enormous background of human exposures to natural chemicals has not been systematically examined. This has led to an imbalance in both data and perception about possible carcinogenic hazards to humans from chemical exposures. The regulatory process does not take into account: (1) that natural chemicals make up the vast bulk of chemicals to which humans are exposed; (2) that the toxicology of synthetic and natural toxins is not fundamentally different; (3) that about half of the chemicals tested, whether natural or synthetic, are carcinogens when tested using current experimental protocols; (4) that testing for carcinogenicity at near-toxic doses in rodents does not provide enough information to predict the excess number of human cancers that might occur at low-dose exposures; (5) that testing at the maximum tolerated dose (MTD) can frequently cause chronic cell killing and consequent cell replacement (a risk factor for cancer that can be limited to high doses), and that ignoring this effect in risk assessment greatly exaggerates risks. The vast proportion of chemicals to which humans are exposed are naturally occurring, yet public perceptions tend to identify chemicals as being only synthetic and only synthetic chemicals as being toxic; however, every natural chemical is also toxic at some dose. We estimate that the average American's daily dietary exposure to burnt material is about 2000 mg, and to natural pesticides (the chemicals that plants produce to defend themselves against fungi, insects and animal predators) about 1500 mg (Ames et al., 1990a).
In comparison, the total daily exposure to all synthetic pesticide residues combined is about 0.09 mg, based on the sum of residues reported by the US Food and Drug Administration (FDA) in their study of the 200 synthetic pesticide residues thought to be of greatest concern (US Food and Drug Administration, 1993). We estimate that humans ingest roughly 5000-10 000 different natural pesticides and their breakdown products (Ames et al., 1990a). Despite this enormously greater exposure to natural chemicals, among the chemicals tested for carcinogenicity, 78% (1007/1298) are synthetic (i.e. do not occur naturally).
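The scale of the disparity just described is easy to make concrete with a little arithmetic. The sketch below simply takes the ratio of the daily intake estimates quoted above; the variable names are ours, and the figures are the rough averages from the text, not new data:

```python
# Illustrative arithmetic only, using the daily intake estimates quoted in
# the text (Ames et al., 1990a; US FDA, 1993); all values are rough averages.
natural_pesticides_mg = 1500.0   # natural pesticides in the average US diet per day
burnt_material_mg = 2000.0       # burnt material produced by cooking per day
synthetic_residues_mg = 0.09     # all synthetic pesticide residues combined per day

ratio = natural_pesticides_mg / synthetic_residues_mg
print(f"natural pesticides / synthetic residues = {ratio:,.0f}-fold")
# By weight, natural pesticide intake exceeds synthetic residue intake more
# than 16 000-fold, before even counting the 2000 mg of burnt material.
```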
It has often been assumed that humans have evolved defenses against natural chemicals that will not protect against synthetic chemicals. However, humans, like other animals, are extremely well protected by defenses that are mostly general rather than specific for particular chemicals (e.g. the continuous shedding of exposed surface cells). Additionally, most defense enzymes are inducible, and are effective against both natural and synthetic chemicals, including potentially mutagenic reactive chemicals (Ames et al., 1990b). Since the toxicology of natural and synthetic chemicals is similar, one expects, and finds, a similar positivity rate for carcinogenicity among synthetic and natural chemicals. Among chemicals tested in rats and mice in our Carcinogenic Potency Database (CPDB) (Gold et al., 1984, 1986, 1987, 1990, 1993, 1995, 1997a), about half of the natural chemicals are positive, as are half of all chemicals tested. Cooking food also produces numerous by-products. Concentrations of natural pesticides in plants are usually measured in parts per thousand or million, rather than the parts per billion that is the usual concentration of synthetic pesticide residues or water pollutants. Therefore, since humans are exposed to so many more natural than synthetic chemicals (by weight and by number), human exposure to natural rodent carcinogens, as defined by high-dose rodent tests, is ubiquitous (Ames et al., 1990a). It is probable that almost every fruit and vegetable in the supermarket contains natural pesticides that are rodent carcinogens, and no diet can be free of chemicals identified as carcinogens in high-dose rodent tests.
Even though only a tiny proportion of natural pesticides have been tested for carcinogenicity, 35 of the 64 that have been tested are rodent carcinogens (Table 11.1). These 35 occur in the following 79 common plant foods and spices: alcoholic beverages, allspice, anise, apple, apricot, banana, basil, beet, broccoli, Brussels sprouts, cabbage, cantaloupe, caraway, cardamom, carrot, cauliflower, celery, cherries, chili pepper, chocolate, cinnamon, cloves, cocoa, coffee, collard greens, comfrey herb tea, coriander, currants, dill, eggplant, endive, fennel, garlic, grapefruit, grapes, guava, honey, honeydew melon, horseradish, kale, lemon, lentils, lettuce, licorice, lime, mace, mango, marjoram, mushrooms, mustard, nutmeg, onion, orange, paprika, parsley, parsnip, peach, pear, peas, black pepper, pineapple, plum, potato, radish, raspberries, rhubarb, rosemary, rutabaga, sage, savory, sesame seeds, soybean, star anise, tarragon, tea, thyme, tomato, turmeric and turnip.

Table 11.1 Carcinogenicity status of natural pesticides tested in rodents

Positive: N = 35
Acetaldehyde methylformylhydrazone, allyl isothiocyanate, arecoline.HCl, benzaldehyde, benzyl acetate, caffeic acid, catechol, clivorine, coumarin, crotonaldehyde, cycasin and methylazoxymethanol acetate, 3,4-dihydrocoumarin, estragole, ethyl acrylate, N2-γ-glutamyl-p-hydrazinobenzoic acid.HCl, hydroquinone, 1-hydroxyanthraquinone, lasiocarpine, d-limonene, 8-methoxypsoralen, N-methyl-N-formylhydrazine, α-methylbenzyl alcohol, 3-methylbutanal methylformylhydrazone, methylhydrazine, monocrotaline, pentanal methylformylhydrazone, petasitenine, quercetin, reserpine, safrole, senkirkine, sesamol, symphytine

Not positive: N = 28
Atropine, benzyl alcohol, biphenyl, d-carvone, deserpidine, disodium glycyrrhizinate, emetine.2HCl, ephedrine sulfate, eucalyptol, eugenol, gallic acid, geranyl acetate, β-N-[γ-L(+)-glutamyl]-4-hydroxymethylphenylhydrazine, glycyrrhetinic acid, p-hydrazinobenzoic acid, isosafrole, kaempferol, dl-menthol, nicotine, norharman, pilocarpine, piperidine, protocatechuic acid, rotenone, rutin sulfate, sodium benzoate, turmeric oleoresin, vinblastine

Uncertain: N = 2
Caffeine, trans-anethole

Humans also ingest large numbers of natural chemicals as a result of cooking food. For example, more than 1000 chemicals have been identified in roasted coffee. Only 28 have been tested for carcinogenicity according to the most recent results in our CPDB, and 19 of these are positive in at least one test (Table 11.2), totaling at least 10 mg of rodent carcinogens per cup (Clarke and Macrae, 1988; Fujita et al., 1985; Kikugawa et al., 1989; Maarse et al., 1994). Among the rodent carcinogens in coffee are the plant pesticides caffeic acid (present at 1800 ppm) (Clarke and Macrae, 1988) and catechol (present at 100 ppm) (Rahn and Konig, 1978; Tressl et al., 1978). Two other plant pesticides, chlorogenic acid and neochlorogenic acid (present at 21 600 ppm and 11 600 ppm respectively) (Clarke and Macrae, 1988), have not been tested for carcinogenicity. Chlorogenic acid and caffeic acid are mutagenic (Ariza et al., 1988; Fung et al., 1988; Hanham et al., 1983) and clastogenic (Ishidate et al., 1988; Stich et al., 1981). For another plant pesticide in coffee, d-limonene, data are available on the mechanism of carcinogenicity that suggest the rodent results are not relevant to humans, because carcinogenicity in the male rat kidney is associated with a urinary protein that humans do not excrete (Dietrich and Swenberg, 1991).
Some other rodent carcinogens in coffee are products of cooking, e.g. furfural and benzo(a)pyrene. The point here is not to indicate that rodent data necessarily implicate coffee as a risk factor for human cancer, but rather to illustrate that there is an enormous background of natural chemicals in the diet that have not been a focus of attention for carcinogenicity testing. A diet free of naturally occurring chemicals that are rodent carcinogens is impossible.

Table 11.2 Carcinogenicity status of natural chemicals in roasted coffee

Positive: N = 19
Acetaldehyde, benzaldehyde, benzene, benzofuran, benzo(a)pyrene, caffeic acid, catechol, 1,2,5,6-dibenzanthracene, ethanol, ethylbenzene, formaldehyde, furan, furfural, hydrogen peroxide, hydroquinone, limonene, styrene, toluene, xylene
Not positive: N = 8
Acrolein, biphenyl, choline, eugenol, nicotinamide, nicotinic acid, phenol, piperidine

Uncertain:
Caffeine

Yet to test:
~1000 chemicals
11.4
The high carcinogenicity rate among chemicals tested in rodents
Since the results of high-dose rodent tests are routinely used to identify a chemical as a possible cancer hazard to humans, it is important to try to understand how representative the 50% positivity rate might be of all the untested chemicals. If half of all chemicals (both natural and synthetic) to which humans are exposed would be positive if tested, then the utility of a test to identify a chemical as a 'potential human carcinogen' is questionable. To determine the true proportion of rodent carcinogens among chemicals would require comparing a random group of synthetic chemicals to a random group of natural chemicals. Such an analysis has not been done. We have found that the high positivity rate is consistent across several data sets: among chemicals tested in both rats and mice, 59% (330/559) are positive in at least one experiment; the rate is 59% for synthetic chemicals (257/432) and 57% for naturally occurring chemicals (73/127). Among chemicals tested in at least one species, 55% of natural pesticides (35/64) are positive, as are 61% of fungal toxins (14/23) and 68% of the chemicals in roasted coffee (19/28) (Table 11.2). Additionally, in the Physicians' Desk Reference, 49% (117/241) of the drugs for which animal cancer tests are reported are carcinogenic (Davies and Monro, 1995). It has been argued that the high positivity rate is due to selecting more suspicious chemicals to test for carcinogenicity; for example, chemicals may be selected that are structurally similar to known carcinogens. That is a likely bias, since cancer testing is both expensive and time-consuming, and it is prudent to test suspicious compounds. On the other hand, chemicals are selected for testing for several reasons, including the extent of human exposure, level of production, and scientific questions about carcinogenesis.
Although mutagens are positive in rodent bioassays more frequently than non-mutagens (79% of mutagens versus 49% of non-mutagens are positive), 55% of the chemicals tested in rats and mice are non-mutagens; this suggests that expected positivity may often not be the basis for selecting a chemical to test. Moreover, while some chemical classes are more often carcinogenic in rodent bioassays than others - e.g. nitroso compounds, aromatic amines, nitroaromatics and chlorinated compounds - prediction is still imperfect (Omenn et al., 1995). One large series of mouse experiments by Innes et al. (1969) has been frequently cited (US National Cancer Institute, 1984) as evidence that the true proportion of rodent carcinogens among tested substances is actually low. In the Innes study, among 119 chemicals tested - primarily the most widely used pesticides of that time, plus some industrial chemicals - only 11 (9%) were judged to be carcinogens. We note that those early experiments lacked the power to detect an effect: they were conducted only in mice (not in rats), they included only 18 animals in a group
(compared with the usual 50), the animals were tested for only 18 months (compared with the usual 24 months), and the Innes dose was usually lower than the highest dose in subsequent mouse tests of the same chemical (Gold et al., 1991b). To assess whether the low positivity rate in the Innes study may have been due to the design of the experiments, we used results in our CPDB to examine subsequent bioassays on the Innes chemicals that had not been evaluated as positive. Among 34 such chemicals that were subsequently retested, 16 had a subsequent positive evaluation of carcinogenicity (47%), which is similar to the proportion among all chemicals in our database. Of the 16 new positives, six were carcinogenic in mice and 12 in rats. Innes had recommended further evaluation of some chemicals with inconclusive results in their study. If those were the chemicals subsequently retested, one might argue that they would be the most likely to be positive. Our analysis does not support that view, however: the positivity rate on retest among the chemicals that the Innes study flagged for further evaluation was six of 16 (38%), compared with 10 of 18 (56%) among the chemicals that Innes evaluated as negative.
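The positivity rates quoted in this section are simple proportions, and can be recomputed directly from the counts given in the text. The sketch below does exactly that; the counts are taken from the text (CPDB summaries and the Innes follow-up), and nothing else is assumed:

```python
# Positivity rates quoted in this section, recomputed from the raw counts.
# All (positive, tested) pairs come directly from the text.
datasets = {
    "all chemicals tested in rats and mice": (330, 559),
    "synthetic chemicals": (257, 432),
    "naturally occurring chemicals": (73, 127),
    "natural pesticides": (35, 64),
    "fungal toxins": (14, 23),
    "chemicals in roasted coffee": (19, 28),
    "drugs in the Physicians' Desk Reference": (117, 241),
    "Innes chemicals on retest": (16, 34),
}

for name, (positive, tested) in datasets.items():
    rate = 100.0 * positive / tested
    print(f"{name}: {positive}/{tested} = {rate:.0f}% positive")
```

Running this reproduces the percentages cited above (59%, 59%, 57%, 55%, 61%, 68%, 49% and 47% respectively).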
11.5
The importance of cell division in mutagenesis and carcinogenesis
We have argued that mutagenesis, and thus carcinogenesis, is increased by increasing either DNA damage or cell division in cells that are not discarded. There is enormous endogenous DNA damage from normal oxidation, and the evidence suggests that oxidative damage is a major factor not only in aging, but also in the degenerative diseases of aging, such as cancer (Ames et al., 1993a). The steady-state level of oxidative damage in DNA is about one million oxidative lesions per rat cell (Ames et al., 1993a). This high background suggests that the cell division rate must be a factor in converting lesions to mutations, and thus to cancer (Ames et al., 1993b). Raising the level of either DNA lesions or cell division will increase the probability of cancer. Just as DNA repair protects against lesions, p53 guards the cell cycle and protects against cell division if the lesion level gets too high; however, neither defense is perfect (Ames et al., 1995). Cell division is also a major factor in loss of heterozygosity through non-disjunction and other mechanisms (Ames and Gold, 1990; Ames et al., 1995). A plausible explanation for the high positivity rate in rodent bioassays, supported by an ever-increasing array of papers, is that the MTD of a chemical can cause chronic cell killing and cell replacement in the target tissue, a risk factor for cancer that can be limited to high doses. Thus it seems likely that the high positivity rate in standard rodent
bioassays at the MTD is primarily due to the effects of high doses for the non-mutagens, and to a synergistic effect of cell division at high doses with DNA damage for the mutagens. Ad libitum feeding in the standard bioassay can also contribute to the high positivity rate (Hart et al., 1995), plausibly through increased cell division due to high caloric intake (Ames et al., 1993b; Hart et al., 1995). Although cell division is not measured in routine cancer tests, many studies on rodent carcinogenicity show a correlation between cell division at the MTD and cancer. Cunningham and co-workers have analyzed 15 chemicals at the MTD, eight mutagens and seven non-mutagens, including several pairs of mutagenic isomers of which one is a rodent carcinogen and one is not (Cunningham et al., 1995; Hayward et al., 1995). A perfect correlation was observed: the nine chemicals causing cancer caused cell division in the target tissue, and the six chemicals not causing cancer did not. A similar result has been found in the analyses of Mirsalis et al. (1993); for example, both dimethylnitrosamine (DMN) and methyl methane sulfonate (MMS) methylate liver DNA and cause unscheduled DNA synthesis, but DMN causes both cell division and liver tumors, while MMS causes neither. At high doses, chloroform induces liver cancer (Larson et al., 1994), and sodium saccharin induces bladder cancer, by chronic cell division (Cohen and Lawson, 1995). Extensive reviews of rodent studies (Gold et al., 1996a; Ames and Gold, 1990; Ames et al., 1993a; Cohen and Ellwein, 1991; Cohen, 1995; Cohen and Lawson, 1995; Counts and Goodman, 1995) document that chronic cell division can induce cancer. A large body of epidemiological literature reviewed by Preston-Martin et al. (1990, 1995) indicates that increased cell division caused by hormones and other agents can increase human cancer.
Several of our findings in large-scale analyses of the results of animal cancer tests (Gold et al., 1992b) are consistent with the idea that cell division increases the carcinogenic effect in high-dose bioassays, including: the high proportion of chemicals that are positive; the high proportion of rodent carcinogens that are not mutagenic; and the fact that mutagens, which can both damage DNA and increase cell division at high doses, are more likely than non-mutagens to be positive, to induce tumors in both rats and mice, and to induce tumors at multiple sites. Analyses of the limited dose-response data in bioassays are consistent with the idea that cell division from cell killing and cell replacement is important. In the usual experimental design of dosing at the MTD and half the MTD, both doses are high and may result in cell division. Even at these two high doses, about half of the positive sites in National Toxicology Program (NTP) bioassays are statistically significant at the MTD but not at half the MTD (Gold et al., 1992b). To the extent that increases in tumor incidence in rodent studies are due to the secondary effects of inducing cell division at the MTD, any chemical is a likely rodent carcinogen, and carcinogenic effects can be
limited to high doses. Thus, true risks at the low doses of most human exposures in the general population are likely to be much lower than what would be predicted by the linear model that is the default in US regulatory risk assessment. The true risk might often be zero. We have discussed validity problems associated with the use of the limited data from animal cancer tests for human risk assessment. Adequate risk assessment from animal cancer tests requires more information about many aspects of toxicology for each chemical than the limited data now available from standard bioassays, such as effects on cell division, induction of defense and repair systems, and species differences. Standard practice in regulatory risk assessment for a given rodent carcinogen is to extrapolate from the high doses of rodent bioassays to the low doses of most human exposures by multiplying carcinogenic potency in rodents by human exposure. Strikingly, however, since potency estimates are constrained to lie within a narrow range about the MTD (Bernstein et al., 1985; Freedman et al., 1993; Gold et al., 1996b), the dose usually estimated by regulatory agencies to give one cancer in a million can be approximated simply by using the MTD as a surrogate for carcinogenic potency. The 'virtually safe dose' (VSD) can thus be approximated from the MTD. Gaylor and Gold (1995) used the ratio MTD/TD50 and the relationship between q1* and TD50 found by Krewski et al. (1993) to estimate the VSD. The VSD was approximated by MTD/740,000 for NCI/NTP rodent carcinogens; MTD/740,000 was within a factor of 10 of the VSD for 96% of carcinogens. This result questions the utility of bioassay results in estimating risk, and demonstrates the limited information about risk that is provided by bioassay results. Without data on mechanism of carcinogenesis for a given chemical, the true risk of cancer at low dose is highly uncertain, and could be zero, even for rats or mice.
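The Gaylor and Gold (1995) shortcut described above reduces to a one-line calculation. A minimal sketch (the MTD value used here is a hypothetical example, not a figure from the chapter):

```python
def approximate_vsd(mtd_mg_per_kg_day: float) -> float:
    """Approximate the 'virtually safe dose' (mg/kg/day) as MTD/740,000,
    the Gaylor and Gold (1995) surrogate for the one-in-a-million dose."""
    return mtd_mg_per_kg_day / 740_000.0

# Hypothetical rodent MTD of 100 mg/kg/day:
print(f"{approximate_vsd(100.0):.2e} mg/kg/day")  # 1.35e-04 mg/kg/day
```

As the text notes, this approximation fell within a factor of 10 of the formally estimated VSD for 96% of NCI/NTP rodent carcinogens.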
11.6 Ranking possible carcinogenic hazards
Given the limited information from rodent bioassays about mechanisms of carcinogenesis and low-dose risk, as well as the fact that there is an imbalance in bioassay data because the vast proportion of test agents are synthetic chemicals while the vast proportion of human exposures are to naturally occurring chemicals, what is the best use that can be made of bioassay results in efforts to prevent human cancer? In several papers we have emphasized that it is important to set research and regulatory priorities about cancer prevention by gaining a broad perspective about the vast number of chemicals to which humans are exposed. One reasonable strategy is to use a rough index to compare and rank possible carcinogenic hazards from a wide variety of chemical exposures at levels that humans typically receive, and then to focus on those that rank highest
(Ames et al., 1987a,b,c,d; Ames and Gold, 1987, 1988, 1989; Gold et al., 1992a, 1994a, 1996a). Ranking is a critical first step that can help to set priorities when selecting chemicals for chronic bioassay or mechanistic studies, for epidemiological research, and for regulatory policy. Although one cannot say whether the ranked chemical exposures are likely to be of major or minor importance in human cancer, it is not prudent to focus attention on the possible hazards at the bottom of a ranking if, using the same methodology, there are numerous common human exposures with much greater possible hazards. In earlier papers we ranked possible hazards from a variety of typical human exposures to rodent carcinogens. The analyses are based on the HERP index (Human Exposure/Rodent Potency), which indicates what percentage of the rodent carcinogenic potency (TD50 in mg/kg/day) a human receives from a given daily lifetime exposure (mg/kg/day). TD50 is the daily lifetime dose rate estimated to halve the proportion of tumor-free animals by the end of a standard lifetime (Peto et al., 1984). TD50 values in our CPDB span a 10,000,000-fold range across chemicals. In general, the ranking by HERP is expected to be similar to a ranking of 'risk estimates' using current regulatory risk assessment methodology for the same exposures, since linear extrapolation from the TD50 generally leads to low-dose slope estimates similar to those based on the linearized multi-stage model (Krewski et al., 1990). As we discussed above, the VSD is approximately equivalent to the high dose in a bioassay divided by 740,000 (Gaylor and Gold, 1995).
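The HERP definition above is simple enough to compute directly. A minimal sketch, assuming a 70 kg body weight and using made-up example numbers rather than values from Table 11.3:

```python
def herp_percent(daily_dose_mg: float, td50_mg_per_kg_day: float,
                 body_weight_kg: float = 70.0) -> float:
    """HERP (%) = human exposure (mg/kg/day) as a percentage of the
    rodent TD50 (mg/kg/day)."""
    human_dose = daily_dose_mg / body_weight_kg  # convert mg/day to mg/kg/day
    return 100.0 * human_dose / td50_mg_per_kg_day

# Hypothetical exposure: 10 mg/day of a chemical with TD50 = 140 mg/kg/day
print(round(herp_percent(10.0, 140.0), 4))  # 0.102
```

Because TD50 values span a 10,000,000-fold range, HERP values for everyday exposures spread over many orders of magnitude, which is why Table 11.3 is reported on a percentage scale.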
Our earlier analyses indicated that some historically high exposures in the workplace and some pharmaceuticals rank high, and that there is an enormous background of naturally occurring rodent carcinogens in typical portions of common foods that casts doubt on the relative importance of low-dose exposures to synthetic chemicals such as pesticide residues (Ames et al., 1987a,b,c,d; Ames and Gold, 1987, 1988, 1989; Gold et al., 1992a, 1994a). In this chapter we address the relative ranking by HERP of average US dietary exposures to rodent carcinogens that either occur naturally in food, are products of cooking and food preparation, or are present in food as residues of synthetic pesticides, food additives or contaminants. In order to calculate HERP, in addition to TD50, data are required on both the concentration of a chemical in food and the average consumption of the food. We have tried to include as many chemicals as possible by calculating HERP for all chemicals, both natural and synthetic, for which we have been able to obtain reliable average dietary exposure data. The average daily exposures in the ranking (Table 11.3) are ordered by possible carcinogenic hazard (HERP). Results are reported for average exposures to 25 natural chemicals in the diet and to 20 synthetic chemicals. A few convenient reference points are: the median HERP value in Table 11.3 of 0.0007%; the upper bound risk estimate used by regulatory agencies
Table 11.3 Ranking possible carcinogenic hazards from average US dietary exposures to natural and synthetic chemicals (chemicals that occur naturally in foods are in bold)a

Possible hazard: HERP (%) | Average daily US consumption | Human dose of rodent carcinogen | Average exposure: references

2.1 | Beer, 257 g | Ethanol, 13.1 ml | Stofberg and Grundschober (1987)
0.5 | Wine, 28.0 g | Ethanol, 3.36 ml | Stofberg and Grundschober (1987)
0.1 | Coffee, 13.3 g | Caffeic acid, 23.9 mg | Stofberg and Grundschober (1987)
0.04 | Lettuce, 14.9 g | Caffeic acid, 7.90 mg | Technical Assessment Systems (1989)
0.03 | Safrole in spices | Safrole, 1.2 mg | Hall et al. (1989)
0.03 | Orange juice, 138 g | d-Limonene, 4.28 mg | Hall et al. (1989)
0.03 | Pepper, black, 446 mg | d-Limonene, 3.57 mg | Stofberg and Grundschober (1987)
0.02 | Mushroom (Agaricus bisporus, 2.55 g) | Mixture of hydrazines, etc. (whole mushroom) | Stofberg and Grundschober (1987)
0.02 | Apple, 32.0 g | Caffeic acid, 3.40 mg | US Environmental Protection Agency (1989b)
0.02 | Coffee, 13.3 g | Catechol, 1.33 mg | Stofberg and Grundschober (1987)
0.02 | Coffee, 13.3 g | Furfural, 2.09 mg | Stofberg and Grundschober (1987)
0.01 | Beer (before 1979), 257 g | Dimethylnitrosamine, 726 ng | Stofberg and Grundschober (1987)
0.009 | BHA: daily US average (1975) | BHA, 4.6 mg | US Food and Drug Administration (1991b)
0.008 | Aflatoxin: daily US average (1984-89) | Aflatoxin, 18 ng | US Food and Drug Administration (1992)
0.006 | Coffee, 13.3 g | Hydroquinone, 333 μg | Stofberg and Grundschober (1987)
0.005 | Saccharin: daily US average (1977) | Saccharin, 7 mg | National Research Council (1979)
0.005 | Carrot, 12.1 g | Aniline, 624 μg | Technical Assessment Systems (1989); Neurath et al. (1977)
0.004 | Potato, 54.9 g | Caffeic acid, 867 μg | Technical Assessment Systems (1989)
0.004 | Celery, 7.95 g | Caffeic acid, 858 μg | Economic Research Service (1994)
0.004 | White bread, 67.6 g | Furfural, 500 μg | Stofberg and Grundschober (1987)
0.003 | Nutmeg, 27.4 mg | d-Limonene, 466 μg | Stofberg and Grundschober (1987)
0.002 | Carrot, 12.1 g | Caffeic acid, 374 μg | Technical Assessment Systems (1989)
0.002 | Ethylene thiourea: daily US average (1990) | Ethylene thiourea, 9.51 μg | US Environmental Protection Agency (1991a)
0.002 | DDT: daily US average (before 1972 ban) | DDT, 13.8 μg | Duggan and Corneliussen (1972)
0.001 | Plum, 2.00 g | Caffeic acid, 276 μg | Economic Research Service (1995)
0.001 | BHA: daily US average (1987) | BHA, 700 μg | US Food and Drug Administration (1991b)
0.001 | Pear, 3.29 g | Caffeic acid, 240 μg | Stofberg and Grundschober (1987)
0.001 | UDMH: daily US average (1988) | UDMH, 2.82 μg (from Alar) | US Environmental Protection Agency (1989b)
0.0009 | Brown mustard, 68.4 mg | Allyl isothiocyanate, 62.9 μg | Stofberg and Grundschober (1987)
0.0008 | Cinnamon, 21.9 mg | Coumarin, 65.0 μg | National Toxicology Program (1993); Poole and Poole (1994)
0.0008 | DDE: daily US average (before 1972 ban) | DDE, 6.91 μg | Duggan and Corneliussen (1972)
0.0007 | TCDD: daily US average (1994) | TCDD, 12.0 pg | US Environmental Protection Agency (1994a)
0.0007 | Bacon, 11.5 g | Diethylnitrosamine, 11.5 ng | Stofberg and Grundschober (1987)
0.0006 | Mushroom (Agaricus bisporus, 2.55 g) | Glutamyl-p-hydrazinobenzoate, 107 μg | Stofberg and Grundschober (1987)
0.0005 | Jasmine tea, 2.19 g | Benzyl acetate, 504 μg | Stofberg and Grundschober (1987)
0.0004 | Bacon, 11.5 g | N-Nitrosopyrrolidine, 196 ng | Stofberg and Grundschober (1987)
0.0004 | Bacon, 11.5 g | Dimethylnitrosamine, 34.5 ng | Stofberg and Grundschober (1987)
0.0004 | EDB: daily US average (before 1984 ban) | EDB, 420 ng | US Environmental Protection Agency (1984)
0.0004 | Tap water, 1 liter (1987-92) | Bromodichloromethane, 13 μg | American Water Works Association (1993)
0.0003 | Mango, 1.22 g | d-Limonene, 48.8 μg | Economic Research Service (1994)
0.0003 | Beer, 257 g | Furfural, 39.9 μg | Stofberg and Grundschober (1987)
0.0003 | Tap water, 1 liter (1987-92) | Chloroform, 16 μg | American Water Works Association (1993)
0.0003 | Carbaryl: daily US average (1990) | Carbaryl, 2.6 μg | US Food and Drug Administration (1991a)
0.0002 | Celery, 7.95 g | 8-Methoxypsoralen, 4.86 μg | Economic Research Service (1994)
0.0002 | Toxaphene: daily US average (1990) | Toxaphene, 595 ng | US Food and Drug Administration (1991a)
0.00009 | Mushroom (Agaricus bisporus, 2.55 g) | p-Hydrazinobenzoate, 28 μg | Stofberg and Grundschober (1987)
0.00008 | PCBs: daily US average (1984-86) | PCBs, 98 ng | Gunderson (1995)
0.00008 | DDE/DDT: daily US average (1990) | DDE, 659 ng | US Food and Drug Administration (1991a)
0.00007 | Parsnip, 54.0 mg | 8-Methoxypsoralen, 1.57 μg | United Fresh Fruit and Vegetable Association (1989)
0.00006 | Hamburger, pan fried, 85 g | PhIP, 176 ng | Technical Assessment Systems (1989)
0.00005 | Estragole in spices | Estragole, 1.99 μg | Stofberg and Grundschober (1987)
0.00005 | Parsley, fresh, 324 mg | 8-Methoxypsoralen, 1.17 μg | United Fresh Fruit and Vegetable Association (1989)
0.00003 | Hamburger, pan fried, 85 g | MeIQx, 38.1 ng | Technical Assessment Systems (1989)
0.00002 | Dicofol: daily US average (1990) | Dicofol, 544 ng | US Food and Drug Administration (1991a)
0.00001 | Cocoa, 3.34 g | α-Methylbenzyl alcohol, 4.3 μg | Stofberg and Grundschober (1987)
0.00001 | Beer, 257 g | Urethane, 115 ng | Stofberg and Grundschober (1987)
0.000005 | Hamburger, pan fried, 85 g | IQ, 5 ng | Technical Assessment Systems (1989)
0.000001 | Lindane: daily US average (1990) | Lindane, 32 ng | US Food and Drug Administration (1991a)
0.0000004 | PCNB: daily US average (1990) | PCNB (Quintozene), 19.2 ng | US Food and Drug Administration (1991a)
0.0000001 | Chlorobenzilate: daily US average (1989) | Chlorobenzilate, 6.4 ng | US Food and Drug Administration (1991a)
<0.0000001 | Chlorothalonil: daily US average (1990) | Chlorothalonil, <6.4 ng | US Food and Drug Administration (1991a)
0.000000008 | Folpet: daily US average (1990) | Folpet, 12.8 ng | US Food and Drug Administration (1991a)
0.000000006 | Captan: daily US average (1990) | Captan, 115 ng | US Food and Drug Administration (1991a)

a Carcinogenic potency (TD50) values and additional exposure references can be found in Gold et al. (1997b) and on the World Wide Web (http://potency.berkeley.edu/cpdb.html). BHA, butylated hydroxyanisole; ETU, ethylene thiourea; EDB, ethylene dibromide; PCB, polychlorinated biphenyls; TCDD, tetrachlorodibenzo-p-dioxin; UDMH, 1,1-dimethylhydrazine; PhIP, 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine; MeIQx, 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline; IQ, 2-amino-3-methylimidazo[4,5-f]quinoline; PCNB, pentachloronitrobenzene; MeIQ, 2-amino-3,4-dimethylimidazo[4,5-f]quinoline.
of one in a million (using the q1* potency value derived from the linearized multi-stage model), i.e. the VSD, which converts to a HERP of 0.00003% if based on a rat TD50 and 0.00001% if based on a mouse TD50; and the background HERP of 0.0003% for the average chloroform level in a liter of US tap water, chloroform being formed as a by-product of chlorination. The ranking maximizes the HERP values for synthetic compared to natural chemicals because we have reported historically high values for exposures that may now be much lower, e.g. DDT and PCBs, and because all exposures to synthetic chemicals are averages in the total diet, whereas for many natural chemicals the exposures are for individual foods (for which concentration data were available). Table 11.3 indicates that many ordinary foods would not pass the regulatory criteria used for synthetic chemicals. For many natural chemicals the HERP values are in the top half of the table, and natural chemicals are markedly under-represented because so few have been tested in rodent bioassays. We discuss several categories of exposure below and indicate that for some chemicals mechanistic data are available which suggest that the chemical would not be expected to be a cancer hazard at the doses to which humans are exposed, so that its ranking by HERP would not be relevant in risk assessment.

11.6.1 Natural pesticides

These are markedly under-represented in our analysis compared to synthetic pesticide residues, because few natural chemicals have been tested for carcinogenicity. Importantly, for each plant food listed, there are about 50 additional untested natural pesticides. Although ~10,000 natural pesticides and their breakdown products occur in the human diet (Ames et al., 1990b), only 63 have been tested adequately in rodent bioassays (Table 11.1). Average exposures to many natural-pesticide rodent carcinogens in common foods rank above or close to the median, ranging up to a HERP of 0.1%.
These include: caffeic acid (lettuce, apple, pear, coffee, plum, celery, carrot, potato); safrole (in spices); allyl isothiocyanate (mustard); d-limonene (mango, orange juice, black pepper); estragole (in spices); hydroquinone and catechol (coffee); and coumarin (cinnamon). Caffeic acid is more widespread in plant species than the other natural pesticides. Some natural pesticides in the commonly eaten mushroom (Agaricus bisporus) are rodent carcinogens (glutamyl-p-hydrazinobenzoate, p-hydrazinobenzoate), and the HERP based on feeding whole mushrooms to mice is 0.02%. For d-limonene, no human risk is anticipated because tumors are induced only in male rat kidney tubules with involvement of α2u-globulin nephrotoxicity, which does not appear to be possible in humans (US Environmental Protection Agency, 1991a; Hard and Whysner, 1994).
11.6.2 Synthetic pesticides

Synthetic pesticides currently in use that are rodent carcinogens and quantitatively detected by the US FDA as residues in food are all included in Table 11.3. Most are at the bottom of the ranking, but HERP values are just above the median for ethylene thiourea (ETU) before its recent discontinuance on some crops, UDMH (from Alar) before its discontinuance, and DDT before its ban in the USA in 1972. These rank below the HERP values for many naturally occurring chemicals. The HERP value for ETU would be about 10 times lower if the US Environmental Protection Agency (EPA) potency value were used instead of our TD50; the EPA combined rodent results from more than one experiment, including one with lower doses of ETU and administration in utero, and obtained a lower potency (US Environmental Protection Agency, 1992). DDT and similar, early pesticides have been a cause of concern because of their unusual lipophilicity and persistence, although there is no convincing epidemiological evidence of a carcinogenic hazard (Key and Reeves, 1994). Current exposure to DDT is in foods of animal origin, and the HERP value is 0.00008%. In 1984 the US EPA banned the agricultural use of ethylene dibromide (EDB), the main fumigant in the USA, because of the residue levels found in grain; HERP = 0.0004%. This HERP value is 350,000 times lower than the HERP of 140% for the high exposures that some workers received in the 1970s (Gold et al., 1992a). Three synthetic pesticides, captan, chlorothalonil and folpet, were evaluated in 1987 by the National Research Council (NRC) as being of relatively high risk to humans (National Research Council, 1987), and were also reported by the FDA in the Total Diet Study (TDS).
The contrast between the extremely low HERP values for these exposures (chlorothalonil, 0.0000001%; folpet, 0.000000008%; captan, 0.000000006%) and the high risk estimates of the 1987 NRC report (which differ by factors of 99,000 for chlorothalonil, 46,000 for folpet, and 116,000 for captan) arises because the exposure estimates used by the NRC (i.e. the EPA Theoretical Maximum Residue Contribution) are hypothetical maximum exposure estimates, whereas the FDA monitors the actual food supply to estimate dietary intakes of pesticides. Hence, using hypothetical maxima results in enormously higher risk estimates than using measured residues (see Chapter 14).

11.6.3 Cooking and preparation of food
This can also produce chemicals that are rodent carcinogens. Alcoholic beverages are carcinogenic for humans (see section 11.2.3), and the HERP values in Table 11.3 for US average exposure to alcohol in beer (2.1%)
and wine (0.5%) are at the top of the ranking. Ethanol is one of the least potent rodent carcinogens in our CPDB, but the HERP is high because of high concentrations and high US consumption (average daily consumption of ethanol in beer in the USA is 13 ml). Another fermentation product, urethane (ethyl carbamate), has a HERP value of 0.00001% for average beer consumption; for a daily two slices of whole wheat toast, the HERP would be 0.00003%. Cooking food is plausible as a contributor to cancer. A wide variety of chemicals are formed during cooking. Rodent carcinogens formed include furfural and similar furans, nitrosamines, polycyclic hydrocarbons and heterocyclic amines. Furfural, a chemical formed naturally when sugars are heated, is a widespread constituent of food flavor. The HERP value for furfural in the average consumption of coffee is 0.02% and in white bread is 0.004%. Nitrosamines formed from nitrite or nitrogen oxides (NOx) and amines in food can give moderate HERP values; for example, in bacon, the HERP for diethylnitrosamine is 0.0007% and for dimethylnitrosamine is 0.0004%. A variety of mutagenic and carcinogenic heterocyclic amines (HAs) are formed when meat, chicken or fish are cooked, particularly when charred. Compared to other rodent carcinogens, there is strong evidence of carcinogenicity for HAs in terms of positivity rates and multiplicity of target sites; however, concordance in target sites between rats and mice is generally restricted to the liver (Gold et al., 1994a). Under usual cooking conditions, exposures to HAs are in the low ppb range. HERP values for HAs in pan-fried hamburger range from 0.00006% for PhIP to 0.000005% for IQ (Table 11.3). PhIP induces colon tumors in male but not female rats.
A recent study indicates that whereas the level of DNA adducts in the colonic mucosa was the same in both sexes, cell proliferation was increased only in the male, contributing to the formation of premalignant lesions of the colon (Ochiai et al., 1996). Therefore, there was no correlation between adduct formation and premalignant lesions, but there was between cell division and lesions.

11.6.4 Food additives

These can be either naturally occurring rodent carcinogens (e.g. allyl isothiocyanate and alcohol) or synthetic rodent carcinogens (butylated hydroxyanisole (BHA) and saccharin; Table 11.3). The highest HERP values for average exposures to synthetic rodent carcinogens in Table 11.3 are for exposures in the 1970s to BHA (0.009%) and saccharin (0.005%), both non-genotoxic rodent carcinogens. For both of these additives, data on mechanisms of carcinogenesis strongly suggest that there would be no risk to humans at the levels found in food. BHA is a phenolic antioxidant that is generally regarded as safe (GRAS) by the US FDA. By 1987, after BHA was shown to be a rodent
carcinogen, its use had declined six-fold (HERP = 0.001%) (US Food and Drug Administration, 1991b); this was due to voluntary replacement with other antioxidants, and to the fact that the use of animal fats and oils, in which BHA is primarily used as an antioxidant, has consistently declined in the USA. The mechanistic and carcinogenicity results on BHA indicate that malignant tumors were induced only at a dose above the MTD, at which cell division is increased in the forestomach, which is the only site of tumorigenesis; the proliferation occurs only at high doses, and is dependent on continuous dosing until late in the experiment (Clayson et al., 1990). Humans do not have a forestomach. We note that the dose-response curve for BHA curves sharply upward, but the potency value used in HERP is based on a linear model; if the California EPA potency value (which is based on a linearized multi-stage model) were used in HERP instead of TD50, the HERP values for BHA would be 25 times lower (California Environmental Protection Agency, Standards and Criteria Work Group, 1994). For saccharin, which has largely been replaced by other sweeteners, there is convincing evidence that the induced bladder tumors in rats are not relevant to human dietary exposures. The carcinogenic effect requires high doses of sodium saccharin, which form calculi in the bladder, with subsequent regenerative hyperplasia. Thus tumor development is due to increased cell division, and if the dose is not high enough to produce calculi then there is no increased cell division and no increased risk of tumor development (Cohen and Lawson, 1995).

11.6.5 Mycotoxins

Of the 23 fungal toxins tested for carcinogenicity, 14 are positive (61%). The mutagenic mold toxin, aflatoxin, which is found in moldy peanut and corn products, interacts with chronic hepatitis infection in human liver cancer development (Qian et al., 1994).
There is a synergistic effect in the human liver between aflatoxin (genotoxic effect) and the hepatitis B virus (cell division effect) in the induction of liver cancer (Wu-Williams et al., 1992). The HERP value for aflatoxin of 0.008% is based on the rodent potency; if the lower human potency value calculated by the US FDA from epidemiological data were used instead, the HERP would be about 10-fold lower (US Food and Drug Administration, 1993). Biomarker measurements of aflatoxin in populations in Africa and China, which have high rates of both hepatitis B and C viruses and liver cancer, confirm that those populations are chronically exposed to high levels of aflatoxin (Groopman et al., 1992; Pons, 1979). Liver cancer is rare in the USA. Although hepatitis B and C viruses infect less than 1% of the US population, hepatitis viruses can account for half of liver cancer cases among non-Asians (Yu et al., 1991).
11.6.6 Synthetic contaminants
Polychlorinated biphenyls (PCBs) and tetrachlorodibenzo-p-dioxin (TCDD), which have been a cause for concern because of their environmental persistence and carcinogenic potency in rodents, are primarily consumed in foods of animal origin. In the USA, PCBs are no longer used, but exposure persists from industrial products. Consumption in food in the USA declined about 20-fold between 1978 and 1986 (Gartrell et al., 1986; Gunderson, 1995). The HERP value for the most recent reporting of the US FDA TDS (1984-86) is 0.00008%, towards the bottom of the ranking, and far below many values for naturally occurring chemicals in common foods. It has been reported that some countries may have higher intakes of PCBs than the USA (World Health Organization, 1993). TCDD, the most potent rodent carcinogen, is produced naturally by burning when chloride ion is present, e.g. in forest fires. The sources of human exposure appear to be predominantly anthropogenic, e.g. from incinerators (US Environmental Protection Agency, 1994a). TCDD has received enormous scientific and regulatory attention, most recently in an ongoing assessment by the US EPA (US Environmental Protection Agency, 1994a,b, 1995). Some epidemiological studies suggest an association with human cancer, but the evidence is not sufficient to establish causality. Estimation of average US consumption is based on limited sampling data, and the EPA is currently conducting further studies of concentrations in food. The HERP value of 0.0007% is at the median of the values in Table 11.3. TCDD exerts many or all of its harmful effects in mammalian cells through binding to the aryl hydrocarbon (Ah) receptor. A wide variety of natural substances also bind to the Ah receptor (e.g. tryptophan oxidation products) and, insofar as they have been examined, they have similar properties to TCDD (Ames et al., 1990b).
For example, a variety of flavones and other plant substances in the diet, such as indole carbinol (IC), also bind to the Ah receptor. IC is the main breakdown compound of glucobrassicin, a glucosinolate that is present in large amounts in vegetables of the Brassica genus, including broccoli (Bradfield and Bjeldanes, 1987). Caution is necessary in drawing conclusions from the occurrence in the diet of natural chemicals that are rodent carcinogens. It is not argued here that these dietary exposures are necessarily of much relevance to human cancer. In fact, the epidemiological results discussed above indicate that adequate consumption of fruits and vegetables reduces cancer risk at many sites, and that protective factors like intake of vitamin C and folic acid are important, rather than intake of individual rodent carcinogens. Our analysis does indicate that widespread exposures to naturally-occurring
rodent carcinogens cast doubt on the relevance to human cancer of low-level exposures to synthetic rodent carcinogens. Our results call for a re-evaluation of the utility of animal cancer tests done at the MTD for providing information that is useful in protecting humans against low-level exposures in the diet, when a high percentage of both natural and synthetic chemicals appear to be rodent carcinogens at the MTD, when the data from rodent bioassays are not adequate to assess low-dose risk, and when the ranking on an index of possible hazards demonstrates that there is an enormous background of natural chemicals in the diet that rank high, even though so few have been tested in rodent bioassays. Our discussion of the HERP ranking indicates the importance of data on the mechanism of carcinogenesis for each chemical. For several chemicals, mechanistic data have recently been generated which indicate that they would not be expected to be a risk to humans at the levels consumed in food (e.g. saccharin, BHA, chloroform, d-limonene). Recent developments in science and regulatory policy have also emphasized the importance of evaluating mechanistic data, rather than relying exclusively on default, worst-case assessments. The NRC's recent report Science and Judgment in Risk Assessment and the EPA's draft document Proposed Guidelines for Carcinogen Risk Assessment both recommend improvements in the risk assessment process that involve incorporating consideration of dose to the target tissue, mode of action, and biologically based dose-response models, including a possible threshold of dose below which effects will not occur (National Research Council, 1994; US Environmental Protection Agency, 1996).
11.7 Future directions
Our analysis in this chapter suggests several areas for further research into diet and cancer, including epidemiological, toxicological and biochemical investigations. Further understanding of the role and mechanism of endogenous damage could lead to new prevention strategies for cancer. Present epidemiological evidence regarding the role of greater antioxidant consumption in human cancer prevention is inconsistent. Nevertheless, biochemical data indicating massive oxidative damage to DNA, proteins and lipids, as well as indirect evidence such as increased oxidative damage to human sperm DNA with insufficient dietary ascorbate, indicate the need for further investigation of the wide variety of potentially effective antioxidants, both natural and synthetic. Additionally, studies on the importance of dietary fruits and vegetables in cancer suggest the importance of further work on micronutrient deficiency as a major contributor to cancer. Studies in rodents and humans suggest further work on caloric intake and body weight, and the effects on hormonal status.
Since naturally occurring chemicals in the diet have not been a focus of cancer research, it seems reasonable to investigate some of them further as possible hazards because they often occur at high concentrations in foods. Only a small proportion of the many chemicals to which humans are exposed will ever be investigated, and there is at least some toxicological plausibility to the idea that high-dose exposures may be important. In order to identify untested dietary chemicals that might be a hazard to humans if they were to be identified as rodent carcinogens, we propose an index, HERT, which is analogous to HERP: the ratio of Human Exposure/Rodent Toxicity. HERT uses readily available LD50 values rather than the TD50 values from animal cancer tests that are used in HERP. This approach to prioritizing chemicals makes assessment of human exposure levels critical at the outset. The validity of the HERT approach is supported by three analyses. First, we have found that for the exposures to rodent carcinogens for which we have calculated HERP values (N = 68), the rankings by HERP and HERT are highly correlated (Spearman rank order correlation = 0.89). Second, we have shown that without conducting a bioassay the regulatory VSD can be approximated by dividing the MTD by 740,000 (Gaylor and Gold, 1995). Since the MTD is not known for all chemicals, and the MTD and LD50 are both measures of toxicity, acute toxicity (LD50) can reasonably be used as a surrogate for chronic toxicity (MTD). Third, we and others (Zeise et al., 1984) have found that LD50 and carcinogenic potency are correlated; therefore, HERT is a reasonable surrogate index for HERP, since it simply replaces TD50 with LD50. We have calculated HERT values using LD50 values as a measure of toxicity in combination with available data on concentrations of untested natural chemicals in commonly consumed foods and data on average consumption of those foods in the US diet.
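The HERT index and the Spearman check described above can be sketched as follows (a simplified illustration: the 70 kg body weight, the tie-free rank formula, and the example numbers are assumptions, not the chapter's data):

```python
def hert_percent(daily_dose_mg: float, ld50_mg_per_kg: float,
                 body_weight_kg: float = 70.0) -> float:
    """HERT (%) = human exposure (mg/kg/day) as a percentage of the
    rodent LD50 (mg/kg); the acute-toxicity analogue of HERP."""
    return 100.0 * (daily_dose_mg / body_weight_kg) / ld50_mg_per_kg

def spearman_rho(xs, ys):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n*(n^2-1)).
    Assumes no tied values, which keeps the sketch short."""
    n = len(xs)
    def ranks(values):
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d_squared / (n * (n * n - 1))

# Hypothetical HERP and HERT values for the same four exposures:
herp = [2.1, 0.1, 0.005, 0.0007]
hert = [4.3, 0.1, 0.004, 0.0009]
print(spearman_rho(herp, hert))  # identical orderings give 1.0
```

For instance, a 35 mg daily dose with an assumed LD50 of 3.7 mg/kg gives hert_percent(35.0, 3.7) of roughly 13.5, the order of magnitude of the top entry in Table 11.4; the chapter's reported correlation of 0.89 over 68 exposures was of course computed on the real rankings, not this toy example.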
We considered any chemical with available data on rodent LD50 that had a published concentration > 10 ppm in a common food and for which estimates of average US consumption of that food were available. Among the set of 171 HERT values which we were able to calculate, the HERT ranged across seven orders of magnitude, from 0.000001 to 13.5. We report in Table 11.4 the top-ranking HERT values for average exposures in the US diet (because the value is so high, we also include cassava, which is a staple in some parts of Africa and South America). It might be reasonable to investigate further the chemicals in Table 11.4 in chronic carcinogenicity bioassays. For example, solanine and chaconine, the main alkaloids in potatoes, are cholinesterase inhibitors that can be detected in the blood of almost all people (Ames, 1983, 1984; Harvey et al., 1985). Chlorogenic acid was clastogenic at a concentration of 150 ppm (Ames et al., 1990a), which is 100 times less than its concentration in roasted
Table 11.4 High-ranking chemicals on the HERT index: Human Exposure/Rodent Toxicity (LD50)

Possible hazard, HERT (%) | Daily human exposure via food | Human dose of chemical | Rodent LD50 (mg/kg) | Exposure references
13.5 | Cassava (as a dietary staple) | Hydrogen cyanide, 35 mg | 3.7 | Luckner (1990)
4.3 | Coffee, 13.3 g | Caffeine, 381 mg | 127 (192) | Stofberg and Grundschober (1987); Macaulay et al. (1984)
0.3 | Potato, 54.9 g | a-Chaconine, 4.10 mg | 19P (84P) | Bushway and Ponnampalam (1981); Takagi et al. (1990); Technical Assessment Systems (1989)
0.1 | Coffee, 13.3 g | Chlorogenic acid, 274 mg | 4000P | Stofberg and Grundschober (1987); Baltes (1977)
0.08 | Chocolate, US average | Theobromine, 48.8 mg | 837 (1265) | International Agency for Research on Cancer (1991)
0.06 | Pepper, 446 mg | Piperine, 21.0 mg | 514 | Stofberg and Grundschober (1987)
0.05 | Coffee, 13.3 g | Trigonelline, 176 mg | 5000 | Stofberg and Grundschober (1987); Clinton (1986)
0.04 | Potato chips, 5.2 g | a-Chaconine, 491 µg | 19P (84P) | Stofberg and Grundschober (1987); Ahmet and Müller (1978)
0.01 | Beer, 257 g | Isoamyl alcohol, 13.6 mg | 1300 | Stofberg and Grundschober (1987); Arkima (1968)
0.01 | Coffee, 13.3 g | 2-Furancarboxylic acid, 821 µg | 100P | Stofberg and Grundschober (1987); Tressl et al. (1978)
0.01 | Sweet potato, 7.67 g | Ipomeamarone, 336 µg | 50 | Coxon et al. (1975)
0.009 | Potato, 54.9 g | a-Solanine, 3.68 mg | 590 | Bushway and Ponnampalam (1981); Takagi et al. (1990); Technical Assessment Systems (1989)
0.005 | Coffee, 13.3 g | 3-Methyl-1,2-benzenediol, 203 µg | 56V | Stofberg and Grundschober (1987); Heinrich and Baltes (1987)
0.005 | Coffee, 13.3 g | Oxalic acid, 25.2 mg | 7500 | Stofberg and Grundschober (1987); Kasidas and Rose (1980)
0.004 | Beer, 257 g | Phenethyl alcohol, 5.46 mg | 1790 | Stofberg and Grundschober (1987); Arkima (1968)
0.004 | Beer, 257 g | Isobutyl alcohol, 6.40 mg | 2460 | Stofberg and Grundschober (1987); Arkima (1968)
0.003 | Coffee, 13.3 g | Pyrogallol, 555 µg | 300 | Stofberg and Grundschober (1987); Tressl et al. (1978)
0.003 | Lettuce, 14.9 g | Methylamine, 567 µg | 317 | Technical Assessment Systems (1989); Neurath et al. (1977)
Table 11.4 Continued

Possible hazard, HERT (%) | Daily human exposure via food | Human dose of chemical | Rodent LD50 (mg/kg) | Exposure references
0.003 | Beer, 257 g | Propyl alcohol, 3.29 mg | 1870 (6800) | Stofberg and Grundschober (1987); Arkima (1968)
0.002 | Banana, 15.7 g | trans-2-Hexenal, 1.19 mg | 685 (780) | Technical Assessment Systems (1989); Hultin and Proctor (1961)
0.002 | Tomato, 88.7 g | p-Coumaric acid, 10.2 mg | 657P | Technical Assessment Systems (1989); Schmidtlein and Herrmann (1975)
0.002 | Apple, 32 g | Epicatechin, 1.28 mg | 1000P | Risch and Herrmann (1988); US Environmental Protection Agency (1989b)
0.002 | Beer, 257 g | Ethyl acetate, 4.42 mg | 4100 (5620) | Stofberg and Grundschober (1987); Rosculet and Rickard (1968)
Potency of chemicals: HERT uses the LD50 from the species (rat or mouse) with the lower value; the higher value, where reported, is shown in parentheses. LD50 values are taken from the Registry of Toxic Effects of Chemical Substances (RTECS) computer database. Daily human exposure: we have tried to use reasonable daily intakes to facilitate comparisons; the calculations assume a daily dose for a lifetime. Possible hazard: the amount of chemical indicated under dose is divided by 70 kg to give a milligram per kilogram human exposure, and this human dose is expressed as a percentage of the rodent LD50 (in milligrams per kilogram) to give the Human Exposure/Rodent Toxicity index (HERT). All LD50s are by the oral route except those marked P (intraperitoneal) and V (intravenous).
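The footnote's recipe can be checked against a few printed rows of Table 11.4: divide the daily dose by 70 kg, then express that human dose as a percentage of the rodent LD50. The dose and LD50 pairings below follow our reading of the table, so treat them as illustrative rather than authoritative; the printed HERT values are rounded, which is why the check uses a loose tolerance.

```python
# Recompute a few printed HERT values from Table 11.4 using the footnote's
# recipe: HERT (%) = 100 * (daily dose / 70 kg) / rodent LD50 (mg/kg).
# Dose/LD50 pairings are our reading of the table (illustrative assumption).

rows = [
    # (chemical and food, daily dose in mg, LD50 in mg/kg, printed HERT %)
    ("Hydrogen cyanide (cassava)", 35.0, 3.7, 13.5),
    ("Caffeine (coffee)", 381.0, 127.0, 4.3),
    ("Theobromine (chocolate)", 48.8, 837.0, 0.08),
    ("Ethyl acetate (beer)", 4.42, 4100.0, 0.002),
]

for name, dose_mg, ld50, printed in rows:
    computed = 100.0 * (dose_mg / 70.0) / ld50
    print(f"{name}: computed {computed:.3g}%, printed {printed}%")
```

Running this reproduces each printed value to within rounding, which is a useful sanity check when transcribing the table.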
coffee beans and similar to its concentration in apples, pears, plums, peaches, cherries and apricots. Chlorogenic acid and caffeic acid are also mutagens (Ames et al., 1990a). The genotoxic activity of coffee towards mammalian cells has been demonstrated (Tucker et al., 1989). Cyanogenesis, the ability to release hydrogen cyanide, is widespread in plants, including several foods, of which the most widely eaten are cassava and lima bean (Poulton, 1983). Cassava is eaten widely throughout the tropics, and is a dietary staple for over 300 million people (Bokanga et al., 1994). There are few effective means of removing the cyanogenic glycosides that produce hydrogen cyanide, and cooking is generally not effective (Bokanga et al., 1994; Poulton, 1983). In mice, no standard lifetime studies of caffeine have been conducted. In rats, cancer tests of caffeine have been negative, but one study that was inadequate because of early mortality showed an increase in pituitary adenomas (Yamagami et al., 1983). The chemicals in Table 11.4 might reasonably be evaluated by the National Toxicology Program as candidates for further testing.
Acknowledgements

This work was supported through the Lawrence Berkeley National Laboratory by the US Department of Energy, contract DE-AC03-76SF00098, and through the University of California, Berkeley by the National Institute of Environmental Health Sciences Center Grant ES01896. We thank Neela B. Manley for comments on the manuscript, and Stuart W. Krasner for providing the disinfection by-products database.
References

Ahmet, S.S. and Müller, K. (1978) Effect of wound-damages on the glyco-alkaloid content in potato tubers and chips. Lebensmittel-Wissenschaft Technologie, 11, 144-146.
American Water Works Association (1993) Disinfectant/Disinfection By-Products Database for the Negotiated Regulation. AWWA, Washington, DC.
Ames, B.N. (1983) Dietary carcinogens and anti-carcinogens: oxygen radicals and degenerative diseases. Science, 221, 1256-1264.
Ames, B.N. (1984) Cancer and diet. Science, 224, 668-670, 757-760.
Ames, B.N. and Gold, L.S. (1987) Science, 238, 1633-1634.
Ames, B.N. and Gold, L.S. (1988) Reply to letter to the editor: Carcinogenic risk estimation. Science, 240, 1043-1047.
Ames, B.N. and Gold, L.S. (1989) Letter to the editor: Pesticides, risk and applesauce. Science, 240, 1040-1047.
Ames, B.N. and Gold, L.S. (1990) Chemical carcinogenesis: too many rodent carcinogens. Proceedings of the National Academy of Sciences of the USA, 87, 7772-7776.
Ames, B.N., Magaw, R. and Gold, L.S. (1987a) Ranking possible carcinogenic hazards. Science, 236, 271-280.
Ames, B.N., Magaw, R. and Gold, L.S. (1987b) Reply to letter to the editor: Risk assessment. Science, 237, 235.
Ames, B.N., Magaw, R. and Gold, L.S. (1987c) Reply to letter to the editor: Carcinogenicity of aflatoxins. Science, 237, 1283-1284.
Ames, B.N. and Gold, L.S. (1987d) Reply to letter to the editor: Risk assessment. Science, 237, 1399-1400.
Ames, B.N., Profet, M. and Gold, L.S. (1990a) Dietary pesticides (99.99% all natural). Proceedings of the National Academy of Sciences of the USA, 87, 7777-7781.
Ames, B.N., Profet, M. and Gold, L.S. (1990b) Nature's chemicals and synthetic chemicals: comparative toxicology. Proceedings of the National Academy of Sciences of the USA, 87, 7782-7786.
Ames, B.N., Shigenaga, M.K. and Hagen, T.M. (1993a) Oxidants, antioxidants, and the degenerative diseases of aging. Proceedings of the National Academy of Sciences of the USA, 90, 7915-7922.
Ames, B.N., Shigenaga, M.K. and Gold, L.S. (1993b) DNA lesions, inducible DNA repair, and cell division: three key factors in mutagenesis and carcinogenesis. Environmental Health Perspectives, 101(Suppl. 5), 35-44.
Ames, B.N., Gold, L.S. and Willett, W.C. (1995) The causes and prevention of cancer. Proceedings of the National Academy of Sciences of the USA, 92, 5258-5265.
Ariza, R.R., Dorado, G., Barbancho, M. and Pueyo, C. (1988) Study of the causes of direct-acting mutagenicity in coffee and tea using the Ara test in Salmonella typhimurium. Mutation Research, 201, 89-96.
Arkima, V. (1968) Die quantitative gaschromatographische Bestimmung der höheren aliphatischen und aromatischen Alkohole im Bier. Monatsschrift Brauerei, 21, 25-27.
Armstrong, B. and Doll, R. (1975) Environmental factors and cancer incidence and mortality in different countries, with special reference to dietary practices. International Journal of Cancer, 15, 617-631.
Baltes, W. (1977) Rösteffekte auf die Kaffeezusammensetzung. Colloque Scientifique International sur le Café, 8, 85-96.
Bendich, A. and Butterworth, C.E. Jr (eds) (1991) Micronutrients in Health and in Disease Prevention. Marcel Dekker, New York.
Bernstein, L., Gold, L.S., Ames, B.N. et al. (1985) Some tautologous aspects of the comparison of carcinogenic potency in rats and mice. Fundamental and Applied Toxicology, 5, 79-86.
Block, G., Patterson, B. and Subar, A. (1992) Fruit, vegetables and cancer prevention: a review of the epidemiologic evidence. Nutrition and Cancer, 18, 1-29.
Blount, B.C., Mack, M.M., Wehr, C. et al. (1997) Folate deficiency causes uracil misincorporation into human DNA and chromosome breakage: implications for cancer and neuronal damage. Proceedings of the National Academy of Sciences of the USA, 94, 3290-3295.
Bokanga, E., Essers, A.J.A., Poulter, N. et al. (eds) (1994) International Workshop on Cassava Safety. Acta Horticulturae, 375. International Society for Horticultural Science, Wageningen, Netherlands.
Bradfield, C.A. and Bjeldanes, L.F. (1987) Structure-activity relationships of dietary indoles: a proposed mechanism of action as modifiers of xenobiotic metabolism. Journal of Toxicology and Environmental Health, 21, 311-323.
Bushway, R.J. and Ponnampalam, R. (1981) a-Chaconine and a-solanine content of potato products and their stability during several modes of cooking. Journal of Agriculture and Food Chemistry, 29, 814-817.
California Environmental Protection Agency, Standards and Criteria Work Group (1994) California Cancer Potency Factors: Update. CalEPA, Sacramento.
Clarke, R.J. and Macrae, R. (eds) (1988) Coffee, Vols 1-3. Elsevier, New York.
Clayson, D.B., Iverson, F., Nera, E.A. and Lok, E. (1990) The significance of induced forestomach tumors. Annual Review of Pharmacology and Toxicology, 30, 441-463.
Clinton, W.P. (1986) The chemistry of coffee. Colloque Scientifique International sur le Café, 11, 87-92.
Cohen, S.M. (1995) Human relevance of animal carcinogenicity studies. Regulatory Toxicology and Pharmacology, 21, 75-80.
Cohen, S.M. and Ellwein, L.B. (1991) Genetic errors, cell proliferation, and carcinogenesis. Cancer Research, 51, 6493-6505.
Cohen, S.M. and Lawson, T.A. (1995) Rodent bladder tumors do not always predict for humans. Cancer Letters, 93, 9-16.
Counts, J.L. and Goodman, J.I. (1995) Principles underlying dose selection for, and extrapolation from, the carcinogen bioassay: dose influences mechanism. Regulatory Toxicology and Pharmacology, 21, 418-421.
Coxon, D.T., Curtis, R.F. and Howard, B. (1975) Ipomeamarone, a toxic furanoterpenoid in sweet potatoes (Ipomoea batatas) in the United Kingdom. Food and Cosmetic Toxicology, 13, 87-90.
Cunningham, M.L., Pippin, L.L., Anderson, N.L. and Wenk, M.L. (1995) The hepatocarcinogen methapyrilene but not the analog pyrilamine induces sustained hepatocellular replication and protein alterations in F344 rats in a 13-week feed study. Toxicology and Applied Pharmacology, 131, 216-223.
Davies, T.S. and Monro, A. (1995) Marketed human pharmaceuticals reported to be tumorigenic in rodents. Journal of the American College of Toxicology, 14, 90-107.
Dietrich, D.R. and Swenberg, J.A. (1991) The presence of α2u-globulin is necessary for d-limonene promotion of male rat kidney tumors. Cancer Research, 51, 3512-3521.
Doll, R. and Peto, R. (1981) The Causes of Cancer. Oxford University Press, New York.
Duggan, R.E. and Corneliussen, P.E. (1972) Dietary intake of pesticide chemicals in the United States (III), June 1968-April 1970. Pesticide Monitoring Journal, 5, 331-341.
Economic Research Service (1994) Vegetables and Specialties Situation and Outlook Yearbook. US Department of Agriculture, Washington, DC.
Economic Research Service (1995) Fruit and Tree Nuts Situation and Outlook Yearbook. US Department of Agriculture, Washington, DC.
Everson, R.B., Wehr, C.M., Erexson, G.L. and MacGregor, J.T. (1988) Association of marginal folate depletion with increased human chromosomal damage in vivo: demonstration by analysis of micronucleated erythrocytes. Journal of the National Cancer Institute, 80, 525-529.
Fraga, C.G., Motchnik, P.A., Shigenaga, M.K. et al. (1991) Ascorbic acid protects against endogenous oxidative damage in human sperm. Proceedings of the National Academy of Sciences of the USA, 88, 11003-11006.
Freedman, D.A., Gold, L.S. and Slone, T.H. (1993) How tautological are inter-species correlations of carcinogenic potency? Risk Analysis, 13, 265-272.
Freudenheim, J.L., Graham, S., Marshall, J.R. et al. (1991) Folate intake and carcinogenesis of the colon and rectum. International Journal of Epidemiology, 20, 368-374.
Fujita, Y., Wakabayashi, K., Nagao, M. and Sugimura, T. (1985) Implication of hydrogen peroxide in the mutagenicity of coffee. Mutation Research, 144, 227-230.
Fung, V.A., Cameron, T.P., Hughes, T.J. et al. (1988) Mutagenic activity of some coffee flavor ingredients. Mutation Research, 204, 219-228.
Gartrell, M.J., Craun, J.C., Podrebarac, D.S. and Gunderson, E.L. (1986) Pesticides, selected elements, and other chemicals in adult total diet samples, October 1980-March 1982. Journal of the Association of Official Analytical Chemists, 69, 146-161.
Gaylor, D.W. and Gold, L.S. (1995) Quick estimate of the regulatory virtually safe dose based on the maximum tolerated dose for rodent bioassays. Regulatory Toxicology and Pharmacology, 22, 57-63.
Gerhardsson, M., Floderus, B. and Norell, S.E. (1988) Physical activity and colon cancer risk. International Journal of Epidemiology, 17, 743-746.
Giovannucci, E., Rimm, E.B., Stampfer, M.J. et al. (1994) Intake of fat, meat, and fiber in relation to risk of colon cancer in men. Cancer Research, 54, 2390-2397.
Giovannucci, E., Rimm, E.B., Ascherio, A. et al. (1995) Alcohol, low-methionine-low-folate diets, and risk of colon cancer in men. Journal of the National Cancer Institute, 87, 265-273.
Gold, L.S., Sawyer, C.B., Magaw, R. et al. (1984) A Carcinogenic Potency Database of the standardized results of animal bioassays. Environmental Health Perspectives, 58, 9-319.
Gold, L.S., de Veciana, M., Backman, G.M. et al. (1986) Chronological supplement to the Carcinogenic Potency Database: standardized results of animal bioassays published through December 1982. Environmental Health Perspectives, 67, 161-200.
Gold, L.S., Slone, T.H., Backman, G.M. et al. (1987) Second chronological supplement to the Carcinogenic Potency Database: standardized results of animal bioassays published through December 1984 and by the National Toxicology Program through May 1986. Environmental Health Perspectives, 74, 237-329.
Gold, L.S., Slone, T.H., Backman, G.M. et al. (1990) Third chronological supplement to the Carcinogenic Potency Database: standardized results of animal bioassays published through December 1986 and by the National Toxicology Program through June 1987. Environmental Health Perspectives, 84, 215-286.
Gold, L.S., Slone, T.H. and Stern, B.R. (1992a) Rodent carcinogens: setting priorities. Science, 258, 261-265.
Gold, L.S., Manley, N.B. and Ames, B.N. (1992b) Extrapolation of carcinogenesis between species: qualitative and quantitative factors. Risk Analysis, 12, 579-588.
Gold, L.S., Manley, N.B., Slone, T.H. et al. (1993) The fifth plot of the Carcinogenic Potency Database: results of animal bioassays published in the general literature through 1988 and by the National Toxicology Program through 1989. Environmental Health Perspectives, 100, 65-135.
Gold, L.S., Slone, T.H., Manley, N.B. and Ames, B.N. (1994a) Heterocyclic amines formed by cooking food: comparison of bioassay results with other chemicals in the Carcinogenic Potency Database. Cancer Letters, 83, 21-29.
Gold, L.S., Garfinkel, G.B. and Slone, T.H. (1994b) Setting priorities among possible carcinogenic hazards in the workplace. In: Smith, C.M., Christiani, D.C. and Kelsey, K.T. (eds) Chemical Risk Assessment and Occupational Health: Current Applications, Limitations, and Future Prospects. Auburn House, Westport, CT, pp. 91-103.
Gold, L.S., Manley, N.B., Slone, T.H. et al. (1995) Sixth plot of the Carcinogenic Potency Database: results of animal bioassays published in the general literature 1989-1990 and by the National Toxicology Program 1990-1993. Environmental Health Perspectives, 103(Suppl. 8), 3-122.
Gold, L.S., Slone, T.H., Manley, N.B. et al. (1997a) Carcinogenic Potency Database. In: Gold, L.S. and Zeiger, E. (eds) Handbook of Carcinogenic Potency and Genotoxicity Databases. CRC Press, Boca Raton, FL, pp. 1-605.
Gold, L.S., Slone, T.H. and Ames, B.N. (1997b) Overview of analyses of the Carcinogenic Potency Database. In: Gold, L.S. and Zeiger, E. (eds) Handbook of Carcinogenic Potency and Genotoxicity Databases. CRC Press, Boca Raton, FL, pp. 651-685.
Goldbohm, R.A., van den Brandt, P.A., van't Veer, P. et al. (1994) A prospective cohort study on the relation between meat consumption and the risk of colon cancer. Cancer Research, 54, 718-723.
Groopman, J.D., Zhu, J.Q., Donahue, P.R. et al. (1992) Molecular dosimetry of urinary aflatoxin-DNA adducts in people living in Guangxi Autonomous Region, People's Republic of China. Cancer Research, 52, 45-52.
Gunderson, E.L. (1995) Dietary intakes of pesticides, selected elements, and other chemicals: FDA Total Diet Study, June 1984-April 1986. Journal of the Association of Official Analytical Chemists, 78, 910-921.
Hall, R.L., Henry, S.H., Scheuplein, R.J. et al. (1989) Comparison of the carcinogenic risks of naturally occurring and adventitious substances in food. In: Taylor, S.L. and Scanlon, R.A. (eds) Food Toxicology: A Perspective on the Relative Risks. Marcel Dekker, New York, pp. 205-224.
Hanham, A.F., Dunn, B.P. and Stich, H.F. (1983) Clastogenic activity of caffeic acid and its relationship to hydrogen peroxide generated during autooxidation. Mutation Research, 116, 333-339.
Hard, G.C. and Whysner, J. (1994) Risk assessment of d-limonene: an example of male rat-specific renal tumorigens. Critical Reviews in Toxicology, 24, 231-254.
Hart, R., Neumann, D. and Robertson, R. (1995) Dietary Restriction: Implications for the Design and Interpretation of Toxicity and Carcinogenicity Studies. ILSI Press, Washington, DC.
Harvey, M.H., Morris, B.A., McMillan, M. and Marks, V. (1985) Measurement of potato steroidal alkaloids in human serum and saliva by radioimmunoassay. Human Toxicology, 4, 503-512.
Hayward, J.J., Shane, B.S., Tindall, K.R. and Cunningham, M.L. (1995) Differential in vivo mutagenicity of the carcinogen/non-carcinogen pair 2,4- and 2,6-diaminotoluene. Carcinogenesis, 16, 2429-2433.
Heinrich, L. and Baltes, W. (1987) Über die Bestimmung von Phenolen im Kaffeegetränk. Zeitschrift für Lebensmittel-Untersuchung und -Forschung, 185, 362-365.
Henderson, B.E., Ross, R.K. and Pike, M.C. (1991) Towards the primary prevention of cancer. Science, 254, 1131-1138.
Hill, M.J., Giacosa, A. and Caygill, C.P.J. (eds) (1994) Epidemiology of Diet and Cancer. Ellis Horwood, New York.
Hultin, H.O. and Proctor, B.E. (1961) Changes in some volatile constituents of the banana during ripening, storage, and processing. Food Technology, 15, 440-444.
Hunter, D.J. and Willett, W.C. (1993) Diet, body size, and breast cancer. Epidemiological Reviews, 15, 110-132.
Innes, J.R.M., Ulland, B.M., Valerio, M.G. et al. (1969) Bioassay of pesticides and industrial chemicals for tumorigenicity in mice: a preliminary note. Journal of the National Cancer Institute, 42, 1101-1114.
International Agency for Research on Cancer (1988) Alcohol Drinking. IARC, Lyon, France.
International Agency for Research on Cancer (1991) Coffee, Tea, Mate, Methylxanthines and Methylglyoxal. IARC, Lyon, France.
Ishidate, M. Jr, Harnois, M.C. and Sofuni, T. (1988) A comparative analysis of data on the clastogenicity of 951 chemical substances tested in mammalian cell cultures. Mutation Research, 201, 89-96.
Kasidas, G.P. and Rose, G.A. (1980) Oxalate content of some common foods: determination by an enzymatic method. Journal of Human Nutrition, 34, 255-266.
Key, T. and Reeves, G. (1994) Organochlorines in the environment and breast cancer. British Medical Journal, 308, 1520-1521.
Kikugawa, K., Kato, T. and Takahashi, S. (1989) Possible presence of 2-amino-3,4-dimethylimidazo[4,5-f]quinoline and other heterocyclic amine-like mutagens in roasted coffee beans. Journal of Agriculture and Food Chemistry, 37, 881-886.
Krewski, D., Szyszkowicz, M. and Rosenkranz, H. (1990) Quantitative factors in chemical carcinogenesis: variation in carcinogenic potency. Regulatory Toxicology and Pharmacology, 12, 13-29.
Krewski, D., Gaylor, D.W., Soms, A.P. and Szyszkowicz, M. (1993) An overview of the report - Correlation Between Carcinogenic Potency and the Maximum Tolerated Dose: Implications for Risk Assessment. Risk Analysis, 13, 383-398.
Larson, J.L., Wolf, D.C. and Butterworth, B.E. (1994) Induced cytotoxicity and cell proliferation in the hepatocarcinogenicity of chloroform in female B6C3F1 mice: comparison of administration by gavage in corn oil vs. ad libitum in drinking water. Fundamental and Applied Toxicology, 22, 90-102.
Le Marchand, L., Kolonel, L.N., Wilkens, L.R. et al. (1994) Animal fat consumption and prostate cancer: a prospective study in Hawaii. Epidemiology, 5, 275-282.
Luckner, M. (1990) Secondary Metabolism in Microorganisms, Plants, and Animals, 3rd edn. Springer-Verlag, New York.
Maarse, H., Visscher, C.A., Willemsens, L.C. et al. (eds) (1994) Volatile Compounds in Foods, Qualitative and Quantitative Data. Supplement 5 and Cumulative Index. TNO-CIVO Food Analysis Institute, Zeist, The Netherlands.
Macaulay, T., Gallant, C.J., Hooper, S.N. and Chandler, R.F. (1984) Caffeine content of herbal and fast-food beverages. Journal of the Canadian Dietetic Association, 45, 150-156.
MacGregor, J.T., Schlegel, R., Wehr, C.M. et al. (1990) Cytogenetic damage induced by folate deficiency in mice is enhanced by caffeine. Proceedings of the National Academy of Sciences of the USA, 87, 9962-9965.
Mirsalis, J.C., Provost, G.S., Matthews, C.D. et al. (1993) Induction of hepatic mutations in lacI transgenic mice. Mutagenesis, 8, 265-271.
National Research Council (1979) The 1977 Survey of Industry on the Use of Food Additives. National Academy Press, Washington, DC.
National Research Council (1987) Regulating Pesticides in Food: The Delaney Paradox. National Academy Press, Washington, DC.
National Research Council (1994) Science and Judgment in Risk Assessment. National Academy Press, Washington, DC.
National Toxicology Program (1993) Toxicology and Carcinogenesis Studies of Coumarin (CAS No. 91-64-5) in F344/N Rats and B6C3F1 Mice (Gavage Studies). NTP Technical Report Series No. 422. DHHS, Public Health Service, National Institutes of Health, Research Triangle Park, NC.
Neurath, G.B., Dünger, M., Pein, F.G. et al. (1977) Primary and secondary amines in the human environment. Food and Cosmetics Toxicology, 15, 275-282.
Ochiai, M., Masatoshi, W., Hiromi, K. et al. (1996) DNA adduct formation, cell proliferation and aberrant crypt focus formation induced by PhIP in male and female rat colon with relevance to carcinogenesis. Carcinogenesis, 17, 95-98.
Omenn, G.S., Stuebbe, S. and Lave, L.B. (1995) Predictions of rodent carcinogenicity testing results: interpretation in the light of the Lave-Omenn value-of-information model. Molecular Carcinogenesis, 14, 37-45.
Pariza, M.W. and Boutwell, R.K. (1987) Historical perspective: calories and energy expenditure in carcinogenesis. American Journal of Clinical Nutrition, 45(Suppl. 1), 151-156.
Peto, R., Pike, M.C., Bernstein, L. et al. (1984) The TD50: a proposed general convention for the numerical description of the carcinogenic potency of chemicals in chronic-exposure animal experiments. Environmental Health Perspectives, 58, 1-8.
Pons, W.A. Jr (1979) High pressure liquid chromatographic determination of aflatoxins in corn. Journal of the Association of Official Analytical Chemists, 62, 586-594.
Poole, S.K. and Poole, C.F. (1994) Thin-layer chromatographic method for the determination of the principal polar aromatic flavour compounds of the cinnamons of commerce. Analyst, 119, 113-120.
Poulton, J.E. (1983) Cyanogenic compounds in plants and their toxic effects. In: Keeler, R.F. and Tu, A.T. (eds) Handbook of Natural Toxins: Plant and Fungal Toxins, Vol. 1. Marcel Dekker, New York, pp. 117-157.
Preston-Martin, S., Pike, M.C., Ross, R.K. and Jones, P.A. (1990) Increased cell division as a cause of human cancer. Cancer Research, 50, 7415-7421.
Preston-Martin, S., Monroe, K., Lee, P.J. et al. (1995) Spinal meningiomas in women in Los Angeles County: investigation of an etiological hypothesis. Cancer Epidemiology Biomarkers and Prevention, 4, 333-339.
Qian, G., Ross, R.K., Yu, M.C. et al. (1994) A follow-up study of urinary markers of aflatoxin exposure and liver cancer risk in Shanghai, People's Republic of China. Cancer Epidemiology Biomarkers and Prevention, 3, 3-10.
Rahn, W. and König, W.A. (1978) GC/MS investigations of the constituents in a diethyl ether extract of an acidified roast coffee infusion. Journal of High Resolution Chromatography and Chromatography Communications, 1002, 69-71.
Risch, B. and Herrmann, K. (1988) Die Gehalte an Hydroxyzimtsäure-Verbindungen und Catechinen in Kern- und Steinobst. Zeitschrift für Lebensmittel-Untersuchung und -Forschung, 186, 225-230.
Roe, F.J.C. (1989) Non-genotoxic carcinogenesis: implications for testing and extrapolation to man. Mutagenesis, 4, 407-411.
Roe, F.J., Lee, P.N., Conybeare, G. et al. (1991) Risks of premature death and cancer predicted by body weight in early adult life. Human Experimental Toxicology, 10, 285-288.
Rosculet, G. and Rickard, M. (1968) Isolation and characterization of flavor components in beer. American Society of Brewing Chemists Proceedings, 203-213.
Schmidtlein, H. and Herrmann, K. (1975) Über die Phenolsäuren des Gemüses. II. Hydroxyzimtsäuren und Hydroxybenzoesäuren der Frucht- und Samengemüsearten. Zeitschrift für Lebensmittel-Untersuchung und -Forschung, 159, 213-218.
Slattery, M.L., Schumacher, M.C., West, D.W. and Robison, L.M. (1988) Smoking and bladder cancer: the modifying effect of cigarettes on other factors. Cancer, 61, 402-408.
Steinmetz, K.A. and Potter, J.D. (1991) Vegetables, fruit, and cancer. I. Epidemiology. Cancer Causes and Control, 2, 325-357.
Stich, H.F., Rosin, M.P., Wu, C.H. and Powrie, W.D. (1981) A comparative genotoxicity study of chlorogenic acid (3-O-caffeoylquinic acid). Mutation Research, 90, 201-212.
Stofberg, J. and Grundschober, F. (1987) Consumption ratio and food predominance of flavoring materials. Second cumulative series. Perfumer and Flavorist, 12, 27-56.
Swanson, C.A., Jones, D.Y., Schatzkin, A. et al. (1988) Breast cancer risk assessed by anthropometry in the NHANES I epidemiological follow-up study. Cancer Research, 48, 5363-5367.
Takagi, K., Toyoda, M., Fujiyama, Y. and Saito, Y. (1990) Effect of cooking on the contents of a-chaconine and a-solanine in potatoes. Journal of the Food Hygienic Society of Japan, 31, 67-73.
Technical Assessment Systems (1989) Exposure 1 Software Package. TAS, Washington, DC.
Thun, M.J., Calle, E.E., Namboodiri, M.M. et al. (1992) Risk factors for fatal colon cancer in a large prospective study. Journal of the National Cancer Institute, 84, 1491-1500.
Tressl, R., Bahri, D., Koppler, H. and Jensen, A. (1978) Diphenole und Caramelkomponenten in Röstkaffees verschiedener Sorten. II. Zeitschrift für Lebensmittel-Untersuchung und -Forschung, 167, 111-114.
Tucker, J.D., Taylor, R.T., Christensen, M.L. et al. (1989) Cytogenetic response to coffee in Chinese hamster ovary AUXB1 cells and human peripheral lymphocytes. Mutagenesis, 4, 343-348.
United Fresh Fruit and Vegetable Association (1989) Supply Guide: Monthly Availability of Fresh Fruit and Vegetables. UFFVA, Alexandria, VA.
US Environmental Protection Agency (1984) Ethylene Dibromide (EDB) Scientific Support and Decision Document for Grain and Grain Milling Fumigation Uses. USEPA, Washington, DC.
US Environmental Protection Agency (1989a) Daminozide Special Review. Technical Support Document - Preliminary Determination to Cancel the Food Uses of Daminozide. USEPA, Washington, DC.
US Environmental Protection Agency (1989b) Daminozide Special Review. Crop Field Trials. Supplemental Daminozide and UDMH Residue Data for Apples, Cherries, Peanuts, Pears, and Tomatoes. USEPA, Washington, DC.
US Environmental Protection Agency (1991a) EBDC/ETU Special Review. DRES Dietary Exposure/Risk Estimates. USEPA, Washington, DC.
US Environmental Protection Agency (1991b) Report of the EPA Peer Review Workshop on α2u-Globulin: Association with Renal Toxicity and Neoplasia in the Male Rat. USEPA, Washington, DC.
US Environmental Protection Agency (1992) Ethylene bisdithiocarbamates (EBDCs); notice of intent to cancel; conclusion of special review. Federal Register, 57, 7484-7530.
US Environmental Protection Agency (1994a) Estimating Exposure to Dioxin-Like Compounds (Review Draft). USEPA, Washington, DC.
US Environmental Protection Agency (1994b) Health Assessment Document for 2,3,7,8-Tetrachlorodibenzo-p-Dioxin (TCDD) and Related Compounds. USEPA, Washington, DC.
US Environmental Protection Agency (1995) Re-evaluating Dioxin: Science Advisory Board's Review of EPA's Reassessment of Dioxin and Dioxin-like Compounds. USEPA, Washington, DC.
US Environmental Protection Agency, Office of Research and Development (1996) Proposed Guidelines for Carcinogen Risk Assessment. Federal Register, 61, 17960-18011.
US Food and Drug Administration (1991a) FDA Pesticide Program: Residues in foods 1990. Journal of the Association of Official Analytical Chemists, 74, 121A-141A.
US Food and Drug Administration (1991b) Butylated hydroxyanisole (BHA) intake: Memo from Food and Color Additives Section to L. Lin. USFDA, Washington, DC.
US Food and Drug Administration (1992) Exposure to Aflatoxins. USFDA, Washington, DC.
US Food and Drug Administration (1993) Food and Drug Administration Pesticide Program: Residue monitoring 1992. Journal of the Association of Official Analytical Chemists, 76, 127A-148A.
US Food and Drug Administration (1993) Assessment of carcinogenic upper bound lifetime risk resulting from aflatoxins in consumer peanut and corn products. Report of the Quantitative Risk Assessment Committee. USFDA, Washington, DC.
US National Cancer Institute (1984) Everything doesn't cause cancer: But how can we tell which things cause cancer and which ones don't? NIH Publication No. 84-2039. USNCI, Bethesda, MD.
Willett, W.C. and Stampfer, M.J. (1990) Dietary fat and cancer - another view. Cancer Causes and Control, 1, 103-109.
World Health Organization (1993) Polychlorinated Biphenyls and Terphenyls. Environmental Health Criteria 140. WHO, Geneva.
Wu-Williams, A.H., Zeise, L. and Thomas, D. (1992) Risk assessment for aflatoxin B1: a modeling approach. Risk Analysis, 12, 559-567.
Yamagami, T., Handa, H., Juji, T. et al. (1983) Rat pituitary adenoma and hyperplasia induced by caffeine administration. Surgical Neurology, 20, 323-331.
Youngman, L.D., Park, J.Y. and Ames, B.N. (1992) Protein oxidation associated with aging is reduced by dietary restriction of protein or calories. Proceedings of the National Academy of Sciences of the USA, 89, 9112-9116.
Yu, M.C., Tong, M.J., Govindarajan, S. and Henderson, B.E. (1991) Nonviral risk factors for hepatocellular carcinoma in a low-risk population, the non-Asians of Los Angeles County, California. Journal of the National Cancer Institute, 83, 1820-1826.
Zeise, L., Wilson, R. and Crouch, E. (1984) Use of acute toxicity to estimate carcinogenic risk. Risk Analysis, 4, 187-199.
12 Threshold of regulation

M.A. CHEESEMAN and E.J. MACHUGA
This chapter discusses the scientific and legal basis for establishing a threshold of regulation for substances used in food contact materials. The so-called threshold of regulation is set at a level below which exposure to a given chemical substance can reasonably be expected to result in negligible risk, even in the absence of specific information on the toxicity of the substance. The utility of the threshold of regulation concept in enabling regulatory decision-making has been discussed frequently in the literature (Frawley, 1967; Flamm et al., 1987; Rulis, 1989; Munro, 1990; Machuga et al., 1992). In August 1995, the United States Food and Drug Administration (FDA) established a process for applying a threshold of regulation to uses of substances in food contact materials (Food and Drug Administration, 1995). Other regulatory bodies worldwide are also currently considering the application of the concept to the regulation of food packaging and other regulatory areas. We use the FDA's process as an established example, but the general principles of the threshold process are applicable to any regulatory process for food packaging and may have application in broader regulatory areas.
12.1 Introduction

In 1958, the US Congress amended the Federal Food, Drug, and Cosmetic Act (FD&C Act) to require pre-market approval of food additives (21 USC 321(s), 342(a)(2)(C), and 348). Under US law, a 'food additive' is any substance the intended use of which results or may reasonably be expected to result, directly or indirectly, in its becoming a component or otherwise affecting the characteristics of any food. To obtain the necessary pre-market approval, persons are required to petition the FDA and to provide sufficient data to establish the safety of the additive under its intended conditions of use. These data can include toxicity data, data on the manufacture and use of the additive, data on potential impurities, data on human exposure from the additive's addition to food, and an assessment of the potential environmental impact that approval of the additive may have. When one considers the above food additive definition within the framework of the physical laws that govern diffusion (the second law of
thermodynamics predicts that any two substances in contact with one another will tend to diffuse into one another), a strict interpretation of the law leads to the inescapable conclusion that all components of food contact material are food additives and, therefore, must be approved for their intended use. Thus, a strict interpretation of US law and relevant science would require food additive petitions for every component of all food contact materials, no matter how trivial the use level. This, in turn, would require that the FDA expend the resources required to review these petitions. Chemicals used in food packaging materials have posed a number of problems for regulatory agencies in reaching safety determinations. Although the potential for components of food packaging to migrate to food in significant amounts is often quite small, the chemicals used in the manufacture of food packaging can often be relatively toxic chemicals compared to food ingredients. The development of steadily lower detection limits for many analytical methods has provided evidence for classifying many substances previously not considered food additives as migrating to food. In addition, the versatility and use of synthetic food packaging have increased dramatically, resulting in an increase in the number and variety of compounds used in contact with food, and emphasizing the need for a more efficient regulatory scheme for dealing with those components that result in negligible risk to the consumer. Therefore, the FDA has found it essential to apply the principle of commensurate effort, i.e. to ensure that limited resources for regulatory review are applied in proportion to the risk to the public health. As indicated above, all components of food contact material may be expected to become components of food in some amount and, literally, would be required to be the subjects of food additive listing regulations. 
However, as with all statutory/regulatory scenarios, the legal principle of de minimis (this doctrine is expressed in Latin as de minimis non curat lex - the law does not concern itself with trifles) grants regulatory bodies discretionary authority in deciding whether or not to apply a strict interpretation of the applicable law/regulation. In the case of the FD&C Act, the FDA's discretionary authority in dealing with de minimis food additive situations was cited in the 1979 US Court of Appeals decision in Monsanto v. Kennedy, 613 F.2d 947 (DC Cir. 1979). In this noteworthy case, the court stated that the Commissioner of Food and Drugs could decline to regulate a substance as a food additive even when the substance met the literal definition of a food additive under given conditions of use. The Monsanto court's decision stated that 'there is latitude inherent in the statutory scheme to avoid literal application of statutory definition of "food additive" in those de minimis situations that, in the informed judgement of the Commissioner of Food and Drugs, clearly present no public health or safety concerns'. In developing the threshold of regulation
process, the FDA interpreted the Monsanto case to mean that it may exempt food additives from the requirement of actually being listed in the notified regulations where the FDA believes that there are clearly no public health or safety concerns. As more diverse materials have been developed for use in food packaging over the past three decades, regulatory bodies such as the FDA have been faced with the problem of food additive petitions proposing minor uses of chemicals in food packaging that largely had to be considered under a traditional regulatory process. Although the application of a threshold of regulation process to such petitions was proposed as early as 1967 (Frawley, 1967), only recently has sufficient information become available to establish a level in the daily diet at or below which the risk to the public health could be considered negligible for a broad category of chemicals used in food contact materials. During the past three decades, several proposals have been put forward by industry and government regulatory scientists in response to the need for a more efficient regulatory scheme for indirect food additives (Table 12.1). The earliest of these proposals, made by Frawley, recommended establishing a dietary concentration level of 100 ppb, excluding pesticides and heavy metals, as the threshold of regulation for food packaging materials, based on a consideration of classical toxicological effects (i.e. non-carcinogenic effects) (Frawley, 1967). Frawley's proposal was based on an analysis of the range of chronic dietary concentrations at which toxic effects occur. His analysis of the results of 2-year chronic oral feeding studies on 220 compounds showed that only five of the 220 chemicals exhibited toxic effects at dietary concentrations below 1000 ppb, and all five of these were pesticides, compounds that would be expected to be more toxic than most substances. 
Even among the five pesticides, none exhibited toxic effects at dietary concentrations below 100 ppb. Rather than basing a threshold of regulation on dietary concentration, L.L. Ramsey, former Assistant Director for Regulatory Programs of the FDA's Bureau of Science, put forth a proposal to consider a 50-ppb migration level into food as the 'threshold of regulation' for food contact materials (L.L. Ramsey, unpublished).

Table 12.1 Threshold of regulation proposals

Source    Date    Migration (ppb)    Diet. conc. (ppb)
Frawley   1967    -                  100
Ramsey    1969    50                 -
SPI       1977    <50                -
Munro     1990    -                  1.0a
FDA       1993    -                  0.5b

a Based on a diet of 1500 g per person per day.
b Based on a diet of 3000 g (1500 g liquid, 1500 g solid) per person per day.

The Society of the Plastics
Industry, Inc., submitted a citizen petition to the FDA in March 1977 (docket no. 77P-0122) requesting that the FDA modify the food additive regulations so that the use of a substance that does not result in detectable levels of migration into food-simulating solvents (using validated analytical methods sensitive to at least 50 ppb) would be exempt from regulation as a food additive, unless there was scientific evidence to indicate that the substance presented a significant risk of irreversible harm to human health. More recently, a study was carried out under the auspices of the Canadian Centre for Toxicology to evaluate the scientific basis for the safety evaluation of trace levels of packaging materials migrating into food. The consensus of those who took part in the study was that a 1-ppb dietary concentration level (equivalent to 1.5 µg per person per day for a 1.5-kg daily diet) may be contemplated as the 'threshold of regulation' for components of food contact articles for which no toxicological data have been developed (Munro, 1990). In 1989, Rulis published an evaluation of acute toxicity data and carcinogenic potencies in which he suggested that growing databases of these types of toxicity data might be used to support a threshold of regulation for components of food contact materials (Rulis, 1989). Rulis analyzed the results of 18 000 acute oral feeding studies in the Registry of Toxic Effects of Chemical Substances (RTECS) and found that all acute toxic effects observed in these studies occurred at levels corresponding to a concentration above 1000 ppb in the daily diet. The group of 18 000 compounds was restricted only by the availability of data for oral dosing and appropriate species, and thus is representative of a broad range of chemical structures and activities.
Moreover, this diversity of compounds for which acute data were available demonstrates that these compounds are a reasonable representation of chemicals that might be used in the production of food packaging. Therefore, it is reasonable to assume that substances used in food packaging would not exhibit acute toxic effects below 1000 ppb in the diet. Considering only acute data, and applying a 1000-fold safety factor to the 1000-ppb lower limit, would lead to a 1-ppb level in the daily diet that could be considered for a threshold of regulation. Moreover, a 1-ppb threshold is well below the dietary concentration at which toxic effects occur as a result of chronic exposure to chemical substances. The results of 2-year chronic feeding studies on 220 compounds considered by Frawley (1967) demonstrate that only five of the compounds demonstrated toxic effects below 1000 ppb in the diet and all five of the more toxic chemicals are pesticides. Of these five, none exhibited toxic effects below 100 ppb in the diet. Again, application of a 100-fold safety factor, which is typically used by regulatory agencies in extrapolating from no observed effect levels (NOELs) derived from chronic animal studies to humans, would result in an acceptable daily intake of 1 ppb in the diet in all cases.
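The safety-factor arithmetic above reduces to a single division; the following minimal sketch uses only the values quoted in the text (the 1000-ppb acute floor with a 1000-fold factor, and Frawley's 100-ppb chronic floor with the conventional 100-fold factor). The function name is purely illustrative.

```python
def threshold_ppb(lowest_effect_conc_ppb, safety_factor):
    """Apply a safety factor to the lowest dietary concentration
    at which toxic effects were observed, giving a candidate
    threshold-of-regulation level in ppb."""
    return lowest_effect_conc_ppb / safety_factor

# Acute data: no effects below 1000 ppb; 1000-fold factor -> 1 ppb
acute_threshold = threshold_ppb(1000, 1000)

# Chronic data (Frawley, 1967): no effects below 100 ppb;
# 100-fold factor -> 1 ppb again
chronic_threshold = threshold_ppb(100, 100)
```

Both routes converge on the same 1-ppb dietary concentration, which is why the text treats the acute and chronic databases as mutually supporting.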
Because a substance that has not been adequately tested for carcinogenicity may later be found to be a carcinogen, it is also reasonable to consider the likely risk if an unstudied compound used in food packaging were later found to be a carcinogen. A consideration of the likely risk of carcinogenicity is critical to establishing a threshold level, because carcinogenicity is ordinarily considered to be one of the more sensitive toxicity endpoints. To address this issue, Rulis evaluated the carcinogenic potencies of 343 chemicals selected from the carcinogenic potency database published by Gold et al. (1984, 1986, 1987, 1990). Only chemicals administered via the oral route (including gavage) were used. In addition, only the TD50s (the dose which is expected to produce tumors in 50% of test animals over a lifetime) for the most sensitive tumor site/species combinations with a statistical significance of p < 0.01 were chosen for each compound. Rulis found that the carcinogenic potencies of these compounds occurred over a well-defined range, and when the potencies were grouped into dietary concentration ranges and plotted as a probability distribution on a semilogarithmic scale, they formed a 'Gaussian' or normal distribution with the most likely TD50 value at the peak of the curve. Further expansion of the number of compounds included in the database to 566 has not significantly altered the parameters of this probabilistic distribution (Figure 12.1). Again, the variety of chemicals included in the database, the number of chemicals considered and the fact that the parameters of the overall distribution have changed little with the increase in data points from 343 to 566 suggest that the data set may be considered representative of a diverse group of chemicals such as those used in the manufacture of food packaging.

Figure 12.1 Probabilistic distribution of 566 TD50s (relative frequency versus log potency).

To relate the distribution of carcinogenic potencies to dietary intake, Rulis employed a so-called 'one-hit' model to estimate the dietary concentration corresponding to a 1 in 1 000 000 upper bound risk level. To accomplish this transformation, the unit risk for each of the 566 carcinogens is first estimated according to equation 12.1:

Unit risk = 0.50/TD50   (12.1)
Then, the lifetime-averaged dietary concentration corresponding to a maximum risk level of 1 in 1 000 000 is calculated for each compound using equation 12.2:

Dietary concentration = 1.0 × 10^-6/Unit risk   (12.2)

(Dietary concentration here has the units of milligrams per kilogram body weight per day.) Thus the probabilistic distribution of dietary concentrations corresponding to an upper bound limit of 1 in 1 000 000 for the 566 chemicals may be plotted (Figure 12.2). Rulis' analysis of carcinogenic potencies indicates that the most likely carcinogenic potency in the distribution corresponds to an upper bound risk level of 1 in 1 000 000 at a dietary exposure of about 1 ppb (3 µg per person per day for a 3-kg daily diet) (Figure 12.2). Thus, approximately half of the carcinogens may be expected to result in an upper bound risk of less than 1 in 1 000 000 at a dietary concentration of 1 ppb, and the other half would be expected to result in an upper bound risk of greater than 1 in 1 000 000. At a dietary concentration of 0.5 ppb, approximately two-thirds of the compounds would be expected to pose an upper bound risk equal to or less than 1 in 1 000 000. Based on these results, and using the assumption that it is unlikely that an unstudied compound would both be a carcinogen and have an intrinsic potency far greater than that observed for studied compounds, the FDA determined that if an exempted substance present in the daily diet at 0.5 ppb were later found to be a carcinogen, the upper bound risk resulting from the use of the substance would be likely to be small.
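Equations 12.1 and 12.2 can be combined into a short sketch. The 60-kg body weight assumed for the unit conversion is not stated in the chapter and is an illustrative assumption; the 3-kg daily diet matches the figure quoted in the text.

```python
def unit_risk(td50_mg_per_kg_bw_day):
    """Eq. 12.1: linear ('one-hit') unit risk, per (mg/kg bw/day)."""
    return 0.50 / td50_mg_per_kg_bw_day

def dietary_conc_ppb(td50, target_risk=1e-6, bw_kg=60.0, diet_kg=3.0):
    """Eq. 12.2 converted to a dietary concentration in ppb (µg per kg food).
    bw_kg and diet_kg are illustrative conversion assumptions."""
    dose = target_risk / unit_risk(td50)       # mg/kg bw/day at the target risk
    mg_per_day = dose * bw_kg                  # whole-body daily intake, mg
    return mg_per_day / diet_kg * 1000.0       # mg per kg food -> µg/kg = ppb

# With these assumptions, a TD50 of 25 mg/kg bw/day corresponds to a
# dietary concentration of about 1 ppb at the 1-in-1 000 000 risk level,
# consistent with the peak of Rulis' distribution.
```

Halving the acceptable dietary concentration to 0.5 ppb, as the FDA did, correspondingly halves the TD50 below which a hypothetical carcinogen would exceed the 1-in-1 000 000 upper bound.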
Figure 12.2 Plot of constant risk for 566 carcinogens (relative frequency versus log(1/dietary concentration), µg/kg bw).
A key question that was considered during the development of the FDA's threshold of regulation, and raised in comments subsequent to the establishment of the policy, concerns whether the methods used to quantitate carcinogenic risk at very low doses are valid. Although there may be scientific debate regarding the validity of the various methods available to mathematically model the dose-response data from carcinogenicity bioassays, the key concern of regulatory agencies is whether risk assessments are sufficiently conservative to ensure safety (i.e. are not likely to underestimate the risk from exposure to carcinogenic materials). Since no regulatory scheme can guarantee absolute certainty for any assessment, risk assessment procedures used by regulatory agencies need not be expected to quantitate actual risk but rather must be designed to provide a conservative upper bound estimate of risk. In this light, the use of linear extrapolation to low dose to arrive at a risk for a carcinogen is a sufficiently conservative approach to ensure safety. The additional conservatism of applying this procedure to the dose-response data for the most sensitive species and tumor site for a given chemical enables one to arrive at a conservative estimate of the unit risk for the chemical. It is important to note that the conservatisms built into this calculation are likely to overestimate the risk at low doses. Therefore, the actual risk posed by these chemicals at so-called worst-case 1 in 1 000 000 risk levels in the diet could be anywhere from zero to 1 in 1 000 000. Thus, although the risk assessment procedures used for reaching regulatory decisions may not quantitate risk, they are sufficiently conservative to protect the public health.
An additional concern that could be raised about the scientific basis for the FDA's threshold of regulation is that carcinogenicity may not be the most sensitive toxicological endpoint, and hence basing a threshold on carcinogenic potencies may not provide an adequate margin of safety. Traditionally, regulatory bodies have based safety decisions on an evaluation of relevant toxicity data using a NOEL or LOEL (lowest observed effect level) for the most sensitive endpoint in the most sensitive species and applying an appropriate safety factor to arrive at an acceptable daily intake. Although a threshold level based on the distribution of dietary concentrations that represent a 1 in 1 000 000 risk level for a range of carcinogenic compounds may not specifically consider more sensitive endpoints than carcinogenicity, the use of a linear extrapolation to the 1 in 1 000 000 risk level is in effect the application of a large safety factor (approximately 50 000-100 000) to the result of 2-year chronic studies. Thus, even if more sensitive endpoints than carcinogenicity do exist, the application of a million-fold reduction to the unit risk for carcinogenicity may also be expected to ensure that an appropriate safety margin exists for more sensitive endpoints, for which a safety factor much smaller than 1 000 000 is generally applied. As noted above, this has been confirmed by applying safety factors to NOELs in acute and chronic studies.
Another question that has been present throughout the evolution of the threshold of regulation concept is whether a threshold level should be set as a level of migration to food or as a maximum dietary concentration. A threshold level that is simply a migration level to food has the advantage that a single measurable level in food applies to all situations, while a threshold based on dietary concentration requires the use of consumption factors and/or food type distribution factors to translate a measured migration level to a dietary concentration. Thus, the determination of whether the use of a particular compound in food packaging results in an exposure below the threshold of regulation is simpler if the threshold is a migration level. However, the actual amount of a given additive consumed as a result of its use in food packaging is not solely dependent upon the amount of the material migrating into food simulants, but also depends upon how widely the additive is used in food packaging and what portion of the diet that food packaging contacts. Thus, selection of a migration level as a threshold of regulation leads to a policy that is simpler to administer but would not eliminate the burden of issuing regulations for situations that are even more trivial than some of those exempted by the threshold. Selection of a dietary concentration as a threshold of regulation permits comparison of the threshold level directly to relevant toxicity data, because toxicological risk is a function of innate potency of the chemical agent and the amount consumed. Table 12.2 illustrates the calculation of dietary concentrations. The first example shows the calculation of dietary concentration of a substance which migrates to food at a level of 10 ppb. 
If such a compound were used in food packaging that contacted 5% of the daily diet (a consumption factor of 0.05 is the smallest consumption factor that the FDA will generally use in the absence of specific marketing data), then the estimated dietary concentration would be 0.5 ppb. As illustrated in Table 12.2, if the same compound were used in all polymeric food packaging, and migrated at the same level, the estimated dietary concentration would be over 4 ppb.

Table 12.2 Calculation of dietary concentrations (Dietary concentration = Migration × Consumption factor)

Case   Migration   Consumption factor   Dietary concentration
1      10 ppb      0.05a                0.50 ppb
2      10 ppb      0.41b                4.10 ppb

a The minimum consumption factor ordinarily used in calculations of dietary concentration.
b The consumption factor for an additive used in all food contact polymers.

In addition to the threshold level of 0.5 ppb, which may be applied to any non-carcinogenic substance used in food packaging, the FDA's threshold of regulation process also permits the use of regulated direct
food additives as components of food contact articles when the dietary concentration resulting from the indirect use is less than 1% of the acceptable daily intake (ADI) for the substance. Under these conditions, the FDA would exempt a specific food contact use of the substance from regulation as an indirect food additive, even if the dietary exposure exceeded 0.5 ppb. A level of exposure that is 1% of the ADI is sufficiently small that it would not significantly affect the overall cumulative exposure to a substance even in the event that the substance was granted exemptions for several different types of uses in food contact articles. Thus any new use of the additive would result in a trivially small exposure compared to that which has already been judged to be safe, and would be of no concern.
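The arithmetic in Table 12.2 is a single multiplication; the following minimal sketch reproduces both cases from the table (function name is illustrative only).

```python
def dietary_concentration_ppb(migration_ppb, consumption_factor):
    """Dietary concentration = Migration x Consumption factor (Table 12.2).
    The consumption factor is the fraction of the daily diet, by weight,
    contacting the packaging type in question."""
    return migration_ppb * consumption_factor

# Case 1: 10 ppb migration, minimum consumption factor 0.05 -> 0.50 ppb
case1 = dietary_concentration_ppb(10, 0.05)

# Case 2: same migration, used in all food contact polymers (CF 0.41) -> 4.10 ppb
case2 = dietary_concentration_ppb(10, 0.41)
```

Case 1 just meets the 0.5-ppb threshold, while case 2 exceeds it by nearly an order of magnitude, which illustrates why the breadth of use, not just the migration level, determines whether an exemption applies.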
12.2 The threshold of regulation in practice
Protection of the public health is paramount in establishing a threshold of regulation; however, the utility of a threshold of regulation process in the real regulatory world is also a key consideration. In this context, the threshold level would not be useful if it was so low as to exclude the majority of uses of food contact substances resulting in trivial migration into food or if use of the threshold level required a migration level below the detection limit for many analytical methods commonly used to quantify migrants from food contact materials. To test the feasibility of the threshold of regulation concept, the FDA carried out a pilot study. A three-member committee was established to conduct reviews of proposed new uses of food contact substances. Table 12.3 summarizes the results of this pilot study. Of the 35 uses of food contact substances reviewed by the committee, 23 qualified for an exemption from regulation, while 12 failed. The average deliberation time was 3.4 h per submission, as opposed to the 250-500 person hours required to review an indirect food additive petition and issue a regulation. The average total turnaround time was 2-3 months, as opposed to the 1-2-year turnaround for typical indirect additive petitions. A second pilot
Table 12.3 Results of the pilot study

The uses of 35 components of food contact articles were reviewed
23 passed; 12 failed
Average deliberation time was 3.4 h (compared to 250-500 h for the review of an indirect additive petition)
The average turnaround of submissions was 2-3 months (compared to 1-2 years for an indirect food additive petition)
study extended from 1991 until finalization of the threshold of regulation rule-making process in the summer of 1995. Since 1991, the average number of threshold of regulation submissions that the FDA has received is about 90 per year. Roughly half of those receive exemptions subsequent to the initial review. Of the others, many require limited additional information before exemptions may be granted. The dietary concentration level is usually the litmus test for whether a proposed use will qualify for a threshold of regulation exemption. Although the FDA and other regulatory agencies have issued guidance on how to estimate the dietary concentration likely to result from particular uses of a chemical in food contact material, threshold of regulation decisions often involve unique uses of materials or exposure scenarios that require evaluation on a case-by-case basis. In many cases, specific migration data are not required, and a worst-case estimate of dietary concentration based on 100% migration of the subject additive to food can be assumed. However, many other types of information may also be considered in estimating dietary concentrations of substances for food contact use. Some examples of the types of information that may be appropriate for specialized food contact uses are given in Table 12.4. For example, materials used in food processing equipment may be particularly resistant to physical or chemical abrasion, and thus are not likely to migrate to food. A component of food processing equipment is usually intended for use in contact with bulk quantities of food such that the total quantity of food contacted during the useful lifetime of the equipment is enormous in comparison to the amount of a chemical incorporated into the food contact equipment. Thus, even if one assumes that all of the chemical migrates to food, the resulting dietary concentration is likely to be below the threshold of regulation. 
In addition, information on the conditions of use of the food contact material must be considered, since this type of information may drastically alter the likelihood that the material may migrate to food in significant amounts. For instance, food contact materials used
Table 12.4 Threshold of regulation special cases

Hard metallic alloys: data on hardness or resistance to abrasion
Food processing equipment intended for repeated use: useful lifetime of the food contact article and an estimate of the amount of food processed per unit surface area
Volatile solvents: boiling point of the solvent and curing temperature and time for the polymer, or information on other means of removing solvents
Recycled materials: migration testing demonstrating the effectiveness of barrier layers
only at extremely low temperatures or restricted to use in contact with dry foods generally result in lower dietary concentrations than if used in contact with fatty or aqueous foods or in contact with food at elevated temperatures. Typically, a request is limited to the use of a substance in a particular type or types of food contact material (e.g. all polyethylene polyolefins), but other limitations may include use in contact with specific food types or specific foods (e.g. aqueous food or carbonated beverages). In these cases, 'consumption factors' (consumption factors are used by the FDA, and are based on an estimate of the fraction of the total diet by weight that is typically in contact with a given type of food contact material) and 'food distribution factors' (food distribution factors are used by the FDA, and represent the fraction of individual food types (aqueous, fatty, etc.) contacting a given packaging material) are used to estimate the dietary concentration.

M = f(aqueous and acidic) m(aqueous and acidic) + f(alcoholic) m(alcoholic) + f(fatty) m(fatty)   (12.3)

Dietary concentration = M × Consumption factor   (12.4)

In equation 12.3, M is the total concentration of a component of food contact material in the food that it contacts, and is calculated by summing the products of the migration levels into food simulants representative of specific food types (the m terms) and the food-type distribution factors for those food types (f(aqueous and acidic), f(alcoholic), f(fatty)).
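Equations 12.3 and 12.4 can be sketched as a weighted sum followed by a multiplication. The migration and distribution-factor numbers below are hypothetical, chosen only to make the arithmetic concrete; they are not FDA values.

```python
def total_concentration(migration_ppb, distribution_factors):
    """Eq. 12.3: M = sum over food types of (distribution factor x migration
    into the simulant for that food type), both keyed by food type."""
    return sum(distribution_factors[ft] * migration_ppb[ft]
               for ft in migration_ppb)

def dietary_concentration(m_total_ppb, consumption_factor):
    """Eq. 12.4: scale M by the fraction of the diet contacting the material."""
    return m_total_ppb * consumption_factor

# Hypothetical migration levels into each food simulant (ppb):
m = {'aqueous and acidic': 8.0, 'alcoholic': 2.0, 'fatty': 20.0}
# Hypothetical food-type distribution factors (must sum to 1 across types):
f = {'aqueous and acidic': 0.6, 'alcoholic': 0.1, 'fatty': 0.3}

M = total_concentration(m, f)                 # 8*0.6 + 2*0.1 + 20*0.3 = 11.0 ppb
dc = dietary_concentration(M, 0.05)           # minimum consumption factor
```

With these illustrative numbers the dietary concentration comes out to 0.55 ppb, just above the 0.5-ppb threshold, showing how a fairly high fatty-food migration level can still be offset or not by the consumption and distribution factors.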
In cases where a specific polymer is known to be used in contact with only a very small fraction of the daily diet or where a specific polymer has only limited use in contact with a specific type of food (e.g. dry foods, fatty foods), then a relatively high migration level may still result in a dietary concentration lower than the threshold of regulation when appropriate consumption factors or food distribution factors are applied. The FDA also requires the submission of the results of a search of appropriate literature sources of toxicity data in order to facilitate the review. This information is used to aid in determining whether the substance has been shown to be either a carcinogen or an unusually potent toxin. Other resources such as structure-activity relationships and toxicity databases may also be used to assist in making the above determinations. If relevant studies raise significant concerns regarding the toxicity or carcinogenicity of a substance, the proposed use of the material may be required to undergo the more comprehensive safety review associated with the food additive petition process. In this way, the toxicological portion of the threshold review effectively adds an additional safety margin to the 0.5-ppb threshold by excluding substances for which there is evidence demonstrating likely carcinogenicity or extreme toxicity. For example, the majority of the compounds associated in Figure 12.2 with upper bound risk of greater than 1 in 1 000 000 at a dietary concentration of 0.5 ppb
have distinct structural clues that could be used as a basis for eliminating structurally similar compounds from consideration under a threshold of regulation. Known carcinogens must be excluded from review under the FDA's threshold of regulation process, because the use of carcinogens as food additives is prohibited in the USA by the Delaney clause (section 409(c)(3)(A)) of the FD&C Act. The FDA has also used the risk assessment procedures applied to food additives with carcinogenic impurities to establish criteria for its threshold of regulation process. The FDA has previously used risk assessment procedures to regulate nearly 100 additives containing carcinogenic impurities when the presence of these impurities represented less than a 1 in 1 000 000 upper bound lifetime risk. The worst-case exposure to a carcinogenic impurity in an additive under threshold review is represented by the impurity being present in the diet at the threshold concentration of 0.5 ppb. Assuming such a worst-case exposure, the minimum TD50 that a carcinogenic impurity may have and still ensure a negligible level of risk from exposure to the chemical from a use exempted under the threshold of regulation process is 6.25 mg/kg bw/day. Food additives with carcinogenic impurities shown to be more potent than this level would not be reviewed under the FDA's threshold of regulation process but would undergo the more in-depth evaluation given to a food additive petition. One of the comments in response to the FDA's proposed rule to establish a threshold of regulation process urged the FDA to include the possibility of exempting entire classes of chemicals instead of individual compounds.
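The 6.25 mg/kg bw/day minimum TD50 quoted for carcinogenic impurities can be reproduced by inverting equations 12.1 and 12.2 at the 0.5-ppb threshold. The chapter does not state the conversion constants; the 1.5-kg daily diet and 60-kg body weight below are assumptions that happen to recover the published figure, so treat this as a plausibility check rather than the FDA's actual derivation.

```python
def min_td50(threshold_ppb=0.5, diet_kg=1.5, bw_kg=60.0, max_risk=1e-6):
    """Smallest TD50 (mg/kg bw/day) for which an impurity present at the
    threshold dietary concentration keeps the upper-bound lifetime risk
    at or below max_risk, under a linear (one-hit) model.
    diet_kg and bw_kg are illustrative assumptions, not stated in the text."""
    # threshold_ppb is µg per kg of food; convert to a dose in mg/kg bw/day
    dose = threshold_ppb * diet_kg / bw_kg / 1000.0
    # largest tolerable unit risk, per (mg/kg bw/day), from eq. 12.2 inverted
    max_unit_risk = max_risk / dose
    # invert eq. 12.1: TD50 = 0.5 / unit risk
    return 0.5 / max_unit_risk
```

Under these assumptions the function returns 6.25, matching the chapter's cutoff; an impurity more potent than this (lower TD50) falls outside the threshold review.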
Because the level of migration and resulting dietary concentration depend in part on both the size and chemical properties of the migrating chemical, it would be impossible to predict whether the use of all chemicals within a class would result in dietary concentrations below the threshold based on the migration properties of just one or two sample chemicals. Similarly, the intrinsic toxic potencies for chemicals within a certain class may also vary significantly. Because both the resulting dietary concentration and intrinsic toxic potency may vary considerably for compounds within a given class, the likelihood of a substance posing a potential health hazard, which depends on both these factors, may also vary considerably. Therefore, safety concerns would not permit exemption of a class of chemicals based on the review of only a few chemicals within a given class. However, because of the conservatisms present in estimates of dietary concentrations, where 100% migration is assumed, it is possible to estimate a dietary concentration that may be valid for any compound used in a specific application. This permits a broad determination of whether a compound used at a given level in a given application will result in a dietary concentration below 0.5 ppb. Likewise, chemicals used in specific food additive applications, such as colorants in food contact polymers, are
expected to result in low dietary concentrations of the order of 0.5 ppb based on their general chemical and physical properties, which also serve to limit their migration to food (Cheeseman, 1994). In such cases, where the specific use of a food contact substance has been determined to result in dietary concentrations below the threshold, the threshold of regulation review is reduced to a consideration of toxicity and environmental impact. Although originally conceived to alleviate the overall burden of processing food additive petitions for food contact uses of substances resulting in trivial dietary exposure, the scientific basis supporting the threshold of regulation process also provides guidance for making sound regulatory decisions in other areas of concern. An example is the use of recycled materials for food contact applications. In this case, the identity of potential contaminants in the recycled polymers may not be known. For instance, if the recycled material is separated from food by a virgin barrier layer, the effectiveness of the barrier layer can be determined by migration testing performed on laminates in which the inner recycled layer is spiked with specific amounts of known surrogate contaminants. The chemicals used to spike the recycled layer typically possess a wide range of properties to ensure that they are representative of potential contaminants. For the review of uses of recycled polymers in food contact material, the threshold of regulation may be used as a benchmark for determining if a barrier is functional (i.e. limits migration to those levels that result in a dietary concentration at or below 0.5 ppb). 
In addition, because the effectiveness of the barrier layer depends on its thickness and the use temperature, and because the level of migration into food may depend on the types of food contacted by the material and the duration of contact, it may be necessary to impose limitations on the conditions of use of recycled materials used in food packaging when such limitations are necessary to ensure that safety concerns are negligible. In these cases, the threshold benchmark serves as a tool for developing appropriate limitations and also provides industry with a tool to determine the effectiveness of barrier layers used in food packaging prior to consulting regulatory agencies.
12.3
Advantages and effects of the threshold of regulation process
In nearly any regulatory scenario, there exist areas that are within the legal scope of a strict regulatory interpretation but may be at best on the fringes of regulatory intent (i.e. they are de minimis). In the case of regulation of food packaging material, while the law may literally encompass all components of food packaging, it is prudent to apply more resources to those regulatory questions that represent a greater risk to the public health. In the absence of a clear delineation of a threshold between what
is of concern and what is trivial, uncertainty exists. This uncertainty may result in inconsistent regulatory decisions and the creation of a more adversarial relationship between industry and regulatory agencies. By establishing a threshold of regulation, regulatory agencies can define a de minimis level that serves to advance public health protection efficiency. The most obvious advantage of an established threshold of regulation process is that it represents a more effective use of resources for both the regulatory agency and industry. An established standard provides clearly defined criteria by which the regulatory agency can measure industry submissions, thereby speeding up the process of reaching a decision. Likewise, the clear standard for threshold of regulation decisions also allows industry to assess when regulatory agencies are likely to decline to grant a threshold of regulation exemption. Thus, it permits industry to easily judge what level of regulatory review a given substance proposed for use in food contact material is likely to require prior to contacting the regulatory agency. This permits companies to make more informed decisions regarding the type and level of testing that is likely to be required to gain a favorable response from regulatory bodies. Making threshold of regulation criteria public improves the quality and consistency of submissions and reviews, improving the efficiency of the review process and ensuring fairness. (The FDA criteria for threshold of regulation exemptions are published in Title 21 of the United States Code of Federal Regulations section 170.39 and guidance is available from the agency at HFS-216, 200 C St SW, Washington, DC 20204, USA, or via the World Wide Web on the Center for Food Safety and Applied Nutrition's home page at http://vm.cfsan.fda.gov/index.html.) 
In addition, making threshold of regulation decisions public results in a further saving of time, because anyone may rely on a specific threshold of regulation exemption, not simply the company or individual to whom the initial response was given. (The FDA's threshold of regulation exemption letters are placed on public display at the FDA's Dockets Management Branch, HFA-305, 12420 Parklawn Dr., Rockville, MD 20857, USA.) The public availability of threshold of regulation exemptions will also better define for industry what kinds of uses of food contact chemicals may be likely candidates for threshold of regulation exemptions. With a formal list of previous threshold exemptions, the regulatory body can more easily refine the threshold of regulation policy and offer more useful regulatory opinions and guidance based on a growing database of experience. This process will be enhanced both by the experience gained in evaluating submissions and by the exposure of the regulatory agency to more detailed, accurate, and up-to-date information about industry practices. In addition, threshold of regulation decisions represent formal positions of the regulatory agency and are legally binding on the agency. Again, this promotes consistency of such responses and enables industry to rely
on such decisions. Making the process reliable and open in turn should lessen the likelihood that industry will rely on inconsistent independent determinations that a substance need not be regulated for a particular application. A more open process that promotes participation by industry will also result in a freer interchange of information between the regulatory agency and industry. This in turn will result in regulatory agencies having more detailed, accurate and up-to-date information on the use of chemicals in food packaging. This improved information permits a regulatory agency to make informed decisions regarding the safety of food packaging in general, which results in a high level of public health protection and public confidence.

While the data in a typical indirect additive petition can take between 250 and 500 staff hours for the regulatory agency to review (including the enactment of a regulation) and may take up to 2600 h for industry to prepare, the typical threshold of regulation submission may take only an average of 88 h to prepare and ordinarily takes less than 8 h to review. Table 12.5 shows a comparison of the costs to both industry and the public in developing and reviewing a threshold of regulation submission. The savings are geometric in the sense that beyond the simple savings in cost of developing and preparing the data required for a regulatory review, the entire process of taking an idea from the benchtop to the factory is streamlined. Thus, new packaging or processing technology is brought to the consumer or end-user more efficiently. This could bring the twin benefits of lower prices and increased availability of higher-quality products. An additional benefit is that the threshold of regulation process permits clear decisions in areas for which regulatory agencies could previously offer only qualified guidance. The threshold process is therefore expected to promote innovation in food packaging and food processing technologies.
The clear criteria for a threshold of regulation exemption provide a target for product and process development and a benchmark for what level of regulatory approval is likely to be required. A company can plan more effectively for capitalization and production when it knows that it can reasonably expect an exemption within a relatively short time frame. Additionally, smaller businesses will be more able to participate in product and processing innovation by having the financial burden of the petition
Table 12.5 Savings to the FDA and industry

                   Preparation time   Cost              Review      Cost             Decision
Threshold review   68-108 h           $1400-75 000      8 h         $800             2-3 months
Petition review    2600 h             $85 000-200 000   250-500 h   $25 000-50 000   1-2 years
process removed to as great an extent as possible. Larger businesses will also benefit from this process because innovative products that would have been delayed or abandoned for financial reasons will be able to reach the marketplace in an expedited manner. Overall, the implementation of a threshold of regulation process serves to speed the regulatory process, permit greater allocation of limited regulatory resources to issues of public health concern, and thus better serve the mission of protecting the public health, while saving industry time and money and thereby promoting innovation in food packaging and processing technologies.
12.4
Future issues

In the following discussion, suggestions for using observations collected from data on a wide variety of chemicals are described in the context of thresholds of regulation. These suggestions are of equal or greater importance in determining what data are needed to evaluate safety. Whether a threshold is established to address the legal and administrative requirements of law (a threshold of regulation) or to establish scientific criteria for issuing regulations to permit the use of a substance, the concepts can provide a basis for improving regulatory efficiency. Thus the discussion is not intended to imply that a safety evaluation using such concepts would necessarily exempt a substance from regulation.

Initially, the threshold of regulation concept has been focused on establishing a level of dietary concentration (or migration) for which regulatory bodies could be reasonably certain that the risk to the public health is negligible even in the absence of toxicity data. The next logical question is whether one could justify a higher threshold for compounds for which specific toxicity data exist or for which general assumptions regarding toxicity are valid. Such tiered approaches to a threshold of regulation decision-making process have been proposed in the literature (Munro et al., 1996; Gaylor and Gold, 1995). Such approaches may bridge the gap between traditional regulatory decision-making processes and the threshold of regulation process by utilizing elements of both. The use of structure-activity relationships and short-term toxicity testing to determine an initial level of concern for a compound used in food packaging or processing has always played a part in traditional regulatory safety reviews (Food and Drug Administration, 1993; World Health Organization, 1967, 1978, 1987).
However, the use of this methodology to help determine whether there is a correlation between the toxicity of a food additive and the toxicity database of compounds with similar structure is relatively new, and it is potentially very useful for arriving at decisions regarding the use of chemicals in food contact materials. While the
application of such statistically based methodology may be important in assessing the safety of food additives, it is also important to recognize that the use of statistical measurements to draw inferences about relationships between toxicity data sets should be done with care. Preliminary analysis of the possible correlation between carcinogenic potency and both mutagenicity and acute toxicity data suggests that substances which are negative in the Ames assay and show low toxicity in acute oral feeding studies are likely to be less potent carcinogens, if they are carcinogens at all. Initial work within the FDA has suggested that a higher threshold for substances used in food packaging may be justified based on the results of such short-term toxicity testing (e.g. mutagenicity tests, acute oral feeding studies). These preliminary results are summarized in Table 12.6, and indicate a possible correlation of mutagenicity and/or LD50 values with carcinogenic potency. Out of 566 carcinogens, mutagenicity studies (standard Salmonella tests) were available for 211 (95 negative and 116 positive). The typical dietary concentrations that correspond to a 1 in 1 000 000 upper bound risk for the mutagenic and non-mutagenic carcinogens are shown in Table 12.6. These results show that the typical dietary concentration corresponding to a 1 in 1 000 000 upper bound risk for the 95 non-mutagenic carcinogens is 8 ppb, compared to 1 ppb for the 116 mutagenic carcinogens. (The values of 1 ppb and 8 ppb reflect the fact that the typical potency for non-mutagenic carcinogens is eight-fold lower than the typical potency for mutagenic substances.) These findings may in the future support the establishment of a higher threshold for substances that have been shown to be non-mutagenic by appropriate short-term toxicity testing, or revised criteria for evaluating the safety of indirect food additives.
Additional analysis has been performed on the possible correlation between LD50 values and the potencies of non-mutagenic carcinogens. The typical dietary concentration corresponding to a 1 in 1 000 000 upper bound risk for 33 non-mutagenic carcinogens with LD50 values greater than 2000 mg/kg bw/day is 27 ppb. The corresponding value for 17 non-mutagenic carcinogens with LD50 values greater than 10 000 mg/kg bw/day was found to be 33 ppb. These results indicate that, for the relatively small number of non-mutagenic carcinogens studied, there is a correlation between LD50 values and carcinogenic potencies. Thus, it may be possible in the future to establish higher thresholds for food contact substances based on whether or not they are mutagenic and whether or not they have high LD50 values.

Table 12.6 Correlation between short-term toxicity testing and the virtually safe dose

Test/results                                              VSD* (ppb in diet)
95 non-mutagenic carcinogens                              8
116 mutagenic carcinogens                                 1
566 carcinogens                                           1.2
33 non-mutagenic carcinogens, LD50 > 2000 mg/kg bw/day    27

* The virtually safe dose is defined as the dose estimated to result in no more than a 1 in 1 000 000 risk level of cancer.

Before pursuing methods for establishing higher thresholds, it is prudent to determine whether a threshold that is 8-fold or even 33-fold higher than a 0.5 ppb dietary concentration significantly affects the percentage of substances requiring regulation as food additives. Table 12.7 shows the dietary concentration levels of substances that were the subject of indirect food additive petitions submitted to the FDA over a 5-year period. Of the 163 petitions received, 22 (or 13.5%) involved dietary concentrations at or below 0.5 ppb and would have qualified for an exemption under the FDA's existing threshold of regulation procedure. Table 12.7 also shows a significant increase in the percentage of petitions meeting higher threshold levels (28% of the petitions would meet a 3 ppb threshold and 34% would meet a 5 ppb threshold). These results indicate that establishing higher thresholds for compounds based on their toxicological properties, as determined from appropriate mutagenicity studies, acute oral feeding studies or other appropriate short-term studies, may have a significant impact on the overall scope of a threshold of regulation process.

Recently, several other attempts have been made to build on the threshold of regulation concept by utilizing additional data or relatively quick analyses to provide for a higher threshold than might be permitted in the absence of specific toxicity information. Munro et al. (1996) have proposed the establishment of a 'threshold of concern' based on a correlation of NOELs derived from chronic and subchronic animal studies and the structural classification scheme devised by Cramer et al. (1978).
This structural categorization uses 33 questions to correlate structural clues to toxicity and key physical properties of chemicals that may relate to absorption, distribution and reactivity in biological systems. Munro evaluated over 2900 NOELs for 611 compounds and used the structural scheme to separate the 611 chemicals into three distinct classes of concern, associated with low, moderate and high potential for toxicity, based on knowledge of the toxic properties of structurally similar chemicals. The use of structural clues to prioritize toxicity concerns is well described in the literature and is an accepted part of the regulatory review process.

Table 12.7 Estimated percentage of petitions meeting specific dietary concentration levels

Dietary concentration (ppb)    % Petitions
0.5                            13
3.0                            28
5.0                            34

Munro has analyzed existing toxicity data on representative chemicals from each structural class, based on the statistical distribution of NOELs for the representative chemicals with the application of an appropriate safety factor. Pairwise statistical analysis of the three data sets delineated differences between the members of the three structural classes. The 5th percentile NOEL was determined for each structural class and is proposed as a threshold of concern for substances falling into that class. Munro reports that the 5th percentile NOEL for all 611 compounds is 0.218 mg/kg bw/day, while the 5th percentile NOELs for compounds falling into the low-, moderate- and high-concern groups are 2.99, 0.907 and 0.146 mg/kg bw/day, respectively. With the application of a 100-fold safety factor, these levels would correspond to concentrations of 600 ppb, 181 ppb and 29 ppb in the diet. These dietary concentrations would permit the useful application of Munro's procedure to a wide variety of indirect food additive safety determinations. Although Munro states that the most toxic chemicals in the database are drugs, pesticides and industrial chemicals, and not substances commonly used in food, the question of whether the database used for estimating these tiered threshold levels is representative of the range of industrial chemicals used in food packaging remains to be answered. (It should be noted that Munro's primary application for his tiered threshold of concern was not the evaluation of indirect food additives.) In addition, the boundary parameters for the range of moderate toxicity need to be further defined, since relatively few of the chemicals that Munro considered fell into this category (28 of 611).
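The dietary concentrations quoted for Munro's threshold-of-concern classes can be reproduced from the 5th percentile NOELs. The sketch below is illustrative only; the 60 kg body weight and 3 kg daily diet used to convert an acceptable dose into a dietary concentration are assumptions of mine, not values stated in the text.

```python
# Convert a 5th percentile NOEL (mg/kg bw/day) into a dietary
# concentration (ppb) via a 100-fold safety factor.
# Assumed conversion factors (not from the text): 60 kg body weight,
# 3 kg of food consumed per day.

def noel_to_dietary_ppb(noel, safety_factor=100,
                        body_weight_kg=60.0, diet_kg_per_day=3.0):
    acceptable_dose = noel / safety_factor              # mg/kg bw/day
    daily_intake_mg = acceptable_dose * body_weight_kg  # mg/person/day
    ppm_in_food = daily_intake_mg / diet_kg_per_day     # mg/kg food
    return ppm_in_food * 1000                           # ppb

for label, noel in [("low concern", 2.99),
                    ("moderate concern", 0.907),
                    ("high concern", 0.146)]:
    print(f"{label}: {noel_to_dietary_ppb(noel):.0f} ppb")
```

This recovers roughly 600 ppb, 181 ppb and 29 ppb for the low-, moderate- and high-concern classes, matching the figures quoted above.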
Gaylor and Gold have proposed the use of data on the maximum tolerated dose (MTD), as determined from 90-day subchronic studies in rodent species, to estimate a virtually safe dose (VSD) for individual compounds (Gaylor and Gold, 1995). (The virtually safe dose is defined as the dose estimated to result in no more than a 1 in 1 000 000 risk level of cancer.) This method is based on an evaluation of the TD50s and MTDs for over 300 chronic toxicity studies, including NTP/NCI (National Toxicology Program/National Cancer Institute)-sponsored studies and studies from the published literature. The relationship between MTDs and carcinogenic potencies derived from 2-year chronic bioassays is well established. Subchronic range-finding studies are used in the design of chronic toxicity tests to establish dosing levels. This increases the chance that the chronic bioassay will be useful by simultaneously ensuring that sufficient animals will survive the study and that a measurable toxic effect will be observed. The range of carcinogenic potencies that may be derived from a chronic bioassay is mathematically limited by the dosing levels and the number of animals in the study; therefore, it is possible to estimate the VSD of a carcinogen by using the maximum dosing level in the chronic
bioassay and a scaling factor. Gaylor and Gold demonstrated that VSDs calculated from the MTDs used in the 318 chronic bioassays are comparable to VSDs derived from the dose-response relationship of the bioassay. Ninety-eight per cent of the VSDs estimated from MTDs were within an order of magnitude of the VSDs calculated from the results of the bioassays, while 78% of the VSDs estimated from MTDs were within a factor of 4 of VSDs estimated from bioassay results. Based on this relationship, Gaylor and Gold proposed using the MTDs derived from 90-day subchronic studies to establish a 'threshold of regulation' for individual substances equivalent to the VSDs for those substances. Gaylor and Gold derive a quick estimate of VSDs by using the geometric mean of the ratio MTD/TD50 and the method developed by Krewski (Krewski et al., 1993), which is represented in equation 12.5:

VSD = MTD/740 000    (12.5)
Table 12.8 shows a range of dietary concentrations that would encompass typical exposures for food contact substances, and the minimum MTDs that would result in VSDs equal to or greater than these dietary concentrations using the relationship in equation 12.5. Generally speaking, the MTDs in Table 12.8 are of the same order as the NOELs or LOELs for many chemicals used in food packaging. Thus, the application of Gaylor and Gold's quick estimate of VSDs could have utility in the safety decision process for components of food packaging. Although each of these proposed threshold of regulation procedures may be of use in the regulatory decision-making process, each is more labor-intensive than the threshold of regulation process currently used by the FDA. As the level of effort approaches that required for the food additive petition process, such procedures become less useful for threshold of regulation decisions but of possible value to the extent that they facilitate the overall safety decision process. However, as the database supporting such approaches grows, the utility of such methods is likely to increase. Thus, it may be worthwhile to explore the utility of these processes, separately or in conjunction with traditional regulatory decision-making processes and threshold of regulation processes, to develop a more complete, comprehensive and efficient overall regulatory process.
Table 12.8 Relationship of MTDs/NOELs to dietary concentrations

Dietary concentration (ppb)    Lowest permitted MTD (mg/kg bw/day)
1                              37
5                              185
10                             370
50                             1850
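The 'lowest permitted MTD' column of Table 12.8 follows directly from equation 12.5 once a dietary concentration is converted into a dose. The sketch below assumes a 3 kg daily diet and a 60 kg body weight for that conversion (my assumptions, not stated in the table).

```python
# Lowest MTD whose VSD (equation 12.5: VSD = MTD / 740 000) equals a
# given dietary concentration. Assumed conversion (not from the text):
# 3 kg of diet per day, 60 kg body weight.

def lowest_mtd(dietary_ppb, diet_kg=3.0, bw_kg=60.0, krewski_factor=740_000):
    vsd = dietary_ppb * 1e-3 * diet_kg / bw_kg  # mg/kg bw/day at that ppb
    return vsd * krewski_factor                 # MTD giving exactly that VSD

for ppb in (1, 5, 10, 50):
    print(f"{ppb} ppb -> {lowest_mtd(ppb):.0f} mg/kg bw/day")
# 1 -> 37, 5 -> 185, 10 -> 370, 50 -> 1850, matching Table 12.8
```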
References

Cheeseman, M.A. (1994) FDA's colorants in polymers rule. American Ink Maker, 9, 81-87.
Cramer, G.M., Ford, R.A. and Hall, R.L. (1978) Estimation of toxic hazard - a decision tree approach (and errata sheet). Food and Cosmetics Toxicology, 16, 255-276.
Flamm, W.G., Lake, L.R., Lorentzen, R.J. et al. (1987) Carcinogenic potencies and establishment of a threshold of regulation for food contact substances. In: Whipple, C. (ed.) Contemporary Issues in Risk Assessment, Vol. 2. De Minimis Risk. Plenum Press, New York, pp. 87-92.
Food and Drug Administration (1982) Toxicological Principles for the Safety Assessment of Direct Food Additives and Color Additives Used in Food. Red Book. US Food and Drug Administration, Bureau of Foods, Washington, DC.
Food and Drug Administration (1993) Toxicological Principles for the Safety Assessment of Direct Food Additives and Color Additives Used in Food. Redbook II (Draft). US Food and Drug Administration, Center for Food Safety and Applied Nutrition, Washington, DC.
Food and Drug Administration (1995) Food additives: threshold of regulation of substances used in food-contact articles; Final Rule. Federal Register, 60, 36582-36596.
Frawley, J.P. (1967) Scientific evidence and common sense as a basis for food-packaging regulation. Food and Cosmetics Toxicology, 5, 293-308.
Gaylor, D.W. and Gold, L.S. (1995) Quick estimate of the regulatory virtually safe dose based on the maximum tolerated dose in rodent bioassays. Regulatory Toxicology and Pharmacology, 22, 57-63.
Gold, L.S., Sawyer, C.B., Magaw, R. et al. (1984) A Carcinogenic Potency Database of the standardized results of animal bioassays. Environmental Health Perspectives, 58, 9-319.
Gold, L.S., de Veciana, M., Backman, G.M. et al. (1986) Chronological supplement to the Carcinogenic Potency Database: standardized results of animal bioassays published through December 1982. Environmental Health Perspectives, 67, 161-200.
Gold, L.S., Slone, T.H., Backman, G.M. et al. (1987) Second chronological supplement to the Carcinogenic Potency Database: standardized results of animal bioassays published through December 1986 and by the National Toxicology Program through June 1987. Environmental Health Perspectives, 84, 215-286.
Gold, L.S., Slone, T.H., Backman, G.M. et al. (1990) Third chronological supplement to the Carcinogenic Potency Database: standardized results of animal bioassays published through December 1986 and by the National Toxicology Program through June 1987. Environmental Health Perspectives, 84, 215-286.
Krewski, D., Gaylor, D.W., Soms, A.P. and Szyszkowicz, M. (1993) An overview of the report 'Correlations between carcinogenic potency and the maximum tolerated dose: Implications for risk assessment'. Risk Analysis, 13, 383-398.
Machuga, E.J., Pauli, G.H. and Rulis, A.M. (1992) A threshold of regulation policy for food-contact articles. Food Control, 3(4), 180-182.
Munro, I. (1990) Safety assessment procedures for indirect food additives: an overview. Regulatory Toxicology and Pharmacology, 12, 2-12.
Munro, I.C., Ford, R.A., Kennepohl, E. and Sprenger, J.G. (1996) Correlation of structure class with no-observed-effect levels: a proposal for establishing a threshold of concern. Food and Chemical Toxicology, 34, 829-867.
Rulis, A. (1989) Establishing a threshold of regulation. In: Bonin, J. and Stevenson, D. (eds) Risk Assessment in Setting National Priorities. Plenum Press, New York, pp. 271-278.
World Health Organization (1967) Procedures for investigating intentional and unintentional food additives: Report of a WHO scientific group. WHO Technical Report Series No. 348. WHO, Geneva.
World Health Organization (1978) Principles and methods for evaluating the toxicity of chemicals, Part 1 - Principes et méthodes d'évaluation de la toxicité des produits chimiques, Partie 1. Environmental Health Criteria No. 6. International Programme on Chemical Safety (IPCS). WHO, Geneva.
World Health Organization (1987) Principles for the safety assessment of food additives and contaminants in food. Environmental Health Criteria No. 70. International Programme on Chemical Safety in Co-Operation with the Joint FAO/WHO Expert Committee on Food Additives (JECFA). WHO, Geneva.
13 An approach to understanding the role in human health of non-nutrient chemicals in food N. LAZARUS, J.A. NORMAN, and E.M. MORTBY
13.1
Introduction
Many expert nutritional groups are concerned with defining and then educating the public on the ingredients of a wholesome and concomitantly healthy diet. Vegetables and fruit are high on the list of acceptable foods. Their inclusion is backed up by much epidemiological evidence to show that diets containing a high proportion of fruit and vegetables have a protective effect against cardiovascular-related diseases, as well as cancer (Gey, 1994; Graham et al., 1978). The nutritionists appear to have reached a consensus as to what constitutes a healthy diet. However, on close examination it is rarely clear which constituents of the diet are responsible for these health-giving properties and what mechanisms may underlie their effects.

Food contains both major and minor chemical constituents. Proteins, fats, carbohydrates (both simple and complex) and fibre are the major constituents. Minor constituents include vitamins and minerals. However, in addition to the above there is a host of other chemicals, such as natural inherent non-nutrients, that are present in food. For many years, nutritionists have tended to ignore these chemicals. One reason could be that they were perceived as being nutritionally inert and therefore contributing very little to the wholesomeness of foods. There is now a large body of evidence which suggests that these neglected compounds may play a supporting role in imparting health (Wattenberg, 1993). A selection of these inherent chemicals did, however, catch the attention of toxicologists (Ames, 1989; Ames et al., 1987a,b,c,d). A little belatedly has come the recognition that existing alongside the toxicants are other chemicals which have positive health effects in their own right. Toxicological assessments of chemicals such as pesticides, veterinary drugs and food additives are driven by regulation and are based on the concept of the acceptable daily intake (ADI) (Renwick, 1991).
The ADI is usually derived by the application of a 100-fold safety factor to the 'no observable adverse effect' level determined by animal, usually rodent, experimentation. In these experiments the test chemical is added in increasing amounts, generally to standard diets, until a concentration producing an effect is reached. The dose immediately below that at which an effect has been observed is the 'no observable effect' level. The 100-fold safety factor and
the ADI have served regulators well. Whatever its deficiencies, the ADI appears to have protected the public from the toxic effects of added chemicals, although this supposition is based on faith rather than hard experimental data. Once set, intakes by the population can be monitored to check whether they are within the ADI. If they are not, conditions of use of the chemical may need to be altered. Should these same principles be applied to inherent chemicals in food? There are no regulations governing the necessity to gather toxicological information on these chemicals. Regulators may be driven to take action because of a perceived view that the toxicants in the cultivated varieties of plant foods that make up the normal Western diet may be causing the public harm. This view is backed up by the application of the same thinking that permeates regulatory toxicological assessments directed at added chemicals. Even if toxicologists believe that these chemicals in the Western diet cause harm, this view may not be shared by the public: the hazards from 'man-made' chemicals are perceived as being greater than those from naturally occurring chemicals (Ames et al., 1987). An exemplary argument that might be advanced by a toxicologist could run as follows. Solanine and chaconine are naturally occurring alkaloids in potatoes and are cholinesterase inhibitors that were widely introduced into the diet about 400 years ago. They can be detected in the blood of all potato eaters. They may be present in potatoes at 125 mg/kg (Ames, 1989), leading to intakes that lie only a six-fold safety margin below the safety level for humans. These chemicals have not been subject to the same rigorous testing as, say, a synthetic organophosphate cholinesterase inhibitor such as malathion, present in the diet at 17 µg/day. Despite the in-depth investigation of the latter compound, the public would probably rank malathion higher in terms of risk than the natural compound.
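The ADI arithmetic described above can be sketched in a few lines. The NOEL and intake figures below are hypothetical, chosen only to illustrate the calculation; they do not come from the text.

```python
# Sketch of the ADI concept: apply a 100-fold safety factor to a NOEL,
# then compare monitored intakes against the result. All numerical
# values here are hypothetical illustrations, not data from the text.

def adi(noel_mg_per_kg_bw, safety_factor=100):
    """ADI = NOEL / safety factor, in mg/kg bw/day."""
    return noel_mg_per_kg_bw / safety_factor

def within_adi(intake_mg_per_day, body_weight_kg, noel_mg_per_kg_bw):
    """Is a monitored daily intake within the ADI for this body weight?"""
    dose = intake_mg_per_day / body_weight_kg
    return dose <= adi(noel_mg_per_kg_bw)

# Hypothetical additive with a rodent NOEL of 50 mg/kg bw/day:
print(adi(50))                 # 0.5 mg/kg bw/day
print(within_adi(20, 60, 50))  # 20 mg/day for a 60 kg adult -> True
```

If monitored intakes exceed the ADI, as the text notes, the conditions of use of the chemical may need to be altered.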
Are the hazards of the non-nutrient chemicals that have formed part of the Western diet for thousands of years understated? Should the same procedures that are used on additives be applied to these chemicals? The application of the methodology faces some problems. First, in many cases there are insufficient toxicological data to set intake limits. This arises largely because there are no commercial pressures on industry to fund such work. To apply sensible intake limits to these chemicals would require an exhaustive analysis and identification of all of the individual potential toxins present, followed by an assessment of the toxicological properties of each in turn. This would be an overwhelming task. Second, the concentrations of chemicals in plants are determined by many factors, and it is not only the concentration of an individual chemical that is important but the concentration of that chemical relative to related species that may have the same action. Third, if limits are set, then there may be nutritional implications, especially if the food under investigation makes
up a substantial percentage of some people's diets. It is the thesis of this chapter that the additive approach is fundamentally flawed when applied to inherent non-nutrient chemicals in foods.

13.2 Non-nutrient chemicals under discussion

Before continuing, it may be worth defining the chemicals under discussion. This may best be done by defining those chemicals and foods that are excluded. Natural contaminants of food, such as aflatoxins, patulin and other mycotoxins, are excluded from the definition of an inherent chemical because they arise from fungal contamination. Also excluded are those categories in which the food has been processed in a way that concentrates the chemicals. These include tablets, infusions, tinctures, etc. Such formulations are really medicines and should be so classified and judged by the same criteria as all medicines. Other excluded groups are foods which contain active principles that can cause acute or subchronic toxic effects. Wild mushrooms provide an example of this kind of food; mushrooms that cause these effects are not normal cultivars. Various other foods which are known by the local population to contain toxic chemicals are prepared in ways which ameliorate or eliminate their toxicity. Red kidney beans exemplify this group. Foods that cause acute toxicity are not under discussion because they will be rapidly eliminated from the diet and are relatively easily dealt with through regulation and advice. The foods that are included are all the normal vegetable and fruit cultivars that make up the Western diet. These cultivars have been selected over many years and have been part of the Western diet for generations, and the concentrations of the various chemicals that they contain do not generate acute disease. As a result of recombinant technology, new variations of these old varieties are being produced. It is possible that, as a result, the non-nutrient composition will be different from that of the parent plant.
A whole canon of assessment has been developed to try to cope with these novel foods (Department of Health, Committee on Medical Aspects of Food Policy, 1993; Ministry of Agriculture, Fisheries and Food and Department of Health, 1994). Unless the composition and functions of non-nutrients in normal cultivars are known, it will be difficult to evaluate these novel foods against the traditional ones. There is thus an intersection between the needs underlying the present approach and the possible future needs of novel food assessment. Inherent toxicants are defined as chemicals which occur in higher plants as a result of biosynthesis, metabolism or some other natural process. They are found in a wide variety of different products. Some examples are glycoalkaloids in potatoes, furocoumarins in celery, hydrazine derivatives in mushrooms, and phytoestrogens in soya products. Further examples are given in Table 13.1.
Table 13.1 Thanksgiving dinner menu

Course                                    Chemical composition includes

Appetizer
  Cream of mushroom soup                  Hydrazines
  Fresh vegetable tray
    Carrots                               Carotatoxin, myristicin, isoflavones, nitrate
    Radishes                              Glucosinolates, nitrate
    Cherry tomatoes                       Hydrogen peroxide, nitrate, quercetin glycoside, tomatine
    Celery                                Nitrate, psoralens

Entree
  Roast turkey                            Heterocyclic amines, malonaldehyde
  Bread stuffing with onions, celery,     Benzo(a)pyrene, di- and trisulphides, ethyl carbamate,
    black pepper, mushrooms                 furan derivatives, hydrazines, psoralens, safrole
  Cranberry sauce                         Eugenol, furan derivatives

Choice of vegetable
  Lima beans                              Cyanogenetic glycosides
  Broccoli spears                         Allyl isothiocyanate, glucosinolates, goitrin, nitrate
  Baked potato                            Amylase inhibitors, arsenic, chaconine, isoflavones,
                                            nitrate, oxalic acid, solanine
  Sweet potato                            Cyanogenetic glycosides, furan derivatives, nitrate

Rolls                                     Amylase inhibitors, benzo(a)pyrene, ethyl carbamate,
                                            furan derivatives
Butter                                    Diacetyl

Dessert
  Pumpkin pie with cinnamon and nutmeg    Myristicin, nitrate, safrole
  Apple pie with cinnamon                 Acetaldehyde, isoflavones, phlorizin, quercetin
                                            glycosides, safrole

Beverages
  Coffee                                  Benzo(a)pyrene, caffeine, chlorogenic acid, hydrogen
                                            peroxide, methylglyoxal, tannins
  Tea                                     Benzo(a)pyrene, caffeine, quercetin glycosides, tannins
  Red wine                                Alcohol, ethyl carbamate, methylglyoxal, tannins, tyramine
  Water (available upon request)          Nitrate

Assorted nuts
  Mixed nuts                              Aflatoxins
American Council on Science and Health (1987) Mother Nature and Her Chemicals Join Us for Thanksgiving Dinner. ACSH, New York, NY. 8 pp. Reproduced by kind permission of Beier, R.C. (1990) and the publishers, Springer-Verlag.
13.3 A new approach
Vegetables and fruit are believed to be beneficial but the mechanisms whereby this wholesomeness is produced are only partially understood. Vegetables and fruits make up a significant proportion of many people's diets but in general are eaten as a part of a mixed diet unique to humans. At any one meal or over the day, chemicals that have been identified as
potentially toxic are ingested with an array of chemicals that have been identified as having protective functions. The resultant health effects of such a mixture are unknown. An amusing but pointed example of this mixture of chemicals taken at a meal is provided in the analysis of a Thanksgiving dinner menu (Table 13.1), where the array of potentially toxic substances is listed (Beier, 1990). Examining this formidable list may give cause for concern; however, we believe that studying chemicals in isolation from the 'whole diet' distorts and exaggerates the effects of those chemicals. In essence, the new approach stresses the consideration of the effects of whole diets rather than those of individual isolated chemicals. The approach is designed to give a better understanding of whole-diet biochemistry.
13.4 Factors affecting the action of chemicals in food

The effect of any food chemical on human biochemistry is only partly influenced by the quantity ingested. The effect is a product of many factors. Some of these distorting factors are discussed below.

13.4.1 Bioavailability

Not all of what is ingested appears in the circulation. This point is well understood by biologists and has been at the centre of drug design for many years. Recent work has assessed whether individuals consuming green potato tops as part of their regular diet are at risk from glycoalkaloid poisoning (Phillips et al., 1996). After identification of the variety of potato used and the method of preparation, samples of potato leaves and tubers were analysed for the glycoalkaloids α-chaconine and α-solanine. Extracts of leaves were tested in vitro and in vivo. The results showed that although these extracts were extremely toxic in vitro, animals treated with the extracts by gavage showed no ill-effects, even at very high doses, whereas extracts given intraperitoneally were highly toxic. These data support others showing that absorption from the gut following oral administration is low. This study highlights another important principle: unless the amount absorbed is known, in vitro studies have little relevance to the concept of whole food toxicology. However, even the experiments reported above deviate from the new approach in that the studies were carried out using isolated chemicals. The chemicals were administered in concentrated form, distorting the concentrations normally ingested, and any interactions that may have taken place when they were eaten mixed with a whole diet were eliminated. These effects could have been either enhancing or ameliorating.
13.4.2 Products entering the circulation

A study to determine the nature of teratogenic chemicals produced by the absorption of vitamin A has been reported (Buss et al., 1994). Products from vitamin A presented as a pure chemical, a supplement, were compared with those produced when the same concentration of vitamin A was fed as an inherent constituent of liver. There were substantial differences in the profiles of the transformed products entering the circulation. This study demonstrated another tenet of the new approach: the metabolism of chemicals given in isolation may not represent their metabolism when those chemicals are ingested as part of a whole diet.

13.4.3 Multiple functionality

Nitrate is present naturally in vegetables, and vegetables are recommended foods. However, regulatory bodies have long been concerned about the potential dangers of nitrate (Ministry of Agriculture, Fisheries and Food, 1992), and an ADI using the standard external dose concept has been set. Nitrite is more toxic than nitrate, and conversion of nitrate to nitrite occurs to a considerable extent in the gut. Nitrite is unstable in the gut and is reconverted to nitrate when it enters the circulation. Thus the external dose concept for evaluating nitrite toxicity is suspect (Zeilmaker et al., 1995). In addition, nitrate is evaluated on its toxicity when, in fact, evidence is accumulating that nitrate may also have protective effects (Bradbury and White, 1954). Whether different concentrations cause different effects, or whether a given concentration may produce either a toxic or a protective effect under different conditions, is not known. The situation exists where a large proportion of the population use vegetables as a major source of nutrition without evidence of ill-health. The food is recommended by nutritionists, while toxicologists stress the dangers based on toxicology obtained by the use of isolated chemicals, ignoring any other beneficial effects that nitrates may have. The message is clear.
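The ADI referred to here is conventionally derived from an animal no-observed-adverse-effect level (NOAEL) divided by a default safety factor, and surveillance estimates of intake are then checked against it, as described at the opening of this chapter. The sketch below illustrates that bookkeeping; the NOAEL and the intake figures are hypothetical placeholders, not values for nitrate or any real chemical.

```python
# Sketch of the conventional ADI derivation and intake check described
# in the text. All numbers are hypothetical placeholders.

NOAEL_MG_PER_KG_BW = 370.0   # assumed NOAEL from an animal study
SAFETY_FACTOR = 100.0        # conventional 10 x 10 inter-/intra-species factor

adi = NOAEL_MG_PER_KG_BW / SAFETY_FACTOR  # mg per kg body weight per day

# Hypothetical surveillance data: estimated daily intakes (mg/kg bw)
# for two population groups.
intakes = {"mean": 1.1, "97.5th percentile": 4.9}

for group, intake in intakes.items():
    status = "within ADI" if intake <= adi else "exceeds ADI"
    print(f"{group}: {intake} mg/kg bw/day ({status}, ADI = {adi})")
```

In this invented scenario the mean intake falls within the ADI but the high-percentile consumer exceeds it, which is the situation in which, as the text notes, conditions of use may need to be altered.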
The traditional way of assessing inherent chemicals in foods (identifying the component, isolating it, generating toxicological data, carrying out surveillance of the amounts eaten to estimate dietary intakes, and then assessing the health risk to the consumer) is impractical. More importantly, it fails dismally to take into account possible interactions in the gut, bioavailability, transformed products, the concentrations in the blood and the concentrations at the tissues.

13.5 The approach

The approach is in two phases (Figure 13.1). The form that we envisage phase I and phase II studies taking can be exemplified by studies on phytoestrogens. Phytoestrogens are constituents of many plants (Bradbury and
White, 1954). They are especially abundant in soya products, a source of food for many millions of people. Phytoestrogens have been shown to cause infertility in diverse animal species (Bennets et al., 1946; Shutt, 1976). The compounds have been isolated and characterized, and they are easily measured in human urine (Aldercreutz et al., 1991). Unfortunately, they do not fulfil all the criteria of phase I, in that a reliable technique for the measurement of blood concentrations is not yet available.
Figure 13.1 Whole-diet approach to investigate non-nutrient chemicals in food. [Flow diagram. Phase I: the entry points are a food identified as containing chemicals of interest, or reported effects on humans. These lead to composition studies and biomarker identification, development of methodologies to measure relevant biomarkers in blood, and clinical studies of bioavailability and biotransformations (identification of biotransformed products); non-bioavailable chemicals are of no further concern. Phase II: small-scale trials in an 'at risk' population, pilot studies to determine functionality with measurement of outcomes, and confirmatory studies after the pilot study. If no effect is measured, the investigation is re-considered or abandoned. If an effect is measured, it is followed up with in vitro studies on human cell/tissue systems and receptor studies; toxic effects lead to animal studies (testing in animals at appropriate doses, mindful of the caveats expressed, then back to human studies if necessary), while beneficial physiological effects lead to follow-up human studies.]
13.5.1 Phase I

These studies are concerned with the identification of the chemicals in human blood or other appropriate tissues after the chemical under study has been fed at concentrations relevant to those found in the usual diet.

Entry points. There are probably two main reasons why non-nutrient chemicals become candidates for investigation: either a toxic or beneficial effect in humans is reported, or a chemical of interest is found on analysis of a food.

Composition studies. These need to be done in order to determine the total exposure from the whole diet. This figure has little relevance to the true exposure but is obtained in order to provide a benchmark for normal cultivars.

Isolation studies. These studies are needed so that the lowest detection limits in blood can be determined. In addition, structural studies may allow the prediction of putative biotransformations that can be expected after ingestion.

Optimising detection limits. Since experimentation is ultimately going to be carried out in human volunteers, any methodology that can increase the sensitivity of detection should be pursued. Plants are relatively easily cultivated under controlled conditions and it should be possible to label the plant constituent chemicals with stable isotopes. The fate of stable isotopes can be followed and insight gained into pharmacodynamic and pharmacokinetic behaviour.

Preliminary studies in humans. We have stressed that the human diet is unique, and thus the mix of chemicals ingested and the consequent reactions are also unique. These studies need to encompass the feeding of the pure compound, the whole food, and the whole food incorporated into a mixed diet. The concentration of the pure compound fed should accurately reflect the probable exposure in the whole diet.
By comparing the behaviour of the isolated compound with that of both the whole food and a mixed diet, the influence on various important kinetic parameters can be measured, as well as the concentrations and types of biotransformation. If these are many then, just as in the development of a drug, it may be necessary to investigate each of the transformed products.

13.5.2 Phase II

These follow-up studies are designed to investigate possible effects of the chemical(s) under study. There is a tendency for some workers to use concentrations of food chemicals well above those found in the diet. These
studies are, in fact, studies for the development of medicines and are only tangentially related to the wholesomeness of food. We resist the tendency to 'medicinalize' foods.

Pilot studies on an at-risk population. Phytoestrogens have been mooted as being protective against breast cancer (Lee et al., 1991). Pilot experiments in the at-risk population, women, have demonstrated that at normal dietary concentrations these compounds may exert their protective effect by prolonging the follicular phase of the menstrual cycle (Cassidy et al., 1994). These studies could not have been performed in animals. It is important fully to define at-risk groups. In addition to effects on premenopausal women, the question of the effects of phytoestrogens in soya on men (male fertility) and on babies consuming soya-based infant formulae has been raised. Thus, data relating to all of these groups will need to be generated and evaluated.

In vitro follow-up studies. These can be used to reinforce the effects seen in human pilot studies and to obtain a detailed understanding of the biochemistry involved. They should be carried out on human-derived tissues. The concentrations used should be comparable to those found in phase I and used in phase II. The correct chemical derivative must be studied. In the example of phytoestrogens, appropriate studies would be receptor-binding studies with the oestrogen receptor, together with studies to ascertain whether the compounds are agonists, antagonists or partial agonists. All these results will add substantially to the clinical studies.

Animal studies. The use of animals needs to be carefully evaluated. If no effects are found in the pilot studies, then some researchers may feel it necessary to try to 'force' an effect by feeding large, inappropriate doses.
However, it must again be stressed that, if the bioavailability and biotransformed products are different from those found in humans, and if a pure compound is fed rather than being incorporated into a 'humanlike' whole diet, then the results from these studies must be treated with caution before being applied to the human situation. It may be best, if no effects in humans are found, to abandon the investigation and await the results of further appropriate research. If toxic effects are found by the human pilot studies, animals could be used for further studies, but these are attended by all the caveats already mentioned. We began our approach with the assumption that food is good for you and that the Western diet has evolved over thousands of years and is still evolving. During that period the cultivars of fruit and vegetables have been improved, and those that caused acute effects have been eliminated or controlled by custom and practice. To isolate any one chemical from the mixed human diet and to try and ascribe effects, without considering
the various factors discussed above, is obtuse. We are also of the opinion that the human diet is unique and that the proper study of the effects of non-nutrient chemicals can only take place in humans. There are supporting roles for animal studies but these are limited. In vitro studies on human tissues are appropriate when phase I of the protocol has been completed. Serious consideration should be given to abandoning investigations if detecting effects proves difficult in at-risk human populations. It is better to await further data than to embark on inappropriate experimentation. The views expressed in this chapter are those of the authors and should not be taken to represent those of the Ministry of Agriculture, Fisheries and Food (MAFF).
References

Aldercreutz, H., Honjo, H., Higashi, A. et al. (1991) Urinary excretion of lignans and isoflavonoid phytoestrogens in Japanese men and women consuming a traditional Japanese diet. American Journal of Clinical Nutrition, 54, 1093-1100.
Ames, B.N. (1989) What are the major carcinogens in the etiology of human cancer? In: DeVita, P.T., Hellman, S. and Rosenberg, S.A. (eds) Important Advances in Oncology. Lippincott, Philadelphia, pp. 237-247.
Ames, B.N., Magaw, R. and Gold, L.S. (1987) Ranking possible carcinogenic hazards. Science, 236 (4799), 271-280.
Beier, R.C. (1990) Natural pesticides and bioactive compounds in foods. In: Ware, G.W. (ed.) Reviews of Environmental Contamination and Toxicology, Vol. 113. Springer-Verlag, Berlin, pp. 47-138.
Bennets, H.W., Underwood, E.J. and Shier, F.L. (1946) A specific breeding problem of sheep on subterranean clover pasture in Western Australia. Australian Journal of Agricultural Research, 22, 131-138.
Bradbury, R.B. and White, D.E. (1954) Oestrogens and related substances in plants. Vitamins and Hormones, 12, 207-233.
Buss, N.E., Tembe, E.A., Prendergast, B.D. et al. (1994) The teratogenic metabolites of vitamin A in women following supplements and liver. Human and Experimental Toxicology, 13, 33-43.
Cassidy, A., Bingham, S. and Setchell, K. (1994) Biological effects of a diet of soya protein rich in isoflavones on the menstrual cycle of premenopausal women. American Journal of Clinical Nutrition, 60, 333-340.
Department of Health, Committee on Medical Aspects of Food Policy (1993) The Nutritional Assessment of Novel Foods and Processes. Report on Health and Social Subjects No. 44. HMSO, London.
Gey, K.F. (1994) The relationship of antioxidant status and risk of cancer and cardiovascular disease: a critical evaluation of observational data. In: Nohl, Esterbauer and Rice-Evans (eds) Free Radicals in the Environment, Medicine and Toxicology. Richelieu Press, London, pp. 191-219.
Graham, S., Dayal, H., Swanson, M. et al. (1978) Diet in the epidemiology of cancer of the colon and rectum. Journal of the National Cancer Institute, 61, 709-714.
Lee, H.P., Gourley, L., Duffy, S.W. et al. (1991) Dietary effects on breast cancer risk in Singapore. Lancet, 337, 1197-1200.
Ministry of Agriculture, Fisheries and Food (1992) Nitrate, Nitrite and N-nitroso Compounds in Food. Twentieth Report of the Steering Group on Chemical Aspects of Food Surveillance. HMSO, London.
Ministry of Agriculture, Fisheries and Food and Department of Health (1994) ACNFP Annual Report. MAFF Publications, London.
Phillips, B.J., Hughes, J.A., Phillips, J.C. et al. (1996) A study of the toxic hazard that might be associated with the consumption of green potato tops. Food and Chemical Toxicology, 34, 439-448.
Renwick, A.G. (1991) Safety factors and the establishment of acceptable daily intakes. Food Additives and Contaminants, 8, 135-150.
Shutt, D.A. (1976) The effects of plant oestrogens on animal reproduction. Endeavour, 35, 110-113.
Wattenberg, L. (1993) Chemoprevention of carcinogenesis by minor non-nutrient constituents of the diet. In: Parke, D.V., Ioannides, C. and Walker, R. (eds) Food, Nutrition and Chemical Toxicity. Smith-Gordon Nishimura, London, pp. 287-300.
Zeilmaker, M.J., van den Ham, W., Jansen, E.H.J.M. and Slob, W. (1995) Molecular modelling of the fate of nitrite in the blood: implications for the risk assessment of nitrate. Human and Experimental Toxicology, 14, 694.
Part Three Risk Management
14 The philosophy of food chemical risk management F.F. BUSTA and C.F. CHAISSON
14.1 Introduction - responsibilities and benefits
Over the past few years the food safety paradigm has changed and the script has been rewritten. The traditional concept of food safety assurance was one of the powerful regulators policing the food supply industry. Inspectors were lurking in every corner of the food chain, ready to detect contaminated or compromised products and remove them from the grocery shelves to protect the public. The public played no active role in this process, except as the silent, uninformed, potential victim. That script exists only in history now, starting with the role of government. The 'super cop' mentality of regulatory strategy has proven to be ineffective and terribly expensive. Limits on government resources and increasing responsibilities of the regulatory scientists and risk managers have forced them to examine the fundamental objectives and mechanics of their mission. They now aim to set the standards for food safety and define the programs which they expect industry to employ to meet those standards. They then gather information about the performance of those programs and focus monitoring activities towards areas of presumed problems. Regulators must build a partnership with industry to address the myriad of new technologies and safety issues which emerge every year. They must focus the resources of government and of industry towards the proper research, appropriate regulatory priorities and reasoned risk management strategies. The role of the industry has also changed dramatically. It is now becoming the entity with the greatest overall stake in food safety, at all levels. Expectations of the food industry have changed. Today's consumers expect food which is beneficial to them, entertains them, and is easy, inexpensive and even health-promoting. This goes far beyond the expectation that it is simply 'safe'. The food industry has a lot to lose if its products are not 'safe'. It loses its reputation - the name of the producer is wrapped around the product. 
It loses its credibility with the regulator - a move that can cost it dearly in future regulatory attitudes. It loses its profits in litigation over the safety problem and in the commercial costs of lost contracts and sales to the wary consumer. Food crises can unleash a cascade of problems which
can bring down even the mightiest of companies if the company does not lead the technical evolution of better and better food safety initiatives. And who better to do this? Food producers - not regulators or consumers - know the processes best and the steps which most efficiently minimize the potential for problems. The food industry understands the technology which promotes or prevents food safety problems. The public has changed also. Today's consumers are far more demanding of the food industry and its products. Consumers bring tremendous pressure to bear on the regulator and the industry directly, through their buying attitudes, and indirectly via the media and their votes. They are often vocal, although frequently inconsistent. They demand more safety, more variety, more convenience and less cost. Today's consumers are also less likely to participate in the mechanics of food safety. They know less and less about food hygiene, home food preparation and storage practices, or the perils of food-borne illness. They accept less and less responsibility for their own actions. Gone are the traditional teachers of the public, to be replaced by an industry which is expected to prevent food safety problems, even in the home. In some regulatory regimes consumers are willing and able to embark on litigation in circumstances previously unheard of. 14.2 A new game on a different playing field Food safety is a business which must be practiced globally. The food industry is a long chain of interdependent industries, beginning with the grower who uses pest management products, the shippers who use packaging materials and pest management products, the processors, packagers, transporters, distributors, retailers and all of the makers of food additives, packaging materials, inks and labels.
For each of these industries there are banks who make loans, guarantee credit and take risks on the promise that the product will be successfully delivered to the customer and the products will be purchased. If the food is unsafe, or even thought to be unsafe, all of these businesses are in trouble. It is not surprising, then, to note that food safety is a responsibility - a business responsibility - of each of these entities. They must often do more than merely satisfy the regulator. They must meet the standards imposed by one part of the industry (the purchaser) on other parts of the industry (the suppliers). These are the most powerful of all regulatory forces, since failure to comply will result in immediate loss of business. Purchasers may impose standards more stringent than the government regulator and be less merciful when standards are not met. In many countries it is often food retailers who are taking on this policing role, since it is they who are inevitably in the front line if things go wrong. Some
major food retailers now insist on absolute control from the farm right through to the supermarket shelf. Government or business food safety standards are something like 'beauty'. They exist in the eye of the beholder, which in this case comprises the governments and businesses in many different countries where food is produced and sold. Each country may have its own idea of what is 'safe enough', how safety should be tested and how the evidence must be produced and presented. Failure by producers to accomplish this can result in a greatly diminished market. The old financial barriers to trade are being increasingly replaced with technical barriers - food safety concepts. Those industries which are able to successfully understand the emerging international attitudes about risk and safety will be positioned best to satisfy the regulator, the buyer and the consumer.
14.3 The emerging role of the risk manager The effort of describing and quantifying risk is undertaken to help form attitudes about how to deal with that risk. It is the role of the risk manager to focus the effort towards those questions which society deems most relevant and important. Fundamentally, the question is: 'Who is at what level of risk and under what conditions?' The details of this question influence the very beginning of the risk assessment process - data gathering and study design - and color the entire assessment process. For example, which subpopulation of the community should be the focus of concern? Should it be the most susceptible (though potentially rare) individual? Should it be a particular geopolitical region? How much will consumer habits or differences in eating patterns influence the effort of describing risk? Will we consider the 'average situation' or the extreme circumstance? What kinds of products will constitute the object of our efforts? Do we examine only products produced synthetically by private manufacturers or do we include products generated in nature, but not necessarily benign? The risk manager must also understand the differences between 'real' risk, 'regulatory' risk and 'credibility' or 'perceived' risk. Each has its own legitimate audience, and in some cases not all can be satisfied simultaneously. A real risk is one where there is tangible scientific evidence of potential harm being done. Regulatory risk occurs when a product is in violation of some standard or compliance parameter, but no real risk is necessarily anticipated because of this violation. Nonetheless, such violations have real business consequences and may destabilize the credibility of the product in the eye of the consumer. This is a credibility risk based on perception, not real risk. The options available to the risk manager include risk removal or
risk shifting. There are always consequences of any change and those consequences may not be the intended or desired ones. To remove a chemical or pharmaceutical from the market because of a risk (real? regulatory? perceived?) will create a vacuum to be filled by competitive products, each with its own risk profile (real? regulatory? perceived?). Risks can also shift from one population subgroup to another. The use of some chemicals or procedures may be more risky to the worker than to the consumer. If a shift of risk towards the worker is the consequence of an action taken to protect the consumer (who may be politically vocal), the result may appear to be risk removal, whereas the total real risk to society is increased. Shifting risk scenarios towards a source of responsibility is another option. For example, a professional prescription may be required for some situations where careful administration of a chemical is needed to avoid risky situations. This option may be socially unacceptable when the control point is the general public. Asking the public to practice good kitchen hygiene in the handling of meat products may be an option for mitigation of microbiological risks, but members of the public may not want to bear the responsibility of being careful. They may prefer a risk management decision focused on the food producer and over which they have little control, but of which they have great expectations.
14.4 A glimpse into the deliberations of the risk manager
All of these philosophical considerations may be interesting, but it is useful to contemplate the many crossroads encountered by the risk manager on the route to a real decision about food safety. First, there is the assumption that someone has done or could do a scientifically sound quantitative risk assessment. There must be a database selected from some source of information which is truly relevant to the question at hand. The risk manager must answer the question: 'Is there a real risk, a regulatory risk or a perception risk?' Some philosophies must be expressed to guide the process. Is every food considered to have some level of risk associated with it? Risk management should include an unbiased consideration of the risks introduced by natural chemicals (including microbial toxins) as well as those presented by synthetic products. What are the features and magnitudes of health threats from each contributing chemical? Are these competing risks or compiled risks? What is the relevant contribution of each and which risks can be mitigated? Expectations around the acceptable risk associated with the production or consumption of food transcend the scientist. These are social issues to be translated by the risk manager to the scientists who select and use the data in the assessment.
If the issue is one of risk perception, then education is necessary for the risk manager; this is too often absent. Focused education can be applied to suppliers and producers of the food or ingredients, to the workers in the system, to the retailers and consumers, to the legislators, media and advocates, to the regulatory scientists and to other risk managers. The consequence of good education is often a more constructive focus of resources towards areas which have real impacts on public health, and this often actually reduces the risk at issue. Finally, in an efficient risk management system, the risk manager validates the process. He or she can develop a way to assess the success of the process in obtaining relevant data, assessing the information, proposing viable options and the consequences of those options, and monitoring the impact of the regulatory decision on public health. There must be a procedure in place to detect an apparently correct decision which has had unintended consequences or did not have the intended consequence. For example, if the risk of birth defects associated with vitamin A deficiency is to be avoided by supplementation of selected foods with vitamin A, can we quickly detect any possible effects of oversupplementation? It is known that vitamin A overdosing also has adverse effects. Can the system detect this possibility and correct such a problem?
14.5
Applying the philosophy - using the tools
The business of knowing about the 'art' and science of food safety is the business of us all. The responsibility to lead in this field lies with us all, including industry. There are many case studies from which to learn, and many more will be written in the future. Each can illustrate one of the principles of the new framework of food safety technology, assessment, regulation, business and public acceptance. The following chapters of this book describe several of the newer tools now available to risk managers. However, these tools can only be applied successfully if the overall philosophy of food chemical risk management is fully understood.
15 Consumer perceptions A.C.D. HAYWARD
15.1
Introduction
Food risks form a diverse and considerable spectrum. Potentially hazardous substances such as preservatives, colourings and insecticides are intentionally applied to food products because they provide offsetting benefits. Veterinary medicines and migrants from food packaging can also be sources of risk. Other food hazards occur as accidental and unwanted contamination; for example, chemicals such as dioxins, lead and mercury sometimes enter the food chain from the surrounding air, soil and water. Microbiological contamination, e.g. Salmonella and Listeria, can arise through poor handling and hygiene practices. Food itself is a source of risk: aflatoxins (produced by moulds on nuts and grains) and glycoalkaloids (present in green potatoes) are two examples of the many naturally occurring sources of possibly carcinogenic harm; equally, consuming an unbalanced diet can have lethal results. This cornucopia of potential detriments to our health and well-being must be appropriately managed with finite resources. Difficult decisions have to be made about the levels of food safety that are required, and at what cost. Which benefits are most highly prized, and, importantly for this chapter, which risks are of greatest concern to the public, and why? Exploring and understanding the risk perceptions of members of the public is crucial to the success of risk communication efforts. Public concern can only be appropriately addressed if the nature of, and reason for, that concern is adequately understood. Public anxiety surrounding the safety and quality of the food supply is not a new phenomenon. In the 19th century, intentional adulteration of food was a major problem. Leaves from trees such as ash, oak and willow were added to tea to increase bulk; hams were brushed with borax and creosote to make them appear well smoked; milk was watered down; alum was added to make poor-quality bread appear whiter (Collins, 1993).
The trade in rotten and diseased food, especially meat, was a cause of widespread disease. A regulatory framework able to investigate complaints by members of the public developed from the mid-1800s. In Britain, early food safety regulations were introduced under public health legislation; for example, controls for the use of additives were brought in under the
Public Health (Regulations as to Food) Act 1907 (Jukes, 1993). However, improvements in food safety and food quality were influenced as much by economic considerations (such as the shift in emphasis within the food industry from price to quality) as they were by scientific and legislative factors (Collins, 1993). In recent times, public concern about food and the hazards associated with it has continued. During the last 10 years, food scares have occurred on both sides of the Atlantic in relation to specific issues like alar in apples, cyanide in grapes, Salmonella in eggs, and bovine spongiform encephalopathy (BSE) in cattle. These have contributed to an overall high level of general concern. Findings from the USA indicate that a majority of the American public is concerned about the safety of the US food supply and that such concern is increasing (Lynch and Lin, 1994; Brewer et al., 1993; Schafer et al., 1993). The generally high level and incidence of concern about food-related risks is consistent with Beck's characterization of modern times as the 'risk society' (Beck, 1992). Some geographical and cultural differences in levels of concern have been detected. One international comparative study revealed that Japanese respondents were more concerned about the safety of food than their counterparts in the USA (Jussaume and Judson, 1992). A German comparison found that a sample of individuals from the former West Germany emphasized food hazards more than those from East Germany: to a question asking about special dangers to health, 33% of West Germans (n = 2000) answered food and beverages, compared with only 10% of East German respondents (n = 500) (Oltersdorf, 1995). Results from Scandinavia show only moderate concern about food-related health risks in Sweden, but a majority of the Norwegian population was reported to be concerned in this respect (Sjoden, 1990; Wandel, 1994). Establishing that concern exists is relatively simple.
The challenge comes when deciding what ought to be done to address it. It would be straightforward if a consensus existed about which food-related risks should be most highly regulated, which should be banned, and which should be less prescriptively managed. Finding out how people perceive the risks associated with food can assist decision-makers in deciding which hazards should be managed and in what order of priority. Studies have been undertaken which focus on these issues, asking people to consider a series of food risks and rank them in terms of their relative seriousness or concern. Results of this type of research are discussed in section 15.2. Such work has highlighted disparities in the perception of risks between different groups, most often between experts and lay people. Reasons why this disparity occurs are discussed in section 15.3. Differences in opinion lead to debate. Section 15.4 looks at the debate surrounding the acceptability of food risks in terms of a broad policy context in which issues of trust and justice take on paramount importance. In sum, this
chapter focuses on the way in which members of the lay public judge food-related risks, and draws on risk perception theory and the results of risk perception research to provide insight into the possible basis for their concerns.
15.2
Ranking the risks
Researchers in the field of risk have noticed a discrepancy between the concerns of the public and those of scientists in relation to various substances and technologies; the most obvious case is that of nuclear power, but food risks are no exception. Some substances pose risks which are judged by the relevant experts to be negligibly small, but which are the focus of considerable public attention and anxiety. Equally, risks which experts believe to be particularly high, such as those from smoking or alcohol or saccharin intake, often cause little or no public concern. Official assessments of food risks focus on statistical estimates of morbidity and mortality. Various experts, including toxicologists, chemists, biologists, epidemiologists, physiologists and nutritionists, are employed to assess the extent to which food chemicals, manufacturing processes and dietary choices pose a threat to human health. Risks from food chemicals are expertly assessed through a process of hazard identification (to ascertain the type of harm involved, such as whether a chemical could be carcinogenic or mutagenic), dose-response evaluation (when the toxicity of the chemical at a range of human exposure doses is determined), and exposure assessment (who is exposed, to what dose, and for how long). In the UK, the regulating bodies responsible for the safety and quality of food are the Department of Health (DoH) and the Ministry of Agriculture, Fisheries and Food (MAFF). The MAFF believes that for UK consumers, 'the main risk to us from food is through eating too much of it' (Ministry of Agriculture, Fisheries and Food, 1994, p. 1). Overnutrition is followed in the MAFF's ranking by microbiological contamination, natural toxicants, environmental contaminants, pesticide residues, packaging migrants, food and feed additives, novel foods and products of novel processes, and finally sporadic risks such as industrial accidents or spillages (Ministry of Agriculture, Fisheries and Food, 1990). 
This ranking has no 'scientifically rigorous basis' but is based upon 'the best informed view of their approximate relative importance' (Ministry of Agriculture, Fisheries and Food, 1990, p. 3). Thus the official view is that the greatest risk to the UK public from its diet is not pesticides or additives but is in fact health problems associated with poor nutrition, such as through the overconsumption of saturated fats. Less recently, Lee (1989) and Hall (1971) reported rankings of food hazards from the USA. Hall's expert ranking is based on that suggested
by a member of the US Food and Drug Administration. (Lee gives no derivation or attribution of his expert ranking.) Both sets of expert priorities are almost identical, despite the passage of 18 years: microbiological hazards, followed by nutritional problems, environmental contaminants (Lee simply had 'contaminants'), natural toxicants, pesticide residues (Lee used the term 'agricultural chemicals') and food additives. It appears that the expert perception is that two issues (poor diet and microbiological food poisoning) present the greatest food hazards faced by consumers in both the USA and the UK. In order to give some idea of scale, a few official figures relating to these two issues will be reported here. The issues differ in their health outcomes, as one tends to cause long-term problems whereas the other has acute effects. Poor diet can cause chronic health problems. The DoH has responsibility for nutrition-related aspects of health in the UK and also for food hygiene. It reports on the health of the nation in relation to coronary heart disease in Britain. In 1990, death rates for males from coronary heart disease were 296 (England), 334 (Wales), 363 (Scotland) and 372 (Northern Ireland) per 100 000 of the population (female mortality rates were approximately half) (Department of Health, 1993). The difficulty with coronary heart disease and other chronic conditions such as cancer is that they can be caused by a multiplicity of factors, some of which may be diet-related and others which are not. Therefore, it is impossible to attribute the incidence of cases of heart attacks, say, to particular dietary causes. In contrast, the health outcomes of food poisoning are acute and measurable, as causal agents can usually be identified. Every year in the USA, there are about 2 000 000 cases each of Salmonella and Campylobacter, approximately 10 000 cases of Escherichia coli, and 1500 cases of Listeria monocytogenes (Aldrich, 1994). 
The Communicable Diseases Surveillance Centre of the Public Health Laboratory Service records reported cases of food poisoning in England and Wales. Campylobacter is the most commonly reported cause of acute gastrointestinal infection in the UK, followed by Salmonella (Advisory Committee on the Microbiological Safety of Food, 1993). Total food poisonings notified in England and Wales rose from 63 347 in 1992 to 82 095 in 1994 (Communicable Disease Surveillance Centre, personal communication). Several studies have been undertaken which allow comparison of expert and lay risk rankings. Trends over the years are interesting to note. In 1971, Hall reported public priorities as being in the following order: food additives, pesticide residues, environmental contaminants, nutritional problems, microbiological hazards, and natural toxicants. Lee (1989) reported that the public were most concerned about pesticides, followed by new food chemicals, additives, fat and cholesterol, microbial spoilage and junk foods. More recently, Frewer and her colleagues have investigated consumer perceptions of food risks in the UK. Using a postal
questionnaire (n = 186), they asked respondents to rate the risks of several food hazards on a scale from 'none at all' to 'a great deal' (Frewer et al., 1994). The questions were repeated using different 'risk targets', including personal risk (risk to oneself) and risk to society. The greatest risks to society were perceived to be from alcohol abuse and a high-fat diet, followed by pesticides. The food risk to which respondents felt most personally exposed was consuming a high-fat diet, followed by pesticides, and then by food poisoning when the food had been prepared by others. Except for the high placing of pesticides, these results are in line with the expert priorities. Another study investigated food risk perceptions using focus groups comprising members of the British public (Simpson, 1995). Participants in the study (n = 64) were less concerned about the risks of an unbalanced diet than they were about the risks from pesticides, environmental contamination of food, animal feed additives and veterinary drugs, microbiological contamination, and novel foods and processes. Similar results were found by Sparks and Shepherd (1994), who surveyed 216 people in the UK. They reported greater concern surrounding the issues of environmental contamination, pesticide residues, BSE and microbiological contamination, than that related to food additives, nutritional deficiencies, alcohol and caffeine. In Germany, a nationwide survey (n = 2500) asked respondents to select, from a list presented to them, those food-related risks which posed 'quite a risk' to German society (Oltersdorf, 1995). Former West German respondents most often chose pesticide residues, followed by microbiological contaminants (described as 'spoilt food'), mycotoxins, veterinary drugs and their residues (including hormones), and irradiated food.
Former East German respondents most often reported 'spoilt food', followed by mycotoxins, pesticide residues and food additives, as presenting the greatest risks. Genetically modified food was selected the least often by both subgroups, although younger respondents were more concerned about these novel foods than the older members of the sample. Buzby and Skees (1994) report a study undertaken at the University of Kentucky (n = 1671) which asked consumers for their main food safety concerns. The top concerns were fats and cholesterol (34% of respondents), bacterial food poisoning (30%) and, a distant third, pesticide residues (18%). Preservatives and additives, salt, hormones and antibiotics, and sugar were each chosen by only 6% or less of respondents. Interestingly, the top two concerns of these consumers match the current scientific and governmental view that an unbalanced diet and food poisoning are the most serious food-related health issues. From a list of four food issues, the US Department of Agriculture's Agricultural Research Service found that the most important food safety concern was bacteria and parasites in food (approximately half of almost 2000 respondents chose this as their most important concern), followed
by pesticide residues (chosen by 23% of respondents), residues of drugs given to animals (12%), and food additives (selected as the most important concern by only 3% of the sample) (Lynch and Lin, 1994). Again, the high position given to microbial contamination is in line with expert perception of this issue as a serious food safety problem. Fats and nutrition concerns were not included in the questionnaire. Schafer et al. (1993) examined consumer concern about food safety in the midwestern USA using a postal questionnaire (n = 630). They asked respondents to estimate the likelihood (small, medium, large) of personally experiencing harmful effects from a list of seven food issues. The order of the issues perceived as most likely to cause personal harm was as follows (beginning with the issue most often selected): food additives (35% of respondents), residues of agricultural chemicals, growth hormone residues in meat, antibiotic residues in meat, growth hormone residues in milk, bacteria or viruses in food, and naturally occurring toxicants (10% of respondents). Again, dietary imbalances were not included in the survey. Results of these studies are summarized in Table 15.1. Such studies show that both members of the public and scientific experts appear to agree that diet and microbiological hazards are significant causes of food-related illness. Perhaps health promotion campaigns and media attention have helped to bring the risks associated with fats, cholesterol and being overweight to the forefront of people's minds. In this respect, the priorities of the public and the experts seem not too dissimilar. However, lay people also tend to emphasize their concerns with other food issues such as pesticides, veterinary drugs and intentionally added chemicals. One possible change in recent years has been a decrease in relative concern over food additives.
Labelling requirements have given people increased control over their exposure to certain additives, which has perhaps tempered concern over this particular food issue. Opinion varies about how the results of risk perception surveys should be interpreted. One's opinion of the risk perceptions of others may depend on one's view of others and one's view of risk. Depending on one's philosophical and epistemological outlook, lay people's judgements are received with a spectrum of reaction ranging from derision ('the public are clearly irrational'), benevolence ('if only they understood the numbers then they would think like we do'), puzzlement ('surely they cannot really be more worried about the risks of pesticides than those of alcohol abuse?'), to credence ('lack of scientific expertise is no reason to dismiss lay judgements: such opinions are legitimate and worthy of consideration on equal terms'). Institutional context is also a determinant. For example, Hall, a member of the food industry, provides his own 'admittedly subjective review' (Hall, 1971, p. 456) of food risk priorities, and seems to believe that, since in his opinion the majority of people express little or no concern
Table 15.1 Rankings of public concern and risk perceptions (items listed in descending order of concern)

Hall (1971) - USA: food additives; pesticide residues; environmental contaminants; nutritional problems; microbiological hazards; natural toxicants.
Lee (1989) - USA: pesticides; new food chemicals; additives; fats and cholesterol; microbial spoilage; junk foods.
Schafer et al. (1993) - USA (a): food additives; agricultural chemical residues; hormone and antibiotic residues in meat; hormone residues in milk; bacteria or viruses in food; naturally occurring toxicants.
Buzby and Skees (1994) - USA: fats and cholesterol; bacterial food poisoning; pesticide residues; preservatives and additives; salt content; hormones and antibiotics; sugar.
Lynch and Lin (1994) - USA: bacteria and parasites in food; pesticide residues; residues of drugs given to animals; food additives.
Sparks and Shepherd (1994) - UK: environmental contamination; pesticide residues; BSE; microbiological contamination; food additives; nutritional deficiencies; alcohol; caffeine.
Frewer et al. (1994) - UK (b): alcohol abuse; high-fat diet; pesticides; food poisoning (from food prepared by others).
Simpson (1995) - UK: pesticides; environmental contamination of food; animal feed additives, veterinary drugs and their residues; microbiological contamination; novel foods and processes; genetic manipulation of animals.
Oltersdorf (1995) - former FRG: pesticide residues; spoilt food; mycotoxins; veterinary drug residues (including hormones); irradiated food.
Oltersdorf (1995) - former GDR: spoilt food; mycotoxins; pesticide residues; food additives.

(a) Respondents rated likelihood of personal harm. (b) Respondents rated risk to society.
about food risks, those who do speak out must represent the 'fringe hysteria' (Hall, 1971, p. 457) rather than mainstream public attitudes. Forsythe, an academic supported by the food industry, believes that the public perception of risk 'is influenced by imagination, dramatization, and memorability, not food safety reality' (Forsythe, 1993, p. 1155) and wonders what 'needs to be done to change perception into reality?' (Forsythe, 1993, p. 1155). Lee, an academic, puts it differently: 'the public seems to have . . . much more concern about new food chemicals and pesticides than hazards which cause injury and death' (Lee, 1989, p. 62). This might mean that the public does not fully appreciate the statistical incidence of illness associated with food risks, or may be concerned about issues like pesticides for reasons other than their predicted mortality and morbidity. But what could these reasons be? Studies in the USA during the late 1970s addressed this question by investigating the discrepancy between the concerns of experts and lay people using an expressed preference approach. Different groups of people (n ≈ 100 for each group) were asked to (individually) rank a list of items (activities, substances and technologies) according to perceived risk of dying as a consequence of the item (Slovic et al., 1979). It was found that, in general, all of the groups of lay people ranked the relative riskiness of the items in a similar way, but in a way which was dissimilar to the perceptions of the expert group. Slovic and his colleagues wondered what caused this expert/lay difference. The experts' judgements of perceived risk matched the statistical estimate of annual fatalities from the literature. Were members of the public simply estimating their numbers incorrectly, or did they have something else in mind when they judged the risks?
To investigate whether the public was basing its perceptions on erroneous fatality estimates, they asked the lay respondents to estimate how many people were likely to die in the USA in an average year as a result of each item. If the public really equated risk with annual fatalities, its mortality estimates (even if inaccurate) should be similar to its risk judgements. Comparison of these fatality estimates with the estimates of perceived risk led the researchers to reject this hypothesis. The ordering of the items in terms of fatality estimates did not correspond to the risk ranking. This result implied that lay people incorporate considerations other than annual fatalities into their concept of risk. Although this conclusion was based on the findings of studies with small samples, this result was an important one in the history of risk perception research. It indicated that the term 'risk' may mean different things to different people. In particular, the public may have a richer definition of risk than experts. Perhaps members of the public use different criteria when judging risks than do experts. The foundation had been laid for more in-depth risk perception studies. These are the subject of the next section.
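The comparison that Slovic and colleagues made can be sketched numerically. The following is only an illustration, not their analysis: all item names, fatality estimates and ratings are invented, and a simple Spearman rank correlation stands in for their statistical treatment. If lay respondents equated risk with annual deaths, the two rankings should correlate strongly.

```python
# Hypothetical sketch of the Slovic et al. comparison: if lay respondents
# equated 'risk' with annual deaths, the rank order of their fatality
# estimates should match the rank order of their perceived-risk ratings.
# All item names and numbers below are invented for illustration.

def ranks(values):
    """1-based ranks, largest value ranked 1 (no ties assumed)."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation for tie-free data."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

items = ['pesticides', 'food additives', 'alcohol', 'smoking', 'food poisoning']
fatality_estimates = [100, 50, 90_000, 150_000, 9_000]  # invented lay estimates
perceived_risk = [5.1, 4.6, 2.9, 3.4, 3.8]              # invented 1-7 ratings

# A correlation far from +1 suggests that perceived risk tracks something
# other than expected deaths -- the conclusion Slovic et al. reached.
print(spearman(fatality_estimates, perceived_risk))
```

With these invented numbers the correlation is strongly negative, caricaturing the finding that fatality estimates and risk judgements need not align.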
15.3

Theories of risk perception

It is not enough to know that food risks cause concern. In order to improve food risk management decision-making, it is necessary to probe deeper in order to find out what underlies these concerns about food. Clarification should be of benefit to the public, the industry and the regulators alike: an understanding of the nature of consumer concerns makes it more likely that these concerns can be addressed by food risk managers; the food and agricultural industries could benefit, as they would be able to gauge more easily the likely success or otherwise of products and processes; the regulators would be likely to welcome such information in order to facilitate decisions about safety investment and risk management. Asking why some risks but not others are of particular concern leads to an exploration of what is meant by 'risk' and 'risk perception' itself. 'Risk' is a somewhat abstract concept. It can mean different things to different people. Perhaps the most widely used definition of the term is that risk is the probability of a particular adverse event occurring during a stated period of time (Royal Society, 1992). The quantitative assessment of food-related risk has been described elsewhere in this book and is essentially based on the two dimensions of probability and (adverse) consequence. This restrictive definition may provide a useful yardstick for engineers who wish to compare, say, failure rates of thermostatic valves in chiller cabinets, but it is not rich enough to encompass the use of the concept of risk employed by the public. The term 'expert' is used within this chapter as referring to someone working within their field of specialization. Scientific qualifications do not make people 'experts' on moral and ethical questions, or on problems outside their area of expertise. There is a tendency to say that experts 'assess' risk whereas the public merely 'perceives' it.
This terminology carries with it an implication of right and wrong. The title of Forsythe's paper, 'Risk: reality versus perception' (Forsythe, 1993), is in itself indicative of one view of risk. Whether or not 'real risks' exist 'out there' is the central focus of a debate about whether risk is an objective or subjective phenomenon. The Royal Society defines risk perception as involving 'people's beliefs, attitudes, judgements and feelings, as well as the wider social or cultural values and dispositions that people adopt, towards hazards and their benefits' (Royal Society, 1992, p. 89). This broad scope is echoed by Fischhoff: 'All that anyone does know about risks can be classified as perceptions' (Fischhoff, 1989, p. 270). In short: What is clear is that risk perception cannot be reduced to a single subjective correlate of a particular mathematical model of risk, such as the product of probabilities and consequences, because this imposes unduly restrictive assumptions about what is an essentially human and social phenomenon. (Royal Society, 1992, p. 89)
The varying significance attributed to different types of food risks by different people and the contested nature of risk itself lead to the conclusion that risk is a social construct. Researchers across the social sciences have attempted to explain why people perceive risks in different ways, mostly from within disciplinary perspectives. Psychologists have studied risks and have identified certain attributes of risks which seem to be of particular concern to lay people. In contrast, sociologists and anthropologists have focused on the characteristics of the perceiver of the risk in order to explain risk perception. Decision theorists have attempted to describe the way in which people ought to react to risk. That is, they have developed normative rules for making choices under risk in order to maximize expected returns. The initial guiding question for many risk studies was 'how safe is safe enough?'. Starr (1969) estimated the benefits (in dollars) and the risks (in deaths per hour of exposure) of a number of hazardous items (including natural disasters, smoking and aviation) using a revealed preference approach (in which the status quo is examined for patterns and traits). Examining the balance between risks and benefits, he concluded that the public was more willing to accept risks which were voluntarily imposed than those which were assumed involuntarily. Building on the results of revealed preference studies can be problematic, as one is forced to assume that the present arrangements in society which are measured in such studies are optimal, or at least reflective of current priorities, and as such provide the basis for future standards. In terms of risk, the assumption underlying this approach is that management regimes, currently tolerated levels of safety and approved products are acceptable to society. 
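Starr's revealed-preference metric can be made concrete with a toy calculation. Everything below is invented for illustration; Starr's actual analysis drew on historical accident and economic statistics and was far more involved. The point is only the unit of comparison: deaths per person-hour of exposure, which puts voluntary and involuntary activities on one scale.

```python
# Toy version of Starr's revealed-preference risk metric: fatalities per
# person-hour of exposure. All figures below are invented; Starr (1969)
# derived his numbers from historical accident and economic statistics.

def deaths_per_person_hour(annual_deaths, people_exposed, hours_each):
    """Annual fatalities divided by total person-hours of exposure."""
    return annual_deaths / (people_exposed * hours_each)

voluntary = deaths_per_person_hour(       # e.g. a voluntary pastime
    annual_deaths=1_000, people_exposed=2_000_000, hours_each=100)
involuntary = deaths_per_person_hour(     # e.g. an imposed technology
    annual_deaths=100, people_exposed=50_000_000, hours_each=500)

# Starr's qualitative finding: the tolerated voluntary risk can exceed
# the tolerated involuntary risk by orders of magnitude.
print(voluntary / involuntary)
```

With these invented inputs the voluntary activity is tolerated at a per-hour risk three orders of magnitude above the involuntary one, mimicking the pattern Starr reported.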
This may be an erroneous assumption as, when asked, members of the public may report that the existing state of affairs is not that which they would ideally prefer. For example, Simpson (1994) found that food additives were generally disliked, yet were judged to be present in so many foods nowadays that to avoid them completely was unrealistic. In addition, revealed preference analyses tend to ignore questions of distributional and intergenerational equity: who faces the risks, who gets the benefits and when. Subsequent risk perception work tended to abandon revealed preferences in favour of an expressed preference approach, which involves actually asking people about the risks they are prepared to tolerate.

15.3.1
The psychometric paradigm
Psychologists set about trying to measure public risk perceptions. A measurement paradigm for attitudes towards risk was developed by Slovic, Fischhoff, Lichtenstein and their colleagues at Decision Research in Oregon, USA, during the late 1970s and early 1980s. It has become known as the psychometric paradigm of risk, and has produced an extremely influential set of results.
Slovic et al. asked members of the public to characterize hazards by rating them on a series of qualities or attributes, hypothesized to influence risk perception. These factors included those suggested by Starr (1969) and Lowrance (1976). The studies used a list of up to 18 risk characteristics, including: dread; threatens future generations; globally catastrophic; certain to be fatal; risk increasing; affects me personally; risks and benefits inequitable; not easily reduced; severity not controllable; little preventive control; catastrophic; involuntary; many people exposed; new/unfamiliar; effects immediate; not known to those exposed; not observable; and not known to science (Slovic et al., 1985). Up to 90 hazards, including food-related items, were rated in terms of perceived risk and also in terms of the risk characteristics. Interrelationships between the characteristics were examined using factor analysis. It was found that the 18 dimensions factored into just three. The ensuing factor space diagram and its taxonomy of hazards have become a classic result in the risk literature (Figure 15.1). The horizontal axis, Factor I, was labelled 'dread' (Slovic et al., 1980; Slovic, 1987). The high end of this axis is associated with lack of control, lethality, high catastrophic potential, reactions of dread, inequitable distribution of risks and benefits, and increasing risks. Factor II, which has been labelled both 'unknown' (Slovic et al., 1985) and 'familiarity' (Slovic et al., 1980), contains other characteristics which correlated relatively highly with each other and less highly with the other factors (observability, knowledge to those exposed, immediate effects, unfamiliarity, and knowledge to science). Factor III, not shown in Figure 15.1, contains the remaining characteristic, number of people exposed, which was relatively independent of the other characteristics (Slovic et al., 1980).
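As an illustration of how individual hazards end up at positions in the factor space, the sketch below averages invented ratings over two invented clusters of characteristics. This is only a caricature: the real factors were extracted statistically from the correlations among all 18 characteristics, not from fixed groupings, and every rating here is hypothetical.

```python
# Hypothetical sketch of placing hazards in the psychometric factor space:
# average each hazard's ratings over characteristics associated with
# Factor I ('dread') and Factor II ('unknown'). The groupings and all
# ratings (1-7 scales) are invented; the original study derived the
# factors via factor analysis rather than by averaging fixed clusters.

DREAD = ["uncontrollable", "catastrophic", "fatal", "involuntary"]
UNKNOWN = ["not observable", "new", "unknown to science", "effect delayed"]

ratings = {  # hazard -> {characteristic: invented mean rating}
    "nuclear power":    {"uncontrollable": 6, "catastrophic": 7, "fatal": 7,
                         "involuntary": 6, "not observable": 5, "new": 5,
                         "unknown to science": 5, "effect delayed": 6},
    "food irradiation": {"uncontrollable": 4, "catastrophic": 3, "fatal": 3,
                         "involuntary": 5, "not observable": 6, "new": 7,
                         "unknown to science": 6, "effect delayed": 6},
    "alcoholic drinks": {"uncontrollable": 2, "catastrophic": 2, "fatal": 3,
                         "involuntary": 1, "not observable": 2, "new": 1,
                         "unknown to science": 2, "effect delayed": 3},
}

def factor_score(hazard, characteristics):
    """Mean rating of a hazard over one cluster of characteristics."""
    vals = [ratings[hazard][c] for c in characteristics]
    return sum(vals) / len(vals)

for hazard in ratings:
    print(f"{hazard}: dread={factor_score(hazard, DREAD):.2f}, "
          f"unknown={factor_score(hazard, UNKNOWN):.2f}")
```

In this caricature, nuclear power scores high on both clusters, food irradiation scores high mainly on 'unknown', and alcoholic drinks score low on both, mirroring their relative positions in Figure 15.1.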
The psychometric paradigm shows, for example, that nuclear power presents unique risks, highly dreaded and unfamiliar. Food irradiation and DNA research were judged to be extremely unfamiliar and unknown to both science and those exposed. In contrast, alcoholic drinks and 'lifestyle' hazards such as skiing, bicycles and home power tools were perceived as being familiar, voluntary and controllable. Caffeine, alcohol and food preservatives all scored highly on Factor III, 'societal and personal exposure' (Slovic et al., 1985). Among the risk characteristics themselves, Factor I was the best predictor of perceived risk. Factor I was also highly correlated with respondents' desire for regulation, so that the further to the right an item's position on the factor space, the greater its perceived risk and the greater the public's desire for government restriction and regulation of the item (Slovic et al., 1985). The primary conclusion of this work is that experts and lay people use different definitions of risk when making judgements about hazardous activities, technologies and substances. The public has a multi-dimensional
[Figure 15.1 appears here: a two-factor map of approximately 80 hazards, including nuclear power, DNA research, food irradiation, food colouring, food preservatives, saccharin, pesticides, DDT, caffeine, alcoholic beverages and smoking. Factor I runs from controllable, not dread, not globally catastrophic, consequences not fatal, equitable, voluntary and risk decreasing at one pole, to uncontrollable, dread, globally catastrophic, consequences fatal, inequitable, involuntary and risk increasing at the other. Factor II runs from observable, known to those exposed, effect immediate, old risk and risks known to science at one pole, to not observable, unknown to those exposed, effect delayed, new risk and risks unknown to science at the other.]
Figure 15.1 Map of hazards on a structure derived from the interrelationships between 18 risk characteristics. The third factor, not shown, reflects the number of people exposed to the hazard and the degree of one's personal exposure (redrawn from Slovic, 1987). Reprinted with permission from 'Perception of Risk', Science, vol. 236 (17 April 1987), 282. Copyright 1987 American Association for the Advancement of Science.
view of risk which is as rational as, though different from, that of scientific experts. The judgements of a lay person about risks may include consideration of the attributes investigated by Slovic et al., attributes which experts are likely to disregard when making their risk judgements. This result implies that risk management strategies which try to explain 'the facts' about projected deaths and injuries to members of the public in order to try to win their acceptance of products and technologies are likely to fail, as their concern may be based on considerations of voluntariness, dread and control, rather than potential health consequences. For example, Lynch and Lin (1994) found much concern about the safety of pesticides, both as residues on food and regarding their general use. Respondents felt that 'pesticides should not be used on crops grown for food because the risks are greater than the benefits' (Lynch and Lin, 1994, p. 17), that the health effects of pesticide residues were poorly understood by the scientific community, and that current regulations do not adequately protect consumers. These reasoned judgements underline the relevance of the characteristics included in the psychometric paradigm. The psychometric paradigm employed a set of attributes of risk, applied universally to the hazardous activities, technologies and substances. The assumption is that judgements of these items can be represented using this common scale of characteristics, the meanings of which are shared by all respondents. It can be criticized on this basis, and for its failure to account for the context specificity of risk problems. In addition, the studies made no attempt to distinguish between individuals or groups of people (except experts versus lay people). The factor analysis uses average scores, providing no information about how different people may perceive risks in dissimilar ways.
Harding and Eiser (1984) state the oft-made criticism of the initial psychometric work of Slovic et al. - that it uses mean ratings, averaging over individuals and heterogeneous activities. However, they note that this inattention to individual variation may have been appropriate, as the study was designed to investigate societal relationships, not to look at how individuals make risk decisions. 15.3.2
Relationship to sociodemographic variables
Studies undertaken subsequent to the original psychometric studies of Slovic et al. have explicitly investigated differences between individuals (Flynn et al., 1994; Gardner and Gould, 1989; Harding and Eiser, 1984). These studies have focused on sociodemographic variables such as age, gender, race, occupation, level of education and nationality. Summaries and reviews of such work have also been compiled (Sjöberg and Drottz-Sjöberg, 1994; Rohrmann, 1991). In general, these studies have found that sociodemographic variables are unable to explain attitudes towards risks. A few modest yet consistent relationships have been discovered: women tend to rate risks higher than men, older people perceive greater risks than younger people, and individuals with a higher income or level of education often assess risks as lower than those with less. Research focused solely on the food sector has found similar results. Schafer et al. (1993) found that concern about food safety was not related to any specific age, place of residence, gender or educational level. Jussaume and Judson (1992) found a weak relationship with age, where respondents between 40 and 60 years of age in both samples of Japanese and US residents were more concerned about food safety than others, suggesting that such concern should not be interpreted as a fad amongst the younger generation. A study of Swedish consumers (Sjödén, 1990; Wandel, 1994) found that worry about health aspects of food increased with age, and also that women were more worried than men. This relationship with gender was also found in Norway (Wandel, 1994). The effects of gender and race were explored by Flynn et al. (1994), who noticed that the mean risk ratings of white males were strikingly lower than those of other race/gender subgroups. Further examination of the data revealed that a subsample of white men who had assigned particularly low risk scores were better educated, had a higher household income and were politically more conservative than the other respondents. These white males were more likely than the others to 'disagree that they have little control over risks to their health', to 'disagree that the world needs a more equal distribution of wealth', to 'disagree that technological development is destroying nature' and to 'agree that if a risk is very small it is OK for society to impose that risk on individuals without their consent' (Flynn et al., 1994, p. 1106).
Flynn and his colleagues summarise these attitudes: 'the subgroup of white males who perceive risks to be quite low can be characterized by trust in institutions and authorities and a disinclination toward giving decision-making power to citizens in areas of risk management' (Flynn et al, 1994, p. 1106). They hypothesize that the 'white male effect' may be to do with socio-political factors rather than 'biological' ones, given that this population group manages, controls and benefits from risk more than others. A comparative study of the USA (n = 460) and Japan (n = 442) found that in both countries two predictors of food safety concern were frequency of vegetable consumption (those consuming more vegetables were more likely to emphasize food safety) and whether respondents had children under 18 (Jussaume and Judson, 1992). Protecting the health of future generations has been found to be an important feature of food safety concern in the UK (Simpson, 1995). Other work has investigated the effect of political orientation on risk perception (Buss and Craik, 1983; Jenkins-Smith, 1994). Results have not been very conclusive, perhaps because rather simplistic psychometric
scales have been used to elicit ideologies, and a method is needed which is more embedded in the social context of the respondents (Marris et al., 1995). The psychometric paradigm has been robust in the face of empirical testing, but clearly goes only part of the way in explaining why people perceive risks in the way that they do. Risk perception studies of a more sociological nature are necessary to tap into broader social and cultural determinants of risk perception. 15.3.3
The cultural theory of risk
Another branch of risk perception research, the preserve of sociologists and anthropologists, has explored the cultural context of risk perception. In this domain, the focus is shifted from trying to identify characteristics of risks to an investigation of possible shared beliefs and values of those who perceive the risks. Cultural theory was originally proposed by Douglas (1966), and has been applied to the study of risk and risk perception since the 1980s by Douglas herself and others (Douglas, 1986; Douglas and Wildavsky, 1982; Rayner, 1987, 1992; Thompson and Wildavsky, 1982; Thompson et al., 1990; Wildavsky and Dake, 1990). The central question for risk perception researchers is why do some risks cause worry, while others do not? Douglas and Wildavsky (1982) discuss how particular dangers come to be selected for attention. According to cultural theory: The choice of risks and the choice of how to live are taken together. Each form of social life has its own typical risk portfolio. Common values lead to common fears (and, by implication, to a common agreement not to fear other things). (Douglas and Wildavsky, 1982, p. 8)
The term 'cultural theory' is in fact shorthand for 'sociocultural viability theory' (Thompson et al., 1990). This latter term is useful, as it 'has the advantage of indicating to the reader that ways of life are composed of both social relations and cultural biases (hence sociocultural) and that only a limited number of combinations of cultural biases and social relations are sustainable (hence viable)' (Thompson et al., 1990, p. 15). The three terms, cultural bias, social relations and way of life, are distinguished as follows: Cultural bias refers to shared values and beliefs. Social relations are defined as patterns of interpersonal relations. When we wish to designate a viable combination of social relations and cultural bias we speak of a way of life. (Thompson et al., 1990, p. 1)
There is a reciprocity between social relationships and views of the world (cultural bias). Each is supportive of the other, producing viable ways of life. The mutual nature of shared beliefs and social relations can be
illustrated by the example of Japanese consumer co-operatives. Jussaume and Judson (1992) describe how Japanese consumers have begun to express their concern with the safety of food by joining consumer co-operatives that apply stricter standards than the statutory limits imposed by the government. Members state that the desire to buy safe food motivated them to join the co-operatives. However, participation in a consumer co-operative exposes members to like-minded people, and to information that can strengthen their interest in food safety issues. Thus members identify themselves with the goals of the group, which are then reinforced by the membership. Rayner (1992) notes that 'Jewish dietary restrictions illustrate a basic principle of cultural theory. Whatever objective dangers may exist in the world, social organizations will emphasize those that reinforce the moral, political, or religious order that holds the group together' (Rayner, 1992, p. 87). Therefore, '[o]nce the idea is accepted that people select their awareness of certain dangers to conform with a specific way of life, it follows that people who adhere to different forms of social organization are disposed to take (and avoid) different kinds of risk. To alter risk selection and risk perception, then, would depend on changing the social organization' (Douglas and Wildavsky, 1982, p. 9). Cultural theorists have developed a framework for analysis of social organization, based on the two independent dimensions of grid and group. Group is defined as the extent to which an individual is incorporated into a bounded social unit. Weak group individuals are part of open-ended social networks, having only infrequent contact between members of the network. This promotes competitiveness. At the other end of the scale, strong group members interact widely and often, and are dependent on each other (Rayner, 1992). 
The grid variable describes the extent to which an individual's life is circumscribed by externally imposed prescriptions (Thompson et al, 1990). Low-grid organizations permit participation in any social role without status discrimination. Contrastingly, in a high-grid context, access to social activities is limited by constraints. For example, one may have to be a particular gender, or know the 'right' people, or be a certain age, in order to participate (Rayner, 1992). The interdependent nature of cultural bias, social relations and resulting ways of life is illustrated on the grid-group diagram of Figure 15.2. As shown in Figure 15.2, four cultural biases are associated with the grid-group framework: egalitarianism, individualism, hierarchism and fatalism. (Thompson et al (1990) discuss a fifth possible way of life, that of the hermit, who may be depicted at the centre of Figure 15.2 at the point where the axes cross. The hermit is autonomous and withdrawn from all social involvement.) The values and beliefs held by each of these cultural types, taken in conjunction with their mode of social organization as described by the grid and group variables, sustain a particular way of life.
Grid STRONG (Many externally imposed restrictions on choice)
FATALISM
HIERARCHY
Group WEAK (Individualized)
INDIVIDUALISM
STRONG (Collectivized)
EGALITARIANISM
WEAK (Few externally imposed restrictions on choice)
Figure 15.2 Graphical representation of the grid-group typology. (Redrawn from Schwarz and Thompson, 1990.)
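The four viable combinations of Figure 15.2 can be written out as a simple lookup. This is an illustrative sketch only: the labels follow the chapter, but the code itself is of course no part of cultural theory.

```python
# Grid-group typology of Figure 15.2 as a mapping from a (grid, group) pair
# to the corresponding way of life. 'high' grid means many externally imposed
# restrictions on choice; 'high' group means strong incorporation into a
# bounded, collectivized social unit.
WAYS_OF_LIFE = {
    ("high", "low"):  "fatalism",        # constrained from outside, excluded from groups
    ("high", "high"): "hierarchy",       # constrained, strongly collectivized
    ("low",  "low"):  "individualism",   # few constraints, open-ended networks
    ("low",  "high"): "egalitarianism",  # few constraints, bounded group
}

def classify(grid: str, group: str) -> str:
    """Return the cultural-theory way of life for a (grid, group) pair."""
    return WAYS_OF_LIFE[(grid, group)]

print(classify("low", "high"))  # prints 'egalitarianism'
```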
Social relations which are low grid and high group are termed 'egalitarian'. Within an egalitarian organization, there is little differentiation in internal roles. No privileges are granted by virtue of position. As with the other three ways of life described by cultural theory, the egalitarian pattern of social relationships generates a distinct cultural bias (way of looking at the world), which itself legitimizes the corresponding level of grid-group relations. Egalitarians wish to limit competition and promote equality. They may mistrust secretive, unaccountable institutions (such as multinational companies), and encourage public participation in decision-making. Environmental threats are viewed as particularly worrisome. When an individual's social environment has strong group boundaries, binding prescriptions, and much control through role differentiation which can be drawn upon in times of internal conflict, the resulting social relations are 'hierarchical' (Thompson et al., 1990). Demarcation of an individual's role within the hierarchy may be based, for example, on age, gender or kinship. An example of a hierarchical institution is the UK Civil Service, with its strict differentiation of internal roles. A hierarchist pattern of social relations supports a cultural bias which is particularly concerned about threats to social order (e.g. crime), though shows little concern over technologies which are sanctioned and managed by experts (Marris et al., 1995). Hierarchists may prefer risks to be managed by established institutions and expert committees.
Low grid and low group describes an 'individualistic' social context. Such people are relatively free to make their own choices, though are themselves able to manipulate and exert control over others (Thompson et al, 1990). The competitive business 'yuppie' phenomenon of the 1980s is an example. Individualism can bring about inequalities of wealth, power and knowledge (Schwarz and Thompson, 1990). The world view of individualists is such that threats to the smooth functioning of economic markets are judged as most troublesome. They would welcome deregulation, and so, for example, would prefer companies to be able to set their own standards of safety. They may embrace cost-benefit analysis as the basis of rational decision-making, and require that risk standards be justifiable on economic terms (such as through the use of valuation of statistical lives saved as a result of proposed safety measures). The fourth and final way of life is that of the 'fatalist'. Fatalists are excluded from group membership (low group) but are controlled from without by powerful others (high grid). They have few choices in life. One might expect people dependent on state benefits to be typical in this respect. Fatalists feel powerless and believe that their lives are ruled by destiny. Researchers within the cultural theory tradition have restated the original guiding question as one of 'how fair is safe enough?' (Rayner and Cantor, 1987; Rayner, 1992). Instead of using probabilities of undesirable outcomes as the basis for risk judgements, the fairness hypothesis argues that societal risk management should be focused on issues of trust, liability and consent. The preferences of organizations with different cultural biases towards these three determinants are predicted by cultural theory. 
Thus egalitarian groups will prefer consent to be given by expressed preferences, will trust institutions which promote involvement in decision-making, and will favour values-based liability for losses on the basis of a strict fault system. Hierarchists assume hypothetical consent to decisions, trust long-established, formal organizations, and use redistributive mechanisms to spread liability so that preferred institutions do not suffer unduly. Individualistic organizations want to use market mechanisms to determine liability and so spread losses. They obtain implicit consent for decisions using a revealed preference approach (so that market forces determine priorities). Individualists trust successful people, the 'high flyers', to effectively manage risk. Finally, given their fatalistic view of the world, atomized individuals have no particular preferences for consent or liability, and trust only to luck or the spirit world (Rayner and Cantor, 1987). Risk can be highly political. Douglas (1992) believes that one way in which risk is politicized is through the attribution of blame: who is at fault when things go wrong? The cultural theory of risk suggests why blame is attributed in some situations but not others, given the predicted views of the world of the different cultural types. Blame was certainly a factor in a natural hazards controversy in the USA in the late 1970s. As Brown
(1984) describes, a proposed flood control scheme (Orme Dam) was halted in 1976 due to pressure from local resident Yavapai Indians, who would have been forced to relocate, and from environmental groups concerned about the effect of the scheme on bald eagle nesting sites. Three major floods occurred over the following three years, causing several deaths and millions of dollars of damage. '[P]ro-Orme groups blamed the Indians and environmentalists for the flood damages they felt could have been prevented by Orme Dam, and the opponents of the dam blamed the State and operators of the existing dams for "outmoded water policies" that prevented safe storage of the flood flows' (Brown, 1984, p. 331). (The disputes were eventually resolved through extensive public involvement in the planning process using decision-analysis and conflict resolution techniques.) Despite its well-structured and detailed theoretical basis, the cultural theory approach to the study of risk has produced much less empirical evidence than alternative psychological approaches (Royal Society, 1992, p. 10). Even Rayner himself notes that '[c]ultural theorists have made few systematic empirical studies of risk perception and management' (Rayner, 1992, p. 84). Gross and Rayner (1985) attempted to provide a methodology for measuring the grid and group dimensions. Rayner admits that 'the measurement paradigm we created is too demanding for most empirical applications' (Rayner, 1992, p. 83). Individually administered questionnaires about attitudes to hazards, such as those that are frequently used by psychometricians, are not so appropriate for investigating risk perception from the perspective of cultural theory. On the contrary, '[c]ultural analysis does not ask about people's private beliefs. It asks what theories about the world emerge as guiding principles in a particular form of society' (Douglas and Wildavsky, 1982, p. 89).
However, perhaps the best-known instrument for empirical investigation of the cultural theory of risk is a survey method administered to individuals developed by Dake (1991). He devised a set of statements about society which are hypothesized to measure cultural bias. As yet, however, the method has not produced reliable results. New empirical tools are needed, which measure not only cultural bias but also patterns of social relations. Until then, all one can do is note that 'while the principles of cultural theory have been enormously influential, its practical application has been very limited' (Rayner, 1992, p. 84).
15.4 Risk debates and the importance of trust Risks do not exist in isolation; they constitute one of a package of features possessed by everyday items such as food. Choosing to buy one type of food in preference to another is a choice among options; associated with
each is some level of risk. It has been seen that meanings of 'risk' vary. Risks which are judged to be acceptable by some are totally unacceptable to others. The social determination of what constitutes an intolerable risk, or risky item, involves negotiation and debate. The case of food irradiation can be taken as an example of a current risk debate. The contested nature of risks and benefits is not universally accepted. Lee, for instance, sees a much more straightforward situation. Irradiation is stated to be a beneficial, safe, technology that the public will not accept: The benefits of food irradiation far outweigh the risks. Scientific consensus is this technology is safe and effective. An annoying question confounds every scientist studying irradiation: Why won't the public accept irradiated foods with negligible risk? (Lee, 1989, p. 68)
Burke, Chairman of the UK Advisory Committee on Novel Foods and Processes, has dismissed concerns about irradiation as irrational: I see food irradiation as a helpful addition to the range of techniques that we use to deliver safe food to the general public. However it is not seen by the public like that and that largely because a vigorous campaign has been conducted against food irradiation. It is not helped by the use of the word irradiation, with its image of nuclear reactors and nuclear damage, but the argument is frankly not rational. (Burke, 1991, p. 77)
Thus the problem is framed as one not of science, but of public acceptance. Burke wonders '[d]id we just fail to explain it clearly enough in scientific language?' (Burke, 1991, p. 78). There is sometimes an assumption that members of the public would accept technologies like irradiation if they could be made to understand the statistics. This view fails to appreciate that there may be other aspects to the technology, in addition to the scientifically calculated likely numbers of injuries and fatalities, which can cause public concern. Otway (1992) describes a cycle of assumptions about risk debates. Problems involving risk are first likely to be framed as scientific; risks are viewed as technical entities which should be quantified with care in order to resolve debate. In this way, disagreements which arise over contested technical 'facts' can often be resolved through scientific experimentation. If such a technocentric approach fails to address the concerns voiced during the risk debate, comparison of risks using economic tools such as cost-benefit analysis or estimating the potential monetary value of statistical lives saved as a result of proposed safety improvement may then be used. For example, calculation of risks, costs and benefits in financial terms may be used to try to persuade consumers that food irradiation
should be acceptable to them. Such techniques may not be sufficient to encompass the nature of the problem as perceived by the different parties involved. The debates may in fact be centred on questions of value, or wider social goals. Does society need the technology at all? Would it have a detrimental effect on developing countries, perhaps, or counteract people's religious beliefs? The problem hence evolves into a realization that the debate, and the nature of risk itself, is intrinsically social and political. As Otway puts it: Acknowledging the limited role science can play in conflict resolution allows policy issues to be addressed directly, reduces unrealistic public expectations of scientists and, in the end, strengthens both science and democracy. This is an essential step toward a new paradigm that conceives authentic communication between experts and citizens as an integral part of the social relations of technology and the sharing of power and responsibility. (Otway, 1992, p. 228)
Trust is a prerequisite for meaningful communication. Trust and credibility are recognized as important factors in the perception of risk. Risk managers need to be judged as competent and trustworthy by those whose health they are charged to protect. If decision-makers are not trusted, it is likely that their decisions will not be accepted. Equally, if those in authority have different priorities and hold different views of the world to their constituents, their decisions may fail to gain wide acceptance. Jussaume and Judson (1992) investigated trust of farmers, government and the food industry in a comparison of residents of Seattle, USA, and Kobe, Japan. Respondents were asked to indicate the extent to which they disagreed or agreed with statements like 'farmers do a good job of making sure the food I buy is safe to eat'. Equal levels of scepticism in both the USA and Japan were found about the ability of governmental and business institutions (including farmers) to guarantee safety of the food supply. Independent variables which were statistically significant in explaining the trust variable were household income (those with an income greater than US$50 000 or 10 million yen were more trustful than poorer households) and membership of a consumer co-operative (members were less inclined to trust the ability of governmental and business institutions to protect food safety). Wandel (1994) reports a lack of confidence in food authorities in Norway. The degree of confidence decreased with age, with nearly half of all respondents (n = 1021) aged over 45 reporting a lack of confidence in food authorities to prevent contaminated foods from entering the market. Such erosion of trust is not confined to these countries, or to the food sector. Once confidence is lost, however, it is difficult to regain (Slovic, 1993).
15.5 Conclusion The perception of food-related risks is an evolving field of study. Researchers need to be careful, as '[e]ven by claiming to explain the public's behavior, psychologists can contribute to a sort of disenfranchisement - by reducing the perceived need to let the public speak for itself' (Fischhoff, 1990, p. 647). One of the greatest challenges for society is to decide how varying priorities and concerns should be addressed. The need to incorporate the wishes of the consumer is particularly appropriate for the public policy arena, where decisions are made on behalf of the population. Tolerable risk decisions are not based solely on the technically defined magnitude of the risk. Considerations such as the cost of risk reduction, and the values of the wider public, inform risk management and help to determine which substances are banned (e.g. DDT), which are permitted for use under particular controls (e.g. saccharin or veterinary medicines), and which have few intake restrictions (e.g. alcohol or potatoes). Public risk perception research has shown that the acceptability of a risky product or technology is determined by many factors. These include the characteristics of the hazard itself: whether it is faced voluntarily or involuntarily, whether it poses harm to future generations, and so on. The social, political and institutional context in which the hazard is managed is also important. People with different world views and who hold unlike ideological goals will probably disagree about the importance of particular risks. At the organizational level, if a risk-managing authority has a reliable track record and is trusted by the public to competently protect its health, then conflicts over risk-related decisions taken by that authority are likely to be reduced.
Risk management may become more informed, fair and defensible by including a range of judgements of risk (risk perceptions) in the decisionmaking process: When we come to consider the problems of management of risks, we shall need to recall that it is only natural for the different parties to a hazard, including those who create it, those who control it, and those who experience it, to see it in different ways. An appreciation of this diversity of perception is of great importance for the development of means to achieve good management of risks. (Council for Science and Society, 1977, p. 18)
If some people (e.g. a committee of toxicologists) evaluate a product and feel that it possesses a tolerable level of risk, whereas others (e.g. a consumer interest panel) judge the same product as being of great concern, whose risk judgements should be given most weight? Put another way, in whose interests should decisions be made when values are in conflict?
Such questions are answered during the process of risk management. An article on food safety in Australia addressed the question of value and risk management, stating 'ultimately it is the people of this country, not the officials, who must make the decisions about the kind of society they want. At some point, the amount of risk a society is willing to accept in order to achieve a certain benefit becomes a matter for the public at large to decide' (Brell, 1979). Renn et al. (1993) also advocate the incorporation of citizens' and experts' concerns into decision-making: Technocratic decision making is incompatible with democratic ideals. The involvement of affected parties represents the political value of government by the people, not just for the people. If we take the ideal of democracy seriously, public participation is a normative prerequisite. (Renn et al., 1993, p. 210)
Putting the call for public involvement in decision-making into practice is an enormous challenge. To some extent it is met by existing systems of political democracy through the election of officials who represent the wider public. However, as Keeney et al. (1990) put it: While voting and political representation is and will remain the main vehicle for incorporating values, it leaves unresolved how political representatives or policy makers should interpret public values in a specific policy context, how public values should be operationalised, what role the experts and their values should have, and how expert recommendations and value interpretations should be combined in policy making. (Keeney et al., 1990, p. 1011)
Social and political problems of risk need to be recognized and addressed as such. This should facilitate decision-making, which in turn should benefit all interested parties, scientists, regulators, industries and members of the public, alike. Framing risk problems in terms of the public view versus that of the scientific view is inadequate and not constructive. First, it assumes the existence of a consensus among members of the public and among members of the scientific community on risks. Every consumer does not judge food risks in the same way. Equally, the legal system bears witness to the differences that can exist between scientists when required to give expert testimony. Second, it creates a somewhat artificial distinction between two sets of perceptions, one of which (that of the scientist) is often stated as being an assessment of the 'real' or 'actual' risk. In fact, 'the distinction between "actual" and "perceived" risk is misconceived, because, at a fundamental level, both inevitably involve human interpretation and judgement, and hence "subjectivity", to a greater or lesser degree' (Royal Society, 1992, p. 97). Put simply, members of the public and scientific experts define and judge risks differently. Acknowledgement of the existence of a range of risk perceptions, and
differing views on what should be done about risk, is the first step towards making accepted risk decisions. Integration of this variety of viewpoints is the task of the risk manager, who, inevitably and unenviably, is left with a juggling act.
Acknowledgement
I would like to thank Barbara Soby for her useful comments on this chapter, and the UK Ministry of Agriculture, Fisheries and Food for their continued support.
References
Advisory Committee on the Microbiological Safety of Food (1993) Interim Report on Campylobacter. HMSO, London.
Aldrich, L. (1994) Food-safety policy: balancing risks and costs. Food Review, 17(2), 9-13.
Beck, U. (1992) Risk Society: Towards a New Modernity. Sage Publications, London.
Brell, M. (1979) Food safety and the consumer. Food Technology in Australia, 31(2), 65-67.
Brewer, M.S., Sprouls, G.K. and Russon, C. (1993) Consumer attitudes toward food safety issues. Journal of Food Safety, 14, 63-76.
Brown, C.A. (1984) The central Arizona water control study: a case for multi-objective planning and public involvement. Water Resources Bulletin, 20(3), 331-337.
Burke, D.C. (1991) Public acceptance of innovation. In: Roberts, L. and Weale, A. (eds) Innovation and Environmental Risk. Belhaven Press, London, pp. 75-79.
Buss, D.M. and Craik, K.H. (1983) Contemporary worldviews: personal and policy implications. Journal of Applied Social Psychology, 13(3), 259-280.
Buzby, J.C. and Skees, J.R. (1994) Consumers want reduced exposure to pesticides on food. Food Review, 16(2), 19-22.
Collins, E.J.T. (1993) Food adulteration and food safety in Britain in the 19th and early 20th centuries. Food Policy, 18(2), 95-109.
Council for Science and Society (1977) The Acceptability of Risks: The Logic and Social Dynamics of Fair Decisions and Effective Controls. Barry Rose Ltd, Chichester.
Dake, K. (1991) Orienting dispositions in the perception of risk: an analysis of contemporary worldviews and cultural biases. Journal of Cross-Cultural Psychology, 22(1), 61-82.
Department of Health (1993) Key Area Handbook: Coronary Heart Disease and Stroke. Department of Health, Heywood, Lancashire.
Douglas, M. (1966) Purity and Danger: An Analysis of Concepts of Pollution and Taboo. Routledge and Kegan Paul, London.
Douglas, M. (1986) Risk Acceptability According to the Social Sciences. Routledge and Kegan Paul, London.
Douglas, M. (1992) Risk and Blame: Essays in Cultural Theory. Routledge, London.
Douglas, M. and Wildavsky, A. (1982) Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers. University of California Press, Berkeley, California.
Fischhoff, B. (1989) Risk: a guide to controversy. Appendix C in Improving Risk Communication. National Research Council, National Academy Press, Washington DC.
Fischhoff, B. (1990) Psychology and public policy: tool or toolmaker? American Psychologist, 45(5), 647-653.
Flynn, J., Slovic, P. and Mertz, C.K. (1994) Gender, race and perception of environmental health risks. Risk Analysis, 14(6), 1101-1108.
Forsythe, R.H. (1993) Risk: reality versus perception. Poultry Science, 72(6), 1152-1156.
Frewer, L.J., Shepherd, R. and Sparks, P. (1994) The interrelationship between perceived knowledge, control and risk associated with a range of food-related hazards targeted at the individual, other people and society. Journal of Food Safety, 14, 19-40.
Gardner, G.T. and Gould, L.C. (1989) Public perceptions of the risks and benefits of technology. Risk Analysis, 9(2), 225-242.
Gross, J.L. and Rayner, S. (1985) Measuring Culture. Columbia University Press, New York.
Hall, R.L. (1971) Information, confidence and sanity in the food sciences. The Flavour Industry, August, 455-459.
Harding, C. and Eiser, J. (1984) Characterising the perceived risks and benefits of some health issues. Risk Analysis, 4(2), 131-141.
Jenkins-Smith, H.C. (1994) Stigma Models: Testing Hypotheses of How Images of Nevada are Acquired and Values Attached to Them. Research Report, Policy and Economic Analysis Group, University of New Mexico, Albuquerque.
Jukes, D. (1993) Regulation and enforcement of food safety in the UK. Food Policy, 18(2), 131-142.
Jussaume, R.A. and Judson, D.H. (1992) Public perceptions about food safety in the United States and Japan. Rural Sociology, 57(2), 235-249.
Keeney, R.L., von Winterfeldt, D. and Eppel, T. (1990) Eliciting public values for complex policy decisions. Management Science, 36(9), 1011-1030.
Lee, K. (1989) Food neophobia: major causes and treatments. Food Technology, December, 62-73.
Lowrance, W.W. (1976) Of Acceptable Risk: Science and the Determination of Safety. William Kaufman Inc., Los Altos, California.
Lynch, S. and Lin, C.T.J. (1994) Food safety: meal planners express their concerns. Food Review, 17(2), 14-18.
Marris, C.D., O'Riordan, T. and Simpson, A.C.D. (1995) Redefining the cultural context of risk perception. Paper presented to the Annual Meeting of the Society for Risk Analysis (Europe), 21-25 May, Stuttgart, Germany.
Ministry of Agriculture, Fisheries and Food (1990) Risk Assessment and Risk Management in Food Safety. MAFF Consumer Panel paper CP (90) 4/6. MAFF, London.
Ministry of Agriculture, Fisheries and Food (1994) Chemicals in Food: Managing the Risks. FoodSense Booklet PB1695. MAFF, London.
Oltersdorf, U. (1995) Differences in German consumer concerns over suggested health and food hazards. In: Feichtinger, E. and Kohler, B.M. (eds) Current Research into Eating Practices, Contributions of Social Sciences. 16th Annual Meeting of AGEV in Potsdam, Germany, 14-16 October 1993. AGEV Publication Series Volume 10. Supplement to Ernährungs-Umschau, 42, 171-173.
Otway, H. (1992) Public wisdom, expert fallibility: toward a contextual theory of risk. In: Krimsky, S. and Golding, D. (eds) Social Theories of Risk. Praeger, Westport, Connecticut, pp. 215-228.
Rayner, S. (1987) Risk and relativism in science for policy. In: Johnson, B.B. and Covello, V.T. (eds) The Social and Cultural Construction of Risk. D. Reidel, Dordrecht, pp. 5-23.
Rayner, S. (1992) Cultural theory and risk analysis. In: Krimsky, S. and Golding, D. (eds) Social Theories of Risk. Praeger, Westport, Connecticut, pp. 83-115.
Rayner, S. and Cantor, R. (1987) How fair is safe enough? The cultural approach to societal technology choice. Risk Analysis, 7(1), 3-9.
Renn, O., Webler, T., Rakel, H. et al. (1993) Public participation in decision making: a three step procedure. Policy Sciences, 26, 189-214.
Rohrmann, B. (1991) A Survey of Social-Scientific Research on Risk Perception. Research Report, Programme Group Man, Environment, Technology. KFA Jülich, Germany.
Royal Society (1992) Risk: Analysis, Perception and Management. The Royal Society, London.
Schafer, E., Schafer, R.B., Bultena, G.L. and Hoiberg, E.O. (1993) Safety of the US food supply: consumer concerns and behaviour. Journal of Consumer Studies and Home Economics, 17, 137-144.
Schwarz, M. and Thompson, M. (1990) Divided We Stand: Redefining Politics, Technology and Social Choice. Harvester Wheatsheaf, London.
Simpson, A.C.D. (1994) Integrating Public and Scientific Judgements into a Tool Kit for Managing Food-Related Risks, Stage III: Pilot Test. Research Report 23. Centre for Environmental and Risk Management, University of East Anglia, Norwich.
Simpson, A.C.D. (1995) Exploring consumer perceptions of food-related risks and benefits using focus groups. Paper presented at the Annual Meeting of the Society for Risk Analysis (Europe), 21-25 May, Stuttgart, Germany.
Sjöberg, L. and Drottz-Sjöberg, B.-M. (1994) Risk perception. In: Radiation and Society: Comprehending Radiation Risk, Vol. 1. Proceedings of an International Conference Organized by the International Atomic Energy Agency, Paris, October. IAEA, Vienna, pp. 29-59.
Sjödén, P.-O. (1990) Oro och uppfattningar bland konsumenter. Vår Föda, 42(3), 175-185.
Slovic, P. (1987) Perceived risk. Science, 236(4799), 280-285.
Slovic, P. (1993) Perceived risk, trust and democracy. Risk Analysis, 13(6), 675-682.
Slovic, P., Fischhoff, B. and Lichtenstein, S. (1979) Rating the risks. Environment, 21(3), 14-20, 36-39.
Slovic, P., Fischhoff, B. and Lichtenstein, S. (1980) Facts and fears: understanding perceived risk. In: Schwing, R.C. and Albers, W.A. (eds) Societal Risk Assessment: How Safe is Safe Enough? Plenum Press, New York, pp. 181-214.
Slovic, P., Fischhoff, B. and Lichtenstein, S. (1985) Characterizing perceived risk. In: Kates, R., Hohenemser, C. and Kasperson, J. (eds) Perilous Progress: Managing the Hazards of Technology. Westview Press, Boulder, Colorado, pp. 91-125.
Sparks, P. and Shepherd, R. (1994) Public perceptions of the potential hazards associated with food production and food consumption: an empirical study. Risk Analysis, 14(5), 799-806.
Starr, C. (1969) Social benefit versus technological risk: what is our society willing to pay for safety? Science, 165(3899), 1232-1238.
Thompson, M. and Wildavsky, A. (1982) A proposal to create a cultural theory of risk. In: Kunreuther, H.C. and Ley, E.V. (eds) The Risk Analysis Controversy: An Institutional Perspective. Springer-Verlag, New York, pp. 145-161.
Thompson, M., Ellis, R. and Wildavsky, A. (1990) Cultural Theory. Westview Press, Boulder, Colorado.
Wandel, M. (1994) Understanding consumer concern about food-related health risks. British Food Journal, 96(7), 35-40.
Wildavsky, A. and Dake, K. (1990) Theories of risk perception: who fears what and why? Daedalus, 119(4), 41-60.
16 Decision aids
M. POSTLE and D. BALL
16.1 Introduction
Food products, consumer goods in general, and indeed all human activities, normally carry some risk to health or the environment which is either accepted as trivial or tolerated in exchange for the benefits of the product or activity. Seldom is it possible to eliminate risk without forgoing benefits, and generally the quest is to achieve a balanced position which provides, inter alia, a 'reasonable' level of risk. The food sector is faced with a singular challenge because of the large number of substances entering the food supply as natural toxicants and mycotoxins, as chemicals used in agricultural production, processing or packaging, or present as a result of environmental pollution. The range of potential chemical contaminants is increasing as new substances such as additives are licensed for food, agricultural or veterinary use, or when novel pollutants from other industries impinge upon the food supply. The public expects to be protected effectively against such imposed risks, and legislation has, until now, been largely prescriptive. Thus, statutory limits are often used to define the maximum tolerable concentration for a variety of chemicals in particular foods. Even in these cases, however, statutory limits are normally based upon some kind of risk analysis which is used to inform the risk management decision process. Despite the plurality of causes and effects of food-borne health symptoms, which implies the need for a diversity of risk analysis techniques, there is a degree of similarity in the style of approaches which are applied when, for example, considering hazardous compounds added to food. Typically, a three-stranded approach of the type shown in Figure 16.1 is adopted, the outcome of which is then applied to risk management. Broadly, this follows one of two routes (Figure 16.2). In the case of supposed threshold chemicals, a no observed adverse effect level (NOAEL) is determined from animal tests. 
A margin of protection is then applied to this to arrive at a much lower human acceptable daily intake (ADI) (for chemicals deliberately added), or tolerable daily intake (TDI) for adventitious chemicals. In the case of either chemicals with no known threshold or food-borne radioactivity, an approach based upon an acceptable level of risk has to be adopted, since prohibition is
Figure 16.1 A general scheme of risk assessment for food and water. [Diagram: a three-stranded scheme in which hazard evaluation, occurrence assessment and consumption assessment are combined — the latter two via intake estimation — to produce the risk estimation.]
seldom feasible. Although substantial safety factors are included in both approaches, it is recognized that the level of risk is not zero, though it is generally taken to be at a level which most would regard as negligible. In the case of the adventitious chemical contaminants, the first priority of the risk management strategy is to prevent food containing concentrations of the contaminant above certain predetermined levels, namely, those that would result in exposure that exceeds the TDI for chemicals which are not genotoxic carcinogens, from entering the food supply. At concentrations below those likely to cause a consumer to exceed the TDI, the most common approach to risk management is to reduce contamination to the lowest level practicable, although how this should be defined in practice is less obvious. In the case of food additives, the primary trade-off which has to be made in order to arrive at a decision is between the benefits of the additive and the associated risk. Adventitious chemicals generally have no benefits, however, and in this case the appropriate trade-off is between the costs to society of implementing remedial measures and the amount of risk reduction which the measures bring about. At some point the incremental decrease in risk of adverse health consequences will no longer be justified by the additional cost of control.

Figure 16.2 Two approaches to risk management. [Diagram: from a toxicological evaluation, the threshold route runs NOAEL → ADI/TDI = NOAEL/SF → regulation to ensure PDI < ADI/TDI; the non-threshold route runs dose-response curve → risk assessment, R = f(dose) as dose → 0 → regulation to ensure PDI < VSD. NOAEL = No Observed Adverse Effect Level; SF = Safety Factor; R = Risk; ADI/TDI = Acceptable/Tolerable Daily Intake; VSD = Virtually Safe Dose; PDI = Probable Daily Intake of a near maximally exposed (97.5th percentile) consumer.]

The
crucial question is how and on what basis these trade-offs should be made. In the UK the underlying philosophy of risk management, which is applied in many sectors, is referred to as the 'tolerability of risk' or 'constrained optimization' (Ball and Goats, 1996). With reference to Figure 16.3, it may be seen that the risks associated with some activity (or product) are first assessed against three criteria:
• whether a given risk is so great or the outcome so unacceptable that it must be refused altogether (top zone of Figure 16.3);
• whether the risk is, or has been made, so small that no further precaution is necessary (bottom zone); or
• if a risk falls in the intermediate zone, that it has been reduced to the lowest level practicable, bearing in mind the benefits arising from its acceptance and taking into account the costs of any further reduction.
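The threshold route sketched earlier (ADI/TDI = NOAEL/SF) reduces to simple arithmetic. The following Python sketch illustrates it with entirely hypothetical numbers — the NOAEL, contaminant concentration, consumption rate and body weight are all invented for illustration and carry no regulatory meaning:

```python
# Threshold route of Figure 16.2, with entirely hypothetical numbers.

def adi_from_noael(noael_mg_per_kg_day, interspecies_sf=10.0, intraspecies_sf=10.0):
    """ADI/TDI = NOAEL / SF, using the conventional 10 x 10 default factors."""
    return noael_mg_per_kg_day / (interspecies_sf * intraspecies_sf)

def pdi(conc_mg_per_kg_food, intake_kg_food_per_day, body_weight_kg=60.0):
    """Probable daily intake (mg/kg bw/day) of a high-level consumer."""
    return conc_mg_per_kg_food * intake_kg_food_per_day / body_weight_kg

noael = 50.0                                # mg/kg bw/day from animal tests (hypothetical)
adi = adi_from_noael(noael)                 # 0.5 mg/kg bw/day
exposure = pdi(conc_mg_per_kg_food=5.0,     # hypothetical contaminant level in food
               intake_kg_food_per_day=0.5)  # hypothetical 97.5th percentile consumption
print(f"ADI = {adi} mg/kg bw/day; PDI = {exposure:.4f}; acceptable: {exposure < adi}")
```

The regulatory test is then simply whether the probable daily intake of a near maximally exposed consumer stays below the ADI/TDI.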
Inherent within the scheme outlined in Figure 16.3 are a number of fundamental concepts. First, the idea of zero risk has been rejected. Instead, the notion of tolerating risks in exchange for the benefits of risky activities is introduced. Second, above a certain level a risk is regarded as intolerable and cannot be justified in any ordinary circumstances. Third, below the intolerable risk level an activity may take place provided that the associated risks are as low as reasonably practicable (ALARP).

Figure 16.3 The UK framework for risk regulation (Health and Safety Executive, 1992). [Diagram: levels of risk and ALARP — an intolerable level at the top; below it the ALARP region, in which risk is tolerable only if further reduction is impracticable or its cost grossly disproportionate (upper part), or if costs and benefits are in balance (lower part); and at the bottom a broadly acceptable region of negligible risk.]
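The three-zone scheme amounts to a simple classification rule. In the sketch below the numerical zone boundaries are illustrative placeholders only, not the values used by any regulator:

```python
# Three-zone tolerability-of-risk classification (cf. Figure 16.3).
# The numerical boundaries below are illustrative placeholders, not regulatory values.

INTOLERABLE = 1e-3         # hypothetical upper bound on individual annual risk
BROADLY_ACCEPTABLE = 1e-6  # hypothetical negligible-risk level

def classify(annual_risk):
    """Assign an individual annual risk to one of the three zones of the framework."""
    if annual_risk >= INTOLERABLE:
        return "intolerable"
    if annual_risk <= BROADLY_ACCEPTABLE:
        return "broadly acceptable"
    return "ALARP region: reduce as low as reasonably practicable"

for r in (1e-2, 5e-5, 1e-7):
    print(f"{r:.0e}: {classify(r)}")
```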
Decisions as to what constitutes ALARP may be made on a variety of bases, ranging from entirely qualitative to fully quantitative. However, the greater awareness of the need for efficient legislation and more consistency in decision-making has encouraged the more widespread use of quantitative techniques such as cost-benefit and risk-benefit analysis as part of the risk management process. Such techniques are now finding increasing application in many countries in both the public and private sectors as an aid to decision-making. The use of cost-benefit analysis is in itself nothing new. In the UK, for example, it has long been recommended that systematic economic appraisal should be made of expenditure decisions and should include even those costs or benefits which have no market price, but which nonetheless matter, such as health and safety and the environment (HM Treasury, 1991). This has generated much pioneering research into the valuation of the avoidance of fatalities and injuries, and of environmental goods, and has provided a spur for the development of techniques of risk analysis in all sectors, including that dealing with food safety (Goats and Ball, 1994). The purpose, then, of this chapter is to outline the procedures of cost-benefit and risk-benefit analysis, and to indicate the means by which non-market goods, such as health and safety and the environment, are being valued.

16.2 Risk-benefit analysis

16.2.1 The analytical framework
Over the past few years, there has been considerable work into the development of risk-benefit frameworks for the assessment of regulatory decisions which could impact upon human health and safety and the environment. Although the specific requirements of the various frameworks may vary, there is a general recognition that the appraisal may take one of three possible forms (Department of Environment, 1995a): a systematic qualitative assessment; a semi-quantified assessment; or a fully quantified assessment. In some cases, a qualitative assessment may be sufficient to indicate how the safety benefits to human health and the environment compare with the costs to producers and other stakeholders. For example, Article 10 of the European Community Existing Substances Regulations (EEC 793/93) requires that any recommendations concerning restrictions on the marketing or use of a substance (including food-related chemicals) must submit an analysis of the advantages and disadvantages of the substance and of the availability of replacement substances. At a minimum, this requires a qualitative assessment of the risks, costs and benefits of any measure which would involve marketing and use restrictions.
However, in many cases, a qualitative assessment will not be detailed enough to show whether the benefits from improved safety outweigh the costs to industry and others. In such cases, more quantified assessments may be required and a semi-quantified or fully quantified assessment is undertaken. For example, it should be possible, for many regulatory issues, to develop estimates of the costs involved in adopting and implementing a risk management option. Similarly, many of the economic benefits to producers and other stakeholders may be readily calculated. With regard to reduction of risks to human health and the environment, these may be quantifiable through the use of risk assessment techniques and, where the data are available, a monetary value can be placed on the level of risk reduction. Where valuation of human health and environmental effects is feasible, the expression of safety benefits in the same units as the costs of control allows the merits of the proposed regulation to be more readily evaluated and should help ensure that regulatory decision-making becomes more consistent. The framework which is most often suggested as providing the basis for the semi-quantitative or fully quantitative assessment is that of social cost-benefit analysis. Cost-benefit analysis is based on the assumption that the preferences of individual members of society should determine the trade-offs that society is willing to make in the allocation of resources amongst competing demands. It provides a direct determination of the resource implications of a decision and whether or not a given action is justified from a societal perspective. Cost-benefit analysis is more flexible than other economic appraisal approaches such as cost-effectiveness analysis. Cost-effectiveness analysis is only applicable in cases where there is a set safety goal which must be met and the aim of the analysis is to determine the most inexpensive means of achieving that goal. 
Because it is focused on achievement of a single set of goals, it does not consider the additional costs and benefits which may arise from a proposed measure. It also fails to address the question of whether or not society is willing and able to allocate resources towards the safety measure in question. As a result, it provides no indication of whether the benefits outweigh the costs, but only that the goals as defined are met. The use of cost-effectiveness analysis would be compatible, however, with the implementation of decision criteria related to the principles underlying ALARP.

16.2.2 The scope of the analysis
A generalized risk-benefit framework is illustrated in Figure 16.4, based on the adoption of a cost-benefit analysis approach. As can be seen from this figure, a number of different steps are involved in undertaking a risk-benefit analysis. These include the following:
Figure 16.4 The risk-benefit assessment process. [Flow diagram: identification of risks of concern leads to the specification of potential risk mitigation options; data on consumption, production and substitutes feed an analysis of producer and consumer surplus, giving total costs; human health and environmental risk assessment before and after control feeds assessments of human health and environmental benefits, giving total benefits; total costs and total benefits are combined in a calculation of the net benefits of mitigation, followed by comparison of options and sensitivity analysis.]
• Identification of the potential risks to human health and the environment for the aspect of food safety under examination; this will generally follow well-established risk assessment procedures such as those set out by the Department of the Environment (1995b).
• Detailing the potential control or risk mitigation options, where these may involve one or more of a wide range of different measures, such as changes in production or processing techniques, use of chemical substitutes, changes in storage and/or packaging.
• Determining the implications (costs, risks and benefits) of the different control options, where these include the economic impacts on producers and consumers, the changes in human health risks and the changes in environmental risks.
• Comparing the costs, risks and benefits of the different control options and identifying the preferred option.
• Testing the robustness of the analysis with regard to the data and assumptions used in the analysis and any associated uncertainties.

A systematic analysis needs to address all of the costs and benefits associated with a given policy action. It is important, however, that resources are focused on the key elements affecting the decision. For a risk-benefit analysis, this requires consideration of the financial implications for producers and consumers, and the impacts on human health (and potentially the environment). Determining those aspects which are likely to be significant, and thus affect the end decision, will involve a comprehensive examination of available information. For some food safety issues, a considerable amount of information will exist on all three areas, due to long-established use or high levels of concern with regard to a particular effect. In other cases, little information is likely to exist (particularly for newer food-related chemicals and/or processes). One of the first steps will be to gather information on consumption of the chemical, or exposure to the chemical or process of concern.
This will require data on current levels of consumption from all sources and consumption associated with food, including trends, and taking into account past, current and expected legislation, including that which would impact upon other sources of the hazard of concern. It will also be important to consider whether any potential substitutes exist, and their availability, efficacy, and associated risks. Collection of these data may not be straightforward and it is likely that a range of different sources will have to be tapped, as information may be of a conflicting nature (e.g. on trends and on the potential of different substitutes and their efficacy). In addition, many industry data may be of a confidential nature. Furthermore, where a hazardous substance is a minor input to the overall production of a range of end-products (within a given category), few data may be available on the level of consumption
in relation to food or to a specific food item. At a broader level, end-product statistics may not be available in the form needed for this type of analysis. In addition, many substitutes may be new products, making it difficult to assess their relative merits or demerits. The sections which follow elaborate upon the generalized risk-benefit analysis framework and provide more detail on the estimation of impacts on producers and consumers and the approaches available for the valuation of human health and environmental risks.
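The closing steps of the generalized framework — totalling costs and benefits, calculating net benefits for each option and testing sensitivity — can be sketched as follows. The options and all cost and benefit figures are wholly invented for illustration:

```python
# Net-benefit comparison of hypothetical risk mitigation options; all figures invented.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    annual_cost: float     # compliance costs to producers and consumers
    annual_benefit: float  # monetized reduction in health/environmental risk

    @property
    def net_benefit(self):
        return self.annual_benefit - self.annual_cost

options = [
    Option("no action", 0.0, 0.0),
    Option("reformulate with substitute", 2.0e6, 3.5e6),
    Option("ban the substance", 9.0e6, 4.0e6),
]

preferred = max(options, key=lambda o: o.net_benefit)
print("preferred:", preferred.name)

# Crude sensitivity test: halve all benefit estimates and re-rank the options.
for o in options:
    o.annual_benefit *= 0.5
preferred_low = max(options, key=lambda o: o.net_benefit)
print("preferred under low benefit estimates:", preferred_low.name)
```

In this invented case the ranking flips under pessimistic benefit estimates, which is exactly the kind of fragility the sensitivity-analysis step is meant to expose.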
16.3 Assessing impacts on producers and consumers
Safety issues may arise at a number of different stages in the production and marketing of food products, and may be of natural or anthropogenic origin. For example, there are concerns associated with different agricultural pesticides, food additives and preservatives, and naturally occurring agents such as aflatoxins. The economic implications for consumers and producers are likely to vary considerably, depending on the nature of the risk and the means of its control. In the case of pesticides, banning the use of a particular chemical could lead to a number of actions by producers. These may include switching to a more costly alternative, changing farming practices to those less dependent on agrochemicals, or ceasing production of the crop due to a decrease in profit margins. All of these could result in increased production costs and, therefore, lead to increased prices of the final food products. Those involved in the production of food face a variety of options in determining how to undertake their activities and are assumed to choose the combination of activities which is the most efficient in maximizing their revenues. The imposition of restrictions on the manner in which those activities are undertaken could therefore raise the costs faced by the operator, which in turn may raise the price of its output to others involved in food processing and marketing and to consumers. Similarly, the profitability of those involved in production and marketing may be affected. These effects are illustrated in Figure 16.5, which provides a graphical presentation of the supply and demand curves that define the impacts of changes in costs on producers and consumers. These changes in costs are said to affect producer and consumer 'surplus'. As can be seen from the graphs, the supply curve for a good is upward sloping, since increases in price will induce the quantity supplied to increase. 
The demand curve is downward sloping, because consumers are prepared to buy more at lower prices. The point at which demand and supply intersect is known as the point of equilibrium; it represents a balance between supply and demand. These graphs are taken from welfare economics, which provides the theoretical underpinnings for cost-benefit analysis.

Figure 16.5 Producer and consumer surplus. [Four panels plotting price against quantity: (a) the total amount consumers are willing to pay for quantity q0, shown as the area under the demand curve; (b) producer and consumer surplus at the supply-demand equilibrium; (c) the shift in supply from S0 (without controls) to S1 (with controls); (d) the resulting change in consumer surplus.]

In economic terms, the
value of a good or service is measured by how much an individual is willing to pay for it. An individual's willingness to pay relates to all of the attributes of a good which are of value to the individual, including factors such as quality and reliability, but also takes into account any negative qualities. The relationship between the amount which an individual is willing to pay for a given quantity of a good at different prices is represented through a demand function. In Figure 16.5a, the total amount that consumers are willing to pay for quantity q0 is represented by the shaded area. The shaded areas in Figure 16.5b indicate the producer and consumer surplus for the equilibrium represented by point q0. Producer surplus is the difference between what producers actually sell their output for and what their costs of production are, as represented by different points on the supply curve. Consumer surplus is the difference between what individuals actually pay for the good, and what they would have been willing to pay as represented by the demand curve.
With the introduction of a food safety measure, the supply curve is moved from its original position. In Figure 16.5c this is represented by a shift from S0 to S1 as the costs of producing the good become higher and the price of the good increases. This shift in supply impacts upon both producers and consumers. The change in producer surplus is the difference between the shaded areas A and B. The change in consumer surplus is illustrated in Figure 16.5d and is represented by the shaded area C. The overall change in surplus is therefore A + (C - B), or the sum of A and the triangular area in the centre of the graph. It is this sum which provides the total measure of the economic impacts on producers and consumers (Risk & Policy Analysts and Acer Environmental, 1992; Risk & Policy Analysts, 1995). Estimation of the impacts on producer and consumer surplus requires information on the relationships between demand and supply at different prices. This information can be collected through analysis of changes in supply and demand over time or through the use of market survey techniques. The first step in calculating the impacts on producer surplus is to identify what will be affected and to trace the effects of this through the chain of production activities. Although it would be possible to identify the impacts of a safety measure on each stage of production, it may be more sensible in some cases to identify the link in the chain which will face the largest impact; it may also be important to examine the implications at a point which is as close as possible to the point of consumption, if there is a clearly identifiable impact. The importance of the triangular area (C-B) will depend upon the nature of the issue being considered. Where the increase in costs associated with a given measure accounts for only a small proportion of end-product costs, then the net change in this area will be small. 
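The surplus calculations illustrated in Figure 16.5 can be reproduced numerically. The sketch below assumes hypothetical linear demand and supply curves and a per-unit compliance cost; every parameter value is invented for illustration:

```python
# Linear supply/demand version of Figure 16.5; all parameters hypothetical.
# Demand: P = a - b*Q.  Supply without controls (S0): P = c + d*Q.
# A safety measure adds a per-unit compliance cost t, giving S1: P = (c + t) + d*Q.

def equilibrium(a, b, c, d):
    """Price and quantity at which demand meets supply."""
    q = (a - c) / (b + d)
    return a - b * q, q

def surpluses(a, b, c, d):
    """Consumer and producer surplus as triangle areas under linear curves."""
    p, q = equilibrium(a, b, c, d)
    return 0.5 * (a - p) * q, 0.5 * (p - c) * q

a, b, c, d, t = 10.0, 1.0, 2.0, 1.0, 1.0
cs0, ps0 = surpluses(a, b, c, d)      # without controls (S0)
cs1, ps1 = surpluses(a, b, c + t, d)  # with controls (S1)
loss = (cs0 + ps0) - (cs1 + ps1)      # overall loss in surplus from the measure
print(f"consumer surplus: {cs0} -> {cs1}; producer surplus: {ps0} -> {ps1}; loss: {loss}")
```

Raising the supply curve shrinks both surpluses at once, which is the numerical counterpart of the area comparison described in the text.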
Many chemical compounds used in food production will fall into this category and changes in the costs associated with their use will have only a minor impact upon the total costs of production. For example, if the adoption of an alternative product which was 'safer' led to a 20% increase in the costs of production, and these accounted for only 5% of end-product costs, then the resultant price increase would be only 1 % and the loss in surplus would be small. Where the increase in costs accounts for a large proportion of end-product costs, the losses associated with this area may be large, indicating potentially significant impacts on consumer surplus. Estimation of these effects requires information on the relationship between changes in price and changes in demand (otherwise referred to as the elasticity of demand), which is unlikely to be available for the types of substances used in food processing and manufacture. The above analysis is based on an important assumption, particularly with regard to the food industry. This is that the implementation of a food safety measure will not lead to any deterioration in the quality of the endproduct and that there are alternatives available to producers for use as
substitutes in the production and manufacturing processes. Such assumptions are unlikely to hold for certain substances such as preservatives, without which there may be significant losses in quality. Similarly, acceptable alternatives may not be available for some chemicals and methods used in food production and processing, and controls may, therefore, effectively change the nature of the end-product.

16.4 Valuing human health risks

16.4.1 The risk assessment process
The first step in the evaluation of impacts on human health and the environment is to undertake a risk assessment. The outputs of such assessments may vary, but frequently are in the form of risk quotients that compare expected doses and environmental concentrations with predicted no effect levels (in other words, the ratios for Predicted Environmental Concentration/Predicted No Observable Adverse Effect Level and Predicted Environmental Concentration/Predicted No Effect Concentration). (Strictly speaking, the use of such quotients is more akin to 'hazard screening' than 'risk assessment'.) These are the requirements of the Existing Substances Regulation 793/93, for example (Department of the Environment, 1995b). These assessments are likely to be based on historical data, which may be limited in amount and quality. In addition, these quotients do not provide a fully quantified risk assessment, since, inter alia, they lack information on the probability of the harmful effects occurring. The translation of a risk quotient to a probabilistic risk assessment is not necessarily a straightforward process, but is an important step in the evaluation of the risks, costs and benefits (Rodricks, 1992). The extent to which a probabilistic assessment can be developed will depend on the amount and quality of information which is available on the effects of concern, on the exposure-response or dose-response relationships for different receptor groups or environmental targets, and on the number or stock affected. Where it is possible to develop quantitative estimates of the probability of a specified harmful effect and of the change in probability and number affected resulting from a given control measure, it may also be possible to convert the risk information into a monetary value, with this representing the value of a statistical reduction in the level of harm to people or the environment.

16.4.2
The valuation techniques
A fully quantified and monetary risk-benefit analysis includes consideration of all the effects of a decision in order to estimate the overall social costs and benefits of a risk control measure. This type of approach, therefore, requires the valuation of human health and environmental impacts.
Because such effects are not generally traded in the marketplace, however, their values must be imputed in some other way. A considerable amount of research has been undertaken into the valuation of life and injury, and a wide range of methods is available (Soby and Ball, 1991; Jones-Lee, 1989). These methods attempt to value benefits by, for example, deriving an individual's willingness to pay for a risk reduction (or willingness to be compensated for loss or increased risk) as revealed in the marketplace, through individuals' actions, or as directly expressed through surveys. The techniques most relevant to food safety and the principles underlying their application are as follows.

Market price/effect on production approaches: for goods sold in the marketplace (such as food), prices provide an estimate of an individual's willingness to pay. Estimation of the changes in quantity demanded, given changes in associated levels of risk, provides a measure of the economic value of risk avoidance. Similarly, the avertive expenditure approach is based on determining the amount of money spent in avoiding an impact, e.g. through the purchase of organic food, to provide an indication of an individual's (or household's) willingness to pay to avoid a particular form of risk.

Human capital approach: the expected earnings of an individual, the characteristics of individuals at risk, and the probability of harm occurring are used to derive a value for life. Within this approach it is possible to add nominal sums to cover the pain, grief and suffering associated with death. Values for various injury states can be derived by considering length of hospitalization, severity of impairment, medical costs, and loss of earnings.

Contingent valuation and contingent ranking methods: these are social survey techniques which rely on the creation of a hypothetical market for a reduced risk of a specified health effect.
Individuals are surveyed to determine their willingness to pay for the specified change in the frequency, duration or nature of the health effect. Surveys are constructed so as to control, to the degree possible, against the introduction of biases associated with the amount of information provided, the manner and order in which questions are asked, the type of payment vehicles used (e.g. the means by which individuals would pay, such as through an increase in food prices), and the possibility that respondents do not consider other related issues when indicating their willingness to pay (the part-whole mental account or embedding problem). Perhaps the most obvious way of determining how much people would be willing to pay to avoid a food-related risk is by examining their behaviour in the marketplace. This is one of the ways in which researchers in the USA tried to estimate willingness to pay for reduced risks associated with the use of the chemical Alar (daminozide) on apples.
Alar regulates fruit ripening, colour and size, and concern arose in the mid-1980s over the amount of chemical residue permitted on fresh produce, following evidence of possible carcinogenic activity found by the Environmental Protection Agency (EPA). Media coverage alerted consumers to the risks of eating fruit containing Alar residues, and a study published in 1991 assessed how much more consumers were willing to pay to avoid Alar-treated apples (van Ravenswaay and Hoehn, 1991). The authors analysed changes in per capita consumer purchasing patterns as levels of information about risks from Alar increased, and this was used to estimate lost sales of Alar-treated apples. The discrepancy between the actual sales and projected sales prior to the Alar controversy indicated consumers' willingness to pay to remove Alar from apples, or to avoid the perceived carcinogenic risk from Alar-treated apples. By considering consumers' perceptions of risks of developing cancer from eating Alar-treated apples, the authors suggest that consumer willingness to pay to avoid such apples reflects their willingness to pay to avoid the associated perceived risk of death from cancer. The contingent valuation method is perhaps the technique which is receiving the most attention, owing to its flexibility in application, and within the field of food safety it has been used to address a range of different safety issues. For example, a series of studies in the USA has examined the trend towards buying more expensive organic or chemically free produce and individuals' willingness to pay for such produce. In a survey carried out by Ott and Maligaya (cited in Weaver et al, 1989), when questioned about the price of such produce 66% of respondents were willing to pay at least 5% more for pesticide-free tomatoes. In a follow-up to this work, respondents indicated that 'foods grown with pesticides' was the most important concern out of a choice of 10 popular food concerns.
However, fresh food such as fruit and vegetables which has been grown without chemical applications can be more prone to cosmetic defects and insect damage. Despite the finding that the majority of consumers were prepared to pay more for organic produce, the study also found that 62% of respondents were not prepared to accept cosmetic defects in pesticide residue-free produce and 88% were not prepared to accept insect damage (Ott, 1990). Similarly, a study published in 1990 found that 'organic buyers' were willing to pay a mean of 50% more for food grown without the use of pesticides, in contrast to 'conventional buyers', for whom the mean price increase for buying organic produce was 5%. This study also showed that the 'organic buyers' perceived much higher risks of morbidity and mortality from chemical residues on food than did 'conventional buyers' (Hammitt, 1990). Following on from this, a Food Marketing Institute survey concluded that the proportion of respondents concerned about pesticide residues on food produce was as high as 82%.
On a similar theme, another study examined consumers' willingness to pay to reduce the health risks associated with pesticide residues in food. Eom (1992) found that the consumers surveyed were willing to pay US$0.67 more per item for a 50% reduction in health risks, US$0.64 more per item for a 33% risk reduction and US$0.60 more for a 10% risk reduction per item. Other research has focused on the benefits provided by various food safety-related measures. For example, research undertaken by Malone (1990) found that about 70% of respondents were willing to pay more for irradiated food offering between a 50% and 90% reduction in food-borne diseases. Another US contingent valuation study focused on willingness to pay to eliminate Salmonella and Trichinella spiralis from individual meals. Results showed that respondents would be willing to pay $0.55 to eat a Salmonella-free meal and $0.81 for a Trichinella spiralis-free meal (Shin et al, 1992). In contrast to willingness to pay valuations, the human capital value is calculated from the direct and indirect costs to society of an individual's death, illness or injury, and this was a favoured method of evaluation until the 1970s. Direct costs refer to the treatment costs, while the indirect costs relate to the productivity lost through the morbidity or mortality. Research on the direct and indirect costs of illness (COI) has found that the discounted values of lifetime earnings and housekeeping peak for individuals aged between 20 and 35, decreasing significantly after the age of 60 (Hodgson, 1983). Morbidity or mortality is then valued using these figures as the loss to society of an individual's ability to earn, relating to the length of time for which they are ill, or at what age they die. This approach has been criticized in general for not providing a full social value of morbidity or mortality, as it undervalues those who do not work, are elderly or are young (Soby and Ball, 1991).
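The human capital calculation described above reduces to a present-value-of-earnings formula. A minimal sketch, with entirely hypothetical earnings, horizon and discount rate:

```python
def human_capital_value(annual_earnings, years_remaining, discount_rate):
    # Present value of expected future earnings - the 'indirect cost'
    # core of the cost-of-illness / human capital approach.
    return sum(annual_earnings / (1 + discount_rate) ** t
               for t in range(1, years_remaining + 1))

# Hypothetical: 20,000 a year over 30 remaining working years at 6%.
value = human_capital_value(20_000, 30, 0.06)
# Direct costs (treatment) would be added to this indirect-cost
# figure, plus any nominal sums for pain, grief and suffering.
```

Because the figure is driven entirely by discounted earnings, it is near zero for those outside the labour market, which is precisely the criticism noted above.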
It is also argued that, because it measures the 'value of livelihood' rather than the value of an individual's life, it is an insufficient measure of costs or benefits for use in a cost-benefit analysis (Ives, 1995). Additional criticisms arise when considering application of this approach to food safety. Food risks to the elderly or the young may well be valued more highly than risks to those who are in work between the ages of 20 and 35. Thus, there have been concerns that adoption of this method for determining the value of health detriments would not give adequate priority to food-borne risks to the elderly and young, despite research having indicated that many adults value children's lives more highly than their own.

16.4.3
Other valuation techniques
Although we consider the above methods to be those most relevant to food safety, other valuation methods have been used. For example, early
studies examined the differences in wages paid to those accepting high-risk and average-risk jobs. It was hypothesized that individuals reveal the value that they place on their life when they accept a job with increased risks of death or illness. This approach has been criticized on a number of grounds, however, because some of the assumptions required for the analysis have been found not to hold; e.g. that workers understand the increase in risk and that there is mobility in the job market. Similarly, compensation payments have been used to place values on loss of life and various health effects. In these cases, it is argued that payments made following an incident aim to return the injured party to the state in which they were prior to the incident, and that there are standard methods for the calculation of pecuniary and non-pecuniary costs. However, these methods ignore the wider costs to society associated with a health effect, as the compensatory payments are intended to ease the financial burden of dependants rather than to provide an economic equivalent of the lost life or illness. In addition, compensation values tend to be calculated in an ex post (or after the event) manner, taking into account details of the individual concerned. Transferring these values within a cost-benefit framework to aid decisions relating to ex ante (or precautionary) control options may lead to an incongruity within a cost-benefit analysis. This is borne out by variations in compensation payments for fatality reported by Fernandes-Russell et al (1988), which ranged between about £18 000 and £345 000 (UK£1995) per individual. With such large differences in individual compensation payments, obvious questions arise as to which value of life figure would be appropriate for valuing safety in any given cost-benefit analysis.
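The wage-risk reasoning criticized above rests on a single ratio: the pay premium accepted per unit of extra fatality risk, scaled up to one statistical life. A sketch with hypothetical numbers:

```python
def value_of_statistical_life(wage_premium, extra_annual_fatality_risk):
    # Hedonic wage estimate: premium demanded per unit of added risk.
    return wage_premium / extra_annual_fatality_risk

# Hypothetical: a 500-per-year premium accepted for an extra
# 1-in-10,000 annual risk of death implies a VSL of 5 million.
vsl = value_of_statistical_life(500.0, 1.0 / 10_000)
```

The criticisms in the text attack the inputs rather than the arithmetic: if workers misjudge the extra risk, or cannot move freely between jobs, the ratio no longer reveals their true valuation.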
16.5
Links to the environment
Although the focus is normally on human health when considering food safety measures, the use of chemicals in the preparation and production of food can also have environmental implications. Waste water from chemical factories and food processing plants containing chemical residues may alter the composition of receiving waters and turn them into a hostile environment for some aquatic species. Individual chemical or food processing plants will present different risks to the local flora and fauna or to the environment as a whole from emissions. The use of insecticides, herbicides and fertilizers, for example, can contaminate surface and groundwater and impact upon the natural ecological balance of associated ecosystems. Research has shown that only 10-40% of field-applied pesticides which aim to reduce specific species actually reach their target organism, with a significant amount falling as residues onto the soil (Wise, 1994). The use of agrochemicals can also result in 'spray drift' and
volatilization, which affect neighbouring fields, hedgerows, water sources and wildlife. Evidence from the Boxworth Project indicates that, following high levels of pesticide use, numbers of invertebrate and predatory species decline considerably (Jarvis, 1988). Research has also shown that the use of pesticides has led to the direct poisoning of many partridges and pheasants, and accumulation in the food chain of certain pesticides has been linked to the decline in numbers of peregrine falcons and sparrowhawks (Conway and Pretty, 1991). As discussed earlier, in order that environmental effects such as those described above can also be factored into the analysis, they need to be valued in money terms. In order to value the benefits to the environment of reduced chemical usage, for example, it is necessary to determine the elements of the environment at risk. Once this has been done, it may then be possible to translate the value of the change in the risk of damage into monetary terms. The techniques available for the valuation of environmental effects are related to those used for valuation of human health effects, and there are general publications that deal with the quantification of environmental and ecological risks. (Suter (1993) provides a good reference to quantitative techniques.) They include (Department of the Environment, 1991) the following.

Market price approaches: the dose-response approach determines the welfare cost of a given level of pollution by estimating the market value of the changes in output resulting from changes in pollution; the replacement costs approach (and related shadow project approach) measures damage in terms of the costs of restoring a damaged asset.
Household production function techniques: again, the avertive expenditure approach is relevant and derives a valuation for environmental damages in terms of the outlay undertaken to avoid an impact; under the travel cost method, the benefits arising from the recreational aspects of a site are estimated by modelling the demand for a site based on the expenditure incurred by visitors in travelling to the site.

Hedonic price methods: with these methods an implicit price for an environmental attribute is estimated from consideration of the real markets in which the attribute is effectively traded (e.g. air quality or amenity and property values).

Contingent valuation methods: social survey techniques are used to derive values for environmental change from determining people's willingness to pay and/or accept compensation.

In discussing the applicability of these techniques within risk-benefit analysis, it is useful to distinguish between those techniques which can be used to derive 'use' values and those applicable to estimation of 'non-use'
values. All of the above methods can be used to develop valuations which relate to current actual uses (or 'consumption') of the environment, although they are applicable to different types of uses. The dose-response techniques could be applied to the valuation of effects on crop production, fisheries or forestry from the existence of damaging pollutant concentrations. Where an ecosystem has suffered damage, the costs of replacing or re-creating that ecosystem can be used to develop a value for the original resource. Similarly, the amount of money spent by individuals on, for example, water purifiers to reduce concentrations of pesticides in drinking water could be estimated. These three techniques are the most straightforward to apply in practice, with data generally being more readily available (Department of the Environment, 1991; Pearce et al, 1989). It is less likely, however, that hedonic pricing methods or the travel cost method would be applicable to the types of issues associated with the usage of chemicals as part of food production and processing. In comparison to the other methods, only the contingent valuation method can be used to derive estimates for non-use values. These are the values which relate to an individual's desire to conserve a resource for use by future generations or out of a desire to preserve a resource and thus to ensure its existence even though the person never intends to personally make use of it. In undertaking research concerning more general agricultural issues, we have come across one application, in particular, which illustrates the type of overlap between environmental and human health concerns related to food safety work and used the contingent valuation technique to determine willingness to pay to reduce both types of risk. 
In this study, 8000 farmers were surveyed to ascertain their willingness to pay to avoid damage to human health (acute and chronic), to avoid damage to water resources, and to avoid harm to non-target organisms (aquatic organisms, birds, mammals and beneficial insects) associated with the use of pesticides (Higley and Wintersteen, 1992). The resulting values refer to the willingness to pay to avoid risks per pesticide application and per area of use, with pesticides categorized by potential for harm on the basis of toxicological and other data. The willingness to pay values, therefore, provide an indication of the 'economic injury levels' associated with different pesticides. The bid prices found in the study ranged from $2.25 to $11.52 per pesticide application as the total willingness to pay to avoid all environmental risks. These values could be used in a number of different ways. For example, they could be used at a farm level for selecting the best pesticide strategy, given different crop mixes, or they could be used with a risk-benefit analysis to examine the implications of changes in pesticide usage, including changes in chemical and in application rates.
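One of the farm-level uses suggested above — selecting a pesticide strategy once willingness-to-pay values are treated as environmental costs — can be sketched as follows. The compound names and all figures are hypothetical, chosen only so that the environmental values fall within the $2.25 to $11.52 range reported for the bid prices.

```python
# (name, direct cost per application, surveyed WTP to avoid its
# environmental risks per application) - hypothetical values.
options = [
    ("compound_a", 14.00, 11.00),
    ("compound_b", 18.00, 3.50),
    ("compound_c", 16.00, 6.25),
]

def total_cost(option):
    name, application_cost, environmental_wtp = option
    # Add the WTP value to the direct cost, treating it as the
    # monetized environmental risk of the application.
    return application_cost + environmental_wtp

best = min(options, key=total_cost)   # compound_b: 18.00 + 3.50 = 21.50
```

Here the cheapest compound to buy (compound_a) is not chosen, because its monetized environmental cost outweighs the saving.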
16.6
Summary and conclusions
The regulation of food safety requires that a number of different trade-offs are taken into account. These include trade-offs between the price of food, food quality and appearance, human health and safety, and the environment. It is argued here that the framework which is most appropriate for assisting in the evaluation of these trade-offs is based on the use of social cost-benefit analysis. A simplified version of such a framework for risk-benefit analysis has been presented, with the recognition that it may not always be possible to fully quantify risks, costs and benefits. However, where quantification is possible it is argued that the analysis should go a step further and these risks, costs and benefits should be valued in monetary terms. Valuing human health and life in monetary terms is a contentious issue. Any amount of compensation for the loss of a near relative would be insufficient, and it is clearly impossible to compensate someone for the loss of their own life. However, valuation in analyses such as these focuses on marginal changes in the risk of morbidity or mortality. Such valuations also apply to a defined group of people, and thus the valuation of such reductions is in statistical terms, and does not deal with known individuals. Furthermore, while some people object to the principle of monetary valuation, it is important to realize that all economic decisions place implicit monetary values on human safety and the environment. For example, a decision to ban a particular food preservative reflects an implicit judgement that the benefits of doing so (in terms of reduced health effects) exceed the costs to producers and consumers associated with the use of alternatives or no preservative (which could also have negative health consequences).
References

Ball, D.J. and Goats, G.C. (1997) Towards a coherent industrial safety and environmental risk management philosophy in the United Kingdom. International Journal of Environment and Pollution, in press.
Conway, G.R. and Pretty, J.N. (1991) Unwelcome Harvest: Agriculture and Pollution. Earthscan, London.
Department of the Environment (1991) Policy Appraisal and the Environment. HMSO, London.
Department of the Environment (1995a) A Guide to Risk Assessment and Risk Management for Environmental Protection. HMSO, London.
Department of the Environment (1995b) Risk Benefit Analysis of Existing Substances. UK Government/Industry Working Group, London.
Eom, Y.S. (1992) Consumer response to information about pesticide residues. Food Review, 15, 6-10.
Fernandes-Russell, D., Bentham, C.G., Haynes, R.M. et al (1988) The Economic Valuation of Statistical Life. Research Report No. 5. Centre for Environmental & Risk Management (CERM), School of Environmental Sciences, University of East Anglia, Norwich.
Goats, G.C. and Ball, D.J. (1994) The Management of Risks Posed by Food Chemical Contaminants - Scope for Rationalisation? Research Report No. 24. Centre for Environmental & Risk Management (CERM), School of Environmental Sciences, University of East Anglia, Norwich.
Hammitt, J.K. (1990) Risk perceptions and food choices: an exploratory analysis of organic versus conventional produce buyers. Risk Analysis, 10(3), 367-374.
Health and Safety Executive (1992) The Tolerability of Risk from Nuclear Power Stations. HMSO, London.
Higley, L. and Wintersteen, W. (1992) A novel approach to environmental risk assessment of pesticides as a basis for incorporating environmental costs into economic injury levels. American Entomology, 1, 34-39.
HM Treasury (1991) Economic Appraisal in Central Government. HMSO, London.
Hodgson, T.A. (1983) The state of the art of cost-of-illness estimates. Advances in Health Economics and Health Services Research, 4, 129-164.
Ives, D.P. (1995) Public Perception of Biotechnology and Novel Foods: A Review of Evidence and Implications for Risk Communication. Research Report No. 26. Centre for Environmental & Risk Management (CERM), School of Environmental Sciences, University of East Anglia, Norwich.
Jarvis, R.H. (1988) The Boxworth Project. In: Harding, D.J. (ed.) Britain Since 'Silent Spring' - An Update on the Ecological Effects of Agricultural Pesticides in the UK, Proceedings of a Symposium held in Cambridge, 18 March 1988. Institute of Biology, London.
Jones-Lee (1989) The Economics of Safety and Physical Risk. Blackwells, Oxford.
Malone, J.W. Jr (1990) Consumer willingness to purchase and to pay more for potential benefits of irradiated fresh food products. Agribusiness, 6(2), 163-178.
Ott, S.L. (1990) Supermarket shoppers' pesticide concerns and willingness to purchase certified pesticide-free fresh produce. Agribusiness, 6(6), 593-602.
Pearce, D., Markandya, A. and Barbier, E. (1989) Blueprint for a Green Economy. Earthscan, London.
Risk & Policy Analysts (1995) Cost Benefit Assessment (Agrochemical Reduction), for the Department of the Environment, a Draft Final Report, September (unpublished).
Risk & Policy Analysts Ltd and Acer Environmental (1992) Risk-Benefit Analysis of Hazardous Chemicals: Final Report. Department of the Environment contract number 7/8/243, November. HMSO.
Rodricks, J.V. (1992) Human Health Risk Assessment. Cambridge University Press, Cambridge.
Shin, S., Kliebenstein, J., Hayes, D.J. and Shogren, J.F. (1992) Consumer willingness to pay for safer food products. Journal of Food Safety, 13(1), 51-59.
Soby, B.A. and Ball, D.J. (1991) Consumer Safety and the Valuation of Life and Injury. Research Report No. 9. Centre for Environmental and Risk Management (CERM), School of Environmental Sciences, University of East Anglia, Norwich.
Suter, G.W. (1993) Ecological Risk Assessment. Lewis Publishers, Boca Raton.
van Ravenswaay, E.O. and Hoehn, J.P. (1991) The impact of health risk information on food demand: a case study of Alar and apples. In: Caswell, J.A. (ed.) Economics of Food Safety. Elsevier, London, New York, pp. 155-174.
Weaver, R.D., Evans, D.J. and Luloff, A.E. (1992) Pesticide use in tomato production: consumer concerns and willingness to pay. Agribusiness, 8(2), 131-142.
Wise, C. (1994) Reducing pesticide contamination of water: a farming view. Pesticides News, No. 26, 14-16.
17 Risk evaluation, risk reduction and risk control D.R. TENNANT
17.1
Introduction

Decisions about the management of food chemical hazards are dependent on many factors. Part 2 of this book described how complex scientific and technical factors are brought together in the risk assessment process. The foregoing chapters of Part 3 have shown how additional socio-economic factors such as consumer perceptions and the relative costs associated with different options also have their roles. Risk evaluation is the process whereby all these disparate and sometimes conflicting factors are brought together in an attempt to describe the total problem and to identify an optimal solution. Risk reduction describes the search for strategies which could reduce the level of risk or otherwise change the values of factors in the risk evaluation process, causing the balance to shift and a different outcome to become optimal. Risk control is the introduction of measures which will monitor or limit the levels of risk or other factors in the risk analysis process. Whilst these three activities are described as distinct processes in this chapter, the reader will soon recognize that they are interdependent and together provide the mechanism for assessment, feedback and control in the risk management process.
17.2
Risk evaluation
Decision-making about food chemical risks often requires the balancing of conflicting information, opinions and possible consequences. For effective decision-making it is essential that all the relevant factors should be identified and the relative importance of each factor made explicit in the formulation of the policy decision. This is difficult using conventional decision-analysis tools, since some factors are quantitative, some semiquantitative and some purely qualitative. Furthermore, some factors may be relatively easy to specify, whereas others are vague or ill-defined. Cost-benefit analysis is a technique which has been applied to regulatory decision-making because it removes subjectivity by assigning a monetary value to each variable. Through comparison of the overall monetary value of each option, the 'best option' can be identified. In practice, assigning
monetary values is extremely difficult, and may be impossible where some qualitative factors are concerned. If the analysis is 'forced' so that only those factors which are reliably quantifiable are included, then the analysis may reach a false conclusion.

17.2.1
Stakeholder analysis

Stakeholders are all those individuals, parties and organizations who have an interest in the outcome of a decision. In the food safety arena they can be broken down into three main groups:

• Supply industry (retailers, processors, producers, trade associations, R&D scientists, financial investors, etc.)
• Consumers and general public (consumers, taxpayers, individuals with special dietary needs, pressure groups, political parties, media organizations, etc.)
• Government (regional, national and international food authorities, advisory bodies, QUANGOs, etc.)
The concerns of each stakeholder group can be summarized under various headings such as stability (the desire to be able to predict future markets, food needs, etc.), safety (confidence that food will present no unacceptable health risk), economics (the ability of manufacturers to make a profit or consumers to afford the food), knowledge (the distribution of advice and information about hazards and risks), and markets (industry's need to predict demand and consumers' desire for choice). Each stakeholder group will have a different perspective on each factor and apply a different weighting to its relative importance.

17.2.2
Decision analysis

In order to bring together all the various factors and perspectives, a decision framework is required. This can be based on an objective hierarchy which allows the key objectives to be identified and then broken down into their objective components. In food safety legislation, the overriding objective is 'to ensure an economic supply of safe, nutritious food'. This can be achieved by minimizing costs and maximizing benefits. A cost is defined as a disadvantage of implementing a control measure. This would include, for example, the compliance cost to industry, the enforcement cost to regulatory authorities and the increased purchase price to consumers. A benefit is defined as an advantage of implementing a control measure, such as improved food safety, improved commercial reputation or stable markets. Costs and benefits can accrue to society (directly and indirectly), to consumers and to industry. Costs are generally easier to determine. Benefits can be more difficult to define.
Benefits to society could include minimizing the risks from failure of production (by avoiding major 'food scares', for example) or the protection of the resources supporting the food supply. They might also include the fulfilment of government commitments, such as meeting international obligations to ensure the freedom of trade. Benefits to consumers include minimizing the risks of ill-health (both real and perceived), protecting against fraud and deception and maintaining choice and availability. Benefits to industry include facilitating business by maximizing cost savings, promoting a fair market and providing a stable market environment. It is also important that measures are not unnecessarily prescriptive or inflexible.

17.2.3
Ethical and moral dimensions

Ethical and moral factors are coming more to the fore in risk evaluation. Some societies are concerned about animal welfare, and this can have repercussions in areas such as the use of in vitro toxicology for evaluating the safety of food chemicals and the administration of pharmaceuticals to farm animals, particularly when these are seen as being intended to boost production rather than for therapeutic purposes. Consumers' reactions to bioengineered bovine somatotrophin (BST), which is a natural protein capable of increasing milk yields, may have had more to do with their feelings for the welfare of dairy cattle than worries about their own food safety. Consumers also sometimes express concern about the effects of chemicals, such as pesticide residues, on children and future generations, and this too has a moral dimension. Parents do not feel entitled to take such risks on their children's and future generations' behalf. Certain specialized diets, such as vegetarianism, may reflect an ethical motivation, and cultural and religious diets often relate to moral codes. Here the choice of foods may affect intakes of chemicals from the diet.
Vegetarians might have lower levels of intake of substances associated with animal products, such as veterinary drug residues, but could have higher intakes of substances associated with plants, such as pesticide residues. Ethical issues are particularly prominent in the area of biotechnology. Concerns about the environmental consequences of introducing genetically modified plants and animals can spill over and affect food safety issues. Consumers' concerns about the 'naturalness' of food may also have an ethical dimension, particularly in this context. Recent controversy in Europe about the emergence of bovine spongiform encephalopathy (BSE), which might be related to the human equivalent, Creutzfeldt-Jakob disease, may have a strong ethical dimension. Consumers express surprise and revulsion about the use of animal protein in ruminant feed and question the morality of 'recycling' waste animal products in this manner.
Risk analysts need to maintain an awareness of moral and ethical issues in order to ensure that these are properly addressed in risk evaluation. Failure to take such factors fully into account may have dramatic consequences if these are prominent in the public's view.

17.2.4 Quantitative risk evaluation
This analysis provides a useful means of allowing all the factors taken into account in a decision to be made explicit. This in itself should improve the transparency, consistency and overall quality of decision-making. The analysis can be taken a stage further by quantifying all the variables. This differs from conventional cost-benefit analysis, in that subjective estimates of value rather than monetary values are used (of course, where monetary values are available, they can be incorporated into the analysis). In this analysis the relative values of affected stakeholders on a common numeric scale are used. This can provide a quantitative output, but caution should be applied in its interpretation. Values are not necessarily generated on similar scales, and values in which the analyst has greater confidence may carry undue weight over those which are less certain. Such tools should therefore only be used to guide decision-making, not to determine it. The use of such decision-analysis tools can help to eradicate some of the popular misconceptions surrounding risk or cost-benefit analyses in risk management. One of these is that the costs of control or regulation fall solely on industry, whereas benefits accrue only to consumers. Industry achieves benefits such as market stability, removal of the danger of being undercut by less scrupulous competitors and the ability to compete in overseas markets. Consumers face costs, since any costs borne by producers must ultimately be passed on to consumers in the form of price rises or loss of choice. It is salutary to note that the search for unnecessary or obsolete food regulations in the UK has identified very few where industry or consumers would benefit from their removal. 17.2.5 Managing uncertainty As in many of the aspects of risk analysis that have been considered so far, risk evaluation is bedevilled by uncertainties. 
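As a concrete illustration of the quantified decision analysis described above, and of how sensitive its output can be to uncertain weightings, consider the following sketch. All option names, scores and weights are entirely hypothetical and chosen for illustration only; the point is the mechanism, not the numbers.

```python
# Sketch of quantitative risk evaluation: each risk management option is
# scored against factors on a common 0-10 scale using subjective (not
# monetary) values. All names, scores and weights below are invented.

OPTIONS = {
    "voluntary agreement": {"safety": 6, "industry cost": 8, "consumer choice": 9},
    "statutory limit": {"safety": 9, "industry cost": 4, "consumer choice": 6},
    "do nothing": {"safety": 3, "industry cost": 10, "consumer choice": 10},
}

def weighted_score(scores, weights):
    """Aggregate subjective scores with a simple weighted sum."""
    return sum(weights[factor] * value for factor, value in scores.items())

weights = {"safety": 0.5, "industry cost": 0.2, "consumer choice": 0.3}
for option, scores in OPTIONS.items():
    print(option, round(weighted_score(scores, weights), 2))

# A crude sensitivity check: vary the uncertain 'safety' weight between
# plausible bounds and see whether the preferred option changes.
for w in (0.4, 0.5, 0.6):
    trial = {"safety": w, "industry cost": (1 - w) * 0.4,
             "consumer choice": (1 - w) * 0.6}
    best = max(OPTIONS, key=lambda o: weighted_score(OPTIONS[o], trial))
    print(f"safety weight {w}: preferred option = {best}")
```

Note how the preferred option can flip as a single uncertain weight moves across its envelope; this is precisely why such tools should guide decision-making rather than determine it.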
Uncertainties are generated at every stage of the risk assessment process, and further uncertainties surround any quantitative or qualitative socio-economic values generated in the risk management phase. This means that the process of risk evaluation should be as much about balancing uncertainties in the data as it is about assessing the data produced. Where uncertainties are identified, it is necessary to estimate the size of the error which could be introduced and to apply appropriate safety factors to guard against such errors. However, even if a safety factor is chosen correctly, there remains a small possibility that in a few cases it provides no conservatism. This is why care must be taken to avoid eroding safety factors. In many cases, each stage in the risk analysis process leading up to risk evaluation can introduce uncertainties which are allowed for by adding safety factors. This is particularly the case for risk assessment. As each safety factor is added, so the degree of conservatism increases. Since each safety factor is associated with a probability, the cumulative conservatism introduced by such a series of safety factors can result in a risk estimate which exceeds the bounds of reasonable possibility.

17.2.6 Sensitivity analysis

If reasonable estimates of the degree of uncertainty associated with each safety factor can be made, then it is possible to use sensitivity analysis techniques to investigate the effect of altering each safety factor on the final risk estimate. For example, if the 'envelope' of uncertainty can be defined (i.e. the limits within which the value must fall are known), then the effects of changing the value can be investigated. If our best estimate of the value of a variable is x and we can say with some confidence that (a) the variable can never be greater than x + y, and (b) the variable will always be greater than x - z, then values between 'x + y' and 'x - z' can be used in the risk evaluation to discover the effect on the final outcome. Unfortunately, reliable information about the degree of uncertainty in data submitted to the risk evaluation is rare. Furthermore, it is not always clear whether the data provided are 'best estimates', are 'worst case' or include some other undefined degree of conservatism. Good guesswork or experienced judgement are often, therefore, important features of risk evaluation. This is why effective risk evaluation often depends on the combined experience of panels of experts rather than technical assessment tools.

17.3 Risk reduction
Risk reduction includes seven main steps: 1. identifying options for reducing risks; 2. identifying the most appropriate strategy for implementing those options; 3. assessing the risks and benefits associated with each strategy; 4. consulting others affected by the strategy; 5. drawing up monitoring plans; 6. revising the strategy, if necessary; 7. introducing the strategy.
Table 17.1 Examples of food-related activities and ways of reducing risks

Activity: Primary production
Hazard: Glycoalkaloids in potatoes
Potential effect: ?
Options for risk reduction: Choose different varieties; breed new varieties; improve agricultural practice; inform consumers
Means of implementation: Plant breeding programmes; guidelines; legislation

Activity: Manufacturing
Hazard: Preservatives
Potential effect: Toxicity/allergic reactions
Options for risk reduction: Identify and limit to optimal concentrations; restrict uses; introduce alternative techniques; labelling
Means of implementation: Legislation; labelling; voluntary agreement

Activity: Processing
Hazard: N-Nitroso compounds in alcoholic drinks
Potential effect: Carcinogen (?)
Options for risk reduction: Change manufacturing practice; limit concentrations; inform consumers
Means of implementation: Develop new production methods; legislation; voluntary action

Activity: Storage
Hazard: Aflatoxins
Potential effect: Carcinogen
Options for risk reduction: Limit imports; limit products; detoxify
Means of implementation: Voluntary action; legislation

Activity: Preparation
Hazard: Cooked food mutagens
Potential effect: Carcinogens (?)
Options for risk reduction: Alter cooking appliances and practices; consumer advice; consumer information
Means of implementation: Voluntary action; advise consumers; inform consumers
Step 3 requires referral back to the risk evaluation process, sometimes via a revised risk characterization. Step 4 links with risk communication, whilst steps 5, 6 and 7 move towards risk control. There are many ways of reducing risks, not just technical solutions but improvements in training, information and management. Very often the restriction or complete removal of a substance or process will not be necessary. The risk reduction procedure allows the best option to be identified. Table 17.1 gives some examples of food production and preparation activities, their associated hazards and some approaches to risk reduction. Of course, there will always be another option: to do nothing. This option should always be considered alongside all others - it may turn out to be the optimal course of action. Doing nothing ranks as a decision in itself, and if this course is followed without full evaluation of its consequences it may represent a very high-risk option indeed! The risk reduction process is particularly applicable to existing substances which might contaminate food at any point between primary production and consumption. An excellent source of information on risk reduction for existing substances is the guidance document produced by the UK Department of the Environment (Department of the Environment, 1995), on which much of this section is based. The potential for contamination should be identified at each step in the food chain, and risk reduction opportunities considered. If control measures are to be introduced, it is usually best to apply them at the point in the food chain where the contamination actually occurs. The risk assessment process will draw an overall conclusion about the risks posed by a substance and set objectives for risk reduction. The risk reduction strategy should provide a practical means of achieving those objectives whilst minimizing any adverse consequences of measures being taken. 
There is a wide range of options which can be considered in the risk reduction process, ranging from no action through to prohibition of use. These include better information or better communication, issuing of advice and guidance, voluntary agreements and improvements to good agricultural or good manufacturing practice, and controls on use.
17.3.1 Options for food additive risk reduction
When intakes of food additives appear to exceed acceptable intakes and steps have been taken in the risk characterization (Chapter 2) to check that the risk assessment is as accurate as possible, then risk reduction should be aimed at controlling exposure. For a food additive this may mean examining the range of uses and the technological need for each use. The examination of need for each use must consider three aspects:
whether the use is needed at all; whether satisfactory alternatives are available; and, if the use is shown to be necessary, what is the minimum concentration which will fulfil the technological need? The last point is vital, for regardless of the purpose of the additive, there will always be a concentration below which it ceases to perform its function, and this will vary from food to food. For example, the effectiveness of a preservative may be affected by the acidity of the food, and the need for a flavouring may depend upon whether it is to be used alone or in combination with other flavours. Limits on use must reflect these minimum levels, as otherwise the additive may be rendered useless. There are also risk-benefit considerations, such as when reducing the usage of a preservative might result in an increased risk of food poisoning. There are often real opportunities for reduction: manufacturers may be tempted to add more than is necessary to 'be on the safe side'. They may also ask for higher limits so that they can change formulations in the future. Whilst these considerations must be taken into account in the risk reduction process, manufacturers will be aware that the loss of this flexibility would be preferable to a complete ban. The aim of this process should be to define as many different uses as necessary to ensure that the maximum permitted level in each food group is as low as practicable. Paring back the maximum permitted levels of additives in foods to minimum technological levels is likely to result in reductions in apparent intake levels. However, these reductions may not be sufficient to lower intakes below recommended acceptable levels. In these cases, each use must be examined and prioritized. Cost-benefit analysis can be used to prioritize uses and identify those which will cause least loss if removed. The alternative could be the loss of all uses if the substance is prohibited.
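The prioritization idea can be sketched as a simple greedy procedure: rank uses by the technological benefit they deliver per unit of estimated intake, then pare away the least valuable uses until total intake falls below the acceptable level. All food uses, intake contributions and benefit scores below are hypothetical.

```python
# Hypothetical sketch of prioritizing additive uses when total estimated
# intake exceeds an acceptable level: pare away the uses that deliver the
# least technological benefit per unit of intake. All figures invented.

ACCEPTABLE_INTAKE = 10.0  # mg/day, illustrative acceptable intake

uses = [  # (food use, intake contribution in mg/day, benefit score 0-10)
    ("soft drinks", 6.0, 9.0),
    ("confectionery", 4.0, 3.0),
    ("desserts", 3.0, 5.0),
]

# Rank uses by benefit delivered per mg of intake, most valuable first.
retained = sorted(uses, key=lambda u: u[2] / u[1], reverse=True)
removed = []
while sum(u[1] for u in retained) > ACCEPTABLE_INTAKE:
    removed.append(retained.pop())  # drop the least valuable use first

print("retain:", [u[0] for u in retained])   # retain: ['desserts', 'soft drinks']
print("restrict:", [u[0] for u in removed])  # restrict: ['confectionery']
```

In practice each "benefit score" would itself be the output of a cost-benefit analysis, but the mechanism of removing the least cost-effective uses before resorting to outright prohibition is the same.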
The aim of this step is to reduce risks to acceptable levels whilst minimizing the socio-economic impact. In some cases, risk reduction may result in no change to permitted uses. A good example of this is artificial sweetener use by diabetics. Here the benefits of artificial sweeteners to diabetics may heavily outweigh the potential risks associated with intakes above recommended levels. Nevertheless, there are opportunities for risk reduction in the form of advice to consumers. Diabetics can be advised to vary their choice of sweetener so that their intake of any individual sweetener does not exceed recommended levels. Similarly, children's intakes of artificial sweeteners might temporarily exceed acceptable daily intakes (ADIs) from the consumption of inadequately diluted fruit drink concentrates. Here the solution might be to provide more information to parents and carers about the correct dilution procedure rather than introduce stringent controls.
17.3.2 Options for food contaminant risk reduction
Contaminants such as pesticides, veterinary medicines and food contact materials can be approached in a way similar to food additives, inasmuch as they can be controlled by carefully evaluating and controlling their use. Where the use of a particular chemical presents unacceptable risks to consumers, then it may be necessary to identify and evaluate potential substitutes. For food contaminants which are not intentionally added to food, there are fewer opportunities for risk reduction. As for additives, the accuracy of the risk assessment should have been considered in the risk characterization step. There may be some benefit in considering all the foods in which the contaminant occurs in an attempt to identify those which contribute most to high intake levels. However, the foods which contribute most to intakes will not necessarily be where the best opportunities for risk reduction lie. In some cases it may be appropriate to concentrate on one particular food, whereas in other cases it might be more appropriate to seek across-the-board reductions. Only through risk evaluation can the optimum course be identified. The point at which contamination is controlled should normally be as close as possible to the source. For example, if contamination by heavy metals such as lead, arsenic and cadmium is occurring on farms in a certain region because of local geochemical deposits, then it will probably be far more effective to control levels in foods coming off those farms than to introduce controls at the wholesale or retail levels. Similarly, there is little point in introducing 'farm gate' controls on contaminants which are likely to arise during processing - such as leaching of metals from food manufacturing equipment. The best options for reducing risks associated with food contaminants might lie in other economic sectors. For example, industrial emissions or chemicals disposal may be significant sources of contaminants which occur in food. 
Options for controlling such sources, which should include substitution, should be assessed even though their implementation may prove more difficult. The document 'Agenda 21' (Quarrie, 1992) summarizes the recommendations of the United Nations Conference on Environment and Development held at Rio de Janeiro in 1992. The section on environmentally sound management of toxic chemicals recommended policies to 'minimize exposure to toxic chemicals by replacing them with less toxic substitutes and ultimately phasing out the chemicals which pose unreasonable and otherwise unmanageable risk to human health'. In order for this to be achievable, there will need to be greater dialogue across the traditional regulatory boundaries between food safety, environmental protection and chemical licensing. An unpleasant and, unfortunately, growing source of food contamination is malicious tampering. There have been many cases in recent years of food manufacturers' and retailers' reputations being severely damaged by individuals or organizations tampering with food in order to achieve personal or political gains. Many more cases probably go unreported in order to reduce the amount of damage caused. Consumers are quite rightly concerned about the risks introduced by such tampering, and it is clearly important that the risk management process must take these factors into account. Tamper-proof packaging is probably impossible to achieve, but tamper-evident packaging can give clear signs of tampering at all stages in the distribution chain after manufacture. Disturbingly, consumers do not always seem to be able to recognize tamper-evident packaging that has been interfered with, particularly if attempts have been made at repair (Moore, 1994). There may therefore be a risk communication challenge here. Risk reduction strategies must be carried forward into action, which is when risk control comes into play.

17.4 Risk control

Responsibility for risk control lies with all participants in the food production process, from farmers through to consumers. Food safety is often seen as the responsibility of governments, yet the consequences of loss of control usually fall on the organization or individual responsible. Central government can establish guidelines and regulations, and local governments check to see that these are followed, but it is often food companies who suffer losses of sales and reputation when food contamination incidents occur.

17.4.1 Risks and regulation

Governments have responsibility for making regulations to protect consumers against harm arising from chemicals in food. However, as was seen in section 17.2, regulation is usually regarded as a last resort rather than the first option.
This is because regulations tend to be inflexible, difficult to draft and implement, hard to enforce, and appear to shift the responsibility for food safety away from the food producer - as long as the letter of the law is followed, then producers may be immune from prosecution for hazardous practices. General food safety regulations, supported by guidelines, which place the burden of responsibility firmly on the shoulders of the culprit, are preferable. However, there are many examples of regulations which benefit producers and consumers alike. Sensible food labelling requirements, for example, help the producer to describe the product accurately and the consumer to avoid specific ingredients. Regulations and formal agreements governing food in international trade are also necessary, since the perpetrators of any hazardous actions cannot be pursued easily across borders. The difficulties of establishing international trading standards will be discussed in Chapter 19.
17.4.2 Less prescriptive control methods
Goal-oriented legislation such as the as low as reasonably practicable (ALARP) approach described in Chapter 15 is sufficiently flexible to accommodate many different circumstances whilst applying pressure to reduce levels of chemicals in food. If the ALARP approach is to have any utility, it must be practicable. One of the drawbacks of the method as applied in occupational health and safety is that it depends on a high degree of quantification. This is a practical possibility in simple cases such as the provision of machine guards, where it is feasible to estimate the number of accidents preventable, the cost of each accident and the costs associated with varying levels of protection. For chemical contaminants it is difficult to identify all the variables, let alone place values on them. Nevertheless, the approach could provide a useful framework for standard setting or for use in regulations. The key element is the determination of the limits to tolerability and acceptability. Health and safety regulators use probabilities of death or injury to define these limits. However, there are seldom sufficient toxicological or epidemiological data available to make such estimates for food chemical hazards. Nevertheless, there are some risk-related values which could be used to define these limits for chemical contaminants. They include: the analytical limit of determination; the theoretical maximum concentration; the maximum surveillance value; the minimum surveillance value; the mean surveillance value; the threshold for acute toxic effects; and the threshold for chronic toxic effects. Having defined the limits to tolerability and acceptability, it will then be necessary to identify all of the relevant factors which must be taken into account when balancing risks against benefits in the ALARP zone. The system of compliance cost assessments might provide valuable information. However, it is unlikely that all of the factors which must be taken into account can be quantified.
In particular, it is likely to be very difficult to quantify the health effects of very low intakes above the tolerable intake, because the anticipated effects would be so small. Nevertheless, the population potentially affected might sometimes be significant, and so this factor too must be taken into account. The ALARP approach presents two options. First, it offers a tested risk-benefit analysis framework which can be employed when determining maximum tolerable levels for contaminants in food and which could provide a comprehensive, transparent and reasoned mechanism for risk management. Second, the ALARP approach could be introduced into regulations in place of absolute limits on contaminants in food. This more flexible approach provides a mechanism for ensuring that levels of contaminants are as low as practicable without penalizing those producers who, despite their best efforts, are unable to prevent contamination. The alternative to the ALARP approach is either to set high limits, which would not penalize producers in high-background regions but would have little impact on contaminant intakes, or to set limits low enough to reduce intakes whilst placing a heavy penalty on those producers who cannot avoid contamination. The ALARP approach allows the cost of controls to be kept in proportion to the level of risk whilst putting pressure on the whole food industry to keep levels of contamination low. The net result could be a system which has real impact on intakes whilst affecting mostly those producers who can do something practicable to control contamination. However, there are likely to be many technical, legal and procedural difficulties which will need to be resolved before ALARP-based regulations on food chemical contaminants can become a practical reality.
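As a rough illustration, an ALARP-style screen can be expressed as a three-zone classification of a measured concentration against a lower (broad acceptability) limit and an upper (tolerability) limit. The limits below are purely illustrative; in practice they might be derived from risk-related values such as surveillance data or toxic thresholds.

```python
# Sketch of an ALARP-style screen for a contaminant concentration.
# Both limits are hypothetical, invented for illustration.

BROADLY_ACCEPTABLE = 0.02  # mg/kg, illustrative lower (acceptability) limit
TOLERABLE_LIMIT = 0.20     # mg/kg, illustrative upper (tolerability) limit

def alarp_zone(concentration):
    """Classify a measured concentration against the two limits."""
    if concentration <= BROADLY_ACCEPTABLE:
        return "broadly acceptable: no further action required"
    if concentration >= TOLERABLE_LIMIT:
        return "intolerable: reduction must be enforced"
    return "ALARP zone: reduce as low as reasonably practicable"

for c in (0.01, 0.05, 0.50):
    print(c, "->", alarp_zone(c))
```

The point of the middle zone is that it triggers a risk-benefit judgement rather than an automatic penalty, which is what distinguishes ALARP from an absolute limit.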
17.4.3 Voluntary agreements
An advantage of the voluntary approach is that industries are free to adopt the most cost-effective means of achieving given targets. Agreements are easier to achieve where there are fewer companies involved or where there are well-organized trade associations. Enforcement can be achieved via the general provisions of food safety legislation, and the need for detailed regulation can be reduced. It may be possible to achieve greater benefits more quickly with voluntary agreements than through prescriptive regulations. Regulations can take years to negotiate and bring into force and may be rapidly outdated by technological developments.
17.4.4 Codes of practice
One very useful tool for risk control is the agreement of codes of practice. These may take several forms: statutory, where failure to comply is an offence (unless it can be shown that other means are equally effective); advisory, where general legislation exists and companies need not follow the code, but if they are prosecuted the extent to which they followed the code may be used as evidence in court; and voluntary, where failure to follow the code has no direct or indirect legal consequences, but the code represents generally accepted good manufacturing practice. Codes can also help to prepare the ground for future legislation by helping smaller companies to catch up with the leaders. The hazard analysis critical control points (HACCP) approach provides a useful framework for developing codes of practice in the food sectors.
17.4.5 Hazard analysis critical control points
The hazard analysis critical control points (HACCP) approach was specifically developed for control in the food production industries. Its applications to date have focused mainly on microbiological hazards, although it also has potential application for chemical hazards. An excellent source of reference on the HACCP approach is the Food and Agriculture Organization's booklet on the subject (Food and Agriculture Organization of the United Nations, 1995). The HACCP approach includes seven essential steps. 1. Risk assessment: Following the procedures described in Part 2 of this book. 2. Identification of critical control points: 'Which are the stages in the process where contamination or other loss of control could occur?' 3. Definition of critical limit values: 'What is the maximum concentration of the substance that can be tolerated?' 4. Monitoring and surveillance: Instituting a system to check that limit values are being met. 5. Selection of corrective actions: 'What should be done to put things right?' 6. Audits: Independent and external scrutiny of the entire HACCP procedure. 7. Documentation and record-keeping: Looking for trends and evidence of gradual progress to loss of control. Steps 3 to 7 must be carried out for each critical control point (CCP). Each process will require a different HACCP analysis based on the framework in Figure 17.1. Note that two target levels are suggested: one to alert the operator to a change which might indicate a gradual loss of control, and a second, higher action level to stop the process. The HACCP approach is best suited to self-contained food production, storage, transport, processing or retailing operations. It would be impossible to provide a detailed analysis of possible HACCP procedures for each operation.
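The two target levels suggested for each CCP, an alert level warning of gradual loss of control and a higher action level at which the process is stopped, can be sketched as a simple monitoring check. The thresholds and readings below are hypothetical.

```python
# Sketch of the two target levels suggested for a CCP: an ALERT level
# warning of gradual loss of control, and a higher ACTION level at which
# the process is stopped. Thresholds and readings are hypothetical.

ALERT_LEVEL = 5.0    # illustrative concentration at this CCP
ACTION_LEVEL = 10.0  # illustrative critical limit value

def check_ccp(measurement):
    """Compare one monitoring result against the CCP target levels."""
    if measurement >= ACTION_LEVEL:
        return "ACTION: stop process, apply corrective measures"
    if measurement >= ALERT_LEVEL:
        return "ALERT: investigate possible loss of control"
    return "OK"

# Record-keeping: retain every result so that audits can spot trends
# towards loss of control (step 7 of the HACCP approach).
log = [(m, check_ccp(m)) for m in (2.1, 4.9, 6.3, 11.0)]
for measurement, status in log:
    print(measurement, status)
```

A rising run of ALERT results in the log, even without an ACTION breach, is exactly the kind of trend the documentation and audit steps are intended to catch.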
HACCP provides a useful framework which plant operators can use to manage the potential risks which their business faces. Figure 17.2 is an idealized representation of the entire food production system, showing possible CCPs for various food chemical hazards. It should be noted that a potential CCP arises each time goods move between compartments in the system, in addition to CCPs which occur within each process. This means that all goods and materials brought into a process, whether they be agricultural chemicals used on farms, raw commodities purchased for processing, additives used in production or packaging materials used to present the final product, should be subject to the HACCP approach. Auditing should include not only the operator's own HACCP procedures but also those of operators up the chain. For example, food processors buying in raw commodities should seek assurances that HACCP procedures have been applied during primary production, storage and transport insofar as this is practicable. Thus a continuous audit trail will be formed from 'farm to plate' so that when problems do occur they can be traced easily back to their source.

[Figure 17.1 HACCP analytical framework: hazard identification (probability, severity) leads to the determination of critical control points (CCP1, CCP2, CCP3, etc.); for each, target levels (an ALERT level and an ACTION level) and monitoring systems with effectiveness criteria are established, followed by corrective measures, audit and record-keeping. Each arrow represents a possible critical control point.]

[Figure 17.2 Application of HACCP analysis to the human food chain: crop production (pesticides, fertilizers, seed dressings, atmospheric deposits, soils, water, sewage sludge; natural toxicants) and animal production (feed additives and feed contaminants via animal feed; veterinary medicines, veterinary pesticides, soils, environmental contaminants, water) supply primary food commodities, which pass through bulk transport (cross-contamination), food storage (preservatives, pesticides, contaminants), manufacturing and processing (additives, processing aids, preservatives, adulteration, processing contaminants, tampering; packaging materials, paints, inks, solders), packaging, retail (tampering) and finally purchase.]

One source of problems sometimes overlooked is the bulk transport of foods, and in particular animal feedstuffs. The nature of a cargo previously contained in a ship's hold, for example, can be easily overlooked if HACCP principles are not being applied. In some cases this might be a significant source of contamination. In many cases the only other option open to food and feed processors is to carry out random incoming material inspections. However, it is usually impossible for a food or feed manufacturer to analyse a statistically significant sample of incoming materials. Food and animal feed manufacturers can protect the commercial success and reputation of their businesses by insisting on seeing evidence of
HACCP having been applied throughout the production and distribution system on all incoming materials.

17.4.6 Good manufacturing practice and ISO 9000
ISO 9000 is a series of International Standards which apply to quality management and quality assurance systems. The standards specify requirements and recommendations for the design and assessment of management systems which are intended to ensure that suppliers provide products and services which satisfy specific requirements. The requirements and recommendations apply to the management of organizations that supply products and services rather than to those products and services themselves. The subsidiary standards, ISO 9001, ISO 9002 and ISO 9003, allow businesses to claim registration according to their capabilities, and customers to specify an appropriate standard in contractual situations. The ISO 9000 model is built upon the principle of preventing non-conformity at all stages in the supply chain. The benefits of an effective quality control system are the reduction in crises when systems go out of control, the improvement of consistency and efficiency, and the ability to monitor quality and intervene before loss of control. Aspects of ISO 9000 quality control procedures which relate particularly to food chemical risk management are the requirements for specification of quality standards, inspection, testing, documentation, record-keeping and audit. These principles apply to goods and services being bought in as much as to products and services being produced, and so ISO 9000-compliant food industries should seek suppliers and contractors who are themselves ISO 9000 registered. Ideally, ISO registration should be present throughout the food supply chain. For most producers who are applying good manufacturing practice the requirements of ISO 9000 should already be in place, and compliance and registration should not be difficult.

17.4.7 Monitoring and surveillance
Whilst monitoring is a vital part of HACCP systems, it also, with surveillance, has a broader role in risk analysis. The term 'monitoring' normally implies activities designed to check compliance to predetermined standards, whereas surveillance is less directed towards specific standards and is aimed towards data gathering in the broader sense. Risk analysis may lead to the setting of standards, such as maximum residue levels for pesticides, or maximum tolerable concentrations for environmental contaminants. Very often, such standards will be based on field studies or earlier surveillance exercises. However, agricultural
practices and patterns of contamination may change, and so it is essential to monitor residue levels and contaminant concentrations to ensure that standards are being complied with. In rare cases it may be necessary to take legal action against those responsible if there has been a clear breach of regulatory limits. Surveillance is normally undertaken to investigate the need for action to control chemicals in food. It may not be directed towards chemicals where there are already controls in existence and may therefore include substances such as inherent plant toxicants, where there is less scope for regulatory control. Surveillance will also include the gathering of dietary information for risk assessment. Surveillance programmes are normally based on random sampling plans in order to acquire as representative a picture of the real situation as possible. Monitoring and, to a lesser extent, surveillance provide a feedback mechanism from risk management to risk assessment (Chapter 2). The results can be used to revise an earlier risk assessment and, in turn, trigger any necessary risk management action. This is particularly important in the context of chemicals for which statutory limits have been set. Where the limits have been set to ensure that consumers do not exceed acceptable or tolerable intakes, then monitoring can be directed to confirming that the conditions which prevailed when the limits were set have not changed. In the European Union, recent Directives have charged member states with the responsibility for monitoring and reporting on the usage and intakes of food additives. The results of these exercises may be used by the Commission to re-evaluate the conditions of use of some additives. Caution should be exercised in the interpretation of monitoring and surveillance data. This is particularly important for monitoring data which may have been collected because of concern about a particular issue. 
This may lead to a bias in the collection of samples, since the authorities wish to unearth evidence about a potential problem. Such data cannot be compared with surveillance data which have been collected on a random basis. Many national and regional food authorities gather monitoring and surveillance data on a regular basis and their reports are usually made public.
17.5
Evaluating, reducing and controlling risks - getting the balance right
In concluding this chapter, it is important to stress the need to ensure that any control measures that are introduced are kept in proportion to the risk. Sometimes the size of the risk as perceived by consumers must be taken into account, whereas in other circumstances a purely
technological assessment of risk will be appropriate. In either case, overzealous and overprescriptive regulation is likely to bring costs which far outweigh the benefits brought to consumers. On the other hand, too lenient an attitude towards controlling risks could result in huge costs to both industry and consumers if a poisoning event were to occur. Most responsible industries therefore take the view that regulations represent the minimum which needs to be done to protect the interests of consumers and the industry itself. Good manufacturing practice will almost always go far beyond the requirements of legislation. Novel approaches such as ALARP and HACCP provide flexible frameworks for evaluating, reducing and controlling risks across the full spectrum of stakeholders in food production, from small producers right through to national governments and international standard-setting organizations. If properly applied, they can guarantee high standards for consumers whilst ensuring that producers are free to apply the most cost-effective solutions to their own situation. In the final analysis, all stakeholders in food production share a responsibility to investigate and apply the principles set out in this chapter.
References
Department of the Environment (1995) Risk Reduction for Existing Substances. Guidance provided by a UK Government/Industry Working Group. DoE.
Food and Agriculture Organization of the United Nations (1995) The use of hazard analysis critical control point (HACCP) principles in food control. FAO Food and Nutrition Paper 58. FAO, Rome.
Moore, L. (1994) Damage limitation. Super Marketing, 1 October, 18-19.
Quarrie, A. (ed.) (1992) Earth Summit '92. Regency Press, London.
18 Risk communication
R. SHEPHERD and L.J. FREWER
18.1 Introduction
One of the major issues arising in risk management is the communication between the different parties involved. This often comes down to the problem of communication between the scientists, experts and regulators on the one hand and the public on the other. This is not always a straightforward process, and this chapter will include some consideration of the research which has tried to address the problems in this area. Risk communication is, of course, closely linked to the subject of risk perception, discussed in a previous chapter, since in order to communicate effectively with the public it is necessary to understand how the public thinks about risks. Following some consideration of what risk communication aims to achieve, there will be a discussion of some of the problems which arise and some of the types of theories which have been put forward in this area. The next three sections cover aspects of communication: the message, the source of the information and the target audience. The role of the media is central in risk communication and therefore will be discussed in some detail, and this will be followed by a consideration of practical issues in communication and how we might learn from previous successful (and unsuccessful) attempts at risk communication. Much of the work done on risk communication, as with risk perception, has been done in areas other than food and other than chemicals in food. For this reason, many of the examples will be from other types of application, but where work has been conducted on food this will be presented.
18.2
Aims of risk communication
In a major review of risk communication methods, the National Research Council (1989) suggested that risk communication can serve two purposes, the first being to inform and the second to influence. Covello et al. (1986), reviewing the literature on risk communication, came to a similar conclusion but identified four types of objectives: information and education; behaviour change and protective action; disaster warnings and emergency information; and joint problem-solving and conflict resolution. The emphasis in this last objective on a two-way flow of information has been highlighted by others (e.g. Fessenden-Raden et al., 1987), and represents
something of a departure from traditional conceptions of communication as a one-way process of experts providing information to the public in the most appropriate and useful form. However, most instances of risk communication do centre on some form of information provision and education. Sharlin (1986), in a case study of the action of the Environmental Protection Agency (EPA) on ethylene dibromide, concluded that such an agency has to make sure that the public is informed so that the public can participate in the risk debate and the regulatory process. The National Research Council (1989) differentiated between two types of settings for risk communication: those of public debate and those related to personal action. This distinction between population and individual perspectives is echoed by other authors (Sharlin, 1986; Covello et al., 1986). Sharlin (1986) suggested that agencies such as the EPA need to perform risk assessment and risk management at a macro level of population statistics for the purposes of regulation, but public information has to be at the micro level of the implications for the individual if it is to be effective. Some of the conflicts inherent in these different perspectives can lead to problems in the communication process.
18.3 Problems associated with risk communication
Slovic (1986) points out a number of problems with communicating risk. These are characterized as being related to limitations of technical risk assessment or to limitations of public understanding. In terms of technical assessment of risks, the tests performed do not provide exact estimates of risk but rather rely on a number of underlying assumptions and produce numbers which have inherent uncertainties and are open to different interpretations. Despite safety margins, the inherent uncertainty is bound to affect people's perceptions of the usefulness of the risk assessments. A second point is the adversarial climate within which risk assessments are discussed.
Given the different views expressed by experts, people are likely to say that even the experts do not know what the risks are. The administrator of the EPA, Ruckelshaus (1985), said that the attempt to quantify risks to human health and the environment from industrial chemicals is: essentially . . . a kind of pretence; to avoid the paralysis that would result from waiting for 'definitive' data, we assume that we have greater knowledge than scientists actually possess and make decisions based on these assumptions. (Ruckelshaus, 1985, p. 26)
It has been suggested that explicit discussion of uncertainties in risk estimates would have positive effects on public views (B.B. Johnson and P. Slovic, personal communication), but Fessenden-Raden et al. (1987) argue that admission of uncertainty may strike the public as surprising ignorance or
evasiveness. Johnson and Slovic report that members of the public tend to be unfamiliar with the concept of uncertainty in risk assessment. In this study, admission of uncertainty had less effect on people's attitudes towards the risks than it had on their attitudes towards the regulators of risk; admission of uncertainty appeared to facilitate perceptions of source credibility, although not competence. Thus experimental evidence for risk communication being enhanced by including issues of uncertainty is not strong. Within a message on risk, the measures chosen for expressing risk may make some risks appear worse than others (Crouch and Wilson, 1982). Thus, expressing accidental deaths per million tons of coal mined in the USA shows a reduction over time, but expressing it as deaths per 1000 employees shows an increase: the same source of statistics can thus be used to claim that mining is getting safer or more dangerous. This is an instance of making some information more salient than other information, and an indication of the importance of realizing the different ways in which risks may be presented. Perceptions of risks are weighted in favour of more dramatic and memorable events (Lichtenstein et al., 1978). One problem with raising issues for debate or consideration by the public is that the 'availability heuristic' (Tversky and Kahneman, 1973) suggests that people are then likely to see this event as more probable. Risk messages may increase feelings of anxiety rather than reducing them as intended (Covello et al., 1986), and in attempts at behaviour change the use of high-threat or fear communications tends not to be successful (Covello et al., 1986). Rosenberg (1978), for example, cites the experience of recombinant DNA researchers in raising the issue of contamination by new organisms and finding that: Speculation abounded and the scarier the scenario, the wider the publicity.
Many of the discussions of the issue completely lost sight of the fact that the dangers were hypothetical . . . (Rosenberg, 1978, p. 29)
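Returning to the Crouch and Wilson (1982) mining example discussed above, the way the same fatality counts can tell opposite stories under different denominators can be sketched as a short calculation. The figures below are hypothetical, chosen only to reproduce the qualitative effect described in the text (output declining while the workforce shrinks); they are not the actual US mining statistics.

```python
# Hypothetical US coal-mining figures (illustration only, not real data).
records = [
    # (year, accidental deaths, million tons mined, thousand employees)
    (1950, 500, 400, 400),
    (1970, 260, 600, 140),
]

for year, deaths, million_tons, thousand_employees in records:
    per_mton = deaths / million_tons          # deaths per million tons mined
    per_kemp = deaths / thousand_employees    # deaths per 1000 employees
    print(f"{year}: {per_mton:.2f} per million tons, "
          f"{per_kemp:.2f} per 1000 employees")

# With these figures the per-ton rate falls between 1950 and 1970 while the
# per-employee rate rises: the same deaths, framed two ways.
```

The effect arises whenever the two denominators move in opposite directions, which is exactly what happened as mechanization raised output per worker.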
An alternative procedure would be not to raise the issue of risk for debate, but this too presents problems. Even where there is little current public concern, e.g. about biotechnology, the strategy of 'letting sleeping dogs lie' is likely to be counterproductive, since when a negative event does occur this will lead to maximum public outrage. Strongly held beliefs may be very difficult to change. New evidence may be noted if it fits with preconceptions, but contrary evidence may be dismissed as unreliable or unrepresentative, or it may be interpreted using existing beliefs. The same information may thus be interpreted as supporting either of two competing positions: for example, the Three Mile Island accident might be interpreted by those supportive of the nuclear industry as evidence of its safety, because there were no fatalities, while those opposed to the industry would be more likely to interpret the accident in terms of the 'catastrophic potential' of the industry.
Where people do not have strong initial views, the presentation of the information may have a dramatic effect, due to framing (Tversky and Kahneman, 1986). There is strong evidence that people are sensitive to the wording of decision problems: that is, their choice between two 'risky' options will be influenced by the ways in which those choices are worded, even when the expected outcomes are similar for each option. This is thought to arise because the way in which a choice is presented will make some considerations more salient in the person's thinking. People with mixed attitudes (compared to people with relatively fixed attitudes) are likely to be susceptible to these 'context effects'. McNeil et al. (cited in Slovic, 1986) found that the proportion of subjects choosing a certain type of therapy dropped from 42% to 18% when the outcome was couched in terms of likelihood of dying rather than surviving. Those preparing information have a great responsibility to provide the information in an impartial way, but this type of framing effect means that in adversarial circumstances opponents can use the same statistics to favour contradictory arguments and hence add to possible public confusion. A further problem in risk communication is whether the target recipients actually pay attention to the risk information transmitted. 'Optimistic bias' refers to an effect whereby individuals believe that negative events are relatively unlikely to happen to them, but are more likely to affect other people: this effect has been demonstrated for a number of food-related hazards (Frewer et al., 1994). Optimistic bias is related to the need of an individual to feel that he or she has control over a situation. Clearly, some hazards are easier for the individual to control than others, and thus it might be predicted that those hazards where perceived control is higher will also be more likely to exhibit optimistic bias.
Optimistic bias has been shown to be greater for lifestyle hazards (e.g. a high-fat diet) than for genetic engineering, although the effect was still observed for the latter (Frewer et al., 1994). In addition, individuals think they know more about food-related risks than do other people (Frewer et al., 1994). It is implicit that, if individuals consider others to be at greater risk and less knowledgeable about a particular risk than themselves, they will consider risk communications to be directed towards these vulnerable and ignorant others. The solution may be to make information more directly and personally relevant to people.
18.4 Implications of models of risk perception and psychological theories for communication
In order to communicate risk information effectively, it is important to take into account the social construction of risk perception. Jasanoff has noted that:
risk analysts, regardless of their disciplines, would probably agree that risk assessment is not an objective, scientific process; that facts and values frequently merge when we deal with issues of high uncertainty; [and] that cultural factors affect the way people assess risk. . . . (Jasanoff, 1993, p. 123)
To illustrate the social construction of risk perception, it is useful to examine the disparities between expert and lay concerns regarding the nature and relative importance of different hazards. Research in the USA has shown that the lay public and experts differ not only in their opinions of the risk magnitudes associated with the handling of nuclear waste, but also in their conceptualization of what types of risks represent a serious threat (Flynn et al., 1993). Expert and lay judgements of chemical risks have been found to differ markedly, although the assessments of experts were also sharply divided according to membership of different organizations (Kraus et al., 1992). Clearly, even the risk information provided by 'experts' is likely to be influenced by the social constructions surrounding the communicator. For example, scientists in universities or local government may see the risks of nuclear energy and nuclear waste as greater than do scientists who work as business consultants, for national government or for private research establishments. Such disagreements in risk communications between 'experts' are likely to result in confusion and mistrust at a public level, as the message which is conveyed is one of uncertainty. Implicit differences between 'experts' are likely to exacerbate conflict over potentially risky policies, due to the mismatch between different 'scientific' findings, and the legitimacy of science as a determinant of policy formulation can be undermined. Slovic (1987) has argued that the lay 'conceptualization' of risk is much richer than that of experts. To be effective, risk communication must be structured as an interactive process, as both experts and the public have important insights to offer.
Instead of utilizing a traditional 'source-receiver' model of risk communication (where messages are transmitted from an official organization to the lay public), it may be more fruitful to adopt the 'convergent' model (where an open dialogue is established between the experts and the public, such that consensus agreement about key concerns is established). Research on models of risk perceptions (discussed in Chapter 15) obviously has major implications for risk communication. Communicators need to understand how the public perceives risks and hazards in order to know how to structure risk-related messages. In short, they need to be aware of the public's 'models' of risk. In risk communication it is clearly necessary to have this basic information before effective communication can be attempted. However, while there is a growing body of research in risk perception, the research specifically addressing risk communication is much more limited (Covello et al., 1986) and the processes of effective
risk communication are far from being well understood (e.g. Slovic et al., 1990). It is likely that some hazards may be more amenable to attitude change through effective risk communication than others. Alhakami and Slovic (1994) have observed that there is an inverse relationship between perceptions of risk and benefit for a range of different hazards. It would therefore seem possible to change perceptions of risk by changing perceptions of benefit, and vice versa. Thus, for a technology perceived as high in risk and low in benefit, reducing risk perceptions may be brought about by increasing perceptions of benefit rather than by heightening perceptions of safety. Attitude change has been found to be small in the case of nuclear energy (Alhakami and Slovic, 1994), but this might be because the technology is stigmatized in terms of its public image. There may be greater potential for attitude change in the case of technologies which are relatively unknown and poorly understood, such as genetic engineering, where there is little a priori public knowledge regarding the potential risks and benefits of the technology. Although there is relatively little research work on risk communication, there are developments in the field of persuasion and attitude change which might profitably be applied to this area. There is a very extensive and long-standing literature in this field. Recent contributions have included the work of Petty and Cacioppo (1986), who have developed a theory of persuasive communications called the 'elaboration likelihood model' (ELM). This posits that there are two routes to persuasion: one route is via a careful and thoughtful assessment of arguments (the central route), and the other is based on some cognitive, affective or behavioural cue in the context of the persuasion which allows a simple inference about the merits of the argument without recourse to complex cognitive processing (the peripheral route).
Despite extensive work on this model in the area of attitude change, it has not been applied to the communication of messages on risk. It is a model which acknowledges the importance of individual differences and the need to bear this in mind when messages are structured. In the following sections various factors important in risk communication will be considered. These will be discussed under the headings of the contents of the message, the source of the information and the target audience. These factors are highlighted in models such as the ELM but also relate to more traditional models of persuasion and attitude change.
18.5
Contents of the risk message
One of the most important initial steps in designing risk communications is selecting what information should be contained. Fischhoff et al. (1993)
criticize many existing communications on the basis of arbitrary selection of information, citing as an example the case of AIDS transmission. Whilst the concern of medical authorities focused on the low percentage of the population who knew that transmission of the disease was caused by a virus, it is arguably people's behaviour which should be the key target of the risk communication. The salient issue is whether there are incorrect beliefs about the hazard which could result in inappropriate behaviours, not a fundamental misunderstanding of the scientific underpinning of risk precautions. Slovic (1986) points out the lack of hard empirical tests of how risk statistics should be presented. There are, however, some general rules in this area. Many authors have argued that risks need to be put into a wider context of other risks. Despite the seeming simplicity of this notion, the means for doing it are far from clear and its usefulness as a method continues to attract debate (Slovic et al., 1990). Crouch and Wilson (1982) presented data on annual fatality rates per 100 000 persons at risk, showing a comparison, for example, between the saccharin in one diet soda per day representing a rate of 1, aflatoxin in four tablespoons of peanut butter per day as 0.8, and motorcycling at 2 000 or smoking at 120 (from lung cancer). One problem with fatality comparisons is that they fail to capture the fact that some hazards (e.g. motorcycle accidents) cause death at an earlier age than do others (e.g. cancer), and hence other authors have prepared comparisons of estimated loss of life expectancy from different causes. Another alternative is to present activities which would each increase the chance of death in a year by 1 in 1 000 000. Here, for example, eating 40 tablespoons of peanut butter (aflatoxin) or drinking 30 cans of diet soda (saccharin) would be equivalent to cycling 10 miles or spending 1 hour down a coal mine.
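The 'one-in-a-million' equivalences quoted above reduce to a simple division: if a quantity of an activity adds roughly a 1-in-1 000 000 chance of death in a year, the risk per unit of that activity is one-in-a-million divided by the quantity. A minimal sketch, using only the illustrative quantities given in the text (these are period figures quoted by the chapter, not current risk estimates):

```python
ONE_IN_A_MILLION = 1e-6

# Quantities said in the text to each add ~1 in 1 000 000 to annual death risk.
activities = {
    "tablespoons of peanut butter (aflatoxin)": 40,
    "cans of diet soda (saccharin)": 30,
    "miles travelled by bicycle": 10,
    "hours spent down a coal mine": 1,
}

for activity, quantity in activities.items():
    risk_per_unit = ONE_IN_A_MILLION / quantity
    print(f"{activity}: about {risk_per_unit:.1e} added risk per unit")
```

Per-unit figures of this kind make the equivalences directly comparable, but, as the following discussion notes, they carry no information about benefits, uncertainty, or whether a risk is voluntary.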
Such comparisons can provide a quick guide to relative risks, but comparisons of this type have also been advocated as a means for making decisions about priorities. Such an approach has been criticized (National Research Council, 1989), since the comparisons do not include relative costs and benefits, or indications of uncertainty or of how people may view relative risks (e.g. natural and unavoidable risks against non-natural risks). Also, the use of comparisons may give the appearance of selecting risks which play down the risk in question. While providing a potentially useful overall framework, such schemes can be uninformative for some people; for example, a single takeoff or landing on a commercial airliner reduces life expectancy by 15 min, whereas in terms of the reality of possible outcomes it either has no effect or reduces life by much more than 15 min (Slovic et al., 1982). A better form of comparison is between risks of a similar nature where the comparison is realistic. Slovic (1986) gives the example of comparing
the risks associated with radiation from non-natural sources, e.g. medical X-rays, with naturally occurring background radiation. However, in many instances such straightforward, meaningful comparisons are difficult to provide.
18.6 Information sources
The source of information about a particular risk is likely to be very important for a number of reasons. Sources may be seen as high in expertise, hence often increasing their persuasiveness, although they may also be seen as having a vested interest in withholding information or in presenting information in a biased manner. There is evidence for expertise increasing persuasion (McGuire, 1985). Within the ELM (Petty and Cacioppo, 1986), in order for persuasion to occur, source factors such as expertise need to be accompanied by quality arguments when the issue of concern is very relevant to the individual. On the other hand, where the issue is of low relevance, source factors may serve as a simple inferential cue as to the quality of the arguments. For issues of intermediate personal relevance, source factors can influence the amount of information processing (Petty and Cacioppo, 1986). The persuasive impact of sources high in expertise is short-lived: in fact, it is a major proposal of the ELM that persuasion via the peripheral route is generally of limited duration. In a study of group discussions on irradiated foods, a group leader who was expert in the area was found to reduce fears and to increase general consumers' willingness to buy irradiated products (in comparison with a non-expert group leader), possibly by being able to address specific questions raised (Bruhn et al., 1986). However, the expert leader failed to have this impact on 'alternative consumers' who were already decided in their views. Source knowledge/expertise appears to have little impact if not accompanied by trustworthiness, and may even reduce persuasiveness by emphasizing the remoteness of expert sources from ordinary people.
Likewise, expertise will have a negative effect if the source is perceived to be personally involved and so less objective. A message will have maximum effect if the person is seen to be arguing against personal self-interest. For example, a political candidate may effect more opinion change when he is perceived to be arguing against his own self-interest. In real-world examples this would rarely be the case, although there may be instances, such as an industrial company identifying a safety problem in its own product, where it is not to the short-term advantage of the company to argue that the problem exists. Institutions also differ in how much they are trusted (McGuire, 1985) and, again, this is
likely to be affected by perceived self-interest. Credibility is likely to be one of the most important determinants of effective risk communication. Despite source credibility being raised as a major potential problem in the context of risk communication, there has been little applied research on the actual effect of credibility. Dissent between sections of the public and scientists over the relative risks associated with particular technologies can be interpreted as a reflection of underlying public distrust of scientific institutions. Consumer trust in the regulation of the food supply has declined. A survey conducted in the USA into consumer attitudes towards the use of pesticides indicated that these were determined by three underlying dimensions: safety of pesticides, necessity of pesticides, and trust in industry (Dunlap and Beus, 1992). As the public believes governments work closely with industry, which may be seen as having vested interests in putting forward a particular point of view, trust in regulation may be reduced. Thus one of the central questions addressed by the risk communication literature is why some individuals and organizations are trusted as sources of risk information and others are not. In particular, industry and government often lack public trust and credibility (Frewer and Shepherd, 1994), partly due to perceptions of a lack of proactivity in communication with the public. Many government officials are perceived as being insensitive to the information needs and concerns of the public. Improvements in co-ordination and collaboration with organizations publicly perceived to be trustworthy may increase public perceptions of trustworthy behaviour. An additional factor must be taken into account when assessing questions linked to trust and credibility. This relates to the nature of the hazard, and the context in which information is presented.
For example, Frewer and Shepherd (1994) have shown that self-reported trust in hypothetical situations (where no information is presented) may not equate with behavioural responses to actual information when attributed to a particular source (Figure 18.1). In this experiment, people were presented with information about genetic engineering in food production attributed to either a quality newspaper, a consumer organization information leaflet, or a government information leaflet. Respondents were asked to rate the extent to which they trusted the information. A control group was asked to rate the extent to which they trusted the same sources, but was not provided with any information. When no information was presented, trust in the government source was significantly lower. However, differences in trust disappeared when actual information was provided (Figure 18.1).
[Figure 18.1 plots ratings of trust ('ns' = not significant) in three sources - quality newspaper, consumer organization information leaflet and government information leaflet - comparing ratings of trust without direct attribution against ratings when information was attributed directly to the source.]
Figure 18.1 Trust in information attributed to different sources compared to stated trust in these sources when no information is presented. (Adapted from Frewer and Shepherd, 1994.)
18.7 Target recipients
A number of authors point to the need to consider the public as a heterogeneous rather than a homogeneous group (Covello et al., 1986; National Research Council, 1989). Members of the public may differ in their knowledge of particular issues and in how much they care about those issues. These factors are likely to be of great importance for risk communication strategies. The 'public', therefore, needs to be considered in terms of substantive differences between sub-groupings. There are certainly demographic differences in the perception of risk. For example, women tend to perceive greater risks to be associated with various hazards than men do, partly because of their tendency to perceive risks in terms of the consequences should they occur rather than the probability of occurrence. Flynn et al. (1993) have reported that European
males tend to associate less risk with environmental health hazards than do women and members of other ethnic groups, suggesting that socioeconomic and political factors such as empowerment, status and trust in risk regulators are as important in determining risk perceptions as actual estimates of the risks themselves. Individual reactions to risk communication are likely to reflect the social relations implicit in maintaining a particular way of life (Dake, 1991). This is related to the anthropological notion of 'cultural bias'. It assumes that there is a close relationship between general attitudes towards the world and how people think about different types of risks and how much they trust different sources of information. The individual differentially selects certain information sources as trustworthy so that they are consistent with the predominant world view held by that individual (Dake, 1991). Cultural biases are defined as shared values and beliefs, and correspond to different patterns of interpersonal relationships, or perspectives on the structure of society - egalitarian, individualist, hierarchist and fatalist. The combinations of cultural bias and social relationship are defined as 'ways of life'. Ways of life are not necessarily stable through an individual's lifetime, but may be subject to change. However, if the predominant cultural bias of an individual can be quantified at any given time, it should be possible to predict what that individual perceives as risky at that time. Research has indicated that individualists (who are in favour of self-regulation and the free-market economy) are more likely to trust risk information from government and industrial sources. Hierarchists prefer a social organization where there is a clear structure of authority. Egalitarians value equality of outcome, in the sense of diminishing distinctions between individuals in terms of economic inequalities.
They tend to distrust industry, favouring instead those organizations or institutions which are seen as primarily concerned with collective outcomes rather than the expansion of market economies. Fatalists see themselves as excluded from any formal organization of social life, and are thus more likely to distrust risk information from any organized social structure. Given the results discussed above, it would seem unlikely that the same type of risk communication strategy would be equally effective for all target groups. Thus effective risk communication might best be 'tailored' to the predominant risk perception 'style' of different groups.
18.8
The role of the media
The mass media are obviously important in modern societies for disseminating information. Certain media sources have been shown to be among the most trusted sources about food-related risk - in particular, the quality press and television news broadcasts are highly trusted, certainly in
comparison with government and industry (Frewer and Shepherd, 1994). Although the media have been accused of distortion and misinforming the public on issues of risk, this may be somewhat unfair (National Research Council, 1989). Given the technical difficulties involved in covering such stories, the frequent controversy surrounding them and the lack of any specialist knowledge by journalists, there are always likely to be problems. Slovic (1986) recommends schemes to encourage journalists to specialize in science writing and for professional bodies to set up information services. Fischhoff (1985) has suggested a number of checklists that reporters might use in evaluating information on risk. If used correctly, these would assist in accurate reporting. However, it is not clear whether reporters would be motivated to use such a complex procedure, particularly if it might ruin a 'good story'. It needs to be borne in mind that the media are in the business of entertainment and selling newspapers and advertising and not in the business of education. Those seeking to use the media to inform the public better have to play by the media rules. What is the impact of media risk reporting on the behaviour of the public? The media have been shown to play an important part in the determination of societal risk perceptions, in a number of different cultures. Research has indicated that the media, as a source of information about food-related risk, have a high potential for influencing consumer behaviour. For example, Smith et al. (1988) examined the sales reduction following a food contamination incident (heptachlor found in fresh milk in Hawaii in 1982). They reported that the media coverage following the incident had a significant impact on milk purchases. 
Negative coverage was a more important determinant of consumer behaviour than positive coverage, and the reporting of reassuring statements from government or manufacturers was ineffective in restoring public confidence. Similarly, media reporting following the Alar scare in the late 1980s resulted in sales losses in the region of 30% (van Ravenswaay and Hoehn, 1991). Media risk reporting also seems to influence risk attitudes. For example, Wiegman et al. (1989) investigated whether the quantity and content of newspaper coverage of technological and environmental hazards are related to the reactions of readers. Increased exposure to risk information was associated with more negative attitudes towards the hazards. One of the key questions in this area is whether the information is seen to be trustworthy. It has been reported that trust in television news and newspaper reporting depends on four attitudinal components: direction, intensity, closure and involvement (Stamm and Dube, 1994). Direction refers to the initial attitudinal perspective of the receiver of the message, such that agreement with the message is more likely to increase source credibility. Intensity refers to the extent to which the
source puts forward a particular view, and involvement the degree to which the receiver relates to the issue. The fourth component, closure, represents the extent to which the attitude is subject to change. All four components were shown to contribute to the credibility of the information presented. Credibility is clearly multi-dimensional, and cannot be predicted from single components. The question arises as to what type of risk information is provided in the media, and whether there are differences according to the type of hazard. Sharlin (1987) has observed that the media prefer to present risk in 'microrisk' terms (i.e. the impact on individual members of the population), rather than in the long-term, chronic, 'macrorisk' terms favoured by government agencies and other risk communication specialists. This reflects the media tendency to present information in sensational and dramatic terms, driven both by the need to engage an audience and by the competition between journalists for editorial space in the newspapers. There is evidence that different types of food-related hazards are linked to very different types of risk reporting. Frewer et al. (1993/4) conducted a content analysis which examined risk reporting of different food hazards in the British quality press over a period of one year, from February 1992 to January 1993. The risk information associated with a range of different food hazards ('intentional food additives', 'biotechnology and genetic engineering', 'chemical and pesticide residues', 'food irradiation', 'microbiological food contamination' and 'natural toxins') was identified and quantified. The resulting data were analysed using correspondence analysis to produce the plot shown in Figure 18.2. Component 1 represents the quantity of risk information associated with different hazards: hazards to the right of the figure are linked with more information.
Component 2 represents the extent to which different hazards are associated with quantitative, as opposed to qualitative, information. Hazards to the top of the figure are linked with more statistical risk information than those towards the bottom. The results show that different food-related hazards are associated with different types of risk reporting. For example, microbiological hazards are associated with quantitative, statistical information. In contrast, the potential risks associated with biotechnology are presented in terms of value statements, and are linked to statements associated with the risk being unknown, and to conflict between the different 'actors' in the risk debate. To some extent, this might be predicted by the nature of the hazards themselves. However, food additives (where a great deal of quantitative risk information is available) are presented with very little risk information at all. Rather, food additives are presented as a risk, with no qualifying risk (or safety) information, thus implying that they should be avoided by the public.
[Correspondence analysis plot: Component 1 (18% of inertia, 'Information') on the horizontal axis; Component 2 (12% of inertia, 'Quantitative risk') on the vertical axis. Hazard categories plotted: additives; microbiological hazards; natural toxins; chemical and pesticide residues; biotechnology. Statement types plotted include: % increase in risk; numbers affected mentioned; probabilistic risk information/relative risk mentioned; instances of risk to adults; instances of risk to children; expert cited; examples; alarming statements; reassuring statements; informative statements; government mentioned; conflict mentioned; risk described as unknown; pressure group cited.]
Figure 18.2 The type of risk information associated with different food hazards in the UK press. (Adapted from Frewer et al., 1993/4.)
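The correspondence analysis used to produce plots such as Figure 18.2 can be sketched in a few lines of linear algebra. The sketch below uses an invented toy contingency table of hazards by statement types (the counts are illustrative assumptions, not the Frewer et al. data), and computes row and column principal coordinates from the singular value decomposition of the matrix of standardized residuals:

```python
import numpy as np

# Toy contingency table: food hazards (rows) x types of risk statement (columns).
# These counts are hypothetical, chosen only to illustrate the method.
hazards = ["additives", "microbiological", "biotechnology"]
statements = ["statistical", "value/conflict", "reassuring"]
N = np.array([[ 5,  2,  3],
              [40,  4, 12],
              [ 3, 25,  2]], dtype=float)

P = N / N.sum()                  # correspondence matrix (relative frequencies)
r = P.sum(axis=1)                # row masses
c = P.sum(axis=0)                # column masses

# Standardized residuals: departure of each cell from the independence model r*c
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates place rows and columns in the same low-dimensional map,
# so hazards plot near the statement types they are over-represented with.
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]

# Share of total inertia per component (the percentages shown on each axis)
inertia = sv**2 / (sv**2).sum()
```

Plotting the first two columns of `row_coords` and `col_coords` on one pair of axes yields a map of the same kind as Figure 18.2, with the axis percentages given by `inertia`.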
Factors that contribute to perceptions of personal risk are not necessarily the same factors that contribute to perceptions of societal risk. The media are likely to influence social-level risk perceptions to a greater extent than personal risk perceptions, although it is important to determine whether information-seeking by the public is active and, if so, what caused it to be initiated. One explanation of the role of the media in risk issues is that they act as part of a process of 'social amplification of risk' (Kasperson et al., 1988). The social amplification of risk rests on the notion that public perceptions of the dangers from hazardous events are defined not only by technical risks but also by the social environment. Amplification refers to the process whereby risk messages become more intense as the risk is discussed: the individuals or groups who collect information about risks communicate with others, and the reported degree of risk increases each time the risk message is passed on. If a risk is incorporated into the amplification network, various effects relating to the social processing of the risk may result (e.g. social stigmatization of the hazard and social disruption). The pattern of media
coverage may be one of the most important determinants of the social amplification of risk. The number of news stories, the duration of coverage and the 'half-life' of coverage are all factors of relevance. Risk attenuation, where risk communication ceases because the hazard is no longer seen to be dangerous, was found to occur at a local level, whereas risk amplification occurred at a national level (Kasperson et al., 1988). One effect of amplification processes is that chronic hazards may suddenly enter the arena of public awareness, despite having been present for many years. In general, new hazards are perceived as being riskier than older hazards. Sudden media 'awareness' of a hazard induces public awareness of it, an amplification effect in itself, which may be further amplified by continued media attention. Thus a 'chronic' hazard may appear to be 'episodic' through the process of media amplification.
18.9
Practical concerns in risk communication
While the development of theories of risk perception and communication remains an important and necessary objective, there are also other ways of looking at the processes involved in risk communication. One useful procedure is to use case studies to examine particular instances of risk communication in some depth, in order to see what we might learn from these specific examples. Although there is a dearth of published case studies of risk communication (National Research Council, 1989), examination of such cases as do exist (e.g. Sharlin, 1986; Fessenden-Raden et al., 1987; Weterings and Van Eijndhoven, 1989; Chess et al., 1992) can give an insight into the problems of risk communication. Weterings and Van Eijndhoven (1989), for example, studied three cases of soil pollution in The Netherlands. They concluded that, despite efforts to communicate risk to residents, there were problems in all of the cases. These problems included the use of technical language, the lack of explicit mention of health risks despite indications of how to reduce risks to health, and the lack of discussion of uncertainty in the risk assessments. In two cases this led residents to employ their own experts, who then made 'worst-case' estimates of the risks. Although the authors suggest that communicators should present not only a 'probable case' but also a 'worst case' and a 'best case', this seems likely to confuse people, and the presentation of worst cases would be expected to have a very negative effect, based upon the availability heuristic (Tversky and Kahneman, 1973). Qualitative case studies were carried out by Fessenden-Raden et al. (1987) into a series of incidents involving chemical contamination of water. Their analysis stresses the role of the receivers of the information and the
nature of the community rather than the individual in these types of cases. They highlight a number of common themes. If the public is the first to identify the problem, rather than the authorities, it will view subsequent risk communications from the authorities with suspicion. They also highlight other aspects of the community, such as other current community concerns, attitudes towards local government, attitudes towards state and federal agencies, and attitudes towards the presumed polluter (is it a member of the community or an outsider?). Individuals within communities still varied in their reception of the information, based on personal experience, although some of the interpretation of risk information may be influenced by the views of others in the community. They point to the problems of interpreting technical statements such as 'parts per million'. They also note that phrases such as 'carcinogenic in mice' lose their shock value with repeated use. Oversimplification of the problems is also mentioned by Fessenden-Raden et al. (1987), e.g. the use of maximum contaminant levels below which people are said to be safe and above which they are said to be at risk. This again highlights the problems that people may experience with statistical concepts. Chess et al. (1992) have used the case study approach to explore the question of what organizations must do internally in order to increase the effectiveness of their risk communication with the public. They report that an accidental release of ethyl acrylate (which poses a low degree of hazard if exposure is intermittent and at low doses) was effectively handled, in terms of risk communication, by the chemical company concerned. This was partly because immediate responses could be made to community concerns, owing to the close links between the personnel managing the risks and those communicating about them, with some people even having a dual role.
However, the authors add the caveat that findings from a small organization in a crisis management situation do not necessarily generalize to the management of chronic risks in larger organizational contexts. Case studies provide a useful means for assessing the real-world application of risk communication techniques. However, they are not sufficient to provide the basic scientific underpinning of this area. This must be provided by systematic investigation and the implications tested in real risk communication contexts. Case studies in themselves tend to be idiosyncratic and, hence, problems of generalization abound.
18.10
Conclusions
While it is reassuring to the public to know that legislation exists to control potential hazards, simply legislating to control risk is not the answer
to risk communication, because the concerns of the public are not necessarily the same as those of science. Publishing risk information does not facilitate risk communication unless issues of public concern are directly addressed, unless the communicators are trusted, and unless legislation is the result of public discussion regarding regulatory needs. Of course, there are many actors in the area of risk communication. The regulators, industry, pressure groups and the media may all have different agendas in terms of communicating on risks. However, if risk communicators are to be effective, they need to consider a number of issues concerning how they go about such communications. Risk communication should focus not only on the risks and benefits of the hazards themselves, but also on the risks and benefits associated with alternative courses of action. This is likely to be the only way to facilitate informed choice regarding lifestyle or the acceptance of technology and its products. The first stage in effective risk communication is to understand what the public knows and believes, rather than to initiate communication from the perspective of what scientists think the public ought to know. The use of interview techniques, surveys and focus groups may facilitate this process of understanding what it is that the public understands (or does not understand) and what it needs to know, as well as establishing a dialogue between expert and public. A set of general guidelines for effective risk communication cannot be given. It is crucial to develop a complete understanding of the characteristic public risk perceptions of the particular hazard in question. However, some elements of effective risk communication are common to all hazards; for example, credibility and trust in the source of the information are among the most important determinants of effective risk communication.
Furthermore, not all individuals will react to risk communications in the same way. The concerns of the minority should not be considered irrelevant if they are not the concerns of the majority. It is also crucial that the issues surrounding ambiguity and the concept of science as a process be fully addressed. If the public is to understand the messages of risk communicators, and make informed choices about the relevant issues, information about the fundamental uncertainty implicit in science and scientific research should be included in the risk message.
Acknowledgements

Parts of the work reported here were funded by the UK Ministry of Agriculture, Fisheries and Food and by the Biotechnology and Biological Sciences Research Council.
References

Alhakami, A.S. and Slovic, P. (1994) A psychological study of attitudes. Risk Analysis, 14(6), 1085-1096.
Bruhn, C.M., Schutz, H.G. and Sommer, R. (1986) Attitude change toward food irradiation among conventional and alternative consumers. Food Technology, 40(1), 86-91.
Chess, C., Saville, A., Tamuz, M. and Greenberg, M. (1992) The organizational links between risk communication and risk management: the case of Sybron Chemicals Inc. Risk Analysis, 12(3), 431-438.
Covello, V.T., von Winterfeldt, D. and Slovic, P. (1986) Risk communication: a review of the literature. Risk Abstracts, 3, 171-182.
Crouch, E.A.C. and Wilson, R. (1982) Risk/Benefit Analysis. Ballinger, Cambridge.
Dake, K. (1991) Orientating dispositions in the perception of risk: an analysis of contemporary worldviews and cultural biases. Journal of Cross-Cultural Psychology, 22(1), 61-82.
Dunlap, R.E. and Beus, C.E. (1992) Understanding public concerns about pesticides: an empirical examination. Journal of Consumer Affairs, 26, 155-171.
Fessenden-Raden, J., Fitchen, J.M. and Heath, J.S. (1987) Providing risk information in communities: factors influencing what is heard and accepted. Science, Technology and Human Values, 12, 94-101.
Fischhoff, B. (1985) Environmental reporting: what to ask the experts. The Journalist, Winter, 11-15.
Fischhoff, B., Bostrom, A. and Quadrel, M.J. (1993) Risk perception and communication. Annual Review of Public Health, 14, 183-203.
Flynn, J., Slovic, P. and Mertz, C.K. (1993) Decidedly different: expert and public views of risks from a radioactive waste repository. Risk Analysis, 13(6), 643-648.
Frewer, L.J. and Shepherd, R. (1994) Attributing information to different sources: effects on the perceived qualities of the information, on the perceived relevance of the information, and on attitude formation. Public Understanding of Science, 3(4), 385-403.
Frewer, L.J., Raats, M.M. and Shepherd, R. (1993/4) Modelling the media: the transmission of risk information in the British press. Institute of Mathematics and its Applications to Technology and Industry, 5, 235-247.
Frewer, L.J., Shepherd, R. and Sparks, P. (1994) The interrelationship between perceived knowledge, control and risk associated with a range of food related hazards targeted at the self, other people and society. Journal of Food Safety, 14, 19-40.
Jasanoff, S. (1993) Bridging the two cultures of risk analysis. Risk Analysis, 13(2), 123-129.
Kasperson, R.E., Renn, O., Slovic, P. et al. (1988) The social amplification of risk: a conceptual framework. Risk Analysis, 8(2), 177-187.
Kraus, N., Malmfors, T. and Slovic, P. (1992) Intuitive toxicology: expert and lay judgements of chemical risks. Risk Analysis, 12(2), 215-232.
Lichtenstein, S., Slovic, P., Fischhoff, B. et al. (1978) Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4, 551-578.
McGuire, W.J. (1985) Attitudes and attitude change. In: Lindzey, G. and Aronson, E. (eds) Handbook of Social Psychology, 3rd edn, Vol. 2, pp. 233-346. Random House, New York.
National Research Council (1989) Improving Risk Communication. National Research Council, Washington DC.
Petty, R.E. and Cacioppo, J.T. (1986) Communication and Persuasion: Central and Peripheral Routes to Attitude Change. Springer-Verlag, New York.
Rosenberg, J. (1978) A question of ethics: the DNA controversy. American Educator, 2, 27-30.
Ruckelshaus, W.D. (1985) Risk, science, and democracy. Issues in Science and Technology, 1(3), 19-38.
Sharlin, H.I. (1986) EDB: a case study in communicating risk. Risk Analysis, 6, 61-68.
Sharlin, H.I. (1987) Macro-risks, microrisks and the media: the EDB case. In: Johnson, B.B. and Covello, V.T. (eds) The Social and Cultural Construction of Risk. Reidel, Dordrecht, pp. 183-198.
Slovic, P. (1986) Informing and educating the public about risk. Risk Analysis, 6(4), 403-415.
Slovic, P. (1987) Perception of risk. Science, 236, 280-285.
Slovic, P., Fischhoff, B. and Lichtenstein, S. (1982) Facts versus fears: understanding perceived risk. In: Kahneman, D., Slovic, P. and Tversky, A. (eds) Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge, pp. 463-489.
Slovic, P., Kraus, N. and Covello, V. (1990) What should we know about making risk comparisons? Risk Analysis, 10, 389-392.
Smith, M.E., van Ravenswaay, E.O. and Thompson, S.R. (1988) Sales loss determination in food contamination incidents: an application to milk bans in Hawaii. American Journal of Agricultural Economics, 70, 513-520.
Stamm, K. and Dube, P. (1994) The relationship of attitudinal components to risk in the media. Communication Research, 21, 105-123.
Tversky, A. and Kahneman, D. (1973) Availability: a heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232.
Tversky, A. and Kahneman, D. (1986) Rational choice and the framing of decisions. In: Hogarth, R.M. and Reder, M.W. (eds) Rational Choice: The Contrast between Economics and Psychology, pp. 67-94. University of Chicago Press, Chicago.
Van Ravenswaay, E.O. and Hoehn, J.P. (1991) The impact of health risk information on food demand: a case study of Alar and apples. In: Caswell, J.A. (ed.) Economics of Food Safety. Elsevier Science Publishing, New York, pp. 155-174.
Weterings, R.A.P.M. and Van Eijndhoven, J.C.M. (1989) Informing the public about uncertain risks. Risk Analysis, 9(4), 473-482.
Wiegman, O., Gutteling, J.M., Boer, H. and Houwen, R.J. (1989) Newspaper coverage of hazards and the reactions of readers. Journalism Quarterly, 66, 844-852.
19 Regulating food-borne risks R.J. SCHEUPLEIN
19.1
Introduction
For centuries, governments have had an essential role in assuring the safety and integrity of the food supply. The early regulatory focus was on fraud in the marketplace, but it very quickly expanded to include protection against the sale of unsafe food. Today, proscriptions against adulteration and misbranding of food are the core elements of food regulation in virtually all developed countries. In the USA and in some other countries, federal regulation has been extended to protecting the nutritional integrity of food and to providing nutritional information to the consumer (Hutt and Merrill, 1991). Truly effective oversight of the food supply by government is difficult and expensive to achieve. This has led in recent years to a demand for more individual accountability on the part of the food industry in the production, processing, storage and transport of food, in an effort to prevent problems from occurring. Good manufacturing practice guidelines (GMPs), specialized monitoring procedures for safety (HACCP) and quality certification procedures (ISO 9000) are examples of these preventive approaches to assuring food quality and safety. The globalization of trade in food has increased the demand for more international uniformity and harmonization in food standards, specifications and food regulatory procedures. This chapter presents a view of how our concerns over food safety developed, and what measures were universally adopted to address those concerns. The laws, procedures and methods of food safety regulation are then discussed in some detail as they apply to the USA. Finally, a brief description of food regulation in the international arena is presented.

19.2
History of food regulation
The adulteration of food is as old as commerce itself. Traditional homegrown food has always been considered safe by people in every country. However, as people began to depend on food prepared by strangers and shipped long distances, problems with adulteration arose. References to food adulteration can be found in the early literature of China, Greece and Rome. Theophrastus (370-285 BC), in his botanical treatise, Inquiry
Into Plants, reported on the use of artificial preservatives and flavors in food. Fragments of early Chinese literature record adulterations practiced during the early dynasties. As early as the second century BC, The Institutes of Chou recorded: 'The Supervisor of Markets had agents whose duty it was to prohibit the making of spurious products, and the defrauding of purchasers' (White, 1948). When markets began to offer fully prepared foods, opportunities for adulteration increased dramatically (Hart, 1952a). Foods such as honey, wines, ales and oils were the first candidates for 'sophistication', the early term for adulteration. Pliny the Elder (AD 24-79) devoted one book specifically to wines. Gallic wine met with Pliny's disfavor; he states that the dealers

have set up regular factories where they give a dark hue to their wine by means of smoke, and I regret to say, employ noxious herbs, inasmuch as a dealer actually used aloe for adulterating the flavor and color of wine. (Hart, 1952a)

By the 13th century, commerce in Europe was controlled by voluntary mercantile associations or guilds, with their own codes and regulations. Public punishment of an offender included the pillory, or stocks, or immersion in the pond, or the worst punishment: expulsion from the guild. One of the earliest guilds was the 'Pepperers'. King Henry III of England made the Pepperers the custodians of the official Weights Standards. Pepper was one of the more valuable spices in the spice trade, owing to its use as a meat preservative. This guild established the profession of 'garbelers', the first public food inspectors in England. 'Garble', from an old Arabic word meaning 'to sift or select', was the process of detecting and removing impurities from spices and similar products and certifying their commercial purity (Hart, 1952a). Bread, as one of the staples of life, came under legal control early in English history.
King John, in 1202, proclaimed the first Assize of Bread. This was at first an attempt to control the cost of bread by varying the weight of the loaves according to the price of a quarter of wheat, while keeping the price of the loaf fixed. Later laws regulated quality as well. Short weights brought corporal punishment and, for repeated offenses, confiscation of the bakery. Pike describes an imaginary journey from Dover to London in 1348:

Here perhaps a baker, with a loaf around his neck, was being jeered and pelted in the pillory because he had given short weight . . . there, perhaps, an oven was being pulled down, because a baker had been detected in a third offense and had been compelled to abjure trade in the city forever. . . . Pillories were used to punish the sellers of bad meat, poultry and fish, of oats good at the top of the sack and bad below. (Pike, History of Crime in England, 1873, cited in Hart, 1952a)
After a time, the voluntary policing by guilds was replaced by local ordinances or state laws. A London ordinance of about 1400 provided that: No poulterer or other person whatsoever shall expose for sale any manner of poultry that is unsound or unwholesome to mans body, under pain of punishment by the pillory, and the article being burnt under him. (Filby, 1934)
An edict in Paris in 1396 banned the coloring of butter. Nuremberg, in 1444, punished an adulterer of the expensive spice, saffron, by burning him at the stake, over a fire of his own saffron. In Biebrich in 1482 a vintner who had adulterated his product was condemned to drink six quarts of his own wine, from which he died. The 16th and 17th centuries offered new opportunities to adulterers, with the bringing of such luxuries as tea, coffee, chocolate and sugar from the New World. The cruder forms of adulteration were replaced by more skillful substitutions that were beyond the detection skills of food inspectors. Adulteration had become a fine art, food laws were of little avail, and for the most part they were designed to preserve income for the state, not to protect public health. Addison's humorous comment on imitation wine illustrates the times: There is in this city a certain fraternity of chemical operators who work underground in holes, caverns and dark retirements. . . . They can squeeze Bordeaux out of the sloe, and draw champagne from an apple. (The Tattler, No. 131, 1710, cited in Hart, 1952a)
The period from 1800 to the present is referred to as the legislative period by historians of food regulation, as government authority increased in response to demand (Hart, 1952b). Reported food adulterations increased, primarily owing to the greater ability to detect them. The development of the analytical balance and the microscope enabled chemists to rapidly detect, identify and measure foreign substances in foods. Frederick Accum, a German chemist and pharmacist, published the most celebrated of his books and pamphlets on food adulteration in 1820. This book had a very long and academic title, which the public shortened to 'Death in the Pot'. Accum told of illnesses and deaths from eating pickles 'greened' with copper sulfate, and cheese colored with vermilion and red lead. He described adulterated gin as containing oil of vitriol and capsicum to impart pungency, extracts of orris and angelica to give body, ether to enhance the alcoholic taste, turpentine to give strength, and arsenic to promote thirst so that the consumer would drink more. In England in 1851-1855, the medical journal Lancet published a similar series of articles by A.H. Hassall, a British physician who owned his own microscope and had a keen interest in food adulteration. Hassall's articles covered practices throughout the major cities in England, and
listed names and addresses of merchants selling adulterated foods. The articles created a sensation in the press. One reviewer said:

To such a pitch of refinement has the art of fabrication of alimentary substances reached that the very articles used to adulterate are themselves adulterated, and while one tradesman is picking the pockets of his customers, a still more cunning rogue is, unknown to himself, deep in his own. (Quarterly Review 460, 1855, cited in Hart, 1952a)
Hassall published extensive tables showing adulteration of common foods. One of the major adulterants of cocoa was cocoa shell, a much cheaper diluent. Excise figures were quoted showing that, during one year, 612 122 pounds of cocoa shell were imported to Ireland, and only 4000 pounds of cocoa (Hart, 1952b). As a result of the exposé by the Lancet, Parliament appointed a committee to investigate the extent of adulteration, and hearings were held. The committee concluded that adulteration was rampant, public health was endangered, public morality tainted, and pecuniary fraud committed on the whole community. Continued public agitation and a series of food-poisoning incidents persuaded Parliament to pass the Adulteration of Food and Drink Act 1860. This was replaced in 1875 by the Sale of Food and Drugs Act. The legislative history of other countries went through similar cycles. The German empire passed a general food law in 1879. The law in The Netherlands was even earlier (1829), but it covered only the addition of poisonous ingredients. France had passed one in 1851, and in 1855 it was amended to include beverages. The improvement in food quality was gradual; the new laws were no panacea. In London around 1900, the coloring of milk was so common that housewives refused to buy the uncolored product, thinking that it was adulterated. The natural color of milk is almost white, with a touch of yellow, while skimmed or diluted milk has a bluish cast. To hide this, London merchants would add a little yellow color. Gradually, all the milk, whether skimmed or not, became artificially colored. The buyer, thinking the yellow tinge was the hallmark of quality, refused to buy the natural unadulterated milk. The British banned the coloring of milk in 1925. The Director of the Paris Municipal Laboratory in 1900 described the practice of milk distribution in his city this way: The first fraud was the farmer's, who skimmed off some of the rich top milk.
The next dilution or skimming took place at the receiving stations of the dairy companies. The cans, duly sealed, started for Paris. There they were picked up by the drivers of the retail delivery wagons. They again diluted the milk before starting on their rounds. It was reported that these drivers tripled their wages by this practice. From this money they contributed $4 per week to a fund to pay for legal defense for any drivers caught, and to pay double wages for any drivers in jail. The article quoted said that there were 800 drivers in Paris. Thus a fund of $3,200 a week was maintained. (Hart, 1952b)
These historical attempts to deal with food safety and deceptive practices have some salience today, for two reasons. First, many nations today are in a phase of very rapid development and are just beginning a broad commerce in food products. Human nature has not changed. Deceptive practices are strong temptations in countries that are just starting to build their commerce and infrastructure. Second, there will continue to be a strong movement to harmonize and internationalize food trade and to reduce trade barriers. The major barriers to international trade are cultural and economic. The illustrations above and many others that are recorded in the literature show how such cultural and economic barriers have been dealt with and sometimes overcome in the past.

19.2.7
Why are intentional chemical additives used today?
Generally, the addition of chemicals to foods for whatever purpose is regarded suspiciously by people in most countries. As the historical examples indicate, this is not always paranoia. Both the unnaturalness of the act of adding chemicals to food and the possibility of its being done carelessly are understandable concerns. What is relatively new in the world is the capability to add chemicals to food to improve its quality rather than damage it. Chemical food additives are widely used in most countries and are approved for international trade. Chemical additives are employed to help preserve food, improve nutritional value, enhance quality or consumer acceptability, ensure seasonal consistency, or make food more readily available. In addition to intentional additives, there may be additives that are incidental, such as pesticides which are today essential to the growth and production of many crops. Pesticide residues can remain in food at very low amounts as a result of their use on the crops. It is well to remember that foods themselves are chemical mixtures, most of them uncharacterized, and some of these components are quite toxic as individual chemicals. Their concentration in food is low enough so that they are quite safe as consumed. The same dilutional principle works for chemical food additives, and as long as they are properly used by the manufacturer, any significant potential risks can be minimized. All these additives are closely regulated in the USA and the European Community and most developed countries to ensure that the amounts in food are far below toxic levels.
19.3 Food regulation in the USA

19.3.1
Early regulation
Early efforts at food regulation in the USA had to promise tangible benefits in order to overcome the natural reluctance of Americans to have
government involved in personal food choices. Eating home-cooked food was considered too important and too fundamental a human right to be subject to government interference. However, as people began to depend on food prepared by strangers and shipped long distances, and with the advent of food preservation, processing and canning, many opportunities arose for illegitimate profit through food adulteration. Consumers gradually came to expect and demand that the food they purchased was not intentionally or inadvertently debased. They also came to expect and demand honest labeling and fair dealing in food commerce. Much of the original impetus for the first food laws in the USA was economic as well. Legitimate businesses could not compete successfully if their competitors were allowed to sell inferior and cheaper food products that were fraudulently misrepresented as genuine. US food commerce could not expand abroad if the food products could not be guaranteed as safe and of good quality. US food laws represent the outcome of years of informal and formal risk-benefit debates that focused on these two major societal concerns: personal freedom and protection from harm. Federal regulation of food and drugs in the USA began with the Pure Food and Drugs Act of 1906, but the antecedents of that law go back to the statutes and the traditions in England and the European continent that the colonists brought with them. From the American Revolution, through the first half of the 19th century, food was regulated locally or by individual states. States and towns renewed such laws sporadically. In New York at the end of the 18th century, no fresh meat or dead fish could be sold in public markets after 10 o'clock in the morning. As the country grew and food commerce developed, so did new opportunities for deception and fraud. Food deception in the USA soon mimicked that in Europe, with interesting American innovations. 
In New York and Massachusetts in 1880, 71% of the olive oil was mixed with cottonseed oil which had been shipped out of the country, returned and sold as 'olive oil'. In New Hampshire, the home of the sugar maple, the state board of health found that only 50% of the samples were pure maple syrup; the rest were adulterated with cane sugar. As late as 1887, pure coffee was hard to find in the USA; it was usually diluted with chicory or roasted potato. Candy was colored with mineral pigments, including compounds of lead, arsenic, mercury and chromium. Milk was diluted with water or skimmed, and sold more cheaply. By 1840, some 60 000 families in New York used 'swill milk' from distillery dairies. Swill milk was milk from dairy herds attached to distilleries, where hot waste from the fermentation vats was the primary feed for the cows. In 1848 a committee of the New York Academy of Medicine drafted a report that blamed 'swill milk' for the high infant death rate in the city. Some dairies tried to supply hay to supplement the swill, but the hot swill so damaged the teeth of the cows that they could not chew the
hay. Criticism of dairies in New York City began in the late 1820s and renewed itself periodically until a 'swill-milk' bill was passed in 1864, defining milk coming from cows fed distillery swill as automatically adulterated (Young, 1989). Following large-scale growth of the canning industry after the Civil War (1865), and prior to the acceptance of Pasteur's germ theory as an explanation for food spoilage, canners had begun to use chemical preservatives, including boracic acid, salicylic acid, benzoic acid and formaldehyde, in their processing operations to prevent spoilage. The possibility that these preservatives could cause harm was an important consideration for the early proponents of food regulatory reform, but no official of the federal government during this period suggested that the regulation of foods or drugs might fall within federal jurisdiction. In the early part of the 20th century, the debasement of food was still widespread in the USA. As Young (1989) states, skepticism about the quality of American meat had long existed, and European concern provoked the meat inspection laws of 1890 and 1891. American agricultural exports doubled in the 1870s due to severe crop failures in Europe, the increasing productivity on American farms, and the ingenuity of meat packers. Moving 'disassembly' lines were devised to butcher meat. Refrigerator cars and cold storage warehouses became central to distribution systems. Overseas imports expanded and prices fell, so that the poor in Germany could eat more pork, and the poor in Britain more beef, than they earlier could afford (Young, 1989). Such competition worried European farmers, especially when their own crop conditions improved. Beginning in 1879, a wave of embargoes on American meat swept across Europe, supported by scare stories about trichinae and cholera in American hogs and pleuropneumonia in cattle.
These stories were untrue, but they hurt American exports and a law designed to inspire confidence in the safety of American meat was passed in 1890 (US, 1890). The law banned the import or export of infected cattle or unwholesome meat, and authorized careful inspection of pork destined for export. The law also prohibited the importation of adulterated articles of food and drink into the USA. In 1891 the law was strengthened to include mandatory antemortem inspection of all live cattle, hogs and sheep. Sales to Europe increased and by 1895 reached their pre-embargo high. Three laws were enacted in the late 1880s which imposed discriminatory taxes on oleomargarine, filled or imitation cheese, and mixed flour. In 1879 the first general bill aimed at protecting the purity of food and drink throughout the nation was introduced into the House of Representatives (Kebler, 1930). The bill failed to pass, but it was a significant accomplishment; no session of the Congress from 1879 to 1906 failed to have before it for consideration a broad food bill. However, major obstacles had still to be overcome and it would take 25 years to get a
food bill through Congress. Eventually, with the combined support of reformers like Dr Harvey Wiley, who wanted to protect the public health from 'danger', and industry, which wanted to improve its markets and profits by ridding itself of 'deception', the Federal Food and Drugs Act of 1906 was passed. The outlines of the modern food law were established in the revised Federal Food, Drug and Cosmetic Act of 1938.
19.3.2
Statutory background of current US food regulation
US food statutes embody two major concepts that form the basis of the substantive provisions: adulteration and misbranding. Technically, food itself is not regulated; a food must first be deemed adulterated or misbranded; it is the adulterated or misbranded food that is subject to regulation.

Adulteration. In 1938 Congress established three broad safety standards applicable to potentially toxic substances in food.

1. The reasonably permissive 'ordinarily injurious' safety standard was applied to ordinary food, i.e. non-added, natural substances. The 'ordinarily injurious' standard means, in effect, that if people can eat a food without apparent ill-effect, it passes the standard. In a famous case, the court held that canned oysters were not adulterated under this standard despite the presence of oyster shell fragments (a deleterious substance) (US District Court, 1942). The reasoning was that it was impossible to eliminate all the shell fragments and that most people can eat oysters quite safely.

2. The more stringent 'may render injurious' standard was applied to added constituents that were neither necessary nor unavoidable. This standard implies that a food may be considered adulterated if it contains an added 'poisonous or deleterious substance' that might cause harm. The US Supreme Court interpreted this standard to mean that the key issue was the quantity of the added substance required to affect health adversely: The act has placed upon the Government the burden of establishing, in order to secure a verdict of condemnation under this statute, that the added poisonous or deleterious substances must be such as may render such article injurious to health. . . . It may be consumed, when prepared as a food, by the strong and the weak, the old and the young, the well and the sick; and it is intended that if [it] . . . may possibly injure the health of any of these, it shall come within the ban of the statute.
If it cannot by any possibility, when the facts are reasonably considered, injure the health of any consumer, such [food], though having a small addition of poisonous or deleterious ingredients, may not be condemned under the act. (US Supreme Court, 1914)
3. Under section 406 of the Act, the Food and Drug Administration (FDA) can establish tolerances 'for the protection of public health' for added constituents that are 'necessary in the production of a food' or whose occurrence is 'unavoidable by good manufacturing practice'. The initial purpose of this provision was to permit the FDA to set tolerances for pesticide residues in food. However, this use has been supplanted by specific provisions for pesticides, and section 406 is now used for 'unavoidable' environmental contaminants, e.g. dioxins and PCBs. During the period 1938-1958, the government had to prove that a food was adulterated and that the adulteration violated the standard before it could act against the food. This proved a difficult and expensive task. The setting of a standard is one thing, but proving that it is violated time after time is quite another. Difficulties with implementing these provisions during the 1938-1958 period made the FDA look forward to the benefit of a pre-market approval system that would reverse the burden of proof and reduce excessive compliance costs. In subsequent years the three standards were amended by introducing pre-market approval systems for separate categories of added substances. The Miller Pesticide Amendments of 1954 required tolerances for pesticides under section 408. In 1958 Congress passed the Food Additives Amendment, which established a pre-marketing approval scheme for food additives under section 409. This amendment also applies to indirect additives, such as packaging materials that 'may reasonably be expected to become' components of food. A food that bears or contains a food additive whose use the FDA has not approved, or that contains an approved additive in a quantity exceeding the limits specified by the FDA, is adulterated under section 402(a)(2)(C) (Hutt and Merrill, 1991). The famous Delaney Clause was added to the Act at this time.
This clause precludes the FDA from approving any food additive found to induce cancer in humans or in animals when administered by ingestion or other appropriate means. Congressional concern over the risks of synthetic chemicals increased after World War II in response to new technological aids that were rapidly introduced into food production and food processing. In order to manage the problems associated with the variety of pest-control agents such as DDT, aldrin, dieldrin, hexachlorobenzene and the organophosphates, Congress in 1947 passed the Federal Insecticide, Fungicide and Rodenticide Act (FIFRA). In June 1950 the House created a select committee to investigate possible hazards posed by the increasing use of (synthetic) chemicals in foods. The Committee report, issued in June 1952, concluded that the law was inadequate to ensure the protection of the public health and recommended new legislation. The eventual result was the Pesticide Chemicals Act of 1954, the
Food Additives Amendment of 1958, the Color Additive Amendments of 1960, and, of course, the Delaney Clause. Because the Delaney Clause is drafted as a limitation on the FDA's approval authority under section 409, it technically applies only to 'food additives'. Excepted from its coverage are (1) substances whose use in food is 'generally recognized as safe by qualified experts', referred to by the acronym GRAS (includes substances like sugar, salt, cocoa), and (2) substances that either the FDA or the US Department of Agriculture (USDA) had sanctioned for use prior to 1958. In 1960 Congress established a similar pre-approval scheme for color additives under section 721. A second Delaney Clause prohibits the listing of any color additive that has been shown to induce cancer in humans or animals. In 1968 Congress revised the procedure for evaluating animal drugs by prescribing a licensure scheme under section 512. Under the amended Act, no animal drug that is likely to leave residues in edible tissue of livestock may be used, nor may food containing residues be marketed without prior FDA approval. The approval process requires the submission of technical information and safety data similar to those for food additives, but emphasizes the safety of possible drug residues in edible tissue. Food and color additives, animal drug residues, pesticides and other added substances are effectively subjected to a stiffer standard than non-added substances. The new safety standard requires a positive 'demonstration of safety' to support a finding of a 'reasonable certainty of no harm'. The demonstration occurs through an evaluation of a petition which must show that the substance may be safely used at the recommended allowable daily intake (ADI). This petition is required to contain information on the intended use, chemical composition, toxicity and intended use levels (see detailed discussion below).
Since food additives must be shown to be safe to a reasonable certainty, this has been termed the 'reasonable certainty of no harm' standard.

Misbranding. Since this chapter is intended to discuss the regulation of risk rather than fraud, little will be said regarding this important area of food regulation. In various ways the misbranding provisions of the law are designed to force food suppliers to tell the truth about their products. Section 403(a) of the US FD&C Act prohibits label statements that are 'false or misleading in any particular'. Additional provisions are aimed at preventing other types of deceptions with respect to quality, quantity and identity. Other provisions force manufacturers to provide information that they otherwise might not choose to provide - such as the complete ingredients of a product (Hutt and Merrill, 1991). The NLEA (Nutrition Labeling and Education Act of 1990) goes even further in the affirmative labeling arena. It requires suppliers to provide, on the label, nutrition information that will help the consumer maintain
healthy dietary practices. This includes a list of the contained nutrients, together with the amounts per serving compared to normal daily requirements. The NLEA also permits health claims to be made on the label, if adequate scientific data exist justifying the claim and after the label is affirmatively approved by the FDA.
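The per-serving comparison that such labels convey is simple arithmetic: the amount of each nutrient in one serving is expressed as a percentage of a reference daily intake. A minimal sketch follows; all daily-value and serving figures here are hypothetical, chosen only to illustrate the calculation, and are not the FDA's reference values.

```python
# Sketch of the per-serving nutrient comparison a nutrition label conveys.
# All daily values and serving amounts below are hypothetical illustrations.

daily_values = {        # reference daily intake per nutrient (hypothetical)
    "fat_g": 65.0,
    "sodium_mg": 2400.0,
    "fiber_g": 25.0,
}
per_serving = {         # amounts in one serving of a hypothetical product
    "fat_g": 13.0,
    "sodium_mg": 360.0,
    "fiber_g": 2.5,
}

# Percent of the daily value supplied by one serving, rounded to whole percent.
percent_dv = {
    nutrient: round(100.0 * per_serving[nutrient] / daily_values[nutrient])
    for nutrient in daily_values
}

print(percent_dv)  # {'fat_g': 20, 'sodium_mg': 15, 'fiber_g': 10}
```

The same structure scales to a full label: one reference table, one per-serving table, and a ratio per nutrient.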
19.3.3
The process of regulatory approval
Food additives. As soon as the food additive petition is filed (accepted for review) by the FDA, the regulatory clock starts. By statute, the FDA has 180 days to review the petition. Invariably, the FDA does not meet this deadline and has come under intense criticism from food manufacturers and processors and, more recently, from Congress. The FDA makes two points in rebuttal. First, it says that scientific questions vital to human health inevitably arise in the evaluation of the petition that take time to resolve. Often, it is necessary to ask the petitioner for more data or to clarify the data in hand. The FDA has two options: to stop the official clock while the question is being resolved, or to let the clock run and deny the petition without prejudice if the 180-day time period expires. More often than not, the sponsor would rather that the official clock be stopped. This keeps the petition active and saves the sponsor the trouble of submitting it anew - but the real time often exceeds 180 days, sometimes by years. The second point the FDA makes is that it does not have sufficient resources to meet the 180-day period. Current safety evaluation is far more complex and time-consuming than when the original food additive statute was passed in 1958. Congress was not impressed by these arguments when the FDA tried to explain the delays in the processing of food additive petitions at the recent Congressional Hearings in June 1995. Currently, the FDA is attempting to revise the process, by shifting some of its resources from research activities, by sharply curtailing its GRAS procedures and by possibly adding third party, external review for some of the petitions. 
The current food additive law (section 409) requires that the petition shall, in addition to any explanatory or supporting data, contain: (A) the name and all pertinent information concerning such food additive, including, where available, its chemical identity and composition; (B) a statement of the conditions of the proposed use of such additive, including all directions, recommendations, and suggestions proposed for the use of such additive, and including specimens of its proposed labeling; (C) all relevant data bearing on the physical or other technical effect such additive is intended to produce, and the quantity of such additive required to produce such effect;
(D) a description of practicable methods for determining the quantity of such additive in or on food, and any substance formed in or on food, because of its use; and (E) full reports of investigations made with respect to the safety for use of such additive, including full information as to the methods and controls used in conducting such investigations.
These data are evaluated in the light of other requirements in the statute. These include the Delaney Clause, which forbids the addition of a carcinogen to food (409(c)(3)(A)). Tolerances may not be established at levels higher than that necessary to accomplish the intended physical or other technical effect of the additive (409(c)(4)(A)). The additive must be effective for its intended purpose (409(c)(4)(B)). Other factors that must be considered include the probable consumption of the additive, and the cumulative effect of the additive in the diet, taking into account any chemically or pharmacologically related substances and appropriate safety factors (409(c)(5)(A-C)). The approval process for food additives usually unfolds in the following way. The sponsors of the food additive often contact the FDA to discuss the specific studies to be conducted, and sometimes to discuss the details of the protocols. This is not required, but it is sometimes very helpful to the sponsor to discuss the planned studies with the FDA scientists who will later have to review the completed tests. When eventually submitted, the studies are evaluated for completeness and quality. The studies have to meet minimum test standards. These 'core' standards are typically the same or comparable to those established by Codex or other international groups. FDA protocols for animal toxicology studies are published by the Center for Food Safety and Applied Nutrition (CFSAN) in its 'Redbook', which describes the methodology of the toxicology tests and indicates the battery of tests required for approval. The principles used are virtually identical to those published in 'Principles for the Safety Assessment of Food Additives and Contaminants in Food' by WHO (World Health Organization, 1987). The number and type of tests required depend on the proposed additive's anticipated extent of exposure and chemical structure. 
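The statutory factors above - probable consumption, cumulative effect and appropriate safety factors - translate into straightforward intake arithmetic. The sketch below is illustrative only: the NOEL, use levels and consumption figures are invented for the example and come from no real petition, and actual evaluations are considerably more elaborate.

```python
# Illustrative sketch of the intake arithmetic behind a food additive
# evaluation. Every number here is hypothetical, invented for illustration.

# 1. Derive an acceptable daily intake (ADI) from an animal NOEL using the
#    conventional 100-fold safety factor (10x animal-to-human extrapolation,
#    10x for variability among humans).
noel_mg_per_kg_bw = 250.0                       # hypothetical NOEL from a feeding study
adi_mg_per_kg_bw = noel_mg_per_kg_bw / 100.0    # = 2.5 mg/kg bw/day

# 2. Estimate probable consumption (EDI) by summing, over the proposed food
#    uses, (maximum use level) x (daily consumption of that food).
use_level_mg_per_kg_food = {"soft drinks": 150.0, "baked goods": 180.0}
consumption_kg_per_day = {"soft drinks": 0.50, "baked goods": 0.25}

edi_mg_per_day = sum(
    use_level_mg_per_kg_food[food] * consumption_kg_per_day[food]
    for food in use_level_mg_per_kg_food
)                                               # 75 + 45 = 120 mg/day

# 3. Compare on a body-weight basis (60 kg adult assumed).
edi_mg_per_kg_bw = edi_mg_per_day / 60.0        # = 2.0 mg/kg bw/day

print(f"ADI = {adi_mg_per_kg_bw} mg/kg bw/day, EDI = {edi_mg_per_kg_bw} mg/kg bw/day")
print("within ADI" if edi_mg_per_kg_bw <= adi_mg_per_kg_bw else "exceeds ADI")
```

In a real petition the consumption side rests on food-survey data and the ADI on the full battery of toxicology studies, but the comparison being made is essentially this one.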
Difficulties with acceptance of a specific study seldom arise as long as the study is scientifically sound and follows a standard protocol. The contentious issues typically arise over the amount of testing required and the FDA's evaluation of those tests. Questions raised in the evaluation are carefully considered and sometimes resolved by requesting further studies, sometimes by getting independent expert opinion and sometimes by both. This hazard evaluation part of the process consumes a large amount of time and work that is often underestimated both by the sponsor and by the FDA. There are two main reasons for this. First, questions
arising from evaluation of the scientific studies invariably occur and take longer than expected to resolve. Second, the evaluation of all the information ultimately leads to a focus on a single critical study, the one describing the most crucial toxic effect, i.e. the one occurring at the lowest dose and on which the lowest NOEL (no observed effect level) is based. The outcome is based on only one adverse effect, and one often tends to lose sight of all the previous work and analysis that was necessary in order to exclude other possible adverse effects. The statutory provisions for color additives (section 721) and animal drugs (section 512) are similar in purpose and effect to the food additive provisions, but there are certain specific differences.

Color additives. Color additives are intended either for ingestion or for topical application in cosmetics, and this dual use is the basis for a special provision in the color additive version of the Delaney Clause (721(b)(5)(B)). In the case where the color additive is not ingested, it is permissible to base the evaluation of the color's safety on an appropriate study for such use, other than a feeding study. The FDA could, if it wished, determine a skin painting study to be appropriate. But, despite this apparent flexibility, the FDA has typically requested long-term feeding studies for all additives, colors and animal drugs included.

Animal drugs. Feed additives and animal drugs, unlike food or color additives, are not directly ingested by humans but must pass through the animal first, where they may be metabolized or eliminated. The regulatory focus is on the residues of the additive that may still remain in the edible tissues of the animal after it has been slaughtered, not on the added material. Section 512(d)(1)(I) contains a Delaney Clause with a special exemption proviso to permit a carcinogen if 'no residue' of the drug is detected with a suitable method.
It has an historical importance because later attempts to implement this provision in a sensible fashion gave rise to the regulatory use of quantitative risk assessment.

19.3.4 Local enforcement - FDA field offices

In most European countries, enforcement of the food laws is the responsibility of the local officials. In the USA the FDA maintains the responsibility for local enforcement of the FD&C Act, although states may have additional food and food sanitation laws. Since the earliest days, the FDA has had most of its resources 'in the field' in regional and district offices throughout the country. These field offices form the backbone of the inspection and enforcement arm of the FDA. They conduct inspections of food manufacturing companies and food warehouses, conduct import inspections, investigate potential violative cases, prepare case
materials and prosecute violations of the FD&C Act in the courts. (Other FDA-regulated products in addition to food products, i.e. drugs, devices, biologics, cosmetics etc., are, of course, also handled by these same field offices.) One of the major functions of the field offices is gathering and analyzing samples of potentially violative products. Most of the district offices operate sizeable analytical laboratories and have the capacity to analyze food for its content of additives, contaminants, pesticides and other regulated constituents. Some of the laboratories have speciality functions; for example, the Kansas City laboratory conducts the annual 'Market Basket Survey' of the nation's daily intake of pesticide residues and contaminants from food. These field offices also furnish a large part of the FDA's outreach service to consumers, providing information and advice from consumer safety officers.
19.3.5 HACCP, GMPs and other prevention systems

HACCP. In 1992, several children in Seattle, Washington, died and many others were made ill by eating hamburgers from a fast food restaurant. The organism was E. coli O157:H7; it produces hemorrhagic colitis and diarrhea, and can be fatal in children and compromised adults. It was another in a series of sporadic and relatively infrequent but nonetheless deadly episodes of food poisoning. This dramatic event, along with others, illustrated some enduring weaknesses in the food distribution system in the USA and in other developed countries. The first is the existence of a highly centralized and large food distribution system. The tainted hamburger was prepared for many restaurants from the same central source and was then distributed widely throughout several states. An error made anywhere in the system was likely to be propagated widely. The second weakness is the limited effectiveness of both state and federal inspection and control. The FDA's jurisdiction extends to food manufacturing plants and warehouses engaged in food storage and interstate shipment, and it can inspect these only infrequently due to limited resources. Finally, food-borne pathogens appear to be increasing or at least developing new biological niches in response to changing patterns of food production, distribution and consumption. Some of the complications of the pathogen-induced illnesses are also being recognized as more serious, such as arthritis, heart disease and neurological or kidney damage. Most of these same problems exist in all other places where food commerce has reached a fair degree of complexity - they are not unique to the USA. In July 1994 the FDA gave notice that it intended to carry out a hazard analysis critical control points (HACCP)-type program to enhance its
ability to protect the food supply. The FDA stressed the increasing size and complexity of the food safety problem and emphasized the need for a workable preventive strategy: Inspections that FDA conducts under the current system can determine the adequacy of conditions in a food plant at the time of the inspection but not whether the company has in place a food safety assurance program that is operating reliably and consistently to produce safe food at all times. Furthermore, the current approach is generally reactive, not preventive. It is effective in detecting and correcting problems after they occur, but except in certain limited areas such as the regulation of infant formula and low acid canned foods, it is not currently based on a system of preventive controls. (FDA, ANPR 7 July 1995)
The HACCP concept is based on a systematic approach to the identification and assessment of risk, and the control of the biological, chemical and physical hazards associated with a particular food production process. HACCP is a preventive strategy. It is based on development of a plan by the food processor that anticipates possible food safety hazards and identifies the points in the production process (critical control points - CCPs) where a failure would be likely to result in a hazard or the continuation of a hazard. Under HACCP, critical control points are identified, systematically monitored to stay below critical limits, and faithfully recorded. The FDA's inspection is limited to the inspection of the manufacturer's own HACCP plan and the monitoring records, which must be current and accurate. Use of HACCP underscores the industry responsibility for continuous problem prevention and problem-solving, rather than reliance on traditional inspections by regulatory agencies to detect loss of control. The World Health Organization and the European Union are also implementing HACCP plans for their food industries. Some form of HACCP may well be a requirement in order to participate effectively in international food trade. GMPs. Section 402(a)(4) of the FD&C Act gives the FDA the authority to declare a food adulterated if the conditions under which it was prepared could have contaminated it or caused it to become injurious to health. It is not necessary for the government to prove that a food is in fact contaminated or actually poses a risk to health, only that there is a reasonable likelihood that it could. Court decisions through the years expressed the view that this section of the Act was too vague and that it would be beneficial if the FDA promulgated regulations which defined more clearly what appropriate hygienic conditions were (Dunkelberger, 1995). 
In 1969 the FDA published final regulations on good manufacturing practices (GMPs), implementing section 402(a)(4). GMP regulations are substantive rules and are directly enforceable by the agency. GMPs, however, have general
application to all foods under the FDA's jurisdiction ('umbrella GMPs'), and are accordingly phrased in general, rather than specific, terms. In 1986 the FDA revised the GMPs in title 21 of the Code of Federal Regulations (CFR) part 110 (US Federal Register, 1986).

ISO 9000. Much of the European food industry has voluntarily adopted self-monitoring procedures. Like GMPs and also HACCP, they are aimed at preventing safety problems, but through non-government, accredited third-party certification of: physical plants, standard operating procedures, training of personnel and appropriate monitoring. Unlike GMPs and HACCP, however, ISO 9000 is broader and includes not only safety issues but total quality management from producer to consumer (ISO 9000, 1992).

19.4 Scientific basis for food safety evaluation

19.4.1 Traditional approach - the use of animal data
Safety evaluation for food substances is typically conducted without clinical data. This contrasts sharply with the way in which drugs are evaluated; their ultimate approval rests on safety and efficacy studies conducted in humans. There are usually no clinical or epidemiological studies for food substances, because there is no market experience with a proposed additive and there are rarely epidemiological data on environmental contaminants in food. Thus in virtually all cases, the regulatory safety decisions on food substances are based on animal data, and safety evaluation becomes a problem in comparative toxicology and comparative pharmacokinetics. The central scientific questions are: how to obtain reliable estimates of human risk (or its regulatory equivalent) from studies in animals, and how to determine human levels of consumption that will assure safety?

Acceptable daily intakes - ADIs. The FDA's traditional approach for setting safe levels of ingestion for non-carcinogenic substances was first established at the FDA in the late 1940s (Lehman et al., 1949). Safety was established by first demonstrating an effect level in an animal study so that the nature of the toxicity and the organs affected were known. Then, a no effect level or NOEL was determined by studies at lower doses. Finally, an acceptable daily intake for humans (ADI) was established by dividing the NOEL by a safety factor (SF) of 100:

ADI = NOEL/SF = NOEL/100

This deceptively simple process embodies two major scientific extrapolations: animal to human and high to low dose. A major point to make
is that both extrapolations, high to low dose and animal to human, are compressed into the safety factor (SF). The SF accounts for both pharmacokinetic and pharmacodynamic differences between animals and humans as well as intraspecies variability in these quantities (see below). The choice of the SF of 100 was arbitrary, but not capricious. It was consistent with the data then available. It primarily reflected the fact that humans were generally more sensitive to the acute effects of chemicals than were laboratory animals given the same unit dose, based on body weight scaling. The 100-fold SF also reflected the fact that humans are genetically more diverse than laboratory animals. It also subsumed possible differences in the state of health, type of diet and other factors that might make caged experimental animals less susceptible than human beings. As long as the scientific community believed that the adverse effects observed in animals exhibited thresholds and were reasonably akin to those in humans, the NOEL-SF approach seemed a reasonable way of determining safety. To assure the validity of the approach, the animal toxicology tests on which it is based have to be designed appropriately, conducted with care and evaluated with judgement. Various guidelines and protocols for the conduct of such tests are available (Food and Drug Administration, 1993; World Health Organization, 1987). The NOEL-SF approach is still used by the FDA today as a demonstration that the statutory requirement of a 'reasonable certainty of no harm' has been met. The NOEL-SF method is used by all developed countries and is the basis for JECFA determinations of food additive safety for the Codex Alimentarius Commission, the World Health Organization and the Food and Agriculture Organization (FAO).
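The NOEL-SF arithmetic described above can be sketched as follows. The NOEL, the decomposition of the 100-fold SF into two 10-fold components, and the 60 kg consumer are illustrative assumptions, not values taken from this chapter:

```python
# Sketch of the traditional NOEL-SF (ADI) calculation.
# All numeric inputs are hypothetical examples, not values from the text.

def adi_from_noel(noel_mg_per_kg: float, safety_factor: float = 100.0) -> float:
    """ADI = NOEL / SF, in mg per kg body weight per day."""
    return noel_mg_per_kg / safety_factor

# The 100-fold SF is conventionally viewed as 10x (animal -> human)
# times 10x (human interindividual variability).
INTERSPECIES, INTRASPECIES = 10.0, 10.0

noel = 50.0  # hypothetical NOEL from a chronic rat study, mg/kg bw/day
adi = adi_from_noel(noel, INTERSPECIES * INTRASPECIES)  # 0.5 mg/kg bw/day

# Permissible daily intake for a hypothetical 60 kg consumer:
daily_intake_mg = adi * 60.0  # 30 mg/day
```

The single division is the whole of the traditional procedure; as the text goes on to argue, everything else is compressed into the SF.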
When the target organ and the toxic response are identified in the experimental animal, there are usually only three experimental quantities explicitly given - incidence, daily dose rate and body weight - and the process seems deceptively simple. The FDA has traditionally used body weight interspecies scaling, i.e. (bw)^1, to obtain 'equivalent' doses for humans. This in effect produces an equivalence based on a crude concentration of the chemical or an average concentration over a period of a day. Intake per unit weight is a crude surrogate for a concentration, because it takes no account of absorption, distribution into organs, metabolism, clearance or specific binding or other factors. In the absence of better data, such an approach, if consistently applied, can be justified as a necessary default procedure. However it appears that pharmacokinetic theory more generally supports (bw)^3/4 scaling (O'Flaherty, 1989), and it may be just a matter of time before the traditional approach is modified. The Environmental Protection Agency (EPA), FDA and Consumer Product Safety Commission (CPSC) have jointly published a proposal to adopt (bw)^3/4 interspecies scaling for carcinogens (Environmental Protection Agency, 1992).
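The practical difference between the two scaling conventions can be sketched numerically; the species body weights and the rat dose below are hypothetical illustrations:

```python
# Sketch: converting an animal dose (mg/kg bw/day) to a human-equivalent dose
# under (bw)^1 versus (bw)^3/4 interspecies scaling. Body weights and the
# rat dose are hypothetical illustrative values.

def human_equivalent_dose(animal_dose: float, bw_animal: float,
                          bw_human: float, exponent: float) -> float:
    """Scale a per-kg animal dose to a per-kg human dose.

    Total dose scales as bw**exponent, so the per-kg dose scales as
    bw**(exponent - 1), giving HED = dose_A * (bw_A / bw_H)**(1 - exponent).
    """
    return animal_dose * (bw_animal / bw_human) ** (1.0 - exponent)

rat_dose = 10.0                 # hypothetical rat NOEL, mg/kg bw/day
bw_rat, bw_human = 0.25, 70.0   # kg

hed_bw1 = human_equivalent_dose(rat_dose, bw_rat, bw_human, 1.0)    # 10.0 - per-kg dose unchanged
hed_bw34 = human_equivalent_dose(rat_dose, bw_rat, bw_human, 0.75)  # ~2.4 - several-fold lower
```

Under (bw)^1 the per-kg dose carries over unchanged; under (bw)^3/4 the human-equivalent per-kg dose is roughly four-fold lower for a rat-to-human comparison, which is why the choice of exponent matters to regulatory outcomes.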
19.4.2 Safety factor versus risk-based methods
The use of a safety factor represents a considerable short cut, and it has been widely criticized. But before we eliminate the NOEL-SF approach in favor of a more scientific or more accurate method to establish safe levels, it is well to be aware of just how much lack of knowledge it subsumes and how difficult it will be to go beyond it. The traditional ADI is based on the dose administered to the animals over a fixed period, not on the target organ concentration achieved or even the blood level. To relate the animal dose to the human dose one needs to use comparative pharmacokinetics more explicitly. A simple demonstration will illustrate how useful the NOEL-SF approach is, and how much difficult-to-obtain information it subsumes. We shall assume that animals are reliable surrogates for humans but with different anatomies, metabolic characteristics and possibly different sensitivities to different chemicals. This means that, when they are exposed to the same agent at equivalent doses, we expect the responses (risks) between animals (A) and humans (H) to be mutually proportional, i.e. R_H = const. × R_A. The risks are not equal because, in addition to dose scaling, we expect different intrinsic sensitivities or susceptibilities or potencies (p) between the species. We assume that in both species, biological responses are proportional to the local concentration of active metabolite at the site of action:

R = intrinsic potency × local concentration = p × C        (19.1)
The local concentration of agent at the site of action in chronic studies is ordinarily assumed to be proportional to the steady-state local tissue concentration, which is usually proportional to plasma concentration (C_ss). So in general (or at least when the relations are linear),

R = p × k × C_ss        (19.2)
where k is a measure of the species efficiency in converting plasma concentrations into effective local concentrations. By dividing the similar relations for both species we obtain an explicit form for the mutual proportionality of animal and human risks:

R_H = [(p_H/p_A) × (k_H/k_A) × (C_H/C_A)] × R_A        (19.3)
This relation says that the human risk can be determined from the animal risk if the relative potencies and tissue concentrations in both species are known. The first term in parentheses is of pharmacodynamic origin, and no general theory is now capable of predicting the ratio of interspecies sensitivities. The magnitude of the second term is also unknown in general, but it is physiological in origin and may not vary strongly between species; i.e. k_A ≈ k_H. The third term is the ratio of plasma concentrations which, for simple, linear pharmacokinetics, can be replaced with its pharmacokinetic equivalent, C_ss = (D × F × t_1/2)/(V × ln 2), to obtain:

R_H = [(D_H/D_A) × (p_H/p_A) × (k_H/k_A) × (F_H/F_A) × (t_1/2,H/t_1/2,A) × (V_A/V_H)] × R_A        (19.4)
The terms have their usual pharmacokinetic meanings: the Ds are the administered doses in terms of body weight scaling, the Fs are the absorbed fractions, the Vs are the volumes of distribution and the t_1/2s are the respective half-lives. The purpose of this exercise is to compare this expression, which is still very approximate and incomplete, to the NOEL-SF expression which we conventionally use in the safety evaluation of food substances:

R_H = (D_H/D_A) × (1/100) × R*_A        (19.5)
where R*_A is the average response or the risk from an actual experiment with typically 50 animals, and the Ds are doses still expressed in terms of body weight scaling. (Of course, technically, the response at the NOEL is zero, so that the two expressions need to be compared at doses near the NOEL.) It is evident from the comparison that there is an enormous amount of difficult-to-measure information compressed into the SF. We get a valuable benefit in reduced cost and simplicity by basing safety evaluation on the limited experimental data required for equation (19.5) rather than on equation (19.4). The disadvantage is the loss of insight into the mechanism of toxicity and the obligatory use of a crude and probably larger than necessary SF for some substances. ADIs for food additives could in principle be based on blood levels as described, in this way accounting for both absorption and clearance, but to do so would require corresponding measurements in humans. The need to conduct measurements in both species is the practical limitation of the full application of pharmacokinetics to the safety evaluation of food substances. Renwick has attempted to codify such a scheme (Renwick, 1991). There is no reason why such human studies could not be done, but they would be costly and clearly would require significant justification. Perhaps in the future we will have more rapid and less invasive ways to include human pharmacokinetic data. The EPA uses conceptually the same procedure as the FDA but provides more flexibility and more conservatism in the SF. The EPA converts the SF into an uncertainty factor and, while typically using 100, sometimes requires higher factors, depending on how it views the quality and length of the study (Dourson and Stara, 1983). Some investigators have suggested ways to establish the ADI on a LOAEL (lowest observed adverse effect level) rather than a NOEL or NOAEL (Weil, 1972).
Others rely on an extrapolation from an arbitrary effect dose or 'benchmark dose', instead of on the NOEL (Crump, 1984). These are still SF approaches and rely totally on animal data, but they do have a firmer definitional
basis, since the problems with determining a NOEL are avoided, and they also may have some small practical advantage. Animal testing relies on the ability to exaggerate the dose given to the animal in order to provide an adequate SF for humans. The reliance on animal tests may have to be modified in the future if novel foods, intended to replace macronutrients, are developed. These would be consumed at high levels and dose exaggeration in testing would not be possible. For these substances some human testing will probably be necessary (Food and Drug Administration, 1993).
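The contrast between the full pharmacokinetic expression (19.4) and the NOEL-SF shortcut (19.5) can be sketched numerically. Every ratio, dose and response below is a hypothetical illustration; indeed, the point of the argument is that these quantities are rarely measured in practice:

```python
# Sketch comparing the pharmacokinetic scaling of equation (19.4) with the
# NOEL-SF shortcut of equation (19.5). All parameter values are hypothetical.

def risk_human_pk(risk_animal, D_H, D_A, p_ratio, k_ratio,
                  F_H, F_A, t_half_H, t_half_A, V_A, V_H):
    """Equation (19.4):
    R_H = (D_H/D_A)(p_H/p_A)(k_H/k_A)(F_H/F_A)(t_1/2,H/t_1/2,A)(V_A/V_H) * R_A
    """
    return (D_H / D_A) * p_ratio * k_ratio * (F_H / F_A) \
        * (t_half_H / t_half_A) * (V_A / V_H) * risk_animal

def risk_human_sf(risk_animal, D_H, D_A, sf=100.0):
    """Equation (19.5): R_H = (D_H/D_A) * (1/SF) * R*_A."""
    return (D_H / D_A) * (1.0 / sf) * risk_animal

# Hypothetical inputs: 10% animal response at dose D_A; human dose 100x lower.
R_A, D_A, D_H = 0.10, 10.0, 0.1   # doses in mg/kg bw/day

r_pk = risk_human_pk(R_A, D_H, D_A,
                     p_ratio=2.0,       # humans assumed 2x more sensitive (p_H/p_A)
                     k_ratio=1.0,       # k_A ~ k_H, as argued in the text
                     F_H=0.8, F_A=0.5,  # absorbed fractions
                     t_half_H=12.0, t_half_A=3.0,  # half-lives, hours
                     V_A=0.6, V_H=0.6)             # volumes of distribution, l/kg
r_sf = risk_human_sf(R_A, D_H, D_A)
```

With these invented parameters the two estimates differ by orders of magnitude, which illustrates how much difficult-to-measure information the single factor of 100 in equation (19.5) is standing in for.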
19.4.3 Quantitative risk assessment of chemical carcinogens
Early history. Arnold Lehman, chief toxicologist at the FDA in the 1940-1960 period, stated in an article in 1949 that carcinogenic food additives would not be permitted: a finding that a substance caused cancer in animals was regarded as so alarming as to exclude it from consideration. (Lehman et al, 1949)
The sentiment was categorical, but it was made within the context of the existing technology. Chemical detection methods of the day could detect nothing at all unless the substance was present at a concentration in excess of several parts per million. Today we can routinely detect fractional parts per billion in food, and in some instances parts per trillion. No one back then (or now) would assume, in general, the existence of a threshold for cancer. The reasons for this go back to the discoveries made by cancer investigators in the early 1940s. They showed that exposure to small persistent doses of carcinogens could be more effective than single large doses in producing tumors (Druckrey, 1943; Berenblum, 1945). They discovered that a single exposure to a carcinogenic 'initiator' could produce a latent effect that was essentially permanent and that could be manifested later even if treatment with the 'promoter' were delayed for up to half the lifetime of the animal (Shubik, 1950). Such observations were new and surprising for toxicologists. They emphasized, for carcinogens, the importance of low doses and suggested that experimentally observed no effect levels in test animals might not reflect a true biological threshold. Anticancer efforts were international in scope. A symposium on the potential hazards from chemical additives and contaminants in foodstuffs was held in 1956 in Rome under the auspices of the International Union Against Cancer. The Union recommended that federal governments prohibit the addition of carcinogens to food. This made headlines in the USA when several widely used carcinogenic color additives were
explicitly identified. In 1960, at the color additive hearings, an NIH report was read into the record. The Mider Report stated the National Cancer Institute (NCI) position: No one at this time can tell how much or how little of a carcinogen would be required to produce cancer, or how long it would take cancer to develop. . . . Whenever a sound scientific basis is developed for the establishment of tolerances for carcinogens, we will request the Congress to give us that authority. (Mider, 1960)
Even as the Secretary gave his testimony, Mantel and Bryan were preparing their 1961 paper 'Safety testing of carcinogenic agents', which showed how virtually safe levels for carcinogens could in principle be established by downward extrapolation from observed animal data (Mantel and Bryan, 1961).

Acceptable levels of risk. Mantel and Bryan were the first to try to establish safe levels for carcinogens by mathematical high to low dose extrapolation of animal data. They used a probit tolerance distribution model with a shallow slope. The model yielded a virtually safe dose level corresponding to a risk of 1 in 100 000 000. The important feature was the concept of a virtually safe level for a carcinogen. This acknowledged that thresholds for carcinogens could not be assumed and therefore some small cancer risk might indeed be present at low doses. Mantel and Bryan suggested that if the doses were kept small enough, these possible risks could be deemed societally acceptable. This 'acceptable risk' concept was eventually accepted by federal agencies. In October 1969 an ad hoc committee was established by the US Department of Health, Education and Welfare (HEW) to advise it on the problems relating to the evaluation of low levels of environmental carcinogens. This committee recommended an approach which, in effect, placed a societal burden squarely on the regulators. No level of exposure to a chemical carcinogen should be considered toxicologically insignificant for man. For carcinogenic agents a safe level for man cannot be established by application of our present knowledge. The concept of [socially acceptable risk] represents a more realistic notion. (US HEW, 1971)
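A Mantel-Bryan-style probit extrapolation can be sketched as follows. The observed incidence, the dose and the slope of one probit per log10 of dose are hypothetical illustrations of the method, not figures from the original paper (which used a deliberately shallow, conservative slope):

```python
# Sketch of Mantel-Bryan-style probit extrapolation to a 'virtually safe dose'.
# The bioassay point and the unit probit slope are hypothetical illustrations.
from statistics import NormalDist

def virtually_safe_dose(dose_obs, risk_obs, target_risk=1e-8,
                        probits_per_log10_dose=1.0):
    """Extrapolate down a probit dose-response line to the target risk."""
    z = NormalDist()
    delta_probits = z.inv_cdf(target_risk) - z.inv_cdf(risk_obs)
    delta_log_dose = delta_probits / probits_per_log10_dose
    return dose_obs * 10.0 ** delta_log_dose

# Hypothetical bioassay point: 10% tumor incidence at 100 mg/kg/day,
# extrapolated down to the 1-in-100 000 000 risk level.
vsd = virtually_safe_dose(100.0, 0.10)
```

The shallower the assumed slope, the further the virtually safe dose falls below the observed effect range, which is what made the choice conservative.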
This philosophy was first embodied in the food laws, albeit none too clearly, with the passage of a modified Delaney Clause to the Animal Drug Amendments of 1962. The so-called DES proviso permitted added carcinogens intended for use as drugs or feed additives for food-producing animals if two conditions were met: (1) the additive did not adversely affect the target animal; and (2) no residue of the additive would be detectable in the edible tissues of the animal by methods of examination approved by the FDA through regulations. By contrast with the Delaney
Clause itself, the DES proviso makes the detection of residues in animal tissues, rather than the addition of the compound to the diet, the critical inquiry. The eventual result of attempts to implement the proviso would be to substitute the concept of a 'residue with no significant risk' for 'no residue'. It took 12 years for the FDA to successfully formulate these regulations. The first attempt was made in 1973 with the publication of the FDA's SOM (Sensitivity of the Method) regulation. The SOM regulation prescribed the manner of determining the acceptable sensitivity of an analytical method to permit trace levels of carcinogens in commercial beef, pork and poultry. It was the first regulatory use of dose-response extrapolation to obtain a virtually safe dose. Absolute safety can never be conclusively demonstrated experimentally. The [virtually safe] level defined by the . . . procedure is an arbitrary but conservative level of maximum exposure resulting in a minimal probability of risk to an individual (e.g., 1/100,000,000 . . . ) (Food and Drug Administration, 1983)
The SOM proposed rule relied on the probit model and a 10^-8 lifetime risk. The final SOM rule relied on a linear model and a 10^-6 lifetime risk. These two 'virtually safe' risk levels led to approximately similar dose levels, given the properties of the two different mathematical models, if the dose extrapolations were not too great. Thus, a 1 in 1 000 000 lifetime risk became the standard definition of an acceptable cancer risk. (The 1977 SOM was challenged in the courts on procedural grounds and remanded to the FDA, where it remained until 1979. It was finally reissued in 1985.) In 1969 the FDA impaneled a committee of experts to consider how food additives, pesticides and animal drugs should be evaluated for possible carcinogenicity. Their report, published in 1971, recommended the basic minimums for animal bioassays and risk extrapolation that we still use today.
1. Testing should be done at high dose and under experimental conditions likely to yield maximum tumor incidence.
2. At least two species should be used for all carcinogenicity studies.
3. For compounds judged carcinogenic at test levels, a virtually safe dose could, in principle, be estimated by downward extrapolation using some arbitrarily selected but conservative dose-response curve.
These recommendations sanctioned the regulatory use of the maximum tolerated dose (MTD) bioassay and linear dose-response extrapolation (Food and Drug Administration, 1971). Details of the bioassay and the method of extrapolation would be further refined later, but this 1971 Panel
Report made their development inevitable. The panel's recommendations were motivated essentially by two concerns: (1) the potential risk from lifetime exposure to low doses of carcinogens, and (2) the statistical limitations of negative studies on small groups of animals. It was clear that no unqualified negative answer is ever possible. All a negative study can do is to supply an upper statistical limit to the possible carcinogenicity. It was pointed out that these limits are uncomfortably large. Even with as many as 1000 test animals and using only 90% confidence limits, the upper limit yielded by a negative experiment is 2.3 cancers per 1000 test animals; a lifetime risk of 2.3 × 10^-3. The 1971 FDA Report stated: No one would wish to introduce an agent into a human population for which no more could be said than that it would probably produce no more than 2 tumors per 1000. (Food and Drug Administration, 1971)
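The 2.3-per-1000 figure cited above follows directly from the one-sided binomial upper confidence limit for a study that observes zero tumors, as this short sketch shows:

```python
# Upper one-sided confidence limit on tumor risk after observing zero tumors
# in n animals: the largest p such that (1 - p)**n >= 1 - confidence.

def upper_limit_zero_tumors(n_animals, confidence=0.90):
    """Solve (1 - p)**n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_animals)

# 1000 animals, 90% confidence, zero tumors observed:
p_up = upper_limit_zero_tumors(1000)   # ~2.3 x 10^-3, as the text states
```

Even a completely clean study of 1000 animals therefore cannot rule out a lifetime risk above roughly 2 per 1000, which is the statistical limitation the 1971 panel was responding to.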
So how does one increase the sensitivity of the bioassay with a limited number of test animals? The answer is to increase the dose well beyond the anticipated human use level and extrapolate the results down to these lower doses. In 1976 the EPA published Interim Cancer Guidelines. Referencing the experience with ionizing radiation, the guidelines supported the idea that while science could not provide a safe dose for a carcinogen, absolute safety was not feasible either. there is no such thing as a completely safe dose; in other words any exposure, however small, will confer some risk of cancer to the exposed population. Evidence has accumulated that indicates that the no-threshold concept can also be applicable to chemical carcinogens. . . . However it has become increasingly clear that in many areas risk cannot be eliminated completely without unacceptable social and economic consequences. . . . We thus have a comparable conceptual basis for the regulation of chemicals as for ionizing radiation, where the philosophy has been to reduce exposure to the greatest extent possible consistent with the acceptability of the costs involved. . . . The procedure will involve a variety of risk extrapolation models, e.g. the linear non-threshold and the log-probit model. (Environmental Protection Agency, 1976)
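The linear non-threshold extrapolation referred to in the EPA quotation (and adopted in the final SOM rule) can be sketched as follows; the bioassay point is a hypothetical illustration:

```python
# Sketch of linear (no-threshold) low-dose extrapolation to a virtually safe
# dose, as in the final SOM rule's 10^-6 lifetime risk level.
# The bioassay numbers are hypothetical.

def linear_vsd(dose_obs, risk_obs, target_risk=1e-6):
    """Assume risk = slope * dose through the origin; solve for the dose
    giving the target lifetime risk."""
    slope = risk_obs / dose_obs   # risk per (mg/kg/day)
    return target_risk / slope

# Hypothetical high-dose bioassay point: 10% incidence at 50 mg/kg/day.
vsd = linear_vsd(50.0, 0.10)      # 5e-4 mg/kg/day
```

Because the line is forced through the origin, this is an upper bound on low-dose risk whenever the true dose-response curve bends downward at low doses, which is why the approach is described as conservative.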
By the early 1970s, regulators at the FDA and EPA accepted the concept of acceptable cancer risk and the necessity for dose extrapolation to determine an upper bound to that risk. Downward extrapolation of rodent bioassays conducted at high doses has proven more difficult and less reliable than its proponents in 1971 and 1976 envisaged. Many assumptions and scientific inferences had to be made in the evaluation process and these could differ between the regulatory agencies. Some of the internal inter-agency disputes became public. In 1982 Congress funded an NAS study in an effort to minimize these
differences and restore credibility to the process. The outcome was the Redbook and the four steps of risk assessment. One of these steps is dose-response assessment, which includes high to low dose extrapolation. The effect of the National Academy of Sciences (NAS) Redbook, among other things, was to establish the use of risk assessment throughout the federal government. The Supreme Court in the Occupational Safety and Health Administration (OSHA) benzene decision virtually made it a requirement for all chemical regulation (US, 1980). The use of the MTD had been challenged throughout the period. Gehring and Watanabe showed in 1976 that large doses could exceed metabolic and physiological thresholds, leading to prolonged retention in the body and disproportionate increases in carcinogenic electrophiles (Gehring and Blau, 1977). There are several examples of food substances that are carcinogenic at high doses but are probably not carcinogenic at lower doses. Saccharin, ortho-phenylphenate, sulfamethazine, formaldehyde, butylated hydroxyanisole and d-limonene are just a few examples where enough mechanistic data have been developed to be convincing (Scheuplein, 1995). The bioassay does not allow one to make distinctions between those compounds that entail risks at low doses and those that do not. A Committee of the National Research Council has stated: [The current MTD bioassay in rodents] . . . is neither perfect nor unalterable, and by itself it is insufficient to produce data from which accurate human-health assessments can be made. . . . In most cases additional information is likely to be needed to determine the extent to which the induction of cancer in rodents adequately predicts the human response and how the results of the relatively high-dose assay can best be used to make inferences about the expected effects at low doses. (National Research Council, 1993)
The NRC Committee on Risk Assessment Methodology (CRAM) recommended the continued use of the MTD and did not reach agreement on how the additional information on mechanisms of carcinogenicity should be used. This is an unsatisfactory state of affairs and no doubt will provide cause for continued debate and uncertainty. The NRC committee essentially discredited the current testing methodology without producing a satisfactory alternative. (National Research Council, 1993). Regulatory applications of QRA. In the USA, food additives that are found to be carcinogenic in animal feeding studies are not approvable. Substances found to be carcinogenic in rodent cancer bioassays, like saccharin and cyclamate, are therefore illegal in the USA. However, as noted, there are significant exceptions to this ban, not all food substances being covered by the Delaney Clause. For these categories of food
substances, quantitative risk assessment (QRA) may be used to set acceptable levels of exposure. Animal drugs and feed additives may be permitted if the extrapolated risk is less than 1 in 1 000 000 (Food and Drug Administration, 1985). Other additives such as pesticides, which are regulated by the EPA, are permitted to be used if the benefit is judged to exceed the risk. This can result in acceptable carcinogenic pesticides with calculated risks of 10 ^ (by linear extrapolation). These pesticide risk numbers refer to the risk at the calculated tolerance level on the applied crop. The actual consumer risk, after the commodity has been subjected to rain, weathering, washing by the shipper, and final preparation and cooking by the consumer, is usually hundreds of times less. Unavoidable food contaminants such as aflatoxins, discontinued pesticides such as DDT, and environmental contaminants such as dioxins, polynuclear hydrocarbons and nitrosamines, are also permitted in food. Section 406 of the statute has no Delaney Clause, and QRA can be used to establish a tolerance level. A major consideration in these instances is the trade-off between the need to protect the public from the potential risk from these carcinogenic contaminants and the need to preserve major commodities in the food supply.

19.4.4 Comparison with other national regulatory systems
No one national regulatory system is exactly like another. Even when their purposes are largely the same and their officials share a similar public health viewpoint, health regulatory agencies in different nations often see issues differently. Food regulatory issues cut across national laws and national cultures, are influenced by different national experiences, and engage different commercial and national interests. These factors virtually ensure some differences in attitude, procedure and regulatory outcome. Described briefly below are some areas where FDA policy and procedures differ from those in other countries and can contribute to different points of view and regulatory outcomes. Risk assessment of carcinogens. The USA stands virtually alone in its major reliance on QRA. There are two basic reasons for the FDA's use of QRA. Initially, the impetus was to lessen the impact of the Delaney Clause and its philosophy of 'zero risk'. This provision prohibits adding any animal carcinogen to food. It does not matter how minimal the exposure to humans is, or whether the substance occurs naturally in other food, or whether the animal bioassay is scientifically relevant to the human response. Other countries with a more flexible legal basis for food additive regulation could scientifically discount the animal study or state that the risk was too small to be worth regulating, but the FDA had to ban
the substance until the 1962 Animal Drug Amendments suggested a legally defensible path around the statute for certain categories of additives. The crucial idea was that of a societally acceptable, insignificant cancer risk, and this required the identification of the size of that risk and some means to quantify it. The second reason for the FDA's use of QRA is the realization that no substance is absolutely safe, coupled with the public's demand to know something about the magnitude of the risk to which it is being subjected. The efforts to extend QRA beyond carcinogenic risk to other chemical risks and even to microbiological risks are an outcome of these circumstances. The basic problem with the entire effort is that the essential science and the detailed mechanistic modeling required to estimate the carcinogenic risk are not yet very reliable and have achieved little scientific consensus.

Non-government technical participation. Many food safety decisions in Europe and in Japan involve the participation of technical experts from outside the regulatory agency. Technical committees with both academic experts and technical experts from industry make many of the key technical safety decisions. The scientific qualifications, reputation and experience of the person are the essential criteria for membership on the technical committees. Potential conflict of interest concerns do not play as large a role in Europe as they do in the USA. In addition, the deliberative process in Europe is typically closed to the public and virtually immune from any effective 'second guessing' by interested consumer groups. A written summary of the meeting containing the decisions and their rationale is made available, but the raw data and the detailed deliberations of the evidence are not ordinarily revealed. Finally, the decisions of the technical committee are dispositive insofar as the technical issue is concerned.
The committee is, in this sense, more than strictly 'advisory' to the regulatory body. The FDA also makes use of advisory committees of technical experts to advise on food safety issues, but the decisions of the committee are strictly advisory, and may be overruled or disregarded by the FDA. The final decision is made by the FDA, and the legal safety standard and documentation of evidence are explicit considerations. Expert scientific opinion matters less than opinions that can be documented with written data. Conflict of interest rules typically do not allow 'interested' parties to participate as members, effectively ruling out experts from industry, no matter how technically qualified. The advisory committee may choose to formally hear from industry technical consultants, but they are not part of the deliberative process or members of the advisory committee. Additionally, advisory committee members must become temporary government employees, which also tends to further restrict the membership. The announcement of the advisory committee
meeting is made in the Federal Register, several weeks before, and a part of the meeting is reserved for outside comments from any interested person or group. Finally, a very extensive 'paper trail' of the proceedings is maintained.

Consumer interests. In the USA consumer interests in food safety are not always satisfied but they are always heard. As noted above, while legal concerns over conflict of interest can inhibit the availability of technical expertise, the food laws and FDA regulations actively encourage citizen participation in all phases of the process. Any person may submit a Citizen's Petition to the FDA on any food issue, which must be responded to in writing in a timely manner by the FDA. Representatives of consumer groups can and do participate in the meetings of the Food Advisory Committee (FAC), but usually not as members. When the food regulations are finally promulgated in the Federal Register, they are accompanied by an extensive preamble, which, by law, must explain the purpose and impact of the regulation and in addition must respond in writing to any arguments against it that were submitted as written comments or raised at the FAC meetings. In summary, the FDA regulatory process tends to be more participatory, more transparent and better documented than the European and EU procedures, but it also tends to be slower, more litigious and more contentious.

Additive regulation. In the USA, a food or color additive or a GRAS substance must be safe under the conditions of its use and it must serve a function in the food or in its production. The FDA must determine that the intended physical or technical effect of the additive is established by the data and will set the tolerance limitation at a level just high enough to accomplish that function (section 409(c)(4)(A-B)). Adequate safety and demonstrated functionality are the only two FDA general requirements for the approval of food additives.
In the UK and the EU a third requirement exists; the additive must fill a technological need that cannot be achieved by other means that are economically and technologically practical. The purpose of the 'need' requirement is to limit the addition of unnecessary chemicals to food. The need requirement has been built into the EU additive scheme as a criterion of consumer benefit. The extent to which consumer benefit is actually addressed in practice in the EU is a matter of some concern to consumer groups (Jackson, 1993). Scientific differences. National differences in animal testing protocols and in experimental safety data requirements contribute some impediments to international food commerce. However, there are few international controversies that arise strictly from different scientific interpretations of
safety studies per se. Perhaps one exception to this occurs in the area of risk assessment, which in a crucial part - the determination of an acceptable level of risk - is not strictly scientific anyway. The USA makes wide use of quantitative risk assessment techniques, while other countries are generally more comfortable with the traditional SF-NOEL approach. There are no generally accepted methods for conducting risk assessments. They currently vary by country and even within countries by agency (EPA/FDA) or by province (Ontario/Canada). Recently, Codex sponsored a Joint FAO/WHO Expert Consultation that attempted to develop a common risk analysis methodology for international food standard issues (World Health Organization, 1995). The USA has its Delaney Clause, under which cyclamates are banned, whereas they are permitted food additives in Canada and many EU countries. In a recent case, the EU banned the use of anabolic steroids in livestock production. Despite the fact that the WHO Joint Expert Committee on Food Additives found such hormone use to be safe, the EU nevertheless prohibited it on the alleged grounds of consumer preference. This is in contrast to the regulation in the USA, where the allowed use of animal hormones in livestock production is based on scientific determinations that the hormones are safe when used as intended. In Germany, the enzyme chymosin, used in making cheese, is prohibited if produced by genetic engineering techniques. Only chymosin produced in the traditional way from calves' stomachs may be used. Germany similarly prohibits the use of irradiation, whereas The Netherlands and the USA permit irradiation of food for certain purposes. Regulatory 'safe' levels for dioxin (TCDD) contamination in food vary over 1000-fold: 0.01 pg TCDD/kg/day is deemed safe in the USA, 10.0 pg TCDD/kg/day in Canada and most of Europe, and 1.0 pg TCDD/kg/day in the UK.
These different outcomes arise in part from different views of the science.
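The practical effect of the divergent dioxin limits quoted above can be shown with simple arithmetic. The sketch below converts each per-kilogram guideline into a whole-body daily figure; the 60 kg reference body weight is an illustrative assumption, not part of any of the cited standards.

```python
# Illustrative arithmetic only: converts the per-kg bodyweight 'safe' TCDD
# intakes quoted in the text into whole-body daily figures for an assumed
# 60 kg adult.

GUIDELINES_PG_PER_KG_BW_PER_DAY = {
    "USA": 0.01,
    "UK": 1.0,
    "Canada/most of Europe": 10.0,
}

BODY_WEIGHT_KG = 60.0  # assumed reference adult

def daily_intake_pg(per_kg_limit: float, body_weight: float = BODY_WEIGHT_KG) -> float:
    """Whole-body 'safe' intake in pg TCDD per day."""
    return per_kg_limit * body_weight

for region, limit in GUIDELINES_PG_PER_KG_BW_PER_DAY.items():
    print(f"{region}: {daily_intake_pg(limit):g} pg TCDD/day")

# Spread between the strictest and most permissive guideline (roughly 1000-fold)
ratio = (max(GUIDELINES_PG_PER_KG_BW_PER_DAY.values())
         / min(GUIDELINES_PG_PER_KG_BW_PER_DAY.values()))
print(f"Spread: {ratio:g}-fold")
```

For a 60 kg adult the three guidelines translate into 0.6, 60 and 600 pg TCDD per day, a vivid illustration of how far regulatory judgments can diverge from identical toxicological data.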
19.5 International regulation of food-borne substances

19.5.1 GATT
Food commerce is a major part of world trade. There is a common interest in assuring that health-related standards are firmly based on scientific evidence, so these standards cannot be used as disguised barriers to trade. In December 1993, national negotiators in the Uruguay Round of Multilateral Trade Negotiations of the General Agreement on Tariffs and Trade (GATT) concluded their discussions. The official ratification of the agricultural provisions, Sanitary and Phytosanitary Measures (SPS), of the GATT treaty has given new impetus to harmonization of food regulation, food standards and certification.
The basic provisions of the SPS are that any measures which may affect international trade must not be stricter than necessary for the protection of human, animal or plant health, must be based on scientific principles, and must not be maintained without scientific evidence. The SPS agreements permit each contracting party (national government) to decide on the level of such protection it deems appropriate. However, the countries also agree to abide by certain obligations, to achieve the aims of the Act:

- SPS measures must have a scientific justification.
- SPS measures shall not arbitrarily or unjustifiably discriminate between contracting parties.
- SPS measures shall not be applied in a manner that would constitute a disguised restriction on trade.
- Consistent with its chosen level of protection, each contracting party must choose SPS measures that are not more restrictive to trade than is necessary.
- Contracting parties shall establish and maintain their SPS measures in a transparent manner.
- To harmonize SPS measures on as wide a basis as possible, contracting parties shall base their SPS measures on international standards, guidelines and recommendations.
- Contracting parties shall play a full part in the Codex Alimentarius Commission and other relevant international organizations.

An effect of the SPS agreement is that all countries must base all of their food safety and food standards measures that may affect international trade on Codex standards, where they exist, except as otherwise provided for in the SPS agreement. Where they do base them on Codex standards, their measures are automatically accepted as 'justified measures'. The GATT agreement has given Codex enormous importance and potential power. Codex standards, guidelines and recommendations relating to these issues have now assumed a new dimension as the reference point for national requirements.
In the future, GATT members could be required to furnish justification for food import restrictions based on national regulations that are stricter than Codex standards, guidelines or recommendations. In addition to the SPS measures, the GATT agreement includes provisions on technical barriers to trade, which deal with regulations having a technical content or effect, such as labeling, packaging and measurements. The GATT agreements are to be implemented by the newly created World Trade Organization. There is no guarantee that this effort will be rapidly successful. Cultural factors very strongly influence disputes over food safety. Most countries do not wish to change their traditional practices. Others may find that proposed changes are politically unacceptable
to large domestic groups. Risk assessment and other scientific issues may prove to be equally contentious as political and cultural issues. Clearly, food safety harmonization will not be easy to achieve.
19.5.2 Codex Alimentarius Commission
The Codex Alimentarius Commission (CAC) is perhaps the foremost international organization working towards harmonization of food standards (Gardner, 1995). The CAC is a joint body of the FAO and WHO and has 146 national governments as members. The work of the Commission is carried out by several specialized subject committees and also involves the use of independent expert advisory committees. These include the Joint FAO/WHO Expert Committee on Food Additives (JECFA), the Codex Committee on Food Additives and Contaminants (CCFAC), the Codex Committee on Residues of Veterinary Drugs in Foods (CCRVDF), the Codex Committee on Pesticide Residues (CCPR), the Codex Committee on Food Hygiene (CCFH), the Codex Committee on Food Import and Export Inspection and Certification Systems (CCFICS) and the Codex Committee on Meat Hygiene (CCMH). In addition, there are Joint Meetings of the FAO Panel and the WHO Expert Group on Pesticide Residues (JMPR), which independently assess the ADIs and maximum residue levels (MRLs) for pesticides and pesticide-food combinations respectively (Food and Agricultural Organization/World Health Organization, 1993). JECFA. This is an expert scientific advisory body of the FAO and WHO. It is composed of 14-16 scientists from member countries who participate independently, not as official representatives of their governments. JECFA has a wide mandate in the hazard assessment area. It assesses the toxicity of potentially hazardous substances in food. The ADI and the provisional tolerable weekly intake (PTWI) are the health endpoints for food additives/veterinary drugs and contaminants respectively. In the case of veterinary drugs, a quantitative exposure characterization is carried out: potential MRLs are calculated from the ADI and fixed dietary intake factors. CCFAC. This is largely a technical-regulatory group. It translates the technical advice from JECFA into draft Codex standards and advises the CAC on all matters relating to food additives and contaminants.
Technological need, justifications for proposed levels of use and probable levels of intake and their relationship to the ADI are taken into account when endorsing or recommending maximum levels in food and animal feed. Endorsements by the CCFAC are then approved by the full Codex Commission before a standard is considered final.
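The relationship between proposed residue limits and the ADI described above can be sketched numerically. In the sketch below, the 'model diet' portions and 60 kg body weight follow commonly cited JECFA-style assumptions, but the ADI and the proposed residue levels are hypothetical, chosen purely for illustration.

```python
# Hedged sketch of a JECFA/CCFAC-style exposure check: the total residue
# intake implied by a set of proposed MRLs must not exceed the intake
# permitted by the ADI. All drug-specific numbers are hypothetical.

MODEL_DIET_KG = {          # assumed daily consumption for a 60 kg consumer
    "muscle": 0.300,
    "liver": 0.100,
    "kidney": 0.050,
    "fat": 0.050,
    "milk": 1.500,
    "eggs": 0.100,
}
BODY_WEIGHT_KG = 60.0

def theoretical_intake_mg(mrls_mg_per_kg: dict) -> float:
    """Total daily residue intake (mg) implied by proposed MRLs."""
    return sum(MODEL_DIET_KG[food] * mrl for food, mrl in mrls_mg_per_kg.items())

def within_adi(mrls_mg_per_kg: dict, adi_mg_per_kg_bw: float) -> bool:
    """True if the implied intake stays within the ADI for the model consumer."""
    return theoretical_intake_mg(mrls_mg_per_kg) <= adi_mg_per_kg_bw * BODY_WEIGHT_KG

# Hypothetical drug: ADI of 0.002 mg/kg bw/day (permitted intake 0.12 mg/day)
proposed = {"muscle": 0.1, "liver": 0.5, "kidney": 0.5,
            "fat": 0.1, "milk": 0.01, "eggs": 0.1}
print(theoretical_intake_mg(proposed), within_adi(proposed, 0.002))
```

Here the proposed limits imply about 0.135 mg/day against a permitted 0.12 mg/day, so in this hypothetical case the MRLs would have to be lowered before endorsement.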
CCRVDF. The primary role of the CCRVDF is to recommend MRLs for residues of veterinary drugs in food. The CCRVDF receives expert advice from JECFA, usually in the form of recommended MRLs. CCPR. This advises the CAC on matters relating to pesticide residues affecting international trade, primarily by establishing draft Codex MRLs in food and animal feed. Draft Codex MRLs are based on the estimated MRLs recommended by the JMPR, also taking into account monitoring data, estimates of dietary intake and good agricultural practices in member countries. The CCPR considers all government comments, including those affecting a member state's economic interests, when proposing draft Codex MRLs. CCFH. This is a general purpose committee that has overall responsibility for all provisions of food hygiene prepared by Codex commodity committees. Two major areas of activity are the setting of microbiological criteria for foods and the development of guidelines for HACCP systems for ensuring food safety. CCFICS. This committee determines principles and develops guidelines for certification systems and quality assurance systems. It develops guidelines and criteria for official certificates that may be required of importers (Gardner, 1995). CCMH. As a commodity-oriented committee, the function of the CCMH is to elaborate standards and/or codes of practice for meat hygiene. It develops GMPs and process control standards for safe meat handling. Until recently, with the development of new international food safety standards under GATT, the work of the CAC has primarily benefited the underdeveloped countries. The Codex standards were good substitutes for weak or non-existent national regulatory standards. Countries with large regulatory efforts, such as the USA, were informed by their participation in Codex activity, but did not usually adopt Codex standards in lieu of their own.
But, as explained above, there is every likelihood that international food safety standards will become more important and that the CAC will play an increasing role in helping develop them.

19.5.3 European Union (EU)

The 12 nations comprising the European Union (soon to become 15 with the accession of Austria, Finland and Sweden) are engaged in a concerted harmonization program for food products. Unlike the CAC, which is an advisory body, the EU is a governmental body with significant regulatory authority. Its purpose is to achieve a system of uniform regulation within the EU that will assist in ensuring food safety and promoting EU trade in food products. The role of the EU is growing in importance in the global harmonization activities of the CAC because of its market size and its political integration. The EU food regulatory system is composed of three principal political and policy bodies: the Council of Ministers, the Parliament and the Commission. The Commission is responsible for initiating legislation and then for administering its execution. The Parliament debates legislation in draft, giving a public voice to industry and other interests. The Council of Ministers finally adopts EU legislation while attempting to reconcile national interests. Supporting these bodies with expert technical and commercial advice are: the Scientific Committee for Food (SCF), the Advisory Committee on Foodstuffs (ACF) and the Standing Committee on Foodstuffs. The SCF consists of 18 scientists who are experts in scientific and technical matters relating to food safety, nutrition and food processing. The SCF has established eight working groups to facilitate its work: additives, flavorings, contaminants, food contact packaging materials, food hygiene, novel foods and novel processes, nutrition, and food intake and exposure. The SCF may seek the assistance of outside experts and other scientific organizations within the member states of the EU. The SCF is an advisory body only. The Standing Committee on Foodstuffs operates as an interface between the European Commission and the member states of the EU. It accepts or rejects final proposals put forward by the European Commission for consideration. The ACF serves as a sounding board early in the process of preparing food legislation. The permanent members of the ACF are selected from the food industry, consumer organizations, agricultural organizations, commercial organizations and trade unions.

19.6 Summary

There are as many national approaches to food safety regulation as there are nations in the world.
Safety evaluation is never a strictly scientific process. It involves different perceptions of the value of new technology, different degrees of protection afforded by governments to home industries, different values on the benefits of free trade and even different views of the science itself. The most recent incident involving the potential threat of BSE from British beef illustrates both the difficulty and the promise in dealing with food safety issues that involve international trade. This incident shows that in the presence of an ill-defined but potentially large food-borne risk, national politics and trade factors can quickly overwhelm more sober debate. However, it also illustrates how the combined resources of several nations acting in concert can provide solutions that otherwise would be economically unfeasible for any single nation acting alone.
References

Berenblum, I. (1945) Systems of grading carcinogenic potency. Cancer Research, 5, 561.
Crump, K.S. (1984) Fundamental and Applied Toxicology, 4, 854-871.
Dourson, M.L. and Stara, J.F. (1983) Regulatory history and experimental support of uncertainty (safety) factors. Regulatory Toxicology and Pharmacology, 3, 224-238.
Druckrey, H. (1943) Quantitative Grundlagen der Krebserzeugung. Klinische Wochenschrift, 22, 532.
Dunkelberger, E. (1995) The statutory basis for FDA food safety assurance programs: from GMP, to emergency permit control, to HACCP. Food and Drug Law Journal, 50(3), 357-383.
Environmental Protection Agency (1976) Federal Register, 41(102), 21402.
Environmental Protection Agency (1992) Draft Report: A cross-species scaling factor for carcinogen risk assessment based on equivalence of mg/kg^(3/4)/day; notice, Part V. Environmental Protection Agency. Federal Register, 57(109).
Filby, F.A. (1934) A History of Food Adulteration and Analysis.
Food and Agriculture Organization/World Health Organization (1993) Codex Alimentarius Commission, Joint FAO/WHO Food Standards Programme, Twentieth Session, 28 June to July 1993, Risk Assessment Procedures Used by the Codex Alimentarius Commission, and Its Subsidiary Advisory Bodies, Alinorm 93/37. Geneva.
Food and Drug Administration (1971) Food and Drug Administration Advisory Committee on Protocols for Safety Evaluation: Panel on Carcinogenesis Report on Cancer Testing in the Safety Evaluation of Food Additives and Pesticides. Toxicology and Applied Pharmacology, 20, 419-438.
Food and Drug Administration (1973) Compounds used in food-producing animals. Federal Register, 38(19), 226.
Food and Drug Administration (1982) Toxicological Principles for the Safety Assessment of Direct Food Additives and Color Additives Used in Food. Redbook I. US Food and Drug Administration, Bureau of Foods, Washington, DC.
Food and Drug Administration (1985) Sponsored compounds used in food-producing animals; criteria and procedures for evaluating the safety of carcinogenic residues. Federal Register, 50, 45530.
Food and Drug Administration (1993) DRAFT Toxicological Principles for the Safety Assessment of Direct Food Additives and Color Additives Used in Food. Redbook II. US Food and Drug Administration, Center for Food Safety and Applied Nutrition, Washington, DC.
Food and Drug Administration (1995) Development of Hazard Analysis Critical Control Points for the Food Industry. Request for comments, advanced notice of proposed rulemaking, pp. 1-9.
Gardner, S. (1995) Food safety: an overview of international regulatory programs. Journal of the Regulatory Affairs Professionals Society, 1, 87-114.
Gehring, P.J. and Blau, G.E. (1977) Mechanisms of carcinogenesis: dose response. Journal of Environmental Pathology and Toxicology, 1, 163.
Hart, F.L. (1952a) A history of the adulteration of food before 1906. Food Drug and Cosmetic Law Journal, 7(1), 5-22.
Hart, F.L. (1952b) A history of the adulteration of food before 1906. Food Drug and Cosmetic Law Journal, 7(8), 485-497.
Hart, F.L. (1952c) A history of the adulteration of food before 1906. Food Drug and Cosmetic Law Journal, 7(5), 724-737.
Hutt, P.B. (1960) Criminal prosecution for adulteration of food at common law. Food Drug and Cosmetic Law Journal, 15, 382-398.
Hutt, P.B. and Merrill, R.A. (1991) Food and Drug Law, Cases and Materials, 2nd edn. The Foundation Press, Inc., Westbury, New York.
ISO 9000 (1992) ISO-9000 International Standards for Quality Management, 2nd edn. International Organization for Standardization, Geneva.
Jackson, C. (1993) Regulating the global food system: harmonization and hurdles, Part II. In: Gaull, G. and Goldberg, R.A. (eds) The Emerging Global Food System: Public and Private Sector Issues. John Wiley & Sons, Inc., New York.
Kebler, L.F. (1930) The work of three pioneers in initiating federal food and drug legislation. Journal of the American Pharmaceutical Association, 19, 592-593.
Lehman, A.J. and Fitzhugh, O.G. (1954) Quarterly report to the editor on topics of current interest, 100-fold margin of safety. Quarterly Bulletin of the Association of Food and Drug Officials, January.
Lehman, A.J., Laug, E.P., Woodward, G. et al. (1949) Procedures for the appraisal of the toxicity of chemicals in food. Food Drug and Cosmetic Law Quarterly, 4, 412-434.
Mantel, N. and Bryan, W.R. (1961) Safety testing of carcinogenic agents. Journal of the National Cancer Institute, 27(2), 455-470.
Mider, G.B. (1960) The role of certain physical and chemical agents in the causation of cancer. National Institutes of Health, National Cancer Institute, Health Education and Welfare (HEW) report to Congress on HR 7624 and SZ197.
National Research Council (1993) Issues in Risk Assessment. National Research Council, Committee on Risk Assessment Methodology (CRAM), Commission on Life Sciences, National Academy Press, Washington, DC.
NAS (1993) Risk Assessment in the Federal Government: Managing the Process. Committee on the Institutional Means for Assessment of Risks to Public Health, Commission on Life Sciences, National Research Council, National Academy Press, Washington, DC.
O'Flaherty, H. (1989) Interspecies conversion of kinetically equivalent doses. Risk Analysis, 9(4), 587-597.
Olin, S., Farland, W., Park, C. et al. (1995) Low-Dose Extrapolation of Cancer Risk: Issues and Perspectives. International Life Sciences Institute/ILSI Risk Science Institute, ILSI Press, Washington, DC.
Pierson, M.D. and Corlett, D.A. (eds) (1992) HACCP Principles and Applications. Avi, New York.
Renwick, A.G. (1991) Safety factors and establishment of acceptable daily intakes. Food Additives and Contaminants, 8(2), 135-150.
Scheuplein, R.J. (1995) Use of 'secondary mechanism' in the regulation of carcinogens: a chronology. Cancer Letters, 93, 103-112.
Shubik, P. (1950) Studies on the promoting phase in the stages of carcinogenesis in mice, rats, rabbits and guinea pigs. Cancer Research, 10, 13.
Stringer, M.F. (1994) Safety and quality management through HACCP and ISO 9000. Dairy Food and Environmental Sanitation, 14, 478-481.
US (1890) Inspection of Meats for Exportation. US Stat. 414, 30 August 1890. 51st Congress, 1st Session, House Report 1792.
US (1980) Industrial Union Department, AFL-CIO v. American Petroleum Institute and Others, 448 US 607.
US Department of Agriculture (1994) National Advisory Committee on Microbiological Criteria for Foods. The role of regulatory agencies and industry in HACCP. International Journal of Food Microbiology, 21, 187-195.
US District Court (1942) United States v. 7232 Cases American Beauty Brand Oysters. United States District Court, Western District of Missouri, 1942, 43 F. Supp. 749.
US Department of Health, Education and Welfare (1971).
US Supreme Court (1914) United States v. Lexington Mill & Elevator Co. Supreme Court of the United States, 1914, 232 US 399.
Weil, C.S. (1972) Statistics versus safety factors and scientific judgement in the evaluation of safety for man. Toxicology and Applied Pharmacology, 21, 454-463.
White, W.B. (1948) Encyclopedia Britannica, Vol. I.
World Health Organization (1987) Principles for the Safety Assessment of Food Additives and Contaminants in Food. Environmental Health Criteria 70. WHO, Geneva.
World Health Organization (1995) Application of Risk Analysis to Food Standard Issues. Report of the Joint FAO/WHO Consultation, WHO/FNU/FOS/95.3. WHO, Geneva.
Young, J.H. (1989) Securing the Federal Food and Drug Acts of 1906. Princeton University Press, Princeton, New Jersey, p. 130.
Part Four Conclusion
20 Integrated food chemical risk analysis D.R. TENNANT
20.1 Introduction
The foregoing chapters of this book have described a multi-compartmental paradigm for food chemical risk analysis which suggests a series of discrete activities leading from the identification of a potential hazard through to its efficient control. Whilst this is a valuable means of describing the process, it is a poor representation of what actually occurs within organizations managing food safety. In reality, many of the activities merge into one another, producing a continuous process. The disadvantage of this blurring of functions is that it makes it difficult to track the process of decision-making, which in turn prevents transparency and impedes effective communication to those outside of the process. Compartmentalization can also have its disadvantages, particularly where those acting within compartments are unaware of their role in the system around them. This is a failure that this book is specifically aimed at overcoming. A good example of this problem is provided by hazard assessors who, when asked for expert advice on the character of a toxicological hazard, believe that they should also advise on measures needed to control the risk. This latter advice, although well-intentioned, is inappropriate, since it disregards all of the other sources of expert advice which should be considered when identifying an optimal risk management strategy. The distinction between the risk assessment and risk management functions is particularly important. In some organizational structures, risk assessors and risk managers are physically separated from each other. For example, risk assessment may be performed at a national level, whilst risks are managed locally. The benefit of such 'Chinese walls' between risk assessors and risk managers is that they maintain objectivity and promote consistency. The main disadvantage is that internal communications are impeded. For example, risk assessors will not necessarily know whether the information they are providing is relevant.
On the other hand, important contextual information about a particular risk may not be communicated effectively to risk managers. Information flow within risk analysis has been seen traditionally as one-way: from risk assessment to risk management. Recently, more emphasis has been placed on the need for dialogue between risk assessors and risk
managers. For example, the initiation of the risk analysis process, hazard identification and prioritization might be triggered by consumer concerns or by the results of monitoring or surveillance undertaken as a risk control measure. Risk reduction may require a revision of the exposure analysis and of the risk characterization. Risk analysis requires the input of specialist knowledge from many disciplines. However, specialist advice is only of value if the right questions have been asked and if the answers can be properly understood and applied. It is therefore necessary to stress the need for better risk communication, not only between risk analysts and the public but also between practitioners in food chemical risk analysis and in particular across the risk assessment-risk management divide. Risk analysis must be viewed as an integrated process where boundaries between disciplines are maintained whilst information is encouraged to flow freely across them.

20.2 Integrated risk assessment

20.2.1 Integrated hazard characterization

Concerns are often expressed, particularly by consumer organizations, about the effects of combinations of chemicals in food, so-called 'cocktail effects'. They argue that regulatory authorities assess individual substances one by one, failing to take into account any possible interactions with other substances which are present in the diet. Scientific evidence is sparse in this area. However, preliminary evidence from animal studies suggests that where intakes of chemicals are above their individual 'no effect' levels, then there may be interactions between substances in the diet (Seed et al., 1995). However, this would be a very exceptional case for human exposures, where concentrations are generally orders of magnitude below no effect levels. Where several chemicals are present below their no effect levels, then no evidence of interactions is seen.
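A simple way to screen combined exposures of this kind, assuming straightforward dose additivity for chemicals sharing the same toxicological endpoint, is a 'hazard index': each intake is expressed as a fraction of that chemical's acceptable daily intake and the fractions are summed. The sketch below uses entirely hypothetical substance names and values.

```python
# Hedged sketch of a dose-additivity screen for chemicals that act on the
# same toxicological endpoint. A hazard index above 1.0 flags the combination
# even though every individual chemical is below its own acceptable intake.
# All names and numbers are hypothetical.

ADIS_MG_PER_KG_BW = {"substance A": 0.10, "substance B": 0.05, "substance C": 0.02}
INTAKES_MG_PER_KG_BW = {"substance A": 0.06, "substance B": 0.03, "substance C": 0.012}

def hazard_index(intakes: dict, adis: dict) -> float:
    """Sum of intake/ADI ratios, assuming simple dose additivity."""
    return sum(intakes[name] / adis[name] for name in intakes)

hi = hazard_index(INTAKES_MG_PER_KG_BW, ADIS_MG_PER_KG_BW)
print(round(hi, 2))
```

In this hypothetical case each substance alone uses only 60% of its ADI, yet together they reach 1.8 times the acceptable combined level; where such combined intakes remain well below acceptable levels, no interaction is expected.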
This is why little attention is presently paid by regulatory toxicologists to mixtures of chemicals in the diet. The exception to this rule is where different chemicals relate to the same toxicological endpoint. Here, the combined effects of several low doses of different chemicals might be additive, and if sufficient were present in the diet could cause intakes to exceed an acceptable intake. Some veterinary pharmaceutical residues are regulated so that the total amounts of related substances are controlled instead of individual compounds. This is a satisfactory solution when the potencies of compounds are very similar. However, for some classes of substance the relative potencies of different compounds can vary enormously. Certain classes of environmental contaminants, such as polycyclic aromatic hydrocarbons (PAHs) and the dioxins and furans, include many structurally related congeners which can vary in toxic potency by factors of up to 1000. Here, toxic equivalency factors are used to adjust concentrations of
individual substances so that the toxicological significance of the total amount can be estimated. Further work is required to identify other classes of toxicologically related compounds and to develop toxic equivalency factors which can be used in their risk assessment. Where toxicological thresholds are not observed, such as for genotoxic carcinogens, then additive or even synergistic effects might result, even at low concentrations (Seed et al., 1995). At the present time, a thorough understanding of the potential consequences of such interactions is lacking and they may prove to be of little or no significance to human health.

20.2.2 Biomarkers - integrated indicators of exposure and effect

Biomarkers provide the potential to make more direct estimates of risk by measuring actual levels of the hazardous chemical in body tissues or by measuring a marker related to the anticipated toxic effect. The ideal biomarker would measure the target organ dose or the net biological effect. Few existing biomarkers measure up to this standard, although there are some used in occupational medicine, such as blood lead, which are very effective. Biomarkers are unlikely to provide a replacement for routine monitoring in the short term, because of many serious practical and ethical problems. Their immediate value probably lies in their potential to validate other, more indirect, methods of estimating exposure, uptake and risk (Crews and Hanley, 1995). The development of biomarker techniques is discussed in greater detail in Chapter 4.

20.2.3 PB-PK modelling - an integrated approach to hazard characterization

An increasing awareness of the desirability of including absorption, distribution, metabolism and excretion (ADME) studies in in vivo toxicology has led to greater availability of such data. Whilst such data are intended to assist in the understanding of the animal toxicology, they may have a further potential in PB-PK applications.
PB-PK models are multi-compartment dynamic models of biological systems. The input and output parameters for each compartment can be derived from known constants (e.g. blood perfusion rates) or from ADME studies. Data from biomarker, quantitative risk assessment, structure-activity relationship and in vitro studies can all be accommodated in the so-called 'parallelogram' approach to predict the 'in vivo' consequences in humans (Figure 20.1).

20.2.4 Integrated exposure analysis

People can sometimes be exposed to chemicals by a variety of different routes in addition to food. Air, water, cosmetics and household products are important routes for most people (Figure 20.2). For example, heavy
[Figure 20.1 The 'parallelogram' approach to risk assessment. Human and animal data, in vivo and in vitro, are linked by biomarkers, physiologically based pharmacokinetic (PB-PK) modelling, structure-activity relationships (SAR) and quantitative risk assessment (QRA).]
metals such as lead are all derived from minerals in the environment. Some are leached naturally from soils and can be taken up by food plants. The industrial use of metals in pipes, cooking utensils, etc. can also result in their presence in food and drinking water. Other industrial uses, such as the use of lead as an anti-knock agent in motor fuel, can cause the dispersion of metals into air and the environment. If each exposure route is considered in isolation then the total intake will be underestimated. It is therefore necessary to consider all possible exposure routes in chemical risk assessment. It is also important to integrate the relevant exposure period for a particular hazard into the acceptable intake expression. A convention has grown up over recent years where acceptable intakes for additives have been frequently expressed as an ADI (acceptable daily intake) and tolerable intakes for contaminants as a PTWI (provisional tolerable weekly intake). However, these terms give little indication of the real time interval over which the hazard will be expressed. For example, the hazard from cadmium (an environmental contaminant) relates to damage caused by lifetime accumulation in the kidneys, whereas the hazard from sulphur dioxide (a preservative) relates to gut irritancy resulting from intake in a single meal.
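The integrated (aggregate) exposure idea above can be sketched as a simple summation: intakes of the same chemical from every route are added before comparison with the tolerable intake, since judging any single route in isolation understates the total. All numbers below are hypothetical, chosen only to show the point.

```python
# Hedged sketch of integrated exposure assessment: intakes of one chemical
# (here, hypothetically, lead) are summed across all routes before being
# compared with a tolerable daily intake. All values are illustrative.

ROUTE_INTAKES_UG_PER_DAY = {   # hypothetical intakes for one adult
    "food": 20.0,
    "drinking water": 8.0,
    "air (inhalation)": 2.0,
    "other (dust, etc.)": 5.0,
}
TOLERABLE_UG_PER_DAY = 50.0    # hypothetical tolerable daily intake

def total_intake(routes: dict) -> float:
    """Aggregate intake across all exposure routes."""
    return sum(routes.values())

def fraction_of_tolerable(routes: dict, tolerable_ug_per_day: float) -> float:
    """Share of the tolerable intake used up by all routes combined."""
    return total_intake(routes) / tolerable_ug_per_day

print(total_intake(ROUTE_INTAKES_UG_PER_DAY))
print(fraction_of_tolerable(ROUTE_INTAKES_UG_PER_DAY, TOLERABLE_UG_PER_DAY))
```

In this hypothetical case the combined routes use 70% of the tolerable intake, whereas food alone (20/50 = 40%) would wrongly suggest a wide margin of safety.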
[Figure 20.2 Integrated exposure assessment. Chemical sources (environmental contaminants, natural substances, manufactured chemicals) reach people through air, water, food, medicines, cosmetics and household products; the exposure routes (inhalation, ingestion, skin absorption and occupational exposure) combine into an integrated exposure and internal dose.]
In some cases children may be more susceptible to the chemical than adults. This information should also be integrated into the acceptable intake expression so that an appropriate intake estimate can be produced.

20.2.5 Integrated risk characterization

Many chemicals and most foods have not only risks but benefits associated with them, and it is therefore unrealistic to consider risks or benefits separately and necessary to analyse both together. The 'whole food' approach to risk assessment (Chapter 13) takes this idea forward to some extent. Here it is applied specifically to foods containing naturally occurring toxicants, but the approach could equally well apply to any combination of natural or synthetic chemicals in any food or combination of foods. For example, many chemicals are suspected of having carcinogenic potential because when they are studied in isolation it is possible to show that they are mutagenic. However, other chemicals have protective effects against carcinogens (see Chapter 10). The question is: do the protective effects of one set of chemicals mitigate the carcinogenic effects of the others? In the overwhelming majority of cases, the answer must be that they do - otherwise there would be many more cases of cancer than we see. The challenge for risk analysts is to integrate risks and benefits in order to identify the small number of cases where there may be a potential net risk to consumers and to identify suitable strategies to manage these.

20.2.6 Comparative risk assessment
Harvey et al. (1995) argue that the process of integration should incorporate environmental risk assessments into human health risk assessments and address the issue of competing risks. The example they give is water chlorination, which is effective at controlling widespread diseases such as cholera but must be balanced against the possibility of introducing potentially harmful chemical by-products. Harvey et al. suggest that the expression of all relevant factors in what they term 'holistic risk assessment' should improve the understanding of risk choices among risk managers and the public, and allow risk managers to weigh the choices better.

20.3 Integrated risk management

Factors essential to effective risk management were described in Part 3 of this book. These must now be drawn together into a workable framework which will allow all factors to be balanced and policy options to be identified and assessed. Figure 20.3 describes a possible framework. In this
Figure 20.3 Integrated framework for food chemical risk analysis. [Diagram: on the risk assessment side, hazard identification and prioritization leads to exposure analysis (food consumption assessment, occurrence assessment and intake estimation) and hazard characterization (dose-response assessment), which feed risk characterization and benefit characterization. On the risk management side, risk/benefit evaluation balances these against economic analysis, stakeholder analysis, consumer perceptions and ethical and moral factors, leading to risk reduction or to risk control through regulation, monitoring and surveillance, and risk communication.]
system, risk evaluation is the core process, in which information from the risk assessment, economic analysis, consumer perceptions, and ethical and moral factors is balanced. The output from this process may be a call for risk reduction or, if risk reduction is impracticable, risk control. Risk control identifies and evaluates policy options and results in some combination of regulation, monitoring and surveillance, and risk communication. This framework may not be applicable to all situations but should serve as a template for developing new risk analysis applications.

20.3.1 The role of science in risk management
There has been a polarization of views in international circles over the proper role of science in risk management. One camp believes that risk management solutions should be based solely on scientific criteria, whilst the other believes that it is legitimate to take other relevant factors into account. This debate has been particularly heated within the Codex Alimentarius Commission when considering mechanisms for setting international standards for additives, contaminants, pesticides and veterinary residues in food. The Codex Committee on General Principles concluded that standards should be based on scientific principles only.

At the centre of this philosophical argument is the European regulatory authorities' ban on the use of growth-promoting hormones in animal production. EU food law is based on public health, other consumer affairs, fair trading and official control. There is no argument against hormones on public health grounds, as long as they are properly used. The European ban on hormones is based on lack of consumer acceptance of meat which has been produced using hormones. The EU regulatory authorities claim that this is a legitimate basis, but we have yet to see the success of this argument when the ban is challenged by the US authorities at the World Trade Organization. The US authorities claim that the ban contravenes the General Agreement on Tariffs and Trade, whose standards are based on those of the Codex Alimentarius Commission.

It would be a very brave manager in a commercial organization who chose to ignore consumer opinions when formulating risk management policy. For example, whilst food irradiation has been widely approved by regulatory authorities throughout the world, there are very few examples of its application. The unquestionable benefits of food irradiation are foregone because of low consumer acceptance of the technology.
Whilst it is reasonable for such factors to influence commercial decisions, is this a justifiable basis for law? The answer probably lies in consumers' ability to follow their own principles. If foods can be labelled so that consumers can identify those produced using particular technologies, then it would be unreasonable to prevent their sale, unless there were public health concerns. Consumers would be free to make their own moral and ethical decisions. However, in some cases, such as milk from bovine somatotrophin (BST)-treated herds, labelling is impractical because of the bulking of wholesale supplies. In this case, the European Parliament imposed a moratorium on the use of genetically engineered BST.

Sometimes it is impossible to divorce food safety decision-making from social and economic factors. If, for example, food safety regulations were based solely on scientific assessments of risk, then regulations on food additives might be repealed, whilst the sale of barbecued meat would be outlawed. Of course, such action would be absurd, because it ignores the real-world influences of sociology, politics, economics and psychology. Blindness to the importance of social and economic factors can lead to these factors being poorly attended to, being considered as an afterthought or, worst of all, being used to colour scientific advice. This, in turn, can lead to inconsistent and obscure policies which are difficult to explain and defend. Whilst it is vital that risk analyses should be based on the best science, it is equally important that they should relate to the real world and take account of social and economic factors openly and fully.

20.3.2 Integrating consumer perceptions

The process of food chemical risk analysis is driven to a major extent by consumers' perceptions of risk. This explains why the amount of resources spent on certain items, such as pesticides and food additives, is disproportionate to the actual level of risk they present. The risk analysis model in Figure 20.3 indicates a feedback trail from consumer perceptions to hazard identification and prioritization which represents this strong influence. This aspect is not necessarily beyond the control of risk managers. Figure 20.3 also shows how effective risk communication can influence consumer perceptions and in turn affect the priorities for risk assessment.
It is vital that risk managers recognize that not all risks are considered equally by the public. In particular, artificial risks are given a much higher weighting than risks of natural origin. Conversely, natural benefits may be far more valued than those derived from chemicals added to food. This means that the determination of an 'acceptable' level of risk is extremely complex and based on more than just the probability of an adverse event. The acceptable level of risk will also depend on the nature of the risk, so that a given frequency of minor adverse effects is more acceptable than the same frequency of more severe effects. This makes it very difficult to justify the arbitrary 1 in 1 000 000 risk which many organizations take as the acceptable level. More research into the acceptability of risk may reveal that this level is unnecessarily conservative in some situations.
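The point that acceptability depends on severity as well as frequency can be made concrete with a toy calculation. The severity weights, frequencies and the use of a single severity-weighted threshold below are all invented for illustration, not taken from the text:

```python
# Toy illustration: a fixed probability threshold treats all outcomes alike,
# whereas weighting by severity distinguishes them. All numbers are invented.
ACCEPTABLE_WEIGHTED_RATE = 1e-6  # hypothetical severity-weighted rate, per person per year

hazards = [
    # (description, annual frequency of the adverse effect, severity weight 0..1)
    ("mild transient irritation", 5e-6, 0.01),
    ("severe chronic illness",    5e-6, 1.00),
]

for name, freq, severity in hazards:
    weighted = freq * severity
    verdict = "acceptable" if weighted <= ACCEPTABLE_WEIGHTED_RATE else "not acceptable"
    print(f"{name}: weighted rate {weighted:.1e} -> {verdict}")
```

Both hazards occur at the same raw frequency, yet only the minor one falls below the weighted threshold, which is the asymmetry the paragraph above describes.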
20.3.3 Integrating risk communication
Much has been said already in this book about the need for risk communication to be regarded as a dialogue rather than a one-way process. In the model shown in Figure 20.3, risk communication is shown as an output from risk management. However, it is also an input to consumer perceptions, since perceptions are clearly driven by inputs of all kinds. Consumer perceptions in turn have an influence over policy. The degree to which this feedback system should be formalized will vary from case to case. However, there will be few circumstances in which risk communication, including both the supplying and gathering of information, will not need to be an integral part of the risk management process.

Public opinion surveys provide useful ways of gathering information, but it is extremely difficult to avoid bias. Conventional public relations can also have a role but, unless care is taken, can sometimes verge on propaganda. It is a mistake to assume that consumers are naive where risk information is concerned and that simple safety messages will convince them. A more sophisticated approach which is sensitive to their perceptions is required. Some organizations already gather information on consumer views either formally, through 'consumer panels', or informally through focus groups and market research. Such 'listening' activities have a vital role in effective risk communication.

Drawing consumers into the risk management process not only assists in reaching more realistic solutions; openness can also help to build trust and confidence in organizations, so that, when unpredictable crises do occur, consumers understand that everything practicable has been done to avoid the situation and everything possible is being done to protect public safety. Of course, it is vital that such communication systems are established long before a crisis strikes.
20.3.4 Regulation and deregulation

Food is one of the most highly regulated of all commodities, and many national governments and intergovernmental organizations are examining the need for and effectiveness of food safety legislation. Furthermore, food legislation is often complex and highly detailed, prompting calls for radical deregulation among those who are the subjects of regulation. Whilst no one would argue that regulation should be unnecessarily complex, caution should be exercised in calling for the removal of regulations. It is often believed that food safety and quality regulations impose a cost on producers whilst bringing benefits to consumers. In practice, the risk-benefit equation is not so simple. Regulations can bring considerable benefits to producers, whilst the cost is ultimately borne by consumers.

The most important way in which regulations can protect producers is by preserving their good reputation. Most food producers, processors and retailers operate to very high standards. For such industries, regulations may seem an unnecessary burden, since they would aim for high standards regardless of whether regulations were in place. However, if a less scrupulous competitor were free to market a similar product produced to much lower standards, this could undermine the reputation of all products of that type. In this sense, deregulation can represent a significant risk to food producers and retailers. The solution is to revise regulations, making them simpler to understand and follow, more relevant and more flexible. However, the powers of enforcement officers should not be weakened by changes in the law. Most important of all, the confidence of consumers should be heightened by re-regulation, not undermined by it.
20.4 Integrating uncertainty

Unlike many fields where risk analysis is applied, food chemical risk analysis generates a high degree of uncertainty. Sources of uncertainty and its analysis have been described in several earlier chapters. One of the main problems associated with uncertainty in risk analysis is its expression, particularly to consumers. The general public often expects scientists to provide 'black and white' decisions, whereas a large element of the science of risk analysis is about making judgements in the absence of firm evidence. For example, non-scientists sometimes believe that if chemical intakes are below the ADI then this is 'safe'. As discussed in Chapter 2, there are many sources of uncertainty in this analysis, and all that can be said with confidence is that there is no evidence of a significant risk. The possibility exists that the risk has been seriously underestimated, but the probability of this is extremely remote. In the real world, nothing is 100% safe.

Consumers are therefore confused when scientists are unable to separate the world into things which are 'safe' and those which are 'unsafe'. Risk managers and administrators are also sometimes frustrated by this apparent reluctance of scientists to commit themselves, and see it as their role to translate complex scientific advice into simple black and white safety messages. Providing information in such absolute terms may satisfy consumers in the short term, but if the knowledge base underlying the risk assessment changes, then it may become necessary to revise the previous 'safe' message. This can seriously undermine public confidence, when all that has actually happened is that a little more light has been shed on the problem and a layer of uncertainty removed. It has been suggested that the solution to this problem is to provide consumers with more information on the true level of uncertainty.
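One way of expressing the true level of uncertainty is to report a probability rather than a binary verdict. The sketch below uses a simple Monte Carlo simulation to estimate the probability that intake exceeds an ADI; the lognormal distributions, their parameters, the body weight and the ADI are all invented for illustration.

```python
# Illustrative Monte Carlo sketch: estimate P(intake > ADI) under invented
# uncertainty about food consumption and chemical concentration.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

BODY_WEIGHT_KG = 60.0
ADI_MG_PER_KG_BW = 0.0005  # hypothetical ADI
N = 100_000               # number of simulated consumers

exceedances = 0
for _ in range(N):
    # Invented lognormal distributions for the two uncertain quantities.
    consumption_kg = random.lognormvariate(-1.5, 0.5)  # kg food per day
    concentration = random.lognormvariate(-4.0, 0.8)   # mg chemical per kg food
    intake = consumption_kg * concentration / BODY_WEIGHT_KG
    if intake > ADI_MG_PER_KG_BW:
        exceedances += 1

print(f"estimated P(intake > ADI) = {exceedances / N:.4f}")
```

A statement such as 'roughly 2% of simulated intakes exceed the ADI under these assumptions' conveys the uncertainty honestly, where 'this product is safe' does not.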
However, this may itself introduce problems, since discussions of uncertainty can increase consumers' perceptions of risk. If the source of information is trusted, then the answer may simply be careful choice of words. For example, 'this product presents an insignificant risk' is probably more accurate, more honest and as easily understood as 'this product is safe'. It is also more likely to remain valid as scientific knowledge increases.

20.5 Conclusion

Food chemical risk analysis is gradually emerging from its stone age. More sophisticated technologies are being added to our armoury and we are increasingly realizing the interdisciplinary nature of our task. Nevertheless, there are still many unresolved issues, with rational arguments on both sides. Evolution can be an uncomfortable process, because the more we understand about risks today the more we must call into question the decisions of the past. It is folly to assume that we can create risk analysis systems which will become permanent monuments to our craft. We must face up to the fact that emerging scientific knowledge will always affect the way in which we perform our role. It is vital that future risk analysis methodologies should have built into them facilities for upgrading and refinement.

The greatest challenge we face today is global harmonization. The danger is that we will settle for the lowest common denominator because this is the best compromise that can be achieved. Oversimplification will lead to standards which are arbitrary and bear no relevance to the real world. We may achieve the goal of establishing a 'level playing field' for trade, but the cost may be standards which are either unnecessarily conservative or fail to perform their function of protecting public health; both are equally unacceptable socially and politically. We will never have a complete set of perfect tools, since the underlying physical, biological and social sciences will continue to evolve.
We do, however, have a responsibility to make the best use of the tools which are available now, so that we can identify the best possible solutions, not merely the most obvious.

References

Crews, H.M. and Hanley, A.B. (1995) Biomarkers in Food Chemical Risk Assessment. The Royal Society of Chemistry, Cambridge.

Harvey, T., Mahaffey, K.R., Velaquez, S. and Dourson, M. (1995) Holistic risk assessment: an emerging process for environmental decisions. Regulatory Toxicology and Pharmacology, 22, 110-117.

Seed, J., Brown, R.P., Olin, S.S. and Foran, J.A. (1995) Chemical mixtures: current risk assessment methodologies and future directions. Regulatory Toxicology and Pharmacology, 22, 76-94.