16 ELR 10190 | Environmental Law Reporter | copyright © 1986 | All rights reserved
Fundamentals of Risk Assessment

Chris G. Whipple

Chris Whipple, Ph.D., is Technical Manager of the Energy Study Center at the Electric Power Research Institute, Palo Alto, California.
[16 ELR 10190]
Quantitative estimates of human risk are now used for regulation and management of many technological, environmental, and occupational health risks. Ten or more years ago, this was not the case; except for activities for which reliable data were directly available (such as transportation risk) and for a few other areas (such as ionizing radiation), risk management decisions were made without quantitative risk estimates, although analytical methods that stopped well short of estimating risk were used.
Quantitative risk assessment is largely an invention born of the needs of risk policymakers. Risk assessment did not burst upon the regulatory scene as a result of a dramatic scientific advance, although the science behind risk assessment has improved in response to increased research funding. While it is tempting to associate the growth of quantitative health risk assessment with the remarkable advances in biology over the past decade, I do not think this is the case. The scientific approach to risk assessment for carcinogenic hazards is similar conceptually to methods that have been used to assess ionizing radiation since the early 1950s. The growth of quantitative methods for risk assessment in nonbiological fields — notably, in nuclear power plants — suggests that an anxious public and a politicized regulatory environment are the primary factors behind the recent growth in these approaches.
Historical Approaches
The most common historical approach to risk management by both toxicologists and engineers involves the use of a safety factor. It is a fundamental premise in toxicology that given a sufficiently large dose any substance can have a toxic effect. In engineering, it is known that any structure will fail given a sufficiently large force. Traditionally, the design safety-factor approach in engineering has been to estimate conservatively the load likely to be experienced (by a bridge or dam, for example), and then to design for a load two or three times greater. Typically, no one knows what failure probability is associated with the resultant design; one knows only that failures are relatively rare when this approach is taken.
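The arithmetic of the traditional design safety-factor approach is simple enough to sketch. The load and factor below are hypothetical round numbers, not drawn from any real design code:

```python
# Illustrative safety-factor sizing for a structural member.
# The numbers are invented for illustration, not design guidance.

def design_capacity(estimated_load: float, safety_factor: float = 2.5) -> float:
    """Return the capacity a component is sized for: a conservative
    load estimate multiplied by a traditional safety factor."""
    if estimated_load <= 0 or safety_factor < 1:
        raise ValueError("load must be positive and factor at least 1")
    return estimated_load * safety_factor

# A bridge member expected to carry at most 100 tons is sized for 250 tons.
# Note that the failure probability of the result is never computed,
# only presumed to be low.
capacity = design_capacity(100.0, 2.5)
```

The point of the sketch is what is absent: nowhere does the calculation produce a probability of failure.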
[16 ELR 10191]
Similarly, before carcinogens became a dominant concern, toxicologists established experimentally (in animals or in human volunteers) an exposure level at which adverse effects occur. They then set occupational or public exposure limits at a small fraction of the adverse level. While toxicologists understood that there were sensitive individuals who would respond to the established levels, and while engineers understood that an occasional load from a rare event, such as an earthquake, could reduce the actual safety factor, both groups recognized the trade-offs in terms of cost and performance of higher safety margins. Cost was not always the major factor weighing against higher safety margins; margins were set lower for airplane wings than for bridges simply because an airplane must be light enough to fly. Where costs were the major factor in considering risk reduction alternatives, explicit cost-benefit trade-offs were rarely used.
The safety-factor approach is what we would today call a judgmental de minimis risk approach. The risk residuals were not explored analytically but were assumed to be sufficiently low. The most substantial job for the safety engineer or the industrial hygienist was to ensure that the design safety margins were maintained in practice.
In many cases, the approach used to achieve desired safety factors became standardized, as with engineering codes and practices. In actuality, it is difficult to associate a particular engineering code with some level of risk, partly because the safety factor that went into the code is not cited explicitly, and partly because the codes often include conservatisms to account for less than perfect compliance (as, for instance, when real materials, such as wire or pipe, have imperfections and do not behave as do textbook materials). The safety code approach has worked well in general, and remains the primary means by which risk is managed for most engineered products.
Limits to Safety Factor Approaches
The primary virtue of the safety factor approach as a risk management method is that it is usually easier to make something safe than to estimate its risk. While this method has, on balance, worked well, we are dealing today with many risks for which the historical approaches seem unsuited. The traditional methods have features that are unattractive for some current risks.
First, the traditional methods are analytically unsuited to explore risk, cost, and other trade-offs in safety decisions. In current regulatory terms, these methods permit use of zero risk (or de minimis risk) or technology-based standards. In contrast, quantitative risk assessment is compatible with cost-benefit or risk-benefit analysis, with cost-effectiveness analysis, which compares marginal costs with risks, with risk-based standards such as the 10^-6 lifetime risk figure cited periodically by the Environmental Protection Agency (EPA) and the Food and Drug Administration (FDA), and with the safety goals proposed by the Nuclear Regulatory Commission (NRC).
Second, the traditional approaches seem to perform poorly for highly politicized issues. In part, this is because the old approaches are technocratic: toxicologists and engineers make the safety decisions. The traditional approaches also combine considerations of risk and policy in ways that often preclude a policy role for the nonexpert. Quantitative risk assessment, on the other hand, separates to a substantial degree the question of what risk exists from the question of how safe is safe enough.1
In addition to the above limitations, the traditional risk management methods for environmental health risks are of little use in meeting the requirements of the 1969 National Environmental Policy Act (NEPA)2 for disclosure of foreseeable impacts of technology. While the early emphasis was on environmental damage rather than on health, health effects have grown in importance over time. Clearly, the determination of anticipated health impacts requires a quantitative risk assessment. Along with NEPA, the passage of other significant risk-based environmental statutes — the Clean Air Act3 for example — has created a strong impetus for the growth of risk assessment.
Finally, the nature of risks of public concern has changed. In engineering, the safety factor and code approaches have worked well where trial and error have identified weak areas in a design. Today, however, for technologies such as nuclear power plants and commercial airplanes, the emphasis is on trial without error. These technologies demand a predictive method that does not require error for learning. If designers cannot use experience to point out the weakest aspects of a technical system, they can attempt to fill the gap analytically with a complex systems model. In engineering, such models are called probabilistic risk assessments.
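The systems models mentioned above combine estimated component failure probabilities through the logic gates of a fault tree. A minimal sketch, with a made-up two-gate tree and invented failure probabilities, follows; real probabilistic risk assessments involve thousands of such events and dependencies between them:

```python
# A minimal probabilistic risk assessment sketch: a two-gate fault tree.
# All failure probabilities below are invented for illustration.

def or_gate(*probs):
    """Probability that at least one of several independent events occurs."""
    none_occur = 1.0
    for p in probs:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

def and_gate(*probs):
    """Probability that all of several independent events occur."""
    all_occur = 1.0
    for p in probs:
        all_occur *= p
    return all_occur

# Hypothetical top event: coolant is lost AND the backup system fails.
p_coolant_lost = or_gate(1e-3, 5e-4)   # pipe break OR stuck valve
p_backup_fails = or_gate(2e-3, 1e-2)   # pump failure OR operator error
p_top = and_gate(p_coolant_lost, p_backup_fails)
```

Even this toy version exhibits the method's key property: the contribution of each initiating event to the top-event probability can be read off directly, which is what permits the ranking of risk contributors discussed below.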
Similarly, the safety factor approach is scientifically unjustified for management of carcinogens. The traditional risk management methods evolved to handle acute effects, not cumulative or delayed risks. For low-level health risks, such as those posed by carcinogens, the regulatory alternatives to quantitative risk assessment seem rather infeasible. On the one extreme, one could choose to regulate only where there is clear evidence of adverse effects in humans. While most risk analysts would agree that such information, when available, is far better than other criteria for estimating risk, most would also agree that substantial risks could go undetected, given the background incidence and variability of cancer. This approach, used prior to recent decades, now seems politically infeasible to the extent that permitting significant human exposure to known potent animal carcinogens is indefensible.
Conversely, one could prohibit exposure to materials found to be carcinogenic in animal tests, regardless of the level of human risk posed by typical exposure levels. Congress has, in fact, established such an approach in several areas, but the inherent impracticality of the approach has prevented its implementation. Against these alternatives — ignoring or banning exposure to carcinogenic materials — quantitative risk assessment emerges as a defensible risk management tool.
As mentioned earlier, radiation was an early exception to the safety factor approach. In the first several decades of this century, it was assumed that the carcinogenicity observed in animal experiments with radiation resulted from gross tissue damage. This view was consistent with the historically accepted idea of a threshold of damage. In the 1940s, however, work with animals exposed to subacute doses challenged this premise. By the early 1950s, the nonthreshold view for radiation prevailed, and the International Commission on Radiation Protection adopted protective principles based on exposure levels that were not risk-free, but rather were said "to impose a risk which is small compared to the other hazards of life." At this time, the radiation research community came to understand that the definition of a "safe" exposure was a question of policy, as well as a question of science.4 The approach pioneered in the field of radiation has served as a model for much subsequent carcinogen assessment.
A full risk assessment usually involves considerably more than dose-response estimation. Often, the exposures to a potentially hazardous agent are poorly known, and even when known, the sources giving rise to the exposures are difficult to identify. The current debate over the relative contribution of local and distant sources to acid rain reflects just such uncertainty.
Application of Risk Assessment to Environmental Health Issues
Environmental health risks possess a wide variety of characteristics, and the scope of scientific disciplines engaged in such risk studies is correspondingly broad. The time allowed before some risk management decision must be made is a significant factor affecting the selection of an approach to assess potential environmental health risks. Many risk management decisions are made in response to the discovery of problems or events such as spills or fires; in such cases, the applicable risk assessment tools are limited by time and other resource constraints. State and local health officials often deal with these problems. Other environmental health risks are studied prospectively, as for example, in the establishment of federal standards by EPA, NRC and the Occupational Safety and Health Administration (OSHA). At their best, these studies reflect the current scientific limits of risk assessment.
Quantitative Risk Assessment
Sufficient information will rarely be available to permit an accurate assessment of environmental health risks. Generally, the uncertainties as to the degree of risk are substantial, and many of the quantities involved can be estimated only judgmentally. People who work in the field do not see risk assessment as a means of pinning down risks with great precision. Instead, they see it as a language for discussing risks in which assumptions are made explicit and estimates become traceable to some reference point. Risk assessment is a scientific process to the extent that it can be understood and replicated. One can fight over assumptions and data, but the methodology, at least, is clear.
Despite these obstacles to precision in assessing risk, the emphasis by federal regulatory agencies in recent years has been on quantitative risk assessment. In some cases, establishment of a plausible upper bound on risk is the best that can be done. EPA's Carcinogen Assessment Group takes this approach with estimates developed from animal toxicity experiments using the upper-confidence limits of a linearized multistage, dose-response model. In such cases, the plausible lower bound is generally assumed to be zero risk, except where evidence of human health effects is directly available.
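The mechanics of the multistage model and its low-dose linearization can be sketched briefly. The polynomial coefficients below are invented; in practice, the upper confidence bound on the linear coefficient (often written q1*) is fitted to bioassay data, a step well beyond a toy example:

```python
import math

# Sketch of a multistage dose-response model and its linearization.
# The coefficients q are hypothetical, not fitted to any real data set.

def multistage_risk(dose, q):
    """Lifetime excess risk P(d) = 1 - exp(-(q1*d + q2*d^2 + ...))."""
    poly = sum(qi * dose ** (i + 1) for i, qi in enumerate(q))
    return 1.0 - math.exp(-poly)

def linearized_bound(dose, q1_star):
    """Low-dose approximation: risk is roughly q1* times dose."""
    return q1_star * dose

q = [2e-3, 5e-5]       # hypothetical linear and quadratic coefficients
low_dose = 1e-3
full = multistage_risk(low_dose, q)       # full model at low dose
linear = linearized_bound(low_dose, q[0])  # nearly identical at low dose
```

At low doses the higher-order terms vanish and the exponential is nearly linear, which is why the linear term alone serves as the plausible upper bound in the regulatory procedure described above.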
The quantitative approach has several advantages. First, it permits one to examine the relative contribution of various factors (e.g., exposure levels and dose-response relationships) to overall uncertainty; additional research can then be targeted where it will do the most good to reduce total uncertainty. Second, in dealing with politically contentious issues, the use of quantitative estimates provides a natural means to separate the question of the degree of risk from that of risk management action. These advantages create an incentive for all parties to understand, evaluate, and improve the quantitative methods used for risk assessment.
Even when uncertainty is substantial — and many critical estimates can be made only judgmentally — quantification remains of significant value: witness, for example, the case of nuclear power plant risks, where, prior to 1975, no overall risk assessment had been attempted. The first study, the well-known Rasmussen Report,5 was based heavily on judgments concerning the likelihood of failure of specific components and plant systems. Its results conflicted sharply with the conventional (judgmental) wisdom of that time regarding the level and significance of various contributors to overall risk. This study found that small losses of coolant, interruptions of feedwater and emergency power, and human error were all significant contributors to accident risk.
These conclusions conflicted with the priority given to large loss-of-coolant accidents in federal nuclear-safety research. Only after many additional studies and the Three Mile Island accident were research priorities reoriented. Although substantial uncertainties remain in the assessment of reactor risks, probabilistic risk assessment, based on relative contributions to risk, is now used to set research priorities.
A secondary benefit of using these analyses is that mechanisms are now in place to gather and organize the operating data on the failures of specific systems; these functions were not performed before the advent of systematic assessments. Now that many studies have been completed, a framework exists for assessing the significance of off-design events.
Models and Methods in Risk Assessment
Risk assessments are based on models. Some models describe the system that gives rise to releases of pollutants; these are used when direct measurements are not possible. There are models to describe the transport of materials through air, surface and groundwaters, and the food chain. Pharmacokinetic models estimate doses to target organs resulting from exposures at different levels; and dose-response models, with or without pharmacokinetic considerations, produce risk estimates.
A central difficulty in risk assessment is that, frequently, models cannot be verified. At best, the selection of a model form is based upon mechanistic considerations; for example, the multistage dose-response model is predicated on the belief that cancer is a multistage process.
When the models used in risk assessment rest upon unverifiable assumptions, assessments of uncertainty are themselves imprecise. Typically, the uncertainties that can be treated analytically, such as those related to sample size, are less important than the uncertainties arising from the model's deviation from reality. Assessment of uncertainty is therefore necessarily a judgmental undertaking.
There are three distinct analytical areas in risk assessment: source characterization, transport and exposure assessment, and dose-response assessment. Careful treatment of subjectivity, uncertainty, and statistics is required in all phases of risk assessment.
[16 ELR 10193]
Analysis of the sources of release of environmental pollutants is performed when direct measurement is not feasible, for example, in accidental, nonroutine releases. Although there are many such sources, the most detailed area of study concerns releases from nuclear power plants. The emphasis here is on a detailed analysis of the failure modes of plant systems; the analysis produces estimates of the likelihood of release of radioactive materials in varying quantities.
Given a source term, either by an analysis as described above or by direct measurement, the next step is analysis of pollutant transport. Even when direct measurement of exposure levels is possible, as with air pollutants, transport assessment is important, because it permits examination of the effects both of various control options and of alternative sites for proposed facilities on exposure. Transport models are also important when multiple sources contribute to pollution levels, as for example, in urban areas. Many exposure pathways may need to be studied. In coal-liquefaction risk assessment, for example, pathways by air, crops, meat, milk, water, and the aquatic food chain were all considered. In other areas — toxic wastes, for example — consideration of groundwater transport may also be required.
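For air transport, the workhorse of such assessments is the Gaussian plume model. A bare-bones sketch follows; the source strength, wind speed, and dispersion parameters are hypothetical, and a real assessment would take the dispersion widths from atmospheric stability-class curves rather than fixed values:

```python
import math

# A minimal Gaussian plume model for a continuous elevated point source,
# with ground reflection. All input values below are invented.

def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    """Concentration (g/m^3) at a receptor downwind of a stack.

    q: emission rate (g/s); u: wind speed (m/s);
    sigma_y, sigma_z: crosswind and vertical dispersion widths (m)
    at the receptor's downwind distance; y: crosswind offset (m);
    z: receptor height (m); h: effective stack height (m).
    """
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration directly downwind of a 50 m stack.
c_centerline = plume_concentration(q=100.0, u=5.0, sigma_y=80.0,
                                   sigma_z=40.0, y=0.0, z=0.0, h=50.0)
```

A model of this kind makes the text's point concrete: one can vary the stack height or emission rate and see the effect on exposure without any new measurement.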
Detailed analysis of any one of these transport pathways can be an exceedingly complex undertaking. The food pathways assessment requires estimation of the uptake and concentration of materials by various crops and animals, but these processes are in general poorly understood. Chemical reactions en route are likely to be important to exposure levels, but these reactions are also poorly understood. An additional complexity in transport and exposure assessment arises when exposures are substantially different indoors and outdoors; assumptions must then be made regarding the amount of time that people spend in various places.
Another complexity, which is only beginning to be included in risk assessments, is the distinction between exposure and dose. Because nonlinear processes are often involved in the metabolism and internal distribution of a harmful substance, pharmacokinetic considerations can have a significant influence on risk estimates. Failure to consider pharmacokinetic processes can result in overestimates or underestimates of risk, depending on the specific mechanisms involved. For example, the risk of liver angiosarcomas in rats corresponds more closely to the metabolized dose of vinyl chloride than to the exposure concentration.6 Once the rat reaches a certain exposure level, the metabolized dose holds constant; the mechanism saturates. If one were to perform an estimate from exposure concentration at very high levels, one could underestimate risk based on this effect. As the target organ is saturated, most of the additional exposure does not increase the risk. Such mechanisms, however, are rarely included in risk estimates, because they are generally not known.
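Saturable metabolism of the kind invoked for vinyl chloride is conventionally described by Michaelis-Menten kinetics. The sketch below uses invented rate parameters, but it reproduces the qualitative behavior described in the text: above a certain exposure, the metabolized dose, and hence the risk, stops rising with concentration.

```python
# Sketch of saturable metabolism (Michaelis-Menten kinetics).
# The vmax and km values are hypothetical, chosen only for illustration.

def metabolized_dose(concentration, vmax=100.0, km=50.0):
    """Rate of metabolite formation as a function of exposure
    concentration: linear at low exposure, flat near saturation."""
    return vmax * concentration / (km + concentration)

low = metabolized_dose(10.0)     # roughly proportional to exposure
high = metabolized_dose(5000.0)  # near the vmax plateau: saturated
```

Extrapolating a dose-response slope fitted on the flat part of this curve down to low exposures would miss the steeper low-dose behavior, which is the underestimation mechanism the text describes.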
It is desirable, but often not possible, to base environmental health-risk estimates on human exposures and responses. Epidemiology is the mainstay of risk assessment for all effects except cancer — and other effects based on genetic damage, such as mutations and teratogenic malformations — because there are no agreed-upon conventions for relating animal test results for nongenetic health risks to human risk. The usual difficulties associated with epidemiology bear repeating: records are often inadequate, especially when exposures occurred before the risk was known to exist; population mobility adds difficulty; synergistic or antagonistic effects can occur from other exposures, such as smoking; sample sizes are often inadequate for environmental pollutants; available data may relate to high occupational exposures where methods for extrapolation are uncertain; and finally, epidemiological studies usually require a great expenditure of time and money.
For carcinogenic and related risks, the primary sources of information in risk assessment are long-term animal toxicity studies. The full array of risk assessment methods for chemical carcinogens has been reviewed recently by the Office of Science and Technology Policy (OSTP).7 Although the use of animal high-dose studies is controversial, no alternative is currently available. The OSTP document states the issue as follows:
No single mathematical procedure is recognized as the most appropriate for low dose in carcinogenesis … When data and information are limited, as is the usual case, and when much uncertainty exists regarding the mechanism of carcinogenic action, models or procedures which incorporate low-dose linearity are preferred.8
As noted above, the preferred method of EPA's Carcinogen Assessment Group is the 95 percent confidence bound of the linearized multistage model; estimates based on this model are referred to as plausible upper bounds rather than as risk estimates.
In some cases, a consensus cannot be established regarding the appropriate assumptions to use in modeling a health risk. Even when agreement exists, numerous difficulties remain. In addition to the problems discussed above, the assessment of complex mixtures of pollutants stretches the current science. Detailed chemical-by-chemical analysis is often impractical, and may give different results than would tests based on the mixtures actually encountered at a toxic waste site or in a chemical plant process stream. Conducting animal tests on each specific mixture is equally impractical, however, and because the composition of such mixtures varies, the results would probably be of limited value.
The Uncertainty Factor
Uncertainties arise at all stages of risk assessment. When estimating risk, one must be careful to consider the collective uncertainty in source, exposure, and response. The uncertainties arise from limitations in knowledge that are both external and internal to the models used in risk assessment. Internal uncertainties include unknown release and reaction rates for pollutants, sampling limitations in animal or epidemiological tests, and unknown exposure histories. In principle, the effect of these limitations can often be considered in an analysis.
A more difficult problem is that posed by external or modeling uncertainties. A probabilistic risk assessment of a nuclear power plant is inevitably incomplete, and no method exists to estimate the significance of omitted accident sequences. As noted earlier, pharmacokinetic factors are often omitted entirely. Interspecies differences relevant to animal tests are not known, and extrapolations to low dose rates are usually unverifiable. It is generally possible only to estimate uncertainty subjectively, although the use of multiple models is often advocated as a means of gauging the sensitivity of a risk estimate to the assumptions underlying a particular model.
The levels of risk associated with most current regulatory limits are generally low in comparison with those risks for which reliable data or estimates are available. When we deal with individual risks of death from cancer in the range of one per million per year to one per hundred million per year, the assumptions in a risk assessment are necessarily difficult to validate and will likely remain difficult to validate for some time.
There are limits to what research can contribute to reduce such uncertainties. The case of ionizing radiation illustrates this point. Because of the substantial interest in the health effects of radiation since the mid-1940s, enormous amounts of money have been spent to study its risk. Animal experiments have been conducted involving as many as 25,000 test animals (the so-called megamouse studies at Oak Ridge). In contrast, the current convention for assessing the risk of chemicals is to use fifty animals at each test dose.
Several human populations have been studied carefully (for instance, atomic bomb survivors in Japan). Nevertheless, extrapolation to low dose, with the attendant need for assumptions, is required, since the size of the population necessary to estimate risk reliably for exposures of interest makes such direct human measurements infeasible. For example, Land has estimated that to determine the breast cancer risk from a one rad dose (approximately the exposure of a mammographic examination), a sample of roughly 100 million women would be required.9 Given prevailing radiation risk estimates and a 10 million woman sample, there is a 25 percent probability that the risk estimate would come out negative. Yet this risk — about six excess cases per year per million exposed during the second decade following exposure, versus an incidence of 1910 cases per million per year for women of the same age — is relatively high in comparison to many assessed risks. By comparison, a 10^-6 lifetime cancer risk (as proposed for exposures to some toxic materials) is lower by a factor of roughly 400. This simple example ignores other sources of bias, for example, errors in reporting exposure or incidence that confound real epidemiological studies.
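A rough normal-approximation calculation shows why small excess risks demand enormous cohorts. The rates below echo the breast cancer example in the text, but the calculation is an illustrative sketch, not a reconstruction of Land's analysis, and it ignores follow-up time and the biases just mentioned:

```python
import math

# Probability that an estimated excess cancer rate comes out negative,
# under a simple normal approximation to the sampling error of a rate.
# Rates are illustrative annual incidences per person.

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_negative_estimate(excess, baseline, n):
    """Chance that (observed cohort rate - known baseline rate) < 0
    for a cohort of n people, when the true excess rate is 'excess'."""
    standard_error = math.sqrt(baseline / n)
    return normal_cdf(-excess / standard_error)

baseline = 1910e-6   # background annual incidence per woman
excess = 6e-6        # hypothesized annual excess per woman
p_small = prob_negative_estimate(excess, baseline, 10_000_000)
p_large = prob_negative_estimate(excess, baseline, 100_000_000)
```

The sampling error shrinks only with the square root of cohort size, so driving down the chance of a perverse (negative) estimate requires order-of-magnitude increases in the population studied.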
Many uncertainties are associated with the application of animal test results to human health risks. In addition to the need to extrapolate from doses that may be thousands or millions of times greater than those to which humans are exposed, uncertainties also arise from the relative sensitivity of various species to specific toxic agents (hamsters, for example, are far more sensitive to dioxin than are rats) and from the question of how to scale for dose given the great differences in size between humans and test animals (by weight, percent in diet, surface area, and so forth).
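The competing scaling conventions can be put in numerical terms. The body weights below are typical round values, and the two-thirds-power surface-area rule is one common convention; the example is illustrative, not a statement of any agency's practice:

```python
# Sketch of two animal-to-human dose scaling conventions.
# Body weights are typical round numbers; results are illustrative.

MOUSE_KG, HUMAN_KG = 0.025, 70.0

def scale_by_weight(animal_dose_mg_per_kg):
    """mg/kg scaling: humans receive the same dose per unit body weight."""
    return animal_dose_mg_per_kg

def scale_by_surface_area(animal_dose_mg_per_kg,
                          animal_kg=MOUSE_KG, human_kg=HUMAN_KG):
    """Surface-area scaling (total dose taken proportional to BW^(2/3)):
    the equivalent human mg/kg dose is the animal mg/kg dose times
    (animal_kg / human_kg)^(1/3)."""
    return animal_dose_mg_per_kg * (animal_kg / human_kg) ** (1.0 / 3.0)

dose = 10.0                                 # hypothetical mouse dose, mg/kg/day
hed_weight = scale_by_weight(dose)          # weight basis: unchanged
hed_surface = scale_by_surface_area(dose)   # surface-area basis: much lower
```

For a mouse-to-human conversion the two conventions disagree by more than an order of magnitude, which is why the choice of scaling rule materially affects a risk estimate.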
Uncertainties also arise from the fact that test animals have different natural patterns of cancer than do humans. For example, the significance of an observed increase in mouse liver cancers in a test will be disputed, since cases are common in unexposed mice, yet relatively rare in other animals. An additional difficulty is that the human population is far more genetically diverse than the animal strains used in toxicological tests; this factor is generally interpreted to suggest that human sensitivities are far more variable than those of lab animals. The diversity of other agents to which humans are exposed (e.g., differences in diet, occupational exposures, smoking, etc.) adds to the likelihood that human sensitivity will vary more than that of test animals.
Frequently, one encounters the view that risk assessments are currently too uncertain to serve as a basis for risk regulation; this attitude has a long history at NRC. It is important to understand, however, that the uncertainty usually reflects our lack of precise knowledge about a risk. This lack of knowledge persists regardless of the analytical or regulatory approach taken; it is simply more apparent when a quantitative analysis is used. Quantitative risk assessments do not increase risk uncertainty; they are simply the messengers of the news that risk is uncertain.
Analytical Optimism and Conservatism
Although the value of attempting to separate issues of science from those of policy has been noted in two recent studies by the National Academy of Sciences (NAS),10 the separation is often difficult when the uncertainties are large. Furthermore, this split goes against the public health and engineering traditions that treat uncertainty through the use of conservative assumptions. In practice, risk assessments tend to follow the least conservative estimates that can be defended. One often finds that exposure and transport assessments are conducted on a best-estimate basis, in which the objective of developing as accurate an estimate as possible for pollutant concentration seems a straightforward scientific undertaking. Where dose-response models are concerned, the tendency is to select physically plausible models that are not expected to give overly optimistic results. As such models cannot generally be tested experimentally, the resultant range of uncertainty is often quite large.
The issue of analytical conservatism — where conservatism refers to a propensity to overestimate risk — is driven by the fact that, given the existence of gaps in knowledge that must be bridged by assumptions in order to develop a quantitative estimate, most analysts will choose conservative rather than optimistic assumptions. While in many cases added analytical conservatism will lower risks in comparison to decisions based on less conservative analysis, this is not always true, as when one risk is substituted for another. For example, the evidence of the carcinogenicity of cyclamates is less solid than that for saccharin, yet the "conservative" treatment of cyclamates has led to increased saccharin consumption. Such substitutions are commonplace among pesticides or alternative energy sources. Similarly, an overly conservative analysis can consume analytical and regulatory resources better used elsewhere.
The NAS Study cited above addresses the question of conservatism in a thoughtful way. In view of the potential for undesirable risk substitutions, as well as for other reasons (allocation of resources, for example), the panel recommended that the assumptions and test protocols used in risk analysis be applied consistently, so that, while absolute risks may be highly uncertain, relative risks are less uncertain.
To illustrate the practical value of the relative-risk view, one can consider a regulatory decision where two practical risk management options exist: substitution of a less toxic substance for the one giving rise to the risk and reduction of exposures to the original substance. As either option would lead to some risk reduction, the degree of reduction that each would provide is clearly of major importance to the decision. Here, the relative toxicities are relevant, even if absolute risk estimates are unreliable.
It is worth noting that the actual degree of risk to which people are exposed is influenced by the assessed risk, the degree of protection desired from a policy perspective,11 and the way in which standards are implemented and enforced. Standard setters may compensate for perceived conservatisms in risk estimates; conservative analysis is thus not a prerequisite for achieving very low risk.
Another relevant factor is the degree of confidence one has in a risk estimate. A risk for which there is 99 percent confidence that the annual failure probability is less than 0.1 may be the same risk described by a mean or median failure probability of 0.0001. In establishing safety goals on a risk basis, one must recognize that the risk target cannot be established independently of the degree of confidence required in meeting it.
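The arithmetic behind such apparently inconsistent descriptions can be sketched with a lognormal uncertainty distribution, a common choice for failure probabilities. The spread below is chosen purely for illustration:

```python
import math

# How a median, a 99th percentile, and a mean can describe the same
# highly uncertain failure probability, assuming a lognormal spread.

def lognormal_stats(median, p99):
    """Given a lognormal median and 99th percentile, return the
    log-scale standard deviation and the distribution's mean."""
    z99 = 2.326                                  # standard normal 99th percentile
    sigma = math.log(p99 / median) / z99
    mean = median * math.exp(sigma ** 2 / 2.0)
    return sigma, mean

# Median 1e-4, 99th percentile 0.1: three orders of magnitude apart.
sigma, mean = lognormal_stats(median=1e-4, p99=0.1)
```

With this much spread, the mean is pulled far above the median by the long upper tail, so "mean risk," "median risk," and "upper-confidence-bound risk" can differ by orders of magnitude while describing one and the same state of knowledge.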
One difficult issue that arises in debates on risk assessment is whether the analysis should be tailored to the individual at greatest risk or to the average individual. In practical terms, this question affects assumptions in exposure. For instance, should we assume that a person eats fish from a polluted pond three times a day or twice a week? In interpreting the results of animal tests, should we use the most pessimistic data or should we try to average the results of different tests?
Presently, we tend to use the species and sex of the test animal that gives the highest risk estimate; this selection is justified in part by the presumption that, given the assumption of highly variable human sensitivity, at least some humans may be as sensitive as the most sensitive animal tested.
A related question in interpreting animal test data concerns the interpretation of test results in which tumors increase only at specific sites, but not in overall incidence. Salsburg12 and Haseman13 have published results which suggest that this occurs frequently in animal bioassays.
Given the tradition of conservative analysis of health risks in biology as well as in engineering, and the concern that relaxed analytical assumptions would lead to lowered levels of protection, analytical assumptions have proved to be a contentious area in risk analysis. As biologists learn more about the actual mechanisms of the damage that risk assumptions are meant to describe, there is some reason for cautious optimism. In engineering, the reliability data that calibrate risk models also provide a consistency check on projected performance. For nuclear power plants, this data base (for example, years of plant operation) has more than tripled worldwide since the first studies were completed, and information is now used that was ignored in the past.
Conclusions
Risk assessment, as it is now conducted for environmental health risks, is a rapidly evolving field encompassing a wide array of scientific disciplines. Its methods have been adapted to reflect the changing needs of health policymakers, and the two have developed side by side in recent years. The political polarization surrounding environmental health issues has led to more explicit analyses, with science and policy considerations kept more distinct than in the past.
The methods used in risk assessment are experiencing rapid change for several reasons. First, the fundamental changes taking place in biology are affecting risk assessment assumptions. During the past several years, these changes have resulted in a shift from one-hit to multistage dose-response models and to recognition of the likely significance of nonlinear pharmacokinetics. In the future, as the mechanisms of damage to and repair of DNA are better understood, we are almost certain to see revisions in these assumptions.
Another reason for the rapid progress in risk assessment has been its widespread application in the past decade. The current status of methods now in use owes much to experience gained in recent studies. The fields of risk assessment have been self-critical, and this tendency has revealed large collections of pitfalls to be avoided; nothing improves a process quite so much as good criticism. For participants in the policy debates over environmental standards, quantitative risk assessment has become the language of choice for dealing with regulatory issues. Given the great improvements over the past decade, there is every reason to be optimistic about continued progress in risk assessment methods.
1. National Research Council, Risk Assessment in the Federal Government: Managing the Process (1983).
2. 42 U.S.C. §§ 4321-4361, ELR STAT. 41009.
3. 42 U.S.C. §§ 7401-7642, ELR STAT. 42201.
4. E. E. Pochin, Sieverts and Safety, 46 HEALTH PHYSICS 1173 (1984).
5. U.S. Nuclear Regulatory Commission, Reactor Safety Study, WASH-1400 (now also NUREG-75/014) (1975).
6. See table in K. CRUMP, PRINCIPLES OF HEALTH RISK ASSESSMENT (P. Ricci ed. 1984); based on C. Maltoni, The Values of Predictive Experimental Environmental Carcinogenesis. An Example: Vinyl Chloride, 4 AMBIO 18-23 (1975); see also P.J. Gehring, P.G. Watanabe & N.C. Park, Resolution of Dose-Response Toxicity Data for Chemicals Requiring Metabolic Activation: Example — Vinyl Chloride, 44 TOXICOLOGY AND APPLIED PHARMACOLOGY 581-91 (1978).
7. Office of Science and Technology Policy, Chemical Carcinogens: Notice of the Review of the Science and Its Associated Principles, 49 Fed. Reg. 21594-21661 (1984).
8. Id.
9. Land, Estimating Cancer Risks from Low Doses of Ionizing Radiation, 209 SCIENCE 1197-1203 (1980).
10. National Research Council, Risk Assessment in the Federal Government, supra note 1; National Research Council, Risk and Decision Making: Perspectives and Research (1982).
11. This may include consideration of the cost of achieving protection.
12. Salsburg, The Lifetime Feeding Study in Mice and Rats — An Examination of Its Validity as a Bioassay for Human Carcinogens, 3 FUNDAMENTAL AND APPLIED TOXICOLOGY 63-67 (1983).
13. Haseman, Patterns of Tumor Incidence in Two-Year Cancer Bioassay Feeding Studies in Fischer 344 Rats, 3 FUNDAMENTAL AND APPLIED TOXICOLOGY 1-9 (1983).