By Thomas R. DeGregori
The genetic modification debate has given new life to what is being called the "precautionary principle," which claims that if the possible adverse consequences are catastrophic, then preventive action should be undertaken, despite a lack of evidence to support the prediction. This may sound like a prudent course of action, but it would in fact hold public policy hostage to those with vivid imaginations who are most vocal in proclaiming their phobias. It has been called "the categorical imperative of environmentalists," which translates to mean "thou shall not tolerate even a risk of risk" (Bate 1997). Noting a similarly extreme formulation of the precautionary principle by a conservation organization, Foster, Vecchia, and Repacholi (2000, 979) argued that this interpretation meant that there could be no technological progress.
Even if the technology can be used safely, environmentalist writer George Monbiot (2000) would have us oppose its introduction because scientists and other humans, being inherently corrupt, will use it in a destructive manner. It is hard to imagine new technologies or scientific discoveries being allowed under these extreme restrictions on humanity; our capacity to use our technology intelligently would be squelched. As the editor of the website Junkscience.com said, "had Monbiot been around in pre-historic times, he probably would have discouraged the use of fire."
The precautionary principle is often defined as the view that "absence of evidence is not the same as absence of risk." What this really says is that the proponents of the principle have lost the argument on the evidence (otherwise they would argue the evidence), so they argue that we should follow their policy prescriptions anyway. Stated differently: if our fears and phobias are right, we are right, but even if we are wrong, well, we are still right. It's "my policy, right or wrong." In a similar instance, after a study found it unlikely that genetically modified food crops would turn insects into much-feared "super bugs," a spokesman for Friends of the Earth said he found the study to be "interesting," but he didn't think that it was "the definitive piece of research." This came from a movement and an organization that consider any study, no matter how shoddy, to be definitive if it supports their anti-GM-food phobia (Arthur 2000).
As with many antiscience and antitechnology phobias, the precautionary principle would have us ignore the continuing benefits derived from using a technology. In capturing public attention and valuable resources for trivial or nonexistent problems, it would prevent these resources from being used to address clearly identifiable problems where the benefit-to-cost ratio is much higher (Breyer 1993). The precautionary principle applied to food production "would require that we grant legitimacy to the belief that scientifically unwarranted concerns for environmental safety take absolute precedence over providing a population with the means to feed itself" (della-Cioppa and Callan 2000).
Politically, the precautionary principle is offered as a radical, even revolutionary doctrine when in reality it is fundamentally reactionary and elitist. More than even the most conservative doctrines, it assumes that the status quo is privileged and free of danger. This may be fine today for comfortable elites, but it adversely impacts the disadvantaged, particularly those in poorer countries who need technological change to raise their living standards and improve their lives. If one looks at some of the innovations, such as immunization, that have contributed so mightily to bettering the human condition, one has to ask how many such changes would have been allowed to happen had the precautionary principle been operative. The European Community's endorsement of this principle is more a capitulation to street mobs and public hysteria than a reasonable concern for public safety (ComEurCom 2000).
The precautionary principle assumes that the risks are there but are yet to be discovered. In an article titled "Absence of Certainty Is Not Synonymous with Absence of Risk," Cairns (1999) states: "Unrecognized risks are still risks; uncertain risks are still risks; and denied risks are still risks." This is indisputably true, provided there are in fact undiscovered risks. The author's statement simply assumes what is yet to be proved. Assuming the outcome of scientific inquiry before the results are known is becoming all too common when a person or group wishes to advance its agenda.
In a dispute over the scientific basis for European trade sanctions on beef from cattle that had been fed hormones, an official for the European Union argued in early February 1999 that a scientific study would produce findings in May of 1999 that would allow the EU to "fully implement" its obligations under the World Trade Organization agreements (Williams 1999). The question is, if one knows the results of a scientific study a priori (actually seventeen such studies were said to be in process), why bother to carry it (or them) out? As critics of the EU have put it: "After ten years and two WTO rulings against it, the EU continues to search for the 'right' scientific evidence to support a political prejudice against beef raised with growth hormones" (Eizenstat 1999; Winestock 2000; AgWeb 2000). The United States is not opposed to the use of the precautionary principle in trade agreements, as long as it is not used as an excuse for restraining trade where there is a lack of any substantive evidence for real harm.
The precautionary principle assumes that there are risk-free alternatives. In all human endeavors, there are no risk-free actions; even inaction has a risk factor. Further, the precautionary principle purports to be science but in fact is more on the order of theology. It implies that the realm beyond what we currently know is the exclusive preserve of the critics of technology. In other words, if the known evidence does not support your phobia, then the evidence must lie in the great unknown. The position claims that all the possible unknown dangerous actions will be from technological practices, while the less technologically advanced, presumably more "natural" alternatives, carry no unseen dangers. What it means is that no matter how much we push forward the frontiers of knowledge and demonstrate that there are no proven dangers to a product or practice, there will always be the unknown dangers in the great beyond that can be called upon in an attempt to impede intelligent, problem-solving action. These are assumptions that have no basis in fact or human experience.
Thomas R. DeGregori, Ph.D. is a professor of economics at the University of Houston and an ACSH Director.
Thomas DeGregori is the author of Bountiful Harvest: Technology, Food Safety, and the Environment, from which this article was excerpted.
Responses:
May 31, 2003
I personally sprayed many elm trees to control elm bark beetles to try and stop the spread of Dutch elm disease. I suffered no illness from this, and that was a long time ago.
No chemical has had so many lies told about it as DDT. It did not cause the decline of eagles or peregrines. PCB was the culprit for eagles, and the peregrine never did decline according to data from Hawk Mountain during the DDT years. It is just a rare bird, and the environmentalists couldn't understand why they couldn't see one in every tree. Nor did DDT cause cancer. In fact it is protective, because not one person in the history of Montrose Chemical ever came down with cancer. It caused the liver to emit an enzyme that is cancer-protective.
Not one good scientist who studied DDT came up with the same conclusion as the EPA, but it was banned. The amateurs won. Hugh E. Fitch, forester