CRISPR Revolution: Do We Need Tighter Gene-Editing Regulations? No

By Cameron English — Oct 25, 2021
Some scientists say we need tighter gene-editing regulations to mitigate the serious risks associated with the technology. There are some critical flaws in their argument.

Life goes on as gene-edited foods begin to hit the market. Japanese consumers have recently started buying tomatoes that fight high blood pressure, and Americans have been consuming soy engineered to produce high amounts of heart-healthy oils for a little over two years. Few people noticed these developments because, as scientists have said for a long time, the safety profile of a crop is not dictated by the breeding method that produced it. For all intents and purposes, it seems that food-safety regulators have done a reasonable job of safeguarding public health against whatever hypothetical risks gene editing may pose.

But this has not stopped critics of genetic engineering from advocating for more federal oversight of CRISPR and other techniques used to make discrete changes to the genomes of plants, animals and other organisms we use for food or medicine. Over at The Conversation, a team of scientists recently made the case for tighter rules in "Calling the latest gene technologies 'natural' is a semantic distraction — they must still be regulated."

Many scientists have defended gene editing, in part, by arguing that it simply mimics nature. A mutation that boosts the nutrient content of rice, for example, is the same whether it was induced by a plant breeder or some natural phenomenon. Indeed, the DNA of plants and animals we eat contains untold numbers of harmless, naturally occurring mutations. But The Conversation authors will have none of this:

Unfortunately, the risks from technology don’t disappear by calling it natural ... Proponents of deregulation of gene technology use the naturalness argument to make their case. But we argue this is not a good basis for deciding whether a technology should be regulated.

They have written a very long peer-reviewed article outlining a regulatory framework based on "scale of use." The idea is that the more widely a technology is implemented, the greater risk it may pose to human health and the environment, which necessitates regulatory "control points" to ensure its safe use. It's an interesting proposal, but it's plagued by several serious flaws. 

Where's the data?

The most significant issue with a scale-based regulatory approach is that it's a reaction to risks that have never materialized. This isn't to say that a potentially harmful genetically engineered organism will never be commercialized. But if we're going to upend our biotechnology regulatory framework, we need to do so based on real-world evidence. Some experts have actually argued, based on decades of safety data, that the US over-regulates biotech products. As biologist and ACSH advisor Dr. Henry Miller and legal scholar John Cohrssen wrote recently in Nature:

After 35 years of real-world experience with genetically engineered plants and microorganisms, and countless risk-assessment experiments, it is past time to reevaluate the rationale for, and the costs and benefits of, the case-by-case reviews of genetically engineered products now required by the US Environmental Protection Agency (EPA), US Department of Agriculture (USDA) and US Food and Drug Administration (FDA).

The problem with scale

Real-world data aside for the moment, there are some theoretical problems with the scale-based model as well. The argument assumes that the risks associated with gene editing proliferate as use of the technology expands, because each gene edit carries a certain level of risk. This is a false assumption, as plant geneticist Kevin Folta pointed out on a recent episode of the podcast we co-host (at the 21-minute mark).

Scientists have a variety of tools with which to monitor and limit the effects of specific gene edits. For example, proteins known as “anti-CRISPRs” can be utilized to halt the gene-editing machinery so it makes only the changes we want it to. University of Toronto biochemist Karen Maxwell has explained how this could work in practice:

In genome editing applications, anti-CRISPRs may provide a valuable 'off switch' for Cas9 activity for therapeutic uses and gene drives. One concern of CRISPR-Cas gene editing technology is the limited ability to control its activity after it has been delivered to the cell … which can lead to off-target mutations. Anti-CRISPRs can potentially be exploited to target Cas9 activity to particular tissues or organs, to particular points of the cell cycle, or to limit the amount of time it is active.

Suffice it to say that these and other safeguards significantly alter the risk equation and weaken concerns about a gene-edits-gone-wild scenario. Parenthetically, scientists design these sorts of preventative measures as they develop more genetic engineering applications for widespread use, much as the cars in production today carry safety features that would have been unheard of in years past.

Absurdity alert: The A-Bomb analogy

To bolster their argument, The Conversation authors made the following analogy:

Imagine if other technologies with the capacity to harm were governed by resemblance to nature. Should we deregulate nuclear bombs because the natural decay chain of uranium-238 also produces heat, gamma radiation and alpha and beta particles? We inherently recognize the fallacy of this logic. The technology risk equation is more complicated than a supercilious 'it's just like nature' argument.

If someone has to resort to this kind of rhetoric, the chances are excellent that their argument is weak. Fat Man and Little Boy, the bombs dropped on Japan in 1945, didn't destroy two cities because a nuclear physicist in New Mexico made a technical mistake. Those weapons were designed to wreak havoc. Tomatoes bred to produce more of an amino acid, in contrast, are not.

The point of arguing that gene-editing techniques mimic natural processes isn't to assert that whatever is natural is good, and that gene editing is therefore good too. Instead, the point is to illustrate that inducing mutations in the genomes of plants and animals is neither novel nor uniquely risky. Even the overpriced products marketed as "all-natural" have been improved by mutations resulting from many years of plant breeding.

Nonetheless, some scientists have argued that reframing the gene-editing conversation in terms of risk versus benefit would be a smarter approach than making comparisons to nature. I agree with them, so let's start now. The benefits of employing gene editing to improve our food supply and treat disease far outweigh the potential risks, which we can mitigate. Very little about modern life is "natural"—and it's time we all got over it.


Cameron English

Director of Bioscience 

Cameron English is a writer, editor and co-host of the Science Facts and Fallacies Podcast. Before joining ACSH, he was managing editor at the Genetic Literacy Project.
