Are Large Population Studies Worth The Cost?

Given the media attention devoted to weak observational claims about health (miracle vegetables, chemophobia of the month), and the rampant mistrust of science that has resulted, it is worth asking if large observational studies are worth the expense at all.

The answer is that observational studies probably are, though only as smaller programs.

If you are not familiar with the term, an observational study is just what it sounds like; in contrast to an experiment, which is what most people think of as science, an observational study instead observes a population to see what its members have in common, such as the impact of various lifestyle and environmental factors on disease. There are two types of observational studies: case-control and cohort. In my short primer "IARC Diesel Exhaust and Lung Cancer: An Analysis" I outlined the difference: a case-control study (often retrospective) looks at people with a disease and people without it, and finds out what was different about their environment or behavior, while a cohort study (also called longitudinal or prospective) follows a group of people over time and logs what happens to them.
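To make the difference concrete, here is a minimal sketch in Python. Every count below is invented purely for illustration; the point is only that a case-control study summarizes its data looking backward (as an odds ratio of exposure among cases versus controls), while a cohort study summarizes its data looking forward (as a relative risk of disease among exposed versus unexposed).

```python
# Toy 2x2 numbers for both designs; every count here is invented.

# Case-control: start from disease status and look backward at exposure.
# The natural summary is the odds ratio.
cases_exposed, cases_unexposed = 80, 20        # people who have the disease
controls_exposed, controls_unexposed = 40, 60  # comparable people who don't

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"case-control odds ratio: {odds_ratio:.1f}")   # 6.0 with these counts

# Cohort: start from exposure status and follow forward to disease.
# The natural summary is the relative risk.
exposed_sick, exposed_total = 30, 1000
unexposed_sick, unexposed_total = 10, 1000

relative_risk = (exposed_sick / exposed_total) / (unexposed_sick / unexposed_total)
print(f"cohort relative risk: {relative_risk:.1f}")   # 3.0 with these counts
```

Same question, opposite direction: the case-control design starts from the disease and asks about past exposure, the cohort design starts from the exposure and waits to see what happens.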

You can imagine the first has a great deal of potential for recall bias, but it has been essential for discovering what harmed people in the past. If you know someone has lung cancer, the first thing you will wonder now is whether they smoked. That is because the link between smoking and lung cancer was not found in experiments on humans; it was shown using observational studies. Likewise, in the 1920s we learned a lot about risk factors for breast cancer using an observational (retrospective) study.

The harms of smoking were a giant home run for observational studies, and given such a spectacular success, it's no wonder that a government fixated on the imagery of Moonshots and Manhattan Projects for (insert whatever the government wants to tackle next) would believe even more large population studies will be valuable. However, we have found that throwing more money at every problem does not always make things better.(1)

Biases and bandwagons

At its core, epidemiological data couples biology with questionnaires. Casual science fans who get their knowledge through mainstream media are alarmed when they find out that those questionnaires can lead to sloppy papers afflicted by obvious bias, such as asking people to recall what they ate or what their environment was like a decade ago, and having scholars declare that a food or chemical "causes" cancer that way. Baby powder being linked to various cancers is a recent example in the news.

Yet sometimes observational studies are the only way, and as long as everything is on the up-and-up, they work well. However, things are not always on the up-and-up; unethical people and groups wrap themselves in the halo of experimental science credibility in order to promote observational work that is flawed at best, financially motivated at worst, or both. Books like "Grain Brain" and "Wheat Belly" are examples of financial motivation, while meta-analyses declaring organic food superior because they included papers in which organic shoppers said organic strawberries 'felt' better in their mouths were so methodologically flawed they were not only agenda-driven but worthless.

As more and more students want to stay in academia and grant money becomes harder to get, academic researchers are under ever more pressure to publish. But the methods needed to do the research can easily be abused. For example, epidemiological databases now exist that contain information on exposures to hundreds of chemicals. As a result, the number of possible associations between trace chemicals and disease has skyrocketed. It is now possible to find a study claiming that almost any food or trace chemical both causes and prevents cancer. (2)
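That is not a conspiracy so much as arithmetic. Here is a minimal simulation sketch in Python (the chemical count, group sizes, and disease rate are all invented for illustration) showing that if you screen hundreds of exposure-disease pairs where no true association exists, routine p < 0.05 testing still flags a steady stream of "significant" links:

```python
# Sketch of the multiple-comparisons problem: screen many chemical-disease
# pairs where NO true association exists, and chance alone still produces
# "significant" findings at p < 0.05. All parameters are hypothetical.
import math
import random

def two_prop_pvalue(sick_a, sick_b, n):
    """Two-sided p-value for equal disease rates (normal approximation)."""
    pooled = (sick_a + sick_b) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return 1.0
    z = abs(sick_a - sick_b) / n / se
    return math.erfc(z / math.sqrt(2))

random.seed(1)
n_chemicals = 500   # hypothetical exposures in a database
n_per_group = 500   # people per comparison arm
base_rate = 0.10    # true disease risk, identical for exposed and unexposed

significant = 0
for _ in range(n_chemicals):
    sick_exposed = sum(random.random() < base_rate for _ in range(n_per_group))
    sick_unexposed = sum(random.random() < base_rate for _ in range(n_per_group))
    if two_prop_pvalue(sick_exposed, sick_unexposed, n_per_group) < 0.05:
        significant += 1

print(f"{significant} of {n_chemicals} null associations tested 'significant'")
# Expect roughly 5 percent to clear the bar purely by chance.
```

Roughly five percent of pure-noise comparisons come out "significant," which is plenty of raw material for a provocative press release or a diet book.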

In his book "Getting Risk Right," ACSH Board of Scientific Advisors member Dr. Geoffrey Kabat talked about "biases and bandwagons." The bias comes from two sources: scholars who are determined to find a provocative association, and participants who are writing diaries, sometimes from memory. The bandwagons come from media fads: popular claims that red wine and resveratrol will protect your heart, wheat ruins your brain, high-fructose corn syrup causes diabetes, bacon causes cancer, and many more. Once those are published and get widespread attention, less-than-ethical people rush into those fields to exploit popular interest and write a diet book. Far too often, it works.

So are these large studies useful? Are they worth the expense?

Given that more than 10,000 epidemiological papers are produced each year, and every group with an agenda can find an observational study to match it, are they beneficial? If they are, why not do really big ones?

The federal government thought the same thing, so it tried a large one and failed. In the U.S., the National Children's Study was halted after it had already cost a whopping $1.3 billion. The Obama administration pulled the plug because, while the goal of following the health of 100,000 U.S. children from before birth to age 21 to learn about childhood diseases was laudable, it had clearly become a hyper-expensive jobs program by the time NIH Director Francis Collins canceled it. The $165 million it was going to cost per year could be better spent on smaller projects with achievable goals. Even 14 years into gathering data, the NCS still didn't have a protocol for what to do with it.

Yet some railed against its cancellation. We have developed a Big Science fetish, Moonshots and Manhattan Projects, even when the funding drain harms smaller, more realistic programs. I wrote about the cost overruns and delays of the James Webb Space Telescope, for example, and noted how every time it went over budget, smaller experiments were denied. And that article was written in 2010. Since the Webb telescope will (maybe) be finished next year, and may or may not work if it is, that means thousands of small programs never got done during 10 years of delays and increased costs. Nonetheless, the Webb program has received scant criticism from the science community both inside and outside aerospace. Insiders who were denied funding because Big Science sucked up the money do not want to go on the record for fear of blowback.

Maybe, as with the Large Hadron Collider being built by CERN in Europe after the Superconducting Super Collider was canceled, we should simply accept that Americans don't do Big Science very well anymore. Obviously we are the world leader in science: we lead the world in adult science literacy, we produce 30 percent of the science with 5 percent of the world's population, and we lead in Nobel prizes. But Big Science is another issue.

Childhood diseases are incredibly important, but an observational study that was going to cost $2.5 billion with no definable benefit shows that perhaps we can no longer do large projects at a reasonable cost. Meanwhile, Denmark, with its Danish National Birth Cohort (BSIG), and Norway, with its Norwegian Mother and Child Cohort Study (MoBa), have done well. Government controls their healthcare, so no one needs elaborate consent to be enrolled, and they have very detailed health registries. MoBa alone has information on over 114,000 kids, more than the US effort, at a tiny fraction of the cost, and the two programs have produced 800 papers. They are doing this well, without needing to invoke Moonshots and Manhattan Projects to justify the expense.

For that reason, it may be better to let those countries investigate birth defects and childhood diseases, such as cancers and autism, while the US spends its enormously larger science funding on lots of smaller projects.

NOTES:

(1) It may instead lead to junk science, especially if government starts to believe that observational studies should overrule toxicology and hard science, a worrisome trend at EPA last year.

(2)