Late last year, without much fanfare, the National Academies of Sciences, Engineering, and Medicine published a document entitled “Redesigning the Process for Establishing the Dietary Guidelines for Americans.” It was the kind of thing only a bureaucrat could love: more than 250 pages on roles, workflow, and analytic standards. Here’s a sample, chosen at random:
“Employing and modeling different standards of ‘typical consumption’—operationalized by composite nutrient profiles weighted to reflect population averages—are critical, as they help evaluate what the population’s average nutrient intake would be if they followed the recommendations under varying circumstances. The approach taken by [the committee] is in contrast with others that rely on especially nutrient-dense foods (such as salmon, apricots, or almonds), which might result in insufficient nutrient intakes when the patterns are put into practice with more typically consumed foods. However, the range of expected nutrient intakes, as well as the average, could be obtained if the variability in intakes were accounted for.”
You get the flavor of the thing: soporific.
But while some of us were dozing off, other people were getting worked up, including a researcher named Edward Archer, who coauthored an open letter to the Academy that labeled the report “extremely misleading,” stating that it “contained errors of fact and omission, and failed to address, or even acknowledge a large body of rigorous research that is explicitly contrary to the authors’ conclusions,” and calling for it to be retracted.
So what’s so wrong about the process that leads to the dietary guidelines? When you get down to it, just one thing. Unfortunately, it’s kind of the most important thing.
Let’s back up a moment. In my February 15th column, we looked at the many problems in nutritional research: The studies tend to be small and speculative; the effects of any given food or food component tend to be small; research designs are often faulty; and researcher bias is somewhere between rife and universal. All of this contributes to the likelihood that the conclusions of a lot of nutrition research, possibly even most nutrition research, are wrong.
What we didn’t talk about last time is the data problem.
You see, most nutrition researchers are forced to collect their data using a notoriously unreliable scientific instrument: the human brain. It’s hard and expensive to conduct rigorous nutritional experiments where you know through direct observation and measurement exactly what people are eating. Instead, most studies are conducted by asking people what they ate.
And that, says Edward Archer, is a huge problem. “Now, human memory has been demonstrated to be flawed for hundreds of years. Memory is not like a video recording. It’s a reconstructive process and every time you remember something you change it and more importantly you have other memories getting in the way of your current memory. And not only do we have mis-estimation and false memories, we also have lying. I have a paper under review right now demonstrating that about 60 percent of people will admit to lying about the foods that they eat.”
How far off are the data? Archer and a pair of collaborators published a paper in 2013 that attempted to put a number on it. They looked at almost four decades of results of the National Health and Nutrition Examination Survey (NHANES)—the big U.S. government database on diet—and compared what people said they ate with how much they’d need to eat simply to stay alive. Their findings: “Across the 39-year history of the NHANES, IE data [that is, energy intake data—how many calories were consumed] on the majority of respondents (67.3% of women and 58.7% of men) were not physiologically plausible.” That is, if people ate what they said they ate, they would have starved. In other words, more than half the data were demonstrably wrong. And that’s not even counting people who were over-reporting.
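To see what “not physiologically plausible” means in practice, here is a deliberately simplified sketch of that kind of check. It is not the method from the 2013 paper, which used more detailed equations; as a stand-in, it estimates basal metabolic rate (BMR) with the standard Mifflin-St Jeor formula and flags any reported intake that falls below it. The logic is the same: nobody survives for decades eating fewer calories than their body burns at rest.

```python
def mifflin_st_jeor_bmr(weight_kg, height_cm, age, sex):
    """Estimate basal metabolic rate in kcal/day (Mifflin-St Jeor)."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + (5 if sex == "male" else -161)

def is_plausible(reported_kcal, weight_kg, height_cm, age, sex, cutoff=1.0):
    """Flag a reported intake as plausible only if it is at least
    `cutoff` times the estimated BMR -- a deliberately lenient bar,
    since real people also move, digest, and stay warm."""
    bmr = mifflin_st_jeor_bmr(weight_kg, height_cm, age, sex)
    return reported_kcal >= cutoff * bmr

# A 40-year-old woman, 70 kg, 165 cm, reporting 1,100 kcal/day:
# her estimated BMR is about 1,370 kcal/day, so the report fails
# even this lenient test.
print(is_plausible(1100, 70, 165, 40, "female"))  # False
```

Even a crude screen like this, applied to survey data, catches reports that cannot be literally true; the published analysis simply did this more carefully, at national scale.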
Now, there’s wrong and there’s wrong. If you have an otherwise functional clock that just happens to be set ten minutes slow, it’s no problem to adjust the incorrect data it provides you at any given moment. The question is whether faulty food data can be similarly manipulated to make it useful.
And there, opinions seem to vary. Some researchers (these, for instance) argue that survey techniques have gotten better, that the amount of error is less than Archer and others have argued, and that with appropriate statistical techniques self-reported data can be useful. Others, like the Energy Balance Measurement Working Group, take a harder line: “For research, clinical practice, and policy determination,” they argue, “a great need exists for accurately determining EI and expenditure. It is time to recognize that even if methods are cheap and convenient, inaccurate scientific methods will lead to inaccurate conclusions. Interpretations based on inadequate measurement tools may cause misguided national and local health care policies, funding for and by major organizations, and health care advice to individuals. Like other antiquated methods, self-reported PAEE [physical activity energy expenditure] and EI need to be recognized as inaccurate measurements by the scientific and funding communities.”
Pay attention to that mention of the funding community. I mentioned that nutrition research suffers from conflicts of interest. Sometimes that means corporate sponsorship, of course. But almost always it involves preserving nutrition research itself. What happens if survey-based data suddenly become unacceptable? A lot of nutrition researchers will quickly discover it’s a lot harder to find funding, conduct studies, and publish the kinds of articles that provide tenure, job security, and prestige. And these endangered scholars are the peers who pass judgment on the articles that appear in peer-reviewed journals. They have a powerful incentive not to rock the boat.
You could look at food’s data problem and say that it means we should be a lot more circumspect about believing the latest bit of hype coming out of universities (especially if the university in question is Cornell and the researcher is working in the lab of Brian Wansink). But Archer goes considerably further. He doesn’t just say that most nutritional studies are questionable. He says they’re wrong.
Dietary cholesterol? There was never evidence that it caused harm—something that even the scientific committee of the Dietary Guidelines for Americans conceded in 2015.
Does consuming sugar cause diabetes? No, he says. (Though if you have diabetes, diet remains important.) Archer believes that many of the problems we associate with diet are actually the product of insufficient physical activity. But in the case of type 2 diabetes—or adult-onset—he thinks it’s actually passed on from generation to generation. (I can’t do justice here to the theory—“non-genetic evolution.” But you can read about it for yourself here.)
The Mediterranean diet? Reducing sodium? The evidence is actually against the Mediterranean diet, Archer says. And for many people, reducing sodium might actually do more harm than good.
(How did we travel down this path in the first place? That’s a question for another day. But if you’re impatient, Archer maps it out in great detail here.)
Let’s pause and take a deep breath.
OK, there are cranks out there. Could Archer be one? I suppose. He’s got that whole voice-of-one-crying-in-the-wilderness thing going on, which is worrisome. He was recently let go by the University of Alabama, where he worked as a nutrition researcher. (Don’t feel too sorry for him. He’s now chief science officer at a startup focused on using data to improve health.) But his articles are showing up in the right places, and even the people who disagree with him seem to treat him with respect. And he’s not the only scientist who thinks the way he does. For those of us who haven’t got the scientific expertise to sort things out for ourselves, it’s always hard to know what to think when faced with this sort of controversy. Here’s how I approach it:
First, I already knew I wasn’t satisfied with the quality of what I’ve been reading about food. The expert opinions swing about too wildly, and there’s too much out there that’s just stupid. (Did you read that the “most nutritious” food of all is the almond? In second place is the cherimoya, a tropical fruit. Pig fat is eighth. I’m sure that someone has a methodology that justifies the list. But in what world does it mean anything?) I’m getting to the point where I believe very, very little.
Second, the question about data is truly bothersome. I ask, “Would I lie on a food questionnaire?” and the answer is “almost certainly.” I’m pretty sure I have lied to my doctor about my diet, even though I think it’s a terrible idea. Fortunately, I’m not sure he actually listens to me. If there’s no data, I don’t want to hear the conclusions.
So I’m inclined to think that Archer may well be right. At least he’s given me a tool for chipping away at the quasi-scientific garbage that I encounter on a daily basis. And if there’s no evidence of connections between specific foods and diets and diseases, does that mean that we just haven’t proven them yet, or that there aren’t any to be discovered?
At this point, you may have written me off as being a crank myself. Fair enough, I suppose. But stick around for one last question: If it is in fact true, as it might be, that diet has little or no impact on diseases—if that becomes the accepted wisdom—what happens to the “good food” movement? Would it be a disaster, or maybe the beginning of a truer, kinder approach to the vexing problems we all face?
That’s what we’ll talk about next time. See you then.