Origin of the p-value

The story of the p-value used in statistics starts with William Sealy Gosset, better known under the pseudonym Student. Gosset worked at the Guinness brewery and was tasked with maintaining quality by assessing the appearance and fragrance of the plants used in the brewery for flavoring. To do this, he had to understand how representative a small sample actually is. In other words: what do the results from a sample size of 1 or 2 say about a population of millions of samples? In statistical terms, you could answer the question by calculating the error distribution of the mean and comparing it between small and large sample sizes.

In other words…

Gosset would collect a few samples and average the results. Yet the sample average never exactly matched the result in larger samples; there was always a certain error. He wanted to know how the sample size affects that error. For example, he wanted to know how many sample results one needs in order to make good predictions about the results of a million samples.

In more practical language…

Gosset needed to determine the quality of beer. Let's say the company produced 100 bottles of beer. Gosset could not test all the bottles, so he wanted to know the minimum number of bottles one should test for quality in order to make accurate predictions about the quality of all 100 bottles.

One could ask the question: what is the chance that all 100 bottles are perfect when only 5 tested bottles turn out to be perfect? Gosset turned the question around: how many bottles that turn out to be perfect should I test before I can say with a certainty of x% that all 100 bottles are perfect? The term perfect refers to the degrees saccharine of the malt extract, which had to be within 0.5 degrees of the targeted 133 degrees.

Gosset calculated the accuracy of estimates from different sample sizes and tabulated the results. We now call this the t-table, and the method is named Student's t-test. He found that with only 4 samples, he could get within the 0.5 degrees 92% of the time.
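To make this concrete, here is a minimal sketch in Python of how Student's t-distribution turns a handful of measurements into an interval estimate for the mean. The four measurements are invented for illustration; they are not Gosset's data.

```python
# Sketch of Gosset's problem with made-up numbers: estimate the mean
# saccharine level from only 4 measurements and quantify the
# uncertainty with the Student t-distribution.
import numpy as np
from scipy import stats

samples = np.array([132.7, 133.2, 132.9, 133.4])  # hypothetical degrees saccharine
n = len(samples)
mean = samples.mean()
sem = samples.std(ddof=1) / np.sqrt(n)  # standard error of the mean

# 95% confidence interval using the t-distribution with n-1 degrees of freedom
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

With these invented numbers the interval happens to have a half-width of about 0.5 degrees, which is exactly the kind of guarantee Gosset was after with small samples.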

The Student pseudonym

Gosset was not allowed to publish under his own name, since other breweries would then learn how efficiently Guinness tested for quality. However, he was allowed to publish under a pseudonym, namely Student. The publication of his work went largely unnoticed. Some people, though, among whom R.A. Fisher, thought Gosset's ideas were brilliant and could be used to determine whether results between two groups were statistically significant. In 1925, Fisher published an important work [1]. If a certain result has a probability of less than 5% of occurring by chance, it is said to be statistically significant. The choice of 5%, or 0.05, as the threshold for the p-value was arbitrary and became quite controversial. Medical, economic and psychology journals have used the p-value in research ever since.
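Fisher's procedure is easy to demonstrate. Below is a small sketch in Python with two invented groups compared using Student's t-test, holding the resulting p-value against the conventional 0.05 threshold:

```python
# Sketch: compare two invented groups with Student's t-test and
# check the p-value against Fisher's conventional 0.05 threshold.
from scipy import stats

group_a = [5.1, 4.9, 5.6, 5.2, 5.8, 5.0]
group_b = [4.2, 4.5, 4.1, 4.8, 4.3, 4.6]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("statistically significant" if p_value < 0.05 else "not significant")
```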

Crisis in science

Lately, however, there has been a crisis in science [2]. Not only because of so-called p-hacking, but also because of low power, publication bias, lack of transparency, and so on, all of which make reproducibility of studies a major difficulty. According to several studies in medicine, psychology and economics, more than half of published studies are not reproducible [3][4].

With these issues in mind, a threshold of 0.005 has been proposed. This value, however, is just as arbitrary. Perhaps we should follow Gosset's methodology. He was a man who strove to be rational in his work. He did not want to prescribe how research ought to be done or create criteria for quality assessment. No, he was interested in solving problems.

P-values have no significance when they do not solve problems. Significant p-values have no value when they do not solve problems. 

 

Articles used:

Researchers want to redefine the threshold for scientific discovery from 0.05 to 0.005
The Guinness brewer who revolutionized statistics

Consumption of Fruit

Consumption of fruit and vegetables prevents major diseases

Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality–a systematic review and dose-response meta-analysis of prospective studies

 

Fruit and vegetables are important components of a healthy diet. Sufficient daily consumption could help prevent and reduce the risk of major diseases, such as cardiovascular disease and cancer. Approximately 1.7 million deaths worldwide (2.8%) are attributable to low fruit and vegetable consumption [1]. A recently published WHO report recommends a minimum of 400 g of fruit and vegetables per day (excluding potatoes and other edible starchy tubers such as cassava) for the prevention of chronic diseases such as heart disease, cancer, diabetes and obesity [2]. The authors of the featured study conducted a systematic review and meta-analysis to clarify the strength and shape of the dose-response relationship between fruit and vegetable intake and the risk of cardiovascular disease, cancer and mortality. They also examined which specific types of fruit and vegetables are most strongly associated with a reduced risk of cardiovascular disease, total cancer or all-cause mortality, and estimated the burden of disease and mortality that may be attributed to a low fruit and vegetable intake.

A total of 142 publications from 95 unique cohort studies were included in the analyses. The authors performed a systematic review and meta-analysis of published prospective studies relating fruit and vegetable consumption to the risk of incidence of, or mortality from, coronary heart disease, stroke, total cardiovascular disease and total cancer, and to all-cause mortality. They specifically aimed to clarify the strength and shape of the dose-response relationship for these associations and whether specific types of fruit and vegetables were associated with risk.

The researchers observed a reduction in the risk of cardiovascular disease and all-cause mortality up to an intake of 800 g/day of fruit and vegetables. For total cancer, no further reductions in risk were observed above 600 g/day. Inverse associations were observed between the intake of apples/pears, green leafy vegetables/salads, citrus fruits and cruciferous vegetables [3] and cardiovascular disease and mortality, as well as between green-yellow vegetables and cruciferous vegetables and total cancer risk. On the other hand, the intake of tinned fruit was associated with an increased risk of cardiovascular disease and all-cause mortality.

The data regarding fruit and vegetable intake and cancer risk are less clear, but a modest association between fruit and vegetable intake, or specific subtypes of fruit and vegetables, and total cancer risk cannot yet be excluded [4]. The included studies have been inconsistent: some found no clear association, whereas others reported inverse associations. However, some of the studies that found no clear association may have had statistical power too low to detect a modest one. Cohort studies have been more consistent in finding an inverse association between fruit and vegetable intake and the risk of coronary heart disease and stroke than for cancer, as shown in previous meta-analyses and in several additional studies published since; some of these cohort studies, too, may have been underpowered.

Combining studies from different populations increases the sample size and statistical power, but also introduces heterogeneity because of differences in the characteristics of the study populations. The results may also have been influenced by measurement errors in the assessment of fruit and vegetable intake, and serving sizes differed between studies. A high intake of fruit and vegetables is often associated with other lifestyle factors, such as a lower prevalence of smoking, less overweight and obesity, higher physical activity and lower intakes of alcohol and of red and processed meat, which could have confounded the observed associations. Many studies adjusted for confounding factors, but there was little evidence that the results varied substantially depending on whether adjustment for most of these confounders was done. It is also possible that persons with a high fruit and vegetable intake are more likely to undergo screening or to have better access to, or compliance with, treatment. This could improve survival and bias the results for mortality.
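To illustrate the kind of pooling described above, here is a minimal sketch of fixed-effect inverse-variance pooling of relative risks on the log scale. The per-study numbers are invented, and the actual paper uses more elaborate dose-response and random-effects models; this only shows the basic mechanics of combining studies.

```python
# Sketch of inverse-variance pooling of relative risks from several
# studies (invented numbers). Work on the log scale, weight each study
# by the inverse of its variance, then exponentiate back.
import numpy as np

# hypothetical per-study relative risks with 95% confidence intervals
rr = np.array([0.90, 0.85, 0.95, 0.80])
ci_low = np.array([0.80, 0.70, 0.85, 0.65])
ci_high = np.array([1.01, 1.03, 1.06, 0.98])

log_rr = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from the CI width
weights = 1 / se**2

pooled_log = np.sum(weights * log_rr) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(f"pooled RR = {np.exp(pooled_log):.2f} "
      f"(95% CI {np.exp(pooled_log - 1.96 * pooled_se):.2f}"
      f"-{np.exp(pooled_log + 1.96 * pooled_se):.2f})")
```

Pooling shrinks the confidence interval, which is exactly why combining studies increases power; the price, as noted above, is heterogeneity between the populations being combined.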

Further studies are needed, because of the low number of studies on subtypes of fruit and vegetables and the potential for selective reporting and publication of subtypes that are significantly associated with risk. Moreover, studies are needed to clarify the association between fruit and vegetable intake and specific causes of death other than cancer and cardiovascular disease.

Cohort study

Cohort studies

Suppose we want to know whether an exposure causes a disease; say, whether smoking causes adenocarcinoma of the lung. We could create a randomized controlled trial: collect 1,000 people and divide them randomly into two groups of 500, group A and group B. Group A has to start smoking and group B shall not smoke. That is not possible, since you cannot force people to smoke if they don't want to; it would be unethical. However, we can use a different study design: the cohort study. In this design we collect x number of smokers and x number of non-smokers and assign them to two groups: one that is exposed (to smoking) and one that is not. We follow them over time and measure information such as death, disease incidence or number of hospitalizations. The most important information we measure is our primary outcome. Studies usually also measure secondary outcomes, but these are of lesser importance.

Benefits & limitations

In short, a cohort study is one where you have an exposed group versus a non-exposed group and you follow them to see whether the exposure has any relation to the primary outcome. We can do this prospectively, meaning participants have not yet developed a disease or died, but have been exposed. We can also work retrospectively, meaning a group of people has died and we examine whether or not they were exposed to the exposure we are interested in. For example, 1,000 people who worked in an asbestos factory have died and you go back in time to look for an exposure difference; you find out that 900 of them were exposed to asbestos while 100 were not. The key distinction of retrospective cohort research is that the researcher goes back in time to find out what might be associated with an outcome.
To repeat in other words: in a cohort study, participants are placed into two groups and followed over a period of time, and we measure the outcomes that interest us: how many people died, how many people got a disease? We then try to correlate those outcomes with the factors or exposures the participants were exposed to. But how do we know the disease they developed is due to the specific exposure and not due to differences in age, sex, income, education level, and so on? For example, the group of smokers might have less money than the non-smokers and therefore visit the GP less. Or the non-smokers could be around 70 years old while the smokers are around 40. The researcher tries to correct for these confounders: to cancel them out or, even better, to compare groups that have everything in common but differ only in the exposure or risk factor. That would be ideal, but in real life it is almost impossible to find such groups. Thus, one of the biggest limitations of cohort studies is assessing whether associations between a group and risk factors are causal. In other words: how sure can we be that exposure X really causes disease Z without any other factor playing a role, such as age, sex, where one lives, the food one eats, and so on?
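As a rough illustration of what "correcting for a confounder" can look like (not the method of any particular study), the following sketch simulates a cohort in which age confounds an exposure-outcome association and adjusts for it with logistic regression:

```python
# Sketch: adjusting an exposure-outcome association for a confounder
# (age) with logistic regression on simulated cohort data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(50, 10, n)          # simulated ages
exposed = rng.binomial(1, 0.3, n)    # simulated exposure status
# simulate disease risk that depends on both exposure and age
logit = -6 + 0.7 * exposed + 0.06 * age
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([exposed, age]))
model = sm.Logit(disease, X).fit(disp=0)
print("age-adjusted odds ratio for exposure:",
      np.exp(model.params[1]).round(2))
```

Stratified analysis and proportional-hazards models are common alternatives; all of them can only adjust for confounders that were actually measured.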

To effectively correct for these confounders and find significant results, a cohort study usually takes a long time: adenocarcinoma of the lung, for instance, takes years to develop after someone starts smoking. This long duration causes another limitation: the condition of people changes. Some in the smoking group decide to stop smoking, others die, others move to another country and discontinue their involvement in the research. What about social factors? A society might evolve to look down on smokers, who might then underreport their usage. Conversely, smoking might come to be considered cool, and some in the non-smoker group start to smoke. There are many variables that can change, and they make the results hard to interpret. Researchers then want to select people who do not move or who stay committed, and this leads to selection bias.

History

The word cohort comes from the Latin cohors, meaning a group of warriors proceeding together in time. The term cohort study is attributed to Frost, who studied tuberculosis at the beginning of the 20th century. In the 1960s the Dutch scientist Korteweg used this method to analyse the incidence of lung cancer in the Netherlands. This does not mean that no cohort studies were performed before the 1930s; they just had a different name, such as longitudinal, follow-up or simply prospective studies. In the late 1800s, a need arose for data on health in order to make effective policy. Not only policymakers required data, but insurance companies as well; they recorded, for example, the number of deaths for specific occupations. In the 1950s, several landmark studies were started that helped us gain insight into risk factors for certain outcomes, and they continue to this day. One example is the Framingham study, which follows an entire town in the US to find out what the risk factors for cardiovascular disease are. Closer to home, we have the Generation R study of the Erasmus MC, which follows children over a period of time.

A lot of our knowledge in medicine (cancer and radiation, asbestos and mesothelioma, high blood pressure and heart attack) comes from cohort studies. Cohort studies express associations as relative risks: for example, high cholesterol gives a 4x greater risk of a heart attack compared to low cholesterol. We have to note that cohorts do not prove anything in the strict scientific sense; they only express risks or probabilities of association. High cholesterol is associated with heart attack, but not proven to cause it in the strict scientific sense. But what if researchers take large groups of people and examine them? What if not only a researcher in the USA, but also one in India, Thailand, Sydney and Greece examines the same outcome for the same exposure, and they reach the same conclusion with (approximately) the same relative risks? Or what if 90% of the smokers develop lung carcinoma and only 5% of the non-smokers? Is a consistent relative risk of 20 times the risk of developing a certain outcome enough proof?
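Those relative risks come straight from a cohort's 2x2 table. A minimal sketch with invented counts that reproduce the 4x cholesterol example:

```python
# Sketch: relative risk from a cohort's 2x2 table (invented counts).
exposed_cases, exposed_total = 120, 1000      # e.g. high cholesterol
unexposed_cases, unexposed_total = 30, 1000   # e.g. low cholesterol

risk_exposed = exposed_cases / exposed_total       # 0.12
risk_unexposed = unexposed_cases / unexposed_total # 0.03
relative_risk = risk_exposed / risk_unexposed
print(f"RR = {relative_risk:.1f}")  # 4.0: the exposed group has 4x the risk
```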

What is meant by the strict scientific sense are the postulates of Koch, from before the cohort-study era. Koch formulated criteria to establish a hard causative relationship between a disease and a microbe, since at the end of the 19th century people were mostly concerned with communicable diseases (diseases that spread, caused by viruses and bacteria). Koch stated, for example, that a microbe that causes a disease must be present in all sick individuals and not in healthy ones, and that upon introduction of the microbe into a healthy individual, that person would also become sick. These postulates could not really be used for risk factors. With them, you could not prove that smoking causes lung cancer, because individuals who never smoked developed the same lung cancer too. Therefore, with the emergence of non-communicable (chronic) diseases, the need for a different type of 'proving' arose.

Over the years, scientists have used different analytical models to improve cohort studies and to make predictions based on regression and proportional-hazards models. Results are accompanied by many analyses, p-values and confidence intervals, which we will discuss in other articles.

Quality of Trials

Quality of Trials

There are many studies that conflict with each other in the conclusions that can be drawn from them. One study might say that drug X is good for everyone, while another might say drug X is actually very bad for your health. The essential question is how we can rate an article. Conflicting studies are not the only issue: many articles are simply wrong. They are not reproducible, and when they are, the results are not. Research has been flawed and will continue to be flawed. The reasons why, and how to investigate this, will be discussed in another post. In this post, we will learn how to discern well-executed research from poorly executed research.

When reading an article, you need to know whether the results could be due to chance and what the biases and limitations are, because there always are some. We also need to make a judgement about validity by asking questions such as: when research is done on a population in Sweden, can you apply the results to a population in the Netherlands?

A study is built on a structure of introduction, methods, results and discussion. All of these are essential, but the methodology and results form the core of the research. In general, a study answers the following questions through its different sections:

  1. What was the motive behind the research?
  2. How did we perform the research?
  3. What did we find in the experiment?
  4. Why did we find these results and what biases/limitations did we experience?

As a general rule, we assume that all conclusions of an article are wrong unless they are reproduced. This way, we can be super critical. Let’s start with important aspects that a trial needs to have, starting from the beginning.

The title should be concise, describe in one sentence what the research is about and do so in an objective manner.
The introduction of a well-written article describes why the research is being done in the first place. Why now? And how is this trial different from those that have preceded it? You have to have a good reason to do a trial. Good-quality studies use the PICO model, which stands for patients, intervention, control and (primary) outcome. Studies should describe the study subjects: who are they and where do they come from? What intervention was used: a drug, a procedure or something else? Who is the control group and how are they 'controlled'? Finally, what do we want to know? A primary outcome could, for example, be mortality after 3 years. Mortality is a fairly objective parameter to measure: we have a good, objective definition of death. In contrast, happiness is not (yet) an objective measurement. A PICO description could look like this: we tested 60 male patients between 30 and 40 years old, who were divided randomly into an intervention group receiving drug X and a control group receiving placebo, and we want to measure the mortality after 3 years due to this intervention.

Concerning the methodology of a study, the most important factor is whether the study is reproducible. This does not necessarily mean getting the same results, but whether you can reproduce the conditions in which the trial took place. If researchers used substance X and don't specify what that entails, other researchers cannot reproduce the study. Another important aspect is how generalizable the study is. This connects to the question asked earlier about (external) validity: if an intervention proves to work in athletes, what makes you so sure it will also work in the general population? The third important factor is the power of a study, which relates to sample size: if an intervention really works, what is the probability that the study will detect a difference between the intervention and control groups? We usually want this to be 80%, and we can achieve that by adjusting the sample size.
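A sketch of such a power calculation, using statsmodels and a hypothetical standardized effect size of 0.5:

```python
# Sketch: solve for the sample size per group that gives 80% power
# to detect a hypothetical standardized effect size of 0.5
# at a significance level of 0.05 (two-sample t-test).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"required sample size per group: {n_per_group:.0f}")  # ~64
```

Note how sensitive this is to the assumed effect size: halving it to 0.25 roughly quadruples the required sample size.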

Of course, other methodological aspects matter too. Is a control population set up? Without one, an intervention cannot be compared against anything. Is randomization applied? Randomization ensures that a subject in the intervention group was not hand-picked to be put there, but was chosen randomly. A researcher might otherwise hand-pick a healthy person to undergo a surgical intervention, knowing that his chances of survival are better. That is why we need randomization. Even better is double-blind randomization, where neither the researcher nor the subject knows to which group the subject belongs; a computer decides randomly. Note that randomization is sometimes not possible or not ethical.
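Random assignment itself is simple; in practice a computer does something like this (hypothetical subject labels):

```python
# Sketch: randomly assign 60 hypothetical subjects to two equal groups,
# so neither researcher preference nor patient traits decide the group.
import random

subjects = [f"subject_{i:02d}" for i in range(60)]
random.shuffle(subjects)
intervention, control = subjects[:30], subjects[30:]
print(len(intervention), len(control))  # 30 30
```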

Other important factors, which we will not deal with here, concern how the subjects are analysed: on an intention-to-treat basis or per-protocol? Both models can be used to play with the effects of randomization.
In the results, the most important aspect to look for is the drop-out rate. How many people did not continue with the study? This affects the study's power. It also should not be that 50% of the people in the intervention group dropped out against only 20% in the control group, with the researcher still making claims of significance. When people drop out, it may mean that only the better or healthier patients remained. So what happened to the patients who no longer come to the trial center? Are they too ill to make it? That is why the drop-out rate is important to consider. Of less, but still real, importance: figures and charts should be interpretable without reading the text; they should be self-explanatory.

In the discussion, we need to focus on why and how the results of this trial are different from or similar to other trials. Every study has limitations and biases: has the author described them and taken measures to correct for them?

 

This list is not comprehensive, but it provides a grip for discerning articles. Future posts will focus on the pyramid of trial structures (epidemiological studies, cohorts, RCTs, case reviews, etc.), p-values and confidence intervals, and the reasons why articles might be flawed.

Prevention of coronary heart disease in women

Primary Prevention of Coronary Heart Disease in Women through Diet and Lifestyle

Summary

Despite drastic declines, coronary heart disease remains the number one cause of death worldwide. A lot is already known about how coronary heart disease develops; however, the research has mostly been done in men, even though studies show that heart disease is an important cause of death in women as well. Furthermore, it is clear that women show atypical symptoms of coronary disease, and these atypical symptoms can delay the recognition of heart disease. As heart disease is one of the main killers, prevention methods should be implemented to improve the overall health of the population. This study assesses the effect of a combination of lifestyle practices on the risk of coronary heart disease in women. It determines the proportion of coronary events that could potentially be prevented by following a set of dietary and behavioural guidelines, and it also evaluates the effects of these practices on the risk of stroke.

How is the research carried out?

The population consists of female nurses from the Nurses' Health Study, which was established in 1976. The nurses, aged 30 to 55 years, provided detailed information by questionnaire, and every two years a follow-up questionnaire was sent to update the information on potential risk factors and to identify newly diagnosed cases of various diseases.

What does the research tell us?

Women who were classified in the low-risk group (which made up only 3% of the population) had a 5.8 times lower risk of coronary events compared to all other women. The low-risk group consisted of women who had stopped smoking or never smoked; who had moderate alcohol consumption; who engaged in at least half an hour per day of vigorous or moderate physical activity; who had a BMI (body-mass index) of less than 25; and who followed a diet low in trans fat and glycaemic load, high in cereal fibre, high in marine n-3 fatty acids and folate, and with a high ratio of polyunsaturated to saturated fat. 82% of the coronary events in the study group could be attributed to lack of adherence to this low-risk pattern.
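The 82% figure follows from the standard population-attributable-fraction formula applied to the paper's numbers: 97% of women not adhering to the low-risk pattern, with a relative risk of 5.8. A short worked sketch:

```python
# Sketch: reproduce the 82% attributable fraction from the paper's
# numbers. Treat "not adhering to the low-risk pattern" as the
# exposure: 97% of the population, with relative risk 5.8.
p_exposed = 0.97
rr = 5.8

# population attributable fraction: Pe*(RR-1) / (Pe*(RR-1) + 1)
paf = p_exposed * (rr - 1) / (p_exposed * (rr - 1) + 1)
print(f"attributable fraction: {paf:.0%}")  # ~82%
```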

What is to be done?

Among women, adherence to lifestyle guidelines involving diet, exercise and abstinence from smoking is associated with a very low risk of coronary heart disease. However, a large part of the population does not follow these guidelines. Primary prevention should focus on lifestyle factors (smoking, diet, physical activity) in the total population to reduce the incidence of coronary disease.

Source: N Engl J Med 2000; 343:16-22

Lifestyle intervention in diabetes

Intensive Lifestyle Intervention in Type 2 Diabetes

Summary

Weight loss is recommended for overweight or obese patients with type 2 diabetes on the basis of short-term studies, but long-term effects on cardiovascular disease remain unknown. This study examined whether an intensive lifestyle intervention for weight loss would decrease cardiovascular morbidity and mortality among such patients.

In 16 study centers in the United States, researchers randomly assigned approximately 5,000 overweight or obese patients with type 2 diabetes either to participate in an intensive lifestyle program that promoted weight loss through decreased caloric intake and increased physical activity [1] or to receive diabetes support and education [2]. The researchers then followed both groups for a maximum of 13.5 years to compare the number of deaths from cardiovascular causes, nonfatal myocardial infarctions [3], nonfatal strokes and hospitalizations for angina.

Naturally, weight loss was greater in the intervention group than in the control group throughout the study. The intensive lifestyle intervention also produced greater reductions in glycated hemoglobin [4] and greater initial improvements in fitness and in all cardiovascular risk factors except low-density-lipoprotein cholesterol levels.

What do we want to know?

Would an intensive lifestyle intervention designed to achieve weight loss through caloric restriction and increased physical activity decrease cardiovascular morbidity and mortality among overweight or obese adults with type 2 diabetes?

What is the research about?

Researchers took overweight or obese adults with type 2 diabetes and investigated whether those people have decreased cardiovascular morbidity and mortality if they undergo an intensive lifestyle intervention (designed to achieve weight loss through caloric restriction and increased physical activity).

What does the article tell us and what does this mean?

The study was published in 2013 and showed that, among overweight or obese patients with type 2 diabetes, an intensive lifestyle intervention did not reduce the risk of cardiovascular morbidity or mortality as compared with a control program of diabetes support and education.

 

Source: N Engl J Med 2013; 369:145-154

 

 

The rise of the low-fat diet

The rise of the low-fat diet

In the 1980s and 1990s, American [1] medical culture was dominated by the idea that a low-fat diet was the key to healthy eating. Not only science promoted this idea; government, the food industry and the media played important roles too. How did this ideology of low fat conquer America?

The author uses four areas to answer this question:

  • The American tradition of low-fat low-calorie diets for weight reduction
  • The diet-heart hypothesis dating from the post-WW II era
  • The politics of food and low fat
  • The promotion of low fat by the popular health media


The American tradition of low-fat low-calorie diets for weight reduction
Peter Stearns [2] showed that dieting culture started early in the 20th century. American women wanted to lose weight, especially middle- and upper-middle-class white women. The impetus was aesthetic rather than health: women wanted to look good in the new, fashionable and more revealing clothing of the 1920s. By the early 20th century it was known that 1 g of fat contains 9 calories, whereas 1 g of carbohydrates or protein contains 4 calories; a low calorie intake therefore meant a low fat intake. Women started to weigh themselves, and the prevailing idea was that your weight should stay constant over the years: a 40-year-old woman should weigh the same as when she was 18 or 25. This notion of weight stability is contested now; according to new research [3], being slightly overweight is health-promoting, especially in elderly people.

The main point, the author argues, is that the low-fat diet for weight reduction started off for aesthetic reasons and only in the 1950s became popularly promoted by physicians for cardiovascular health. Even though middle-class Americans were familiar with the low-fat concept, they did not like it, and this was well before the rise of the fast-food market in the 1950s. Middle-class Americans consumed a lot of meat, and little emphasis was placed on fruits and vegetables. Anne Barone [4], a lifestyle author, recounts the 1950s in Texas: "The only sanctioned pleasurable activity was eating". These American values stem from consumerism: "bigger is better" and "quantity over quality". A supermarket would carry more desserts than fresh fruit. Only in the late 1950s, due to medical and technological research advances, did experts advise a low-fat diet to those with risk factors, and by the 1980s to every American. The low-fat diet became the norm; it was widely accepted that all Americans would benefit from it.

The diet-heart hypothesis dating from the post-WW II era
In the 1940s, coronary heart disease was the leading cause of death in the USA. This triggered a host of studies [4] to identify the causes of heart disease. These studies formulated what became known as the diet-heart hypothesis, which holds that diets high in saturated fats and cholesterol are a major cause of coronary heart disease. Several scientists [5] and the American Heart Association made the modest proposal of subscribing to low-fat diets. In 1977, the American Senate issued a report urging Americans to eat healthier and avoid high-fat foods. At the same time, economic forces were trying to sabotage this effort. By 1984, scientists supported the idea that healthy food not only prevented heart disease but also promoted weight loss. Thus a healthy diet was not only for those at risk, but for everyone except babies. The diet-heart hypothesis won support in federal public policy, among health-care practitioners and in the popular media, despite never having been proven true. Fat was blamed for overweight, obesity and coronary heart disease. Later reports by the Surgeon General emphasized the health dangers of dietary fat, and in 2000 it was labeled [6] the unhealthiest part of the American diet.

The politics of food and low fat
The Senate's 1977 report told people to eat low fat. It recommended eating more fruits, vegetables, poultry and fish, and less meat, eggs and high-fat foods. Under pressure, especially from the food industry, the report's recommendations were revised; for example, 'reduce meat consumption' became 'choose meats and fish that will reduce saturated fats'. Other institutions followed, and year after year the policies became clearer despite protests from the food industry, and a consensus among scientists was forged. By the end of the century, both the Surgeon General and the WHO were promoting low fat. The food industry changed course and spotted profit-making opportunities. By the 1990s, the industry had replaced fat with sugar in processed foods, and these took over the supermarket shelves; low-fat foods had just as many calories as the former high-fat versions [7]. In 1992, the food pyramid [8] was released and gained wide publicity, promoting low fat. The AHA [9] launched its own low-fat campaign by selling approval seals for food products. Fresh food was not labeled, and consumers came to think processed foods were healthier. At the end of this section, the author asks rhetorically whether low fat is the only determinant of healthy food, since processed products filled with sugar qualified for the AHA healthy-heart seal. These products, ironically, promoted the fattening of America.

The promotion of low fat by the popular health media
Cholesterol and fat had been vilified by the 1980s through scientific reports and federal campaigns. The magazine Prevention gained popularity with articles promoting healthy dieting and generated revenue from low-fat processed-food advertisements. Another popular media platform, the New York Times, spread the message too. In the late 1980s, cholesterol took center stage: scientific studies suggested that cholesterol was very bad. Yet there was no proof that low-fat dieting reduced heart disease; the New York Times promoted it on the basis of 'an accumulation of indirect evidence'. Studies by Dr. Ornish [10] suggested that a low-fat diet could not only prevent heart disease but also reverse it. This led to the idea that if low fat is good, then no fat is better. Scientists wondered about general applicability: should low-fat dieting be recommended for every American, since by then it was known that the body produces its own cholesterol too? Also, studies until then had examined middle-aged men; scientists had not studied women or the elderly. Yet the consensus was that cholesterol should be lowered and that low-fat dieting should apply to every American. With this consensus in place, and the idea that no fat is better than low fat, Prevention magazine filled its pages with nonfat processed-food advertisements. Weight loss was the hype of 1992, and people focused more on fat than on calories. Editor Bricklin praised Snackwell's nonfat cookies; two years later, he would write about the Snackwell phenomenon, namely that the nonfat products in fact contributed to the rise of obesity. Yet fat-free ads continued to dominate the public mind, and Americans thought they could eat as much as they liked of anything low in fat. With the advancement of science, researchers found that men and women react differently to heart disease. The set-point theory also gained ground: the idea that each person's weight has a stable set point which is difficult, if not impossible, to change. In the late 1990s, Jane Brody [11] explained that dieting needs an individualized approach, suggesting that a one-size-fits-all model might not be the most effective; a change from her earlier stance on the matter.

Two important challenges to the low-fat diet emerged. The first was the development of statins and their proven therapeutic use in lowering cholesterol, suggesting drugs are more effective than dieting and thus challenging the hegemony of the low-fat diet. The second was scientific dissent. Already in the early 1950s, researchers [12] claimed it was not about the amount of fat, but the kind of fat one consumes. This skepticism re-emerged in the 1990s. Dr. Willett [13] noted that public health officials had been very dogmatic about the low-fat diet's efficacy and explained that there was no proof the diet worked. Other researchers noted that the evidence was not as clear as one would want, but that the strategy of low-fat consumption was, overall, a laudable goal [14]. Dr. Willett also noted that substituting carbohydrates for fats causes the body to reduce HDL [15] while raising triglycerides, thought to be bad fats. The skeptics recommended the Mediterranean diet: high in monounsaturated fats, low in saturated fats. The idea of the kind of fat was taking the place of the idea of the amount of fat. Another popular idea was that only calories count.

Since the 1970s, America had gotten fatter: obesity rose by 50%, and people were confused. The turn of the millennium brought new problems; sugar and stress came to center stage, and neither the public nor scientists agreed on the best method to prevent obesity. With no scientific consensus, the general advice became to consume in moderation. There is no one-size-fits-all advice: diets differ between genders, ages, countries, races and cultures. By now it is clear that the low-fat campaign had unintended consequences and made people believe that they could eat anything in any quantity as long as it was low in fat.

Conclusion

So how did the low-fat idea conquer America? A culture of low-fat dieting for aesthetic reasons was already forming. When science endorsed the health benefits, people were at first reluctant to follow, but a tremendous faith in science as the ultimate truth reached the political scene. The state promoted low-fat dieting, and the food industry played into it. The popular media, carrying this same faith in science, preached the low-fat message without regard for the voices of skeptics such as Robert Atkins, whose diet turned out to be effective. Not only were the skeptics' ideas disregarded; they were vigorously attacked.

Can we really say that the low-fat campaign was not effective? Epidemiologists note a paradox: mortality due to heart disease fell enormously between 1950 and 1999, yet the incidence of heart disease remained the same. A study from 1998 [16] suggested that the reduced mortality was due to better medical and surgical interventions and secondary prevention, rather than primary prevention.

And the low-fat diet, does it work? Not really [17]. As of now, but maybe not in the future, the so-called Mediterranean diet is favored.

 

This article is an excerpt of a summary of La Berge's "How the Ideology of Low Fat Conquered America".
J Hist Med Allied Sci (2008) 63 (2): 139-177