Wayne Hsiung

Published on: December 12, 2014

Why the HRC study of former vegetarians is wrong


What do gunshots and retroviruses have to do with a recent study on vegan messaging?

The Humane Research Council released a study a few days ago claiming to show, among other things, that meat reduction is a more effective strategy than veganism. The first and most important point to make is that this is an example of what social scientists call a "reductionist" approach. Looking through the lens of individual consumers, as the brilliant sociologist Duncan Watts has instructed us, is like trying to understand forest fires by examining individual sparks or individual trees. The properties of the ecosystem as a whole -- in particular, the existence of highly flammable and connected kindling -- are what cause forest fires, so looking at the characteristics of individual components of the ecosystem simply won't provide any answers. Similarly, the changes activists are seeking to cause involve interactions among many individuals -- mediated by network variables such as social norms and legal rules -- and looking at individual motivations will simply lead us down the wrong path. The best research on systemic change, in turn, shows that, instead of focusing on individual-level vegan outcomes, we should be trying to build empowered networks of activists. We have to, in short, build up the kindling.

But there are also two very important technical limitations that should make us hesitant to give any weight to the conclusions drawn from the study. I'll try to describe both problems in layman's terms.

1. The study is "sampling on the dependent variable."

HRC claims to test how various messages (i.e. various "treatments") affect the success of veg*n advocacy. But, importantly, they only look at people who were successfully converted, rather than all people exposed to a message. This is what scientists call "sampling on the dependent variable," and it statistically distorts the study in a fundamental way. 

A comparison may be helpful here. Say, for example, I were comparing the lethality of AIDS (a slow-progressing disease), on the one hand, and gunshots (a fast-progressing "disease"), on the other, but I only looked at people who died or nearly died as a result of each "treatment" rather than the entire universe of people who were exposed to AIDS or a gunshot. I might erroneously conclude that AIDS is much more dangerous than gunshots, because all the people who have a near-death experience with gunshots seem to recover (naturally, because they are otherwise healthy people), while those who have a near-death experience with AIDS seem to continue on to death. But that ignores the fact that there are plenty of people with AIDS who never even get to the point of a near-death experience, because we have so many drugs to control the disease; it is essentially a treatable chronic condition in the West. We would have said, "Oh my gosh, AIDS is so deadly!" only because we've ignored all the people who get AIDS but never reach the precipice of death.

Looking only at people who were successfully converted to veganism "slowly" is similar to looking only at people who have a deadly experience with AIDS. *Of course* it will look like the treatment at issue is powerful, because you're only looking at people for whom it had a powerful effect, while ignoring the many others (millions of others, in both cases) for whom it had no effect. (For those interested in a numerical example, see the bottom of this blog post.)

2. The study doesn't test causality. 

The HRC study is essentially a series of self-reported correlations. Any professional scientist will tell you that there are an infinite number of models to fit any particular set of data. For example, say I wake up, then I see the sun rise. Did I cause the sun to rise? Did the sun wake me up ahead of time because it wanted to show me its bright colors? Was there a magical elf who wanted to tell a story of me waking up, then the sun rising, who caused both to happen? Or was the correlation entirely random? 

You cannot distinguish between any of these theories with correlations. The correlations are still interesting, as they do limit the sorts of causal theories you should test. (For example, if I see the sun rise every morning after I get up, I should probably not test a theory that predicts the sun only rises in the evening when I go to sleep.) But the best way to look at these results -- and the way any professional economist would look at them -- is as raw data upon which we can actually perform some interesting experiments. For example, one could try an instrumental variable analysis, which can approximate the effects of a randomized controlled trial (the gold standard in empirical science) if we can find a variable that is correlated with the independent variable but affects the dependent variable only through it. I don't know what that instrumental variable might be, however, as the data is not yet open to the public. And it wouldn't solve the former problem -- sampling on the dependent variable -- so my hopes are fairly slim.
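To make the instrumental-variable idea concrete, here is a toy simulation. All of the numbers and variable names are invented for illustration and have nothing to do with the HRC data: an instrument z that shifts the "treatment" x, but affects the outcome y only through x, lets us recover the true causal effect even when a hidden confounder biases the naive correlation.

```python
# Toy illustration of instrumental-variable (IV) estimation.
# Everything here is simulated; nothing is drawn from the HRC study.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)  # instrument: moves x, touches y only through x
u = rng.normal(size=n)  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)        # "treatment" / exposure
y = 2.0 * x + 3.0 * u + rng.normal(size=n)  # outcome; true effect of x is 2.0

# Naive correlation-based (OLS) estimate: biased upward by the confounder u.
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# IV (Wald) estimate: ratio of covariances with the instrument.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS estimate: {ols:.2f}")  # noticeably above the true effect of 2.0
print(f"IV estimate:  {iv:.2f}")   # close to the true effect of 2.0
```

The naive estimate absorbs the confounder's influence, while the IV estimate does not -- which is exactly why economists reach for instruments when a randomized trial is unavailable.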

Upshot? Great effort. But it's probably best for social science to be performed by professional social scientists at research institutions. Frankly, even most of that research is spurious. (One of my former advisers used to tell me that only 1 out of 20 articles in even the best journals actually had any result to trust.) It's unreasonable to think we can do any better, as a movement, with our limited funds and expertise. 

Numerical Example (Note that this is hypothetical and used only to illustrate the problem) 

- 100 people exposed to "go fast" message
- 100 people exposed to "go slow" message

- 50 people converted with "go fast"
- 5 people converted with "go slow"

Relapse after one month:
- 15 people relapse after going fast
- 1 person relapses after going slow

HRC-equivalent analysis:
- We looked at the 55 people who are current or former vegetarians, and we found that 15 out of the 16 who relapsed within a month, i.e. 94%, transitioned to veg*ism quickly. This shows us that people are going too fast and that we have to change our message to "go slow." 

Correct analysis: 
- We looked at 200 people who were exposed to two different messages: go fast and go slow. The go fast message appeared to be ten times as effective as go slow (50 conversions versus 5). However, relapse occurred quickly, and those who converted under the "go fast" treatment appeared to relapse at a somewhat higher rate (30% compared to 20% within a month) than those who converted under "go slow." This could be because the go fast message was less robust. It could also be because those who change quickly in one direction, i.e. towards veg*ism, also change quickly in the other direction. The overwhelming proportion of former vegetarians who transitioned to veg*ism quickly, in turn, could simply indicate that the "go fast" message is much more effective than "go slow." Further analysis is warranted before drawing any definitive conclusions.
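The arithmetic above can be checked in a few lines of Python. The counts are the made-up figures from this hypothetical example, not real data:

```python
# Hypothetical counts from the example above (not real data).
exposed_fast, exposed_slow = 100, 100
converted_fast, converted_slow = 50, 5
relapsed_fast, relapsed_slow = 15, 1

# "Sampling on the dependent variable": look only at converts who relapsed.
share_fast_among_relapsers = relapsed_fast / (relapsed_fast + relapsed_slow)
print(f"{share_fast_among_relapsers:.0%} of relapsers went fast")  # 94%

# Correct denominators: everyone exposed to each message.
conversion_fast = converted_fast / exposed_fast      # 50%
conversion_slow = converted_slow / exposed_slow      # 5%
relapse_rate_fast = relapsed_fast / converted_fast   # 30%
relapse_rate_slow = relapsed_slow / converted_slow   # 20%

# Net "still veg*n after a month" per 100 people exposed:
net_fast = converted_fast - relapsed_fast  # 35
net_slow = converted_slow - relapsed_slow  # 4
print(f"go fast: {net_fast} per 100 still veg*n; go slow: {net_slow} per 100")
```

The same 94% figure that looks damning for "go fast" coexists with "go fast" leaving nearly nine times as many people veg*n per 100 exposed -- the whole problem in two print statements.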