Investigating Woo: Spring Forest Qigong “research”

Posted on October 29, 2011

Originally posted on An American Atheist.

This is a follow-up to my previous post investigating a study from the Mayo Clinic in collaboration with the University of Minnesota claiming that external qigong, a form of ancient Chinese medicine, is an effective treatment for chronic pain.  My critique apparently got on the nerves of at least one person, Drew Hempel, qigong enthusiast and woo extraordinaire, who offered his assurance regarding the validity of the study and its methodology.  Sadly, it’s not assurance that I am after—it’s evidence.  However, maybe I was wrong; maybe the study was academically rigorous and its conclusions actually sound.  After all, I am only an undergraduate (despite the fact that, in a recent blog post, Hempel incorrectly described me as a “university senior biologist”), and I admittedly only read the abstract.

Mr. Hempel has posted on internet blogs and forums statements such as the following:

Last fall there was a new study done by doctors from one of the top rated hospitals in the world — the Mayo Clinic in Minnesota. The study proved the existence and the efficacy of external qi (paranormal energy) healing transmission. . .  O.K. I want to emphasize the implications of this study. This is ground-breaking official proof of something that undermines the very foundation of science.

Such extraordinary claims require even more extraordinary evidence, and Hempel believes, along with many, many others, that this evidence exists in a study performed at the Spring Forest Qigong center in Minnesota, published in The American Journal of Chinese Medicine.

After Hempel’s criticisms of my post and his request that I not “give up so easily” in my search for truth (I suggest Hempel do the same), I decided to check whether my university subscribed to the journal in question, in order to obtain the full text of External Qigong for Chronic Pain (2010), the study that had supposedly demonstrated the efficacy of qigong.  Much to my surprise, it does, and I found the paper.  While reading the study, my initial criticisms based on the abstract alone became more and more cemented.  I am now, more than ever, convinced that the study is absolute bunk from the top down.  The flaws are numerous, and I have listed them below in point form, followed by a more in-depth criticism of the methodology behind each.

1.  Flawed sampling method.

2.  Lack of adequate control groups.

3.  Subjectivity in data collection.

4.  Reliance on anecdotal evidence.

Flawed sampling method

Generally, when attempting to draw statistics from a given population, a method is used to ensure that the individuals (the units) chosen for the study are representative of the population as a whole and do not reflect some uneven sampling that could skew the results.  For example, if one tried to estimate the percentage of Americans who are bedridden by sitting in a park and tallying people as they walk past, the estimate would be radically off, because bedridden people, by definition, are never going to be in the park, and thus will be excluded from the study.  This is known as a sampling bias, and the Spring Forest Qigong (SFQ) experiment is a textbook case.  Here is how subjects were chosen to participate in the SFQ study (emphasis mine):

All 50 participants were recruited from those who called the Spring Forest Qigong (SFQ) Center, in Minneapolis, Minnesota, to make an initial appointment for external qigong.  The SFQ receptionist asked callers if they suffered from chronic pain.  Those who responded positively to the question were offered the opportunity to participate in the study . . .

It should be immediately clear to anyone that the above is extremely sloppy, biased methodology.  The population of interest is never defined.  Presumably it is “all humans with chronic pain,” but the study fails to sample from it, because subjects were drawn only from people who called the clinic requesting qigong treatment.  Every subject chosen to participate was therefore already partial to the idea that qigong would be an effective treatment for their pain.  The population they are actually sampling is “humans with chronic pain who believe qigong works,” which creates extremely fertile ground for a placebo effect.

Furthermore, the fact that subjects were able to decide whether or not they would even participate in the study is problematic.  This potentially creates what is known as a self-selection bias: the choice to participate in a study is often correlated with other characteristics of a person that could affect the results; in other words, it can mean the participants are not truly representative of the population of interest.
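The distortion that this combination of sampling and self-selection bias produces is easy to see in a quick simulation.  Every number below (the believer fraction, the placebo-response rates, the sample sizes) is invented purely for illustration; nothing here comes from the study itself:

```python
import random

random.seed(42)

# Hypothetical population of 10,000 chronic-pain sufferers; assume
# (illustratively) that 20% already believe qigong works.
population = [{"believer": random.random() < 0.2} for _ in range(10_000)]

def reports_improvement(person):
    # Assumed placebo-response rates: believers report improvement
    # 60% of the time, non-believers only 20% of the time.
    p = 0.6 if person["believer"] else 0.2
    return random.random() < p

# Biased sample: the first 50 believers, standing in for people who
# phoned the clinic to book a qigong appointment.
biased = [p for p in population if p["believer"]][:50]
# Unbiased sample: 50 people drawn at random from the whole population.
unbiased = random.sample(population, 50)

def rate(sample):
    return sum(reports_improvement(p) for p in sample) / len(sample)

biased_rate = rate(biased)
unbiased_rate = rate(unbiased)
print(f"improvement rate, callers only:  {biased_rate:.2f}")
print(f"improvement rate, random sample: {unbiased_rate:.2f}")
```

Even with no treatment effect whatsoever, the sample made up entirely of believers reports far more improvement than a random sample would, which is exactly the trap the SFQ recruitment method walks into.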

To further add to the list of biases evident throughout the study, I found a bit of information to be particularly interesting.  I refer to the way in which subjects were compensated for their participation.

In compensation for their time, each participant received a 50% discount off the usual fee for EQT for each of their four qigong visits.  This represented a discount of approximately $200 for each participant.

In case you missed it, I’ll spell it out: the subjects were paying money for the treatment.  Someone who has invested $200 in a treatment they already believe to be effective is obviously more likely to report that their condition has improved, perhaps fooling themselves, because the alternative is admitting they have just been conned.  I am reminded of a family friend who cannot tell the difference between a plain glass earring and a genuine diamond one, that is, until she is told which is which, after which she insists she can, of course, now see the difference.  Now it is obviously shinier!  If she were told the glass was the diamond, I am sure she would have responded similarly.  The bottom line is, if you’re investing serious cash in a treatment, you are likely, if unconsciously, priming yourself for it to work.  I don’t know whether this bias has a formal name (it is closely related to the sunk-cost fallacy), so I’ll coin a term and dub it the investment bias.  Once again, the subjects are primed for the placebo effect.

Lastly, and this is a point I made in my first article, 74% of the participants were taking other forms of medication in concert with qigong.  Even more worrying, 30% (15 people) reported increasing or decreasing their doses, beginning new prescription medications, or discontinuing a previous medication during the four-week trial period.  This adds a lot of noise to the data, making it extremely difficult to ascertain the cause of any positive results.

Lack of adequate control groups

This is perhaps the most revealing, and certainly the most crippling, aspect of the SFQ study.  One gets the impression that proponents of ancient Chinese wisdom, especially those asserting the reality of a force, or energy, called qi (also known as “chi”), desperately want to demonstrate it scientifically.  They have science envy.  Indeed, many believers in qi, and in the use of qigong as a means to “regulate its flow” and achieve “dynamic mind-body integration,” whatever that means, claim that qigong is extremely effective in treating serious conditions and illnesses such as late-stage cancer, multiple sclerosis, and Parkinson’s disease.  So, one would think, scientifically demonstrating qigong’s efficacy against mere chronic pain should be a cakewalk.  However, when actually given the chance to put up or shut up, the “researchers” involved failed to implement the most basic techniques for ensuring the reliability of their results.  Such failures usually arise from the lack of adequate controls to rule out alternative hypotheses.  The SFQ study is, sadly, no different.

The subjects were divided into two groups, one that received external qigong treatment (EQT), and one that received equal attention time (EAT).  Right away a mental red-flag is raised; what on earth is equal attention time?  The paper explains:

For the control [EAT] group, an investigator engaged each participant in conversation and provided full attention to the participant for 25 to 30 min.

This is their idea of a control, and it is seriously inexcusable methodology.  The study masquerades as science, attempting to persuade people of the legitimacy of qigong, yet its actual intent—to deceive—is all too apparent from the lack of adequate controls protecting against the placebo effect.  Here is what they should have done, and the design is simple enough that there is no excuse for its omission.  There should have been three groups: one receiving actual qigong treatment; another receiving a mock qigong treatment, in which subjects believe they are receiving proper treatment but are not; and a final group receiving no treatment at all (a zero-stimulus group).  This design makes it possible to tell whether external qigong treatment performs better than a similarly administered placebo (the mock or “sham” treatment), and it allows any of the stimulus groups to be compared against the zero-stimulus group.  Although no excuse could resolve the issue of the missing control, they sure enough offered one, and it’s pretty ridiculous.

. . . in keeping with qigong philosophy and at the request of the qigong master, sham treatments were avoided and replaced with EAT and delayed treatments.  Thus, deception was avoided and control subjects also had the opportunity to benefit from their experience.  Our goal was to provide the actual qigong experience for the research subjects as much as possible without imposing Western biomedical concepts on the practice.

Tell me, what is the point?  Why bother running an experiment at all, in an attempt to convincingly demonstrate the validity of qigong to the greater scientific community, if you’re going to side-step the whole process?  Who is the target audience of this study?  It’s certainly not people already devoted to ancient Chinese medicine; they are already sold on the idea.  Nor can it be the scientific community, since the authors state in plain terms that qigong philosophy forbids “Western biomedical concepts” such as a placebo control group, a sine qua non for this type of study.  I believe the intended audience is the general, scientifically illiterate public.  Most people won’t read the actual study, and will simply take the researchers at their word.  Now, with a study to point to, qigong practitioners gain a wider public audience and the appearance of credibility.

The fact that no sham treatments were given is extremely concerning, and it undermines the validity of the entire study.  Additionally, subjects knew that they were participating in a study, which leaves the door wide open for an attention bias (often called the Hawthorne effect) on top of the already prevalent placebo effect: people who know they are being studied generally behave differently than they would if they were oblivious to the fact that data about them was being collected.  Subjects should not have been informed that they were being studied.
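To see concretely what the missing sham arm conceals, consider a toy simulation of the three-group design described above.  All effect sizes here are assumptions for the sake of argument: suppose the real treatment has no specific effect at all, so it and the sham produce the same placebo response, while mere conversation (EAT) produces a smaller one:

```python
import random
import statistics

random.seed(0)

# Assumed pain reductions in cm on a 10-cm VAS, n=25 per arm;
# all means and spreads are invented for illustration.
def arm(mean_reduction, n=25, sd=1.0):
    return [random.gauss(mean_reduction, sd) for _ in range(n)]

eqt  = arm(2.0)   # external qigong treatment (placebo response only)
sham = arm(2.0)   # mock treatment: same ritual, no "qi"
eat  = arm(0.5)   # equal attention time (conversation)

mean = statistics.mean
eqt_vs_eat  = mean(eqt) - mean(eat)
eqt_vs_sham = mean(eqt) - mean(sham)
print(f"EQT vs EAT:  {eqt_vs_eat:+.2f} cm")   # large gap: looks like qigong "works"
print(f"EQT vs sham: {eqt_vs_sham:+.2f} cm")  # near zero: no specific effect
```

With only the EAT comparison, a pure placebo response is indistinguishable from a genuine treatment effect; the sham arm is precisely the comparison that would have exposed it.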

Subjectivity in data collection

The method by which initial pain data was collected from test subjects was extremely subjective, and all subsequent pain data was subject to a recall bias.  Participants’ pain was assessed using a visual analog scale (VAS), which is simply a 10-cm line with the left end labeled “no pain” and the right end labeled “pain as bad as it could possibly be.”

Prior to testing, subjects were told to draw a line from the left, stopping at the point that best represented their current level of pain.  This is highly subjective: they are obviously experiencing some pain, yet they probably have no real way to conceptualize the worst pain possible, since that end of the scale is simply outside common experience.  Furthermore, before each of the four weekly treatments, subjects retook the VAS pain test, and all of the VAS data was compared at the end of the study.  The problem with this method is that each participant relies on memory alone when reporting updated pain data, and memory isn’t very reliable; this is known as a recall bias.  Each person must remember how long a line they drew the previous week and rely solely on that week-old recollection to decide how far their pain has receded from the worst, most excruciating pain imaginable.  I don’t think I could do that.  Poor recollection of the past, coupled with all the biases elucidated above and the fact that people who know they are being studied act differently, paves the way for substantial conscious or unconscious distortion of the data.
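A small simulation shows how this combination of reporting noise and expectation can manufacture an apparent improvement out of nothing.  The parameters (a constant true pain of 6 cm, an expectation-driven nudge of 0.4 cm per week, 0.8 cm of reporting noise) are purely hypothetical:

```python
import random
import statistics

random.seed(1)

# Each subject's true pain never changes, but every weekly VAS mark
# carries recall/reporting noise, and the expectation of improvement
# (assumed here) nudges later marks downward.
def weekly_marks(weeks=4, true_pain=6.0, expectation_drift=-0.4, noise=0.8):
    return [true_pain + expectation_drift * week + random.gauss(0, noise)
            for week in range(weeks)]

subjects = [weekly_marks() for _ in range(50)]
week1 = statistics.mean(s[0] for s in subjects)
week4 = statistics.mean(s[-1] for s in subjects)
print(f"mean VAS week 1: {week1:.2f} cm")
print(f"mean VAS week 4: {week4:.2f} cm")
# Reported pain falls across the trial even though true pain is constant.
```

Without an objective measure, or at minimum a blinded sham comparison, a downward drift like this is indistinguishable from a real treatment effect.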

Reliance on anecdotal evidence

The textbook Statistics for the Life Sciences states the following:

The accumulation of anecdotes often leads to conjecture and to scientific investigation, but it is predictable pattern, not anecdote, that establishes a scientific theory.

I couldn’t have said it better myself.  It really gets to the heart of what science is all about: making predictions.  The website for Spring Forest Qigong boasts about its page full of personal testimonies of those supposedly healed of major ailments.  But, as indicated above, these testimonies should only be used as justification for further inquiry.  Taken alone, these testimonies do not amount to evidence for the efficacy of qigong.  Michael Shermer, founder of The Skeptics Society, wrote an article for Scientific American discussing the reason people erroneously find anecdotal evidence so convincing.

. . . [T]hinking anecdotally comes naturally, whereas thinking scientifically does not. . . The reason for this cognitive disconnect is that we have evolved brains that pay attention to anecdotes because false positives (believing there is a connection between A and B when there is not) are usually harmless, whereas false negatives (believing there is no connection between A and B when there is) may take you out of the gene pool. Our brains are belief engines that employ association learning to seek and find patterns. Superstition and belief in magic are millions of years old, whereas science, with its methods of controlling for intervening variables to circumvent false positives, is only a few hundred years old. So it is that any medical huckster promising that A will cure B has only to advertise a handful of successful anecdotes in the form of testimonials.

And anecdotal testimonies are literally all that Spring Forest Qigong has; no wonder they’re so obsessed with them.  However, they’re completely lacking where it actually counts—hard data derived from properly controlled experiments.

Conclusion

I have demonstrated that the Spring Forest Qigong study does not meet even the lowest bar of relevant experimental standards to support its conclusion that external qigong treatment is effective at ameliorating chronic pain.  In fact, I hesitate to even call it a study, since those performing it inexcusably left out many essential procedural necessities, the most obvious being a proper placebo control group.  Ignorance of this methodology is no excuse: the placebo-controlled trial is considered the “gold standard” of experimental procedure, and is perhaps one of the first research tools taught in science and statistics classrooms worldwide.

In order to preemptively counter some criticisms, I feel that I must emphasize that I do not believe a study must be flawless in order to be worthy of publication.  It is impossible to perform a perfect experiment, controlling for every conceivable source of bias and error.  However, what is expected of an experiment is that it be performed using the best possible methods under reasonable conditions.  It seems quite obvious that those performing the study in question showed little to no concern for carrying out a properly controlled experimental procedure that would have, in no small way, safeguarded their study against such novice errors.

I find it difficult not to question the sincerity of those involved in the study, namely Ann Vincent, Jamia Hill, Kelly M. Kruk, Stephen S. Cha, and Brent A. Bauer.  I do not believe they approached this study in an attempt to honestly demonstrate the validity of qigong, but instead purposefully cut experimental corners in order to lend credence to their own paranormal leanings.  The Mayo Clinic and the University of Minnesota should be ashamed and embarrassed to have their name associated with a study demonstrating such a poor grasp of experimental methodology, scientific rigor, and intellectual honesty.

Lastly, I call into question the peer review process (or perhaps lack thereof) within The American Journal of Chinese Medicine.  What does it say about the academic standards of such a journal when a biology undergraduate is able to render a paper completely and utterly invalid?

It cannot be repeated enough: extraordinary claims require extraordinary evidence.  Have the Mayo Clinic and the University of Minnesota supported their extraordinary claim with equally compelling evidence?  The answer is a resounding “No,” they have not.

NOTE:  Replication of copyrighted materials above is in compliance with the Fair Use Doctrine.
