Placebo
A well-recognized component of therapeutic drug testing is comparing the experiences of people receiving the drug with those of people not receiving it. In short, the testers want to know whether the drug actually does anything. In these tests the control group, the people not taking the drug, often receives an inert substance that they nevertheless believe to be the drug being tested. Sometimes, despite receiving no medication whatsoever, members of the control group experience improvements in their symptoms. This effect can also arise outside any targeted, scientific testing process, from nothing more than a long, steady, general drumbeat about the efficacy of a therapy. The touted ability of vitamin C to prevent the common cold appears to be such an example.
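A minimal sketch of that comparison, with made-up numbers, is below. In the simulation both arms share the same background placebo response, the drug arm adds a (possibly small) true effect on top, and a two-sample test asks whether the drug delivers anything beyond what belief alone does. The sample size, effect sizes, and noise level are all assumptions for illustration only.

    # Hypothetical numbers: why a control arm matters.
    # Both arms share a background "placebo response"; the drug arm adds a
    # (possibly zero) true effect on top. Comparing the drug arm to the
    # control arm isolates the drug's contribution from the placebo response.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n = 200                      # patients per arm (assumed)
    placebo_response = 1.0       # improvement everyone reports (assumed)
    true_drug_effect = 0.3       # extra improvement from the drug (assumed)
    noise = 1.5                  # person-to-person variability (assumed)

    control = placebo_response + rng.normal(0, noise, n)
    drug = placebo_response + true_drug_effect + rng.normal(0, noise, n)

    print(f"mean improvement, control arm: {control.mean():.2f}")
    print(f"mean improvement, drug arm:    {drug.mean():.2f}")

    # Two-sample t-test: does the drug add anything beyond placebo?
    t, p = stats.ttest_ind(drug, control)
    print(f"t = {t:.2f}, p = {p:.3f}")

Note that the control arm's mean improvement is well above zero even though its members received nothing; that gap between zero and the control mean is the placebo response the trial design is built to subtract out.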
When people discuss the placebo effect, they usually give it a negative connotation. If the appearance of the placebo effect isn't a disappointment, as in the case of a tested drug that proves less effective than hoped, it's a fraud: a shorthand way of saying Emergen-C doesn't really work.
That is, unless you believe it does, in which case it might.
Instead of dismissing the placebo effect as shorthand for failed expectations or a dead end, perhaps we should treat it as an opening for new exploration. (Perhaps, and likely so, such exploration already has occurred.) If the mind, through delusion (conscious or unconscious), belief (actual or fraudulently induced), or faith (earnest, blind, or false), can achieve physiological results in the body, we may need to consider manifestations of that capability, such as the placebo effect, as an entry point rather than an end point, something to be harnessed or developed rather than dismissed.
I will speculate that such an experiment has already been done: one in which everyone receives a placebo, but the experimenters accidentally let it slip to one group (or to each group) that they are, or aren't, receiving the real thing.
A related topic: I recently heard from Freeman Dyson, who collaborated with Bill Press on an iterated prisoner's dilemma problem, about work of Press's applying a similar idea to clinical trials. The paper is here:
http://www.pnas.org/content/106/52/22387
Even if the math is a bit tricky, the concept isn't too hard. Current legislation requires two equal-sized arms when comparing two approaches to treating a life-threatening condition. Press (with some help from Dyson) showed that it is possible to make the same statistical claim while exposing many fewer patients to the worse treatment, by waiting for results from earlier patients before deciding what to give the next patient on the list (while still maintaining a double-blind process).
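The flavor of the idea can be shown with a toy simulation. The sketch below uses a generic adaptive-allocation rule (Thompson sampling over two arms with hypothetical cure rates), not the procedure from the paper, and it ignores the blinding machinery and formal statistical guarantees the paper actually works out; it only illustrates how letting early outcomes steer later assignments sends fewer patients to the worse treatment than a fixed 50/50 split.

    # Illustrative sketch only -- not the allocation rule from the Press paper.
    # Thompson sampling: each new patient is assigned by sampling from a Beta
    # posterior over each arm's success rate, so later patients drift toward
    # whichever treatment is performing better in the accumulating data.
    import random

    random.seed(1)

    TRUE_SUCCESS = {"A": 0.55, "B": 0.40}   # hypothetical cure rates; B is worse
    N_PATIENTS = 200

    def run_adaptive_trial():
        # Beta(1 + successes, 1 + failures) posterior per arm (uniform prior).
        counts = {arm: {"s": 0, "f": 0} for arm in TRUE_SUCCESS}
        assigned = {arm: 0 for arm in TRUE_SUCCESS}
        for _ in range(N_PATIENTS):
            # Sample a plausible success rate for each arm; give the next
            # patient whichever arm drew higher.
            draws = {arm: random.betavariate(1 + c["s"], 1 + c["f"])
                     for arm, c in counts.items()}
            arm = max(draws, key=draws.get)
            assigned[arm] += 1
            # Observe the (simulated) outcome before the next assignment.
            if random.random() < TRUE_SUCCESS[arm]:
                counts[arm]["s"] += 1
            else:
                counts[arm]["f"] += 1
        return assigned

    print("fixed 50/50 design:  ", {arm: N_PATIENTS // 2 for arm in TRUE_SUCCESS})
    print("adaptive assignments:", run_adaptive_trial())

On most runs the worse arm ends up with well under the 100 patients a fixed equal-allocation design would send it, which is the ethical payoff the paper quantifies rigorously.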