Afterwards, when they measured alertness—as well as any subjective effects—the researchers found that two pills were more effective than one, as we might have expected (and two pills were better at eliciting side-effects too). They also found that colour had an effect on outcome: the pink sugar tablets were better at maintaining concentration than the blue ones. Since colours in themselves have no intrinsic pharmacological properties, the difference in effect could only be due to the cultural meanings of pink and blue: pink is alerting, blue is cool. Another study suggested that Oxazepam, a drug similar to Valium (which our GP once unsuccessfully prescribed for me as a hyperactive child), was more effective at treating anxiety in a green tablet, and more effective for depression when yellow.
Drug companies, more than most, know the benefits of good branding: they spend more on PR, after all, than they do on research and development. As you’d expect from men of action with large houses in the country, they put these theoretical ideas into practice: so Prozac, for example, is white and blue; and in case you think I’m cherry-picking here, a survey of the colour of pills currently on the market found that stimulant medication tends to come in red, orange or yellow tablets, while antidepressants and tranquillisers are generally blue, green or purple.
Issues of form go much deeper than colour. In 1970 a sedative—chlordiazepoxide—was found to be more effective in capsule form than pill form, even for the very same drug, in the very same dose: capsules at the time felt newer, somehow, and more sciencey. Maybe you’ve caught yourself splashing out and paying extra for ibuprofen capsules in the chemist’s.
Route of administration has an effect as well: salt-water injections have been shown in three separate experiments to be more effective than sugar pills for blood pressure, for headaches and for postoperative pain, not because of any physical benefit of salt-water injection over sugar pills—there isn’t one—but because, as everyone knows, an injection is a much more dramatic intervention than just taking a pill.
Closer to home for the alternative therapists, the BMJ recently published an article comparing two different placebo treatments for arm pain, one of which was a sugar pill, and one of which was a ‘ritual’, a treatment modelled on acupuncture: the trial found that the more elaborate placebo ritual had a greater benefit.
But the ultimate testament to the social construction of the placebo effect must be the bizarre story of packaging. Pain is an area where you might suspect that expectation would have a particularly significant effect. Most people have found that they can take their minds off pain—to at least some extent—with distraction, or have had a toothache which got worse with stress.
Branthwaite and Cooper did a truly extraordinary study in 1981, looking at 835 women with headaches. It was a four-armed study, where the subjects were given either aspirin or placebo pills, and these pills in turn were packaged either in blank, bland, neutral boxes, or in full, flashy, brand-name packaging. They found—as you’d expect—that aspirin had more of an effect on headaches than sugar pills; but more than that, they found that the packaging itself had a beneficial effect, enhancing the benefit of both the placebo and the aspirin.
People I know still insist on buying brand-name painkillers. As you can imagine, I’ve spent half my life trying to explain to them why this is a waste of money: but in fact the paradox of Branthwaite and Cooper’s experimental data is that they were right all along. Whatever pharmacology theory tells you, that brand-named version is better, and there’s just no getting away from it. Part of that might be the cost: a recent study looking at pain caused by electric shocks showed that a pain relief treatment was stronger when subjects were told it cost $2.50 than when they were told it cost 10c. (And a paper currently in press shows that people are more likely to take advice when they have paid for it.)
It gets better—or worse, depending on how you feel about your world view slipping sideways. Montgomery and Kirsch [1996] told college students they were taking part in a study on a new local anaesthetic called ‘trivaricaine’. Trivaricaine is brown, you paint it on your skin, it smells like a medicine, and it’s so potent you have to wear gloves when you handle it: or that’s what they implied to the students. In fact it’s made of water, iodine and thyme oil (for the smell), and the experimenter (who also wore a white coat) was only using rubber gloves for a sense of theatre. None of these ingredients will affect pain.
The trivaricaine was painted onto one or other of the subjects’ index fingers, and the experimenters then applied painful pressure with a vice. One after another, in varying orders, pain was applied, trivaricaine was applied, and as you would expect by now, the subjects reported less pain, and less unpleasantness, for the fingers that were pre-treated with the amazing trivaricaine. This is a placebo effect, but the pills have gone now.
It gets stranger. Sham ultrasound is beneficial for dental pain, placebo operations have been shown to be beneficial in knee pain (the surgeon just makes fake keyhole surgery holes in the side and mucks about for a bit as if he’s doing something useful), and placebo operations have even been shown to improve angina.
That’s a pretty big deal. Angina is the pain you get when there’s not enough oxygen getting to your heart muscle for the work it’s doing. That’s why it gets worse with exercise: because you’re demanding more work from the heart muscle. You might get a similar pain in your thighs after bounding up ten flights of stairs, depending on how fit you are.
Treatments that help angina usually work by dilating the blood vessels to the heart, and a group of chemicals called nitrates are used for this purpose very frequently. They relax the smooth muscle in the body, which dilates the arteries so more blood can get through (they also relax other bits of smooth muscle in the body, including your anal sphincter, which is why a variant is sold as ‘liquid gold’ in sex shops).
In the 1950s there was an idea that you could get blood vessels in the heart to grow back, and thicker, if you tied off an artery on the front of the chest wall that wasn’t very important, but which branched off the main heart arteries. The idea was that this would send messages back to the main branch of the artery, telling it that more artery growth was needed, so the body would be tricked.
Unfortunately this idea turned out to be nonsense, but only after a fashion. In 1959 a placebo-controlled trial of the operation was performed: in some operations they did the whole thing properly, but in the ‘placebo’ operations they went through the motions but didn’t tie off any arteries. It was found that the placebo operation was just as good as the real one—people seemed to get a bit better in both cases, and there was little difference between the groups—but the most strange thing about the whole affair was that nobody made a fuss at the time: the real operation wasn’t any better than a sham operation, sure, but how could we explain the fact that people had been sensing an improvement from the operation for a very long time? Nobody thought of the power of placebo. The operation was simply binned.
That’s not the only time a placebo benefit has been found at the more dramatic end of the medical spectrum. A Swedish study in the late 1990s showed that patients who had pacemakers installed, but not switched on, did better than they were doing before (although they didn’t do as well as people with working pacemakers inside them, to be clear). Even more recently, one study of a very hi-tech ‘angioplasty’ treatment, involving a large and sciencey-looking laser catheter, showed that sham treatment was almost as effective as the full procedure.
‘Electrical machines have great appeal to patients,’ wrote Dr Alan Johnson in the Lancet in 1994 about this trial, ‘and recently anything with the word LASER attached to it has caught the imagination.’ He’s not wrong. I went to visit Lilias Curtin once (she’s Cherie Booth’s alternative therapist), and she did Gem Therapy on me, with a big shiny science machine that shone different-coloured beams of light onto my chest. It’s hard not to see the appeal of things like Gem Therapy in the context of the laser catheter experiment. In fact, the way the evidence is stacking up, it’s hard not to see all the claims of alternative therapists, for all their wild, wonderful, authoritative and empathic interventions, in the context of this chapter.
In fact, even the lifestyle gurus get a look in, in the form of an elegant study which examined the effect of simply being told that you are doing something healthy. Eighty-four female room attendants working in various hotels were divided into two groups: one group was told that cleaning hotel rooms is ‘good exercise’ and ‘satisfies the Surgeon General’s recommendations for an active lifestyle’, along with elaborate explanations of how and why; the ‘control’ group did not receive this cheering information, and just carried on cleaning hotel rooms. Four weeks later, the ‘informed’ group perceived themselves to be getting significantly more exercise than before, and showed a significant decrease in weight, body fat, waist-to-hip ratio and body mass index, but amazingly, both groups were still reporting the same amount of activity.*
* I agree: this is a bizarre and outrageous experimental finding, and if you have a good explanation for how it might have come about, the world would like to hear from you. Follow the reference, read the full paper online and start a blog, or write a letter to the journal that published it.
What the doctor says
If you can believe fervently in your treatment, even though controlled tests show that it is quite useless, then your results are much better, your patients are much better, and your income is much better too. I believe this accounts for the remarkable success of some of the less gifted, but more credulous members of our profession, and also for the violent dislike of statistics and controlled tests which fashionable and successful doctors are accustomed to display.
Richard Asher, Talking Sense, Pitman Medical, London, 1972
As you will now be realising, in the study of expectation and belief, we can move away from pills and devices entirely. It turns out, for example, that what the doctor says, and what the doctor believes, both have an effect on healing. If that sounds obvious, I should say they have an effect which has been measured, elegantly, in carefully designed trials.
Gryll and Katahn [1978] gave patients a sugar pill before a dental injection, but the doctors who were handing out the pill gave it in one of two different ways: either with an outrageous oversell (‘This is a recently developed pill that’s been shown to be very effective…effective almost immediately…’); or downplayed, with an undersell (‘This is a recently developed pill…personally I’ve not found it to be very effective…’). The pills which were handed out with the positive message were associated with less fear, less anxiety and less pain.
Even if he says nothing, what the doctor knows can affect treatment outcomes: the information leaks out, in mannerisms, affect, eyebrows and nervous smiles, as Gracely [1985] demonstrated with a truly ingenious experiment, although understanding it requires a tiny bit of concentration.
He took patients having their wisdom teeth removed, and split them randomly into three treatment groups: they would have either salt water (a placebo that does ‘nothing’, at least not physiologically), or fentanyl (an excellent opiate painkiller, with a black-market retail value to prove it), or naloxone (an opiate receptor blocker that would actually increase the pain).
In all cases the doctors were blinded to which of the three treatments they were giving to each patient: but Gracely was really studying the effect of his doctors’ beliefs, so the groups were further divided in half again. In the first group, the doctors giving the treatment were told, truthfully, that they could be administering either placebo, or naloxone, or the pain-relieving fentanyl: this group of doctors knew there was a chance that they were giving something that would reduce pain.
In the second group, the doctors were lied to: they were told they were giving either placebo or naloxone, two things that could only do nothing, or actively make the pain worse. But in fact, without the doctors’ knowledge, some of their patients were actually getting the pain-relieving fentanyl. As you would expect by now, just through manipulating what the doctors believed about the injection they were giving, even though they were forbidden from vocalising their beliefs to the patients, there was a difference in outcome between the two groups: the first group experienced significantly less pain. This difference had nothing to do with what actual medicine was being given, or even with what information the patients knew: it was entirely down to what the doctors knew. Perhaps they winced when they gave the injection. I think you might have.
‘Placebo explanations’
Even if they do nothing, doctors, by their manner alone, can reassure. And even reassurance can in some senses be broken down into informative constituent parts. In 1987, Thomas showed that simply giving a diagnosis—even a fake ‘placebo’ diagnosis—improved patient outcomes. Two hundred patients with abnormal symptoms, but no signs of any concrete medical diagnosis, were divided randomly into two groups. The patients in one group were told, ‘I cannot be certain of what the matter is with you,’ and two weeks later only 39 per cent were better; the other group were given a firm diagnosis, with no messing about, and confidently told they would be better within a few days. Sixty-four per cent of that group got better in two weeks.
This raises the spectre of something way beyond the placebo effect, and cuts even further into the work of alternative therapists: because we should remember that alternative therapists don’t just give placebo treatments, they also give what we might call ‘placebo explanations’ or ‘placebo diagnoses’: ungrounded, unevidenced, often fantastical assertions about the nature of the patient’s disease, involving magical properties, or energy, or supposed vitamin deficiencies, or ‘imbalances’, which the therapist claims uniquely to understand.
And here, it seems that this ‘placebo’ explanation—even if grounded in sheer fantasy—can be beneficial to a patient, although interestingly, perhaps not without collateral damage, and it must be done delicately: assertively and authoritatively giving someone access to the sick role can also reinforce destructive illness beliefs and behaviours, unnecessarily medicalise symptoms like aching muscles (which for many people are everyday occurrences), and militate against people getting on with life and getting better. It’s a very tricky area.
I could go on. In fact there has been a huge amount of research into the value of a good therapeutic relationship, and the general finding is that doctors who adopt a warm, friendly and reassuring manner are more effective than those who keep consultations formal and do not offer reassurance. In the real world, there are structural cultural changes which make it harder and harder for a medical doctor to maximise the therapeutic benefit of a consultation. Firstly, there is the pressure on time: a GP can’t do much in a six-minute appointment.
But more than these practical restrictions, there have also been structural changes in the ethical presumptions made by the medical profession, which make reassurance an increasingly outré business. A modern medic would struggle to find a form of words that would permit her to hand out a placebo, for example, and this is because of the difficulty in resolving two very different ethical principles: one is our obligation to heal our patients as effectively as we can; the other is our obligation not to tell them lies. In many cases the prohibition on reassurance and smoothing over worrying facts has been formalised, as the doctor and philosopher Raymond Tallis recently wrote, beyond what might be considered proportionate: ‘The drive to keep patients fully informed has led to exponential increases in the formal requirements for consent that only serve to confuse and frighten patients while delaying their access to needed medical attention.’
I don’t want to suggest for one moment that historically this was the wrong call. Surveys show that patients want their doctors to tell them the truth about diagnoses and treatments (although you have to take this kind of data with a pinch of salt, because surveys also say that doctors are the most trusted of all public figures, and journalists are the least trusted, but that doesn’t seem to be the lesson from the media’s MMR hoax).
What is odd, perhaps, is how the primacy of patient autonomy and informed consent over efficacy—which is what we’re talking about here—was presumed, but not actively discussed within the medical profession. Although the authoritative and paternalistic reassurance of the Victorian doctor who ‘blinds with science’ is a thing of the past in medicine, the success of the alternative therapy movement—whose practitioners mislead, mystify and blind their patients with sciencey-sounding ‘authoritative’ explanations, like the most patronising Victorian doctor imaginable—suggests that there may still be a market for that kind of approach.
About a hundred years ago, these ethical issues were carefully documented by a thoughtful native Canadian Indian called Quesalid. Quesalid was a sceptic: he thought shamanism was bunk, that it only worked through belief, and he went undercover to investigate this idea. He found a shaman who was willing to take him on, and learned all the tricks of the trade, including the classic performance piece where the healer hides a tuft of down in the corner of his mouth, and then, sucking and heaving, right at the peak of his healing ritual, brings it up, covered in blood from where he has discreetly bitten his lip, and solemnly presents it to the onlookers as a pathological specimen, extracted from the body of the afflicted patient.
Quesalid had proof of the fakery, he knew the trick as an insider, and was all set to expose those who carried it out; but as part of his training he had to do a bit of clinical work, and he was summoned by a family ‘who had dreamed of him as their saviour’ to see a patient in distress. He did the trick with the tuft, and was appalled, humbled and amazed to find that his patient got better.