«Placebo: Mind over Matter in Modern Medicine», page 2
PLACEBOS BECOME RESPECTABLE
In 1955, Beecher summed up the new view of placebos in an influential article published in the Journal of the American Medical Association.5 Entitled ‘The Powerful Placebo’, the article claimed that placebos could ‘produce gross physical change’, including ‘objective changes at the end organ which may exceed those attributable to potent pharmacological action’. Placebos, in other words, had real effects on real bodies. No longer were sugar pills to be dismissed as a harmless but ineffectual sop given to please hypochondriacs and desperate people. The placebo effect was born.
Beecher’s article has been enormously influential. Fifty years after publication, it is still regularly cited in almost every scientific paper on the placebo effect, and even those that do not cite it directly usually repeat its claims without acknowledgement, or refer to papers that do cite it. It is unlikely, however, that all of those who refer to it have bothered to read it; if they had done, they might not be quite so enthusiastic. For one thing, the range of conditions that Beecher claims can be affected by placebos is not very extensive. Most of the studies he reports concern the effects of placebos on various forms of pain – postoperative pain, pain from angina and headache. The only other medical problems mentioned are cough, common cold, seasickness and anxiety, and for each of these Beecher mentions only one study. More importantly, none of the studies he refers to provides any real evidence at all for the existence of a placebo effect.
The reason why these early studies provide no evidence of a placebo effect is that, with one exception, all failed to include a control group who received no treatment. In any group of people suffering from a particular condition, some will get better without any medical help. To provide convincing evidence of a placebo effect, you would have to show that those receiving the placebo did significantly better than those who received no treatment at all. Yet almost all the studies cited by Beecher merely show that some of those who received a placebo – a third, on average – felt better afterwards. Without a no-treatment control group to compare it with, this figure is meaningless. The improvement shown by those who received the placebo might well have occurred anyway, even if they had received no placebo. The one study that did include a no-treatment group found no difference between it and the placebo group.
The authors of the original studies did not think they had found any evidence of placebo effects. A few of them even said so explicitly. They realised that the improvement shown by those receiving placebos could easily be accounted for by the natural course of the disease (spontaneous remission, random fluctuation of symptoms and so on) and various other factors such as additional treatments. In one of the studies, for example, 35 per cent of the patients with mild colds felt better within two days of taking a placebo (or six days after the start of their cold). The authors of the study pointed out that many patients with a mild cold get better within six days even if they receive no medical treatment at all. Beecher, however, ignored this remark, and attributed all the improvement shown by these patients to the fact that they had taken a placebo.
Besides spontaneous remission, other factors also play a part in the improvement shown by patients receiving placebos. In one study cited by Beecher, for example, patients with a variety of conditions were treated for anxiety and tension. After four months, during which they were given two-week courses of an anti-anxiety drug called mephenesin and placebo alternately, between 20 and 30 per cent of them improved. However, another 10–20 per cent of patients deteriorated. The authors of the study subtracted the deterioration rate from the improvement rate and reported, correctly, a net improvement of around 10 per cent. Considering the rather long observation period of sixteen weeks, this is quite a low figure, especially given that the patients had taken an active drug for eight of those weeks. Some of them may also have received other medical support outside the context of the study. Yet Beecher failed to take the deterioration rate into account, and claimed that the study showed a placebo effect of 30 per cent.
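The arithmetic at stake here, and the step Beecher skipped, can be sketched in a few lines of Python. The figures are simply the midpoints of the ranges reported above, not raw trial data:

```python
# Illustration of the net-improvement calculation in the mephenesin study.
# The percentages are midpoints of the ranges quoted in the text
# (20-30 per cent improved, 10-20 per cent deteriorated), not real data.

def net_improvement(improved_pct, deteriorated_pct):
    """Net improvement rate: those who got better minus those who got worse."""
    return improved_pct - deteriorated_pct

improved = 25.0      # midpoint of the 20-30 per cent who improved
deteriorated = 15.0  # midpoint of the 10-20 per cent who deteriorated

print(net_improvement(improved, deteriorated))  # prints 10.0
```

Reporting only the improvers, as Beecher did, is equivalent to setting the deterioration term to zero, which is how a net figure of about 10 per cent became a claimed placebo effect of 30 per cent.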
In fact, Beecher misquoted ten of the fifteen trials he cited – including one which he had co-authored himself.6 His cavalier attitude in reporting these studies is paradoxical, since his underlying objective in marshalling evidence for the placebo effect was to persuade medical researchers to be more rigorous in their approach to evaluating new treatments. Before World War II, the evaluation of new therapies was largely determined by the personal judgement of distinguished doctors. Beecher was a leading figure in the movement to reform this long tradition. Along with Gold and others, he argued vigorously that medical treatments could best be tested by a new method: the randomised, placebo-controlled clinical trial.
THE LONG BIRTH OF THE CLINICAL TRIAL
As to different methods of treatment, it is possible for us to assure ourselves of the superiority of one or another … by enquiring if the greater number of individuals have been cured by one means than another. Here it is necessary to count. And it is, in great part at least, because hitherto this method has not at all, or rarely been employed, that the science of therapeutics is so uncertain.
PIERRE LOUIS, Essay on Clinical Instruction (1834)
In some ways, the clinical trial was not new. Something very similar is described in as venerable a text as the Old Testament. The first chapter of the Book of Daniel describes how Nebuchadnezzar, King of Babylon, offered his own food to some of the most noble Israelites he had taken prisoner after capturing the city of Jerusalem. Daniel refused to eat the foreign food, since it did not conform to the Jewish dietary laws. Nebuchadnezzar’s chief eunuch was sympathetic, but warned Daniel that his own head would be in danger if the King saw him looking thinner in the face than the other Israelites. At this, Daniel turned to the guard and made a request: ‘Please allow your servants a ten-day trial, during which we are given only vegetables to eat and water to drink. You can then compare our looks with those of the boys who eat the King’s food.’ The guard agreed, and after ten days Daniel, and the friends who had accompanied him on his vegetarian diet, looked in better shape than those who had eaten at the royal table. This is not exactly a clinical trial – for one thing, it concerns a dietary regime rather than a medical treatment – but the basic idea of comparing an experimental group with a control group is there.
Historians have unearthed various other ancient progenitors of the modern clinical trial. In the thirteenth century, Frederick II (1194–1250), Holy Roman Emperor and King of Sicily, is reported to have studied the effects of exercise on digestion by giving identical meals to two knights and then sending one out hunting while ordering the other to bed. After several hours, he killed both and examined the contents of their alimentary canals; digestion had, apparently, proceeded further in the stomach of the sleeping knight.7 A century later, Petrarch reported, in a letter to Boccaccio, a remark by a physician that explicitly recommended experimental studies of therapeutic methods by comparative means.8 But, like the story of Daniel, these early gestures toward comparative studies lack one of the most distinctive features of the modern clinical trial – a formal mathematical treatment. This had to wait until the birth of statistics in the seventeenth century.
Some philosophers and historians of science have argued that the development of statistics and probability theory in the eighteenth and nineteenth centuries constituted a revolution no less dramatic and influential than the ‘scientific revolution’ of the seventeenth century.9 In reality the so-called ‘probabilistic revolution’ was a pretty slow affair, more akin to the stately orbit of a celestial body than to a political upheaval. Its impact on medical research was positively sluggish. Statistical methods were not explicitly used to investigate a therapeutic intervention until the 1720s, when the English physician James Jurin showed that smallpox inoculation was a safe procedure by comparing the mortality of inoculated people with the death rates of those who caught smallpox naturally. Even then, the new methods did not meet with much respect; Jurin’s findings were widely ignored, and smallpox inoculation remained illegal in France until 1769.
A hundred years later, the same mistrust of statistical methods led Viennese physicians to reject the recommendations of Ignaz Semmelweis on the need for better hygiene by doctors. In 1847 Semmelweis noticed that there were marked differences between the death rates on two wards in the obstetric hospital in Vienna. Mortality was much higher on the ward run by physicians and medical students than on the ward run by student midwives. Moreover, the difference between the two wards had only begun in 1841, when courses in pathology were included in medical training. Semmelweis guessed that physicians and students were coming to the obstetric ward with particles of corpses from the dissection room still clinging to their fingers. He made them wash more thoroughly with chlorinated lime (which, by luck, just happened to be a disinfectant), and the death rate on the medical ward immediately returned to the same level as on that run by the midwives. Despite this startling evidence, the antiseptic measures proposed by Semmelweis were not embraced by his colleagues for several decades, by which time Semmelweis had, quite understandably, gone insane.
British doctors were, in general, more accepting of statistical research than were their colleagues on the Continent. In the eighteenth century, a few physicians on board British naval vessels employed comparative methods to study the effects of various treatments for scurvy and fever. James Lind noted that sailors on his ship who had scurvy recovered when given citrus fruits, and the navy responded by issuing lemons (and later limes) to all sailors – which is, of course, the origin of the epithet ‘limey’. But the British were not so open-minded about all such statistical research. In the late 1860s, Joseph Lister published a series of articles showing that the use of antiseptics at the Glasgow Royal Infirmary had reduced the mortality from amputations, but his findings were not universally accepted by the British medical establishment until the end of the century.
By the first half of the twentieth century, there was a growing acceptance of comparative methods in medical research among doctors in Europe and America, but even then it was a slow process. The term ‘clinical trial’ does not appear in the medical literature until the early 1930s, and when Linford Rees presented the results of a trial comparing electro-convulsive therapy (ECT) with insulin coma therapy to a meeting of the Royal Medico-Psychological Association in 1949, his research methodology caused as much of a stir as his results.10 Very few of the psychiatrists at that meeting could have guessed that, within half a century, the randomised clinical trial would have become the standard tool for medical research.
THE PLACEBO CONTROL
The pre-twentieth-century progenitors of the clinical trial established the basic principle of comparing various groups of patients undergoing different treatment regimes. The twentieth century added two more refinements: randomisation and the placebo control. Randomisation simply means that patients are assigned to the various groups on a random basis. The placebo control means that the control group is treated with a fake version of the experimental therapy – one which, ideally, should be identical in every way to the treatment being tested with the exception of the crucial component. With one or two notable exceptions, the few clinical trials that were carried out before World War II did not include a placebo control group. Rather, they compared one treatment with another, or with no treatment at all. Placebos were used as controls in studies of the effects of substances such as caffeine on healthy volunteers, but the idea of deliberately withholding a treatment believed to be active from someone who was ill and in danger of death was felt by most doctors to be unethical.
Beecher played a major role in persuading doctors that placebo controls were both ethical and scientifically necessary. He countered the ethical objections by arguing that the administration of a placebo was far from ‘doing nothing’. If placebos could provide at least half as much relief as a real drug, and often even more, then the patients in the control group would not be that much worse off than those in the experimental arm. Similar considerations were used to support the claim that placebo-controlled studies were the most sound from a scientific point of view. After all, if a therapy was simply shown to be better than no treatment at all, how could doctors be sure that the effect was not due to the placebo response? And if one therapy were compared to another and found to be equally effective, how could scientists be sure that both were not placebos? By the end of the 1950s, the work by Beecher, Gold and others had convinced most medical researchers that only by comparing a therapy with a placebo could they discover its specific effect.
Beecher argued that all kinds of treatment, even active drugs and invasive surgery, produced powerful placebo effects in addition to their specific effects. Therefore, to determine the specific effect of a treatment, medical researchers would have to subtract the placebo effect from the total therapeutic effect of the treatment being tested. If they simply compared the experimental treatment with a no-treatment control group, they would overestimate the specific effect by confounding it with the placebo effect. To support this argument, Beecher needed to provide evidence showing that the placebo effect was large enough to worry about. This was the whole point of the 1955 article whose many flaws we have briefly glimpsed. Without misquotation and systematic misrepresentation, the original studies that Beecher cited would not have provided the evidence he needed.
At the time, nobody noticed the flaws in Beecher’s article. His evidence was cited again and again in support of the placebo-controlled clinical trial, which continued its rise to dominance. Crucial in this process was the decision in the 1970s by the US Food and Drug Administration (FDA) that new drugs be tested by clinical trials before they could be licensed. As one expert on the history of psychiatry has remarked, the FDA occupies something of a magisterial role in global medicine.11 It has no legal powers to control the health policies of nations other than the United States, yet its influence is enormous. The decision of the FDA to require new drugs to prove their mettle in randomised, placebo-controlled clinical trials paved the way for similar policies in other countries. During the 1980s, scientific journals followed suit by requiring that claims for the efficacy of new drugs be backed up by evidence from clinical trials. Finally, the 1990s saw the emergence of a movement known as ‘evidence-based medicine’ whose proponents urged GPs to make use of the evidence from clinical trials in their everyday clinical practice.12
A FLAW IN THE METHOD
A physician who tries a remedy and cures his patients, is inclined to believe that the cure is due to his treatment. But the first thing to ask them is whether they have tried doing nothing, i.e. not treating other patients; for how can they otherwise know whether the remedy or nature cured them?
CLAUDE BERNARD, An Introduction to the Study of Experimental Medicine (1865)
To the proponents of evidence-based medicine, the tortured history of the clinical trial is an epic of an almost biblical nature. Its eventual acceptance, in the late twentieth century, as the gold standard of medical research is the triumph of rational medicine over quackery, dimly foreseen by such early prophets as Jurin, Semmelweis and Lister, who were, in their day, lone voices crying in the wilderness.
The truth is not quite so simple. The rise to dominance of the clinical trial has not been an unambiguous victory for rational medicine. Beecher saw it as a way of overcoming centuries of blind appeal to authority and intuition. Ironically, however, the final acceptance of placebo controls owed more to Beecher’s own authority and intuition than to proper scientific evidence. Beecher was a respected researcher, so nobody paused to question the accuracy of his 1955 paper. Nobody suspected that he had reshaped the data so that they would support his prior intuitions about the power of the placebo effect.
The result is that, fifty years later, many medical researchers accept without question that placebo effects are ubiquitous and powerful. Take this dramatic passage by two experts on alternative medicine, Dr Robert Buckman and Karl Sabbagh, for example:
… placebos are extraordinary drugs. They seem to have some effect on almost every symptom known to mankind, and work in at least a third of patients (usually) and sometimes in up to 60 per cent. They have no serious side-effects and cannot be given in overdose. In short, they hold the prize for the most adaptable, protean, effective, safe and cheap drugs in the world’s pharmacopoeia. Not only that, but they’ve been around for centuries, so even their pedigree is impeccable.13
A respected biologist states that ‘placebo medical procedures have proved to be effective against a wide range of medical problems including chronic pain, high blood pressure, angina, depression, schizophrenia and even cancer’.14 A leading authority on alternative medicine goes even further, claiming that ‘the range of susceptible conditions appears to be limitless’.15
In fact, almost all the supposed ‘demonstrations’ of the placebo effect on which these hyperbolic claims are based turn out to embody the same flaws that bedevil Beecher’s paper. Whenever people in the placebo arm of a clinical trial get better, researchers assume that this improvement is due entirely to the placebo, without considering any of the other possible causes – spontaneous remission, natural fluctuation of symptoms, other treatments, and so on. If this kind of sloppy thinking were applied to the testing of real drugs, it would be spotted immediately. When it comes to testing placebos, however, rigour goes out of the window. There seems to be a clear double standard in medical research.
To be consistent, we should apply the same rigorous scientific principles to the study of placebos that we apply to the evaluation of real treatments. No scientist would accept claims advanced on behalf of a new drug without evidence that people who are treated with it are at least more likely to get better than those who remain untreated. It should be no different when it comes to the claims made on behalf of placebos. In other words, to calculate the true placebo effect, the rate of spontaneous remission shown by those receiving no treatment at all must be subtracted from the observed placebo effect. Without a no-treatment arm, there is no way to distinguish the effects of the placebo from the natural course of the disease and various other confounding variables, such as other treatments taken outside the context of the trial.
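The subtraction argued for here can be put in a toy calculation. The sketch below is in Python, and the trial figures are invented purely for illustration:

```python
# Sketch of the correction the text calls for: the 'true' placebo effect
# is the improvement rate in the placebo arm minus the improvement rate
# in an untreated arm. All numbers below are hypothetical.

def true_placebo_effect(placebo_improved, placebo_n,
                        untreated_improved, untreated_n):
    """Difference in improvement rates between placebo and no-treatment arms."""
    return placebo_improved / placebo_n - untreated_improved / untreated_n

# Hypothetical trial: 35 of 100 improve on placebo,
# but 30 of 100 improve with no treatment at all.
effect = true_placebo_effect(35, 100, 30, 100)
print(f"{effect:.2f}")  # prints 0.05
```

On these invented figures, a naive reading would credit the placebo with a 35 per cent response, when all but 5 percentage points of that improvement would have occurred anyway. Without the untreated arm, the two quantities cannot be separated.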
In fact, no-treatment groups are rarely included in clinical trials today. One survey of the medical literature between 1986 and 1994 found that fewer than 4 per cent of clinical trials and meta-analyses published during that period included both placebo and untreated groups.16 The result is that, despite half a century of placebo-controlled clinical trials, we have surprisingly little solid data about the extent of the placebo response. The lack of such data has even led a few sceptics to argue that the placebo response does not really exist. They claim that the improvement shown by patients receiving placebos in clinical trials is due entirely to spontaneous remission and random fluctuations in the course of the disease.17
This is going too far. Solid evidence in favour of the placebo response is hard to find, but it does exist. Some studies do include no-treatment control groups, and some of these show that patients receiving placebos do better than those who receive nothing. But such studies are few and far between, and they do not always ensure that the only difference between the placebo group and the no-treatment group is the placebo itself. The patients in the placebo group may, for example, receive all sorts of extra attention that those in the no-treatment group do not. As a result, we cannot be sure that any improvement they show relative to the no-treatment group is due to the placebo itself rather than to those other things.