To the Editor:

We read with interest the recent article by Vlad et al (1). The authors ask an interesting question, and they use a heterogeneity statistic to answer it with data from 15 randomized trials of 2 glucosamine salts at different doses and by different routes of administration, produced by 7 manufacturers, lasting between 4 weeks and 3 years, with both intent-to-treat and completer analyses, and with an unknown number of different outcomes. Heterogeneity tests have their own problems, being inadequate measures of heterogeneity or homogeneity (2), and funnel plots cannot detect publication bias (3). Confusion is hardly unexpected in these circumstances, but the strong message was that effect sizes were consistently higher in trials with industry involvement.

Of the 4 trials with no industry involvement, 2 used glucosamine hydrochloride, which has shown no benefit in any trial. One trial using glucosamine sulfate lasted only 8 weeks, a duration arguably inadequate for seeing the full benefits of glucosamine. The remaining trial of glucosamine sulfate without industry involvement used an enriched-enrollment, randomized-withdrawal design and showed no difference between glucosamine and placebo (4). Although such designs are of real methodologic interest, they are infrequently used, and their interpretation is at present uncertain. For example, the trial by Cibere et al (4) was complicated by its use of telephone recruitment, which has been implicated in other pain trials with unexpectedly negative results. To put it mildly, it would seem imprudent to use these 4 trials to judge the other 11, or as the basis for any conclusion about the presence or absence of industry bias. No sensible conclusions can be drawn.

Both the article by Vlad et al (1) and the otherwise important editorial by Reginster (5) fall into the trap of accepting that industry bias has been demonstrated previously. Our group has argued that it has not (6). As Hill (7) stated, "The essence of the (statistical) method lies in the determination that we are really comparing like with like, and that we have not overlooked a relevant factor which is present in Group A and absent from Group B." A valid comparison requires not only comparing like with like, but also data that fulfill criteria of quality, validity, and size (8). Only one study, in acute pain and migraine, has attempted to do both, proposing a way of detecting industry bias by asking whether industry clinical trials gave results different from those of nonindustry trials (6). That study found no evidence of industry bias in the trials themselves. Industry-sponsored outcome trials of statins all showed the same risk reduction as the large, independent Heart Protection Study (9). Nor was any significant difference found in the effects of 5 mg/day finasteride for benign prostatic hyperplasia between a systematic review in which the majority of trials were industry funded (10) and the Medical Therapy of Prostatic Symptoms trial (11), which was largely funded by the National Institutes of Health.

The claim that industry bias in clinical trials has been proven is spin, not evidence. It would be more useful to work transparently with the pharmaceutical industry to make sensible use of clinical trial information, most of which is in its hands, to define new and better outcomes from clinical trials and to make future clinical trials better.


Dr. Moore has served as a consultant to, and has received speaking fees and honoraria from, Pfizer, Menarini, and Futura (less than $10,000 each) and Merck, Sharp, and Dohme (more than $10,000). Dr. McQuay has served as a consultant to, and has received speaking fees and honoraria from, AstraZeneca Pharmaceuticals, Blue Rubicon, Pfizer, and Pierre Fabre (less than $10,000 each).

Andrew Moore DSc*
Sheena Derry MA*
Henry McQuay DM*

*University of Oxford, Oxford, UK