There are a number of problems with this review: primarily that the results do not support the conclusions drawn from them, and also that the authors did not account for publication bias. I have written a blogpost describing my concerns at http://dianthus.co.uk/cochrane-review-on-industry-sponsorship. [This is pasted below]
Many papers have been published that compare clinical trial publications sponsored by the pharmaceutical industry with those not sponsored by industry. Last week, the Cochrane Collaboration published a systematic review by Lundh et al of those papers. The stated objectives of the review were to investigate whether industry sponsored studies have more favourable outcomes and differ in risk of bias, compared with studies having other sources of sponsorship.
There are some rather extraordinary things about this review.
The most extraordinary thing is a high level of discordance between the results and the conclusions. This is a little odd, since one of the outcomes they investigated was whether industry studies were more prone to discordance between results and conclusions, so you’d have thought Lundh et al would understand the importance of making them match.
But nonetheless, they don’t seem to. The conclusions of the review state “our analyses suggest the existence of an industry bias”. In their results section, however, they investigated various items known to be associated with bias, such as randomisation and blinding. They found that industry studies had a lower risk of bias than non-industry studies. I’ve written before about bias in papers about bias, and this seems to be another classic example of the genre. This is disappointing in a Cochrane review. Cochrane reviews are supposed to be among the highest-quality sources of evidence available, but this one falls a long way short.
It appears that they drew this conclusion because they found that industry sponsored trials were more likely to produce results or conclusions favourable to the sponsor’s product than independent trials (although that finding may not be as sound as they think it is, for reasons I’ll explain below). They therefore concluded that industry-sponsored trials must be biased, because they’re systematically different from independent trials. That does not make logical sense. Three explanations are possible: industry trials are biased towards favourable results, independent trials are biased towards the null, or the two types of trial investigate systematically different questions. Any of those is possible, and they have not presented any evidence that allows us to distinguish between the possibilities. However, given that where they did measure bias, they found less bias in industry studies, the conclusion that the bias must be a result of industry sponsorship seems hard to support.
Another of Lundh et al’s conclusions was that industry-sponsored trials are more likely to have discordant results and conclusions, for example claiming in the conclusions that a result was favourable when the results don’t support that claim (I know, it’s hard to imagine anyone could do that, isn’t it?) This is stated as fact, despite the little drawback that their meta-analysis estimate of the difference between industry and non-industry studies did not reach statistical significance. Also, there is one study I happen to be aware of that seems relevant to this analysis (Boutron et al 2010), as it investigated “spin” in conclusions, which seems to me to be exactly the same concept as discordance between results and conclusions. That study, for reasons not explained in the paper, was not included in their analysis. It can’t be because they didn’t know about it, as they cited it in their discussion (and, incidentally, misrepresented its results when they did so). Boutron et al found no significant difference between industry and non-industry studies in the prevalence of spin in conclusions, so if it had been included it could have weakened their results further.
I mentioned above that I was not totally convinced by their conclusion that industry-sponsored studies are more likely to have results favourable to the sponsor’s products than independent studies. Oddly enough, until I read this systematic review, I had taken that assertion as established fact. I have seen various papers that found that result, and had felt that the finding was robust. However, I now have my doubts.
One of the big challenges for any systematic review is the problem of publication bias. This is the tendency of positive studies to be published and negative studies to be quietly forgotten. This is a big problem, because if you look at all published studies in a systematic review, you are actually looking at a biased subset of studies, usually those with positive results.
A good systematic reviewer will investigate the extent to which this is a problem. The Cochrane handbook, the instruction manual for Cochrane systematic reviews, recommends that reviewers investigate publication bias by means of funnel plots or statistical tests. The idea behind such methods is that large studies are likely to be published whatever the results, as so much has been invested in them that the final stage of publication is unlikely to be overlooked, whereas small studies may well be unpublished if they are negative, but are more likely to be published if they are positive. If you see a correlation between study size and effect size, with smaller studies showing larger effects than larger studies, that is strongly suggestive of publication bias. For those who are not familiar with these concepts, Wikipedia has a good explanation.
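The mechanism behind funnel plot asymmetry is easy to demonstrate with a small simulation. The sketch below (Python, with wholly invented numbers: a true effect of zero, and a hypothetical publication rule under which small studies appear only when nominally significant) shows how the surviving small studies exaggerate the effect while the large studies do not:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500
true_effect = 0.0                       # no real effect (log RR = 0)
se = rng.uniform(0.1, 0.8, size=n)      # a mix of large and small studies
observed = rng.normal(true_effect, se)  # each study's estimate

# Hypothetical publication rule: large studies always appear; small
# studies appear only if nominally "positive" (z > 1.64).
large = se < 0.25
published = large | (observed / se > 1.64)

mean_large = observed[large & published].mean()
mean_small = observed[~large & published].mean()
print(f"large studies: {mean_large:+.2f}, small studies: {mean_small:+.2f}")
```

Among the published studies, the small ones show a substantial average effect even though the true effect is zero, which is exactly the small-studies-show-larger-effects pattern a funnel plot is designed to expose.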
However, despite the recommendation in the Cochrane handbook that reviewers should investigate publication bias, Lundh et al seem to have largely overlooked it. They mention a small number of studies published only as conference abstracts or letters, and found they provided similar results to the main analysis, and concluded that publication bias was therefore unlikely. This is a very superficial examination of publication bias that falls well short of what should happen in a Cochrane review.
Fortunately, they present their data in full, so it is easy enough for anyone reading the review to do their own test for publication bias. So I did this for their primary analysis: comparing industry and non-industry studies for their probability of producing favourable results. The results are strongly indicative of publication bias. This is what the funnel plot looks like [available from http://dianthus.co.uk/cochrane-review-on-industry-sponsorship].
As you can see, there is striking asymmetry here, with most small studies (those towards the bottom: the y scale is actually the reciprocal of the standard error of the relative risk, but this is strongly related to study size) having much larger effects than larger studies, and no small studies showing smaller effects. This is very strongly suggestive of publication bias. I also did a statistical test for publication bias (the Egger test, one of those recommended in the Cochrane handbook), which gave a regression coefficient for effect size on standard error of 2.3, statistically significant at P = 0.026.
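For readers who want to try this themselves, the Egger test is usually expressed as a regression of the standardized effect (log RR divided by its standard error) on precision (the reciprocal of the standard error); the intercept of that regression estimates small-study asymmetry and is zero in the absence of bias. This is a minimal sketch with made-up illustrative numbers, not the review’s actual data:

```python
import numpy as np

def egger_intercept(log_rr, se):
    """Egger's regression test for funnel plot asymmetry.

    Regresses the standardized effect (log RR / SE) on precision
    (1 / SE); the intercept estimates small-study asymmetry and is
    zero, on average, when there is no bias."""
    log_rr = np.asarray(log_rr, dtype=float)
    se = np.asarray(se, dtype=float)
    y = log_rr / se                                  # standardized effect
    X = np.column_stack([np.ones_like(se), 1.0 / se])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]                                   # asymmetry estimate

# Invented data: in the "biased" set, effects are roughly proportional
# to SE (an asymmetric funnel); in the "unbiased" set, all studies
# find about the same effect regardless of size.
se = np.array([0.1, 0.2, 0.3, 0.5, 0.8])
biased = 1.2 * se + np.array([0.01, -0.01, 0.02, -0.02, 0.0])
unbiased = np.array([0.20, 0.21, 0.19, 0.22, 0.18])

print(egger_intercept(biased, se))    # clearly positive
print(egger_intercept(unbiased, se))  # near zero
```

The intercept comes out clearly positive for the asymmetric data and near zero for the symmetric data, which is the qualitative pattern the test is looking for.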
So there is clear evidence that these results were subject to publication bias. It is therefore highly likely that their estimate of the difference between industry and non-industry studies was overstated. Maybe there isn’t really a difference at all. It’s very hard to tell, when the literature is not complete.
I could go on, as there are other flaws in the paper, but I think that’s long enough for one blog post. So to sum up, this Cochrane review had methods that fell short of what is expected for Cochrane reviews. Lundh et al found that industry sponsored studies, when assessed using well established measures of bias, were less likely to be biased than independent studies, and yet drew the opposite conclusion, based on nothing but speculation. This, in a study which investigated discordance between results and conclusions, is bizarre. Their main finding, that industry sponsored studies were more likely to generate favourable results than independent studies, appears to have been affected by publication bias, which makes it considerably less reliable than Lundh et al claim.
I am normally a great fan of the Cochrane Collaboration, which usually produces some of the best quality syntheses of clinical evidence that you will ever find. To see such a biased review from them is deeply disappointing.
I have modified the conflict of interest statement below to declare my interests: I run a commercial company that provides consultancy services to clinical researchers from the pharmaceutical industry and to academic researchers, but more often to the former.
We thank Adam Jacobs for his interest in our paper and would like to respond. The comment deals with three issues: the exclusion of the Boutron paper, the use of the term industry bias, and possible publication bias.
First, Jacobs believes we should have included the Boutron paper (1). This paper was identified in our search, but excluded from our review because it did not meet the inclusion criteria pre-specified in our protocol. The Boutron paper deals with spin in the conclusions of trials with statistically nonsignificant results. Spin can include focusing on subgroup results or on secondary outcomes in the discussion. This is very different from the concordance between results and conclusions investigated in our review (i.e. whether the conclusions agreed with the study results). Also, we did not misrepresent the Boutron paper as Jacobs suggests. Jacobs states that, “Boutron et al found no significant difference between industry and non-industry studies in the prevalence of spin in conclusions.” Boutron does not make this claim. In fact, Boutron et al. write “Our results are consistent with those of other related studies showing a positive relation between financial ties and favourable conclusions stated in trial reports.”
Second, Jacobs misstates our results by suggesting that we found that industry studies had a lower risk of bias than non-industry studies. We did not find a difference, except in relation to blinding, which we discuss further in our review. Furthermore, Jacobs actually seems to agree with our conclusion that the industry bias may be due to factors other than the traditional risks of bias that are usually assessed, e.g., randomization and blinding. That is just our point. For example, he states that the industry studies may “investigate systematically different questions”. We mention this possibility in our Discussion section. Based on the available external evidence (2,3), we believe that a plausible explanation for the more favorable results and conclusions in industry studies is that industry studies may be biased in design, conduct, analysis and reporting.
Third, Jacobs suggests that our results may be due to publication bias (i.e. that papers finding no difference in favorable outcomes between industry sponsored versus non-industry sponsored studies are not published) and criticizes us for not assessing publication bias using a funnel plot. However, as the Cochrane Handbook states (4) (section 10.4), there may be various reasons for funnel plot asymmetry, publication bias being one of them. Other reasons are true heterogeneity and risk of bias in the primary material being used in the meta-analysis. The included papers investigated many different drugs and devices and it is likely that any ‘industry bias’ may be different between the various types of treatments. Due to the anticipated heterogeneity of these papers, we did not assess publication bias using a funnel plot, as such a plot would be difficult to interpret as noted in the Cochrane Handbook. Instead, we included conference abstracts and letters in an additional analysis and found it had no impact.
To argue his case about publication bias, Jacobs presents a funnel plot of analysis 1.1 on the association between sponsorship and favorable results. In this funnel plot, four small studies are outliers to the right, and these are taken to provide the evidence of bias. However, three of these studies were related to specific drugs (glucosamine, nicotine replacement therapy and antipsychotics) and the fourth study dealt with psychiatric research, a field where biased industry research has been well documented (5,6). So the study domain may explain the difference. Also, three of the four studies had a high risk of bias, which could also explain the findings; if we restrict the analysis to studies at low risk of bias (analysis 7.1), the results are less heterogeneous. Last, even if we assume publication bias to be present, excluding the four papers from our analysis (the ones that supposedly provide the evidence for publication bias) has little impact on our results (RR 1.27 (95% CI: 1.16 to 1.39) instead of 1.32 (95% CI: 1.21 to 1.44)). A similar plot for conclusions (analysis 3.1) has two studies as outliers, and excluding these studies from the analysis likewise has little impact (RR 1.27 (95% CI: 1.16 to 1.38) instead of RR 1.31 (95% CI: 1.20 to 1.44)). We therefore completely disagree with Jacobs’s statement that “there is clear evidence that these results were subject to publication bias. It is therefore highly likely that their estimate of the difference between industry and non-industry studies was overstated. Maybe there isn’t really a difference at all”. On the contrary, our finding that industry sponsorship leads to more favorable results and conclusions is very robust.
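The arithmetic behind such a sensitivity check can be illustrated with a fixed-effect (inverse-variance) pooling of log relative risks: each study is weighted by 1/SE², so a small outlying study carries little weight and its removal shifts the pooled estimate only modestly. The sketch below uses invented numbers, not the review’s actual data:

```python
import math

def pooled_rr(rrs, ses):
    """Fixed-effect (inverse-variance) pooled relative risk.

    `rrs` are per-study relative risks, `ses` the standard errors of
    their logs; returns (pooled RR, lower 95% CI, upper 95% CI)."""
    w = [1.0 / s**2 for s in ses]
    pooled_log = sum(wi * math.log(r) for wi, r in zip(w, rrs)) / sum(w)
    half_width = 1.96 / math.sqrt(sum(w))
    return (math.exp(pooled_log),
            math.exp(pooled_log - half_width),
            math.exp(pooled_log + half_width))

# Invented example: dropping a small outlying study (RR 2.5, SE 0.60)
# nudges the pooled estimate only slightly, because its weight is low.
full = pooled_rr([1.3, 1.3, 1.3, 2.5], [0.10, 0.15, 0.20, 0.60])
trimmed = pooled_rr([1.3, 1.3, 1.3], [0.10, 0.15, 0.20])
print(full[0], trimmed[0])
```

In this toy example the pooled RR moves from about 1.31 to 1.30 when the outlier is dropped, the same order of change the reply reports for its own exclusions.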
In sum, none of the comments provided by Jacobs have any impact on our results.