A dramatic increase in pharmaceutical industry funding and support of biomedical research has occurred in the past few decades (1, 2). This has led to strong concerns regarding the possible inappropriate influence of industry funding on biomedical research (3). A preponderance of evidence shows that industry-funded research is associated with an increased likelihood of pro-industry results and conclusions (4–11).
Randomized controlled trials (RCTs) are considered the “gold standard” means of assessing healthcare interventions. They are designed to eliminate bias by randomly distributing known and unknown confounding factors. RCTs must be methodologically sound to eliminate sources of bias that may appear at various stages. Bias causes results to differ systematically from the truth through a combination of various factors, including study design, data analysis, and presentation (12). Substantial evidence shows that the methodologic quality of RCTs affects estimates of intervention efficacy (13–15). Limited data on the association of industry funding with the methodologic quality of RCTs show conflicting results, with some studies showing no difference (5, 16) and others showing either a trend toward higher quality (17, 18) or significantly higher quality of industry-funded RCTs compared with non–industry-funded RCTs (8, 19–21).
Rheumatoid arthritis (RA) is a chronic systemic autoimmune disease that chiefly manifests as destructive inflammatory arthritis and affects 0.5–1% of adults (22). The options for drug therapy for RA have improved remarkably over the past 15 years. In particular, the discovery and availability of biologic agents for RA treatment were facilitated by the funding of clinical trials by pharmaceutical companies. A study assessing the secular changes in the methodologic quality of published RCTs in rheumatology showed no differences between industry-funded and non–industry-funded RCTs (23). However, that study included both RCTs with a drug intervention and those with a nonpharmaceutical intervention, and only 102 (42.5%) of the 240 study RCTs assessed therapy for RA. No data are available regarding the influence of industry funding on the outcome of RCTs of drug therapy for RA. The objective of this study was to determine the association of industry funding with the characteristics, outcome, and reported methodologic quality of RCTs of drug therapy for RA.
Our study revealed no association between the source of funding of “published” RCTs of drug therapy for RA and the outcome of these trials. A trend toward publication bias was observed for the industry-funded RCTs. Industry-funded RCTs performed significantly better than non–industry-funded RCTs in terms of the use of certain methodologic quality measures.
Our finding that industry is the funding source for the majority of published as well as registered RCTs is consistent with the trend toward an increased proportion of biomedical research being industry funded (1, 2). The significant differences in RCT characteristics according to the funding source have important implications for the characteristics of RCTs conducted and thus for the evidence generated for the clinical care of patients with RA. Although industry-funded RCTs predominantly focused on assessment of the efficacy and safety of newer therapeutic drugs, the majority of non–industry-funded RCTs evaluated established drugs and different strategies for using these drugs to treat RA. Industry-funded RCTs evidently had greater financial resources, because they were more likely to be multicenter and multinational and to have higher subject enrollment. Despite this financial advantage, the duration of industry-funded RCTs was shorter than that of non–industry-funded RCTs. These differences clearly highlight the importance of both industry and nonprofit sources for funding of RCTs to generate efficacy and safety evidence for newer as well as established drugs and strategies for their use in clinical care.
Although a preponderance of the data in the medical literature shows that industry funding leads to higher chances of pro-industry results and conclusions (4–11), we did not observe any association between the funding source and the study outcome of “published” RCTs of RA drug therapies. Adjustment for differences in RCT characteristics and reported methodologic quality measures did not affect this finding. A total of 1,850 RCTs would be needed to show a significant association between funding source and study outcome with 80% power, assuming a relative frequency of explicitly stated industry and nonprofit funding (∼3:1) and percentages of trials with positive outcomes (75.5% and 68.8%) similar to those in our study. Therefore, among “published” RCTs of RA drug therapy, the differences in outcome between those with industry funding and those with non-industry funding are relatively small. One potential reason for the lack of association between funding source and study outcome could be publication bias. Indeed, we did observe that industry-funded “registered” RCTs at CTG showed a significant trend toward nonpublication. Because these “registered” RCTs had investigator-declared “completed” status, nonpublication of their results suggests an unfavorable outcome. We could not ascertain whether “published” RCTs more commonly presented outcomes that were favorable but different from the originally planned primary outcomes, thus inflating the frequency of positive “published” RCTs, because only a few “published” RCTs had actually registered at CTG. Further studies are needed to address the extent and implications of publication bias in RCTs of RA therapy.
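The sample-size estimate above can be reproduced with the standard normal-approximation formula for comparing two independent proportions under unequal (3:1) allocation. The following sketch assumes a two-sided α of 0.05 and no continuity correction; the authors do not state their exact method, so this is an illustration rather than their calculation:

```python
from math import ceil, sqrt
from statistics import NormalDist

def total_n_two_proportions(p1, p2, ratio=3.0, alpha=0.05, power=0.80):
    """Total N needed to detect p1 vs p2 with allocation n1 = ratio * n2
    (normal approximation, pooled variance under H0, no continuity correction)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (ratio * p1 + p2) / (ratio + 1)     # pooled proportion under H0
    q_bar = 1 - p_bar
    # n2 = size of the smaller arm (here, the nonprofit-funded trials)
    n2 = ((z_a * sqrt((1 + 1 / ratio) * p_bar * q_bar)
           + z_b * sqrt(p1 * (1 - p1) / ratio + p2 * (1 - p2))) ** 2
          / (p1 - p2) ** 2)
    return ceil(ratio * n2) + ceil(n2)

# Positive-outcome rates (75.5% vs 68.8%) and the ~3:1 industry:nonprofit
# funding split reported in the study
print(total_n_two_proportions(0.755, 0.688))   # ≈ 1,842, close to the ~1,850 quoted
```

This confirms that the roughly 7-percentage-point difference in positive-outcome rates would require on the order of 1,850 trials to detect, far more than any single specialty publishes.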
Nearly 75% of the “published” RCTs had a positive outcome. This could be partly attributable to publication bias and partly attributable to the difficulty in study outcome assignment due to the complex structure of study intervention arms. The majority of RCTs had >2 intervention arms. The experimental drug often showed positive results compared with only placebo and not the ACD, or only the combination of experimental drug and ACD had positive results compared with the experimental drug or ACD alone. Most published RCT reports lacked a clear description of the a priori intent of the RCT (superiority versus noninferiority for different intervention arms). Thus, in the absence of such guidance, a positive RCT outcome was assigned when any experimental drug intervention arm (alone or in combination with an ACD) showed a statistically significant result favoring the experimental intervention on the primary outcome. Finally, conducting RCTs with such a high frequency of positive outcomes raises ethical issues. An RCT should be conducted only if there is substantial uncertainty (equipoise) about the relative value of one treatment versus another (17). RCTs in which experimental intervention and control are thought to be nonequivalent based on the existing store of knowledge may cause unnecessary harm to study subjects and waste precious resources.
A study of 240 RCTs of rheumatic diseases showed no difference in any methodologic quality measure between those that were manufacturer supported and those that were not manufacturer supported (23). A more recent study of 64 RCTs for the treatment of systemic lupus erythematosus showed a trend toward better study quality in pharmaceutical company–supported RCTs (18). However, our study showed that industry funding was associated with better reporting of some key methodologic quality measures. There are several potential reasons for this finding.
First, the availability of greater financial resources to industry-funded RCT investigators may allow performance of more expensive measures such as double-blinding and more rigorous tracking and followup of study subjects.
Second, non–industry-funded RCTs studied strategies for RA drug therapy more often than did industry-funded RCTs (10 [25%] versus 4 [6%]), and double-blinding was considered impractical by the investigators for most such RCTs due to the complexity of study protocol requirements. Indeed, only 1 industry-funded and 1 non–industry-funded treatment-strategy RCT were conducted in a double-blind manner.
Third, it is conceivable that the mandates of regulatory organizations, such as the US Food and Drug Administration, for methodologically rigorous RCTs to generate efficacy and safety data for a new drug may also account for better quality of the industry-funded RCTs (33). Fourth, better reporting of methodologic aspects in the “published” RCTs may also reflect attempts to dispel notions of bias that tend to be associated with industry funding.
Finally, because we assessed the RCT methodologic quality using the published manuscript, we cannot be certain whether our findings represent incomplete reporting or inadequate performance of these measures. However, measures such as ITT analysis can be performed without additional financial burden and can be ascertained from the published report itself. Nonetheless, a lower proportion of non–industry-funded RCTs reported the performance of ITT analyses, suggesting that the funding source may be associated with real systematic differences in the performance of methodologic quality measures.
The overall reporting of most RCT methodologic quality measures, particularly for random sequence generation and allocation concealment, was suboptimal. Poor reporting/performance of RCT methodologic quality measures has been reported across multiple specialties, including rheumatology (19, 23, 34). Encouragingly, our study showed improvement in several quality measures, including randomization (35% versus 17.4%), allocation concealment (30.1% versus 19%), participant flow (77.7% versus 58.7%), and ITT analysis (64.1% versus 29.8%) when compared with 121 rheumatology RCTs published in the years 1997–1998 (23). However, only 38.8% of the trials published in 1997–1998 studied RA, and non–drug therapy RCTs were included in the referenced study. Hence, the above comparison may not represent true secular changes in the quality of reporting of RCTs of RA therapy.
The CONSORT statement was developed to promote standardized reporting of RCTs that would help readers assess their validity and interpret the results appropriately. The CONSORT statement was originally proposed in 1996, with subsequent revisions in 2001 and 2010 (24, 25, 35). The current CONSORT statement includes a list of 25 recommended items and a flow diagram (36). Adoption of the CONSORT guidelines by biomedical journals has been shown to improve reporting quality, particularly reporting of randomization and double-blinding (37–39). However, the improvements have been inconsistent with continued suboptimal reporting of measures such as allocation concealment (37, 40, 41). Nonetheless, the authors of RCT reports should be encouraged to strictly adhere to the CONSORT guidelines for improving RCT reporting quality.
Our study has some limitations. Nearly one-fifth of the “published” RCTs had no funding source disclosure. For most analyses, we considered these RCTs to be funded by a nonprofit source; plausibly, some were industry funded. For sensitivity analysis, we reassessed our study results considering an extreme scenario of industry funding of all such RCTs. This did not alter our finding of a lack of association between the funding source and the RCT outcome. However, differences in the study quality measures were attenuated and remained significant only for the performance of ITT analysis, in favor of industry funding. An improvement in funding source reporting is expected, because this is mandatory for CTG registration, and the 2010 CONSORT statement has added an item explicitly for funding source reporting (32, 35).
We assessed the methodologic quality of an RCT based on its published report. Plausibly, investigators may not have reported important quality measures despite their adequate performance, causing an underestimation of the study quality. In fact, a discrepancy has been noted between methodologic aspects of the “published” RCT reports and their study protocol or the report by the RCT investigators of the actual study methods (42–44). However, an overwhelming majority of healthcare literature users rely on the published report of an RCT to assess its quality and validity and do not have access to the study protocol or RCT investigators. Hence, inadequate reporting of an RCT hinders assessment of its quality and validity, even though it may have been appropriately conducted.
We did not evaluate the conclusions or recommendations offered by the RCT investigators in the discussion section or abstract of the manuscript. The conclusions provided by the investigators in RCTs with industry funding are more likely to include a recommendation for the experimental drug as the treatment of choice, unrelated to the observed effect size (6). Finally, there is an issue of how best to assess the quality of RCTs. Inadequate performance of the quality measures used in our study may bias estimates of treatment effect (13–15). However, the association with treatment effect size is not consistent across different specialties, varies for individual quality measures, and is dependent on whether the study outcome is or is not subjective (45, 46). Moreover, different groups of investigators vary considerably in terms of the methods used to assess RCT quality (47). Some groups assess quality measures individually (an approach recommended by the Cochrane Collaboration), while others use a composite quality scale (47).
In conclusion, industry funding of “published” RCTs of RA drug therapy was not associated with a higher likelihood of positive outcomes favoring the sponsored experimental drug. A trend toward a higher nonpublication rate of “registered” industry-funded RCTs suggests that publication bias partially explains the observed lack of such association. The availability of adequate funds for RCT conduct from both industry and nonprofit sources is essential to generate evidence for optimal advancement of RA treatment. Improvement in reporting of methodologic quality measures is needed to enable better assessment of the validity of RCTs.