Abstract

Objective

To assess the association of industry funding with the characteristics, outcome, and reported quality of randomized controlled trials (RCTs) of drug therapy for rheumatoid arthritis (RA).

Methods

The Medline and Cochrane Central Register of Controlled Trials databases were searched to identify original RA drug therapy RCTs published in 2002–2003 and 2006–2007. Two reviewers independently assessed each RCT for the funding source, characteristics, outcome (positive [statistically significant result favoring experimental drug for the primary outcome] or not positive), and reporting of methodologic measures whose inadequate performance may have biased the assessment of treatment effect. RCTs that were registered at ClinicalTrials.gov and completed during the study years were assessed for publication bias.

Results

Of the 103 eligible RCTs identified, 58 (56.3%) were funded by industry, 19 (18.4%) were funded by nonprofit sources, 6 (5.8%) had mixed funding, and funding for 20 (19.4%) was not specified. Industry-funded RCTs had significantly more study centers and subjects, while nonprofit agency–funded RCTs had longer durations and were more likely to study different treatment strategies. Outcome could be assessed for 86 (83.5%) of the 103 RCTs studied. The funding source was not associated with a higher likelihood of positive outcomes favoring the sponsored experimental drug (75.5% of industry-funded RCTs had a positive outcome, compared with 68.8% of nonprofit-funded RCTs, 40% of RCTs with mixed funding, and 81.2% of RCTs for which funding was not specified). Industry-funded RCTs showed a trend toward a higher likelihood of nonpublication (P = 0.093). Industry-funded RCTs were more frequently associated with double-blinding, an adequate description of participant flow, and performance of an intent-to-treat analysis.

Conclusion

Industry funding was not associated with a higher likelihood of positive outcomes of published RCTs of drug therapy for RA, and industry-funded RCTs performed significantly better than non–industry-funded RCTs in terms of reporting the use of some key methodologic quality measures.

A dramatic increase in pharmaceutical industry funding and support of biomedical research has occurred in the past few decades (1, 2). This has led to strong concerns regarding the possible inappropriate influence of industry funding on biomedical research (3). A preponderance of evidence shows that industry-funded research is associated with an increased likelihood of pro-industry results and conclusions (4–11).

Randomized controlled trials (RCTs) are considered the “gold standard” means of assessing healthcare interventions. They are designed to eliminate bias by randomly distributing known and unknown confounding factors. RCTs must be methodologically sound to eliminate sources of bias that may appear at various stages. Bias causes results to differ systematically from the truth through a combination of various factors, including study design, data analysis, and presentation (12). Substantial evidence shows that the methodologic quality of RCTs affects estimates of intervention efficacy (13–15). Limited data on the association of industry funding with the methodologic quality of RCTs show conflicting results, with some studies showing no difference (5, 16) and others showing either a trend toward higher quality (17, 18) or significantly higher quality of industry-funded RCTs compared with non–industry-funded RCTs (8, 19–21).

Rheumatoid arthritis (RA) is a chronic systemic autoimmune disease that chiefly manifests as destructive inflammatory arthritis and affects 0.5–1% of adults (22). The options for drug therapy for RA have improved remarkably over the past 15 years. In particular, the discovery and availability of biologic agents for RA treatment were facilitated by the funding of clinical trials by pharmaceutical companies. A study assessing the secular changes in the methodologic quality of published RCTs in rheumatology showed no differences between industry-funded and non–industry-funded RCTs (23). However, that study included both RCTs with a drug intervention and those with a nonpharmaceutical intervention, and only 102 (42.5%) of the 240 study RCTs assessed therapy for RA. No data are available regarding the influence of industry funding on the outcome of RCTs of drug therapy for RA. The objective of this study was to determine the association of industry funding with the characteristics, outcome, and reported methodologic quality of RCTs of drug therapy for RA.

METHODS

Study years.

RCTs of drug therapy for RA that were published in the years 2002–2003 and 2006–2007 were studied. The selection of the study years was based on the latest available version of the Consolidated Standards of Reporting Trials (CONSORT) statement (publication year 2001) at the time of data collection (24). The CONSORT statement, originally proposed in 1996, was developed to improve the quality of RCT reporting (25). The quality of RCTs in rheumatology before (1987–1988) and after (1997–1998) the original CONSORT statement has already been reported (23). Hence, we chose an early period (2002–2003) and a late period (2006–2007; the latest possible at the time of data collection in 2008) after publication of the revised CONSORT statement.

Search strategy.

The literature was searched using PubMed and the Cochrane Central Register of Controlled Trials (CENTRAL) databases. The search terms used were “rheumatoid arthritis” and “arthritis, rheumatoid.” The limits used were “Clinical Trials” (in PubMed), “English” (in PubMed), and the years “2002–2003” and “2006–2007.” The retrieved reports were screened by review of the title and abstract. The full published report was retrieved when the subject allocation method (random or nonrandom) remained unclear from the title and abstract. A full review was performed for reports that randomly allocated patients to different intervention groups.
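For readers who want to approximate the PubMed arm of this search programmatically, the sketch below submits a roughly equivalent query through NCBI's E-utilities via Biopython. The exact query string, the retmax limit, and the e-mail address are illustrative assumptions, not the authors' actual search syntax.

```python
# Illustrative approximation of the reported PubMed search (terms, Clinical Trial and
# English limits, and the two sets of study years); not the authors' actual query.
from Bio import Entrez  # pip install biopython

Entrez.email = "your.name@example.org"  # NCBI requires a contact address; placeholder only

query = (
    '("rheumatoid arthritis"[All Fields] OR "arthritis, rheumatoid"[MeSH Terms]) '
    'AND Clinical Trial[ptyp] AND English[lang] '
    'AND (2002:2003[dp] OR 2006:2007[dp])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=2000)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records matched;", len(record["IdList"]), "PMIDs retrieved")
```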

Inclusion and exclusion criteria.

RCTs were selected if they met the following criteria: 1) original report of a single RCT, 2) pharmacologic intervention, 3) parallel design, and 4) clinical primary end point(s). If multiple publications originated from a single RCT, the first published report with clinical primary end point(s) was selected. RCTs were excluded based on one or more of the following criteria: 1) phase I RCT, 2) report of the open-label extension phase of an RCT, 3) non-English language, and 4) inclusion of non-RA subjects. We refer to the thus-identified eligible studies as “published” RCTs, when needed, to distinguish them from “registered” RCTs (see below).

Data abstraction.

A reference sheet was created that listed and defined, where applicable, the criteria constituting an adequate description of each study variable. Fifteen RCTs (in sets of 5 each) published in nonstudy years were evaluated using the reference sheet, to further clarify the definitions of the study variables. Two reviewers (NAK and JIL) independently assessed each eligible “published” RCT for the outcomes of interest. Differences were resolved by consensus. All data were collected from the original RCT report. An earlier publication (in a nonstudy year) was reviewed when the study authors specifically referred to that publication for the methodologic details.

RCT characteristics.

Data on the journal and year of publication, the number of authors, the total number of study subjects, the study duration, the numbers of study centers and countries, the study phase, and the design of the study intervention arms were recorded. The study agent(s) were classified as experimental drug, active comparator drug (ACD), or placebo. An RCT with an experimental drug that targeted a specific molecule in the inflammatory pathway or a specific receptor in the circulation or on a particular cell surface was considered to have used a biologic disease-modifying antirheumatic drug (DMARD). RCTs with ≥1 of the following as experimental drugs were classified as having used traditional DMARD(s): methotrexate, sulfasalazine, leflunomide, hydroxychloroquine, corticosteroids, azathioprine, cyclosporine, tacrolimus, or cyclophosphamide. The remaining RCTs were classified as using an “other” drug as the experimental intervention (e.g., nonsteroidal antiinflammatory agents, bisphosphonates, or antibiotics). The impact factor of the journal of study publication was obtained from the 2007 Journal Citation Reports Science Edition. RCTs were considered to have been published in a high-impact journal or a low-impact journal if the journal's impact factor was at least the median or below the median, respectively.

Funding information.

RCT funding source(s) were classified, based on the disclosure in the published manuscript, as one of the following: 1) industry (manufacturer of the experimental drug), 2) nonprofit (such as the National Institutes of Health or the Arthritis Foundation), 3) mixed (both industry and nonprofit sources), or 4) unspecified (no funding source disclosure). For most of the analyses presented in this study, we categorized RCTs with full or partial industry funding as “industry funded” and those with a nonprofit or unspecified funding source as “non–industry funded.”

Outcome assessment.

The RCT intervention arms were classified as using experimental drug(s), ACD(s), or placebo. The outcome assessment of the experimental intervention arm was based on the designated primary outcome in the published RCT. The first reported outcome was used as the primary outcome if none was explicitly specified. Table 1 summarizes the criteria used for outcome designation for RCTs with different intervention arm structures. Broadly, an RCT with a statistically significant result for the primary outcome favoring the experimental drug arm was classified as having a positive outcome. Outcome could not be assessed for RCTs with safety of an experimental drug as the primary outcome or if no study intervention arm was declared as experimental a priori.

Table 1. Outcome assessment criteria for the randomized controlled trials

Experimental drug vs. active comparator drug
  Positive: statistically significant result in favor of the experimental drug for the primary outcome
  Nonpositive: —

Experimental drug vs. active comparator drug vs. placebo
  Positive: statistically significant result favoring the experimental drug over placebo, and no significant difference compared with the active comparator drug
  Nonpositive: statistically significant result favoring the active comparator drug over the experimental drug

Experimental drug vs. active comparator drug vs. combination of experimental and active comparator drug
  Positive: any study arm with the experimental drug significantly better than the active comparator group
  Nonpositive: —

Multiple doses of experimental drug vs. placebo
  Positive: at least 1 dose showed significant efficacy for the primary outcome without increased adverse events
  Nonpositive: significant safety concerns even if efficacy was observed for the primary outcome, and the authors did not recommend use of the experimental drug

Assessment of reported methodologic quality of the RCTs.

Individual quality measures that are important for the internal validity of an RCT and whose inadequate performance may bias the treatment effect assessment were evaluated for adequate reporting (24, 26, 27). These measures include the following:

Randomization.

Randomization is a method used to allocate study participants to different study intervention arms by chance alone. Randomization was considered adequately reported if an explicit and appropriate description of the allocation method (such as use of a random number table or a computer-generated random sequence) was provided.

Allocation concealment.

Allocation concealment is a method used to conceal the treatment assignment from the investigators who enroll RCT participants, in order to prevent the investigators from being influenced by such knowledge. Adequate reporting of allocation concealment required an explicit description of the measures used to conceal subject allocation to intervention groups from those responsible for assessing patients for trial entry. Examples of such measures include central treatment arm allocation using a telephone/automated assignment system, use of a local pharmacy, serially numbered sealed opaque envelopes, and numbered/coded bottles.

Blinding.

An RCT was considered to have adequate reporting of “double-blinding” if specific measures (such as use of identical-looking placebo) were described that would make the study subjects and healthcare providers unaware of who received which study intervention. Reporting of outcome assessor blinding was noted for RCTs that were not conducted in a double-blinded manner.

Participant flow description.

A description of the flow of participants, either in the text or as a flow diagram, through each RCT stage was assessed for adequacy. Specifically, reporting of the number of participants who were randomized to each intervention arm, received the intended treatment, completed the study protocol, and were analyzed for the primary outcome was noted.

Intent-to-treat (ITT) analysis.

An ITT analysis was deemed to have been performed if all randomized patients were analyzed according to their originally assigned intervention group. Exclusion of study subjects in the following categories from the RCT analyses was not considered a violation of the ITT principle: those with no postbaseline assessment, those found to be ineligible for the RCT after randomization, and those who never received any study intervention (28, 29). ITT analysis is not applicable to RCTs with safety of the experimental drug as the primary end point.

Assessment of publication bias.

Publication bias occurs when only positive or partial results of a trial are published or when the published primary outcome differs from that specified in the original protocol (30, 31). Two authors (KDT and MS) assessed publication bias using the following methods:

  1. The ClinicalTrials.gov (CTG) registry was searched for clinical trials of RA. The US National Library of Medicine established CTG in the year 2000 as an internet-based, publicly accessible registry of clinical trials (32). Funding source information is mandatory for CTG registration. CTG search results were screened for clinical trials that were conducted exclusively for RA, had drug therapy as the intervention, specified a “randomized” method for subject allocation, were described as phase II or higher, and reported completion in the years 2002–2003 and 2006–2007. CTG defines a completed trial as a study that has concluded and for which participants are no longer being examined or treated. The completion dates for most “published” RCTs were not recorded. Therefore, in order to closely approximate the characteristics of “published” and “registered” RCTs, completion years for “registered” RCTs were selected to be identical to the publication years for “published” RCTs. The lead study sponsors were classified as industry or nonprofit (e.g., governmental organizations, universities). “Registered” RCTs were assessed for publication using a standardized search strategy. First, the CTG record was searched for links to publications resulting from the registered RCT. The PubMed, CENTRAL, and Google Scholar databases were sequentially searched if CTG had no publication information. Publication of “registered” RCTs was confirmed by matching the study descriptions at CTG with those in the manuscript.
  2. All “published” RCTs were assessed for CTG registration to study discordance between the published and the registered primary outcomes. First, “published” RCT manuscripts were searched for reporting of CTG registration. Second, the PubMed listing of each “published” RCT was assessed for CTG registration information. Finally, we searched CTG using the terms “rheumatoid arthritis” and the name of the interventional drug(s).

Statistical analysis.

Categorical data were described as the number (percent). Because all of the continuous variables had a non-normal distribution, they were described as the median (interquartile range [IQR]). Association of the RCT funding source with the study outcome and quality parameters was assessed using Pearson's chi-square tests. Fisher's exact test or likelihood ratio tests were used for contingency tables with 4 or >4 cells, respectively, when the expected cell count was <5. Mann-Whitney U tests were used to compare continuous data. Logistic regression was performed to adjust for potential confounders when assessing the association between the funding source and RCT outcome. SPSS version 16 was used for data analysis.
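The analyses reported here were performed in SPSS; as an illustration only, the sketch below shows how the same classes of tests (Pearson's chi-square, Mann-Whitney U, and logistic regression) could be run in Python with SciPy and statsmodels. The data frame and its variables are simulated placeholders, not the study data.

```python
# Illustrative only: simulated data standing in for the per-trial variables described above.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 103
rcts = pd.DataFrame({
    "industry_funded": rng.integers(0, 2, n),    # 1 = industry funded, 0 = not
    "positive_outcome": rng.integers(0, 2, n),   # 1 = positive primary outcome
    "n_subjects": rng.integers(40, 700, n),      # a continuous, non-normally distributed variable
})

# Funding source vs. outcome: Pearson's chi-square on the 2x2 contingency table
table = pd.crosstab(rcts["industry_funded"], rcts["positive_outcome"])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)

# Continuous variable compared between funding groups: Mann-Whitney U test
ind = rcts.loc[rcts["industry_funded"] == 1, "n_subjects"]
non = rcts.loc[rcts["industry_funded"] == 0, "n_subjects"]
u_stat, p_mw = stats.mannwhitneyu(ind, non, alternative="two-sided")

# Logistic regression: funding-outcome association adjusted for a potential confounder
logit_fit = smf.logit("positive_outcome ~ industry_funded + n_subjects", data=rcts).fit(disp=False)

print(f"chi-square: chi2 = {chi2:.2f}, P = {p_chi2:.2f}")
print(f"Mann-Whitney U: U = {u_stat:.0f}, P = {p_mw:.2f}")
print(logit_fit.params)
```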

RESULTS

Characteristics of the study RCTs.

Figure 1 shows the process and outcome of screening of the 1,115 reports identified from the literature search. A total of 103 RCTs (9.2%) met the eligibility criteria. Most reports were excluded because of a nonrandomized study design or because a non-RA disease was the main focus. Fifty-eight RCTs (56.3%) were funded by the manufacturer of the experimental drug, 19 (18.4%) were funded by a nonprofit source(s), 6 (5.8%) had mixed funding sources, and 20 (19.4%) had an unspecified funding source. Therefore, 64 RCTs (62.1%) had complete or partial industry funding.

Figure 1. Flow chart showing the process of selecting eligible randomized controlled trials (RCTs) for the study. RA = rheumatoid arthritis.

No significant difference was observed in the RCT funding source between the years 2002–2003 and 2006–2007. However, several characteristics of the RCTs differed significantly according to the funding source (Table 2). Industry-funded RCTs had a larger number of study subjects and were more likely to be conducted in multiple centers and countries. The vast majority of phase II RCTs and those with biologic DMARDs as the experimental drug were industry funded. Industry funding was also associated with publication in a journal with a higher impact factor. Non–industry-funded RCTs were more likely to study traditional DMARDs and to test different strategies for using drugs for RA treatment. The study duration of non–industry-funded RCTs was significantly longer than that of industry-funded RCTs (Table 2). Even after exclusion of phase II studies (which typically are of shorter duration), non-industry funding was associated with a longer study duration (median 12 months [IQR 5.6–21 months] versus 6 months [IQR 3.5–12 months]; P = 0.046).

Table 2. Characteristics of the 103 randomized controlled trials (RCTs)*

Characteristic | All RCTs | Industry-funded RCTs | Non–industry-funded RCTs | P†
Year of publication
  2002–2003 | 48 (46.6) | 28 (58.3) | 20 (41.7) | 0.46
  2006–2007 | 55 (53.4) | 36 (65.5) | 19 (34.5) |
Number of study centers
  Single | 30 (29.1) | 9 (30) | 21 (70) | <0.001
  Multiple | 73 (70.9) | 55 (75.3) | 18 (24.7) |
Number of countries involved in study
  Single | 66 (62.9) | 27 (40.9) | 39 (59.1) | <0.001
  Multiple | 37 (37.1) | 37 (100) | 0 (0) |
Number of study authors, median (IQR) | 9 (5–11) | 9 (7–11) | 6 (4–10) | 0.004
Number of study subjects, median (IQR) | 126 (53–393) | 295 (78–633) | 66 (45–151) | <0.001
Duration of study, median (IQR) months | 6 (3–12) | 5.6 (3–12) | 12 (4.7–18) | 0.012
Type of study
  Phase II | 16 (15.5) | 14 (87.5) | 2 (12.5) | 0.026
  Phase III/unspecified | 87 (84.5) | 50 (57.5) | 37 (42.5) |
Study agent
  Traditional DMARD(s) | 25 (24.3) | 5 (20) | 20 (80) | <0.001
  Biologic DMARD(s) | 38 (36.9) | 36 (94.7) | 2 (5.3) |
  Other | 40 (38.8) | 23 (57.5) | 17 (42.5) |
Type of study intervention arms
  ED vs. placebo | 34 (33) | 20 (58.8) | 14 (41.2) | 0.02
  ED vs. ACD | 19 (18.4) | 11 (57.9) | 8 (42.1) |
  ED vs. ACD vs. placebo | 7 (6.8) | 6 (85.7) | 1 (14.3) |
  ED vs. ACD vs. ED + ACD combination | 4 (3.9) | 4 (100) | 0 (0) |
  Multiple doses of ED | 6 (5.8) | 4 (66.7) | 2 (33.3) |
  Multiple doses of ED vs. placebo | 19 (18.4) | 15 (78.9) | 4 (21.1) |
  Different treatment strategies with drug(s) | 14 (13.6) | 4 (28.6) | 10 (71.4) |
Journal impact factor, median (IQR) | 4 (2.7–7.7) | 6.4 (3.1–7.7) | 2.9 (1.3–5.8) | <0.001

* Except where indicated otherwise, values are the number (%). IQR = interquartile range; DMARD = disease-modifying antirheumatic drug; ED = experimental drug; ACD = active comparator drug.
† Industry funded versus non–industry funded, using a chi-square test for categorical variables and a Mann-Whitney U test for continuous variables, unless specified otherwise.
‡ Maximum likelihood ratio.

Funding source and study outcome.

Eighty-six studies (83.5%) could be assessed for efficacy outcome. Nine RCTs had safety as the primary outcome, while 8 RCTs did not declare a specific intervention as experimental a priori and hence could not be assessed for efficacy outcome. A large proportion of the RCTs, with the exception of those funded by mixed sources, had a positive outcome (Table 3). Industry funding of RCTs was not associated with higher likelihood of a positive outcome (likelihood ratio 3.28 [3df], P = 0.35) favoring the experimental drug. An association between industry funding and study outcome was not observed when comparing RCTs with any industry funding (39 of 54 [72.2%]) with those that had no declared industry funding (24 [75%] of 32; χ2 [1df] = 0.08, P = 0.77) or when comparing only RCTs that were exclusively industry funded (37 [75.5%] of 49) with those that were funded exclusively by a nonprofit source (11 [68.8%] of 16; χ2 [1df] = 0.28, P = 0.59). No association between the funding source and the study outcome was observed after adjustment for the type of study drug used, the number of study centers, the study phase, the number of study subjects, or the journal impact factor (data not shown).

Table 3. Funding source and efficacy outcome of the randomized controlled trials (RCTs)*

Funding source | No. of RCTs | No. (%) of RCTs with positive outcome
Industry | 49 | 37 (75.5)
Nonprofit | 16 | 11 (68.8)
Mixed | 5 | 2 (40)
Unspecified | 16 | 13 (81.2)

* Of 103 RCTs, 86 (83.5%) could be assessed for efficacy outcome.
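As a worked check of the 2 × 2 comparison reported above (any industry funding, 39 of 54 positive, versus no declared industry funding, 24 of 32 positive), the sketch below applies Pearson's chi-square without a continuity correction; it reproduces the reported statistics to within rounding.

```python
# Reproducing the reported comparison of outcome by any vs. no declared industry funding.
from scipy.stats import chi2_contingency

table = [[39, 54 - 39],   # any industry funding: positive, not positive
         [24, 32 - 24]]   # no declared industry funding: positive, not positive
chi2, p, dof, _ = chi2_contingency(table, correction=False)  # Pearson's chi-square, no Yates correction
print(f"chi2 ({dof} df) = {chi2:.2f}, P = {p:.2f}")          # chi2 (1 df) = 0.08, P = 0.78 (reported as 0.77)
```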

Funding source and reporting quality of RCTs.

Several key methodologic aspects of “published” RCTs were not adequately reported, irrespective of the funding source (Table 4). This was particularly true for adequate reporting of random sequence generation and allocation concealment. Adequate reporting of several methodologic quality measures, such as double-blinding, provision of an adequate description of participant flow during the study, and performance of ITT analysis, was significantly more frequent in industry-funded RCTs than in non–industry-funded RCTs (Table 4). No significant association was observed between the RCT funding source and blinding of the outcome assessor. Analyses using all 4 categories of funding (industry, nonprofit, mixed, unspecified), excluding RCTs with mixed funding, or restricted to RCTs with an explicitly stated industry or nonprofit funding source showed similar results for the association of the funding source with the methodologic quality of RCT reporting (data not shown). Adjustment for the individual quality measures did not change the results of the association between the funding source and the study outcome (data not shown).

Table 4. Methodologic quality measures used in industry-funded and non–industry-funded randomized controlled trials (RCTs)*

Quality measure | Industry funded (n = 64) | Non–industry funded (n = 39) | P†
Random sequence generation | 22 (34.4) | 14 (35.9) | 0.87
Allocation concealment | 20 (31.2) | 11 (28.2) | 0.74
Double blinding | 55 (85.9) | 20 (51.3) | <0.001
Blinding of outcome assessor‡ | 7/9 (77.8) | 8/19 (42.1) | 0.11
Participant flow | 54 (84.4) | 26 (66.7) | 0.036
Intent-to-treat analysis§ | 45/56 (80.4) | 21/38 (55.3) | 0.009

* Values are the number/number assessed (%).
† By chi-square or Fisher's exact test.
‡ For RCTs that were not double blinded.
§ Excluding RCTs with safety as the primary outcome.

Funding source and publication bias.

Sixty-two RCTs registered at CTG (9 registered in the years 2002–2003 and 53 registered in the years 2006–2007) met the eligibility criteria. Forty-four (71%) had industry and 18 (29%) had a nonprofit source as the lead study sponsor. Industry-sponsored RCTs showed a trend toward a lower likelihood of publication compared with non–industry-funded RCTs (27 [61.4%] versus 15 [83.3%]; P = 0.093). Only 6 (5.8%) of the 103 “published” RCTs were registered at CTG. The “published” and “registered” primary outcomes were identical in 5 (83.3%) of these RCTs.

DISCUSSION

Our study revealed no association between the source of funding of “published” RCTs of drug therapy for RA and the outcome of these trials. A trend toward publication bias was observed for the industry-funded RCTs. Industry-funded RCTs performed significantly better than non–industry-funded RCTs in terms of the use of certain methodologic quality measures.

Our finding that industry is the funding source for the majority of published as well as registered RCTs is consistent with the trend toward an increased proportion of biomedical research being industry funded (1, 2). The significant differences in RCT characteristics according to the funding source have important implications for the characteristics of RCTs conducted and thus for the evidence generated for the clinical care of patients with RA. Although industry-funded RCTs predominantly focused on assessment of the efficacy and safety of newer therapeutic drugs, the majority of non–industry-funded RCTs evaluated established drugs and different strategies for using these drugs to treat RA. Evidently, industry-funded RCTs had more financial resources, because they were more likely to be multicenter and multinational and to have higher subject enrollment. Despite this financial advantage, the duration of industry-funded RCTs was shorter than that of non–industry-funded RCTs. These differences clearly highlight the importance of both industry and nonprofit sources of RCT funding for generating efficacy and safety evidence on newer as well as established drugs and on strategies for their use in clinical care.

Although a preponderance of the data in the medical literature show that industry funding leads to higher chances of pro-industry results and conclusions (4–11), we did not observe any association between the funding source and the study outcome of “published” RCTs of RA drug therapies. Adjustment for differences in RCT characteristics and reported methodologic quality measures did not affect this finding. A total of 1,850 RCTs (with 80% power) would be needed to show a significant association between funding source and study outcome, assuming a relative frequency of explicitly stated industry and nonprofit funding (∼3:1) and a percentage of trials with positive outcomes (75.5% and 68.8%) similar to that in our study. Therefore, among “published” RCTs of RA drug therapy, the differences in outcome between those with industry funding and those with non-industry funding are relatively small. One potential reason for the lack of association between funding source and study outcome could be publication bias. Indeed, we did observe that industry-funded “registered” RCTs at CTG showed a significant trend toward nonpublication. Because these “registered” RCTs had investigator-declared “completed” status, nonpublication of their results suggests an unfavorable outcome. We could not ascertain whether “published” RCTs more commonly presented outcomes that were favorable but different from the originally planned primary outcomes, thus inflating the frequency of positive “published” RCTs, because only a few “published” RCTs had actually registered at CTG. Further studies are needed to address the extent and implications of publication bias in RCTs of RA therapy.
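The quoted sample size can be checked with a standard two-proportion power calculation. The sketch below, using Cohen's h with a two-sided α of 0.05, 80% power, positive-outcome rates of 75.5% and 68.8%, and a 3:1 industry:nonprofit allocation, yields a total close to the stated ~1,850 RCTs; the small difference reflects rounding and the approximation used.

```python
# Approximate check of the two-proportion sample-size estimate quoted above.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

h = proportion_effectsize(0.755, 0.688)        # Cohen's h for 75.5% vs. 68.8% positive outcomes
n_industry = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.80,
    ratio=1 / 3,                               # nonprofit group is one-third the size of the industry group
    alternative="two-sided",
)
n_nonprofit = n_industry / 3
print(round(n_industry + n_nonprofit))         # ~1,870 RCTs in total (stated: ~1,850)
```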

Nearly 75% of the “published” RCTs had a positive outcome. This could be partly attributable to publication bias and partly attributable to the difficulty of assigning the study outcome given the complex structure of the study intervention arms. The majority of RCTs had >2 intervention arms. The experimental drug often showed positive results compared with placebo only and not with the ACD, or only the combination of the experimental drug and the ACD showed positive results compared with the experimental drug or the ACD alone. Most published RCT reports lacked a clear description of the a priori intent of the RCT (superiority versus noninferiority for the different intervention arms). Thus, in the absence of such guidance, a positive RCT outcome was assigned when any experimental drug intervention arm (alone or in combination with an ACD) showed a statistically significant result in its favor for the primary outcome. Finally, conducting RCTs with such a high frequency of positive outcomes raises ethical issues. An RCT should be conducted only if there is substantial uncertainty (equipoise) about the relative value of one treatment versus another (17). RCTs in which the experimental intervention and the control are thought to be nonequivalent based on the existing store of knowledge may cause unnecessary harm to study subjects and waste precious resources.

A study of 240 RCTs of rheumatic diseases showed no difference in any methodologic quality measure between those that were manufacturer supported and those that were not manufacturer supported (23). A more recent study of 64 RCTs for the treatment of systemic lupus erythematosus showed a trend toward better study quality in pharmaceutical company–supported RCTs (18). However, our study showed that industry funding was associated with better reporting of some key methodologic quality measures. There are several potential reasons for this finding.

First, the availability of greater financial resources to industry-funded RCT investigators may allow performance of more expensive measures such as double-blinding and more vigorous tracking and followup of study subjects.

Second, non–industry-funded RCTs studied strategies for RA drug therapy more often than did industry-funded RCTs (10 [25%] versus 4 [6%]), and double-blinding was considered impractical by the investigators for most such RCTs due to the complexity of study protocol requirements. Indeed, only 1 industry-funded and 1 non–industry-funded treatment-strategy RCT were conducted in a double-blind manner.

Third, it is conceivable that the mandates of regulatory organizations, such as the US Food and Drug Administration, for methodologically rigorous RCTs to generate efficacy and safety data for a new drug may also account for better quality of the industry-funded RCTs (33). Fourth, better reporting of methodologic aspects in the “published” RCTs may also reflect attempts to dispel notions of bias that tend to be associated with industry funding.

Finally, because we assessed the RCT methodologic quality using the published manuscript, we cannot be certain whether our findings represent incomplete reporting or inadequate performance of these measures. However, measures such as ITT analysis can be performed without additional financial burden and can be ascertained from the published report itself. Nonetheless, a lower proportion of non–industry-funded RCTs reported the performance of ITT analyses, suggesting that the funding source may be associated with real systematic differences in the performance of methodologic quality measures.

The overall reporting of most RCT methodologic quality measures, particularly for random sequence generation and allocation concealment, was suboptimal. Poor reporting/performance of RCT methodologic quality measures has been reported across multiple specialties, including rheumatology (19, 23, 34). Encouragingly, our study showed improvement in several quality measures, including randomization (35% versus 17.4%), allocation concealment (30.1% versus 19%), participant flow (77.7% versus 58.7%), and ITT analysis (64.1% versus 29.8%) when compared with 121 rheumatology RCTs published in the years 1997–1998 (23). However, only 38.8% of the trials published in 1997–1998 studied RA, and non–drug therapy RCTs were included in the referenced study. Hence, the above comparison may not represent true secular changes in the quality of reporting of RCTs of RA therapy.

The CONSORT statement was developed to promote standardized reporting of RCTs that would help readers assess their validity and interpret the results appropriately. The CONSORT statement was originally proposed in 1996, with subsequent revisions in 2001 and 2010 (24, 25, 35). The current CONSORT statement includes a list of 25 recommended items and a flow diagram (36). Adoption of the CONSORT guidelines by biomedical journals has been shown to improve reporting quality, particularly reporting of randomization and double-blinding (37–39). However, the improvements have been inconsistent with continued suboptimal reporting of measures such as allocation concealment (37, 40, 41). Nonetheless, the authors of RCT reports should be encouraged to strictly adhere to the CONSORT guidelines for improving RCT reporting quality.

Our study has some limitations. Nearly one-fifth of the “published” RCTs had no funding source disclosure. For most of the analyses, these RCTs were grouped with the non–industry-funded trials; plausibly, some of them were industry funded. As a sensitivity analysis, we reassessed our study results under the extreme scenario of industry funding of all such RCTs. This did not alter our finding of a lack of association between the funding source and the RCT outcome. However, the differences in the study quality measures were attenuated and remained significant only for ITT analysis performance, in favor of industry funding. An improvement in funding source reporting is expected, because such reporting is mandatory for CTG registration, and the 2010 CONSORT statement has added an item explicitly for funding source reporting (32, 35).

We assessed the methodologic quality of an RCT based on its published report. Plausibly, investigators may not have reported important quality measures despite their adequate performance, causing an underestimation of the study quality. In fact, a discrepancy has been noted between methodologic aspects of the “published” RCT reports and their study protocol or the report by the RCT investigators of the actual study methods (42–44). However, an overwhelming majority of healthcare literature users rely on the published report of an RCT to assess its quality and validity and do not have access to the study protocol or RCT investigators. Hence, inadequate reporting of an RCT hinders assessment of its quality and validity, even though it may have been appropriately conducted.

We did not evaluate the conclusions or recommendations offered by the RCT investigators in the discussion section or abstract of the manuscript. The conclusions provided by the investigators in RCTs with industry funding are more likely to include a recommendation for the experimental drug as the treatment of choice, unrelated to the observed effect size (6). Finally, there is an issue of how best to assess the quality of RCTs. Inadequate performance of the quality measures used in our study may bias estimates of treatment effect (13–15). However, the association with treatment effect size is not consistent across different specialties, varies for individual quality measures, and is dependent on whether the study outcome is or is not subjective (45, 46). Moreover, different groups of investigators vary considerably in terms of the methods used to assess RCT quality (47). Some groups assess quality measures individually (an approach recommended by the Cochrane Collaboration), while others use a composite quality scale (47).

In conclusion, industry funding of “published” RCTs of RA drug therapy was not associated with a higher likelihood of positive outcomes favoring the sponsored experimental drug. A trend toward a higher nonpublication rate of “registered” industry-funded RCTs suggests that publication bias partially explains the observed lack of such association. The availability of adequate funds for RCT conduct from both industry and nonprofit sources is essential to generate evidence for optimal advancement of RA treatment. Improvement in reporting of methodologic quality measures is needed to enable better assessment of the validity of RCTs.

AUTHOR CONTRIBUTIONS

All authors were involved in drafting the article or revising it critically for important intellectual content, and all authors approved the final version to be published. Dr. Khan had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study conception and design. Khan, Lombeida, Torralba.

Acquisition of data. Khan, Lombeida, Singh, Torralba.

Analysis and interpretation of data. Khan, Lombeida, Spencer, Torralba.

REFERENCES
