Selection of studies
Two review authors (HHBY and RED) independently screened the trials identified by the literature search. We resolved disagreements by consultation with the third review author (THATQ), who also provided quality assurance of the process.
Data extraction and management
We planned for two review authors (HHBY and RED) to independently extract data. We planned to resolve any discrepancies by discussion. We planned to use a standard data extraction form to extract the following information: characteristics of the study (design, method of randomisation); participants; interventions; outcomes (types of outcome measures, adverse events). We then planned to check for errors before entering the data into Review Manager.
Assessment of risk of bias in included studies
For the assessment of study quality, we planned to use the risk of bias approach for Cochrane reviews (Higgins 2009), applying the following criteria.
Is the allocation sequence adequately generated, for example with random number tables or computer-generated random numbers? We planned to record this as 'low risk of bias' (the method used is either adequate or unlikely to introduce confounding), 'uncertain risk of bias' (there is insufficient information to assess whether the method used is likely to introduce confounding), or 'high risk of bias' (the method used, for example a quasi-randomised trial, is likely to introduce confounding).
Is allocation adequately concealed in a way that would not allow either the investigators or the participants to know or influence allocation to an intervention group before an eligible participant was entered into the study (for example using central randomisation or sequentially numbered, opaque, sealed envelopes held by a third party)? We planned to record this as 'low risk of bias' (the method used, for example central allocation, is unlikely to introduce bias in the final observed effect), 'uncertain risk of bias' (there is insufficient information to assess whether the method used is likely to introduce bias in the estimate of effect), or 'high risk of bias' (the method used, for example an open random allocation schedule, is likely to introduce bias in the final observed effect).
Are the study participants and personnel blinded from knowledge of which intervention a participant received? We planned to note where there has been partial blinding (for example where it has not been possible to blind participants but where outcome assessment was carried out without knowledge of group assignment). We planned to record this as 'low risk of bias' (blinding was performed adequately, or the outcome measurement is not likely to be influenced by lack of blinding), 'uncertain risk of bias' (there is insufficient information to assess whether the type of blinding used is likely to introduce bias in the estimate of effect), or 'high risk of bias' (no blinding or incomplete blinding, and the outcome or the outcome measurement is likely to be influenced by lack of blinding).
Are incomplete outcome data adequately addressed? Incomplete outcome data essentially include attrition, exclusions, and missing data. If any withdrawals occurred, were they described and reported by treatment group with the reasons given? We planned to record whether or not there were clear explanations for withdrawals and dropouts in the treatment groups. An example of an adequate method to address incomplete outcome data is the use of an intention-to-treat (ITT) analysis. We planned to record this as 'low risk of bias' (the underlying reasons for missing data are unlikely to make treatment effects depart from plausible values, or proper methods have been employed to handle missing data), 'uncertain risk of bias' (there is insufficient information to assess whether the missing data mechanism in combination with the method used to handle missing data is likely to introduce bias in the estimate of effect), or 'high risk of bias' (the crude estimate of effects, for example a complete case estimate, will clearly be biased due to the underlying reasons for missing data, and the methods used to handle missing data are unsatisfactory).
Are reports of the study free from any suggestion of selective outcome reporting? We planned to interpret this as evidence that statistically non-significant results might have been selectively withheld from publication, for example selective under-reporting of data or selective reporting of a subset of data. We planned to record this as 'low risk of bias' (the trial protocol is available and all of the trial’s pre-specified outcomes that are of interest in the review have been reported, or similar), 'uncertain risk of bias' (there is insufficient information to assess whether the magnitude and direction of the observed effect are related to selective outcome reporting), or 'high risk of bias' (not all of the trial’s pre-specified primary outcomes have been reported, or similar).
As a first step, we planned to copy information relevant to making a judgment on each criterion from the original publication into an assessment table. If additional information was available from the study authors, we planned to also enter this in the table along with an indication that it was unpublished information. Two review authors (HHBY and RED) planned to independently judge whether the risk of bias for each criterion was 'low', 'uncertain', or 'high'. We would resolve disagreements by discussion.
We planned to consider trials which were classified as low risk of bias in sequence generation, allocation concealment, blinding, incomplete data, and selective outcome reporting as low bias-risk trials.
Measures of treatment effect
(a) Binary outcomes
For dichotomous data, we planned to use the risk ratio (RR) as the effect measure with 95% confidence interval (CI).
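As an illustrative sketch only (not part of the protocol), the RR and its 95% CI can be computed on the log scale with a Wald-type interval; the function name is an assumption, and zero-cell corrections are not handled here.

```python
import math

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    """Risk ratio with a Wald-type 95% CI computed on the log scale.

    Note: assumes no zero cells; continuity corrections would be
    needed if either arm has zero events.
    """
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of log(RR)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    z = 1.96  # 97.5th percentile of the standard normal
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper
```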
(b) Continuous outcomes
For continuous data, we planned to present the results as mean differences (MD) with 95% CIs. When pooling data across studies we would estimate the MD if the outcomes were measured in the same way between trials. We planned to use the standardised mean difference (SMD) to combine trials that measured the same outcome but used different methods.
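A minimal sketch of the two continuous effect measures, assuming the SMD is computed as Cohen's d (raw mean difference divided by the pooled standard deviation); the function names are illustrative, and small-sample (Hedges' g) corrections are omitted.

```python
import math

def mean_difference(mean_tx, mean_ctrl):
    """MD: raw difference in means between treatment and control."""
    return mean_tx - mean_ctrl

def standardised_mean_difference(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    """SMD (Cohen's d): mean difference scaled by the pooled SD,
    allowing outcomes measured on different scales to be combined."""
    pooled_sd = math.sqrt(
        ((n_tx - 1) * sd_tx ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
        / (n_tx + n_ctrl - 2)
    )
    return (mean_tx - mean_ctrl) / pooled_sd
```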
Unit of analysis issues
We planned to use the individual participant recruited into the trials as the unit of analysis.
Dealing with missing data
An intention-to-treat analysis (ITT) is one in which all the participants in a trial are analysed according to the intervention to which they were allocated, whether they received the intervention or not. We would assume that participants who dropped out were non-responders. For each trial we planned to report whether or not the investigators stated if the analysis was performed according to the ITT principle. If participants were excluded after allocation, we would report any details provided in full.
Furthermore, we would perform the analysis on an ITT basis (Newell 1992) whenever possible. Otherwise, we planned to adopt an available-case analysis.
Assessment of heterogeneity
We planned to quantify inconsistency among the pooled estimates using the I2 statistic, which describes the percentage of the variability in effect estimates resulting from heterogeneity rather than sampling error (Higgins 2003; Higgins 2009): I2 = [(Q - df)/Q] x 100%, where Q is the Chi2 statistic and df its degrees of freedom. We would assess heterogeneity between the trials by visual examination of the forest plot to check for overlapping CIs, the Chi2 test for homogeneity with a 10% level of significance, and the I2 statistic. We planned to interpret an I2 value of less than 25% as low heterogeneity, 50% or greater as significant heterogeneity, and 75% or greater as substantial heterogeneity.
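The I2 formula above can be sketched as follows; the function name is an assumption, and the result is floored at 0% when Q falls below its degrees of freedom, as is conventional.

```python
def i_squared(q, df):
    """I2 = [(Q - df)/Q] x 100%, where Q is the Chi2 statistic
    and df its degrees of freedom; negative values are set to 0%."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0
```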
Assessment of reporting biases
Apart from assessing the risk of selective outcome reporting, considered under assessment of risk of bias in included studies, we planned to assess the likelihood of potential publication bias using funnel plots provided that there were at least eight trials. When small studies in a meta-analysis tend to show larger treatment effects, we would consider other causes including selection biases, poor methodological quality, heterogeneity, artefact and chance.
Data synthesis
We planned to use the fixed-effect model to analyse the data. If significant heterogeneity (for example I2 greater than 50%) was identified, we would compute pooled estimates of the treatment effect for each outcome using a random-effects model (with two or more studies). We planned to undertake quantitative analyses of outcomes on an ITT basis.
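The fixed-effect versus random-effects choice above can be illustrated with standard inverse-variance pooling, assuming a DerSimonian-Laird estimate of the between-study variance (tau2) for the random-effects model; the function name is illustrative.

```python
import math

def pool(effects, variances):
    """Inverse-variance pooling of study effect estimates.

    Returns the fixed-effect estimate, the DerSimonian-Laird
    random-effects estimate, and the I2 statistic (%).
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Random-effects weights add tau2 to each within-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    random_effects = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return fixed, random_effects, i2
```

With no between-study heterogeneity (tau2 = 0), the two models coincide; as heterogeneity grows, the random-effects weights become more equal across studies.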
Subgroup analysis and investigation of heterogeneity
In the case of significant heterogeneity (I2 > 50%), we planned to use subgroup analysis. Subgroup analyses are secondary analyses in which the participants are divided into groups according to shared characteristics, and outcome analyses are conducted to determine whether the treatment effect varies with that characteristic. If data permitted, we intended to carry out the following subgroup analyses:
different type of anticoagulant therapy (e.g. fractionated, non-fractionated heparin);
different doses for the same anticoagulant therapy (e.g. 1 mg/kg of fractionated heparin, 2 mg/kg of fractionated heparin);
different routes for the non-fractionated heparin (e.g. subcutaneous, intravenous, oral);
short-term (up to 30 days) versus long-term follow-up (> 30 days);
different co-morbidities (e.g. chronic obstructive pulmonary disease (COPD), cardiac insufficiency);
single SSPE versus multiple SSPE.
We would perform the Chi2 test for subgroup differences, with significance set at a P value of 0.05.
Sensitivity analysis
If there were an adequate number of studies, we intended to perform a sensitivity analysis to explore causes of heterogeneity and the robustness of the results. We planned to include the following factors in the sensitivity analysis, separating studies according to:
type of study design (RCTs versus CCTs);
trials with low risk of bias versus those with high risk of bias;
rates of withdrawal for each outcome (< 20% versus ≥ 20%).