Criteria for considering studies for this review
Types of studies
Randomised controlled trials. In order to diminish bias, only double-blind trials will be included in the review. We will not include quasi-randomised studies or historically controlled trials.
Types of participants
Adults (> 18 years) with a diagnosis of vestibular migraine or possible vestibular migraine according to the Bárány Society/IHS criteria, treated in any setting.
Definition of the disease
When trying to define patients with vestibular migraine we face a dilemma. If we strictly apply the criteria for the diagnosis of vestibular migraine or possible vestibular migraine as defined by the Bárány Society and the IHS (Lempert 2012), the population of trial participants (patients) will be tightly defined. However, we risk not finding any trials that have applied these criteria as they have only recently been codified. If we include trials with patients with migraine and any vestibular symptom we will undoubtedly have a broader range of patients, but their disease status will be ill-defined and the population more heterogeneous.
We have decided to include only trials in which the participants fulfil the 2012 Bárány Society/IHS criteria for vestibular migraine and possible vestibular migraine, for two reasons: firstly, the robustness of a diagnosis made using those criteria; secondly, using these internationally accepted criteria will make it easier to keep this review up to date.
Since we anticipate a small number of trials, we have decided to merge patients with vestibular migraine and possible vestibular migraine into a single group.
Types of interventions
Pharmacological treatments used in the prevention of vestibular migraine, including beta-blockers, calcium antagonists, anticonvulsants, antidepressants, serotonin antagonists and NSAIDs.
Placebo or no treatment.
Types of outcome measures
We will include all outcomes that we consider to be relevant to the general public, primary health care providers, vertigo specialists and policy decision-makers. These include quality of life parameters, the number of new episodes of vertigo, the duration of episodes, adverse effects and economic parameters.
We will divide timeframes (i.e. time of observation) into short-term, medium-term and long-term.
Duration of new episodes
Effectiveness of the intervention in reducing the intensity of crises
Adverse effects as described in the trials and, if possible, comparison of their rates in the treatment and the control group
Changes in quality of life
Economic factors (cost-effectiveness)
Search methods for identification of studies
We will conduct systematic searches for randomised controlled trials. There will be no language, publication year or publication status restrictions. We may contact original authors for clarification and further data if trial reports are unclear and we will arrange translations of papers where necessary.
We will identify published, unpublished and ongoing studies by searching the following databases from their inception: the Cochrane Ear, Nose and Throat Disorders Group Trials Register; the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, current issue); PubMed; EMBASE; CINAHL; LILACS; KoreaMed; IndMed; PakMediNet; CAB Abstracts; Web of Science; ISRCTN; ClinicalTrials.gov; ICTRP; Computer Retrieval of Information on Scientific Projects (CRISP); Agencia Española del Medicamento; Google Scholar and Google.
We will model subject strategies for databases on the search strategy designed for CENTRAL (Appendix 6). Where appropriate, we will combine subject strategies with adaptations of the highly sensitive search strategy designed by The Cochrane Collaboration for identifying randomised controlled trials and controlled clinical trials (as described in the Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0, Box 6.4.b (Handbook 2011)).
Searching other resources
We will scan the reference lists of identified publications for additional trials. We will search PubMed, the TRIP Database, The Cochrane Library and Google to retrieve existing systematic reviews relevant to this systematic review, so that we can scan their reference lists for additional trials. We will search for conference abstracts using the Cochrane Ear, Nose and Throat Disorders Group Trials Register and EMBASE. We will check Otolaryngology and Neurology textbooks for vestibular migraine chapters and we will scan their bibliographies for additional trials. We will contact the authors of trials and scan relevant publications on migraine-related vertigo for further unknown or unpublished studies.
Data collection and analysis
We will analyse single studies (not reports), meaning that we will link together multiple reports of a given study. Likewise, in a report that includes several studies, we will consider studies independently (Egger 2007).
In order to detect duplicate or multiple publication, we will evaluate the following.
Author names (most duplicate reports have authors in common)
Location and setting of the studies (institutions, such as hospitals)
Specific details of the interventions, such as dose or frequency
Numbers of participants and baseline data
Date and duration of the study (which can also clarify whether different sample sizes are due to different periods of recruitment)
If doubts remain, we will contact authors to clarify whether there is multiple publication of a single trial.
We will use the Review Manager 5 software to record and analyse the data (RevMan 2012). We will also follow the PRISMA Flow Diagram and Checklist as guidelines to strengthen the quality of the systematic review (Moher 2009).
We will study adverse effects with a narrow scope, focusing on the detection of well-known adverse effects of the drugs studied, rather than trying to detect a wide spectrum of effects, known or unknown. The reason for this strategy is that we can focus on important side effects and may draw more solid conclusions than with a wide focus. We are aware that a wide approach would detect more effects, but it has been proven to be highly resource-consuming and retrieves little useful information in comparison with the narrow focus approach (Handbook 2011). Furthermore, unknown adverse effects are better detected by primary surveillance rather than with a systematic review.
We will specifically avoid using drop-out rates as a surrogate for side effects, because several other factors may be responsible for treatment withdrawal. In addition, trials tend to be conducted so as to keep drop-out rates low, so the drop-out rate may not be a good indicator of the real rate of adverse effects. Finally, patients with symptoms suggesting an adverse effect are usually withdrawn more readily if they are receiving the active intervention than if they are receiving placebo. This may be another source of bias in the reported rates of side effects.
Selection of studies
Miguel Maldonado Fernández (MMF), Louisa Murdin (LM) and Jasminder Birdi (JB) will independently review the studies obtained, selecting double-blinded, randomised controlled trials. Any disagreement will be settled by discussion among the team of review authors.
Data extraction and management
Two review authors (MMF and Greg Irving (GI)) will extract data independently.
We will use a data collection form (Appendix 7) for each study, based on a Cochrane template, to record the criteria for the eligibility of trials, to keep track of all the decisions regarding the trial and to save the relevant data that will be used in the meta-analysis.
We will use an electronic database (Excel) to record the trials. The data collection form will be in Microsoft Word format, so that open-ended data may be recorded. This electronic format will also enable the review authors to share and compare their work over the internet.
Two review authors will independently perform a pilot test of the database and the data collection form, to try to detect flaws that need correction.
We will highlight and correct any errors in data entry and keep track of them in the data extraction form.
We will use the RevMan 5 software to analyse the data.
Assessment of risk of bias in included studies
Bias, or systematic error, means that repeating the study many times would, on average, produce a flawed result. We will investigate the following sources of bias.
Selection bias. Systematic differences between baseline characteristics of the groups that are compared. Randomisation of the allocation to treatment/control group tries to prevent selection bias.
Performance bias. Systematic differences between groups regarding the care that is provided, or in exposure to factors other than the interventions of interest. Blinding seeks to prevent this bias.
Detection bias. Systematic differences between groups in how outcomes are determined. Blinding the personnel who carry out the measurement helps reduce the risk that knowing the intervention received will affect the determination of the observation, especially in the case of a subjective variable, such as the intensity of vertigo.
Attrition bias refers to systematic differences between groups in withdrawals from a study. Missing data may arise because information that was available to the trial authors was not reported (exclusion) or because the data were never obtained (attrition).
Reporting bias refers to systematic differences between reported and unreported findings.
MMF and JB will undertake assessment of the risk of bias of the included trials independently. We will use the Cochrane 'Risk of bias' tool in RevMan 5 (RevMan 2012), which involves describing each of seven domains as reported in the trial and then assigning a judgement about the adequacy of each entry: 'low', 'high' or 'unclear' risk of bias. The following domains will be taken into consideration, as guided by the Cochrane Handbook for Systematic Reviews of Interventions (Handbook 2011).
Random sequence generation
Allocation concealment
Blinding of participants and personnel (double-blinding)
Blinding of outcome assessment
Incomplete outcome data
Selective outcome reporting. There are no well-developed statistical methods to detect within-study reporting biases, therefore we will use the following methods. If there is access to the protocol of the trial, we will compare the objectives in the protocol with the actual results reported in the trial. If the protocol is not available, we will compare the objectives mentioned in the methods section with the actual data reported in the results section. If there are discrepancies, we will report these and contact the authors to clarify them. They will be asked to provide the protocol and the full report of the results. We will measure the possible impact of selective outcome reporting using sensitivity analysis.
Other sources of bias.
We will carry out a pilot study with three to six trials, in which two independent review authors will assess the consistent application of the risk of bias criteria, to see if consensus may be reached. One of the review authors will be a methodological expert and another a content expert.
Measures of treatment effect
MMF, JB and GI will enter the data into RevMan 5, then analyse and interpret them.
Measurement of the treatment effect compared to the control (placebo or other interventions) will try to answer the following questions:
What is the direction of effect?
What is the size of effect?
Is the effect consistent across studies?
What is the strength of evidence for the effect?
In our study we expect to find several treatments for migraine-related vestibular symptoms. We will analyse the different treatments independently in order to achieve meaningful comparisons (that is, we will avoid merging studies that analyse different active drugs). We will group studies according to the active treatment studied.
We will consider that studies are totally comparable when they use the same drug, dose and the same route of administration, and when the control group receives the same alternative treatment (alternative drug/placebo). For studies that use different routes, or different doses, we will include the trials in the group but will carry out sensitivity analysis to find out the impact of their inclusion on the outcome of the review.
We expect to include parallel trials, although other types of design might be encountered (e.g. cross-over).
We will perform a meta-analysis within each homogeneous group we encounter (defined by the treatment studied).
Several types of data are likely to be obtained:
dichotomous data (some studies might classify participants according to whether or not they suffered vertigo symptoms, although a time-to-event design would be more appropriate; presence or absence of adverse effects);
continuous data (number of crises, duration of symptoms);
ordinal scales to classify severity of symptoms;
counts and rates (number of events that each individual experiences); and
censored time-to-event data (e.g. time to a vertigo crisis).
For binary (dichotomous) data we expect OR (odds ratio), RR (relative risk or risk ratio), RD (risk difference, also called absolute risk reduction) and NNT (number of participants needed to treat to avoid a case of the disease).
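As a purely illustrative sketch of how these binary effect measures relate to one another, the following computes the OR, RR, RD and NNT from a hypothetical 2 × 2 table (the counts are invented for illustration and do not come from any trial):

```python
# Hypothetical 2x2 table: events and group sizes are invented for illustration.
events_t, total_t = 10, 50   # treatment group: 10 of 50 had a vertigo event
events_c, total_c = 20, 50   # control group: 20 of 50 had a vertigo event

risk_t = events_t / total_t               # risk in treatment group = 0.2
risk_c = events_c / total_c               # risk in control group = 0.4
rr = risk_t / risk_c                      # risk ratio (RR) = 0.5
rd = risk_t - risk_c                      # risk difference (RD) = -0.2
nnt = 1 / abs(rd)                         # number needed to treat (NNT) = 5
odds_t = events_t / (total_t - events_t)  # odds in treatment group = 0.25
odds_c = events_c / (total_c - events_c)  # odds in control group ≈ 0.667
odds_ratio = odds_t / odds_c              # odds ratio (OR) = 0.375
```

Note that with a common event (40% in the control group) the OR (0.375) diverges noticeably from the RR (0.5), which is one reason the choice of effect measure matters.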
For the effect measures of continuous data we anticipate the use of the mean difference (MD) between the groups, if we find that the different studies used the same measuring scale, and the SMD (standardised mean difference or, properly, the difference in standardised means) if they used different scales to measure the variable. The SMD assumes that all variability among studies comes from differences in the scale of measurement, which may not be the case, for example if pragmatic trials are included in the comparison. If the SMD is used, we will take care to ensure that the direction of all scales is the same (i.e. that all scales increase with disease severity). If not, we will multiply the mean values from scales that decrease with disease severity by -1 and record this step in the data extraction form.
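To make the direction-of-scale step concrete, here is a minimal sketch using the Cohen's d form of the SMD (one common variant) with invented numbers; it shows that negating the means from a scale that decreases with severity yields the same SMD as the equivalent increasing scale:

```python
import math

def smd(mean_t, mean_c, sd_t, sd_c, n_t, n_c, scale_increases_with_severity=True):
    """Standardised mean difference in its Cohen's d form (one common variant).
    If a scale decreases with disease severity, the means are multiplied by -1
    so that all studies point the same way."""
    if not scale_increases_with_severity:
        mean_t, mean_c = -mean_t, -mean_c
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# The same hypothetical data expressed on two scales of opposite direction:
a = smd(20.0, 30.0, 10.0, 10.0, 25, 25)   # scale increases with severity
b = smd(-20.0, -30.0, 10.0, 10.0, 25, 25, scale_increases_with_severity=False)
# a and b are equal (-1.0), so the flipped study pools correctly.
```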
For ordinal data studies, we will check the reference to the ordinal scale used, first to see if the scale has been validated (and therefore measures what it claims to measure) and, secondly, to be sure that the authors of the study have not used a version of the scale adapted by themselves. Although special methods for proportional odds ratios exist for analysing ordinal outcome data, they are not available in RevMan 5. As suggested in the Cochrane Handbook for Systematic Reviews of Interventions, we will analyse small ordinal scales as dichotomous and large ordinal scales as continuous.
We will analyse counts and rates (the number of events, such as crises of vertigo, that each individual experiences) with rate ratios (RR) in the case of rare events (with a Poisson distribution). If the events are common, we will treat them like continuous outcome data.
We will analyse time-to-event data using survival analysis and express intervention effects as hazard ratios, defined as how many times more (or less) likely a participant is to suffer the event at a particular point in time if they receive the experimental rather than the control intervention. We will make the proportional hazard assumption (the hazard ratio is considered constant across the follow-up period, even though hazards themselves may vary continuously).
Unit of analysis issues
Cluster-randomised trials allocate groups instead of individuals. The participants in each group may be related in some way, therefore this needs to be taken into account in the analysis, otherwise we would incur a unit of analysis error (the allocation unit being different from the analysis unit), which would produce an artificially small P value and a risk of false positive results. For this purpose we will use a special statistical method (multilevel model, variance components analysis or generalised estimating equations), with the appropriate statistical advice.
Cross-over trials of pharmacological treatments for vestibular migraine are not expected to have a strong carry-over effect. In addition, outcomes are not irreversible and the nature of the disease does not change significantly over time as in the case of a patient with a degenerative condition like Alzheimer's disease. However, vestibular migraine crises are, in our opinion, too acute to be studied with a cross-over design. For this reason we will exclude cross-over trials from the meta-analysis.
If we find studies with more than two groups (several active treatments being tested, or several placebos being used), we will establish which of the comparisons are relevant to the systematic review, and relevant to each of the meta-analyses that we may implement. If the study has independent groups (e.g. a group receiving drug A, a control group for A, a group receiving drug B and a control group for B), we can confidently treat the study as a set of independent comparisons. However, if participants have been included in several groups, a risk of unit of analysis error would exist. In this case, we would have to combine groups and create a single pair-wise comparison. We will try to avoid selecting one pair of comparisons and discarding the rest, because this would mean losing information.
Repeated observations on participants
In long studies, we expect that results may be collected at several time points (e.g. three-month, six-month and one-year follow-up). To avoid unit of analysis error when combining these results in a single meta-analysis (and therefore counting the same participants in more than one comparison), we may do the following:
if possible, retrieve individual patient data and perform a time-to-event analysis using the whole follow-up for each participant;
try to establish secondary outcomes defining short-term, medium-term and long-term effects of the intervention.
Dealing with missing data
In the case of missing data from trials, we will contact the authors for clarification. If no useful response is obtained, we will treat the missing data according to whether they are judged to be 'missing at random', in which case their effect may not be important, or 'missing not at random', where the missing data may affect the overall result. In the first case, the data can be ignored. For data not missing at random, we will impute the mean of the remaining data as a substitute value for the trial.
We will perform sensitivity analysis in these cases, to assess the impact of missing data in the overall result. In any case, we will address the fact that missing data may affect the results in the Discussion section of the review.
Assessment of heterogeneity
We expect that the trials included in the systematic review will have been performed according to different protocols, therefore a certain degree of heterogeneity will be anticipated, due to differences in the participants, clinical settings or ways used to deliver the treatment. The presence of considerable heterogeneity will not exclude the studies from subsequent meta-analysis.
A rule of thumb for checking whether the results of the trials are homogeneous is to compare the mean outcomes across trials and see if the results are consistent. Another is to check whether the confidence intervals of the trial results overlap.
A statistical way to look for heterogeneity is the Chi² test. This method has two main problems. One is that the power of the test is low when the number of trials is small, so a non-significant result cannot be taken as proof of homogeneity. The other is that the test indicates the presence of heterogeneity but not its extent. Since a low number of trials is the expected situation in this systematic review, we will quantify heterogeneity using the I² statistic (Higgins 2003). I² ranges from 0% to 100%, with 0% meaning a complete lack of heterogeneity and larger values meaning increasing heterogeneity.
According to the I² results, we will interpret heterogeneity as follows (Handbook 2011):
0% to 40%: might not be important;
30% to 60%: may represent moderate heterogeneity;
50% to 90%: may represent substantial heterogeneity;
75% to 100%: considerable heterogeneity.
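The I² statistic is derived from Cochran's Q and its degrees of freedom; a minimal sketch (the Q values below are invented for illustration, not from any trial):

```python
def i_squared(q, df):
    """I² = max(0, (Q - df) / Q) x 100%, where Q is Cochran's Q statistic
    and df is the number of studies minus one (Higgins 2003)."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# With five studies (df = 4) and two hypothetical Q values:
low = i_squared(4.0, 4)    # 0.0%  -> heterogeneity might not be important
high = i_squared(16.0, 4)  # 75.0% -> substantial to considerable heterogeneity
```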
Assessment of reporting biases
We will use funnel plots (scatter plots of the treatment effect estimates from individual studies against the standard error of the effect in each study) to detect reporting biases. We will try to spot void areas in the scatter plot that might correspond to studies that, for some reason, have not been published. We will use RevMan 5 for this purpose. However, we are aware that funnel plot asymmetry detects small-study effects, which may be due to publication bias but also to other causes, such as the poorer methodological quality typical of small studies. Heterogeneity is another possible source of funnel plot asymmetry (severely affected patients, who may respond more markedly to treatment, are more likely to be included in the early, smaller studies). Sampling variation and chance are further possible explanations for plot asymmetry.
We will plot ratio measures of intervention effect (such as odds ratios and risk ratios) on a logarithmic scale so that effects of the same size but opposite directions (e.g. an OR of 0.5 and an OR of 2) are equidistant from 1.
We will use tests for funnel plot asymmetry only if at least 10 studies are included in the meta-analysis (for fewer studies the test would not distinguish between chance and real asymmetry). We will interpret the results of the test according to the visual information in the funnel plot.
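The reason for the logarithmic scale can be checked directly: log-transformed ratio effects of equal size but opposite direction lie symmetrically about the null value. A minimal sketch:

```python
import math

# On a log scale the null value OR = 1 maps to 0, and an OR of 0.5 and
# an OR of 2 lie the same distance either side of it:
d_half = abs(math.log(0.5) - math.log(1.0))  # distance of OR 0.5 from the null
d_two = abs(math.log(2.0) - math.log(1.0))   # distance of OR 2 from the null
# d_half == d_two (≈ 0.693), whereas on the raw scale the distances
# would be 0.5 and 1.0, giving a spuriously asymmetric plot.
```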
Choosing between fixed-effect and random-effects models
If there is substantial heterogeneity in the methodology in the different studies, we will choose a random-effects model (Handbook 2011).
We will analyse the data by including in the analysis the total number of participants originally allocated to each group (intervention/alternative treatment), instead of the number who actually completed the trial in each group (the intention-to-treat method).
We will carry out meta-analysis as a two-step procedure:
We will calculate a summary statistic describing the observed intervention effect for each study.
We will calculate a pooled intervention effect as a weighted mean of the summaries of each study.
We will use the standard error of the pooled effect to calculate the confidence interval to measure the precision of the summary estimate. We will obtain a P value to assess the strength of the evidence against the null hypothesis of no intervention effect.
We will measure variation of the study results to detect inconsistencies of intervention effects across studies.
We will perform meta-analysis of continuous data with the inverse-variance fixed-effect method and the inverse-variance random-effects method, as needed. For meta-analysis of dichotomous data, we will use the Mantel-Haenszel method for the fixed-effect model and the DerSimonian and Laird method for the random-effects model. In every case we will use RevMan 5.
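The two-step procedure described above can be sketched for the inverse-variance fixed-effect case. This is an illustrative outline with invented study effects, not a substitute for the RevMan 5 analysis:

```python
import math

def pooled_fixed_effect(effects, std_errors):
    """Inverse-variance fixed-effect pooling: each study is weighted by
    1/SE², and the pooled effect is the weighted mean of the study effects."""
    # Step 1: a summary statistic (effect, SE) per study is assumed given.
    weights = [1.0 / se ** 2 for se in std_errors]
    # Step 2: pooled effect as a weighted mean of the study summaries.
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))  # SE of the pooled effect
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)  # 95% CI
    return pooled, se_pooled, ci

# Two hypothetical studies (effects on, say, a log odds ratio scale):
pooled, se, ci = pooled_fixed_effect([0.5, 0.3], [0.1, 0.2])
```

The smaller standard error of the first study gives it four times the weight of the second, so the pooled estimate (0.46) sits much closer to 0.5 than to 0.3.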
Subgroup analysis and investigation of heterogeneity
In order to detect variations of effect related to characteristics of the population (male/female) or the intervention (dose of the drugs or route of administration), we may perform subgroup analysis. However, we will regard subgroup analysis with extreme caution due to the risk of finding spurious associations because repeated comparisons are made. We will use the RevMan 5 tool if subgroup analyses are carried out.
To measure the robustness of the results, we will perform sensitivity analysis to see if the conclusion obtained by the review is affected by the estimation of uncertain data.
If a problem with the data is detected using sensitivity analysis, we will present the final results in the form of a summary table, instead of individual forest plots.