To assess the conditions under which employing an overview of systematic reviews is likely to lead to a high risk of bias.

To synthesise existing guidance concerning overview practice, a scoping review was conducted. Four electronic databases were searched with a pre-specified strategy (PROSPERO 2015:CRD42015027592) ending October 2015. Included studies needed to describe or develop overview methodology. Data were narratively synthesised to delineate areas highlighted as outstanding challenges or where methodological recommendations conflict.

Twenty-four papers met the inclusion criteria. There is emerging debate regarding overlapping systematic reviews; systematic review scope; quality of included research; updating; and synthesising and reporting results. While three functions for overviews have been proposed—identify gaps, explore heterogeneity, summarise evidence—overviews cannot perform the first, are unlikely to achieve the second and third simultaneously, and can only perform the third under specific circumstances, namely when the identified systematic reviews meet four conditions: (1) they include primary trials that do not substantially overlap, (2) they match the overview's scope, (3) they are of high methodological quality, and (4) they are up-to-date.

Considering the intended function of proposed overviews with the corresponding methodological conditions may improve the quality of this burgeoning publication type. Copyright © 2017 John Wiley & Sons, Ltd.

When we speak about heterogeneity in a meta-analysis, our intent is usually to understand the substantive implications of the heterogeneity. If an intervention yields a mean effect size of 50 points, we want to know if the effect size in different populations varies from 40 to 60, or from 10 to 90, because this speaks to the potential utility of the intervention. While there is a common belief that the *I*^{2} statistic provides this information, it actually does not. In this example, if we are told that *I*^{2} is 50%, we have no way of knowing if the effects range from 40 to 60, or from 10 to 90, or across some other range. Rather, if we want to communicate the predicted range of effects, then we should simply report this range. This gives readers the information they think is being captured by *I*^{2} and does so in a way that is concise and unambiguous. Copyright © 2017 John Wiley & Sons, Ltd.
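To make the contrast concrete, the following stdlib-only sketch (ours, not the paper's) computes both *I*^{2} and an approximate 95% predicted range of true effects from the same study-level inputs; the DerSimonian–Laird estimator and the z-based interval are simplifying assumptions (prediction intervals are often based on a *t* quantile instead):

```python
import math

def meta_random_effects(effects, variances):
    """Return I^2, the DerSimonian-Laird tau^2, and an approximate
    95% predicted range of true effects (z-based; a t quantile is
    often used in practice)."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    mu_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    Q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)          # DerSimonian-Laird estimate
    i2 = max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    w_re = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_mu = math.sqrt(1.0 / sum(w_re))
    half = 1.96 * math.sqrt(tau2 + se_mu ** 2)  # half-width of predicted range
    return i2, tau2, (mu - half, mu + half)
```

Two meta-analyses can share the same *I*^{2} yet yield very different predicted ranges, which is the abstract's point: report the range itself.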

Recently, the number of published regression models has increased dramatically across academic fields. Within the context of meta-analysis, however, synthesis methods for such models have not developed at a commensurate pace. One difficulty hindering their development is the disparity in covariate sets among published models: if the sets of covariates differ across models, the interpretation of the coefficients differs, making them difficult to synthesize. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often face problems because the covariance matrix of the coefficients (i.e. the within-study correlations) or individual patient data are not necessarily available. This study therefore proposes a method to synthesize linear regression models with different covariate sets by using a generalized least squares method involving bias correction terms. In particular, we also propose an approach to recover (at most) three correlations of covariates, which are required for calculating the bias correction term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.

Temporal changes in magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that a meta-analysis can be updated when necessary and interpreted based on the time it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter *τ*^{2}. In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated on two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
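The trajectory that such a chart monitors is the sequence of cumulative pooled estimates as studies accrue. A minimal fixed-effect sketch of that sequence (ours; the paper's method adds the random-effects model, a CUSUM-type statistic, and bootstrap critical values) is:

```python
def cumulative_estimates(effects, variances):
    # Inverse-variance pooled estimate recomputed after each study is added,
    # in chronological order; shifts in this sequence are what a CUSUM-type
    # chart is designed to flag.
    out = []
    sum_w = sum_wy = 0.0
    for y, v in zip(effects, variances):
        w = 1.0 / v
        sum_w += w
        sum_wy += w * y
        out.append(sum_wy / sum_w)
    return out
```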

Using Toulmin's argumentation theory, we analysed the texts of systematic reviews in the area of workplace health promotion to explore differences in the modes of reasoning embedded in reports of narrative synthesis as compared with reports of meta-analysis. We used framework synthesis, grounded theory and cross-case analysis methods to analyse 85 systematic reviews addressing intervention effectiveness in workplace health promotion. Two core categories, or ‘modes of reasoning’, emerged to frame the contrast between narrative synthesis and meta-analysis: practical–configurational reasoning in narrative synthesis (‘what is going on here? What picture emerges?’) and inferential–predictive reasoning in meta-analysis (‘does it work, and how well? Will it work again?’). Modes of reasoning examined quality and consistency of the included evidence differently. Meta-analyses clearly distinguished between warrant and claim, whereas narrative syntheses often presented joint warrant–claims. Narrative syntheses and meta-analyses represent different modes of reasoning. Systematic reviewers are likely to be addressing research questions in different ways with each method. It is important to consider narrative synthesis in its own right as a method and to develop specific quality criteria and understandings of how it is carried out, not merely as a complement to, or second-best option for, meta-analysis. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.

Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product–moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product–moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
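As a sketch of the conversion involved (our illustration, using the classical formula rather than the particular estimators compared in the article), a point-biserial correlation can be rescaled to a biserial estimate under the assumption that the dichotomous variable arose by cutting a latent normal variable at proportion p:

```python
import math
from statistics import NormalDist

def biserial_from_point_biserial(r_pb, p):
    """Classical conversion r_b = r_pb * sqrt(p * (1 - p)) / phi(z_p),
    where z_p is the standard normal quantile at p and phi its density.
    Assumes an underlying normal variable dichotomized at proportion p."""
    nd = NormalDist()
    z = nd.inv_cdf(p)
    return r_pb * math.sqrt(p * (1.0 - p)) / nd.pdf(z)
```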

Although well developed to assess efficacy questions, meta-analyses and, more generally, systematic reviews, have received less attention in application to safety-related questions. As a result, many open questions remain on how best to apply meta-analyses in the safety setting. This appraisal attempts to: (i) summarize the current guidelines for assessing individual studies, systematic reviews, and network meta-analyses; (ii) describe several publications on safety meta-analytic approaches; and (iii) present some of the questions and issues that arise with safety data. A number of gaps in the current quality guidelines are identified along with issues to consider when performing a safety meta-analysis. While some work is ongoing to provide guidance to improve the quality of safety meta-analyses, this review emphasizes the critical need for better reporting and increased transparency regarding safety data in the systematic review guidelines. Copyright © 2016 John Wiley & Sons, Ltd.

We describe a meta-analytic scatterplot that indicates precision of points for two variables paired within studies; this is equivalent in form to a ‘cross-hairs’ plot used to portray specificity and sensitivity in diagnostic testing. At the user's discretion, the plot also displays boxplots for each of the X and Y variable distributions, means for each of the variables, and the correlation between the two. The cross-hairs may be suppressed for dense point clouds. The program is written in R, so it can be modified by the user and can serve as a companion to existing meta-analysis programs. Some of the program's novel uses are described and illustrated with (1) independent effect sizes, (2) dependent effect sizes, and (3) shrunken estimates. Copyright © 2016 John Wiley & Sons, Ltd.

Network meta-analysis is becoming a common approach to combine direct and indirect comparisons of several treatment arms. In recent research, there have been various developments and extensions of the standard methodology. Simultaneously, cluster randomized trials are experiencing an increased popularity, especially in the field of health services research, where, for example, medical practices are the units of randomization but the outcome is measured at the patient level. Combination of the results of cluster randomized trials is challenging. In this tutorial, we examine and compare different approaches for the incorporation of cluster randomized trials in a (network) meta-analysis. Furthermore, we provide practical insight into the implementation of the models. In simulation studies, it is shown that some of the examined approaches lead to unsatisfactory results. However, there are alternatives which are suitable for combining cluster randomized trials in a network meta-analysis, as they are unbiased and reach accurate coverage rates. In conclusion, the methodology can be extended in such a way that an adequate inclusion of the results obtained in cluster randomized trials becomes feasible. Copyright © 2016 John Wiley & Sons, Ltd.
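One simple approach in this spirit (our sketch of a standard correction, not necessarily one of the tutorial's recommended models) is to inflate a cluster randomized trial's naively computed variance by the design effect before entering it into the (network) meta-analysis:

```python
def design_effect_adjusted_variance(naive_variance, mean_cluster_size, icc):
    # Design effect for a cluster randomized trial with average cluster size m
    # and intracluster correlation coefficient rho: 1 + (m - 1) * rho.
    # Multiplying the naive variance by this factor deflates the trial's
    # effective weight in the pooled analysis.
    deff = 1.0 + (mean_cluster_size - 1.0) * icc
    return naive_variance * deff
```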

Meta-analyses in orphan diseases and small populations generally face particular problems, including small numbers of studies, small study sizes and heterogeneity of results. However, the heterogeneity is difficult to estimate if only very few studies are included. Motivated by a systematic review in immunosuppression following liver transplantation in children, we investigate the properties of a range of commonly used frequentist and Bayesian procedures in simulation studies. Furthermore, the consequences for interval estimation of the common treatment effect in random-effects meta-analysis are assessed. The Bayesian credibility intervals using weakly informative priors for the between-trial heterogeneity exhibited coverage probabilities in excess of the nominal level for a range of scenarios considered. However, they tended to be shorter than those obtained by the Knapp–Hartung method, which were also conservative. In contrast, methods based on normal quantiles exhibited coverages well below the nominal levels in many scenarios. With very few studies, the performance of the Bayesian credibility intervals is of course sensitive to the specification of the prior for the between-trial heterogeneity. In conclusion, the use of weakly informative priors as exemplified by half-normal priors (with a scale of 0.5 or 1.0) for log odds ratios is recommended for applications in rare diseases. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.

Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of ‘mixed-effects’ or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the ‘true’ regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
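A minimal single-moderator sketch of an unrestricted WLS meta-regression (our own, stdlib-only): weights are inverse sampling variances, and the coefficient standard error uses an estimated multiplicative dispersion φ, whereas FE-MRA would fix φ = 1.

```python
import math

def wls_meta_regression(y, x, v):
    # Weighted least squares with weights 1/v_i for one moderator x.
    # 'Unrestricted' means the residual scale phi is estimated from the data
    # rather than fixed at 1 as in conventional FE-MRA standard errors.
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    k = len(y)
    resid2 = sum(wi * (yi - a - b * xi) ** 2 for wi, xi, yi in zip(w, x, y))
    phi = resid2 / (k - 2)          # estimated multiplicative dispersion
    se_b = math.sqrt(phi / sxx)
    return a, b, se_b
```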

Phase I trials aim to establish appropriate clinical and statistical parameters to guide future clinical trials. With individual trials typically underpowered, systematic reviews and meta-analysis are desired to assess the totality of evidence. A high percentage of zero or missing outcomes often complicates such efforts. We use a systematic review of pediatric phase I oncology trials as an example and illustrate the utility of advanced Bayesian analysis. Standard random-effects methods rely on the exchangeability of individual trial effects, typically assuming that a common normal distribution sufficiently describes random variation among the trial level effects. Summary statistics of individual trial data may become undefined with zero counts, and this assumption may not be readily examined. We conduct Bayesian semi-parametric analysis with a Dirichlet process prior and examine the assumption. The Bayesian semi-parametric analysis is also useful for visually summarizing individual trial data. It provides alternative statistics that are computed free of distributional assumptions about the shape of the population of trial level effects. Outcomes are rarely entirely missing in clinical trials. We utilize available information and conduct Bayesian incomplete data analysis. The advanced Bayesian analyses, although illustrated with the specific example, are generally applicable. © 2016 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.

In meta-analysis, the random-effects model is often used to account for heterogeneity. The model assumes that heterogeneity has an additive effect on the variance of effect sizes. An alternative model, which assumes multiplicative heterogeneity, has been little used in the medical statistics community, but is widely used by particle physicists. In this paper, we compare the two models using a random sample of 448 meta-analyses drawn from the Cochrane Database of Systematic Reviews. In general, differences in goodness of fit are modest. The multiplicative model tends to give results that are closer to the null, with a narrower confidence interval. The two approaches make different assumptions about the outcome of the meta-analysis. In our opinion, the selection of the more appropriate model will often be guided by whether the multiplicative model's assumption of a single effect size is plausible. Copyright © 2016 John Wiley & Sons, Ltd.
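Under the multiplicative model, the pooled point estimate coincides with the fixed-effect estimate and only its variance is rescaled. A hedged sketch (ours; conventions differ on whether the dispersion factor is truncated at 1):

```python
import math

def multiplicative_heterogeneity(effects, variances):
    # Fixed-effect pooled estimate with its variance inflated by the
    # multiplicative dispersion phi = Q / (k - 1). Truncating phi at 1
    # is a convention that varies between implementations.
    k = len(effects)
    w = [1.0 / v for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    Q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, effects))
    phi = max(1.0, Q / (k - 1))
    se = math.sqrt(phi / sum(w))
    return mu, se
```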

Random-effects meta-analysis methods include an estimate of between-study heterogeneity variance. We present a systematic review of simulation studies comparing the performance of different estimation methods for this parameter. We summarise the performance of methods in relation to estimation of heterogeneity and of the overall effect estimate, and of confidence intervals for the latter. Among the twelve included simulation studies, the DerSimonian and Laird method was most commonly evaluated. This estimate is negatively biased when heterogeneity is moderate to high and therefore most studies recommended alternatives. The Paule–Mandel method was recommended by three studies: it is simple to implement, is less biased than DerSimonian and Laird and performs well in meta-analyses with dichotomous and continuous outcomes. In many of the included simulation studies, results were based on data that do not represent meta-analyses observed in practice, and only small selections of methods were compared. Furthermore, potential conflicts of interest were present when authors of novel methods interpreted their results. On the basis of current evidence, we provisionally recommend the Paule–Mandel method for estimating the heterogeneity variance, and using this estimate to calculate the mean effect and its 95% confidence interval. However, further simulation studies are required to draw firm conclusions. Copyright © 2016 John Wiley & Sons, Ltd.
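The Paule–Mandel estimator chooses τ² so that the generalized Q statistic equals its expectation k − 1. A stdlib-only sketch using a Newton-type iteration (our implementation, for illustration; production analyses should prefer a vetted library):

```python
def paule_mandel_tau2(y, v, tol=1e-10, max_iter=200):
    # Paule-Mandel: solve Q(tau^2) = k - 1, where Q uses weights
    # 1 / (v_i + tau^2), via a Newton-type update; truncate at zero.
    k = len(y)
    tau2 = 0.0
    for _ in range(max_iter):
        w = [1.0 / (vi + tau2) for vi in v]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        Q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
        step = (Q - (k - 1)) / sum(
            wi ** 2 * (yi - mu) ** 2 for wi, yi in zip(w, y)
        )
        if tau2 == 0.0 and step <= 0:
            break  # Q already at or below its expectation; truncate at zero
        new = max(0.0, tau2 + step)
        if abs(new - tau2) < tol:
            tau2 = new
            break
        tau2 = new
    return tau2
```

With equal within-study variances, the Paule–Mandel and DerSimonian–Laird estimates coincide, which makes small worked checks easy.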

When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding studies to the set one by one, at each step choosing the study closest to the model fitted to the existing set. As each study is added to the set, plots of estimated parameters and measures of fit are monitored to identify outliers by sharp changes in the forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
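A toy sketch of the forward search ordering (ours, for a mean-only model without moderators; the paper's version monitors parameter estimates and fit measures at each step). Outliers tend to enter last, so sharp changes near the end of the order flag them:

```python
def forward_search_order(y, v, m0=3):
    # Initial subset: the m0 studies closest to the sample median,
    # taken as likely outlier-free.
    med = sorted(y)[len(y) // 2]
    idx = sorted(range(len(y)), key=lambda i: abs(y[i] - med))
    inset, outset = idx[:m0], idx[m0:]
    order = list(inset)
    while outset:
        # Fixed-effect fit to the current set, then add the study whose
        # standardized distance from the fit is smallest.
        w = [1.0 / v[i] for i in inset]
        mu = sum(wi * y[i] for wi, i in zip(w, inset)) / sum(w)
        nxt = min(outset, key=lambda i: abs(y[i] - mu) / v[i] ** 0.5)
        inset.append(nxt)
        outset.remove(nxt)
        order.append(nxt)
    return order
```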

Goodness of fit evaluation should be a natural step in assessing and reporting dose–response meta-analyses from aggregated data of binary outcomes. However, little attention has been given to this topic in the epidemiological literature, and goodness of fit is rarely, if ever, assessed in practice. We briefly review the two-stage and one-stage methods used to carry out dose–response meta-analyses. We then illustrate and discuss three tools specifically aimed at testing, quantifying, and graphically evaluating the goodness of fit of dose–response meta-analyses. These tools are the deviance, the coefficient of determination, and the decorrelated residuals-versus-exposure plot. Data from two published meta-analyses are used to show how these three tools can improve the practice of quantitative synthesis of aggregated dose–response data. In fact, evaluating the degree of agreement between model predictions and empirical data can help the identification of dose–response patterns, the investigation of sources of heterogeneity, and the assessment of whether the pooled dose–response relation adequately summarizes the published results. © 2015 The Authors. *Research Synthesis Methods* published by John Wiley & Sons, Ltd.

When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur at the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.

This paper investigates how inconsistency (as measured by the *I*^{2} statistic) among studies in a meta-analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta-analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta-analyses were obtained, which can inform priors for between-study variance. Inconsistency estimates were highest on average for binary outcome meta-analyses of risk differences and continuous outcome meta-analyses. For a planned binary outcome meta-analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta-analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta-analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta-analysis with an informative prior for heterogeneity. © 2015 The Authors.

When meta-analysing intervention effects calculated from continuous outcomes, meta-analysts often encounter few trials, with potentially a small number of participants, and a variety of trial analytical methods. It is important to know how these factors affect the performance of inverse-variance fixed and DerSimonian and Laird random effects meta-analytical methods. We examined this performance using a simulation study.

Meta-analysing estimates of intervention effect from final values, change scores, ANCOVA or a random mix of the three yielded unbiased estimates of pooled intervention effect. The impact of trial analytical method on the meta-analytic performance measures was important when there was no or little heterogeneity, but was of little relevance as heterogeneity increased. On the basis of larger than nominal type I error rates and poor coverage, the inverse-variance fixed effect method should not be used when there are few small trials.

When there are few small trials, random effects meta-analysis is preferable to fixed effect meta-analysis. Meta-analytic estimates need to be cautiously interpreted; type I error rates will be larger than nominal, and confidence intervals will be too narrow. Use of trial analytical methods that are more efficient in these circumstances may have the unintended consequence of further exacerbating these issues. © 2015 The Authors. *Research Synthesis Methods* published by John Wiley & Sons, Ltd.

Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis a model is fit to each level of the moderator simultaneously. By constraining parameters across groups any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis where both the mean and the between studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
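In the simplest two-group case, the comparison reduces to pooling within each moderator level and testing equality of the pooled means; in the SEM/MLM framing, this corresponds to freeing versus constraining the mean parameter across groups. A fixed-effect sketch (ours; the multiple group approach can equally compare between-study variances, which this sketch does not):

```python
import math

def pooled_fixed(effects, variances):
    # Inverse-variance pooled mean and standard error within one group.
    w = [1.0 / v for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return mu, math.sqrt(1.0 / sum(w))

def moderator_z(effects_a, variances_a, effects_b, variances_b):
    # z statistic for equality of pooled mean effects across two moderator
    # levels; constraining the mean across groups and comparing model fit
    # is the multiple-group analogue of this test.
    mu_a, se_a = pooled_fixed(effects_a, variances_a)
    mu_b, se_b = pooled_fixed(effects_b, variances_b)
    return (mu_a - mu_b) / math.sqrt(se_a ** 2 + se_b ** 2)
```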

In prognostic studies, a summary statistic such as a hazard ratio is often reported between low-expression and high-expression groups of a biomarker with a study-specific cutoff value. Recently, several meta-analyses of prognostic studies have been reported, but these studies simply combined hazard ratios provided by the individual studies, overlooking the fact that the cutoff values are study-specific. We propose a method to summarize hazard ratios with study-specific cutoff values by estimating the hazard ratio for a 1-unit change of the biomarker in the underlying individual-level model. To this end, we introduce a model for a relationship between a reported log-hazard ratio for a 1-unit expected difference in the mean biomarker value between the low-expression and high-expression groups, which approximates the individual-level model, and propose to make an inference of the model by using the method for trend estimation based on grouped exposure data. Our combined estimator provides a valid interpretation if the biomarker distribution is correctly specified. We applied our proposed method to a dataset that examined the association between the biomarker Ki-67 and disease-free survival in breast cancer patients. We conducted simulation studies to examine the performance of our method. Copyright © 2016 John Wiley & Sons, Ltd.
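The rescaling idea can be caricatured as follows (our simplification, not the paper's full model): if the biomarker is assumed normal, the reported low-versus-high log-hazard ratio is divided by the expected difference in biomarker means between the two groups, yielding a log-hazard ratio per 1-unit change. The function name and the normality assumption are purely illustrative:

```python
from statistics import NormalDist

def log_hr_per_unit(log_hr_group, cutoff, mean, sd):
    """Rescale a low-vs-high log-hazard ratio to a 1-unit change in the
    biomarker, assuming the biomarker is Normal(mean, sd) and was
    dichotomized at `cutoff`. Illustrative sketch only."""
    std = NormalDist()
    z = (cutoff - mean) / sd
    p_high = 1.0 - std.cdf(z)
    phi = std.pdf(z)
    # Conditional means of a normal variable above and below the cutoff.
    mean_high = mean + sd * phi / p_high
    mean_low = mean - sd * phi / (1.0 - p_high)
    return log_hr_group / (mean_high - mean_low)
```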

In a network meta-analysis, comparators of interest are ideally connected either directly or *via* one or more common comparators. However, in some therapeutic areas, the evidence base can produce networks that are disconnected, in which there is neither direct evidence nor an indirect route for comparing certain treatments within the network. Disconnected networks may occur when there is no accepted standard of care, when there has been a major paradigm shift in treatment, when use of a standard of care or placebo is debated, when a product receives orphan drug designation, or when there is a large number of available treatments and many accepted standards of care. These networks pose a challenge to decision makers and clinicians who want to estimate the relative efficacy and safety of newly available agents against alternatives. A currently recommended approach is to insert a distribution for the unknown treatment effect(s) into a network meta-analysis model of treatment effect. In this paper, we describe this approach along with two alternative Bayesian models that can accommodate disconnected networks. Additionally, we present a theoretical framework to guide the choice between modeling approaches. This paper presents researchers with the tools and framework for selecting appropriate models for indirect comparison of treatment efficacies when challenged with a disconnected framework. Copyright © 2016 John Wiley & Sons, Ltd.

The rapid review is an approach to synthesizing research evidence when a shorter timeframe is required. What is lost in terms of rigour and accuracy, and how much bias is increased, when conducting a rapid review has not yet been elucidated.

We assessed the potential implications of methodological shortcuts on the outcomes of three completed systematic reviews addressing agri-food public health topics. For each review, shortcuts were applied individually to assess the impact on the number of relevant studies included and whether omitted studies affected the direction, magnitude or precision of summary estimates from meta-analyses.

In most instances, the shortcuts resulted in at least one relevant study being omitted from the review. The omission of studies affected 39 of 143 possible meta-analyses, of which 14 were no longer possible because of insufficient studies (<2). When meta-analysis was possible, the omission of studies generally resulted in less precise pooled estimates (i.e. wider confidence intervals) that did not differ in direction from the original estimate.

The three case studies demonstrated the risk of missing relevant literature and its impact on summary estimates when methodological shortcuts are applied in rapid reviews. © 2016 The Authors. *Research Synthesis Methods* Published by John Wiley & Sons Ltd.

Systematic review (SR) abstracts are important for disseminating evidence syntheses to inform medical decision making. We assess reporting quality in SR abstracts using PRISMA for Abstracts (PRISMA-A), Cochrane Handbook, and Agency for Healthcare Research & Quality guidance.

We evaluated a random sample of 200 SR abstracts (from 2014) comparing interventions in the general medical literature. We assessed adherence to PRISMA-A criteria, problematic wording in conclusions, and whether “positive” studies described clinical significance.

On average, abstracts reported 60% of PRISMA-A checklist items (mean 8.9 ± 1.7, range 4 to 12). Eighty percent of meta-analyses reported quantitative measures with a confidence interval. Only 49% described effects in terms meaningful to patients and clinicians (e.g., absolute measures), and only 43% mentioned strengths/limitations of the evidence base. Average abstract word count was 274 (SD 89). Word count explained only 13% of score variability. PRISMA-A scores did not differ between Cochrane and non-Cochrane abstracts (mean difference 0.08, 95% confidence interval −1.16 to 1.00).

Of 275 primary outcomes, 48% were statistically significant, 32% were not statistically significant, and 19% did not report significance or results. Only one abstract described clinical significance for positive findings. For “negative” outcomes, we identified problematic simple restatements (20%), vague “no evidence of effect” wording (9%), and wishful wording (8%).

Improved SR abstract reporting is needed, particularly reporting of quantitative measures (for meta-analysis), easily interpretable units, strengths/limitations of evidence, clinical significance, and clarifying whether negative results reflect true equivalence between treatments. Copyright © 2016 John Wiley & Sons, Ltd.