Automated generation of node‐splitting models for assessment of inconsistency in network meta‐analysis
Abstract
Network meta‐analysis enables the simultaneous synthesis of a network of clinical trials comparing any number of treatments. Potential inconsistencies between estimates of relative treatment effects are an important concern, and several methods to detect inconsistency have been proposed. This paper is concerned with the node‐splitting approach, which is particularly attractive because of its straightforward interpretation, contrasting estimates from both direct and indirect evidence. However, node‐splitting analyses are labour‐intensive because each comparison of interest requires a separate model. It would be advantageous if node‐splitting models could be estimated automatically for all comparisons of interest.
We present an unambiguous decision rule to choose which comparisons to split, and prove that it selects only comparisons in potentially inconsistent loops in the network, and that all potentially inconsistent loops in the network are investigated. Moreover, the decision rule circumvents problems with the parameterisation of multi‐arm trials, ensuring that model generation is trivial in all cases. Thus, our methods eliminate most of the manual work involved in using the node‐splitting approach, enabling the analyst to focus on interpreting the results. © 2015 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd.
1 Introduction
Network meta‐analysis (Caldwell et al., 2005; Lumley, 2002; Lu and Ades, 2004) is a general framework for the synthesis of evidence from clinical trials comparing any number of treatments. It includes pair‐wise meta‐analysis (Hedges and Olkin, 1985) and indirect‐comparison meta‐analysis (Bucher et al., 1997; Song et al., 2003) as special cases (Jansen et al., 2011; Dias et al., 2013a). The key assumption underlying any meta‐analysis is exchangeability of the included trials (Lu and Ades, 2009). Violations of the exchangeability assumption can manifest as heterogeneity (within‐comparison variability) or inconsistency (between‐comparison variability). Although the most important defense against such violations is the a priori evaluation of trial design and population characteristics, the (statistical) evaluation of both heterogeneity and inconsistency is also important to ensure valid results from a network meta‐analysis.
A number of methods have been proposed to detect inconsistency (Lu and Ades, 2006; Dias et al., 2010; Lu et al., 2011; Higgins et al., 2012; Dias et al., 2013b), and they can be subdivided into three classes according to their approach to inconsistency. The ‘null’ approach, consisting only of the unrelated mean effects model, does not attempt to model inconsistency at all; it simply estimates each pair‐wise comparison separately. Inconsistency is then assessed by comparing the model fit and between‐study variance (heterogeneity) estimate of the pair‐wise comparisons against the results of the consistency model (Dias et al., 2013b). The ‘loop inconsistency’ approach proposes that inconsistency can only occur in closed loops in the evidence network and is exemplified by the inconsistency factors (Lu and Ades, 2006) and node splitting (Dias et al., 2010) models. The potential for loop inconsistency was first recognised in relation to indirect treatment comparisons (Bucher et al., 1997). These models increase the power with which inconsistency can be detected by limiting the degrees of freedom in the model. However, the presence of multi‐arm trials introduces ambiguities in how these models should be specified, especially for the inconsistency factors model. The ‘design inconsistency’ approach addresses this concern by introducing the concept of design inconsistency, in which ABC trials can be inconsistent with AB trials (Higgins et al., 2012). Essentially, the design inconsistency approach allocates additional degrees of freedom to resolve the ambiguity of loop inconsistency models. We view both the design‐by‐treatment‐interaction model (Higgins et al., 2012) and the two‐stage linear inference model (Lu et al., 2011) as belonging to this approach. The design inconsistency models also enable a global test for inconsistency across the network (Higgins et al., 2012), but the loop inconsistency models do not. On the other hand, the interpretation of individual parameters of the design‐by‐treatment‐interaction model is not straightforward because, in any multiparameter model, the meaning of each parameter depends on what other parameters are in the model. Conceptually, design inconsistencies are also hard to grasp: why would three‐arm trials result in systematically different results from two‐arm or four‐arm trials? Why would the included treatments be a better predictor of inconsistency than any other design or population characteristic?
Therefore, although the design inconsistency approach offers advantages, specifically unambiguous model specification and the global test for inconsistency, there are also reasons to favour the loop inconsistency approach. These are the clearer conception of inconsistency occurring in loops and the easier interpretation of local inconsistencies. The node‐splitting approach is especially attractive because inconsistency is evaluated one comparison at a time by separating the direct evidence on that comparison from the network of indirect evidence. The discrepancy between the estimates of relative treatment effects from these two sets of trials indicates the level of (in)consistency. However, node‐splitting analyses can be labour‐intensive, because each comparison of interest requires a separate model. Moreover, the analyst must decide which comparisons should be investigated, which is not trivial in the presence of multi‐arm trials. Finally, there may be several possible node‐splitting models for one comparison when it has been included in multi‐arm trials. In this paper, we present a decision rule to determine which comparisons to split that also ensures that each of the alternative node‐splitting models is valid. We build upon previous work on automated model generation for network meta‐analysis (van Valkenhoef et al., 2012a) to automatically generate the node‐splitting models.
Automation is not a substitute for proper understanding of the implemented statistical methods and their limitations. Rather, it reduces the effort that well‐versed users of the methods must expend, enabling them to focus on other issues. In addition, the statistical analysis of inconsistency is not a substitute for a thoughtful selection of trials prior to applying evidence synthesis. It is also unwise to investigate inconsistency alone while ignoring heterogeneity, as the two are closely related, and in one model, the heterogeneity parameter may absorb some of the variance that another model would classify as inconsistency. Finally, when significant inconsistency or excess heterogeneity is detected, the analyst faces the difficult question of how to address it. A careful analysis of the included trials and (local) discrepancies between their effect estimates is required to identify potential confounding factors. If a satisfactory explanation is found, the synthesis may be repaired, either by excluding the offending subset of trials or by correcting for the confounder through a meta‐regression analysis. Unexplained inconsistency or heterogeneity may mean that the meta‐analysis must be abandoned altogether, or at the very least must be interpreted with extreme caution.
2 Background
In this paper, we consider network meta‐analysis in a Bayesian framework (Dias et al., 2013a) and limit the discussion to homogeneous‐variance random‐effects models (Lu and Ades, 2004). First, we briefly review the consistency model, which is a simple extension of the Bayesian formulation of pair‐wise meta‐analysis. Then, we introduce node‐splitting models and, finally, review previous work on automated model generation for network meta‐analysis.
2.1 Consistency models
A network of evidence consists of a set of studies S numbered 1, …, n, where each study Si has a number of arms that evaluate a set of treatments T(Si), where we assume that each arm evaluates a unique treatment (thus we may identify an arm by its treatment). Moreover, we assume that the studies form a connected network, that is, that there is a path between any two treatments included in the network.

The trial-specific relative effect $\delta_{i,x,y}$ of treatment y compared with treatment x in study Si is assumed to be drawn from a normal distribution,

$$\delta_{i,x,y} \sim \mathcal{N}\left(d_{x,y}, \sigma^2_{x,y}\right),$$

where $d_{x,y}$ is the pooled relative effect and $\sigma^2_{x,y}$ is the random-effects variance, a measure of the heterogeneity between trials. In a homogeneous-variance model, these variances are identical, $\sigma^2_{w,x} = \sigma^2_{y,z} = \sigma^2$, for all comparisons in the treatment network (w, x, y, z are treatments, and w ≠ x, y ≠ z). In such a model, the covariances between comparisons in multi-arm trials work out to σ²/2 (Higgins and Whitehead, 1996):

$$\begin{pmatrix} \delta_{i,z,x} \\ \delta_{i,z,y} \\ \vdots \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} d_{z,x} \\ d_{z,y} \\ \vdots \end{pmatrix}, \begin{pmatrix} \sigma^2 & \sigma^2/2 & \cdots \\ \sigma^2/2 & \sigma^2 & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \right). \tag{1}$$

Under the consistency assumption, the relative effect of any pair of treatments can be expressed in terms of comparisons with a common treatment z, for example:

$$d_{x,y} = d_{z,y} - d_{z,x}. \tag{2}$$

The right-hand-side parameters are the basic parameters, for which we estimate probability distributions. Although a network containing m treatments can have up to m(m − 1)/2 comparisons, it will have only m − 1 basic parameters. Any other relative effect can be calculated from the consistency relations. Hence $d_{x,y}$, a functional parameter, is completely defined in terms of the basic parameters on the right-hand side. Although the basic parameters are usually expressed relative to a common reference treatment (e.g. z in the aforementioned example), that is not a requirement (van Valkenhoef et al., 2012a).
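As a small numerical illustration of these relations (a sketch, not tied to any package; all values are hypothetical), the covariance structure of Equation (1) and the consistency relation (2) can be written in R as follows:

```r
# Random-effects covariance matrix for the (a - 1) relative effects of an a-arm
# trial in a homogeneous-variance model: sigma^2 on the diagonal and sigma^2 / 2
# off the diagonal, as in Equation (1).
re.covariance <- function(a, sigma) {
  V <- matrix(sigma^2 / 2, nrow = a - 1, ncol = a - 1)
  diag(V) <- sigma^2
  V
}
re.covariance(4, sigma = 0.3)  # covariance matrix for a four-arm trial

# A functional parameter follows from the basic parameters via Equation (2),
# d[x,y] = d[z,y] - d[z,x]; the values below are hypothetical.
d.zx <- 0.4
d.zy <- 0.7
d.xy <- d.zy - d.zx  # 0.3
```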
2.2 Node‐splitting models
A node-splitting analysis (Dias et al., 2010) splits one of the treatment comparisons, say $d_{x,y}$, into a parameter for direct evidence, $d^{\mathrm{dir}}_{x,y}$, and a parameter for indirect evidence, $d^{\mathrm{ind}}_{x,y}$, in order to assess whether they are in agreement (i.e. the hypothesis is that $d^{\mathrm{dir}}_{x,y} = d^{\mathrm{ind}}_{x,y}$). The term node-splitting may be confusing for some, because the treatment network represents a comparison as an edge rather than a node (or vertex). However, in the Bayesian hierarchical model, each parameter is represented by a node in a directed acyclic graph. When one of these parameters is split into two to evaluate conflict, the term node-splitting is used. A node-splitting analysis is thus performed separately for each of the comparisons in the treatment network on which both direct and indirect evidence are available, to assess evidence consistency. The trials that compare x and y directly are used to estimate $d^{\mathrm{dir}}_{x,y}$, and a network meta-analysis of the remaining evidence is used to estimate $d^{\mathrm{ind}}_{x,y}$. The heterogeneity parameter σ² is shared between direct and indirect evidence to enable estimation even when the direct evidence consists of few trials. However, node-splitting models for various comparisons and the consistency model will result in different estimates for σ², and comparing these estimates may also shed some light on potential inconsistencies (Dias et al., 2010). A two-arm trial comparing x and y could thus be parameterised relative to the reference treatment x as:

$$\delta_{i,x,y} \sim \mathcal{N}\left(d^{\mathrm{dir}}_{x,y}, \sigma^2\right).$$

A multi-arm trial that includes both x and y, say a four-arm wxyz trial, could be parameterised relative to x as:

$$\delta_{i,x,y} \sim \mathcal{N}\left(d^{\mathrm{dir}}_{x,y}, \sigma^2\right), \qquad \begin{pmatrix} \delta_{i,x,w} \\ \delta_{i,x,z} \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} d_{x,w} \\ d_{x,z} \end{pmatrix}, \begin{pmatrix} \sigma^2 & \sigma^2/2 \\ \sigma^2/2 & \sigma^2 \end{pmatrix} \right).$$

We do not want $d^{\mathrm{dir}}_{x,y}$ to interact with any of the other $d_{*,*}$, and thus, $\delta_{i,x,y}$ is given a distribution independent from the other relative effects in the study. If $d_{x,y}$ has been investigated in multi-arm trials, the node-split model can be parameterised in multiple ways. In the aforementioned parameterisation of the wxyz trial, x has been chosen as the reference treatment, thus leaving the y arm of this trial out of the network of indirect evidence. We could alternatively have chosen y as the reference treatment, giving another (non-equivalent) node-splitting model, where the x arm is omitted from the indirect evidence. Figure 1 illustrates this for a three-arm trial xyz: because there is no other evidence on the yz comparison, choosing x as the reference treatment for the multi-arm trial results in a model in which there is no indirect estimate for yz (Figure 1(b)). This can be rectified by choosing y as the reference treatment instead (Figure 1(c)). Even if the model results in an indirect estimate with either choice of reference treatment, the choice of reference treatment may affect the results. These issues are discussed further in Section 3.3.

Only those comparisons where an indirect estimate can be made should be split, so if a comparison is not part of any loop in the evidence graph, it should not be considered. Multi‐arm trials complicate this situation somewhat, because evidence within a multi‐arm trial is consistent by definition. Thus, if we consider a situation where the evidence structure consists of only multi‐arm trials including x, y and z, then even though the comparison dx,y is part of a loop, it cannot be inconsistent, and hence, no comparisons should be split. In complex networks that contain both two‐arm and multi‐arm trials, it may not be obvious whether there is potential inconsistency.
2.3 Note on relative‐effect data

Studies that report relative effects (contrasts) rather than arm-level data can be included through a multivariate normal likelihood, $y_i \sim \mathcal{N}(\delta_i', \Sigma)$, where $y_i$ is the vector of contrasts reported for study i, typically expressed against a specific chosen reference treatment, which may differ from the desired reference treatment. The variance-covariance matrix Σ is fully determined by the marginal variances of each contrast and the variance of the absolute effect in the reference arm (Franchini et al., 2012). If $\delta_i$ is the vector of relative effects against the desired reference treatment, then there is a matrix A such that $\delta_i' = A\delta_i$. The likelihood then becomes:

$$y_i \sim \mathcal{N}(A\delta_i, \Sigma).$$
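As a small sketch of this construction (all numbers hypothetical; independent of the gemtc implementation), consider a four-arm study that reports three contrasts against its own reference arm:

```r
# Standard errors of the three reported contrasts and the variance of the absolute
# effect in the study's reference arm (hypothetical values)
se.y    <- c(0.30, 0.25, 0.40)
var.ref <- 0.05^2

# The shared reference arm induces a common covariance equal to var.ref
# (Franchini et al., 2012); the diagonal holds the marginal variances.
Sigma <- matrix(var.ref, nrow = 3, ncol = 3)
diag(Sigma) <- se.y^2

# If the model parameterises the study against a different reference arm (here,
# the arm of the first reported contrast), the reported contrasts are a linear
# function A of the model's relative effects delta, so the likelihood mean is A %*% delta.
A <- rbind(c(-1, 0, 0),
           c(-1, 1, 0),
           c(-1, 0, 1))
delta  <- c(0.2, 0.5, 0.1)  # hypothetical relative effects against the new reference
mean.y <- A %*% delta       # expected values of the reported contrasts
```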

2.4 Automated model generation
Automated model generation for network meta‐analysis consists of generating the model structure (choosing the basic parameters and study reference treatments) and choosing appropriate priors and starting values (van Valkenhoef et al., 2012a). It was previously shown that for consistency models, the choice of basic parameters and study reference treatments is arbitrary, so long as the basic parameters form a spanning tree of the evidence network (van Valkenhoef et al., 2012a), but for inconsistency models that does not hold (Lu and Ades, 2006; van Valkenhoef et al., 2012b). A spanning tree is a sub‐network that connects all vertices of the original network, but contains no loops. To the best of our knowledge, no work has been published on model generation for node‐splitting models. General strategies for choosing vague priors and for generating starting values for the Markov chains are given in van Valkenhoef et al. (2012a).
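To illustrate the spanning-tree requirement (a sketch, not the algorithm used by gemtc), one valid set of basic parameters can be found by growing a tree over the treatments of a connected network given as a list of study treatment sets:

```r
spanning.tree <- function(studies) {
  # studies: a list of character vectors, one per study, giving its treatments;
  # the network is assumed to be connected
  treatments <- unique(unlist(studies))
  visited <- treatments[1]
  edges <- list()
  while (length(visited) < length(treatments)) {
    for (trt in setdiff(treatments, visited)) {
      # treatments that co-occur with trt in some study
      partners <- unique(unlist(studies[sapply(studies, function(t) trt %in% t)]))
      link <- intersect(partners, visited)
      if (length(link) > 0) {
        # the comparison (link[1], trt) becomes a basic parameter
        edges[[length(edges) + 1]] <- c(link[1], trt)
        visited <- c(visited, trt)
      }
    }
  }
  edges
}

# Figure 1(a): trials xy, xz and xyz -> basic parameters d[x,y] and d[x,z]
spanning.tree(list(c("x", "y"), c("x", "z"), c("x", "y", "z")))
```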
The choice of prior for the heterogeneity parameter can have a large impact on its estimation, especially when few studies are available (Lambert et al., 2005). Because heterogeneity and inconsistency are closely linked, this choice will also affect the estimated degree of inconsistency. A similar phenomenon occurs in the frequentist framework, where the choice of estimators was shown to affect the detection of inconsistency (Veroniki et al., 2013). A sensitivity analysis may be necessary in some cases. An alternative or complementary approach is the use of prior data rather than heuristics or expert judgement to define the priors. A recent review of meta-analyses published in the Cochrane library investigated the random-effects variance commonly encountered in practice and stratified by outcome type, intervention type and medical specialty (Turner et al., 2012). The predictive distributions derived in that paper can be used as informative priors for the variance parameter σ² (Turner et al., 2012). A similar study provides informative priors for the variance in meta-analyses on the standardised mean-difference scale (Rhodes et al., 2015) and gives some guidance on how they may be applied on the mean-difference scale as well. In principle, the same approach applies to other scales, and future research may produce the necessary data and methods.
3 Model generation
In a node-splitting analysis of inconsistency, the first problem is deciding which comparisons can and should be assessed using a node-splitting model. Then, given a comparison to be split, the usual model-generation problems have to be solved. Priors and starting values for node-splitting models can be chosen in the way described for consistency models (van Valkenhoef et al., 2012a), but generating the model structure can be problematic. If the comparison being split has only been assessed in two-arm trials, the network of evidence for $d^{\mathrm{ind}}_{x,y}$ can be analysed using a standard consistency model, and because the xy comparison must occur in a loop, the network is connected. Thus, as for consistency models, the choice of basic parameters and study reference treatments is arbitrary (van Valkenhoef et al., 2012a). However, in the presence of multi-arm trials, more than just the comparison of interest may be removed from the network. As an example, the evidence network in Figure 1(a) has trials xy, xz and xyz. If we split $d_{x,y}$ and choose x as the reference treatment for the xyz trial, $d^{\mathrm{ind}}_{x,y}$ cannot be estimated (Figure 1(b)). This happens because the estimation of $d^{\mathrm{ind}}_{x,y}$ requires an estimate of $d_{y,z}$, but the xyz trial has been parameterised using xy and xz, so there is no remaining evidence on yz. If we choose y as the reference treatment, the problem disappears (Figure 1(c)). This problem was pointed out earlier for loop inconsistency models (Lu and Ades, 2006). Our strategy carefully chooses the comparisons to split so that such problems do not occur and that the choice of basic parameters and study reference treatments is again arbitrary.
3.1 Defining potential inconsistency
To arrive at a rule on whether to split specific comparisons, we require a definition of when a loop in the evidence network is (potentially) inconsistent. Because there is no clear‐cut distinction between inconsistency and heterogeneity (Higgins et al., 2012; Jansen and Naci, 2013), finding the right definitions is difficult. For example, in a network where three treatments (x, y, z) have been investigated in a three‐arm trial xyz, but only two out of three comparisons have been investigated in two‐arm trials (Figure 1(a)), it is unclear whether loop inconsistency could occur. Clearly, the two‐arm trials on xy and xz could disagree with the three‐arm trial; but if they do, this would manifest not only as a loop inconsistency, but also as heterogeneity on xy and xz. On the other hand, if the x arm had been omitted from the xyz trial, loop inconsistency could clearly be present. Our position is that investigating inconsistency of this loop could yield additional insight beyond looking at heterogeneity alone and thus that this should be carried out. The network in Figure 2(a) is similar, in that we could view the differences between the four‐arm trial wxyz and the two‐arm trials wz and xy as heterogeneity on those comparisons, or as loop inconsistency on the wxyzw loop or the wyxzw loop. However, unlike in the previous example, if we remove any of the arms of the four‐arm trial, no potentially inconsistent loops remain. Therefore, we consider any discrepancies between the two‐arm trials and the four‐arm trial in this network to be heterogeneity. To reiterate, because heterogeneity and inconsistency cannot always be distinguished, many of these distinctions are somewhat arbitrary and could have been made differently. For example, in the design‐by‐treatment‐interaction model, differences between two‐arm and three‐arm trials are considered to be ‘design inconsistencies’ (Higgins et al., 2012). Our definitions focus on loop inconsistency alone, as the node‐splitting model does not evaluate design inconsistency.

To determine whether a given loop is potentially inconsistent, we use the definition of Lu and Ades (2006): there must be at least three independent sources of evidence supporting the (three or more) comparisons in the loop. We define trials (i.e. sources of evidence) as independent if their treatment sets, T(Si), differ on treatments in the loop under consideration. For example, when judging whether the loop xyzx can be inconsistent, wxy and xy trials are considered the same because w does not occur in the loop. This is so because different estimates from studies that include the same set of treatments are more appropriately viewed as heterogeneity (Jansen and Naci, 2013). We adopt a stronger condition for longer loops: loops where two or more comparisons are included in exactly the same set of multi‐arm trials are not considered potentially inconsistent, because inconsistency occurring in such a loop can more parsimoniously be viewed as inconsistency in simpler loops, or as heterogeneity. By this definition, the network in Figure 1(a) contains a potentially inconsistent loop xyzx, because the comparison xy is supported by the xy and xyz studies, the xz comparison by the xz and xyz studies and the yz by the xyz study, and hence, no two comparisons are supported by exactly the same set of studies. Conversely, the network in Figure 2(a) does not contain a potentially inconsistent loop, because no matter how we construct the loop, at least two comparisons will be supported only by the four‐arm trial.
In summary, we consider a loop to be potentially inconsistent when both of the following hold:

- Among the comparisons in the loop, no two comparisons share the same set of supporting studies
- The loop has at least three comparisons, and no comparison or treatment occurs more than once
The formal graph‐theoretic definition is given in Appendix A.
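As an illustration (a sketch, independent of the gemtc implementation), the two conditions can be checked directly for a given loop:

```r
# Sketch: check whether a loop is potentially inconsistent. 'loop' is a list of
# edges (length-2 character vectors) forming a valid cycle, and 'studies' a list
# of study treatment sets. r(e) is the set of studies including both treatments of e.
is.potentially.inconsistent <- function(loop, studies) {
  n <- length(loop)
  if (n <= 2) return(FALSE)
  r <- lapply(loop, function(e) which(sapply(studies, function(trt) all(e %in% trt))))
  # no two edges may be supported by exactly the same set of studies
  for (i in seq_len(n - 1)) {
    for (j in seq(i + 1, n)) {
      if (setequal(r[[i]], r[[j]])) return(FALSE)
    }
  }
  TRUE
}

# Figure 1(a): trials xy, xz and xyz; the loop xyzx is potentially inconsistent
studies <- list(c("x", "y"), c("x", "z"), c("x", "y", "z"))
loop <- list(c("x", "y"), c("y", "z"), c("z", "x"))
is.potentially.inconsistent(loop, studies)  # TRUE
```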
3.2 Choosing the comparisons to split
We give a simple decision rule to determine whether to split a specific comparison, based on properties of the evidence structure that are easily verified:
For a given set of studies S, split dx,y if and only if the modified network consisting of the studies S′ that do not include both x and y contains a path between x and y.
Intuitively, S′ is the set of studies that could generate inconsistency on the xy comparison. An advantage of this approach is that we do not need to assess the global inconsistency degrees of freedom, which currently have no completely satisfactory definition and no efficient algorithm (Lu and Ades, 2006; van Valkenhoef et al., 2012b). Figure 3 shows a number of examples to demonstrate how the rule works. Figure 3(a) shows a structure in which no inconsistency can occur: disagreement between the two‐arm and three‐arm trials would be modelled as heterogeneity on xy. When we evaluate the rule for the xy comparison, the modified network is empty (contains no studies), and thus, we do not split xy. For the xz comparison, the modified network contains only the xy studies, so x is not connected to z, and we do not split xz. The yz comparison is similar to the xy comparison. By contrast, Figure 3(b) has three independent sources of evidence and thus is potentially inconsistent. Here, the rule selects only the yz comparison to split, as the reduced network consists of xy and xz studies, and thus, y and z are connected in the modified network. In theory, we could split all three comparisons, but yz is the only comparison in which the choice between including either of the other comparisons from the three‐arm trial in the indirect evidence network is arbitrary (also see Figure 1). In Figure 3(c), all comparisons have pair‐wise evidence, and thus, all comparisons are selected to be split. From the loop inconsistency perspective, splitting all three comparisons is redundant, yet using a node‐splitting model, each of the three will have different results. This is due to the way multi‐arm trials are handled, and, for each comparison, a different choice of reference treatment for the multi‐arm trial may also result in different results. However, because heterogeneity and inconsistency are so closely related, if inconsistency is less present in one of these models, heterogeneity would be greater. Therefore, it is important to consider both together.
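The rule itself is easy to implement; the following R sketch (not the gemtc code) applies it to a network given as a list of study treatment sets:

```r
should.split <- function(studies, x, y) {
  # studies: a list of character vectors, one per study, giving its treatments
  # Modified network: the studies that do not include both x and y
  reduced <- Filter(function(trt) !(x %in% trt && y %in% trt), studies)
  # Flood fill from x over the modified network to test whether y is reachable
  visited <- x
  repeat {
    touches <- vapply(reduced, function(trt) any(trt %in% visited), logical(1))
    reachable <- union(visited, unlist(reduced[touches]))
    if (length(reachable) == length(visited)) break
    visited <- reachable
  }
  y %in% visited
}

# Network with xy, xz and xyz trials (as in Figure 1(a))
studies <- list(c("x", "y"), c("x", "z"), c("x", "y", "z"))
should.split(studies, "x", "y")  # FALSE: the modified network contains only the xz trial
should.split(studies, "y", "z")  # TRUE: y and z are connected via the xy and xz trials
```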

In Appendix A, we prove that the decision rule corresponds to the definitions of potential inconsistency set out in Section 3.1. In particular, we show that in any potentially inconsistent loop, we split at least one comparison, and conversely, that any comparison selected to be split is part of a potentially inconsistent loop.
3.3 Implications for model generation
In Section 2.4, we remarked that when the network consists only of two‐arm trials, the model generation problem for a node‐splitting model can be decomposed into generating a model for a pair‐wise meta‐analysis of the direct evidence and generating a consistency model for the indirect evidence. However, in general, this does not hold for networks with multi‐arm trials (Figure 1). Fortunately, we show in this section that the model generation does decompose in this way if the comparisons to be split are chosen according to the decision rule proposed in the previous section.
For a multi-arm trial Si that includes both x and y, there are three options for which arms to include in the network of indirect evidence:

1. Include the arms T(Si) − {x} in the model for $d^{\mathrm{ind}}_{x,y}$.
2. Include the arms T(Si) − {y} in the model for $d^{\mathrm{ind}}_{x,y}$.
3. Include the arms T(Si) − {x, y} in the model for $d^{\mathrm{ind}}_{x,y}$.

However, removing an additional arm from multi-arm trials, as in option (3), potentially decreases the precision of the indirect estimate $d^{\mathrm{ind}}_{x,y}$. If we decide to include either the x arm or the y arm of multi-arm trials, we can either consistently include the same arm for all trials (pure option (1) or (2)) or make this decision individually for each trial (a mixture of options (1) and (2)). In the pure case, there are two alternative models with potentially different results, whereas in the mixture case, there are $2^k$ alternative models, where k is the number of multi-arm trials that include both x and y.
This is illustrated for a simple network in Figure 4. Usually, only the pure options are considered (Dias et al., 2010), but one could argue that choosing a different included treatment in different trials can result in a more balanced evidence network, and might thus be preferred. In any case, exploring all $2^k + 1$ alternative models is generally infeasible, which is probably why it has not received any attention. Moreover, given the computationally intensive nature of model estimation, even estimating the two alternative models that correspond to options (1) and (2) is undesirable, and in practice, one of them is chosen arbitrarily.
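The counting can be made concrete with a short sketch (k is hypothetical):

```r
# For k multi-arm trials that include both x and y, each trial can drop either the
# x arm or the y arm from the indirect network, giving 2^k mixtures of options (1)
# and (2); option (3) adds one further model.
k <- 3
mixtures <- expand.grid(rep(list(c("drop x", "drop y")), k))
nrow(mixtures) + 1  # 2^k + 1 = 9 alternative models
```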

Now we show that model generation is trivial if the comparisons to be split are chosen according to the decision rule and if we parameterise the node‐splitting model according to option (3), and by extension that it is also trivial if we use option (1) or (2) instead. First, if the reduced network defined by the decision rule contains components that are not connected to the comparison xy under consideration, then we can safely remove those components from the network because they will not contribute to inconsistency on xy. However, it may be desirable to include disconnected components in the model to estimate the heterogeneity parameter, especially if estimates of this parameter are being compared between models. In addition, the decision rule guarantees that x and y are connected even in the absence of any trials that include the xy comparison. Given this, the network of indirect evidence can simply be analysed using a consistency model that connects x and y indirectly, so its parameterisation is arbitrary, and existing algorithms can be applied (van Valkenhoef et al., 2012a). Any disconnected components can be parameterised similarly and estimated in a single model in which the heterogeneity parameter is shared. Moreover, the direct evidence can be synthesised in a pair‐wise model, which is also trivial to parameterise.
This discussion extends to options (1) and (2) because x and y are already connected in the network of indirect evidence, so adding one of these arms back into the relevant multi‐arm trials will again result in a connected network, which can be parameterised as a consistency model with the amendment that the study reference effect parameter μi will be shared between the two sub‐models. The model corresponding to option (3) has no such shared reference treatment, as each multi‐arm study that includes the comparison being split is subdivided into two virtual studies: one including the two treatments of interest and another containing all remaining arms. If the second virtual study contains only a single arm, it can be eliminated altogether because it provides no information on relative effects.
Thus, however we decide to parameterise the node-splitting model, generating the model is trivial if the comparison being split was chosen according to the decision rule proposed in the previous section. The $2^k + 1$ alternative parameterisations correspond to $2^k$ mixtures of options (1) and (2) and a single model corresponding to option (3) described earlier. If a single model is to be estimated, one could argue that one of the $2^k$ mixtures of options (1) and (2) is preferred because these models make fuller use of the evidence, or that option (3) should be preferred because it results in a unique model that more closely mimics a consistency model.
4 Implementation and example
The methods have been implemented in version 0.6-1 of the gemtc package (http://cran.r-project.org/package=gemtc) for the R statistical software (http://www.r-project.org). Source code is available on GitHub: https://github.com/gertvv/gemtc/tree/0.6-1. gemtc currently generates node-splitting models according to option (3): for multi-arm trials that include the comparison being split, it includes neither treatment of that comparison in the network of indirect evidence. If the evidence network becomes disconnected as a result, the disconnected components are not discarded, but are included in the model to aid the estimation of the heterogeneity parameter. The R package can generate and estimate all relevant node-splitting models according to the decision rule proposed in this paper and summarise the results textually or graphically. Estimation uses standard Markov chain Monte Carlo software, and the package requires one of JAGS (Plummer, 2003), OpenBUGS (Lunn et al., 2009) or WinBUGS (Lunn et al., 2000) to be installed, as well as the corresponding R package. Because it is more actively maintained and integrates more nicely with R, we recommend JAGS and the rjags package.
In this section, we illustrate the methods and implementation using a worked example based on a real-life evidence network. The dataset consists of seven trials comparing placebo against four dopamine agonists (pramipexole, ropinirole, bromocriptine and cabergoline) as adjunct therapy for Parkinson's disease (Franchini et al., 2012). Parkinson's patients often experience fluctuations in their response to treatment: 'on-time' periods when the drugs appear to be effective alternate with 'off-time' periods when symptoms are not under control. We compare the drugs' ability to reduce the amount of 'off-time' relative to the amount of 'off-time' on placebo (both in conjunction with the background therapy). The data are summarised in Table 1, and the treatment network is shown in Figure 5. Naturally, automation is most useful for large and complex networks, but a small network makes the example easier to follow.
| Study | Treatment | Mean | Standard deviation | Sample size |
|---|---|---|---|---|
| 1 | A | −1.22 | 3.70 | 54 |
|   | C | −1.53 | 4.28 | 95 |
| 2 | A | −0.70 | 3.70 | 172 |
|   | B | −2.40 | 3.40 | 173 |
| 3 | A | −0.30 | 4.40 | 76 |
|   | B | −2.60 | 4.30 | 71 |
|   | D | −1.20 | 4.30 | 81 |
| 4 | C | −0.24 | 3.00 | 128 |
|   | D | −0.59 | 3.00 | 72 |
| 5 | C | −0.73 | 3.00 | 80 |
|   | D | −0.18 | 3.00 | 46 |
| 6 | D | −2.20 | 2.31 | 137 |
|   | E | −2.50 | 2.18 | 131 |
| 7 | D | −1.80 | 2.48 | 154 |
|   | E | −2.10 | 2.99 | 143 |
- A = placebo; B = pramipexole; C = ropinirole; D = bromocriptine; E = cabergoline.

Here, lines that start with a ‘>’ signify commands entered into R, and lines that do not are output of those commands. The output has been truncated (indicated by ‘…’) for inclusion in the paper, and R will display the full dataset given in Table 1. As aforementioned, we use system.file to find an XML file included with the gemtc package (produced using the discontinued Java‐based GeMTC graphical user interface) and load it using read.mtc.network. For new datasets, it is more convenient to use mtc.network to construct networks from R data frames structured like the previous output. In addition, mtc.data.studyrow can convert the one‐study‐per‐row format commonly used in BUGS code to the format used by gemtc. The package has a wide range of features for working with network meta‐analysis datasets and models, such as evidence network plots, convergence assessment diagnostics and plots and output summaries and visualisations. In this section, we only present the specific functionality for node‐splitting, and we refer the interested reader to the manual of the gemtc package for further information.
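A minimal sketch of how such an analysis might be run, entering the arm-level data of Table 1 directly as a data frame rather than loading the XML file (column names follow the gemtc conventions; function and argument names may differ between package versions):

```r
library(gemtc)

# Arm-level data from Table 1 (study, treatment, mean, std.dev, sampleSize)
data <- read.table(textConnection("
study treatment mean std.dev sampleSize
1 A -1.22 3.70 54
1 C -1.53 4.28 95
2 A -0.70 3.70 172
2 B -2.40 3.40 173
3 A -0.30 4.40 76
3 B -2.60 4.30 71
3 D -1.20 4.30 81
4 C -0.24 3.00 128
4 D -0.59 3.00 72
5 C -0.73 3.00 80
5 D -0.18 3.00 46
6 D -2.20 2.31 137
6 E -2.50 2.18 131
7 D -1.80 2.48 154
7 E -2.10 2.99 143"), header = TRUE)

network <- mtc.network(data)

# Generate and estimate the node-splitting models selected by the decision rule,
# then summarise the direct, indirect and network estimates for each comparison
result <- mtc.nodesplit(network)
summary(result)
plot(summary(result))  # overview plot of the comparisons (cf. Figure 6)
```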
The decision rule selects the AC, AD, BD and CD comparisons, but not AB or DE. AC and CD are selected because they only occur in two‐arm trials and are clearly still connected if those trials are removed from the network. Conversely, the DE comparison clearly has no indirect evidence.
The three comparisons involving the three‐arm trial are more interesting. The AD comparison is selected because, if we remove the three‐arm trial from the network, AD is still connected through the AC and CD trials. Similarly, the BD comparison remains connected through BA, AC and CD trials. Finally, the AB comparison is not split because, if the ABD and AB trials are removed from the network, there is no longer a connection between A and B. It could be argued that splitting only one of the AC, BD or CD comparisons might be sufficient to investigate inconsistency in the ACDBA loop. However, as we pointed out earlier, such dependencies are difficult to work out for more complex networks, and we accept potential redundant testing such as this to be able to test for inconsistency wherever in the network it may reasonably exist.
The output of the plot command in Figure 6 visually conveys the information in the summary (truncated in the output). In this case, it would appear that the results from direct and indirect evidence are in agreement with each other and with the results of the consistency model. This is also reflected by the inconsistency P‐values, which are far from concerning. Because of the small number of included trials, and consequently low power to detect differences, this is not too surprising.

Here, we first define a function that, given the results of a node‐splitting analysis, plots the densities relevant to a specific comparison in three rows using the densplot function from the coda package. Then we invoke it to produce a plot of the densities for the CD comparison, shown in Figure 7. Again, direct and indirect evidence appear to be in broad agreement, and the consistency‐model result is more precise than either the direct or the indirect evidence.
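A sketch of the idea behind such a helper (how the samples are extracted from the node-splitting results is an assumption here and depends on the package version; only coda::densplot is taken as given):

```r
library(coda)

# Plot the posterior densities for one comparison in three rows: direct estimate,
# indirect estimate, and the consistency model. The three arguments are assumed to
# be coda mcmc.list objects, each holding the samples of a single d parameter.
plot.split.densities <- function(direct, indirect, consistency, comparison) {
  par(mfrow = c(3, 1))
  densplot(direct, main = paste("Direct effect,", comparison))
  densplot(indirect, main = paste("Indirect effect,", comparison))
  densplot(consistency, main = paste("Network (consistency) effect,", comparison))
}
```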

As for any analysis using Markov chain Monte Carlo techniques, it is important to assess convergence. The package supports a number of ways to do this for individual models, mostly provided by the coda package; for details, we refer to the documentation of gemtc and coda. Convergence was sufficient for each of the five models estimated in this analysis.
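For instance, with the coda package the usual checks can be applied to the posterior samples of any single fitted model (a sketch; 'samples' stands for that model's coda mcmc.list, and how it is obtained from a gemtc result object depends on the package version):

```r
library(coda)

# 'samples' is assumed to be the mcmc.list of posterior draws for one fitted model
gelman.diag(samples)   # Brooks-Gelman-Rubin potential scale reduction factors
gelman.plot(samples)   # PSRF as a function of iteration
traceplot(samples)     # visual check of mixing across chains
```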
5 Conclusion
In this paper, we provide methods to automatically generate the models required for the assessment of inconsistency using node‐splitting models. Our work advances the state of the art in two ways. First, we provide an unambiguous decision rule for choosing the comparisons to be split and prove that it will select all comparisons of interest and only comparisons of interest. The decision rule improves upon the rule originally proposed (Dias et al., 2010) by being fully unambiguous, less computationally expensive and proven correct under a specific definition of potential inconsistency. Second, although each comparison to be split may allow several alternative parameterisations, we prove that for each comparison selected by the decision rule, generating the model is trivial. This is not true for every comparison that occurs in a potentially inconsistent loop; it required careful design of the decision rule.
Our methods have a number of limitations. First, although automation reduces the impact of some of the drawbacks of the node‐splitting approach, it does not eliminate them. Ambiguities still exist in which nodes to split and how to parameterise the model, and these may affect the results to some extent. A large number of models must still be run, and this process may be time‐consuming for larger networks. Second, especially in small networks, the decision rule tends to split more comparisons than there are potentially inconsistent loops. Future work could investigate methods for reducing such redundancies. However, it seems unlikely that redundancies can be eliminated completely, so such approaches are likely to also be heuristic.
Finally, the assessment of heterogeneity and inconsistency remains a challenge, especially because in many circumstances that involve multi‐arm trials, there is no clear distinction between the two. One model may detect an inconsistency, whereas another model detects high heterogeneity but no inconsistency. However, this situation is not problematic because the response in both cases should be the same: to investigate the cause of the observed inconsistency or heterogeneity. This holds whether it is a three‐arm trial that differs from a set of two‐arm trials, a two‐arm trial that differs from other two‐arm trials, or any other case. Hopefully, such an investigation will yield insight into the cause of heterogeneity or inconsistency, such as differences in population, study sponsorship or intervention definitions.
Appendix A: Proof of correctness of the decision rule
In the proofs, we use some standard notions from graph and set theory; in particular, we refer to loops as cycles and to comparisons as edges. We follow van Valkenhoef et al. (2012a) in defining the network (graph) of treatment comparisons, which we take to be undirected.
Definition 1 (Potential inconsistency). Let each edge (i.e. comparison) be the set of the vertices (i.e. treatments) it connects: e = {x, y}. Denote the set of studies that include an edge e as r(e) = {Si ∈ S : e ⊂ T(Si)}. Let a cycle (i.e. loop) then be the ordered list C = (e1, …, en) of its edges ei, i ∈ {1, …, n}. A cycle C = (e1, …, en) is potentially inconsistent if and only if n > 2 and each of its edges has a unique set of supporting studies: ∀ i, j : r(ei) = r(ej) ⇒ i = j.
Lemma 1. If a cycle is potentially inconsistent, at least one of its edges will be split.

Proof. Consider a potentially inconsistent cycle C = (e1, …, en). According to the decision rule, an edge ei = {x, y} will be split if ∀ j ≠ i : r(ej) ⊄ r(ei), because only those edges that are included in some studies where ei is not will survive the removal of the studies that include ei. We show by contradiction that such an edge must exist.

Assume that there is no edge ei such that ∀ j ≠ i : r(ej) ⊄ r(ei). Then, for any edge ei, we can find another edge ej such that r(ei) ⊇ r(ej). Further, r(ei) ≠ r(ej), so r(ei) ⊋ r(ej). By repeated application of this fact, we can construct a permutation p(i) of the edges such that r(ep(1)) ⊋ r(ep(2)) ⊋ … ⊋ r(ep(n)). However, there must then also be an edge ei such that r(ep(n)) ⊋ r(ei), which contradicts the strict order we just constructed. □
Lemma 2. If an edge is split, it occurs in at least one potentially inconsistent cycle.

Proof. If the edge e1 = {x, y} is split, then it is part of at least one cycle C = (e1, …, en), n > 2, where ∀ i > 1 : r(e1) ⊉ r(ei), and hence, ∀ i > 1 : r(e1) ≠ r(ei). Suppose en = {w, x} and e2 = {y, z}, then r(e2) must contain at least some studies that do not include x and r(en) must contain some studies that do not include y. By definition, all studies in r(e2) include y and all studies in r(en) include x. Thus, r(e2) ≠ r(en), r(e1) ≠ r(e2), and r(e1) ≠ r(en), so there are at least three distinct sets of supporting studies.

Now, if for any i, j > 1, i < j, we have r(ei) = r(ej), and ei = {t, u}, ej = {v, w}, t ≠ w, then we can create a shorter cycle C′ = (e1, …, ei − 1, {t, w}, ej + 1, …, en). C′ has length > 2 unless r(e2) = r(en), which we already showed is not the case. Moreover, we have r({t, w}) ⊃ r(ei) = r(ej), so r(e1) ⊉ r({t, w}).
The cycle C′ has the same properties as C, so we can apply this step repeatedly to obtain a series C, C′, C″, … of cycles of successively smaller length. Finally, there must be a cycle C* = (e1, …, em), with m > 2, where ∀ i, j : r(ei) = r(ej) ⇒ ei = ej. □
References
- Tau Ming Liew, Cia Sin Lee, Reappraising the Efficacy and Acceptability of Multicomponent Interventions for Caregiver Depression in Dementia: The Utility of Network Meta-Analysis, The Gerontologist, 10.1093/geront/gny061, 59, 4, (e380-e392), (2018).
- A Jarde, O Lutsiv, J Beyene, SD McDonald, Vaginal progesterone, oral progesterone, 17‐OHPC, cerclage, and pessary for preventing preterm birth in at‐risk singleton pregnancies: an updated systematic review and network meta‐analysis, BJOG: An International Journal of Obstetrics & Gynaecology, 10.1111/1471-0528.15566, 126, 5, (556-567), (2018).
- Howard Thom, Ian R. White, Nicky J. Welton, Guobing Lu, Automated methods to test connectedness and quantify indirectness of evidence in network meta‐analysis, Research Synthesis Methods, 10.1002/jrsm.1329, 10, 1, (113-124), (2018).
- Sarah Donegan, Sofia Dias, Nicky J. Welton, Assessing the consistency assumptions underlying network meta‐regression using aggregate data, Research Synthesis Methods, 10.1002/jrsm.1327, 10, 2, (207-224), (2018).
- Wen Wang, Wenwen Chen, Yanmei Liu, Reed Alexander C Siemieniuk, Ling Li, Juan Pablo Díaz Martínez, Gordon H Guyatt, Xin Sun, Antibiotics for uncomplicated skin abscesses: systematic review and network meta-analysis, BMJ Open, 10.1136/bmjopen-2017-020991, 8, 2, (e020991), (2018).
- Hassan Mir, Reed Alexander C Siemieniuk, Long Cruz Ge, Farid Foroutan, Michael Fralick, Talha Syed, Luciane Cruz Lopes, Ton Kuijpers, Jean-Louis Mas, Per O Vandvik, Thomas Agoritsas, Gordon H Guyatt, Patent foramen ovale closure, antiplatelet therapy or anticoagulation in patients with patent foramen ovale and cryptogenic stroke: a systematic review and network meta-analysis incorporating complementary external evidence, BMJ Open, 10.1136/bmjopen-2018-023761, 8, 7, (e023761), (2018).
- Fernanda S. Tonin, Elyssa Wiecek, Andrea Torres-Robles, Roberto Pontarolo, Shalom (Charlie) I. Benrimoj, Fernando Fernandez-Llimos, Victoria Garcia-Cardenas, An innovative and comprehensive technique to evaluate different measures of medication adherence: The network meta-analysis, Research in Social and Administrative Pharmacy, 10.1016/j.sapharm.2018.05.010, (2018).
- Mario Cazzola, Luigino Calzetta, Peter J. Barnes, Gerard J. Criner, Fernando J. Martinez, Alberto Papi, Maria Gabriella Matera, Efficacy and safety profile of xanthines in COPD: a network meta-analysis, European Respiratory Review, 10.1183/16000617.0010-2018, 27, 148, (180010), (2018).
- Y. S. Zhang, W. Y. Weng, B. C. Xie, Y. Meng, Y. H. Hao, Y. M. Liang, Z. K. Zhou, Glucagon-like peptide-1 receptor agonists and fracture risk: a network meta-analysis of randomized clinical trials, Osteoporosis International, 10.1007/s00198-018-4649-8, 29, 12, (2639-2644), (2018).
- Jing-Hong Liang, Jia-Yu Li, Rui-Xia Jia, Ying-Quan Wang, Rong-Kun Wu, Hong-Bo Zhang, Lei Hang, Yong Xu, Chen-Wei Pan, Comparison of Cognitive Intervention Strategies for Older Adults With Mild to Moderate Alzheimer's Disease: A Bayesian Meta-analytic Review, Journal of the American Medical Directors Association, 10.1016/j.jamda.2018.09.017, (2018).
- Chris H.P. van den Akker, Johannes B. van Goudoever, Hania Szajewska, Nicholas D. Embleton, Iva Hojsak, Daan Reid, Raanan Shamir, Probiotics for Preterm Infants, Journal of Pediatric Gastroenterology and Nutrition, 10.1097/MPG.0000000000001897, 67, 1, (103-122), (2018).
- Mariana M. Fachi, Fernanda S. Tonin, Leticia P. Leonart, Karina S. Aguiar, Luana Lenzi, Bonald C. Figueiredo, Fernando Fernandez-Llimos, Roberto Pontarolo, Comparative efficacy and safety of tyrosine kinase inhibitors for chronic myeloid leukaemia: A systematic review and network meta-analysis, European Journal of Cancer, 10.1016/j.ejca.2018.08.016, 104, (9-20), (2018).
- Joachim Krois, Gerd Göstemeyer, Seif Reda, Falk Schwendicke, Sealing or infiltrating proximal carious lesions, Journal of Dentistry, 10.1016/j.jdent.2018.04.026, (2018).
- S. C. Freeman, D. Fisher, J. F. Tierney, J. R. Carpenter, A framework for identifying treatment‐covariate interactions in individual participant data network meta‐analysis, Research Synthesis Methods, 10.1002/jrsm.1300, 9, 3, (393-407), (2018).
- Sofia Dias, A. E. Ades, Nicky J. Welton, Jeroen P. Jansen, Alexander J. Sutton, References, Network Meta‐Analysis for Decision Making, 10.1002/9781118951651, (409-445), (2018).
- Ajay Shah, Daniel Joshua Hoppe, David M Burns, Joseph Menna, Daniel Whelan, Jihad Abouali, Varying femoral-sided fixation techniques in anterior cruciate ligament reconstruction have similar clinical outcomes: a network meta-analysis, Journal of ISAKOS: Joint Disorders & Orthopaedic Sports Medicine, 10.1136/jisakos-2018-000206, 3, 4, (220-228), (2018).
- Paola Rogliani, Maria Gabriella Matera, Ermanno Puxeddu, Marco Mantero, Francesco Blasi, Mario Cazzola, Luigino Calzetta, Emerging biological therapies for treating chronic obstructive pulmonary disease: A pairwise and network meta-analysis, Pulmonary Pharmacology & Therapeutics, 10.1016/j.pupt.2018.03.004, 50, (28-37), (2018).
- Ashley Bonner, Paul E. Alexander, Romina Brignardello-Petersen, Toshi A. Furukawa, Reed A. Siemieniuk, Yuan Zhang, Wojtek Wiercioch, Ivan D. Florez, Yutong Fei, Arnav Agarwal, Juan José Yepes-Nuñez, Joseph Beyene, Holger Schünemann, Gordon H. Guyatt, Applying GRADE to a network meta-analysis of antidepressants led to more conservative conclusions, Journal of Clinical Epidemiology, 10.1016/j.jclinepi.2018.05.009, 102, (87-98), (2018).
- Andrea Torres-Robles, Elyssa Wiecek, Fernanda S. Tonin, Shalom I. Benrimoj, Fernando Fernandez-Llimos, Victoria Garcia-Cardenas, Comparison of Interventions to Improve Long-Term Medication Adherence Across Different Clinical Conditions: A Systematic Review With Network Meta-Analysis, Frontiers in Pharmacology, 10.3389/fphar.2018.01454, 9, (2018).
- Yuanqiang Lin, Qiang Wen, Li Guo, Hui Wang, Guoqing Sui, Zhixia Sun, A network meta-analysis on the efficacy and prognosis of different interventional therapies for early-stage hepatocellular carcinoma, International Journal of Hyperthermia, 10.1080/02656736.2018.1507047, (1-13), (2018).
- Roger Hilfiker, Andre Meichtry, Manuela Eicher, Lina Nilsson Balfe, Ruud H Knols, Martin L Verra, Jan Taeymans, Exercise and other non-pharmaceutical interventions for cancer-related fatigue in patients during or after cancer treatment: a systematic review incorporating an indirect-comparisons meta-analysis, British Journal of Sports Medicine, 10.1136/bjsports-2016-096422, 52, 10, (651-658), (2017).
- Peter Makai, Joanna IntHout, Jaap Deinum, Kevin Jenniskens, Gert Jan van der Wilt, A Network Meta-Analysis of Clinical Management Strategies for Treatment-Resistant Hypertension: Making Optimal Use of the Evidence, Journal of General Internal Medicine, 10.1007/s11606-017-4000-7, 32, 8, (921-930), (2017).
- Jenny H. Kang, Quang A. Le, Effectiveness of bariatric surgical procedures, Medicine, 10.1097/MD.0000000000008632, 96, 46, (e8632), (2017).
- Chan Hyuk Park, Jang Han Jung, Eunwoo Nam, Eun Hye Kim, Mi Gang Kim, Jae Hyun Kim, Se Woo Park, Comparative efficacy of various endoscopic techniques for the treatment of common bile duct stones: a network meta-analysis, Gastrointestinal Endoscopy, 10.1016/j.gie.2017.07.038, (2017).
- Che-Yi Chou, Ying-Tzu Chang, Jia-Lian Yang, Jiun-Yi Wang, Tsui-Er Lee, Ruey-Yun Wang, Chin-Chuan Hung, Effect of Long-term Incretin-Based Therapies on Ischemic Heart Diseases in Patients with Type 2 Diabetes Mellitus: A Network Meta-analysis, Scientific Reports, 10.1038/s41598-017-16101-1, 7, 1, (2017).
- Huey Yi Chong, Nai Ming Lai, Anucha Apisarnthanarak, Nathorn Chaiyakunapruk, Comparative Efficacy of Antimicrobial Central Venous Catheters in Reducing Catheter-Related Bloodstream Infections in Adults: Abridged Cochrane Systematic Review and Network Meta-Analysis, Clinical Infectious Diseases, 10.1093/cid/cix019, 64, suppl_2, (S131-S140), (2017).
- Mario Cazzola, Paola Rogliani, Luigino Calzetta, Nicola A. Hanania, Maria Gabriella Matera, Impact of Mucolytic Agents on COPD Exacerbations: A Pair-wise and Network Meta-analysis, COPD: Journal of Chronic Obstructive Pulmonary Disease, 10.1080/15412555.2017.1347918, 14, 5, (552-563), (2017).
- Sarah Donegan, Nicky J. Welton, Catrin Tudur Smith, Umberto D'Alessandro, Sofia Dias, Network meta‐analysis including treatment by covariate interactions: Consistency can vary across covariate values, Research Synthesis Methods, 10.1002/jrsm.1257, 8, 4, (485-495), (2017).
- Hong Zhao, James S. Hodges, Bradley P. Carlin, Diagnostics for generalized linear hierarchical models in network meta‐analysis, Research Synthesis Methods, 10.1002/jrsm.1246, 8, 3, (333-342), (2017).
- Jun Yang, Chao Huang, Shanshan Wu, Yang Xu, Ting Cai, Sanbao Chai, Zhirong Yang, Feng Sun, Siyan Zhan, The effects of dipeptidyl peptidase-4 inhibitors on bone fracture among patients with type 2 diabetes mellitus: A network meta-analysis of randomized controlled trials, PLOS ONE, 10.1371/journal.pone.0187537, 12, 12, (e0187537), (2017).
- Yang Yang, Jiaomiao Pei, Guozhen Gao, Zheng Yang, Shuzhong Guo, Bo Yue, Jianhua Qiu, Pharmacological interventions for melanoma: Comparative analysis using bayesian meta-analysis, Oncotarget, 10.18632/oncotarget.12644, 7, 49, (80855-80871), (2016).
- Emil ter Veer, Nadia Haj Mohammad, Gert van Valkenhoef, Lok Lam Ngai, Rosa M. A. Mali, Maarten C. Anderegg, Martijn G. H. van Oijen, Hanneke W. M. van Laarhoven, The Efficacy and Safety of First-line Chemotherapy in Advanced Esophagogastric Cancer: A Network Meta-analysis, Journal of the National Cancer Institute, 10.1093/jnci/djw166, 108, 10, (djw166), (2016).
- Chan Hyuk Park, Yoon Suk Jung, Eunwoo Nam, Chang Soo Eun, Dong Il Park, Dong Soo Han, Comparison of Efficacy of Prophylactic Endoscopic Therapies for Postpolypectomy Bleeding in the Colorectum: A Systematic Review and Network Meta-Analysis, The American Journal of Gastroenterology, 10.1038/ajg.2016.287, 111, 9, (1230-1243), (2016).
- Tu Yu-Kang, Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis, Value in Health, 10.1016/j.jval.2016.07.005, 19, 8, (957-963), (2016).
- Xiaoyan Zhao, Xiuxiu Huang, Bei Li, Ying Cai, Peiye Cao, Qiaoqin Wan, The relative effectiveness of different types of exercise for people with Mild Cognitive Impairment or dementia: Systematic review protocol, Journal of Advanced Nursing, 10.1111/jan.14553, 0, 0, (undefined).










