The terms applicability, generalizability, external validity and transferability are related, are sometimes used interchangeably, and share a common shortcoming: they lack a clear and consistent definition in the classic epidemiological literature. All of these terms, however, describe one overarching theme: whether available research evidence can be directly used to answer the healthcare question at hand, ideally supported by a judgment about the degree of confidence in that use. This concept has been called directness. The objectives of this paper were to delineate how non-randomized studies (NRS) inform judgments about directness and the concepts it encompasses in the context of systematic reviews. We will briefly review what is known, describe the theoretical and practical issues, and offer guidance to those tackling the challenges of judging directness and of using evidence from NRS to answer healthcare questions.
In particular, we suggest a framework in which authors can use NRS as a complement to, sequence with, or replacement for randomized controlled trials (RCTs) by focusing on judgments about the population, intervention, comparison and outcomes. Authors of systematic reviews will use NRS to complement judgments about inconsistency, about the rationale and credibility of subgroup analyses, about baseline risk estimates for determining absolute benefits and downsides, and about the directness of surrogate outcomes; such evidence includes contextual or supplementary evidence. Authors of systematic reviews and other evidence summaries use NRS as sequential evidence when RCTs provide insufficient evidence for an outcome but NRS evidence is available (e.g., long-term harms). Evidence from NRS may also replace RCT evidence when the NRS provide equivalent (or potentially higher) confidence in the evidence (i.e., quality) compared with indirect evidence from RCTs. These judgments are made in the context of the other domains that influence the overall quality of a body of evidence, including risk of bias (i.e., limitations in study design and execution), publication bias, inconsistency, imprecision, and factors that increase our confidence in effects.
This article will support systematic reviewers in their interactions with decision makers, that is, those who use systematic reviews to develop guidelines, inform health policy, and make clinical decisions, by making these judgments transparent. Copyright © 2013 John Wiley & Sons, Ltd.