What are the best methods for rapid reviews of the research evidence? A systematic review of reviews and primary studies

Rapid review methodology aims to facilitate faster conduct of systematic reviews to meet the needs of the decision‐maker, while also maintaining quality and credibility. This systematic review aimed to determine the impact of different methodological shortcuts for undertaking rapid reviews on the risk of bias (RoB) of the results of the review. Review stages for which reviews and primary studies were sought included the preparation of a protocol, question formulation, inclusion criteria, searching, selection, data extraction, RoB assessment, synthesis, and reporting. We searched 11 electronic databases in April 2022, and conducted some supplementary searching. Reviewers worked in pairs to screen, select, extract data, and assess the RoB of included reviews and studies. We included 15 systematic reviews, 7 scoping reviews, and 65 primary studies. We found that several commonly used shortcuts in rapid reviews are likely to increase the RoB in the results. These include restrictions based on publication date, use of a single electronic database as a source of studies, and use of a single reviewer for screening titles and abstracts, selecting studies based on the full‐text, and for extracting data. Authors of rapid reviews should be transparent in reporting their use of these shortcuts and acknowledge the possibility of them causing bias in the results. This review also highlights shortcuts that can save time without increasing the risk of bias. Further research is needed for both systematic and rapid reviews on faster methods for accurate data extraction and RoB assessment, and on development of more precise search strategies.


Highlights
What is already known?
• A large variety of methodological shortcuts are used to save time when conducting rapid reviews.
• The rationale for the choice of specific shortcuts is often only loosely based on high-quality evidence and can increase the risk of bias of the results; this is rarely acknowledged by rapid review authors.
What is new?
• This systematic review updates and extends previous reviews that were limited to specific systematic review stages or did not include a risk of bias assessment.
• We found that several commonly used shortcuts in rapid reviews are likely to increase the risk of bias in the results.
Potential impact for Research Synthesis Methods readers
• We highlight shortcuts that can save time without increasing the risk of bias of rapid reviews.
• Areas where further research is required for both rapid and systematic reviews include methods for accurate (and faster) data extraction and risk of bias assessment, and development of more precise search strategies.

| BACKGROUND
The evidence-informed approach to decision-making aims to achieve better decisions for better health, avoid harm, make more effective use of scarce resources, and improve transparency and accountability in decision-making.1,2 Evidence-informed decision-making is a systematic and transparent approach and includes decisions about clinical practice, public health, and health policy and systems. Decision-makers can include healthcare policymakers, government agencies, clinicians and their professional associations, patients, caregivers, patient groups, and the public.3 The most frequently reported barriers to evidence uptake for evidence-informed decision-making are poor access to good quality relevant research and lack of timely and relevant research output.4 In relation to good quality relevant research, high-quality systematic reviews are considered the gold standard,2,5 and these are used as the basis for evidence products, such as policy and practice guidelines, health technology assessments, and evidence briefs for policy.2 However, there is a well-recognized need to conduct systematic reviews faster and with the needs of the decision-maker in mind, while also maintaining quality (low risk of bias) and credibility.8,9 Rapid reviews are "a type of systematic review in which components of the systematic review process are simplified, omitted or made more efficient in order to produce information in a shorter period of time, preferably with minimal impact on quality. Further, they involve a close relationship with the end-user and are conducted with the needs of the decision-maker in mind."10,11 In 2015, we conducted a rapid review of systematic reviews and primary studies to answer the question: What are the best methodologies to enable a rapid review of research evidence for evidence-informed decision making in health policy and practice?10
As well as offering the definition for rapid reviews cited above, and informing suitable methods for rapid reviews, the review was used to inform the design of a rapid response program to support evidence-informed decision-making.12 The review has been widely cited, showing its usefulness, particularly during the COVID-19 pandemic.14,15 There have also been updates to methods for conducting systematic reviews,5,16 and for assessing their quality or risk of bias (RoB).17,18 There is a need to provide stronger guidance about the choice of methods for rapid reviews that considers potential impact on RoB and covers the key steps of review conduct. While we have previously offered areas where "shortcuts" could be considered to reduce time to completion of rapid reviews based on our review findings and other sources of evidence,12 our 2015 rapid review10 has been used selectively to justify a range of shortcuts regardless of their impact on the RoB. Work done since the last version of our rapid review on best methodologies for rapid reviews could help to inform the update.14,15,19 These include a scoping review of rapid review methodology conducted by Hamel and colleagues15 that was used by Cochrane to offer interim recommendations for conducting rapid reviews,6 and a systematic review of methods for conducting various steps of systematic reviews that was conducted by Robson and colleagues.14,19 However, the review by Hamel et al. 202015 is limited by a lack of RoB assessment of the primary studies included, which is usual practice for scoping reviews, and the date of last search for studies is now over 4 years ago (February 2019). The review by Robson et al.14,19 used high-quality systematic review methods, but the date of last search for studies was September 1, 2016. While the review was aimed at systematic review methods, many of these would also be applicable for rapid reviews.
We aimed to answer the questions: 1. What is the accuracy, reliability, impact, and/or efficiency of different methodological shortcuts for undertaking rapid reviews, including for preparation of a protocol, question formulation, inclusion criteria, searching, selection, data extraction, RoB assessment, synthesis, and reporting? and 2. What is the potential impact of the methodological shortcut/s on the RoB of the results of the rapid review?

| METHODS
High-quality systematic review methods were used.5 The protocol was registered on the International Prospective Register of Systematic Reviews (PROSPERO).20 Changes made to the protocol after registration can be found in Supporting Information File S1. This review is an update of a review published in 201610,21 but with some adjustments to the methods to account for recent developments and learning in the field, as well as limitations in the original review. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was used for reporting.22

| Criteria for considering studies for inclusion
Publications in any language and from any country were included. Both gray and peer-reviewed literature were included. No date of publication restrictions were applied.

| Types of studies
Systematic reviews and scoping reviews not included in our original review were included. Primary studies were included if they compared or evaluated the accuracy, reliability, or efficiency of a potential methodological shortcut or described factors that affect the method's accuracy, reliability, or efficiency. Primary research studies providing quantitative data and using one of the following designs were eligible: randomized controlled trials, nonrandomized controlled trials, controlled before-after studies, interrupted time series studies, repeated measures studies, cohort studies, case-control studies, and analytical cross-sectional studies. Modeling/simulation studies were included.
Qualitative studies were excluded.

| Types of participants
These included reviewers, reviews (systematic, rapid, scoping, and living reviews), studies, articles, and database records.

| Types of interventions
Methodological shortcuts for undertaking rapid reviews, including for preparation of a protocol, question formulation, inclusion criteria, searching, selection, data extraction, RoB assessment, synthesis, and reporting were included.

| Types of comparisons
Suitable comparisons included an alternative method or no comparator.

| Types of outcome measures
Relevant outcomes included accuracy, for example, sensitivity, specificity; reliability; efficiency, for example, time to complete, resources required to complete (e.g., monetary and personnel); concordance; measures of synthesis quality; and agreement in review conclusions.
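To make the accuracy outcomes concrete: sensitivity and specificity compare the decisions made under a shortcut (e.g., single-reviewer screening) against a reference standard (e.g., dual independent screening). The following is a minimal, hypothetical sketch; the function name and all counts are invented for illustration and do not come from any included study.

```python
# Hypothetical illustration of the accuracy outcomes used in this review.
# A shortcut's screening decisions are tabulated against a reference
# standard (e.g., dual independent screening) as a 2x2 confusion table.

def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from 2x2 confusion counts.

    tp: relevant records the shortcut retained
    fp: irrelevant records the shortcut retained
    fn: relevant records the shortcut missed
    tn: irrelevant records the shortcut excluded
    """
    sensitivity = tp / (tp + fn)  # proportion of truly relevant records retained
    specificity = tn / (tn + fp)  # proportion of irrelevant records excluded
    return sensitivity, specificity

# Made-up example: the shortcut kept 95 of 100 relevant records and
# correctly excluded 880 of 900 irrelevant ones.
sens, spec = sensitivity_specificity(tp=95, fp=20, fn=5, tn=880)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# sensitivity=0.95, specificity=0.98
```

In this framing, a shortcut with sensitivity well below 100% risks missing relevant studies, which is why the review treats high sensitivity as the key requirement for search and screening shortcuts.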

| Search strategy
We actively searched for systematic reviews and scoping reviews published since January 2015. For the primary studies, we used the two previous reviews as the source of studies prior to 2019.14,15 We then actively searched for primary studies published after January 2019.
We searched MEDLINE (Ovid), EMBASE (Ovid), PsycINFO (Ovid), LILACS (BVSalud), and the Cochrane Library (Ovid), including the Cochrane Database of Systematic Reviews and the Cochrane Central Register of Controlled Trials (CENTRAL). The databases were searched from January 2015 to present for systematic reviews and scoping reviews, and from January 2019 to present for primary studies, using search strategies specific to the study type and database. The date of last search was April 23-25, 2022. Details of the searches can be found in Supporting Information File S2.
We also searched the reference lists of the two published reviews for primary studies prior to January 2019.14,15 The authors' own databases of knowledge translation literature and some of the included studies were also searched by hand for relevant studies. Searches were conducted by one review author (MMH) and references were imported into EndNote for management and removal of duplicates.

| Study selection and data collection
Titles and abstracts were screened against the selection criteria by two review authors independently (MMH and DEGM) in EndNote. The full text of any potentially relevant papers selected by either reviewer was retrieved for closer examination. The inclusion criteria were applied against these papers by two reviewers independently (MMH and one of JOMB, SP, or LR) using Covidence. Discrepancies were resolved by discussion and consultation with a third reviewer (MT).
All relevant data were extracted from included papers by one reviewer and verified by a second reviewer (MH, CM, JOMB, MT, and SP, working in pairs). Differences were resolved by discussion and consensus. Data were extracted into an Excel spreadsheet following a guidance document in Word (Supporting Information File S3). Data extracted included the objectives, target population, method/s tested, outcomes reported, date of last search or year of study (for reviews and primary studies, respectively), included study designs and number of studies (for reviews), study design and size (for primary studies), country of study, results, conclusions, and comments, for example, strengths, limitations, and research gaps.

| Assessment of RoB of included studies
The RoB of included systematic reviews and scoping reviews was assessed by one reviewer and verified by another (CM, MT, and SP, working in pairs) using the Risk of Bias Assessment Tool for Systematic Reviews (ROBIS).18 Disagreements regarding scores were resolved by discussion and consensus. ROBIS covers four domains: (1) study eligibility criteria, (2) identification and selection of studies, (3) data collection and study appraisal, and (4) synthesis and findings. Each domain consists of five to six questions with six possible options: Yes, Probably yes, Probably no, No, No information, or Not applicable. By design, scoping reviews do not include an assessment of RoB of the included studies or a meta-analysis. Thus, some questions were not applicable to scoping reviews (i.e., questions 3.4, 3.5, 4.5, and 4.6 in domains 3 and 4). The overall assessment for each domain and for the review overall is low, unclear, or high RoB.18 The quality of included primary studies was assessed by one reviewer (JYHK) and verified by another (MH or SP) using a modified version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 checklist,23 which was used in the review by Robson et al. 2019.14 Disagreements regarding scores were resolved by discussion and consensus. We used the same modified version as Robson et al. 2019,14 which included four domains: (1) Participant selection, (2) Record selection, (3) Evaluated methodological shortcut, and (4) Reference standard. The overall assessment for each domain is low, unclear, or high RoB.23 For the primary studies that were included in the Robson review we used their assessment, which was also conducted by one reviewer and verified by another.

| Data synthesis
We conducted a narrative synthesis of the included reviews and studies. No meta-analyses were performed due to the heterogeneous study designs and outcome measures. Both the reviews and primary studies were classified according to the stage of the review (preparation of a protocol, question formulation, inclusion criteria, searching, selection, data extraction, RoB assessment, synthesis, and reporting) and then by the particular methodological shortcut. Reviews and primary studies that covered a range of shortcuts or rapid review methods in general were grouped together, and findings are presented according to key themes.
The results of the reviews and primary studies were also mapped to A MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2, ROBIS, and the Methodological Expectations of Cochrane Intervention Reviews (MECIR) criteria.17,18,24 When interpreting the impact of the shortcut on the speed or RoB of the review, greater weight was given to reviews and studies with the lowest RoB. Shortcuts that may impact on the RoB of the review were distinguished from those that could help to speed up the process but do not impact on the RoB.
In addition, when reporting the results of the review, we have suggested some shortcuts that are unlikely to increase the risk of bias but that were not tested in any of the included reviews or studies (see Table 2 in the results). These suggestions are based on recommendations from included reviews, from rapid review and systematic review guidance documents, and our own experience with rapid and systematic reviews. The statement that they are unlikely to increase the risk of bias is based on them not negatively affecting the rating of either the AMSTAR 2 or ROBIS tools.17,18

| RESULTS
We identified 1976 records after duplicates were removed and assessed 162 full-text reports for eligibility (Figure 1). Of these, we excluded 71 reports, with the most common reasons for exclusion being wrong intervention/method (n = 35) or wrong study design (n = 23) (Figure 1 and Supporting Information File S4). A small number of the included primary studies were also included in one or more of the included systematic reviews. Also, one systematic review of systematic reviews by Veginadu and colleagues38 overlapped 100% with other included systematic reviews.14,25-28,33-35,37 We have highlighted these instances when presenting the results to prevent double counting.

| Systematic reviews and scoping reviews
The countries where the reviews were most commonly conducted (based on the country affiliation of the first author) are Canada (n = 7), Germany (n = 5), United Kingdom (n = 4), and Australia (n = 2). The systematic reviews most commonly covered one particular review stage (n = 12), including aspects of the inclusion criteria,27,34,37 searching,27,34,37 selection,35,39 data extraction,31,36 or RoB assessment.33 Three of the systematic reviews addressed rapid review methods in general.29,32,38 The scoping reviews all addressed rapid review methods in general,42-48 but with one also scoping the different definitions for rapid reviews,44 one focusing on tools to support the automation of systematic reviews,45 and one on resource use during systematic review production.46 The characteristics of the included reviews sorted by review stage and methodological shortcut can be found in Supporting Information File S5. The overall RoB is summarized in Figure 2 and the assessment for each study can be found in Supporting Information File S6.

| Primary studies
Of the 65 included primary studies, 6 were randomized controlled trials.51,55,62,63,75,83 The countries where studies were most commonly conducted (based on the country affiliation of the first author) are Canada (n = 18), United States of America (n = 15), United Kingdom (n = 12), and Australia (n = 8). The studies covered methods for one or more of the different review stages, including protocol preparation (n = 1), inclusion criteria (n = 4), searching (n = 14), selection (n = 32), data extraction (n = 11), and RoB assessment (n = 2). No studies specifically focused on question formulation separate to the inclusion criteria, or on synthesis or reporting. Four studies each covered two or three stages,58,76,90,110 and six studies tested a package of methods across all (or most) review stages.60,77,95,99,105,111 The characteristics of the included studies sorted by review stage and methodological shortcut can be found in Supporting Information File S5. The overall RoB is summarized in Figure 3 and the assessment for each study is in Supporting Information File S6.
The main results for each of the reviews and primary studies, sorted by review stage and methodological shortcut, are in Supporting Information File S7. The findings are synthesized in Table 1 and also mapped to the AMSTAR 2, ROBIS, and MECIR criteria17,18,24 in Supporting Information File S8. When interpreting the impact of the shortcut on the potential increase in speed or RoB of the review (columns 3 and 4 in Table 1), greater weight was given to reviews and primary studies with the lowest RoB.

| Evidence for shortcuts
Shortcuts that can potentially speed up the review process with no, or minimal, impact on bias are shown in green in Table 2. Shortcuts that can potentially speed up the review process but are likely to increase the potential for bias, and so are not recommended, are shown with a white background in Table 2. If the shortcuts highlighted in white are used in a rapid review, the increased RoB should be acknowledged. Other shortcuts that have future potential to speed up review production include techniques to minimize the number of records found when searching, provided very high sensitivity can be achieved (e.g., close to 100%). These include the use of better methodological search filters and more precise search strategies. For screening of titles and abstracts, machine learning shows a lot of potential but is not yet sufficiently sensitive, and there is a danger that relevant studies will be missed. The use of software to assist or automate data extraction shows future promise but is dependent on the quality and completeness of source data and ease of use. For example, data extraction from ClinicalTrials.gov using the EXACT automatic data extraction tool (http://bio-nlp.org/EXACT) was 100% accurate, but its usefulness is limited to only this trials register and to trials with results data included.92
Shortcuts/methods where we were unable to make conclusions due to insufficient evidence of effect include the use of peer review of search strategies and the most appropriate method for removal of duplicates. Further, there is no evidence to suggest that screening of titles and abstracts is done better by experts, or that data extraction is done better by experienced reviewers. Crowdsourcing of selection has potential under specific conditions (e.g., to select RCTs based on title/abstract, provided the agreement algorithm is robust84); however, further work is required for topic-based screening.85
In relation to data extraction, single-reviewer extraction clearly increases the RoB. The evidence for data extraction conducted by one reviewer with verification by a second reviewer is limited to a single primary study51 that was also included in a systematic review31; the study showed a small increase in errors compared to dual extraction but similar pooled estimates as the reference standard (duplicate data extraction). Given the potential to save time, this shortcut would benefit from further research. An alternative, suggested in the Cochrane Handbook,5 of single-reviewer extraction with verification by a second reviewer for study characteristics and duplicate data extraction for outcome data was not specifically tested in any included studies. Other alternatives, such as the use of dual extraction for a proportion of studies (e.g., 20%) with discussion, and then single-reviewer extraction for the remaining studies, have also been suggested but not tested. There are also a range of shortcuts or techniques that can be used to reduce the time needed to complete the review that are unlikely to increase the RoB, as assessed by the AMSTAR 2 or ROBIS tools.17,18 These were not tested in any of the included studies but are listed in Table 2 and highlighted in gold.

| Results of primary studies and reviews that assessed rapid review methods in general
Results are presented in Table 3 by theme for the three systematic reviews,29,32,38 seven scoping reviews,42-48 and six primary studies60,77,95,99,105,111 that tested a range of methods. See also Supporting Information File S7 for further details for each review or study, and Supporting Information File S8 for a narrative summary of the results by theme. The results from the systematic review by Veginadu and colleagues38 that examined different methods are not reported here separately due to the 100% overlap with other systematic reviews already reported.

| DISCUSSION
Several commonly used shortcuts in rapid reviews47 are likely to increase the RoB in the results. These include restrictions based on publication date, and the use of a single reviewer for the main tasks of screening titles and abstracts, selecting studies based on the full text, and extracting data from included studies. These shortcuts have been shown to result in missing relevant studies and to introduce extra errors in data extraction that may impact on the results, including a small chance of changing the conclusions.31,39,51,58,59,63,76,81,90,101,104,112 The use of a single electronic database as a source of studies is also not justified by the evidence.25,61,69,76,87,90 Thus, authors of rapid reviews should be transparent by reporting their use of these shortcuts and acknowledging the possibility of them causing bias in the results.

TABLE 3 Themes from the primary studies and reviews that assessed rapid review methods in general. (Eight key themes were identified, in order of frequency of reporting, beginning with the definition of a rapid review and comparison and contrast with a full traditional systematic review.)

Some currently used shortcuts are much less likely to introduce bias or to change the conclusions of the review, including limiting the publication language to English and excluding gray or unpublished literature,27,34,37,86 though with some caveats depending on the review question. Other shortcuts that can be used relatively safely to save time and resources include limiting searching to two to three main databases plus one supplementary source,25,26,28,61,69,76,87,90,97,107,109 not contacting study authors for missing information,56,68,100 and not using blinding for the RoB assessment.33
The review has also highlighted areas where the evidence to support methodological choices for both systematic reviews and rapid reviews is lacking or limited. For example, very little attention has been given to testing different methods for data extraction and for RoB assessments and how these can be accelerated. It has also highlighted areas that may have future potential to speed up review production but are not yet sufficiently developed to enable full confidence. These include techniques to minimize the number of records found when searching but with high sensitivity (e.g., close to 100%), such as the use of methodological search filters and more precise search strategies.
For screening of titles and abstracts, machine learning shows a lot of potential but is not yet sufficiently sensitive to replace human reviewers, and there is a danger that relevant studies will be missed. However, there are ways that machine learning can be used to create efficiencies in title and abstract screening without introducing bias, as suggested by Hamel and colleagues in their useful guidance document on the topic.116 Given the rapid developments in machine learning, it may not be too long before it can be used with greater confidence to accelerate the review process.117 There are also ways to use automation tools and an experienced review team to speed up the systematic review process without increasing the RoB, as demonstrated by Clark and colleagues118,119 with their methodology and automation tools for completing full systematic reviews in around 2 weeks. The use of the living systematic review methodology is another way to speed up the process.120
Three of the most resource-intensive stages (mostly time) during systematic review production are study selection, data extraction, and critical appraisal,46 which are most likely influenced by the number of records found when searching and the number of studies included in the review. Thus, a particular area for time efficiencies that may have no impact on bias at all is in the question formulation and setting of the PICO (population, intervention, comparison, and outcomes) aspects of the inclusion criteria. Significant time savings could be achieved by setting a very focused review question and by limiting the included study designs to the highest level of evidence available, preferably systematic reviews, or RCTs if no systematic reviews are available.
The time used for data extraction could also be reduced by extracting a much more limited amount of information about study characteristics and results. Sharing and reuse of data extraction and risk of bias assessments from previous systematic reviews could also facilitate faster review production and prevent duplication of effort, a goal towards which Cochrane is moving.121 Comparison of data extraction and risk of bias assessments for similar systematic reviews could also save time and improve accuracy, as done for the rapid review of COVID-19 therapeutics by the Pan American Health Organization.9 These aspects have not been tested in any of the included studies but are unlikely to increase the RoB and may actually reduce the RoB by improving the quality of the data.
Our recommendations for rapid review methods (Table 2) are mostly consistent with those made by the Cochrane rapid review group,6 though our recommendations are strengthened by explicitly linking them with the supporting evidence and making it clear when there is no supporting evidence. In contrast to the Cochrane recommendations, we have not made recommendations relating to involvement of an information specialist or peer review of the search strategy, as there is no supporting evidence of impact, and not all reviewers have access to information specialists, especially in low- and middle-income countries. The work of the Pan American Health Organization in partnership with the Latin American and Caribbean Center on Health Sciences Information to develop a guided search tool of evidence (EVID@EASY) that uses validated search filters may help to support reviewers without access to information specialists.122,123
The strengths of this systematic review include the use of a pre-registered protocol,20 coverage of all of the systematic review stages, a comprehensive search strategy, dual independent screening of titles/abstracts and full text to determine inclusion, and the conduct of a RoB assessment of included studies. It updates and extends previous reviews on the topic that were limited to specific systematic review stages14 or did not include a RoB assessment.15 Limitations include our reliance on these two previous reviews as a source of prior studies and the lack of time and resources to conduct more supplementary searching, as we had previously planned. While we did not conduct dual, independent data extraction and RoB assessment, we minimized the risk of errors by ensuring that both were checked and verified by a second reviewer, with resolution by discussion and consensus. We believe that it is unlikely that these limitations will have affected the main findings and conclusions of this review.

| CONCLUSION
This review reinforces the findings of previous reviews, including our own, that there is a limited evidence base for many of the systematic review stages; this evidence base can also be used to inform possible shortcuts for rapid reviews.12,14,15,20 However, it does show that some shortcuts currently widely used for rapid reviews cannot be justified, such as the use of a single electronic database (e.g., PubMed) as a source of studies, and the use of a single reviewer for selection of studies and data extraction. There are also areas where efficiencies can be achieved without increasing bias, such as using a focused review question to reduce the scope of the review. There is also future potential to create efficiencies through the careful use of machine learning (ensuring that it does not increase the risk of bias), development of more efficient search strategies, and improved techniques for accurate data extraction, though attention should be given to making these tools equitably accessible.
AUTHOR CONTRIBUTIONS
Michelle M. Haby and Ludovic Reveiz initiated and conceptualized the review. Michelle M. Haby and Ludovic Reveiz developed the methodology with input from Jorge Otávio Maia Barreto, Marcela Torres, and Sasha Peiris. Michelle M. Haby, Jorge Otávio Maia Barreto, Sasha Peiris, Cristián Mansilla, Jenny Yeon Hee Kim, Marcela Torres, and Diego Emmanuel Guerrero-Magaña conducted the selection of studies, data extraction, and risk of bias assessment. Michelle M. Haby conducted the analysis with input into the interpretation of the data from all authors. All authors contributed to writing, reviewing, and editing the manuscript.

FIGURE 1 PRISMA flow diagram for the systematic review of rapid review methods.
FIGURE 2 Overall risk of bias across the Risk of Bias Assessment Tool for Systematic Reviews (ROBIS) domains for the 15 systematic reviews and 7 scoping reviews (proportion of studies with low, high, or unclear risk of bias, by ROBIS domain).
FIGURE 3 Overall risk of bias across the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 domains for the 65 primary studies.
TABLE 1 Potential impact of shortcuts on speed, bias, and risk of bias assessment tools, by review stage and shortcut tested. Note: The table only includes shortcuts tested in studies included in this systematic review. ↑, potential increase; ?, unknown effect; Min., minimal. The orange shading is a heading for 'Review stage'; the yellow shading signifies a sub-heading of 'Review stage'. Abbreviations: PS, primary study; RCT, randomized controlled trial; RoB, risk of bias; SR, systematic review.
TABLE 2 Areas where "shortcuts" could be considered to reduce time to completion of rapid reviews without increasing the risk of bias are highlighted in green or gold; shortcuts in white are not recommended. Shortcuts highlighted in green or gold can potentially speed up the review process with no, or minimal, impact on bias. Shortcuts in green have supporting evidence included in this review, while those in gold do not. Shortcuts highlighted in white may increase the risk of bias and, if used, this increased risk should be acknowledged. ↑, potential for increased risk of bias; ↓, potential for decreased risk of bias; O, no evidence to support this recommendation found and included in this systematic review; ✓, evidence to support this recommendation found and included in this systematic review. Note a: This is currently exclusively available for Cochrane Reviews and/or reviews conducted using Covidence software.