Interventions to promote technology adoption in firms: A systematic review

Abstract

Background: The adoption of improved technologies is generally associated with better economic performance and development. Despite its desirable effects, the process of technology adoption can be quite slow, and market failures and other frictions may impede adoption. Interventions in market processes may be necessary to promote the adoption of beneficial technologies. This review systematically identifies and summarizes the evidence on the effects of interventions that shape the incentives of firms to adopt new technologies. Following Foster and Rosenzweig (2010), technology is defined as "the relationship between inputs and outputs," and technology adoption as "the use of new mappings between input and outputs and the corresponding allocations of inputs that exploit the new mappings." The review focuses on studies that include direct evidence on technology adoption, broadly defined, as an outcome. The term intervention refers broadly to sources of exogenous variation that shape firms' incentives to adopt new technologies, including public policies, interventions carried out by private institutions (such as NGOs), experimental manipulations implemented by academic researchers trying to understand technology adoption, and natural experiments.

Objective: The objective of this review is to answer the following research questions:
1. To what extent do interventions affect technology adoption in firms?
2. To what extent does technology adoption affect profits, employment, productivity, and yields?
3. Are these effects heterogeneous across sectors, firm size, countries, workers' skill level, or workers' gender?

Selection Criteria: To be included, papers had to meet the inclusion criteria described in detail in Section 3.1, which are grouped into four categories: (1) Participants, (2) Interventions, (3) Methodology, and (4) Outcomes. Regarding participants, our focus was on firms, and we omitted studies at the country or region level.
In terms of interventions, we included studies that analyzed a source of exogenous variation in incentives for firms to adopt new technologies and estimated their effects. Thus, we left out studies that only looked at correlates of technology adoption, without a credible strategy to establish causality, and only included studies that used experimental or quasi-experimental methods. Regarding outcomes, papers were included only if they estimated effects of interventions (broadly defined) on technology adoption, although we also considered other firm outcomes as secondary outcomes in studies that reported them.

Search Methods: The first step in selecting the studies to be included in the systematic review was to identify a set of candidate papers. This set included both published and unpublished studies. To look for candidate papers, we implemented an electronic search and, in a subsequent step, a manual search. The electronic search involved running a keyword search on the most commonly used databases for published and unpublished academic studies in the broad topic area. The words and their Boolean combinations were carefully chosen (more details in Section 3.2). The selected papers were initially screened on title and abstract. If papers passed this screen, they were screened on full text. Those studies that met the stated criteria were then selected for analysis. The manual search component involved asking for references from experts and searching references cited by papers selected through the electronic search. These additional papers were screened based on title and abstract, and the remaining papers were screened on full text. If they met the criteria, they were added to the list of selected studies.

Data Collection and Analysis: For the selected studies, the relevant estimates of effects and their associated standard errors (SEs) were entered into an Excel spreadsheet along with other related information such as sample size, variable type, and duration for flow variables. Other information such as authors, year of publication, and country and/or region where the study was implemented was also included in the spreadsheet. Once the data were entered for each of the selected studies, the information on sample size, effect size, and SE of the effect size was used to compute the standardized effect size for each study to make the results comparable across studies. For those studies for which relevant data were not reported, we contacted the authors by email and incorporated the information they provided. Forest plots were then generated and within-study pooled average treatment effects were computed by outcome variable. In addition, an assessment of reporting on potential biases was conducted, including (1) reporting on key aspects of selection bias and confounding, (2) reporting on spillovers of interventions to comparison groups, (3) reporting of SEs, and (4) reporting on Hawthorne effects and the collection of retrospective data.

Results: The electronic and manual searches resulted in 42,462 candidate papers. Of these, 80 studies were ultimately selected for the review after screenings to apply the selection criteria. Relevant data were extracted for analysis from these 80 studies. Overall, 1108 regression coefficients across various interventions and outcomes were included in the analysis, representing a total of 4,762,755 firms. Even though the search methods included both high-income and developing countries, only 1 of the 80 studies included in the analysis was in a high-income country, while the remaining 79 were in developing countries.
We discuss the results in two parts, looking at firms in manufacturing and services separately from firms (i.e., farms) in agriculture. In each case, we consider both technology adoption and other firm outcomes.

Authors' Conclusions: Overall, our results suggest that some interventions led to positive impacts on technology adoption among firms across the manufacturing, services, and agriculture sectors, but given the wide variation in time periods, contexts, and study methodologies, the results are hard to generalize. The effects of these interventions on other firm performance measures such as farm yields, firm profits, productivity, and employment were mixed. Policy-makers must be careful in interpreting these results, as a given intervention may not work equally well across contexts and may need to be adjusted to each specific regional context. There is a great need for more research on the barriers to technology adoption by firms in developing countries and interventions that may help alleviate these obstacles. One major implication for researchers from our review is that there is a need to carefully measure technology adoption.


| The policy issue
The adoption of improved technologies is generally associated with better economic performance and development. Governments and development agencies have incorporated the promotion of firms' competitiveness into their priorities (IADB, 2016; World Bank, 2017) and recognized that the adoption of modern technologies is one of its drivers. Two reasons lie behind this interest in competitiveness in general and technological upgrading in particular: first, the expectation that technological upgrading by firms will deliver benefits like higher productivity, more jobs, better wages, and better working conditions at the micro level, and higher growth at the aggregate level; and second, the idea that the slowness, or even absence, of the process of technology adoption is due in part to market failures arising from externalities, imperfect information, and coordination problems that call for public intervention. Furthermore, these market failures are often more salient in developing countries, leading to lower rates of new technology adoption, and consequently slower growth rates.
History provides examples of major increases in wellbeing broadly defined (including output, output per worker, size, and productivity) associated with the widespread adoption of improved productive methods. Researchers point to the so-called three industrial revolutions in manufacturing: the introduction of steam-powered machines, the adoption of electricity-powered production methods, and the use of information technologies to automate manufacturing. In agriculture, the green revolution, which involved the adoption of new technologies, including high-yielding varieties, and the increased use of fertilizers and pesticides, had a comparable impact. The key aspect of these revolutionary transformations was not the mere invention of new technologies, but their widespread adoption by firms.
Despite its desirable effects, the process of technology adoption can be quite slow (Geroski, 2000; Rosenberg, 1972), and market failures and other frictions may impede adoption (Foster & Rosenzweig, 2010; Verhoogen, 2020). Such market failures and frictions impeding technology adoption may also be different for agricultural firms compared to nonagricultural firms. For example, information frictions may be more significant for nonagricultural firms than for agricultural firms. This may in part be because agricultural firms can observe technology adoption by neighboring firms, but this is often not possible for nonagricultural firms. Furthermore, nonagricultural firms seldom share information on new beneficial technologies with competing firms. Thus, interventions in market processes may be necessary to promote the adoption of beneficial technologies.
The lack of technology adoption in cases in which the potential gains are clear has been observed in specific industries like textiles (Bloom et al., 2013) and soccer-ball production (Atkin et al., 2017). Foster et al. (2008) show that there is large heterogeneity in productivity even within narrowly defined industries, which may be a consequence of the lack of technology adoption by firms in the left tail of the distribution.
To help inform governments' actions, this systematic review aggregates the existing evidence on interventions that induce technology adoption and their implications for firm outcomes.

| Potential channels of effects
We think of the effects of interventions as occurring in two stages. In the first stage, interventions may induce the adoption of better technologies in the directly affected firms, provided that the technologies are indeed better (after considering adoption costs) and that, after receiving the treatment, firms are able to notice that advantage. There are three broad categories of reasons for low technology adoption: (i) factors internal to the firm, (ii) factors on the input side, and (iii) factors on the output side (Verhoogen, 2020). First, there may be low adoption of new technologies because firms may not be profit maximizing, may lack information, or may not have the capability to apply the information in practice. These are factors internal to the firm. Second, on the input side, newer technology often requires highly skilled workers and high-quality inputs, which may be scarce and expensive. Finally, on the output side, demand conditions may affect the incentives of firms to adopt (or not adopt) new technologies. For example, richer customers may have preferences for higher-quality goods, which may require the application of new technologies.
In a second stage, the adoption of a better technology by treated firms may lead to an increase in total output, output per unit of input, firm-level wages, employment, total factor productivity, firm survival, and/or exports, and a reduction in unit cost. Productivity, understood as the inverse of cost, is expected to increase because the reduction in (quality-adjusted) cost is presumably what makes the new technology attractive to the firm. Output is expected to increase if a firm can increase its market share based on the reduction in costs. As a consequence of the expansion in output, firm-level employment is also generally expected to increase. Wages (although not necessarily wages relative to other input prices) may also go up after adoption due to the rise in marginal labor productivity. In general, we expect a new technology to lead to an increase in inputs in production.
However, we note that it is theoretically possible that some technological innovations may be input-saving; that is, they may be labor-, capital-, or land-saving. In fact, because of the possibility that a new technology may be input-substituting, studies often report multiple firm performance indicators, such as profits and productivity, along with changes in inputs (labor, capital, land). Figure 1 summarizes the potential channels of effects of interest. The solid boxes represent the intervention and outcomes, and the dashed boxes represent the assumptions. As stated before, this review is not devoted to a single intervention or technology-adoption outcome. Instead, it includes several interventions inducing technology adoption in firms (measured in several ways). Despite these differences, it is useful to provide a general theory of change describing the way in which the interventions, intermediate outcomes, and final outcomes relate.

| Why it is important to do the review
There is a need to aggregate evidence across multiple studies concerning similar economic phenomena to translate research into policy (Allcott, 2015; Banerjee et al., 2015; Dehejia, 2015; Meager, 2019). A systematic review allows one to survey the literature on a given economic phenomenon, identify the knowledge gaps, and compare effects across contexts, implementers, and time, facilitating the integration of research into policy. Systematic reviews about technology adoption are mostly focused on agricultural firms (Fuglie et al., 2019; Obayelu et al., 2017; Silva et al., 2015; Waddington et al., 2014). There has been little effort to do the same for other economic sectors, despite the recent publication of high-profile papers on technology adoption in manufacturing (e.g., Atkin et al., 2017; Bloom et al., 2013). Consequently, existing summaries of evidence for nonagricultural firms are almost all nonsystematic literature reviews. Our review addresses this knowledge gap by summarizing the existing evidence on technology adoption in nonagricultural as well as agricultural firms.
A review close to ours is Piza et al. (2016). The authors systematically review the evidence about the impact of business-support services for small and medium enterprises (SMEs) in low- and middle-income countries. Our review complements their findings by focusing specifically on technology-adoption outcomes for a broader set of interventions and a broader set of firms and countries. Including a broader set of firms is important because, even though SMEs are particularly relevant for outcomes like employment, larger firms are key for other important outcomes like exports. Considering also the experience of high-income countries is warranted because active policies to promote technology adoption are widespread around the globe, not only in the developing world. The experience of these countries is relevant for policy making everywhere. There are also aspects of the technology-adoption process, for instance knowledge spillovers across firms (Hausmann & Rodrik, 2003), that carry distinctive implications for the design of policy but that arguably are not as salient in the context of other pro-business interventions, such as programs to increase formalization and access to working capital.
Existing reviews on technology adoption in nonagricultural firms that do not follow the Campbell guidelines for systematic reviews include Herbert-Copley (1990), Keller (2004), Oliveira and Martins (2011), and Foster and Rosenzweig (2010). (We use the World Bank's World Development Indicators classification of countries into low, lower middle, upper middle, and high income. We collectively refer to all countries not in the high-income group as "developing.")

[Figure 1. Potential channels of effects]

ALFARO-SERRANO ET AL. | 5 of 36

Herbert-Copley (1990) reviews case studies of technical change in manufacturing firms in the 1980s in Latin America. The study assesses the role of the nature of technology, market structure, government policy, firm characteristics, and the location of the international technological frontier on the level of technology adoption. Keller (2004) focuses at an aggregate level and on diffusion across countries. Coming from the information systems literature, Oliveira and Martins (2011) compile studies on the adoption of information technology at the firm level. Their review does not attempt to compare the results across studies. Close in spirit to the current review, Foster and Rosenzweig (2010) review microeconomic studies of the barriers to technology adoption in low-income countries. Among the factors they find to be important are financial and nonfinancial returns to adoption, individual learning and social learning, technological externalities, scale economies, schooling, credit constraints, risk and incomplete insurance, and departures from behavioral rules implied by simple models of rationality. The current review differs in the systematic search to select studies of interest, and in its coverage of more recent work.

| OBJECTIVE
The primary objective of this review is to assess the extent to which interventions affect technology adoption in firms. The secondary objectives are to assess to what extent technology adoption affects other firm outcomes and whether these effects differ across certain groups.
In particular, the review aims to answer the following research questions:
1. To what extent do interventions affect technology adoption in firms?
2. To what extent does technology adoption affect profits, employment, productivity, and yields?
3. Are these effects heterogeneous across sectors, firm size, countries, workers' skill level, or workers' gender?
Question 1 refers to the immediate impact of interventions to promote technology adoption. Question 2 explores the subsequent impact of technology adoption on other economic outcomes.
Question 3 explores heterogeneous effects across relevant groups.

| Criteria for considering studies for this review
After conducting predefined manual and electronic searches which used keywords and imposed restrictions in terms of years and language covered, we applied screening criteria to identify studies to be included in the review. These criteria were grouped into four categories described in detail below and in the published review protocol (Verhoogen et al., 2018).

| Types of participants considered
To be included in this review, studies must have firms as the unit of analysis. The term "firm" refers to productive units of any size, including single-person businesses and farmers or farms. This focus leaves out studies of technology adoption at the country or region level.
This review does not impose any restriction regarding the level of development of the country in which the intervention takes place.
It includes studies in both high-income and developing countries.

| Types of interventions considered
This review focuses on interventions that induce adoption of a new technology by a firm. Following Foster and Rosenzweig (2010), we define technology as "the relationship between inputs and outputs," and technology adoption as "the use of new mappings between input and outputs and the corresponding allocations of inputs that exploit the new mappings." This definition of technology adoption is broad, and includes the overall production plan that firms implement as well as changes in specific practices.
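To fix ideas, this definition can be written in simple notation; the notation below is ours, added for illustration, and is not taken verbatim from Foster and Rosenzweig (2010).

```latex
% Technology as a mapping from a vector of inputs x to output y
y = f(x), \qquad f : \mathbb{R}^{K}_{+} \to \mathbb{R}_{+}

% Adoption of a new technology f' means using the new mapping and
% re-allocating inputs to exploit it, for example choosing
x^{*} \in \arg\max_{x} \left\{ p \, f'(x) - w \cdot x \right\}
% where p is the output price and w the vector of input prices.
```

On this reading, a change in specific practices and a change in the overall production plan both count as adopting a new mapping.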
The term intervention refers broadly to sources of exogenous variation that shape firms' incentives to adopt new technologies, including public interventions, interventions carried out by private institutions (such as NGOs), experimental manipulations deliberately induced by academic researchers trying to understand technology adoption, and natural experiments. For instance, Bloom et al. (2016) study how the inclusion of China into a trade agreement allowing Chinese imports into Europe impacts technology adoption among firms. In this case, we consider the inclusion in the trade agreement as an intervention.
Although this review is not focused on a particular type of policy, it is possible to provide a broad classification of the interventions we expect to find. Here we follow the classification presented by Cirera and Maloney (2017):
• Direct financial support. These are interventions that directly provide funding for technological adoption. This assistance could take many forms, including access to credit, insurance, subsidies, in-kind transfers, or cash towards the take-up of technology. There are studies that explore the impact of access to credit for firms (Giné & Dean, 2009), access to insurance (Karlan et al., 2014), in-kind provision of equipment (Atkin et al., 2017; de Mel et al., 2012), and/or cash (Chudnovsky et al., 2011).
• Indirect financial support. These are interventions that help businesses to pay for the cost of adoption projects without directly providing the resources. Loan guarantees (Tan, 2009), minimum support prices (Abate et al., 2018), and changing policy on the supply chain to reduce the price of inputs (Ogunniyi et al., 2017) are examples of interventions in this category.
• Regulation and standards. These are rules, policies, and characteristics of the environment that affect agents' incentives, for example a trade policy that induces technology adoption (Bloom et al., 2016), or a positive demand shock to technology adoption induced by a government regulation (Crouzet et al., 2019; Higgins, 2019).
While the categories above each represent a broad set of interventions, studies may combine interventions across the categories. For instance, a government policy that helps subsidize take-up of a new input and provides training can be classified under both "direct financial support" and "indirect financial support" (e.g., Ogunniyi et al., 2017). Similarly, a study that includes provision of loans as well as training to microfinance clients (e.g., Giné & Mansuri, 2014) can be classified under "direct financial support" and "other direct support." It is important to note that the focus of this review is the effect of the interventions on technology adoption by existing firms. In this sense, the interventions considered are different from interventions to promote entrepreneurship. We expect that most of the studies are about the adoption of technologies that are new to the firm, but not to the world.
Papers that do not focus on an intervention (defined broadly as above) are not included in this review. For example, the seminal papers by Conley and Udry (2010) and Suri (2011), while important and influential for the broader literature, do not focus on an explicit intervention (defined as above) and have therefore been excluded.

| Methodologies considered
The review includes studies that take the firm as the unit of analysis and employ an identification strategy explicitly aiming to estimate causal effects on technology adoption. Experimental and quasi-experimental methods of causal inference attempt to disentangle the effect of the intervention of interest on a given outcome from the effect of omitted variables that also affect the outcome and correlate with the intervention. If such omitted variables are present, their influence on the outcome will be confounded with the influence of the intervention, and the analyst could end up attributing to the treatment what is really the result of the omitted variable.
Randomized controlled trials (RCTs) are commonly perceived to be the gold standard in conducting causal inference. But RCTs are costly to run and often infeasible. In addition to experimental studies, we include papers using quasi-experimental methods, including instrumental variables, regression-discontinuity designs, difference-in-differences, and propensity-score matching.
We generally require that the comparison group is a set of firms that received no treatment. Two sorts of exceptions to this rule are considered acceptable. One is when different groups receive exogenously different "doses," or amounts, of treatment, but no group receives zero treatment. The other is when the researchers provide compensation to the comparison group to replicate the nonintervention situation or to isolate some aspect of the treatment. In these cases, it is still possible to estimate causal effects cleanly, and hence they are considered in the review.
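As a concrete illustration of one of these designs, the sketch below computes a difference-in-differences estimate on made-up adoption data; the numbers are purely illustrative and are not drawn from any study in the review.

```python
# Minimal difference-in-differences sketch on made-up adoption data.
# Each entry is 1 if a firm uses the technology, 0 otherwise.

def mean(values):
    return sum(values) / len(values)

treated_pre  = [0, 0, 1, 0, 0]  # treated firms before the intervention
treated_post = [1, 1, 1, 0, 1]  # treated firms after the intervention
control_pre  = [0, 1, 0, 0, 0]  # comparison firms before
control_post = [0, 1, 0, 1, 0]  # comparison firms after

# The DiD estimate is the change in the treated group's adoption rate
# net of the change in the comparison group's adoption rate.
did = ((mean(treated_post) - mean(treated_pre))
       - (mean(control_post) - mean(control_pre)))
print(round(did, 3))  # 0.4: adoption rose 0.6 among treated vs 0.2 among controls
```

Under the usual parallel-trends assumption, netting out the comparison group's change removes common shocks that would otherwise be attributed to the intervention.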

| Types of outcome measures considered
The primary outcome of interest for the review is technology adoption. Measures of adoption can take the form of dichotomous variables (e.g., a variable that takes the value one when a technology is adopted and zero otherwise) or a count of the number of new practices adopted. Because the primary objective of this review is to examine the effects of interventions on technology adoption, we exclude papers that do not consider a technology adoption outcome.
The secondary outcomes are other firm outcomes that may be affected by technology adoption. These are variables in the second step of the causal chain. In particular, we consider measures of output per unit of input, unit cost, wages, employment, output, total factor productivity, and profits. Regarding measurement, we require that (i) output and output per worker are measured in physical or monetary units; (ii) wages are measured in monetary units; and (iii) employment is measured in hours or days or number of employees.
Total factor productivity is treated as a dimensionless variable.
Papers analyzing effects on secondary outcomes are included only if they also present estimates for the primary outcome (technology adoption).

| Search methods for identification of studies
The first step in selecting studies for the review was to identify a set of candidate papers. This set includes both published and unpublished studies. To look for candidate papers, we implemented an electronic search and, in a subsequent step, a manual search.

| Electronic search
The electronic search included bibliographic databases and specialized journals. We also looked for studies on the websites of key organizations to ensure good coverage of unpublished literature.

World Bank Group.
Our electronic search used Boolean operators to express the following requirements:
1. The studies should include terms related to the outcomes of interest for the study in the title or abstract.
2. The studies should include terms related to the methodology employed in the title, the abstract, or the main text.
3. The papers should be dated in the year 2000 or later. Similar to Piza et al. (2016), we impose this limit because we focus on studies using impact-evaluation techniques, which have been widely adopted in development economics since that time. Even though it is possible that some papers on technology adoption were produced before this year, we expect that few address endogeneity issues in the way required to be included in this review.
To keep the electronic search as broad as possible, we did not differentiate between our primary outcome (technology adoption) and secondary outcomes (other firm outcomes). The keyword searches were piloted multiple times on at least two different databases to ensure the keywords were able to identify the right set of papers.
The pilots were assessed based on whether some of the manually selected seminal papers showed up when conducting keyword searches. The final list of keywords was selected once we had verified that the combination worked in at least two different databases.
In each electronic database, we first imposed filters for publication date on or after 2000 and English language. The language filter was mostly due to resource constraints, and the year filter to the fact that methodologies analyzing causal effects of exogenous variation were not popular before 2000. Then, following EBSCOhost's search syntax, the following Boolean expression was implemented:

(TI(outcomes) OR AB(outcomes)) AND (TI(methods) OR AB(methods) OR TX(methods))

The exact syntax used in EBSCOhost, ProQuest, and IDEAS/RePEc through EBSCO Discovery expands outcomes and methods in the expression above as follows. The term outcomes stands for:

((technolog* OR manag* OR innovat* OR practice*) W4 (adopt* OR diffus* OR chang* OR alter*)) OR productivity OR "output per" OR (unit* W4 cost)

and methods stands for:

"quasi experiment*" or quasi-experiment* or "random* control* trial*" or "random* trial*" or RCT or (random* N3 allocat*) or match* or "propensity score" or PSM or "regression discontinuity" or "discontinuous design" or RDD or "difference in difference*" or difference-in-difference* or "diff in diff" or "case control" or cohort or "propensity weighted" or propensity-weighted or "interrupted time series" or "Control* evaluation" or "Control treatment" or "instrumental variable*" or heckman or IV or ((quantitative or "comparison group*" or counterfactual or "counter factual" or counter-factual or experiment*) N3 (design or study or analysis))

"W4" is EBSCO search syntax for "within 4 words of each other, in the order entered." Similarly, "N3" stands for "near (within) 3 words of each other, in any order." For other journal websites the syntax was adapted as appropriate. The electronic searches were run on May 2nd and 3rd, 2018.
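To make the proximity operators concrete, the following is a rough Python approximation of the Wn/Nn semantics and the trailing-* truncation used in the search string. This is an illustrative sketch, not the databases' actual matching engine.

```python
import re

def word_positions(text, pattern):
    """Positions of words matching pattern; a trailing '*' acts as the
    truncation (prefix) wildcard used in the search string."""
    words = re.findall(r"\w+", text.lower())
    p = pattern.lower()
    if p.endswith("*"):
        return [i for i, w in enumerate(words) if w.startswith(p[:-1])]
    return [i for i, w in enumerate(words) if w == p]

def proximity(text, a, b, n, ordered):
    """Rough stand-in for EBSCO proximity operators: Wn requires term a to
    precede term b (ordered); Nn allows either order. Both require the two
    terms to occur within n words of each other."""
    for i in word_positions(text, a):
        for j in word_positions(text, b):
            if ordered and 0 < j - i <= n:
                return True
            if not ordered and 0 < abs(j - i) <= n:
                return True
    return False
```

For example, `proximity(abstract_text, "technolog*", "adopt*", 4, ordered=False)` mimics the `technolog* N4 adopt*` condition on a candidate abstract.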

| Manual search
In addition to the electronic search, we conducted a manual search between May 2018 and June 2020. It entailed the following three procedures, also explained in detail in Appendix A:

• Recommendations: We sought recommendations of additional references from experts and practitioners.
• Backward-citation search: For papers that met the criteria and were selected initially, we conducted a backward-citation search in Google Scholar, and all papers that cited the initially selected papers were screened.
• Reference search: For the initially selected papers, we screened the papers that they cited.

| Screening procedure
The output of the electronic and manual searches was a very broad set of candidate papers (42,462), including many articles that were not relevant for our review. We used a two-step screening procedure to identify papers that met the criteria for the review (see Figure 2).

Step 1-Title and abstract screening: All candidate papers were screened based on their title and abstract. Candidate papers were grouped in batches of 350. Initially, two batches were double-screened, with two reviewers screening each paper. Each reviewer applied the criteria independently, and once they completed a batch of 350, their assessments were compared and discussed before they continued to the second batch. In the discussion, areas of disagreement were resolved.
The objective of the double-screening procedure was to promote learning and increase consistency across reviewers. To move from double to single screening, we required that discrepancies between the two reviewers fall in the second batch relative to the first and affect fewer than 5% of screened papers. After two batches (700 papers) were double-screened in this way, this rule was met by all reviewers, and each reviewer continued the title and abstract screening with single screening. Therefore, the remaining papers were screened only once.
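The rule for moving from double to single screening can be expressed as a simple check. This is a sketch of the rule as described in the text, not the authors' actual tooling; the batch size and 5% threshold are taken from the text.

```python
def can_single_screen(batch1_disagreements, batch2_disagreements,
                      batch_size=350, threshold=0.05):
    """A reviewer pair moves from double to single screening when
    (a) disagreements fall in the second batch relative to the first, and
    (b) second-batch disagreements are below 5% of screened papers."""
    improved = batch2_disagreements < batch1_disagreements
    low_rate = batch2_disagreements / batch_size < threshold
    return improved and low_rate
```

For instance, a pair that disagreed on 30 papers in batch 1 and 12 in batch 2 (about 3.4%) would qualify; 18 disagreements in batch 2 (about 5.1%) would not.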
Step 2-Full text screening: Papers that passed the title and abstract screening were screened again based on the full article text. This time, to ensure consistency and learning across reviewers, we implemented the double-coding procedure for the first 20 papers coded by each reviewer, grouped in two batches of 10.
In each of the screening stages, the reviewer had to determine whether a paper should be excluded based on the four elements of the criteria developed in Section 3.1, as follows:

1. Exclude based on participants. The study was excluded if the unit of analysis was not firms.
2. Exclude based on intervention. The study was excluded if it did not mention explicitly that the intervention had the potential of inducing technology adoption by a firm.
3. Exclude based on methodology. The study was excluded if it did not use one of the following methodologies: RCT, instrumental variables, regression discontinuity, difference-in-differences, propensity score matching, or synthetic control.
4. Exclude based on outcomes. The study was excluded if it did not report the effect of the intervention on technology adoption. At this point, we did not impose restrictions regarding the secondary outcomes.
Papers that passed both screenings were selected for the data extraction phase. The two-step screening procedure did not exclude articles based on whether the reported estimates were usable in the review or not.
Papers found manually in relevant websites as well as papers recommended by experts also went through the two-step screening procedure.
Once a paper was selected for data extraction, the backward-citation and reference searches described in Section 3.2 were conducted.

| Data extraction and management
Using the variables for data extraction presented in Appendix B, relevant information was entered in an Excel file for papers that passed the screening. This file included a coding sheet with basic characteristics at the study level such as citation, authors, year of publication, country, and methodology, among others. In addition, the file included a data analysis sheet with information at the regression level, including impact estimates, number of observations, and SEs.
A double-entry procedure was carried out for a subset of the data from the selected papers (such as the impact estimates) to ensure the quality of the data entry. This was done for a randomly selected sample of 33% of the papers.
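A double-entry check of this kind can be sketched as follows. This is a hypothetical helper, not the authors' actual procedure; the field names are illustrative.

```python
def double_entry_mismatches(entry_a, entry_b, tol=1e-8):
    """Compare two independent extractions of the same record and return
    the fields on which the two data-entry passes disagree."""
    flagged = []
    for field in entry_a.keys() & entry_b.keys():
        x, y = entry_a[field], entry_b[field]
        if isinstance(x, float) and isinstance(y, float):
            if abs(x - y) > tol:      # numeric fields: tolerance comparison
                flagged.append(field)
        elif x != y:                  # other fields: exact comparison
            flagged.append(field)
    return sorted(flagged)
```

Any flagged field would then be resolved against the original paper before analysis.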

| Measures of treatment effect
Comparing different pieces of evidence about an intervention requires comparable effect-size estimates. These comparable measures are usually not directly provided in the original study and need to be computed by the reviewer based on the information in the paper.
Different studies often measure outcomes in different units. For example, an outcome measure such as employment may be expressed as number of workers, number of worker hours, number of hours per day, and so forth. This creates problems of comparison across and within studies. Moreover, studies often use multiple indicators to express changes in relevant outcomes, exacerbating problems of comparison. As indicated in Borenstein et al. (2009), when the outcomes are not directly measured in a common and meaningful scale, one can use the standardized mean difference (SMD). This measure re-expresses the impact measure of a study relative to the outcome variability observed in that study. A positive value of the SMD indicates a positive impact of the intervention on the outcome.

F I G U R E 2 Screening procedure
In several studies in our setting, we were not able to find the breakdown of the sample between treatment and control groups. For this reason, as well as to ensure standardization across studies, we assume Nt = Nc = N/2. We then use the formula for SMD as provided in Waddington et al. (2019, p. 26),

SMD = 2β̂/(SE × √N),

where β̂ is the treatment effect, SE is its standard error, and N is the sample size of the study.
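Under the equal-groups assumption (Nt = Nc = N/2), the SMD and a common large-sample approximation to its SE can be computed as follows. This is a sketch based on the standard SMD-from-t-statistic formula; whether the review used exactly this variance approximation is an assumption.

```python
import math

def smd_from_regression(beta, se, n):
    """SMD from a regression coefficient, its SE, and the total sample size,
    assuming equal groups (Nt = Nc = N/2). With t = beta/se and
    1/Nt + 1/Nc = 4/N, the standard formula SMD = t*sqrt(1/Nt + 1/Nc)
    reduces to 2*beta/(se*sqrt(N))."""
    return 2.0 * beta / (se * math.sqrt(n))

def smd_se(smd, n):
    """Common large-sample approximation of the SE of an SMD with equal
    groups: sqrt(1/Nt + 1/Nc + smd**2/(2*N)) = sqrt(4/N + smd**2/(2*N))."""
    return math.sqrt(4.0 / n + smd ** 2 / (2.0 * n))
```

For example, a coefficient of 0.5 with SE 0.25 in a study of 400 firms yields an SMD of 0.2.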

| Unit of analysis issues
As discussed earlier, we focus on papers that take the firm as the unit of analysis. However, the unit of analysis might not coincide with the level of experimental or quasi-experimental variation. For instance, a study might report estimates of firm-level impacts of a technology adoption program that was assigned to the firms in selected districts. In this case, the treatment assignment status varies at the district level instead of the firm level. Studies like this should account for the intra-cluster correlation when computing the SEs of their effect estimates. Fortunately, this practice has been increasingly adopted in the social sciences through the use of cluster-robust SEs. None of our included studies failed to account for intra-cluster correlation when computing SEs.
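To illustrate why clustered assignment matters for SEs, here is a minimal numpy sketch of the cluster-robust (CRVE) sandwich estimator for OLS, without small-sample corrections. It is an illustration of the technique, not code from any included study.

```python
import numpy as np

def ols_cluster_se(X, y, clusters):
    """OLS point estimates plus cluster-robust (CRVE) standard errors.
    With one observation per cluster this reduces to the HC0
    heteroskedasticity-robust estimator."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    clusters = np.asarray(clusters)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    u = y - X @ beta                      # residuals
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        Xg = X[clusters == g]
        ug = u[clusters == g]
        score = Xg.T @ ug                 # summed score within the cluster
        meat += np.outer(score, score)
    V = XtX_inv @ meat @ XtX_inv          # sandwich variance estimator
    return beta, np.sqrt(np.diag(V))
```

Passing the district identifier as `clusters` would give SEs that reflect district-level assignment even though each row is a firm.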

| Dealing with missing data
When relevant information was not reported (such as pooled standard deviations or sample sizes), we contacted the authors of the studies to request it. However, in nine cases we did not hear back or were unable to contact the authors. In such cases, we did not include the study in the analysis, as we were unable to compute the SMD or its SE.

| Assessing reporting biases
We assessed reporting biases with a modified version of the 3ie risk of bias tool (Hombrados & Waddington, 2012), which is presented in detail in Appendix C. The tool covers four dimensions: (1) Reporting on key aspects of selection bias and confounding, (2) Reporting on spillovers of intervention to comparison groups, (3) Reporting of SEs, and (4) Reporting on Hawthorne effect and baseline data. If a study uses various methodologies, the assessments under selection bias and confounding are based on the most rigorous methodology.
For each of these dimensions, a set of considerations was defined. Whether the study addresses each consideration receives one of three answers: "reported," "not reported," or "not applicable."

| Data synthesis
For the studies under consideration, we extracted available regression coefficients. In the context of multivariate regressions we extracted only the regression coefficients relevant to the outcomes we consider. All the effect sizes from the selected papers were converted into a standardized effect using the formula discussed above in the section on "Measures of treatment effect." To avoid treating different results from the same paper as independent results, we computed an average standardized effect for all results, as well as the average SEs for these standardized effects within a paper for the same variable (Waddington et al., 2014).
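The within-paper averaging described above can be sketched as follows. This is a minimal illustration assuming simple unweighted means; any additional weighting the review may have applied is not shown.

```python
from collections import defaultdict

def average_within_paper(results):
    """Collapse multiple results from the same paper into one effect per
    (paper, outcome) pair: the mean of the standardized effects and the
    mean of their SEs."""
    groups = defaultdict(list)
    for r in results:
        groups[(r["paper"], r["outcome"])].append((r["smd"], r["se"]))
    out = {}
    for key, vals in groups.items():
        smds = [v[0] for v in vals]
        ses = [v[1] for v in vals]
        out[key] = (sum(smds) / len(smds), sum(ses) / len(ses))
    return out
```

This prevents, say, three adoption regressions from one paper from entering a forest plot as three independent studies.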
The standardized effects and their respective SEs were computed manually. We generated forest plots of these standardized effect sizes and their confidence intervals using the Stata command metan. Note that we do not attempt to provide a summary effect across studies, because the selected studies included several interventions and several definitions of technology adoption outcomes. Hence, we are not concerned with the computation of the summary effect, or the SE of the summary effect, commonly applied in other meta-analyses/meta-regressions.
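The per-study quantities a forest plot displays, the SMD and its 95% confidence interval, can be computed as follows. This sketches what the plotting command displays; it is not the metan command itself.

```python
def forest_rows(effects, z=1.96):
    """Build the rows a forest plot needs: (label, SMD, CI lower, CI upper),
    where the 95% CI is SMD +/- 1.96 * SE. No pooled summary row is added,
    matching the review's decision not to report a cross-study effect."""
    return [(label, smd, smd - z * se, smd + z * se)
            for label, smd, se in effects]
```

Each row can then be drawn as a point with a horizontal interval, one study per line.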

| Deviation from protocol
The review was executed following the steps described in the protocol (Verhoogen et al., 2018), with the following deviations:

• The protocol did not describe some of the manual searches conducted (reference search and citation search). These are now described in detail in Appendix A.
• The protocol included research questions covering additional secondary outcomes. Few of the studies analyzed explore the effects on total output, output per unit of input, unit cost, firm-level wages, exports, and survival, as listed in Question 3 in the protocol. Therefore, it was not possible to analyze effects on these secondary outcomes, and the research question was revised in this version to reflect the secondary outcomes analyzed (profits, employment, productivity, and yields).
• A double-entry for a set of papers was conducted during data extraction, as explained in Section 3.3 under "Data extraction and management."
• We modified the tool for assessing risk of biases. We piloted the procedure described in the protocol and found that many studies received "unclear" classifications in the assessments. To avoid generating assessments that we felt might be misleading, we modified the tool to make it less subjective. In addition, we introduced the classifications "Not applicable" and "Not reported" instead of "Unclear." The updated tool is reported in Appendix C and the assessments in Appendix D.
• Given that several measures of technology and several types of interventions were used in the selected papers, we decided that averages of effect sizes across studies would not be meaningful and we do not present them. We do present average effect sizes within studies as appropriate.

| Included studies

Nearly all included studies were from developing countries; there was one from a high-income country, as can be seen in Table 2. The data fields extracted from the included studies can be found in Appendix B.

F I G U R E 3 Number of included and excluded papers. Full-text articles can be excluded for multiple reasons. For instance, the same study can be excluded for not meeting criteria in terms of participants and methodology. N refers to the number of studies
In total, we estimated the standardized effect sizes for 1108 regression coefficients from the selected studies. Figure 5 presents the number of regression coefficients by category of intervention.
Given that a study can present multiple results for different interventions, we present the following analysis by regression coefficients and not by study. For instance, if a given study presents a regression coefficient for grant provision on technology adoption and another regression coefficient for training on technology adoption, we present them separately for each intervention.

Most regressions (684) assess exclusively the effects of "other direct support," such as provision of training, management consulting services, technology extension services, as well as informational interventions. The second-largest category of interventions is "direct financial support" (171), which includes grants, loans, insurance, and subsidies, among others, exclusively. There were very few regressions assessing the role of "regulations and standards" or "indirect financial support." Some studies assessed the impact of a combination of interventions and were therefore classified in more than one intervention category. For instance, a program analyzing the impact of providing firm owners with both training and contract orders (e.g., Hardy & McCasland, 2018) would fall under "direct financial support" and "other direct support." Similarly, a program providing loans and training to microfinance clients (e.g., Gine & Mansuri, 2014) would also fall under "direct financial support" and "other direct support."

The outcome variables presented in the selected studies can be broadly categorized into technology-adoption outcomes and other firm outcomes. Based on our selection criteria, all included studies reported at least one technology-adoption outcome. There are 26 studies that report only a technology-adoption outcome and no other firm outcomes. The remaining studies present more than one other firm outcome. Therefore, we have more regression coefficients reported on other firm outcomes than on technology-adoption outcomes.
In Figure 6, we present a breakdown of the types of technology-adoption outcomes used in selected studies. In total, there were 555 regression coefficients measuring the effects of interventions on technology adoption. Since there are too many coefficients to list individually, we grouped similar ones into broader categories. The aim is to give the reader an overview of the types of variables considered by the studies in our selection.
Partly for this reason, we were unable to answer the third question stated in the objectives: given the broad range of interventions and outcomes, an analysis of heterogeneous effects was not possible. In addition, analyzing heterogeneous effects was hard because only a few studies reported information on variables such as gender and workers' skills.

| Excluded studies
There were two rounds of exclusions as described in Figure 3. In the first round, 40,894 studies were excluded based on title and abstract out of 42,564 candidate papers. In the second round, another 1581 studies were excluded based on full text screening. Exclusions in both rounds were based on the four-element criteria discussed in Section 3.1: intervention, participants, outcomes, and methodology.
For instance, studies were excluded based on the intervention criterion if they did not analyze the effect of credibly exogenous variation in the incentives for firms to adopt a new technology. Thus, we excluded studies that only focused on correlates of technology adoption without an identification strategy to establish causality. Studies were excluded based on participants if they did not analyze effects at the firm level. For example, the study by Hjort and Poulsen (2019) analyzes the impact of adoption at the regional level and was not included in our analysis.

F I G U R E 5 Number of regression coefficients by intervention categories. This figure plots the number of regression coefficients that fit into different intervention categories. Categories are described in detail in Section 3.1. DFS or "Direct Financial Support" includes the direct provision of funding for technological adoption, for example, access to credit and cash transfers. IFS or "Indirect Financial Support" includes support to pay for the cost of the adoption projects without directly providing funding, for example, fiscal incentives and credit assurances. ODS or "Other Direct Support" includes nonpecuniary interventions, for example, technology extension services and provision of management consultancy and training. RS or "Regulations and Standards" includes rules, policies, and characteristics of the environment that affect agents' incentives, for example, the level of competition or the characteristics of contracts

F I G U R E 6 Number of regression coefficients for technology adoption outcomes. This figure plots the number of regression coefficients that are related to the technology adoption outcomes under a broad categorization of technology adoption measures. "Other" includes diversified business activities and spillover adoption

F I G U R E 7 Number of regressions for secondary outcomes. This figure plots the number of regression coefficients that are related to other firm outcomes under a broad categorization of secondary outcomes. Other firm outcomes (or secondary outcomes) are variables that are affected by technology adoption such as financial performance, productivity, and employment, among others. "Other" includes HR index and sought government assistance
Studies were excluded based on outcomes if they did not have a technology adoption outcome variable, for example McKenzie (2017) and Das et al. (2013). Lastly, studies were excluded based on methodology if they did not implement one of the methodologies for causal inference prespecified in our protocol; for example, Conley and Udry (2010) was excluded for this reason.
In addition, of the 89 selected studies that met the criteria, 9 did not report the sample size and/or SE, and we were unable to obtain this information from their authors. These were also excluded from the analysis.

| Assessment of reporting biases
We assessed the quality of the evidence in terms of the completeness of reporting in four categories: (1) reporting on key aspects of selection bias and confounding, (2) reporting on spillovers of interventions to comparison groups, (3) reporting on SEs, and (4) reporting on Hawthorne effect and collection of retrospective data.
The tool followed in the assessments is presented in Appendix C and the results of the assessment in Appendix D.
Most studies report on key aspects of selection bias and confounding, regardless of the methodology used, as presented in Table 3. RCTs typically report on considerations regarding random assignment, baseline characteristics, and attrition. However, there is room for improvement when it comes to reporting on heterogeneous effects. Of the 39 studies reporting heterogeneous effects, 24 do not report evidence that the strategy for choosing groups was planned or announced before the randomization. Among the 30 studies not reporting on the strategy for choosing some groups, only nine report on potential threats to validity of causal claims due to specification searching in the analysis of heterogeneous impacts.
Also, there were six studies that report evidence that the strategy for choosing groups was planned or announced before the randomization for some of the groups used to analyze heterogeneous effects but not for others.
Most studies also report on considerations related to spillovers of the interventions to comparison groups and SEs. However, of 47 papers collecting data in the context of the study, only 13 discuss whether the monitoring process itself affected behavior (Hawthorne effect).

| Synthesis of results
This section presents the standardized effect sizes for all the selected papers. The results are presented using forest plots to allow for visual comparison.
The forest plots show the SMD by study and the associated confidence interval. 2 The Intervention Category column indicates the type of intervention as described in Section 3.1. DFS or "Direct Financial Support" includes the direct provision of funding for technological adoption, for example access to credit and cash transfers. IFS or "Indirect Financial Support" includes support to pay for the cost of the adoption projects without directly providing funding, for example fiscal incentives and credit assurances. ODS or "Other Direct Support" includes nonpecuniary interventions, for example technology extension services and provision of management consultancy and training. RS or "Regulations and Standards" includes rules, policies, and characteristics of the environment that affect agents' incentives, for example the level of competition or the characteristics of contracts.
To follow the logic presented under Section 1.2, "Potential Channels of Effects," and address our research questions, we first synthesize our findings on the impact of interventions on technology-adoption outcomes and then focus on the consequent effects on other firm outcomes.
There is wide variation in the types of interventions examined by the included studies, including training, consulting, information provision, provision of equipment, and access to credit, among others. There was also a wide range of technology-adoption outcomes. Given this heterogeneity, we decided it was not appropriate to conduct an analysis of heterogeneous effects and therefore could not achieve the third objective of the review. In addition, the selected papers rarely provided sufficient information for us to address heterogeneity.
However, we analyze the effects of interventions separately for agricultural firms and for firms in the manufacturing and services (nonagricultural) sectors.

Manufacturing and services
We analyze 33 studies that focus on technology adoption in manufacturing and services. Figure 8 shows the forest plots along with the standardized effect sizes for the associated studies. Of the 33 studies, 19 find positive and statistically significant standardized effect sizes; the other 14 studies do not find statistically significant effects.

2 The "metan" command on Stata is used to generate the forest plots. The average pooled effect is directly computed by the command. The average pooled effect here refers to a within-study mean.
T A B L E 3 Completeness of reporting assessment (number of studies)

Counts are shown in the order R (Reported) / NR (Not reported) / NA (Not applicable) / Total.

(1) Reporting on key aspects of selection bias and confounding

RCT
(1) The study reports that the assignment was done at random, or describes a random component as part of the assignment procedure. 62 / 0 / 0 / 62
(2a) If the study shows heterogeneous effects at the group level, it reports some evidence of the strategy for choosing groups before the randomization. 15 / 24 / 23 / 62
(2b) If the study shows heterogeneous effects at the group level but no evidence of the strategy for choosing groups before the randomization, it reports potential threats to validity of causal claims based on the analysis of heterogeneous impacts. 9 / 21 / 32 / 62
(3) The study reports baseline characteristics of the treatment and control groups, and/or that overall they are statistically similar. 61 / 1 / 0 / 62
(4) If there is some difference in baseline characteristics between treated and control groups, the study reports that this difference is accounted for.

(2) If the study reports baseline characteristics that are not used for control, it reports that these are overall statistically similar in different groups. 1 / 2 / 6 / 9

IV
(1) The paper acknowledges and addresses potential threats to validity (e.g., violation of exclusion restriction).

Synthetic control
(2) The study reports robustness to the weights used to measure the difference between the characteristics of the treated unit and the synthetic control.

(2) Reporting on spillover of intervention to comparison groups
(1) The study reports whether there are likely to be spillovers to the comparison group. 49 / 19 / 12 / 80
(2) If the study reports that there are likely to be spillovers to the comparison group, then it accounts for these in the analysis. 28 / 4 / 48 / 80

(3) Reporting of standard errors
(1) If the study reports that observations might be independent, this is taken into account by computing heteroskedasticity-robust SEs; in all other cases, clustered SEs have been reported. 68 / 12 / 0 / 80

(4) Reporting on Hawthorne effect and collection of retrospective data
(1) If the data are collected in the context of the study, the study reports that the monitoring process did not affect behavior (Hawthorne effect).

Agriculture
We analyze 47 studies that assess technology adoption as an outcome variable in the agricultural sector. Figure 9 shows the forest plots along with the standardized effect sizes for the associated studies. Similar to the results for manufacturing and services, there is mixed evidence on the impact of various interventions on technology adoption in agriculture.
Of the 47 studies, 20 studies find positive and statistically significant standardized effect sizes, and are briefly highlighted below. The other 27 studies do not find statistically significant effects.
For the agricultural sector, several studies focusing on financial support (either direct or indirect) find positive impacts on technology adoption. These interventions include, for example, agricultural microcredit in Bangladesh (Hossain et al., 2016), vouchers for fertilizer and improved seeds in Mozambique (Carter et al., 2014), addition of input subsidies to an agricultural extension program in the Democratic Republic of Congo (Leuveld et al., 2018), an NGO-run agricultural input subsidy and extension program in Uganda (Fishman et al., 2017), random provision of fertilizer to rice farmers in Mali (Beaman et al., 2013), nonreimbursable vouchers to partially finance the total cost of improved pastures technology in the Dominican Republic (Aramburu et al., 2019), and access to credit to invest in floating net aquaculture after being relocated due to a reservoir construction project in Indonesia (Miyata & Sawada, 2007).
F I G U R E 9 Forest plot of the effects of firm-level interventions in agriculture on technology adoption. ES is the SMD and 95% CI is the associated confidence interval. DFS, direct financial support; IFS, indirect financial support; ODS, other direct support; RS, regulations and standards; SMD, standardized mean difference
Studies analyzing the provision of insurance also found positive effects on adoption. For example, Karlan et al. (2014) randomly assign farmers in northern Ghana to receive cash grants, opportunities to purchase rainfall index insurance, or a combination of the two, and find that insurance leads to significantly larger agricultural investment and riskier production choices. Freudenreich and Mußhoff (2018) find that bundling hybrid seeds with an insurance scheme increases adoption in Mexico. However, many similar interventions in other contexts do not lead to increased adoption, for example, Giné and Yang (2009)

| Effects of interventions on other firm outcomes (research question 2)
Having discussed the effects of various interventions on technology adoption for firms, we next focus on the second research question on the effects of interventions on other firm outcomes: profits, employees, productivity, and yield. In the plots below, we discuss the effects of interventions on other firm outcomes and indicate whether the study also finds a positive effect on technology adoption outcomes.

Manufacturing and services
In manufacturing and services, 15 studies report profits as an outcome variable; we present the corresponding forest plot in Figure 10. Three of these studies find positive and statistically significant standardized effect sizes on profits. One, for example, is set in Mexico and finds an increase in profits for corner stores that adopt POS terminals to accept card payments. It is important to note that these three studies also find a positive effect on technology adoption.
The other 12 studies that report effects on profits find statistically insignificant standardized effect sizes. Some also involved business training (Bruhn & Zia, 2013; Gine & Mansuri, 2014), and five find positive effects of interventions on technology adoption.
Regarding employment, 13 studies report on the impact of an intervention on the number of employees as an outcome variable.
These are presented in Figure 11. Of the 13 studies, 12 report small and statistically insignificant impacts. Seven of the studies report a positive and statistically significant standardized effect size on technology-adoption outcomes. The only study that reports a positive impact on the number of employees (although no significant effect on technology adoption) is Tan (2009), which looks at the effects of a SME support program in Chile.
In Figure 12, we show the six studies that report results on the effects of interventions on productivity. Studies reporting on total factor productivity, output per worker, or change in value added were included under this category. None of the papers find a statistically significant standardized effect size on firm productivity at the 95% level, although three report a positive and statistically significant standardized effect size on technology adoption outcomes. Bloom et al. (2013) finds the largest effect size in our selected sample of studies, but the standardized effect size is significant only at the 90% level. Programs that provide training (Valdivia, 2015), learning from peers (Cai & Szeidl, 2018), and other SME support (Tan, 2009) have point estimates (of standardized effect sizes) in the expected direction, but these are not statistically significant.

Agriculture
Out of the sample of selected studies, 18 report effects on yield as an outcome variable. The forest plot is presented in Figure 13. Of the 18 studies, 8 find a statistically significant and positive standardized effect size of interventions on yields, and half of these (4 of 8) also find a positive effect on technology adoption outcomes. Another 9 studies find insignificant effects of interventions on yields; 4 of these find positive effects on technology adoption. One study reports a statistically significant and negative standardized effect size on yields (Fischer & Qaim, 2012).
There is mixed evidence that direct financial support to farmers has an effect on yields. On the one hand, direct provision of fertilizers (Beaman et al., 2013), grants to farmers (Beaman et al., 2015), and subsidized agricultural inputs (Ogunniyi et al., 2017) have positive standardized effect sizes on yields. However, other similar interventions do not result in higher yields-for example, an agricultural input subsidy and extension program in Uganda (Fishman et al., 2017), a provision of cash grants and rainfall index insurance in Ghana (Karlan et al., 2014), or an agricultural microcredit in Bangladesh (Hossain et al., 2016).
Other interventions that led to higher yields include farmer field schools in Pakistan that provided skills on integrated pest management.

Although the studies consider a broad range of interventions, those related to the provision of "Other Direct Support" and "Direct Financial Support" are more common. Relatively few papers studied "Indirect Financial Support" or "Regulation and Standards" interventions.

| Quality of the evidence
Overall, 62 out of the 80 selected papers use an RCT, the methodology most robust to threats to validity. The rest use quasi-experimental methods. Regarding reporting biases, most studies followed best practice when it comes to reporting on selection bias and confounding considerations, spillovers of the interventions to the comparison groups, and SEs. However, there is room for improvement when it comes to reporting on heterogeneous effects and Hawthorne effects.

| Limitations and potential biases in the review process
While we have tried to mitigate many of the potential biases that could affect the results of our study, certain aspects of the process were determined by the constraints of human resources and time. We acknowledge that these could be a potential source of concern.

FIGURE 12 Forest plot of the effects of firm-level interventions on productivity. ES is the SMD and 95% CI is the associated confidence interval. (+) indicates the study reports a positive significant technology adoption outcome. DFS, direct financial support; IFS, indirect financial support; ODS, other direct support; RS, regulations and standards; SMD, standardized mean difference.
1. Lack of double coding and double screening for all studies: Resource constraints meant that we could ensure double screening only for each screener's initial batch of 700 papers, after which each screener worked independently. Similarly, given the small number of studies selected, data extraction was completed by a single person. This may be a potential source of bias.
2. Limit to studies in English: Due to resource constraints, we confined our electronic and manual searches to English-language publications. This could be another potential source of bias.
3. Exclusion of studies: For 46 of the 89 studies, we had to approach the authors either for missing information or for clarifications. We did not hear back from the author(s) of nine studies.
4. Generalizing across contexts: The purpose of this study was to compile the impact-evaluation literature on technology adoption in a systematic manner, identifying knowledge gaps as well as areas where evidence is strong. One of the challenges of a study of this scope and magnitude lies in the broad variety of contexts, interventions, and outcomes to consider and compile. Given this variety, the potential to overlook nuances becomes all the more likely. For this reason, we encourage readers to treat these results as indicative rather than conclusive evidence.

| Agreements and disagreements with other studies or reviews
Unlike existing systematic reviews about technology adoption, which mostly focus on agricultural firms (Fuglie et al., 2019; Obayelu et al., 2017; Silva et al., 2015; Waddington et al., 2014), this review includes studies from other sectors such as manufacturing (e.g., Atkin et al., 2017; Bloom et al., 2013). It also complements Piza et al.

In several cases, other direct support, including agricultural extension services, exposure to peer firms through social networks, ICT, and value chains, led to increased technology adoption.
In manufacturing and services, studies have found that a wide range of interventions led to positive effects on technology adoption, depending on the setting. In several cases, interventions such as direct provision of new techniques, consulting on management and production practices, and management and business training (including accounting, marketing, finance, and psychology-based personal initiative training) increased technology adoption. However, in other contexts, studies that provided business training or management consulting to firms did not find a statistically significant standardized effect on technology adoption.
We have also studied the effects of interventions in our sample of papers on other firm performance measures. We find that studies that report a positive effect on technology-adoption outcomes do not necessarily find a corresponding positive effect on other firm outcomes.
Overall, in the agricultural sector, there is mixed evidence that direct financial support to farmers affects yields. Other interventions that have led to higher yields include farmer field schools, marketing assistance, NGO-led services to help farmers adopt and market export crops, ICT-based market information, and SMS messages with agricultural advice for farmers. For firms in manufacturing and services, most of the studies we reviewed find statistically insignificant effects on profits, productivity, and employment.
Taken together, our results suggest that while various interventions can generate positive impacts on technology adoption among firms, these effects tend to be context-specific. We have found that the effects on farm yields, firm profits, productivity, and employment also tend to be mixed. Therefore, overall, we cannot say that one group of interventions has a higher impact and should be favored over others. Policymakers must be careful in interpreting these results: the same intervention cannot be assumed to work equally well across contexts and needs to be adjusted to each specific regional context. Instead, policymakers may need to try out different types of interventions to see what works in their setting. We also emphasize that a statistically insignificant finding for a type of intervention in a particular context does not mean that all interventions of that type are unworthy of consideration. Rather, attention should be paid to how programs can be improved and better tailored to particular environments to achieve better outcomes.

| Implications for research
The results of this review strongly point to the need for additional research. The three primary recommendations for future research based on this review are:

1. Identify intervention areas that are less studied. While interventions classified as "Other Direct Support" and "Direct Financial Support," such as consulting, training, grants, and subsidies, have received significant attention from researchers, the impact of "Indirect Financial Support," including interventions such as access to credit and incentive payments, as well as "Regulations and Standards," has been less studied in the context of technology adoption.
2. It would be helpful for researchers to provide all the information necessary to compute SMDs. We had to contact the authors of 46 of the 89 selected studies for either clarifications or additional information to compute SMDs. Reporting results with all the appropriate information would go a long way toward reducing the effort required to collate the information in a meaningful way.
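To make concrete what "all the necessary information" means, the sketch below computes an SMD (Hedges' g, a standard bias-corrected variant of Cohen's d) and its 95% confidence interval from group means, standard deviations, and sample sizes, the minimum a study must report for inclusion in a meta-analysis. The function name and the large-sample variance formula are standard conventions, not taken from this review:

```python
import math

def smd_with_ci(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g and its approximate 95% CI from summary statistics."""
    # Pooled standard deviation across treatment and control groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled        # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)        # Hedges' small-sample correction
    g = j * d
    # Large-sample approximation to the variance of g
    var_g = j**2 * ((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    se = math.sqrt(var_g)
    return g, (g - 1.96 * se, g + 1.96 * se)
```

For example, `smd_with_ci(1.0, 0.0, 1.0, 1.0, 100, 100)` returns g of roughly 0.996 with a CI of roughly (0.70, 1.29). When a study reports only coefficients and t-statistics, the reviewer must recover these ingredients indirectly, which is the extra effort the recommendation asks authors to spare.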
3. Based on the assessment of reporting biases, researchers could improve reporting on heterogeneous effects by including some evidence of the strategy for choosing groups before the randomization or potential threats to validity of causal claims based on the analysis of heterogeneous impacts. Researchers can also improve reporting on Hawthorne effects by reporting whether the monitoring process may affect behavior.
Although this review focused only on interventions that induce firms to adopt new technology, researchers also need to focus on the reasons behind low technology adoption by firms in developing countries. As a broad categorization, reasons for low adoption may be (i) internal to the firm, (ii) on the input side, or (iii) on the output side (Verhoogen, 2020). Understanding barriers to technology adoption will also help researchers design interventions that alleviate them, thereby leading to increased adoption.
Overall, we believe that fruitful areas of future research would include both an understanding of barriers to technology adoption and interventions that lead to increased adoption through removal of those barriers.

Review authors
Lead review author: The lead author is the person who develops and co-ordinates the review team, discusses and assigns roles for individual members of the review team, liaises with the editorial base, and takes responsibility for the on-going updates of the review.

Roles and responsibilities
All authors contributed to the writing and revising of this review. Ritam Chaurey, Ana Goicoechea, and Eric Verhoogen provided content and methodological expertise. Tanay Balantrapu led the information retrieval and statistical analysis. David Alfaro-Serrano led the development of the search strategy and protocol. Advice on information retrieval was kindly offered by John Eyers.
In particular, roles and responsibilities were distributed as follows (names listed in alphabetical order): • Content: All coauthors.
• Systematic review methods: All coauthors.

DECLARATIONS OF INTEREST
There are no conflicts of interest identified at the time of the review.

PLANS FOR UPDATING THE REVIEW
Any of the coauthors may consider updating the review in 2 years' time or when there have been considerable advances in the literature.

The three search procedures were executed as presented in Figure A1. Once step 5 was completed, steps 2-5 were repeated to conduct reference and citation searches on additional relevant papers until no new relevant papers were found. For each procedure, the screening process outlined in Figure 2 of the protocol was used.

Website Search
The website search involved screening 2258 papers, 32 of which were included after full-text screening. Ten organizations were selected when the protocol was originally conceived. The search process for the websites mirrored the database search procedure as much as possible, but the search strategy had to be tailored to each website: for example, some websites only allowed a keyword search while others allowed an advanced search. The search method for each website is detailed in Table A1 below.

Reference search
For each of the included papers, the references cited were first screened by title and abstract and then by full text, using the criteria described in Section 3.1. Abstracts and full texts were accessed through Google Scholar and the World Bank/IMF electronic library.

Citation search
Google Scholar was the primary tool used for the citation search.
For each paper, a search within the citation results was conducted using the keywords "technology adoption." Given that complex Boolean searches do not function in Google Scholar, and that the search could not be restricted to the title or abstract, "technology adoption" was used as the search term. This search term prioritized papers using the phrase "technology adoption," but also yielded results that use the terms "technology" or "adoption" prominently.

TABLE C1 Tool for assessment of reporting biases

(1) Reporting on key aspects of selection bias and confounding

RCT
(1) The study reports that the assignment was done at random or describes a random component as part of the assignment procedure.
(2a) If the study shows heterogeneous effects at the group level, it reports some evidence of the strategy for choosing groups before the randomization.
(2b) If the study shows heterogeneous effects at the group level but no evidence of the strategy for choosing groups before the randomization, it reports potential threats to validity of causal claims based on the analysis of heterogeneous impacts.
(3) The study reports baseline characteristics of the treatment and control groups, and/or that overall they are statistically similar.
(4) If there is some difference in baseline characteristics between treated and control groups, the study reports that this difference is accounted for.
(5) If there is attrition, the study reports that the loss of sample units can be considered random and/or intention-to-treat estimates.

RDD
(1) The study reports that the allocation is based on a threshold in a continuous variable.
(2) The study reports that the participants cannot manipulate the assignment variable.
(3) The study reports that the estimation is robust to different choices of bandwidth.
(4) The study reports that baseline characteristics are overall continuous around the assignment threshold and/or if some baseline characteristic show a discontinuity around the assignment threshold, the study reports that this difference is controlled for when estimating the treatment effects.

DiD
(1) The study reports that the outcome variables' trends are parallel before the introduction of the treatment. This could be in the form of a table (e.g., a regression-based test) or graphs.
PSM or statistical control
(1) The study describes the control method used, and it is based on baseline characteristics that are relevant to explaining participation.
(2) If the study reports baseline characteristics that are not used for control, it reports that these are overall statistically similar in different groups.

IV
(1) The paper acknowledges and addresses potential threats to validity (e.g., violation of exclusion restriction).
(2) The study reports the first-stage regression F-statistic.
Synthetic control
(1) The study reports that the synthetic control matches the characteristics of the treated units before the treatment.
(2) The study reports robustness to the weights used to measure the difference between the characteristics of the treated unit and the synthetic control.
(2) Reporting on spillover of intervention to comparison groups
(1) The study reports whether there are likely to be spillovers to the comparison group.
(2) If the study reports that there are likely to be spillovers to the comparison group, then it accounts for these in the analysis.
(3) Reporting of standard errors (SE)
(1) If the study reports that observations can be assumed to be independent, this is taken into account by computing heteroskedasticity-robust SEs. In all other cases, clustered SEs have been reported.
(4) Reporting on Hawthorne effects and collection of retrospective data
(1) If the data are collected in the context of the study, the study reports that the monitoring process did not affect behavior (Hawthorne effects).
(2) If baseline information was collected retrospectively, the study reports this.

T A B L E D1 Assessment of reporting biases
TABLE D1 Assessment of reporting biases
(1) Reporting on key aspects of selection bias and confounding
(2) Spillovers