Can Economic Experiments Contribute to a More Effective CAP?



How is the CAP evaluated?
The EU budget allocates more than €50 billion per year to support the agricultural sector under the Common Agricultural Policy (CAP). Evaluating the effectiveness and efficiency of the policy means assessing its results against the objectives set, the funds spent, and the policy instruments used. To do so, a Common Monitoring and Evaluation Framework has been developed, and has been applied to both pillars since 2014. It includes a set of rules, procedures and indicators to assess CAP performance. But this framework, and more generally the CAP evaluation toolbox, needs to evolve together with CAP objectives and instruments. The switch to decoupled payments requires a change of the unit of evaluation from market to farm, and farmers' behaviour needs to be better understood. Evaluation tools must capture the farm-specific implementation of policies and how different measures influence an individual farmer's strategy regarding farm structure and farming practices. In addition, there is a need to understand the acceptability of and compliance with the increasing number of regulatory constraints. The need to account for psychological drivers is also key for CAP measures based on farmers' voluntary enrolment. Last but not least, the New Delivery Model grants more discretion to Member States to implement both first- and second-pillar payments. This requires more flexible evaluation tools in order to assess local variations of CAP implementation.
Good practices in policy evaluation include making use of the complementarities between different evaluation tools and triangulating the results drawn from different methods. To predict the likely impact of a policy change on farms and markets (ex-ante evaluation), simulation models can be used. For example, the Agricultural Policy Simulator (AgriPoliS) can simulate the evolution of agricultural structures (e.g. farm sizes, production system) under different price or policy scenarios. Such computer models must rely on assumptions regarding the behaviour of simulated decision makers, usually assuming that farm agents aim at maximising profit (Appel et al., 2018). On the other hand, to analyse the impact of a measure after its implementation (ex-post evaluation), evaluators most often rely on case studies and in-depth interviews with stakeholders, or statistical analysis of farm-level data, including sophisticated techniques to evaluate the true net impact of a measure. We argue in this article that economic experiments, presented below, should be a welcome addition to the CAP evaluation toolbox.
What have experiments to offer for the CAP evaluation?
Experiments are situations built by the experimenter that allow the study of decisions in controlled and reproducible environments. Participants in such experiments (such as farmers, when CAP measures are evaluated) are allocated to different 'treatments', as in medical trials where patients randomly receive either the medicine or a placebo. By comparing decisions in these different treatments (for example with and without a CAP measure, or alternative designs of a measure), one can assess the effect of the policy and the relative performance of alternative designs.
The economic experiment spectrum is broad (Thoyer and Préget, 2019). Colen et al. (2016) show that while traditional non-experimental approaches perform well in estimating the economic, environmental and social effects of large reforms at the regional or market scale, experiments can complement and enhance these methods in the following ways.
First, experiments are particularly useful for examining the expected effects of new policy proposals or alternative policy designs before implementation (ex-ante evaluation). Prior experimentation can help identify improvements in policy design to maximise effectiveness and to avoid unintended outcomes.
Experiments can provide answers in a short time and at a much lower cost than trial and error in the 'real world'. The time needed to set up an experiment depends on its type: laboratory, field, or choice experiments with farmers and other stakeholders are relatively quick to set up, while randomised controlled trials (RCTs) require more time and interaction with various partners.
Second, experiments offer the possibility to isolate the effect of a policy from other factors. By using the control group principle and assigning participants randomly to groups, experiments allow for better causal inference than observational data studies. Responses to a set of alternative policy designs can be tested, and differences can be attributed to specific elements of a policy. Third, combined with other methods, experiments provide insights into the complex puzzle of farmers' decision-making, which can help refine predictions based on economic models or interpret results from observational studies. This is particularly relevant when social, psychological and other behavioural factors are expected to be important drivers of the decision-making process, deviating from the common assumption of profit-maximisation. Dessart et al. (2020) discuss the growing evidence that understanding and accounting for these factors are key to a transition to sustainable farming systems.
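The causal logic of random assignment can be illustrated with a purely hypothetical simulation. All numbers below are made up for illustration (a baseline enrolment of 10 ha per farmer and an assumed treatment effect of 2 ha); the sketch only shows why, under randomisation, a simple difference in group means estimates the policy's causal effect.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 200 farmers, each with a baseline outcome
# (e.g. hectares enrolled in a scheme) drawn from a made-up distribution.
farmers = [random.gauss(10, 3) for _ in range(200)]

# Random assignment: shuffle, then split into treatment and control.
random.shuffle(farmers)
treatment, control = farmers[:100], farmers[100:]

# Assumed (not estimated) causal effect of the measure: +2 ha on average.
TRUE_EFFECT = 2.0
treatment = [x + TRUE_EFFECT for x in treatment]

# Because assignment was random, the two groups are comparable on average,
# so the difference in means is an unbiased estimate of the causal effect.
estimated_effect = statistics.mean(treatment) - statistics.mean(control)
print(round(estimated_effect, 2))
```

With 100 farmers per group, the estimate falls close to the assumed effect of 2; in observational data, by contrast, farmers who self-select into a measure may differ systematically from those who do not, and the same difference in means would confound selection with the policy effect.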
The potential of experimental approaches for agricultural policy making
The arguments above are not merely theoretical. Practitioners have walked the talk, and we present two examples that illustrate the potential contribution of experimental approaches to the evaluation of agricultural policy measures.

Testing a collective bonus in agri-environmental schemes.
The efficiency of agri-environmental schemes (AES) is often questioned, as voluntary participation by farmers has, in many instances, led to low enrolment rates and disappointing environmental improvements. This is the case, for example, with AES that target a reduction in the use of fertiliser. The concentration of phosphorus and nitrogen in surface water must fall below a given threshold to significantly reduce eutrophication risk. When too few contracts are signed, enrolled farmers are paid for their fertiliser reduction efforts even though the environmental benefit is not obtained, because the collective effort is insufficient.
One potential solution is to propose a conditional subsidy system in which farmers receive financial support only if a collective threshold of participation is attained. This policy option would avoid wasting public money when there is no environmental benefit, but it may discourage participation since farmers are not guaranteed to receive the subsidy.
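The incentive rule being compared can be written down in a few lines. This is a stylised sketch of our own making (function name, parameters and numbers are illustrative, not taken from the cited studies): each contributor is paid the subsidy, except that under the conditional scheme nobody is paid unless total contributions reach the collective threshold.

```python
def payoff(contributions, subsidy, threshold, conditional):
    """Per-farmer subsidy payments for one round of the scheme.

    Unconditional scheme: every contributor receives the subsidy.
    Conditional scheme: contributors are paid only if the sum of all
    contributions reaches the collective threshold.
    """
    total = sum(contributions)
    pay = subsidy if (not conditional or total >= threshold) else 0.0
    # Non-contributors receive nothing under either scheme.
    return [pay if c > 0 else 0.0 for c in contributions]

# Illustrative numbers: subsidy of 5, collective threshold of 10 units.
print(payoff([1, 0, 2], 5.0, 10.0, conditional=False))  # contributors paid
print(payoff([1, 0, 2], 5.0, 10.0, conditional=True))   # threshold missed
print(payoff([4, 6, 2], 5.0, 10.0, conditional=True))   # threshold reached
```

The trade-off described in the text is visible in the second call: conditionality avoids paying for efforts that yield no environmental benefit, but it exposes contributors to the risk of receiving nothing, which may discourage participation.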
To test how conditionality of a subsidy affects participation decisions, a first (rapid and cheap) step was to conduct a laboratory experiment using university students (Le Coent et al., 2014). Students had to make decisions on their contribution to a public good (the reduction of fertiliser use) under two incentive systems: half of them were offered an unconditional subsidy paid to all contributors; the other half of the participants were offered a conditional subsidy paid only if the sum of all individual contributions reached a threshold. Results showed that participation rates were the same, on average, and that the conditional subsidy system did not discourage students, at least in the lab context, from contributing to the production of a public good. Of course, students are not farmers, and a choice made in the comfort of a lab may be different from a decision made in real life. Therefore, this first encouraging result had to be tested with farmers, who may behave differently, even if the underlying decision mechanisms are similar. But, following discussions with policy makers and farm unions, it appeared that conditional payments would be politically difficult to defend. Instead, a variant policy option was tested: achieving a given threshold in the collective enrolment rate of farmers in a given target area would lead to a bonus payment paid to each farmer, on top of his or her agri-environmental contract payment.
This policy option was tested with winegrowers in France for herbicide reduction contracts (Kuhfuss et al., 2016). Farmers responded to a survey in which they had to choose their preferred alternatives among several hypothetical contracts: some of the contracts included the conditional bonus, others did not (see Figure 1). The experiment concluded that the bonus could enhance the scheme's efficiency by increasing the total area under contract. It boosted the enrolment of respondents (in the experiment), even when contract payments were significantly lower, indicating farmers' preference for contracts that include the collective bonus.
(Figure 1 source: Sophie Thoyer)

Challenges ahead for further contributions to policy evaluation
Although experiments are increasingly applied to a number of agricultural and environmental policy questions, their integration with policy cycles and execution at larger scales remain a major challenge.

Integration in the policy process.
Experimental approaches need to find their place within the policy evaluation cycle. Results from field and choice experiments are useful to build expectations on farmers' responses to changes in programme modalities (mandatory/voluntary) or new incentive mechanisms (results-based/group-based). They can also help to predict differences in uptake with different requirements or monetary reward levels. Such experiments can deliver results in a relatively short time, which makes them suitable for ex-ante impact assessment. Their design relies on a close collaboration between researchers and policy makers to shape policy instruments.
RCTs may also be useful in ex-ante policy assessment during the pilot phases of a policy: the new intervention is implemented at small scale and the beneficiaries selected randomly. This allows for evaluation of the policy's causal impact before deciding whether or not to scale up the intervention. But in the CAP reform cycle, small-scale pilots are not foreseen: once new policy instruments are approved, they are implemented simultaneously across the EU. An exception is the use Ireland has made of the European Innovation Partnership for Agriculture Productivity and Sustainability (EIP-AGRI) to test the potential of results-based agri-environmental schemes (http://burrenprogramme.com/eip-agri-irelands-operational-groups-2019). A specific protocol on pilot projects, and on politically and ethically acceptable randomisation strategies in programme assignment, would be needed for randomised evaluations to be implemented more widely.

Methodological challenges.
Early experiments used student subjects rather than farmers or landowners. Laboratory experiments can measure more abstract behavioural responses, they allow more experimental control, and they can also be more easily replicated, thereby significantly enhancing scientific credibility. Because of their lower cost and greater control, laboratory experiments with student subjects are useful to refine experimental designs before they are taken into the field. In contrast, results from experiments with real decision-makers may not necessarily apply in other locations and timeframes. However, such experiments are generally preferred, and particularly informative, where the research questions concern specific policies (e.g. programme evaluation), or where researchers are interested in measuring specific characteristics (e.g. risk preferences) of a particular population (Cason and Wu, 2019).
The recruitment of farmers for participation in field experiments remains a major challenge. Unlike short surveys, experiments often require a good understanding of the instructions, which can be time-consuming for respondents, leading to farmers dropping out of the experiment. In such cases, the experimenter may end up with a biased sample, not representative of the overall population. The risk may be even greater when conducting experiments via the internet, instead of in person, potentially excluding farmers who lack reliable internet connections, who are not interested in the subject of the experiment, or who are insufficiently digitally literate.
One can also question how far the choices made by farmers in an experiment, with hypothetical choices and low financial stakes, correspond to real-world decision making. Farmers may perceive the decision task as artificial and may not react in the experiment as they would on their farm. Moreover, their main motivation to take part in an experiment could be to influence the design of CAP payments in their favour. This strategic bias may exist, although it can be minimised through methodological precautions.
Another challenge is the administration of payments contingent on performance, which often involves handling substantial cash amounts or vouchers, posing administrative, income tax, data protection, or other legal challenges. These challenges differ across Member States, further complicating the aggregation of evidence across countries.

Ethical challenges.
Ethical challenges in experiments often arise from the researcher's need to randomise benefits or to apply treatments without the informed consent of participants. If a treated group receives benefits that a control group does not receive, this may be perceived as unfair by programme managers, policy makers, and farmers. Opaque manipulations of behaviour without the explicit consent of participants may raise questions about the researchers' legitimacy. We therefore highlight the need for researchers to: (i) engage in deliberative processes when designing experiments, aligning with open science principles; (ii) design experiments and implement practices that address these ethical challenges (e.g. delaying rather than withholding benefits to the control group, obtaining informed consent from participants before the experiment and organising a debriefing afterwards); (iii) investigate further the overall acceptance of experimental approaches among farmers and other stakeholders (Morawetz and Tribl, 2020).