The paper has been prepared for the symposium on the occasion of the 20th anniversary of the studies of international relations at the University of Dresden. It has benefited greatly from comments and inputs by Edgar Vogel and Regina Riphahn. All errors and inconsistencies are my own.
1 The Role of Evidence-Based Policy
Policy decisions in today’s world must be taken in a field of tension between good evidence and public distrust of expert opinion.
On the one hand, the conditions for evidence-based policy have seen excellent progress. Availability of and access to high-quality data have improved, academic research provides a range of methodologies to conduct causal impact assessments, academic curricula incorporate these methods, and administrations are staffed with highly qualified personnel. Overall, trust in scientists appears to be strong. Surveys for the US show that trust in scientists has been at a fairly high and stable level since the 1970s1. Surveys for the US and the UK reveal areas where the public trusts scientific experts more than the government2. Most importantly, trust is not only inspired by the perceived quality of policy outcomes: rather, citizens want a transparent and objective decision-making process.3
On the other hand, trust in governments and political institutions in particular seems to have eroded since the Global Financial Crisis. Many public policy debates are characterized by skepticism vis-à-vis experts and expert knowledge.4 Populism is on the rise in many countries. 83% of the respondents to a Eurobarometer poll see “fake news” and online disinformation as a threat to democracy.5
Against this background, evidence-based policy is an undogmatic concept. Rather than taking a normative approach to policy, it adopts a positive perspective: Defining policy objectives remains within the realm of the democratic policy process. But, given the policy objectives, better evidence can contribute to selecting the right policy instruments, assessing the impact of these instruments, and potentially, revising them.
The potential benefits of applying good analysis to problems facing today’s societies are in fact large. Historically, the interaction between scientific knowledge, policy-making, and entrepreneurial activity catalysed by the Enlightenment has brought significant improvements in human well-being.6 Notwithstanding the huge challenges facing us globally in terms of inequalities, environmental risks, climate change, and global tensions, societal conditions have improved across many dimensions. Improving information on which to base policy decisions thus promises large potential gains.
However, putting evidence-based policy into practice is a challenging task and requires efforts and contributions from several stakeholders. The remainder of this text presents examples of evidence-based policy, discusses the role of international institutions in facilitating evidence-based policy, and presents thoughts on how setting up the right infrastructures can reduce the costs of evaluations and improve their quality.
2 Evidence-based policy can “work”
One way to promote an evidence-based agenda is to show that evidence-based policy in fact works and that its challenges are common to many fields. Here are three examples from financial regulation and supervision.
a) Evaluation of financial sector reforms
The global financial crisis has exposed the fault lines in the regulatory system for financial markets. In 2011, G20 members thus agreed on reforms addressing the “systemic and moral hazard risks associated with systemically important financial institutions”.7 Policies have been implemented in order to improve loss absorbency and resilience, recovery and resolution, and supervision. These reforms target more narrowly defined objectives – by focusing on the responses of individual banks – and more broadly defined objectives in terms of aggregate outcomes such as the functioning of financial markets.
Many of these reforms have now been implemented, and a first evaluation of their effects has become feasible. Such evaluations have to take a structured approach in order to balance an assessment of costs and benefits, and to take a societal rather than private perspective. What appears to be a cost of financial regulation to individual market participants may well be of benefit to society overall by making the system more efficient and resilient.
As an instrument to operationalize such complex evaluation projects, the G20 leaders endorsed a framework for the post-implementation evaluation of G20 financial regulatory reforms in 2017.8 The framework provides an umbrella for the Financial Stability Board’s (FSB) policy evaluation projects and serves as an orientation for the practical conduct of post-implementation evaluations. It rests on two pillars. First, the framework contains provisions regarding the decision-making process and the interaction of relevant stakeholders such as regulators or market participants. This is an important element of transparency and accountability of the FSB. Second, the framework provides guidance on analytical aspects of evaluations and engenders a common understanding of what constitutes “good” evaluations and robust evidence.
Regarding analytical guidance, the framework stipulates that each FSB evaluation should answer three key questions. First, did the reforms “cause” an outcome (“Attribution”)? Second, did the reform have similar effects across markets, states of the world, or jurisdictions and regions (“Heterogeneity”)? Third, did the reform achieve its overall objective (“General equilibrium”)? In the spirit of the undogmatic concept outlined above, the answers to these questions should provide detailed information to policymakers on the impact of their policies on observed economic outcomes. They should enable informed policy discussions without pre-empting decisions regarding adjustments of policies.
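The “attribution” question is, in practice, typically approached with quasi-experimental methods such as difference-in-differences, which compare the change in outcomes for institutions subject to a reform with the change for comparable institutions that were not. The following sketch is purely illustrative, using synthetic data and an assumed reform effect; it does not reproduce any FSB evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: outcomes for banks subject to a reform ("treated")
# and for comparable control banks, before and after implementation.
# All numbers below are invented for illustration.
n = 200
treated = rng.integers(0, 2, n)              # 1 = bank subject to the reform
pre = 5.0 + 0.5 * treated + rng.normal(0, 1, n)
effect = -0.8                                # assumed true reform effect
post = pre + 0.3 + effect * treated + rng.normal(0, 1, n)

# Difference-in-differences: change for treated minus change for controls.
# Under the parallel-trends assumption, common shocks are differenced out
# and the remainder can be attributed to the reform.
did = (post[treated == 1] - pre[treated == 1]).mean() \
    - (post[treated == 0] - pre[treated == 0]).mean()

print(round(did, 2))  # close to the assumed effect of -0.8, up to noise
```

The same logic underlies more elaborate designs; the key analytical step is always the construction of a credible counterfactual against which the observed outcome can be compared.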
The first two evaluations under the framework were delivered to the G20-Summit in November 2018. The first project investigated to what extent post-crisis reforms incentivized central clearing of OTC-derivatives. The second project evaluated the effect of reforms on the financing of infrastructure.9 More projects are underway. An evaluation of reform effects on SME financing will be delivered to the G20-Summit this year with a consultation report published soon – open to comments from all stakeholders. Another project considering the FSB’s reform agenda on ending Too-Big-To-Fail (TBTF) has just started. This evaluation project will assess whether reforms are effective in reducing the systemic and moral hazard risks associated with systemically important banks. It will also examine the broader effects of the TBTF-reforms on the overall functioning of the financial system. The final report is scheduled for the G20-Presidency in 2020.10
b) Macroprudential policy
Macroprudential policy11 is a relatively new policy field. Its goal is to preserve financial stability and to prevent the build-up of systemic risk that may have adverse effects for the functioning of the financial system and for the real economy. New institutions have been tasked with the implementation of macroprudential policies, and new policy instruments have been introduced. Nonetheless, uncertainty about the state of the financial system and the effects and effectiveness of these policy instruments is high. This uncertainty entails two risks: the risk of acting too late (inaction bias) and the risk of choosing an inappropriate instrument or inadequate calibration.
Both risks can be mitigated if macroprudential policy is embedded in a structured policy process. Such a policy process involves four steps. In a first step, the policy objective(s) of macroprudential policy need to be specified. Macroprudential authorities use different definitions of the policy objective, but all aim at reducing systemic risk arising from externalities for the functioning of the financial system. In a second step, intermediate objectives need to be specified, and appropriate indicators need to be chosen. Intermediate objectives are linked to the drivers of systemic risk such as leverage, risk-taking incentives, connectedness, or exposure to common shocks. In a third step, the activation or recalibration of policy instruments that address systemic risk externalities needs to be considered. The decision on whether and how to activate policy measures should be based on a structured process of ex-ante policy evaluation. Such an ex-ante evaluation provides information about the relative performance of different instruments in contributing to reducing systemic risk. In a fourth step, and once sufficient time has elapsed, the effects of the instruments need to be assessed in an ex-post evaluation. This step provides information about the effectiveness of the measure(s) taken, about intended or unintended side effects, and it also serves as an input into a possible recalibration of the policy instruments. In a nutshell, this is what the FSB evaluations mentioned above are about.
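The four steps can be summarized in a simple schematic. The function and data structures below are my own shorthand for the cycle described above, not an official framework; the instrument names and scores are hypothetical.

```python
# Illustrative sketch of the four-step macroprudential policy cycle.
# Names, structures, and numbers are invented for illustration only.

def policy_cycle(objective, indicators, instruments, observed_outcomes=None):
    """Walk through the four steps and return a log of decisions."""
    log = []
    # Step 1: specify the policy objective (reducing systemic risk).
    log.append(("objective", objective))
    # Step 2: choose intermediate objectives and indicators
    # (leverage, risk-taking incentives, connectedness, common exposures).
    log.append(("indicators", sorted(indicators)))
    # Step 3: ex-ante evaluation - rank candidate instruments by their
    # expected contribution to reducing systemic risk, activate the best.
    ranked = sorted(instruments, key=instruments.get, reverse=True)
    log.append(("activate", ranked[0]))
    # Step 4: ex-post evaluation once outcomes are observable; the result
    # feeds back into a possible recalibration of the instrument.
    if observed_outcomes is not None:
        log.append(("ex_post", observed_outcomes))
    return log

# Hypothetical ex-ante scores for the expected risk reduction per instrument.
decisions = policy_cycle(
    objective="reduce systemic risk from residential real estate lending",
    indicators=["leverage", "connectedness"],
    instruments={"LTV cap": 0.7, "amortization requirement": 0.5},
)
print(decisions[2])  # → ('activate', 'LTV cap')
```

The point of the schematic is the feedback loop: step 4 feeds into a renewed pass through step 3, which is what distinguishes a structured policy cycle from a one-off decision.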
The macroprudential regulation of residential real estate markets in Germany is a good example of how the macroprudential policy cycle can be put into practice.12 Currently, macroprudential instruments for the real estate market in Germany are not activated. Before activation of an instrument, authorities are required to perform an in-depth ex-ante evaluation. The analysis should quantify the expected impact, for instance on the resilience of the financial system or on credit markets. The output of such an analysis should inform the calibration process and provide an assessment of the expected costs and benefits. After activation of the instrument, a compulsory ex-post evaluation should provide not only information on the instrument’s effectiveness but also on potential unintended consequences. The results from this step can then feed into a potential re-calibration of the instrument.
c) Learning from other policy areas
With initiatives such as the one by the FSB, evidence-based policy has become part of the policy process in the area of international financial regulation. There are other policy areas and international initiatives which can inform this process – with regard to potential avenues to success and pitfalls to be avoided. In order to promote learning from other policy areas and to facilitate a structured dialogue between policymakers and academia, the German National Academy of Science Leopoldina, in cooperation with the Deutsche Bundesbank, organized a workshop in 2018.13 Here are examples of issues that were addressed in this workshop.
In Germany, the Federal Ministry of Finance has been conducting so-called “spending reviews” since 2018, analysing revenues and spending decisions and scrutinizing their effectiveness and efficiency.14 The German National Regulatory Control Council has developed a concept for structuring ex-post impact assessments applying to all funds spent by the federal government and, more recently, argued in favor of introducing quality standards for such evaluations.15
The area in which policy evaluation in Germany has advanced furthest is labor market policy. In the early 2000s, the German government embarked on comprehensive labor market reforms aimed at reducing unemployment, improving access to work, making new job relationships more sustainable, and enhancing the efficiency of the employment agencies. Public interest in a deeper investigation of the effects of these reforms was sparked by persistently high unemployment rates and the perceived ineffectiveness of policy. There were a number of critical factors that affected evaluations of these policies. First, the evaluation was designed with a view to providing a systematic and broad overview. Second, academic research had a range of appropriate tools available, making it possible to provide robust results within a reasonably short time-span. Third, a mandatory reform evaluation was enshrined in law. Finally, a common concept developed by participating research institutions provided guidance on practical and analytical aspects of the evaluation.
Another example of evidence-based policy is the “What Works” Network, an initiative launched by the British government in 2013.16 The network consists of independent What Works Centres and affiliate members. The What Works Centre for local economic growth, for instance, analyses which policies are most effective in supporting local economic growth. In practical terms, it reviews and summarizes existing evidence in meta studies, builds databases on relevant work, and provides guidance to practitioners on how to meaningfully approach evaluations, for example by providing technical training, supporting the piloting and testing of local economic schemes, or developing case studies which portray particularly useful evaluation techniques.
3 The role of international organizations
Evidence-based policy promises huge gains – better policies at potentially lower costs. But evidence-based policy is not a holy grail. It does not overturn the political dynamics that shape policy discussions. However, it does promise a better, more structured, and more informed policy debate.
At the national level, electoral cycles have a tendency to work against a structured evaluation of policies. Good institutional design at the national level can thus serve to sustain policies and to ensure bi-partisan support for evaluations. The advantage of entrusting independent institutions with evaluations is that they can operate outside the perimeter of day-to-day pressure from political debates. At the same time, the further away responsibility for policy evaluation is placed from the political discourse, the more political accountability becomes an issue. One mechanism that can be used to overcome this concern is enhanced transparency: public consultations, transparency with regard to evaluation methods, replicability of studies, and clear responsibilities are mechanisms to ensure transparency and thus credibility.
International organizations have an important role to play as well. They can enhance transparency about evaluations and institutional designs. Rather than getting involved in detailed discussions about national policy design – where national constituencies may question their democratic legitimacy – international organizations can help set up evaluation frameworks and policy structures. This can be part of their surveillance work. For instance, the European Commission’s “Better Regulation Guidelines”17 provide guidance on how the impact of EU regulations should be evaluated. The OECD has published a framework for regulatory policy evaluation.18
Obviously, objective cross-country evaluations are challenging, if not infeasible. Can we compare the effects of policies that have been implemented with very different objectives and under very different institutional settings? Despite these hurdles, benchmarking and improving information on evaluations is a low-hanging fruit. International organizations provide a range of indicators. However, information on the share of the underlying policy programs that are subject to evaluation is not easily available. A survey run by the OECD shows that members have progressed in making regulatory policy more fact-based. Ex-post evaluation has, however, not always become second nature to institutions involved in the design of regulatory policies.19
An additional important task for international organizations can be the establishment and maintenance of repositories of evaluation studies. Repositories provide a compact overview of relevant evaluation work in a certain field. Tailored keywords and meta-information allow narrowing down the set of studies to be considered. In this way, repositories provide relevant information for academics, policymakers, the public, journalists, and industry. This facilitates more efficient evaluation work in the official sector (including international organizations), for academic researchers, and the industry.
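The narrowing-down that keywords and meta-information enable can be illustrated with a minimal filter over structured study records. The entries, field names, and titles below are invented for illustration; an actual repository would carry far richer metadata.

```python
# Minimal sketch of keyword/meta-information filtering over a repository
# of evaluation studies. All entries are hypothetical examples.
studies = [
    {"title": "Central clearing incentives", "field": "financial regulation",
     "method": "difference-in-differences", "year": 2018},
    {"title": "Training program impacts", "field": "labor markets",
     "method": "randomized trial", "year": 2005},
    {"title": "Capital requirements and lending", "field": "financial regulation",
     "method": "panel regression", "year": 2016},
]

def search(repo, **criteria):
    """Return studies whose metadata match all given key/value criteria."""
    return [s for s in repo if all(s.get(k) == v for k, v in criteria.items())]

hits = search(studies, field="financial regulation")
print([s["title"] for s in hits])
# → ['Central clearing incentives', 'Capital requirements and lending']
```

Even this trivial mechanism shows the value proposition: consistent metadata turns a pile of studies into a searchable evidence base.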
Repositories are well-established in some fields such as medicine, where the Cochrane Library or the McMaster Health Forum20 provide structured evidence on medical research, also by compiling meta-studies. Similarly, in the field of development economics, the International Initiative for Impact Evaluation (3ie) collects information on the effects of policies and programs in development economics. The J-PAL initiative does the same in the case of poverty-reduction projects. The OECD’s International Network on Financial Education (INFE)21 is a platform that facilitates the exchange of experiences on policies promoting financial education. It does so by collecting data and developing comparative reports. Finally, the Bank for International Settlements (BIS) recently launched a “Financial Regulation Assessment: Meta Exercise (FRAME)”, which enhances the transparency of existing studies and improves efficiency in terms of collecting relevant information.22
4 Infrastructures supporting good evaluations
Many projects and initiatives show that policymaking can benefit from sound evaluations and from making the best use of available knowledge. Evidence-based policy can help improve and continue successful policies, while finding negative or unintended consequences can point to areas for improvement. But evidence-based policy comes at a price. Rigorous evaluations take time, and they require input of human resources and data. Not least, evaluations require good communication on how to interpret findings and how to put them into perspective.
These costs of evaluations can be reduced by setting up the right infrastructures:
Legal mandates and administrative procedures: Policy evaluations will naturally compete with other administrative tasks for time and resources. Moreover, unless clear procedures for evaluations have been established, political considerations may dominate the administrative agenda. Against this background, establishing and strengthening legal mandates for mandatory evaluations can overcome political biases and the tendency toward inaction. Clear legal mandates can help to identify and agree on policy objectives, indicators, and benchmarks upfront. And they help to embed procedures for policy evaluations into mandates and day-to-day work of administrations, thus helping to avoid conflicts with other tasks.
Repositories and information platforms: Generating, using, and adapting evaluation projects will happen more easily if relevant information is readily available. One way to facilitate the flow and exchange of information is repositories of evaluation studies. In addition, an important prerequisite for improving the dialogue between researchers and policymakers is that each side should learn to “speak the language” of the other side, at least to some extent. This requires clear communication of research results and methods, and the translation of high-level policy objectives into measurable indicators.
Data infrastructures: Evaluations require good data, but data infrastructures can be costly. The costs of acquiring data can be minimized if data are collected early on and, ideally, if data needs are considered when drafting new legislation. Notwithstanding the key importance of ensuring strict data confidentiality, broad use of the available data will help to improve the quality of evaluations.
Managing incentives: Ultimately, policy evaluations will be used routinely only if the right incentives for conducting them are in place. Administrations can be incentivized to conduct evaluations through institutional structures which are conducive to evaluations and legal mandates to evaluate policies. The incentives of researchers will depend on the research community’s criteria for assessing the quality of academic work. Giving researchers access to data can be an incentive to cooperate. But the academic community can do more: rewarding replication studies can provide additional incentives to cooperate.
Bringing more and better evidence into political decisions is beneficial to society. To make progress, we need commitment from and an open and continuous dialogue with all stakeholders. The more we can build on available infrastructures, the lower the costs of evaluations will be, and the better we can learn from past experience. Good policy design and good institutions may, ultimately, increase transparency and accountability, thus countering skepticism about “expert” projects.