How do we move from “proof of concept” to implementable programs in policy R&D? At the recent, excellent IPA
Impact and Policy Conference, I presented some research I’ve done on micro and small enterprise activity in Indonesia, highlighting the remarkable lack of “upward” movement over time from the microenterprise sector to the SME sector. At the end of the session I fielded a question along the lines of, “So, what can we do to develop the SME sector in Indonesia?”
Until a few years ago, evidence from rigorous evaluations of policies for SMEs was pretty scant. This has changed quickly in recent years, increasingly due to the efforts of the IPA SME Initiative. Yet the question remains difficult to answer, because many of the existing studies are what we might call “proof of concept”: studies focused on showing that a particular intervention can work under close-to-ideal (research) conditions, analogous to the basic science in the R&D innovation process. Policymakers, however, want to know how much they should actually take from proof of concept studies for their current programming efforts.
In his blog summary of conference highlights, David McKenzie discussed a number of general issues involved in drawing policy implications from
proof of concept studies. He focused in particular on the fact that proof of concept studies often provide their intervention for free, which can lead a different set of participants to take up the intervention in the research study than would do so in a “scaled-up” version (and could thus lead us to over- or underestimate the impact of the intervention under real-world conditions). His discussion motivates a set of diagnostic questions that policymakers can use to learn from proof of concept studies, e.g.: In what direction might participation be distorted from the real-world pattern? If (credit/information) market failures matter, what kind of intervention could help overcome them?
In this blog post I focus on how work in one particular area, programs to augment the “managerial capital” of microenterprises and SMEs, is evolving beyond the proof of concept stage, and how researchers might work with policy institutions to speed up the process.
A study that got a lot of interest at the IPA conference was presented by Greg Fischer. His study, with Alejandro Drexler and Antoinette Schoar, ran a “horse race” between two methods of training poor microentrepreneurs in the Dominican Republic: (1) a standard business training program, including the basics of double-entry accounting, and (2) a “rules of thumb” training focused on simple heuristics, with the most prominent being the directive to separate business from personal finances. They find that (1) has little impact, while (2) leads to a 30% increase in profits during bad weeks, apparently because the microentrepreneurs are more aware of their business performance. The course was very heavily subsidized, with many participants receiving it for free.
Greg talked about how this promising result opens a further research agenda on optimizing “rules of thumb” training, both to discover the highest-impact rules and to find the most cost-effective way to deliver them.
Beyond formal business training, the past couple of years have seen an exciting explosion of evidence on the potential positive impacts of context-relevant technical support and consulting for enterprises across the size spectrum. This includes a 17% increase in productivity for very large woven-cotton textile firms in India from international consulting services (Bloom, Eifert, Mahajan, McKenzie, and Roberts, forthcoming), a 120% increase in one measure of profits for small and medium-sized firms in Mexico from local consultants (Bruhn, Karlan, and Schoar, 2012), and a 20% increase in revenue for Peruvian female microenterprise owners, with the biggest impacts going to the smallest, poorest firms (Valdivia, 2012). Most of these services were heavily subsidized under the research programs, but the returns achieved would have covered the cost of the services had the firms paid for them.
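To make “would have covered the cost” concrete, here is a back-of-the-envelope payback calculation; the numbers are purely hypothetical, not drawn from any of the studies above. Suppose a consulting engagement costs a firm C = $10,000 and raises annual profits by Δπ = $12,000 (roughly what a 120% gain would mean for a firm earning $10,000 a year). Then

\[
\text{payback period} = \frac{C}{\Delta\pi} = \frac{10{,}000}{12{,}000}\ \text{years} \approx 10\ \text{months},
\]

so a firm facing the full, unsubsidized fee would recoup it within the first year, as long as the gains persist that long.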
The interesting puzzle that emerges: if the returns are so great, why don’t more businesses avail themselves of the assistance available? Beyond the traditional economics explanation of distortions such as lack of information or access to credit, another key issue in this context is how long the impacts of these interventions persist. Are these interventions building up long-run managerial capital within the firms, or are they providing shots of technical support that are needed on an ongoing basis, because the benefit depreciates quickly due to firm- or market-level changes (and which is realistically best provided by firms that can specialize in accumulating this kind of knowledge)?
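Persistence matters because it determines how much an intervention is worth to the firm. As a standard present-value sketch (again illustrative, not taken from these papers): if the intervention raises the profit flow by Δπ, the benefit depreciates at rate δ, and the firm discounts at rate r, then the value of the intervention is

\[
V = \int_0^\infty \Delta\pi\, e^{-(r+\delta)t}\, dt = \frac{\Delta\pi}{r+\delta}.
\]

A fee worth paying for a long-lived benefit (small δ, building managerial capital) may not be worth paying for one that depreciates quickly (large δ), which is why the second case points toward ongoing, specialized providers rather than one-off programs.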
Fortunately for policymakers, the next wave of research in this area (much of it still in the design or field stage) looks to have a particularly strong focus on providing such interventions at lower cost. This includes seeing whether recent local business school graduates can cost-effectively fill the consultant role, trying to standardize technical and managerial assistance interventions and deliver them through new media and communication channels or to groups of firms, and trying to pin down which specific aspects of the programs work and why (so new programs can focus on the highest-impact areas).
As this research moves from proof of concept to cost-effective scale-up, the space for fruitful collaboration and experimentation between researchers and policy institutions only expands. With proof of concept in place, a policy institution can justify running a sizable version of the program, with less concern about failure from trying something unproven. Researchers can learn from observing new programs, or work directly with the policy institutions on evaluating the rollout, especially on issues like trying different pricing across locations, as suggested by McKenzie. In his closing remarks at the conference, Dean Karlan emphasized the importance of replicating programs across a number of settings, an important part of the IPA agenda.
A final issue is institutional design: where should this kind of replication and calibration work take place? Individual researchers are often well placed to do the “basic science,” but sometimes lack the scale and the incentives to carry out large, multi-country replications, especially to “fine tune” policy parameters. While institutions such as IPA, and impact evaluation units within policy institutions, are starting to close this gap from both ends, there is still plenty of room to develop the institutions that take proof of concept to market in policy R&D.
#impactpolicyconf
References
Bloom, Nick, Benn Eifert, Aprajit Mahajan, David McKenzie, and John Roberts. Forthcoming. “Does Management Matter? Evidence from India.” Quarterly Journal of Economics.
Bruhn, Miriam, Dean Karlan, and Antoinette Schoar. 2012. “The Impact of Consulting Services on Small and Medium Enterprises: Evidence from a Randomized Trial in Mexico.” Yale Economics Department Working Paper No. 100.
Drexler, Alejandro, Greg Fischer, and Antoinette Schoar. 2012. “Keeping it Simple: Financial Literacy and Rules of Thumb.” http://personal.lse.ac.uk/fischerg/Assets/KIS-DFS-May2012.pdf.
Valdivia, Martín. 2012. “Training or Technical Assistance for Female Entrepreneurship? Evidence from a Field Experiment in Peru.” Grupo de Análisis para el Desarrollo (GRADE).