MCC Impact Evaluation Conference

Last Friday I attended the inaugural Impact Evaluation Conference at the Millennium Challenge Corporation. It was great to hear an exciting mix of new research from leading academics, along with ideas from practitioners on the challenges of incorporating evaluation into public policy.

MCC Chief Economist Franck Wiebe opened with a robust defense of rigorous evaluation, noting that more than half of the projects in the MCC portfolio are subject to a rigorous quantitative evaluation (many, but not all, of which are randomized). He also noted that common criticisms of randomized evaluations and the "randomistas" would perhaps be more pertinent if the aid industry were in any urgent danger of being overrun with high-quality evaluations.

Three IPA Research Affiliates presented their research, all focusing on the question of technology adoption. As Mushfiq Mobarak remarked (via video-link from New Haven!), we already have many of the technical solutions to development problems – bednets, condoms, hand-washing. The challenge is understanding why these technologies are not more widely adopted.

 

How do social networks affect development?

Not Facebook, but old-fashioned, offline social networks. Mobarak's research looks at the best way to leverage existing social relationships between farmers to encourage the spread of new, more productive farming practices.

 

Why is contraceptive use so low?

Nava Ashraf presented the results of an experiment conducted in Zambia. Mainstream opinion in the field of family planning holds that both partners in a relationship should be involved in decisions on contraception. Ashraf’s findings raised some thorny issues: women were often lying to their husbands and using contraception without their knowledge. Men in Zambia typically want more children than women do. Part of the explanation may be men’s incomplete understanding of the risks of maternal mortality – suggesting that giving men better information on this may change their attitudes to contraception.

 

What can development projects learn from Google?

Michael Kremer argued that we should view evaluation not solely as an after-the-fact accountability tool, but as a way of constantly testing interventions and feeding the results back into better product design, in much the same way that computer software goes through extensive beta testing and revision.

He gave the example of research into water quality in Kenya. Researchers began by evaluating a project to encase natural springs in concrete to prevent contamination. The intervention was successful and cost-effective relative to digging expensive new wells, but could we still do better? There was still plenty of potential for the water to become contaminated after being carried back to the home.

There is an easy solution to contaminated water – dilute chlorine – which is incredibly effective but currently has a market of only around 20 million people; impressive, but that leaves many millions more who could be benefiting and are not. How to persuade people to use chlorine? Kremer and his colleagues tried a number of different ideas and found the best by far was installing a public chlorine dispenser at the water source. A public dispenser means the chlorine is free at the point of use, provides a stark visual reminder, encourages habit formation, and fosters the development of new social norms. You can read more about IPA's efforts to scale up public chlorine dispensers here.

 

From Research to Policy

It was also great to hear from others in the business of encouraging the use of rigorous evidence in policy-making: staff from organizations all working towards the same goals as IPA, scaling up good ideas that have Proven Impact and fostering cultures of innovation allied with robust evaluation.

MCC will be uploading the presentations to its website; we’ll let you know when we hear anything!

January 27, 2011