Why I Wrote a Book About My Failures
I have to say, it’s been amazing to watch all the success stories in recent years of rigorous evidence being used to fight poverty, and to see IPA featured in scores of major news outlets, from NPR to The New York Times to The Wall Street Journal. Despite all the good press, though, not everything is always rosy.
I have conducted many studies that are not fit for The New York Times—studies that never panned out, that under normal circumstances would just be swept under the rug, marked as “canceled,” and never documented. But with co-author Jacob Appel, who formerly worked as a research associate for IPA, I decided to write a book about these failures—and those of other researchers who volunteered their stories. That book, Failing in the Field: What We Can Learn When Field Research Goes Wrong, has just been released (available from Princeton University Press or Amazon).
To explain why I wanted to write this book, we have to go back to the invention of the lightbulb (bear with me here). Thomas Edison tried hundreds of materials before discovering that bamboo fiber could serve as a lightbulb filament. When asked how it felt to fail so much, he reportedly answered, “I have not failed 700 times. I have not failed once. I have succeeded in proving that those 700 ways will not work. When I have eliminated the ways that will not work, I will find the way that will work.”
Jake and I salute Edison, and in creating this book we drew inspiration from his attitude. Like the lightbulb, knowledge is a product, and research is the process that produces it. Just as Edison tinkered with his filaments to see which produced a bright and lasting light, researchers tinker with research designs to see which produce meaningful and reliable knowledge. The more experimenters share their private failures so that others can learn, the more quickly we will find the ways that work.
In Failing in the Field, we highlight different types of research failures: cases where a study is conceived with a particular question in mind but does not manage to answer it. Either researchers started out with a faulty plan, or they had a good plan but were derailed by events that took place once the study was underway.
In one study in 2006, for instance, fellow researchers and I worked with IPA to measure how micro-entrepreneurs in Ghana responded to different interest rates on micro-loans. But because the loan application process was too long and cumbersome for potential borrowers, hardly anyone ended up taking out a loan (even though initial interest was high). With so few borrowers, the study was left underpowered: we could not detect the effects we set out to measure.
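For readers curious about the statistics, the power problem is mechanical: when take-up is low, the measurable intent-to-treat effect shrinks in proportion, so the required sample size grows with the inverse square of the take-up rate. Here is a rough back-of-envelope sketch, assuming a simple two-arm design with equal allocation, an outcome measured in standard-deviation units, and no take-up in the control group; the function name and the numbers are illustrative, not the actual calculations from the Ghana study.

```python
from scipy.stats import norm

def required_sample_size(tot_effect_sd, take_up, alpha=0.05, power=0.80):
    """Total N (two equal arms) needed to detect a treatment-on-the-treated
    effect of `tot_effect_sd` (in standard-deviation units) when only a
    fraction `take_up` of the treatment group actually takes the loan."""
    # Low take-up dilutes the measurable intent-to-treat effect proportionally.
    itt_effect = take_up * tot_effect_sd
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Standard two-arm normal approximation: N_total = 4 * (z / delta)^2
    return 4 * (z / itt_effect) ** 2

# Planned 50% take-up vs. the kind of shortfall we actually faced:
print(round(required_sample_size(0.25, 0.50)))  # ~2009 participants
print(round(required_sample_size(0.25, 0.10)))  # ~50233 -- 25x larger
```

Dropping from a planned 50 percent take-up to 10 percent inflates the required sample by a factor of 25, which is why a shortfall in take-up can sink a study outright.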
But we learned several things. We learned that we should have conducted a longer pilot, that we should not have overburdened partner staff with extra responsibilities (which slowed down the loan application process), and that it was too early to test the product: it should have been tinkered with first. This was an instance of the first category mentioned above: we had a faulty plan.
Innovations for Poverty Action has been learning these lessons for the past 14 years, always adapting and improving its methods of data collection and its guidance to potential partners about when and how to conduct a randomized evaluation. That accumulated learning is a big reason why IPA is such a trusted and respected organization.
Failing in the Field is an effort to document some of the failures and lessons learned in this space so that others, to paraphrase Edison, do not have to re-prove that all those 700 ways do not work.