Goldilocks: Finding the Right Fit in Monitoring & Evaluation
The struggle to find the right fit in monitoring and evaluation (M&E) resembles the predicament Goldilocks faces in the fable “Goldilocks and the Three Bears.” In the fable, a young girl named Goldilocks finds herself lost in the forest and takes refuge in an empty house. Inside, she finds an array of options: comfy chairs, bowls of porridge, and beds. She tries each but finds that most do not suit her: the porridge is too hot or too cold, the beds too hard or too soft. She struggles to find options that are “just right.” Like Goldilocks, organizations have to navigate many choices and challenges to build data collection systems that suit their needs and capabilities. How do you develop data systems that fit “just right”?
Over the last decade and a half, nonprofits and social enterprises have faced increasing pressure to prove that their programs are making a positive impact on the world. This focus on impact is welcome: learning whether we are making a difference enhances our ability to address pressing social problems effectively, and it is critical to wise stewardship of resources.
However, measuring a program’s impact is not always possible, nor is it the right choice for every organization or program. Accurately assessing impact requires knowing what would have happened had the program not occurred (the counterfactual), and that information can be costly and difficult, or even impossible, to gather.
Yet nonprofits and social enterprises face stiff competition for funding, and to compete they often need to prove they are making an impact. Faced with this pressure, many organizations attempt to measure impact even when the accuracy of the measurement is questionable. The result is a lot of misleading data about what works.
Efforts to measure impact have also diverted resources from a critical and often overlooked component of performance management: monitoring. When done well, monitoring furthers internal learning, demonstrates transparency and accountability to the public, and complements impact evaluations by clarifying how program activities are actually carried out.