Goldilocks Case Study: Women for Women International
Women for Women International: Monitoring and Evaluation in Conflict and Post-Conflict Settings
Women survivors of war and conflict are disproportionately affected by acts of violence, displacement, poverty, and loss of property and relatives. Conflict disrupts familial and community networks, compelling women to assume greater responsibility for generating household income and supporting their dependents and community. Women for Women International (WfWI) works in countries affected by conflict and war and addresses these issues by supporting women to earn and save money, improve health and well-being, influence decisions in their home and community, and connect to networks for support.
This case study examines WfWI’s collection and use of data in conflict and post-conflict settings to monitor and measure the results of its work.
Despite the challenging setting, WfWI has developed a data collection system that produces high-quality data, and it is making important changes to its main indicators to ensure they appropriately capture the organization’s work. Building on current efforts to improve M&E within the organization, we have two primary recommendations: first, that WfWI conduct a rigorous impact evaluation; and second, that it modify its IT infrastructure and data collection processes to provide faster, more useful information for operational decision-making in country offices.
Lessons for Others
1. Test and implement M&E processes that improve the efficiency of data collection.
Collect data in more depth from a representative sample of participants, rather than conducting light-touch but costly surveys of all program participants, and focus data collection on key indicators (an illustrative sample-size calculation follows these lessons). Investing in a robust electronic data collection infrastructure will also deliver gains in cost, data quality, and speed of data use.
2. Track participants beyond program completion.
Within the bounds of available resources, panel tracking surveys of program participants can reveal participant trajectories after the program ends and identify areas in which future participants might benefit from stronger or new programming.
3. Document M&E protocols and regularly train staff in them.
Clearly documenting M&E protocols and training all staff to collect data in the same way help ensure the consistency and reliability of data across program sites. Encouraging M&E staff to maintain their independence from program implementation, to raise issues they encounter with data collection, and to propose new ideas for collecting data may also boost the credibility of the data they collect.
4. Design impact evaluations that measure overall causal impact and test programmatic tweaks.
A rigorous impact evaluation should measure the impact of the overall program, but it should also ideally allow for testing of specific program components (see the sketch below). This provides more learning for WfWI and for others implementing similar programs.
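To illustrate the sampling logic in Lesson 1, a standard sample-size formula (a general statistical rule of thumb, not WfWI’s own methodology) for estimating a proportion-based indicator, such as the share of participants who save regularly, to a given precision is

$$n = \frac{z^2\, p(1-p)}{e^2},$$

where $z$ is the critical value for the desired confidence level (1.96 for 95 percent), $p$ is the expected proportion (0.5 is the conservative choice), and $e$ is the acceptable margin of error. With a 5-percentage-point margin of error, this gives $n \approx 385$ completed surveys regardless of the size of the full participant pool, which is why surveying a well-drawn sample in depth can cost far less than a light-touch census while still producing reliable estimates.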
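For Lesson 4, one common design that serves both goals is a three-arm randomized evaluation; the sketch below uses generic notation and is not drawn from WfWI’s evaluation plans. With a control group $C$, a standard-program arm $T_1$, and an arm $T_2$ that receives the program plus the component being tested, the two contrasts of interest are

$$\hat{\tau}_{\text{program}} = \bar{Y}_{T_1} - \bar{Y}_{C}, \qquad \hat{\tau}_{\text{component}} = \bar{Y}_{T_2} - \bar{Y}_{T_1},$$

where $\bar{Y}$ denotes the average outcome in each arm. The first contrast estimates the overall causal impact of the program; the second isolates the contribution of the specific component.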