Goldilocks Case Study: Digital Green
Digital Green: Addressing Measurement Challenges in Agricultural Technology Programs
The use of information and communications technology (ICT) in agricultural services is becoming increasingly common. These technologies—radio, SMS, television, video, and Internet services—have the potential to help smallholder farmers increase their incomes by making it easier for them to learn about and adopt new farming methods, grow higher-value crops, or connect with new markets.
Digital Green, an international non-profit organization based in India, uses locally produced videos and in-person facilitation to share knowledge about improved agricultural and nutrition practices. The program aims to help rural communities across South Asia and Sub-Saharan Africa understand and adopt better agricultural and nutrition practices, with the ultimate goal of improving individual well-being. Digital Green currently works in nine states in India, as well as in Afghanistan, Ethiopia, Ghana, Niger, Tanzania, Malawi, and Papua New Guinea. Since its start in 2008, Digital Green’s program has produced over 4,000 videos reaching more than 800,000 viewers across more than 9,000 villages.
Digital Green is measuring the program’s impact on farmer livelihoods and health status using randomized evaluations in both India and Ethiopia, and its effect on improving nutrition-related behaviors in India. The organization has also invested in an activity monitoring system that reports data on program implementation and tracks the adoption of Digital Green-promoted practices from remote locations. One challenge for the activity monitoring system is its reliance on data collected by partner organizations, which varies in quality. Recognizing this, Digital Green has instituted a series of data quality checks and procedures to improve quality. The Goldilocks Initiative’s recommendations for Digital Green focus on its agricultural activities, and include refining and consolidating the program’s theory of change and conducting a systematic review of data quality.
Lessons for Others
1. Create a clear theory of change and instill it in the organizational culture.
A theory of change that is well understood throughout the organization, together with clear definitions for key indicators, helps ensure that program staff understand the purpose of data collection and reporting. This is particularly critical when an organization works with a number of implementing partners. Organizations that operate programs through multiple field offices face an extra challenge in consistently aligning data collection with the theory of change and key performance indicators, and may need to take extra measures, such as specialized staff training, to ensure a common interpretation.
2. Pay particular attention to data credibility and reliability when using data from external entities.
Reliance on external entities for data collection requires the lead organization to develop internal capacity to audit data quality. One option is to reduce the amount of data that partners are required to report and to focus activity monitoring on the most essential operational indicators, building capacity as necessary.
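The kind of data quality audit described above can often be partially automated. As a minimal sketch, the checks below flag incomplete, implausible, or duplicate records in partner-reported activity data. The field names, thresholds, and record structure are illustrative assumptions, not Digital Green's actual schema or tooling:

```python
# Hypothetical data-quality checks on partner-reported activity records.
# All field names and thresholds here are illustrative assumptions.

def audit_records(records):
    """Return a list of (record_index, issue) flags for suspect records."""
    issues = []
    seen_ids = set()
    for i, rec in enumerate(records):
        # Completeness: every record needs a village and an attendance count.
        if not rec.get("village"):
            issues.append((i, "missing village"))
        if rec.get("attendance") is None:
            issues.append((i, "missing attendance"))
        # Range check: attendance counts outside a plausible band.
        elif not (0 <= rec["attendance"] <= 500):
            issues.append((i, "implausible attendance"))
        # Duplicate check: the same screening reported more than once.
        rec_id = rec.get("screening_id")
        if rec_id in seen_ids:
            issues.append((i, "duplicate screening_id"))
        seen_ids.add(rec_id)
    return issues

records = [
    {"screening_id": "S1", "village": "A", "attendance": 35},
    {"screening_id": "S2", "village": "", "attendance": 9000},
    {"screening_id": "S1", "village": "B", "attendance": 20},
]
print(audit_records(records))
```

Even simple rules like these let the lead organization concentrate manual review on the flagged records rather than re-auditing every partner submission.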
3. Carefully assess when and how to engage in rigorous impact evaluations and develop a plan for using the results.
When possible, organizations should consider piloting an evaluation approach or pursuing a vetting study to show that the theory of change is operating as expected. Such evidence helps organizations further refine program delivery and confirms that the organization is ready to measure impact.