Deciding on Costing Analysis: Why Different Questions Need Different Models for Program Evaluation

Imagine a maternal health program operating in rural Kenya. Funders want to know: 'What's the cost per life saved?' The implementing team is asking: 'How much will it cost to scale this to three more districts?' Meanwhile, the monitoring team needs to know: 'Are we spending our budget efficiently compared to similar programs?' Same program, three different questions—and each requires a fundamentally different costing approach.

This scenario plays out daily across the development sector. In today's constrained funding environment, organizations face mounting pressure to demonstrate value for money. Yet many are not using the right analytical tool for the specific question they are trying to answer. For example, a cost-per-outcome analysis can tell you which program delivers the most impact per dollar spent, but it won't reveal the practical challenges of expanding that program to serve thousands more people. On the flip side, a detailed budget that maps out every operational cost gives you a roadmap for implementation, but it can't tell you whether your program offers better value than other interventions targeting the same problem. 

At IPA, we've learned that one size doesn't fit all when it comes to costing. Through our work with partners facing these exact dilemmas, from governments deciding between interventions to NGOs planning multi-country expansions, we've discovered that matching the right analytical framework to the right decision-making context leads to more informed choices, better allocation of resources, and increased effectiveness. Below, we've outlined the key costing frameworks we've developed, along with the specific real-world problems they help our partners solve.

Cost-Effectiveness Analyses for Program Comparison

Building the Foundation: Our Approach to Cost Data

The foundation of any reliable analysis is accurate and comprehensive cost data. Detailed cost collection serves multiple purposes: it helps identify the primary cost drivers that determine program efficiency, provides crucial insights into how costs may change when programs are replicated or scaled up in different contexts, and ultimately enables accurate program comparison.

One of the biggest obstacles is the challenge of collecting cost data across multiple sources. Financial records are often scattered across different systems, costs may be shared across programs, and expenses might be recorded inconsistently across implementation sites. To address these challenges head-on, IPA has developed an automated tool that facilitates real-time collection of cost data as a critical input to a comprehensive cost-effectiveness analysis (CEA). This tool reduces the burden on program staff while minimizing errors and omissions that can compromise the accuracy of our analyses.

We're also building a comprehensive library of the CEAs we've conducted, and starting to include CEA information in our research summaries. By sharing our completed analyses, we're contributing to a growing body of public knowledge about program costs and effectiveness.

Supporting Partners' Resource Allocation Decisions

Cost-effectiveness analyses help decision-makers identify which programs offer the greatest value for money when choosing between interventions that aim to achieve the same outcome. Where impact evaluation data is available, CEAs provide these decision-makers with a clear, quantitative basis for comparing programs by examining the ratio of costs to impacts. We regularly work with funders and policymakers facing a specific resource allocation challenge: choosing between multiple evidence-backed programs that target the same development goals. An education ministry might need to decide whether to invest in teacher training, textbook provision, or school feeding programs to boost learning outcomes. A foundation might assess whether to fund cash transfers, vocational training, or microfinance programs to reduce poverty. 
When we’re thinking about which educational intervention will deliver the most literacy improvement per dollar, or which health program offers the best value for increasing vaccine uptake, we turn to CEAs. 
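To make the comparison concrete, here is a minimal sketch of how a cost-effectiveness ratio ranks alternative programs targeting the same outcome. Every program name, cost, and outcome figure below is hypothetical, purely for illustration:

```python
# Illustrative cost-effectiveness comparison with hypothetical figures.
# Each program targets the same outcome (additional children reading at
# grade level), so their impacts are directly comparable.

programs = {
    "teacher_training":   {"total_cost": 250_000, "outcome_units": 1_800},
    "textbook_provision": {"total_cost": 120_000, "outcome_units": 600},
    "school_feeding":     {"total_cost": 400_000, "outcome_units": 2_100},
}

def cost_effectiveness(total_cost: float, outcome_units: float) -> float:
    """Cost per unit of outcome achieved (lower is better)."""
    return total_cost / outcome_units

# Rank programs from most to least cost-effective.
ranked = sorted(
    programs.items(),
    key=lambda item: cost_effectiveness(**item[1]),
)

for name, data in ranked:
    ratio = cost_effectiveness(**data)
    print(f"{name}: ${ratio:,.2f} per additional child reading at grade level")
```

Note that the ratio only supports a decision when the denominator is the same outcome for all programs; comparing cost per child reading against cost per vaccination requires a broader framework like SROI, discussed next.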

Social Return on Investment Modeling at Scale

Addressing the Scale-Up Question

CEA is used to compare interventions with a common outcome metric, helping partners choose between alternative programs targeting the same goal. However, we recognized that partners also needed to answer a different question: not just “Which program should we choose?”, but also “What is the full value of the program that we’re already considering?”
When a donor asks, “Should we invest in scaling up this maternal health program?” they often care about more than just lives saved. They want to understand the program’s impact on education, economic productivity, healthcare system costs, and other social outcomes. This is where Social Return on Investment (SROI) becomes valuable. SROI aggregates these diverse impacts into a unified investment case, capturing broader social value that CEA’s single-outcome focus would miss. 

IPA’s SROI framework draws on the analytical approach led by Michael Kremer (co-recipient of the 2019 Nobel Prize in Economics and co-founder of USAID Development Innovation Ventures) and others, which demonstrated how SROI analysis could guide large-scale development investment decisions by quantifying the total social value generated per dollar invested. The methodology also draws on guidance from IPA’s founder Dean Karlan to provide donors and partners with a comprehensive view of a particular program's potential return on investment when implemented at varying scales. 
This approach recognizes that both costs and impacts may change dramatically as programs move from pilot to full-scale implementation. What works in a controlled trial with 100 participants might face entirely different costs and impacts when rolled out to 100,000 recipients.

The goal of IPA’s SROI framework is to model both costs and impact at scale with equal rigor. On the cost side, we account for economies of scale and the varying cost structures across different implementation contexts. Some costs decrease per recipient as programs scale, while others might increase due to coordination challenges or quality maintenance requirements.
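As a sketch of this logic, the toy model below spreads a fixed cost over a growing number of recipients and sums several monetized benefit streams into a single return-per-dollar figure. Every number, benefit channel, and parameter is hypothetical and chosen only to show how scale changes the arithmetic:

```python
# Illustrative SROI sketch: aggregate monetized benefit streams and model
# per-recipient costs that fall as fixed costs are spread over more people.
# All figures are hypothetical.

def per_recipient_cost(n_recipients: int,
                       fixed_cost: float,
                       variable_cost: float) -> float:
    """Average cost per recipient; fixed costs dilute with scale."""
    return fixed_cost / n_recipients + variable_cost

def sroi(n_recipients: int,
         fixed_cost: float,
         variable_cost: float,
         benefits_per_recipient: dict) -> float:
    """Total monetized social value generated per dollar invested."""
    total_cost = fixed_cost + variable_cost * n_recipients
    total_benefit = sum(benefits_per_recipient.values()) * n_recipients
    return total_benefit / total_cost

benefits = {            # monetized value per recipient, by outcome channel
    "health": 95.0,
    "education": 40.0,
    "productivity": 60.0,
}

for n in (100, 10_000, 100_000):
    ratio = sroi(n, fixed_cost=200_000, variable_cost=25.0,
                 benefits_per_recipient=benefits)
    print(f"{n:>7} recipients: ${ratio:.2f} of social value per $1 invested")
```

In this toy model the return rises with scale because fixed costs dilute; a fuller model would also let the per-recipient benefits attenuate at scale, reflecting the implementation-quality concerns described above.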

On the impact side, IPA’s modeling addresses questions about implementation quality, adaptation challenges, and sustainability. Programs that show strong results in carefully managed pilots can experience reduced effectiveness when implemented through different implementers or in diverse geographic contexts.

The credibility of SROI modeling depends on making realistic assumptions, documenting methodology transparently, and showing decision-makers how changes to key parameters affect the investment case. This approach ensures that our SROI models serve as robust tools for decision-making rather than exercises in overly optimistic projections.

Cost-Effectiveness Modeling for Early-Stage Decision Making

While CEAs and SROI analyses rely on rigorous impact data from completed evaluations, IPA often works with partners who need to make decisions about promising programs that have not yet undergone full-scale impact evaluations or for which no rigorous evidence base exists. Cost-effectiveness modeling (CEM) uses the best available data, whether preliminary pilot results, evidence from similar contexts, or expert projections, alongside transparent assumptions to project potential cost-effectiveness. Unlike CEA, which analyzes what has happened using existing impact evaluation data, cost-effectiveness modeling helps partners understand what could happen under different scenarios and assumptions.

Identifying Leverage Points Before Full Implementation

The forward-looking nature of cost-effectiveness modeling makes it particularly valuable at early stages of program development. A partner developing a new digital learning platform might use modeling to understand whether teacher training costs or technology infrastructure will be the primary determinant of cost-effectiveness. By varying key assumptions, the model reveals which factors matter most, allowing implementers to focus their design efforts where they could have the greatest impact on overall value for money.
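One common way to find those leverage points is a one-way sensitivity analysis: vary each assumption across a plausible range while holding the others at baseline, and compare how far the headline metric swings. The sketch below applies this to the digital learning example; all parameters, ranges, and the simple cost structure are invented for illustration:

```python
# Illustrative one-way sensitivity analysis for a hypothetical digital
# learning platform. The assumption whose range produces the largest
# swing in cost per learning gain is the key leverage point.

BASELINE = {
    "training_cost_per_teacher": 150.0,        # USD, assumed
    "infrastructure_cost_per_school": 2_000.0,  # USD, assumed
    "learning_gain_per_student": 0.15,          # standard deviations, assumed
}

STUDENTS_PER_TEACHER = 30
TEACHERS_PER_SCHOOL = 10

def cost_per_gain(params: dict) -> float:
    """Cost per standard deviation of learning gained, per student."""
    students_per_school = STUDENTS_PER_TEACHER * TEACHERS_PER_SCHOOL
    cost_per_school = (params["infrastructure_cost_per_school"]
                       + params["training_cost_per_teacher"] * TEACHERS_PER_SCHOOL)
    cost_per_student = cost_per_school / students_per_school
    return cost_per_student / params["learning_gain_per_student"]

RANGES = {  # plausible low/high value for each assumption
    "training_cost_per_teacher": (100.0, 250.0),
    "infrastructure_cost_per_school": (1_000.0, 4_000.0),
    "learning_gain_per_student": (0.08, 0.25),
}

for name, (low, high) in RANGES.items():
    # Vary one assumption at a time, holding the others at baseline.
    swing = abs(cost_per_gain({**BASELINE, name: high})
                - cost_per_gain({**BASELINE, name: low}))
    print(f"{name}: swing of ${swing:.2f} in cost per SD of learning")
```

Under these invented numbers the learning-gain assumption dominates the swing, suggesting design effort (or a rigorous study) should focus on impact rather than on cost line items; real models would of course use partner-specific parameters.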

These models are inherently context- and decision-specific. We first clarify with partners the purpose of the modeling exercise and which decisions it will inform, such as funding specific interventions with a high probability of being cost-effective, or commissioning rigorous studies to test the hypotheses the modeling generates. We then develop the models in close collaboration with implementing partners, ensuring assumptions reflect on-the-ground reality while maintaining analytical rigor. The result bridges the gap between theoretical potential and a robust evidence base, helping partners make informed decisions even when perfect information isn't available.

What's Next for IPA's Costing Work

As we continue to develop these tools and methodologies, we're committed to sharing our learnings with the broader development community. Our automated CEA tool and CEA library represent just the beginning of our efforts to make rigorous cost-effectiveness analysis more accessible and actionable. We're also continuing to refine our SROI framework based on feedback from donors and implementation partners.