Right-Fit Evidence

In this image: Harriet Conron, Manager of IPA's Right-Fit Evidence Unit, presents on RFE's Stage-Based Learning Framework.

Our Approach

Data and evidence have the power to unlock the full potential of the development and humanitarian sectors. But too often, monitoring and evaluation amounts to little more than a burden on organizational time and resources. We're here to change that.

A right-fit approach maximizes actionable learning and data-driven decision-making while integrating seamlessly with an organization's structure and workflows. Done well, data and evidence empower organizations to improve their programs, adapt to new contexts, and demonstrate real impact.

We help organizations find a monitoring, evaluation, and learning (MEL) approach that fits their needs, and then build the systems and processes to make it a reality. For implementing organizations, this means we collaboratively develop learning goals, identify gaps in current MEL systems, and provide hands-on technical support. For funders, it means we support the design of portfolios based on existing evidence and generate quality, actionable learning across them: starting with individual grantees, then aggregating and synthesizing findings for portfolio-level use and for sharing with the broader community.


Our Work

    For Implementers: Building Strong MEL Systems and Achieving Learning Goals

    The Right-Fit Evidence Unit works hand-in-hand with implementing organizations to tackle their most pressing MEL challenges. Some organizations collect too little data to meaningfully track performance. Others collect too much data without a clear use strategy, leaving little basis for decision-making. We help organizations collect the right amount and type of data to inform decisions and increase impact.

    We design bespoke partnerships that prioritize collaboration and institutional sustainability by building MEL systems tailored to each partner's needs, while upholding the highest standards of rigor. Our support takes many forms:

    • Program design foundations: We define core challenges through root cause analysis to build a shared understanding of the problem space, and review research on what works in different contexts to inform program design.
    • Learning plan design: We facilitate collaborations between program and monitoring teams to identify what they need to learn most about their programs, and build MEL plans to efficiently generate evidence to inform learning.
    • MEL system and capability strengthening: We identify key challenges and gaps in current MEL processes, develop concrete plans to address them, and provide hands-on technical guidance and upskilling for rigorous qualitative and quantitative data collection, analysis, and implementation research.
    • Implementation learning and research: We design and implement rapid learning methodologies such as user testing, prototyping, piloting, and A/B testing for iterative program development, and conduct process evaluations to validate assumptions and assess whether outputs and early outcomes are being achieved, informing what needs refinement before impact evaluation or scaling.
    • Cost-effectiveness modeling and analysis: We support partners to create models to compare interventions, build an investment case for scale-up, or project cost-effectiveness for programs, particularly those without a mature evidence base.

    Ready to partner with us to build a learning-focused MEL system that increases your impact? Email rightfit@poverty-action.org to get started.

    For Funders: Enhancing Impact with the Power of Evidence and Learning

    Enabling Stage-Based Learning: A Funder's Guide to Maximizing Impact

    Better learning will ultimately lead to better results. Enabling Stage-Based Learning: A Funder's Guide to Maximizing Impact helps funders optimize learning at each stage of an intervention’s journey towards scale. This approach can transform how funders engage with their partners and the interventions they support, resulting in more cost-effective and impactful scaling.


    Funders are uniquely positioned to strengthen impact in the development and humanitarian sectors through the use of evidence and learning. Through the decisions they make on investments in learning and reporting requirements, funders have outsized influence in moving the entire field toward better program design and implementation.

    Realizing this potential requires intentional design of learning systems, reporting structures, and funder-grantee relationships. Funders need to stay connected to the latest evidence, draw actionable insights from grantee data, and design reporting that incentivizes real learning for themselves and their grantees. Getting this right requires intentionality and the right support.

    The Right-Fit Evidence Unit partners with funders who want to realize their potential as drivers of learning for their own organizations and across their portfolios. We tailor our support to each funder's context and goals, often working with both the funder and their grantees in wide-ranging learning partnerships. These often include:

    • Evidence-based initiative design: We synthesize existing research and apply it to inform the design of portfolios and programs, helping funders direct resources toward approaches with strong evidence that advance their key goals.
    • Portfolio learning agendas, MEL frameworks, and operationalization with grantees: We develop coherent learning strategies that prioritize credible, actionable evidence generation across the portfolio, and translate them into practical tools: learning-oriented RFPs, reporting templates, and shared expectations that connect funder learning goals to grantee-level MEL activities.
    • Strengthening grantee learning: We work directly with grantees to improve how they generate and use evidence, providing hands-on MEL technical assistance, conducting complementary research activities such as process evaluations and impact evaluations (see our section for implementers for more on these services), and facilitating knowledge sharing across peers and cohorts within a portfolio.
    • Portfolio learning and dissemination: We help funders draw actionable insights from the evidence generated across their portfolio by aggregating and synthesizing program-level findings for portfolio-level use, and supporting strategic dissemination to influence key stakeholders and the broader sector.

    To learn more about our offerings to funders and how IPA helps funders maximize impact with the power of evidence and learning, please see this one-pager.

    Interested in making your portfolio a driver of greater, more cost-effective impact? Email rightfit@poverty-action.org to get started.

IPA’s Right-Fit Evidence Unit: A Brief Introduction

Our Advisory Services

    Problem Diagnostic

    For partners designing new programs or refining strategy, we help clearly define the challenges that need to be addressed, identify the primary causes behind them, and prioritize areas for action. Through structured methodologies such as problem definition frameworks, root cause analysis, and participatory stakeholder engagement, we support partners to develop a shared understanding of their problem space. This process enables data-driven decision-making and ensures interventions target the most pressing and actionable barriers to impact. This is particularly useful when multiple stakeholders have different perceptions of the problem, when existing data is fragmented and lacks structured analysis, or when clarity on the core challenge is needed before designing solutions.

    Evidence Reviews

    When partners need clear guidance on which interventions to pursue, we synthesize findings from existing research relevant to their context and outcomes. Drawing on rigorous studies, meta-analyses, and expert insights, we provide actionable recommendations that inform portfolio strategies, learning agendas, intervention selection, and prototype development. We focus on synthesizing key takeaways across the evidence base to answer specific questions about what works, what drives impact, and what should be tested or scaled in particular contexts—helping partners make concrete decisions based on the available evidence.

    Learning Plan Design

    Through a series of collaborative workshops, we support implementers and funders in designing robust MEL plans that align teams around prioritized data collection for program improvement and strategic decision-making. We start with a clear theory of change and support partners in identifying their most critical learning questions based on their program's stage of maturity. We then co-design right-fit methodologies and indicators matched to these questions and the theory of change, ensuring credibility (valid, reliable, and rigorous), actionability (decision-relevant), responsibility (cost-effective and responsible with participants), and transportability (within- and cross-program learning). The resulting learning plan ensures all data collected directly informs decisions: whether understanding implementation effectiveness, identifying program improvements, or building evidence for scaling. Throughout, we focus on building internal capacity so teams can apply these credible, right-fit MEL practices consistently across current and future projects, ultimately fostering more cost-effective and impactful programs through systematic learning.

    Learning System Diagnostic

    For organizations with existing MEL systems, our diagnostics identify where and how to strengthen them. We review current MEL approaches and provide feedback based on our CART principles (Credibility, Actionability, Responsibility, Transportability) to assess how well the system generates actionable learning and supports decision-making. Through stakeholder interviews, document reviews, staff surveys, and collaborative working sessions, we work with teams to develop concrete action plans that address key gaps. We focus on building buy-in and clarity around next steps so that recommendations translate into actual improvements. Diagnostics can focus on a single program or assess an entire organization's MEL capacity.

    Monitoring and Learning Technical Assistance

    We strengthen partners’ ability to implement continuous learning cycles by building robust monitoring systems that generate actionable insights for program improvement. Drawing on IPA’s evidence-informed approach to learning and decision-making, we support partners to design practical monitoring tools, collect high-quality data on program outputs and early outcomes, and translate that data into structured reflection and adaptive management. Through tailored capacity-building, such as training, hands-on accompaniment, and feedback on monitoring and learning processes, we help organizations institutionalize monitoring practices that enable them to systematically track progress, identify challenges, and make informed adjustments to their programs over time.

    User Research

    To foster more user-centered, cost-effective programs at early design stages, we support partners in designing, refining, and adapting interventions by uncovering what users actually need, how they behave, and what barriers they face. This process involves creating user personas to understand different user groups, mapping user journeys to visualize step-by-step interactions and identify pain points, and conducting user testing through prototypes and observation to validate design choices before full implementation. Our approach helps organizations move from designing based on assumptions to designing based on evidence from real users, achieving greater impact through improved usability and engagement. This is particularly valuable when user adoption, engagement, or satisfaction are key concerns.

    A/B Testing

    A/B testing advisory engagements focus on helping organizations build the systems and processes needed to run rapid experiments continuously to improve their programs or products over time. Rather than supporting a single test, we work with teams to develop a learning roadmap that identifies high-value questions to test, establish the technology and data infrastructure required to run experiments reliably, and implement processes to monitor and interpret results. This approach enables organizations to systematically test refinements, learn from real user behavior, and integrate evidence into ongoing product or program improvement.
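The statistical core of interpreting a single A/B comparison can be sketched briefly. The example below applies a standard two-proportion z-test to hypothetical completion counts for two program variants; the numbers, variant descriptions, and function name are illustrative, not drawn from any real engagement:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Compare the rates of variants A and B with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (computed via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 120 of 1,000 users complete a step under the current
# flow (A) versus 150 of 1,000 under a simplified flow (B).
z, p = two_proportion_ztest(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In practice this calculation is only one piece of the infrastructure described above; the learning roadmap, randomization, and data pipelines determine whether such a test produces trustworthy answers.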

    Implementation Research and Process Evaluation

    We design and conduct implementation research and process evaluations to help partners understand how and why programs work in practice, and what needs refinement before the next stage of investment, whether that is an impact evaluation or scaling support. Drawing on implementation science approaches, we work collaboratively with program and MEL teams to define key learning questions aligned with the program’s theory of change, assess implementation fidelity and quality, and examine how contextual and operational factors influence results and whether the intervention is achieving its intended outputs and early outcomes. This helps partners make confident decisions about whether to move forward with impact evaluation, scale, adapt, or refine their interventions.

    Cost-Effectiveness Modeling and Analysis

    For organizations comparing program options or making the case for investment, we support cost-effectiveness work depending on available impact evidence. When impact evaluation data isn't available, we develop cost-effectiveness models using existing evidence and transparent assumptions to estimate potential value for money. Where rigorous impact data does exist, we conduct cost-effectiveness analyses to determine which programs deliver the greatest impact per dollar invested. In both cases, we make our assumptions explicit, identify the key factors that drive cost and impact, and help partners build decision frameworks tailored to their context, supporting resource allocation decisions and making the case to stakeholders for continued investment or scaling.
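The core arithmetic behind a simple cost-effectiveness comparison can be illustrated with a minimal sketch. All figures and program names below are hypothetical, and a real model would make many more assumptions explicit (cost categories, outcome definitions, discounting):

```python
# Minimal cost-per-outcome comparison for two hypothetical programs.
programs = {
    "Program A": {"total_cost": 500_000, "outcomes_achieved": 2_000},
    "Program B": {"total_cost": 300_000, "outcomes_achieved": 1_000},
}

for name, prog in programs.items():
    cost_per_outcome = prog["total_cost"] / prog["outcomes_achieved"]
    print(f"{name}: ${cost_per_outcome:,.0f} per outcome")

# Simple sensitivity check: does the ranking flip if Program B's true
# effect is 30% larger than measured?
adjusted = programs["Program B"]["total_cost"] / (
    programs["Program B"]["outcomes_achieved"] * 1.3
)
print(f"Program B (adjusted): ${adjusted:,.0f} per outcome")
```

Sensitivity checks like the one at the end are where such models earn their keep: they show which assumptions actually drive the comparison and therefore where better data would matter most.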

    Community of Practice Facilitation

    We serve as learning partners for Communities of Practice (CoPs)—groups of implementers, donors, and other stakeholders with shared goals who want to collaborate to maximize learning. We help CoPs develop shared learning agendas, establish clear ways of working, and build the relationships and trust needed to move beyond information sharing to meaningful collaborative learning. Our support takes two forms: coordinating collective learning across the CoP as a whole, and providing tailored technical assistance to individual members. We typically facilitate working sessions and reflection workshops, and build learning capacity among members. In some cases, we also synthesize evidence or collect data where the learning agenda calls for it.

    Funder Advisory Services

    We work with philanthropic organizations and government funders to integrate learning and evidence generation into their strategy and practices, while simultaneously supporting the learning goals of organizations in their portfolio. Our support spans the full grantmaking cycle: synthesizing existing evidence to inform initiative design, developing portfolio learning agendas and MEL frameworks, and operationalizing these through aligned RFPs, reporting structures, and shared expectations with grantees. At the grantee level, we provide hands-on MEL technical assistance, conduct independent research such as process evaluations and impact evaluations to complement grantee learning, and facilitate communities of practice that enable knowledge sharing across the portfolio. We also help funders aggregate and synthesize evidence across their portfolios to inform investment decisions and share findings with the broader sector. Throughout these engagements, we identify specific steps that funders should take to make their learning vision a reality, and offer technical support to achieve the desired outcomes.