Peace & Recovery Initiative | Information Session for Researchers and Practitioners

On October 2, 2024, IPA held a webinar for researchers and research team members who plan to submit a proposal for our Peace & Recovery Initiative's Call for Proposals. The webinar provided an overview of the P&R Initiative's thematic scope, the types of projects eligible for funding, evaluation criteria, and common proposal pitfalls and tips to avoid them, and included an interactive Q&A session.

You can view the slides from the webinar here, and the recording (in English, with a written transcript available in English, Arabic, French, and Spanish) below.

Peace & Recovery Initiative | Information Session | October 2, 2024

Ricardo Morel:
Today we are holding an information session to tell you a little bit more about the Peace and Recovery Initiative and give more details on the upcoming call for proposals that opened very recently. Before we start, just a few housekeeping notes. We are going to have a brief presentation, roughly 20 to 30 minutes at most, and then allow a good amount of time for questions from you. If you want to submit a question, please use the Q&A button at the bottom of the Zoom screen. By default, everyone is muted and without video, but if you find yourself unmuted or with video, please keep your video and audio off during the presentation. In case you need them, we have closed captions enabled; just click on the Show Captions button at the bottom of your screen and you'll be able to see them. As I mentioned, the slides and the recording will be made available after this presentation, including translations as well. Today I'm going to be joined by two of my colleagues, Nessa Kenny and Ayda Lulseged. On this slide you can also see who our academic advisors are, namely Christopher Blattman and Betsy Levy Paluck. In today's presentation, we will talk about what the Peace and Recovery Initiative is, what types of projects are eligible for funding, and who can apply for funding. We'll also share common proposal pitfalls and advice for a successful proposal, along with the funding restrictions, upcoming deadlines, and contact details.

Ricardo Morel:
This is the agenda for today. We'll start with an introduction to the Peace and Recovery Initiative, and then I'll hand it over to my colleagues to talk about the call for proposals itself, some tips and tricks, and the timeline. Then we'll move into Q&A. So, about the Peace and Recovery Initiative. IPA might be familiar to some of you, but for those of you who are not that familiar with the organization, IPA stands for Innovations for Poverty Action, and our mission is to reduce poverty by ensuring that policy and programs are informed by scientific evidence. One of our core activities is generating evidence, and we do this through different channels; having a competitive research fund is one of them. IPA as a whole has 20 country offices, but we have worked in more than 50 countries, conducting nearly a thousand rigorous impact evaluations in partnership with hundreds of researchers. The research that we support usually tries to address key policy and practice questions and to identify the causal links between an intervention and the outcomes we are seeing in a particular population - in other words, impact evaluation research - and, as I mentioned before, we usually award this funding through competitive research funds. We care a lot about building partnerships between researchers and implementing organizations, from local to international NGOs, as well as governments. But our work doesn't stop there. We go a step further to not only generate evidence, but to ensure that decision-makers in the policy and practice arenas are actually using that evidence, and we have a variety of ways in which we support evidence use.

Ricardo Morel:
Next slide, please. The Peace and Recovery Initiative in particular, very much in line with IPA's mission of generating and sharing evidence, focuses on generating and sharing evidence related to the prevention of, response to, and recovery from violence and humanitarian emergencies. The Peace and Recovery Initiative is supported by the UK Government and Open Society Foundations. The main activities that we do, again in line with our mission, are to facilitate partnerships that match researchers and implementing organizations; to fund research, mainly through competitive research funds, which we'll talk about for most of the presentation today; and to disseminate evidence - once we have research findings, we create project summaries, prepare evidence synthesis products, share this evidence through social media and traditional media, attend and present at events, and so forth. This is also a collaboration with J-PAL, the Abdul Latif Jameel Poverty Action Lab, which runs two initiatives - the Governance Initiative and the Crime and Violence Initiative - that are very much in line with and complement the Peace and Recovery Initiative. Finally, on this map you can see the locations, or countries to be more specific, where the Peace and Recovery Initiative has supported research projects with roughly $4.7 million in grant funding. That's 28 countries, and with the upcoming calls for proposals we hope to keep these numbers growing. So in the next few slides, I'll have my colleague Nessa Kenny join me to talk about the call for proposals.

Nessa Kenny:
Thanks so much, Ricardo. So as he said, I'm going to be providing an overview of this funding call, including the scope, the types of grants we give out, the evaluation criteria, and opportunities for research partnership. You may be wondering exactly what the scope of this initiative is and whether the research that you are working on, or would like to work on, is a good fit. On this slide are the parameters that we have defined in consultation with our donor, with input from dozens of researchers, policymakers, and practitioners over the last seven years of running the initiative. On the left, you'll see our general parameters. In general, we're most interested in research projects that contribute to generalizable learning about what programs work and why, and the why here is particularly important to us. We're really interested in research that probes the mechanisms behind why a program works the way it does and what components might be driving that impact. On the right-hand side, you'll see the topical themes that we fund research under; there are six of them in total. The first is understanding and preventing individual-level participation in violence. This includes efforts to understand incentives for participation in violence, effective prevention strategies, and the reintegration of individuals who've participated in violence in the past. The second is understanding, combating, and reintegrating non-state armed groups, and this includes research on reducing recruitment, financing, or governance by these groups and incentivizing their transition to more sanctioned political and economic activity.

Nessa Kenny:
We also fund work on reducing prejudice and building horizontal social cohesion among groups. Research on this topic has sort of shown small positive effects, including research that we've funded, and we're really interested in future research that evaluates novel or refined approaches, that attempts to augment those impacts and reduce unintended consequences or potentially increase spillovers. And we also fund research on strengthening household and community resilience. And here what we mean, I think in particular, is resilience to negative shocks that might befall a household. And this, of course, is often a goal of many types of programs in fragile settings to sort of help individuals and households weather those shocks. We then fund research on building institutions, resolving disputes and delivering justice. And I think here we're particularly interested in questions related to this theme - when violence or conflict or political instability or disaster fundamentally changes the nature of governance or service delivery, justice delivery, institution building; and less so in sort of bread and butter governance or justice or institution building questions that might apply to many different contexts, even those that aren't necessarily affected by violence or conflict or disaster. And then finally, we fund work on addressing root causes and preventing future crises.

Nessa Kenny:
Of course, many of the interventions that you might think of fit into the priorities above and also have this goal. But there is a broader set of interventions that don't necessarily fall under the themes above - around early warning, preparedness, or prevention - with very little evidence, and we'd be interested in building the evidence base on those. One thing to note as well is that we also have a cross-cutting interest in measurement and design, and we welcome proposals that use innovative designs or develop new measurement strategies to augment our understanding of peace and recovery. For more information on all of this, please consult our call for proposals document. The document also includes a non-exhaustive list of the questions that we are interested in funding research on. You're welcome, of course, to combine questions or develop new questions when submitting proposals, provided they're related to our thematic focus, but we have an indicative list of the types of things in each of these categories that we're most interested in. This slide is an overview of the types of projects that we fund, which cover everything from early-stage project ideation to full impact evaluations of programs. As Ricardo mentioned, the bread and butter of IPA has traditionally been the implementation of impact evaluations. So for the most part, what we fund is projects that have something to do with impact evaluations, whether that's preparatory work, the implementation of impact evaluations themselves, or the application of the results from those evaluations.

Nessa Kenny:
The first grant type is our exploratory grants. These support really early-stage projects - for example, travel, relationship development with partners, or some initial descriptive or observational work. Notably, these are earmarked for early-career researchers, meaning researchers who are non-tenured, and for researchers based in lower- and middle-income countries, even if they are tenured. This award type is capped at 10,000 USD. Our pilot grants support pre-impact-evaluation preparatory work that helps researchers develop subsequent proposals for full impact evaluations in future rounds. These projects generally have a clear research question and implementing partner, but require some upfront investment in survey development, piloting, or designing measurement or sampling strategies. These awards are capped at 75,000 USD.

Nessa Kenny:
For the full impact evaluations that we fund, these projects have a clear research question, committed implementing partners, well-defined and really strong technical designs, and statistical power calculations as well. Most of the studies that we fund through this category are randomized evaluations, but we're open to funding high-quality quasi-experiments too, and we're happy to chat a little bit more about that in the Q&A. These grants, in theory, can also fund the continuation or completion of impact evaluations that started without funding from us, as well as long-term follow-ups of previous evaluations related to our topical areas of work. The expectation is that these grants, in particular, will result in at least one published academic paper. The maximum that we have for this category is $500,000, though I'll note that to date we've actually not granted that much to a single study, so all other things being equal, lower budgets have a slightly higher likelihood of being funded. We then have our infrastructure and public goods projects. This is a bit of a more amorphous category, but the idea is that these projects are public goods in some way for the research community or policy stakeholders; they often produce data or tools that can support several different types of research projects or analyses, and they often ultimately support the design or implementation of future impact evaluations. Feel free to take a look at our website for examples, but the types of things that we've funded under this category have included representative panel data sets of refugee populations. We've funded new surveying tools, like a WhatsApp surveying interface that allows for contact with hard-to-reach populations or populations on the move. We've also funded measurement toolkits - for example, a toolkit on how best to ask about violence in the context of impact evaluations. One thing to note here is that in this category, we're really interested in proposals that address the barriers to research for hard-to-reach or under-researched contexts, populations, or topics. And then finally we have our evidence use and policy outreach support proposals. These can support the development of relationships with policymakers and the take-up and dissemination of evidence; they could be used to embed a research staff member within an organization to help encourage evidence use, or to host matchmaking events or conferences. We're open to creative ideas as well, and these are capped at $25,000 each.

Nessa Kenny:
So these are the criteria that we use to evaluate proposals, and they're all equally weighted. We have a review board of academic researchers, with policy input as well, that uses these criteria to make funding determinations. The first criterion is academic contribution: we ask whether or not the study is going to make a significant contribution to advancing knowledge in the field. Strong proposals in particular probe the underlying academic or behavioral theory for why a program might work and the mechanisms that might drive that impact. For policy relevance, we ask first whether the study addresses priority questions outlined in the research agenda, but also whether the intervention can have generalizable implications beyond that particular case, and we also ask questions about scale in this category. For technical design, we ask whether the research design answers the questions that are outlined in the proposal and whether there are any threats that could compromise the validity of the results; for full proposals in particular, and I mentioned this before, we require robust statistical power calculations. For viability, we're really asking: is this project going to happen? Is the relationship with the implementing partner strong? Are there any major logistical or political obstacles that might threaten the completion of the study? And does the research team, or somebody on the research team, have a track record of implementing similar projects in the past? For pilots in particular in this category, we ask whether the piloting will inform a full randomized evaluation. And then finally, value for money: we think about the cost of the study and compare it to the expected contributions it might make to science and policy. One thing to note is that it's a benefit when studies leverage funding from other sources, but it's certainly not a requirement - IPA, or the Peace and Recovery Initiative in particular, is often the first funder of a study. So it's certainly not expected that you're coming to us with funding already, though it's welcome when you do.
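
For readers unfamiliar with the statistical power calculations referenced above, here is a minimal sketch of what one might look like for a simple two-arm, individually randomized trial, using Python's statsmodels library. The effect size, significance level, and sample size below are placeholder assumptions for illustration, not values recommended by the initiative; a real proposal would tailor these to the intervention, the design (for example, clustering), and the outcome data available.

```python
# Minimal power-calculation sketch for a two-arm, individually randomized trial.
# All numbers are illustrative assumptions, not guidance from the P&R Initiative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed minimum detectable effect of 0.2 standard deviations (Cohen's d),
# a 5% two-sided significance level, and 400 participants per arm.
achieved_power = analysis.solve_power(
    effect_size=0.2,          # standardized effect size the study should detect
    nobs1=400,                # participants in the treatment arm
    ratio=1.0,                # equally sized control arm
    alpha=0.05,               # two-sided significance level
    alternative="two-sided",
)
print(f"Power to detect a 0.2 SD effect with 400 per arm: {achieved_power:.2f}")
```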

Nessa Kenny:
There are a couple of additional considerations that we have when reviewing proposals. The first is geography. As Ricardo mentioned, most of our funding comes from FCDO, and so a portion of it - not all of it - must be spent in FCDO priority countries. There's a list linked in our application instructions, and it's a pretty broad list, quite honestly. This doesn't bar us from funding projects elsewhere, but it's a consideration, and we're happy to chat more about it. I will say, I don't think we've ever found this to be a limiting factor in funding proposals in the past, but it is something that we consider. We also consider ethics: whether there are risks of harm to research participants or to our enumerators, what the risk mitigation strategies are, and how the possible benefits of the research compare to the possible harms. And then finally, we think a lot about team diversity. Maybe to underscore here, we really welcome proposals from diverse research teams, and we encourage prospective applicants to work across disciplines and with researchers from the countries where the project will take place, as well as potentially those with lived experience of conflict.

Nessa Kenny:
We often get questions about the things that we can't fund, so here's a slide that identifies our major funding restrictions. The first and potentially biggest one is program or intervention implementation costs: anything that is not a research implementation cost but goes towards implementing the intervention itself. This includes any costs that the implementing partner would have otherwise incurred to implement the program or intervention being tested, and it also includes costs associated with refining or developing new approaches to programs that might be adopted by the implementing partner if proven effective. Something to highlight there is that we don't fund program R&D; we can only fund research on the programs. Happy to chat more about that distinction as well. We are not able to fund salary costs for researchers from institutions in high-income countries, and we're not able to fund purely qualitative research that isn't embedded within an impact evaluation, though we certainly welcome and value impact evaluations that have qualitative components. Similarly, we cannot fund projects that are purely lab in the field or survey experiments, except when they're used as a measurement strategy within a broader impact evaluation. And finally, we can't fund research using historical data sets, again unless it's in the context of a broader impact evaluation. If you have specific questions about any of these, please email us or drop them in the Q&A now. Finally on this slide, we wanted to highlight the overhead caps for our awards: for nonprofit institutions or institutions in lower- and middle-income countries, indirect costs are capped at 15%, and they're capped at 10% for universities in high-income countries.

Nessa Kenny:
We also wanted to go through a few tips and tricks that we have seen be effective after reviewing hundreds of proposals over the last seven years. There are a bunch of common proposal pitfalls that are often the cause of a proposal not being funded in a particular round, and we wanted to flag those for you. The first is low academic contribution. Sometimes we get proposals for a project that we think would be useful for an implementer who wants to know whether or not their program works, but the design is not really aimed at creating generalizable knowledge that can be used in other contexts or by other implementers. Overall, something to stress is that we're set up to fund research that is generalizable, has external validity, and is useful to the field as a whole, not just research that answers questions for that one implementer. Certainly we are interested in helping implementers answer their questions, but there are ways of designing evaluations that also answer broader theoretical or behavioral questions alongside those. The second pitfall, or the other extreme, I guess, is research that has a really interesting theoretical idea but is evaluating an intervention that is not policy-relevant - maybe it will never be scaled up or has no application in the real world. Even though it's testing an interesting academic theory, we don't actually see the application to practice. We also periodically see proposals with poor identification strategies. We are able to fund impact evaluations that aren't randomized, and especially in the spaces that we work in, there may be some cases where, for logistical or ethical reasons, we might choose another type of impact evaluation methodology. That said, the thing we wanted to underscore here is that it still needs to be as rigorous as possible. So if you apply for something that isn't a randomized evaluation, we need an explanation for why randomization isn't feasible and why the method that you're choosing is the most rigorous alternative in that given situation. I also wanted to delve a little into the lab in the field experiments comment that we made on the previous slide. You might know lab experiments from undergrad psychology, where you get $10 to participate, go into a university basement, and play some behavioral games or react to some vignettes. Lab in the field experiments take those lab experiments, which generally happen in highly controlled settings, and implement them in a real-life setting - using the same vignettes and behavioral games, but with a relevant population in a real-world context. But they're not evaluating an intervention per se in the same way. So we can certainly fund lab in the field components of full impact evaluations, specifically when these games or vignettes are a measurement strategy for evaluating the impact of a program, intervention, or policy. But we can't fund purely lab in the field experiments as part of this fund.

Nessa Kenny:
A couple more pitfalls really quickly. The first is project stage. Full projects in particular need to have clear research questions and power calculations, need to have done some piloting, and need to have intervention funding. Basically, we need to know that your experiment is going to happen if it's funded. So we wanted to underscore that if you're not quite ready to submit a full impact evaluation proposal, please submit a pilot proposal, or even an exploratory proposal, which may allow us to fund a project that needs some upfront investment before we can get to the stage where we can fund a full impact evaluation. We also tend to shy away from funding evaluations of purely bundled programs. What that means is that programs often have lots of different components - it might be a program that has a cash transfer component, a skills training component, and a microcredit group component. If we evaluated all of this together and found a positive impact, it would be harder to tell what was actually driving that impact. Was it the cash? Was it the microcredit? Was it the skills training? We are particularly interested in evaluations that disaggregate the impacts of different program components and try to identify the mechanisms or components that are driving the impact we might end up seeing. I'm happy to chat more about this if you have questions. We should also underscore here that, in terms of researchers, at least one researcher per project must be affiliated with a university and either hold a PhD or be pursuing a PhD in a discipline that is relevant in some way to our work. You're welcome to have other team members on the PI team who don't fit this description, but at least one person must fit it. And then finally, exploratory funding is earmarked for early-career researchers, meaning non-tenured researchers, and also for researchers based in lower- and middle-income countries. This is in recognition that these groups often have less access to early-stage project funding to get projects off the ground, and we really want these 10K grants to make a difference in terms of project viability.

Nessa Kenny:
I'm almost done. We have a couple of tips that we would encourage you to consider when developing projects. The first is to ask us about projects. We as staff do not make final funding decisions, and as a result, we're able to engage in project development conversations, and we're really happy to do so. We have a good sense of what we fund and what our review board is generally interested in. There are a couple of topics that we have a little less interest in that we should flag. The first is evaluations of psychological programs - psychosocial, I actually think, is maybe the wrong word to have put there. For mental health programs in particular, some of the rigorous research that does exist in this space is already on this topic, and we're more interested in funding research in areas with less existing evidence. The second is research that replicates previous studies with very little innovation. Replication is really important, of course, for moving knowledge forward, and this doesn't mean that we don't fund replications. But most of the replications that we do fund test something else in addition - they try to look at mechanisms or add another treatment arm - so we want to be looking at something a little bit new. Finally, in terms of topics of less interest, we are less interested in funding evaluations of programs that don't address our research questions, even when they're in crisis-affected contexts. So we wouldn't fund, for instance, a project studying a behavioral nudge or an educational intervention in a refugee camp just because it's a convenient sample to work with. The most compelling research to us answers fundamental questions about building peace, reducing violence, or promoting recovery. Along those lines, we would encourage you to consider innovations: we're really interested in funding studies that have violence or conflict as a dependent variable, or ones that include real-world, behavioral measures of peace, social cohesion, conflict, or violence. We would also encourage you to reach out to IPA country offices. In places where we have a presence, we have the ability to support project development, and sometimes, in addition to those countries, we can provide regional support. The contact information for these offices can be found in our application instructions posted on our website. And then finally, if you're eligible, we really encourage you to take advantage of our exploratory funding - this is another plug that we're looking for early-stage, exciting ideas that might have promise.

Nessa Kenny:
Finally from me, if you are an implementing organization that has found your way into this webinar, welcome. We wanted to provide a brief overview of how we might be able to support partnerships with researchers to evaluate your programs. We often get approached by implementing organizations who have a program that they think works really well and that is related to our priority research questions. The implementation has been figured out, the implementers really think the program works, and now they want to think about how to rigorously evaluate its impact. We're really happy to help organizations find a researcher who would like to work together to probe a program's impact, and that's what we mean by matchmaking. To help us better understand the programs you want to evaluate, we have a matchmaking form that will be linked in the slides we provide later and is also linked on our website. It asks a few questions about the programs you want to evaluate and takes you through a few considerations that we'd like you to think about as you reach out to us, and those are listed here. Maybe a couple to flag in particular. The first is the timeline piece: it's really hard to build solid impact evaluations if the program has already started, so we hope that you can tell us about opportunities early on, so that we can find a match with researchers and have sufficient time to discuss design, measurement, ethical considerations, and all of the questions that you might have. So please get in touch with us as early in the process as possible. The last thing I should flag is also the final point here: the size of the project. It's really important that the number of program participants or communities or households - whatever the implementation unit is - is sufficiently large to evaluate the program. The minimum number necessary to answer your research question and probe impact really depends, but researchers can help determine this. In terms of what you can expect from us, we will reach out to folks with your program description to see if we can find a good match for you, and we will let you know within eight weeks whether we found a match, generally on a much shorter timeline than that. We might not be able to find a match for every project, but for projects that we think are a really good fit, this is kind of the bread and butter of what we do, and we're confident we can find someone to help you evaluate your programs. That's it from me. And with that, I'm going to turn it over to Ayda.
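
As a rough illustration of how a researcher might gauge whether a program is large enough to evaluate, here is a minimal sketch in Python using statsmodels that solves for the approximate sample size needed per study arm. The assumed effect size and design parameters are placeholders; a real calculation would account for clustering, attrition, take-up, and baseline covariates, which is exactly the kind of conversation to have with a matched researcher.

```python
# Sketch: approximate sample size per arm for a simple two-arm comparison.
# Illustrative assumptions only; cluster-randomized designs (e.g., randomizing
# communities rather than individuals) generally require larger samples.
from statsmodels.stats.power import TTestIndPower

required_n = TTestIndPower().solve_power(
    effect_size=0.25,   # assumed minimum detectable effect (Cohen's d)
    power=0.8,          # conventional 80% power
    alpha=0.05,         # two-sided significance level
    ratio=1.0,          # equal allocation to treatment and comparison groups
)
print(f"Approximate participants needed per arm: {required_n:.0f}")
```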

Ayda Lulseged:
Thank you. In this concluding section, we'll share helpful resources to refer to as you develop your proposals and a few key dates to take note of. Looking first at the timeline for our Round Eight call for proposals: all proposals must be submitted by November 22nd of this year, and we will announce decisions in February 2025. Looking ahead, the next round will open in the first quarter of 2025, with semi-annual rounds to follow. Whether you're planning to submit during this round or in future rounds, consider taking advantage of our matchmaking support, as Nessa described, particularly if you're looking to connect with researchers to evaluate your programs. And for anybody watching this in the future, please check our website for the latest information on future round dates. Turning to next steps: again, please reach out for matchmaking support early in the process of developing your project, and when you're ready, you can submit your application for funding via our online application portal. Please be advised that late submissions won't be accepted. If you have any remaining questions after reviewing the call for proposals materials, please feel free to contact us. We're here to help you identify and connect with relevant researchers or IPA country offices, think through your project ideas, and determine if your project is a good fit for our fund or other initiatives that we work with. We can also assist with policy or dissemination work, so please reach out whenever a question emerges. On the next slide, you'll see that we have listed an array of resources that are hyperlinked for easy retrieval. Just a reminder, the slide deck and webinar recording will be shared with you afterwards. That's all from us. Thank you so much for your attention. Please contact us at peace@poverty-action.org with any questions and we'll be happy to address them.
