Research Methods

[Image: A person holding a mobile phone. © 2018 Clique Images on Unsplash]

IPA manages over 300 active studies in over 20 countries, led by more than 575 researchers. The Global Poverty Research Lab (GPRL) is an academic hub that uses empirical evidence to address the challenges of overcoming poverty and improving well-being in the developing world.

With such a volume of research, IPA and GPRL have faced recurring questions regarding the best ways to ensure quality and consistency in field studies.

The Research Methods Initiative was created to help answer these questions by designing and implementing methodological studies across many projects, building and collaborating with a network of interested researchers, and developing technical products and training. IPA and GPRL’s research informs policy, but quantitative research can be biased without careful measurement and estimation. Improving these methods and measurement tools will improve data quality, promote innovation in methods and measurement, and provide higher confidence in research results.

The Research Methods Initiative examines sources of measurement error across research studies, including questionnaire design, sampling, fieldwork implementation, and validation of key indicators. The initiative is organized around three themes: (1) Research Design, (2) Questionnaire Design, and (3) Fieldwork Implementation and Data Quality.

Theme 1: Research Design

At the heart of empirical analysis is the problem of establishing causal relationships in the data we collect, from which we can provide effective policy recommendations. Randomized controlled trials have provided an important new tool for addressing identification in research design, but there is more to learn. Examples of work within this theme include innovations in RCT designs, the implications of different sampling strategies and statistical power for research designs, and how to design for replication and scale.

Theme 2: Questionnaire Design and Measurement


This theme focuses on how survey instruments are designed and the consequences of the alternative choices a researcher may make in how data are collected. Errors in measurement may bias key variables, with important consequences not only for the representation of population-level characteristics but also for the empirical relationships estimated. Examples of work within this theme include estimating relative biases in alternative questionnaire designs arising from recall periods, question framing, proxy rules, and alternative units of analysis. We will also explore alternative strategies and technologies to capture hard-to-measure concepts.

Theme 3: Fieldwork Implementation and Data Quality


Theme 3 focuses on implementation decisions that limit non-random measurement error in collected data, from recruitment and training to monitoring and data validation. Studies under this theme have the potential to yield insights into enumerator labor markets and interdisciplinary perspectives on personal interviewing, as well as practical tools to be integrated as best practices within IPA and other data collection organizations. Examples of work under this theme explore enumerator effects, including how recruitment, training, and motivation of enumerators improve data quality. The Research Methods Initiative also facilitates the standardization of IPA’s quality assurance methods and studies how both established and new data quality tools affect the reliability of data. These tools include observational visits, audio audits, backchecks, high-frequency checks, nightly monitoring reports, and machine learning to detect data falsification.
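To make the idea of a high-frequency check concrete, here is a minimal sketch (not IPA's actual tooling; all field names and thresholds are made up for illustration) of the kind of automated flagging such tools perform on incoming survey submissions:

```python
def high_frequency_checks(submissions, min_minutes=15):
    """Return a list of (submission_id, reason) flags for human review.

    Two illustrative checks: duplicate respondent IDs (possible data
    falsification) and implausibly short interview durations.
    """
    flags, seen_ids = [], set()
    for s in submissions:
        if s["respondent_id"] in seen_ids:
            flags.append((s["submission_id"], "duplicate respondent_id"))
        seen_ids.add(s["respondent_id"])
        if s["duration_minutes"] < min_minutes:
            flags.append((s["submission_id"], "interview too short"))
    return flags

data = [
    {"submission_id": 1, "respondent_id": "A01", "duration_minutes": 42},
    {"submission_id": 2, "respondent_id": "A01", "duration_minutes": 40},
    {"submission_id": 3, "respondent_id": "B07", "duration_minutes": 6},
]
print(high_frequency_checks(data))
# -> [(2, 'duplicate respondent_id'), (3, 'interview too short')]
```

In practice such checks run nightly over the day's submissions, and flagged interviews are routed to backchecks or audio audits rather than discarded automatically.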

The Research Methods Initiative leverages IPA’s sector programs, such as Financial Inclusion and Social Protection, to study and improve the performance of sector-specific modules. More granular research is also possible, such as how changing the phrasing on one particular survey question affects responses.

By drawing on the wealth of local knowledge and surveying experience IPA has built through its country offices, the Research Methods Initiative is leading the way in the development of accurate, standardized measurement tools for anti-poverty studies.


Phone Survey Methods


Starting in 2020, IPA, along with many other research organizations, responded to the COVID-19 pandemic by shifting nearly all of its field data collection to remote methods, primarily Computer-Assisted Telephone Interviews (CATI). This required adapting our methods for training and quality monitoring to virtual phone banking, with interviewers working from home, sometimes with limited connectivity. It also required re-tooling the way we design and implement questionnaires.

With generous support from Northwestern University’s Global Poverty Research Lab, IPA has gathered evidence and best practices and developed extensive tools and resources to support effective collection of high-quality data. This page shares what we have developed, what we have learned, and what we are currently learning as we continuously monitor quality and run experiments to improve methods for IPA’s network of investigators and for the rest of the research community.

Projects Supported by the 2019 Research Methods Competitive Fund


    He Said, She Said: Testing Respondent Effects in Household Income Reporting in Uganda


    Researchers: Nathan Fiala (University of Connecticut and RWI) and Lise Masselus (RWI)
    Theme: Fieldwork Implementation and Data Quality

    Obtaining reliable data on household and individual income and understanding the quality of this data is important to research and policy. Past research has found evidence of misreporting because, for example, husbands or wives individually may not have full information due to income hiding or hoarding, or one spouse may not wish to share information freely while the other is present. Part of the problem for researchers is deciding which household member to interview or whether to incur the cost of interviewing more than one person per household, and if so, whether to interview them together or separately. This study is randomizing the respondent in a household for a sample of about 3,000 households in 200 villages in rural Uganda. The results will contribute new evidence to the question of respondent effects, allowing researchers to more accurately assess the cost-bias tradeoffs associated with different data collection strategies.

    Total Recall: Duration vs. Frequency of Surveys for Measuring Job Search, Employment, and Earnings in Pakistan


    Researchers: Erica Field (Duke University), Rob Garlick (Duke University), and Kate Vyborny (Duke University)
    Theme: Fieldwork Implementation and Data Quality

    Do shorter, more frequent surveys improve the quality of measurement of labor market outcomes? What about the amount of detail in the questionnaire itself? Using a sample of 10,000 respondents in a panel survey in Pakistan, researchers are investigating how variations in recall periods, survey frequency, and questionnaire detail influence measures of job search, employment, and earnings. The study is comparing the different approaches to gathering survey data to administrative data as a benchmark. The results will contribute to evidence about how the frequency, duration, and detail of surveys influence the measurement of outcomes and estimated treatment effects from job search and employment interventions.

    Using Machine Learning to Improve Measurement of Property Values in the Democratic Republic of Congo


    Researcher: Augustin Bergeron (Harvard University)
    Theme: Questionnaire Design and Measurement

    Cities in developing countries often lack the fiscal capacity to finance public goods. Property taxation has been identified as a promising source of revenue for cities in the developing world: it generates local tax revenues, is relatively efficient, and can be progressive and capture growth in real estate values. However, many governments do not collect property taxes effectively because property valuation rolls are absent or incomplete. This study is using machine learning to construct property valuation rolls in contexts where information about property values is limited. Using a training sample of 2,000 properties, researchers are implementing machine learning and computer vision models that use either property measurements and neighborhood quality or visual features of properties to predict the values of 48,000 properties in Kananga, in the Democratic Republic of Congo (DRC).
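    The prediction step can be illustrated with a deliberately minimal stand-in for the models the project uses (the features, values, and nearest-neighbour method here are invented for illustration; the study itself uses richer machine learning and computer vision models):

```python
import math

# Toy "training sample" of surveyed properties:
# (floor_area_m2, neighbourhood_quality_1to5, value_usd) -- made-up numbers.
surveyed = [
    (60, 2, 4_000),
    (80, 3, 7_500),
    (120, 4, 15_000),
    (150, 5, 24_000),
]

def predict_value(floor_area, quality):
    """Predict an unsurveyed property's value as the value of the most
    similar surveyed property (1-nearest-neighbour matching)."""
    def distance(prop):
        area, q, _ = prop
        # Rescale floor area so neither feature dominates the distance.
        return math.hypot((floor_area - area) / 50, quality - q)
    _, _, value = min(surveyed, key=distance)
    return value

print(predict_value(100, 4))  # nearest surveyed property is (120, 4) -> 15000
```

    A fitted model like this can then be applied to every property on an incomplete valuation roll, which is the sense in which 2,000 surveyed properties can support predicted values for 48,000.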

    Detecting Bias in Observational Measures Using Past Randomized Evaluations


    Researchers: David Bernard (Paris School of Economics), Gharad Bryan (London School of Economics), Sylvain Chabé-Ferret (Toulouse School of Economics), Jonathan de Quidt (Institute for International Economic Studies, Stockholm University), Greg Fischer (Y Analytics and London School of Economics), Jasmin Fliegner (J-PAL, MIT), Roland Rathelot (University of Warwick)
    Theme: Research Design

    Proponents of randomized controlled trials (RCTs) point out that in order to identify causal relationships with observational (non-RCT) methods, one must rely on untestable assumptions, usually about the unobservable process which determines study participants' treatment status. Researchers have tried to find ways to leverage RCTs to estimate the bias associated with alternative methods that would have been used had randomization not been possible. In this project, researchers are developing a standardized, scalable method for estimating bias in observational methods that can generate large amounts of empirical evidence to address this question; gathering published data from RCTs run in the past 20 years, focusing specifically on trials with imperfect compliance with the treatment; and implementing new methods to understand the size and direction of expected bias in observational studies, and how bias depends on measurable characteristics of programs and settings.
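    The logic of using an RCT with imperfect compliance to measure observational bias can be illustrated with a toy simulation (not the project's method; all numbers are invented): when take-up of a randomly offered program is selective, a naive comparison of participants to non-participants is biased, while the Wald/IV estimator built from the random offer recovers the true effect.

```python
import random

random.seed(0)

def simulate(n=200_000, true_effect=2.0):
    """One-sided noncompliance: controls never get the program; treated
    units take it up only if unobserved motivation u is high, so
    participation is correlated with the outcome even absent the program."""
    z, d, y = [], [], []
    for _ in range(n):
        zi = random.random() < 0.5                     # random offer
        u = random.random()                            # unobserved motivation
        di = zi and (u > 0.3)                          # selective take-up
        yi = true_effect * di + 3 * u + random.gauss(0, 0.5)
        z.append(zi); d.append(int(di)); y.append(yi)
    return z, d, y

def mean(xs):
    return sum(xs) / len(xs)

def naive_and_iv(z, d, y):
    # Naive observational estimate: participants vs. non-participants.
    naive = mean([yi for yi, di in zip(y, d) if di]) - \
            mean([yi for yi, di in zip(y, d) if not di])
    # Wald/IV estimate: intention-to-treat effect divided by take-up rate.
    itt = mean([yi for yi, zi in zip(y, z) if zi]) - \
          mean([yi for yi, zi in zip(y, z) if not zi])
    takeup = mean([di for di, zi in zip(d, z) if zi]) - \
             mean([di for di, zi in zip(d, z) if not zi])
    return naive, itt / takeup

z, d, y = simulate()
naive, iv = naive_and_iv(z, d, y)
print(f"naive: {naive:.2f}  IV: {iv:.2f}")  # naive is biased upward; IV is close to 2.0
```

    The gap between the naive and IV estimates is exactly the kind of bias the project aims to measure systematically across many published trials.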

    Measuring Poverty through Peer Ratings in Côte d’Ivoire


    Researchers: Pascaline Dupas (Stanford University), Marcel Fafchamps (Stanford University), Deivy Houeix (MIT)
    Theme: Questionnaire Design and Measurement

    Accurately identifying people living in poverty—a process known as targeting—is an important and challenging task for researchers studying anti-poverty interventions. This project studies the quality of information obtained from local community members about each other. Using consumption expenditures as a benchmark, researchers are comparing rankings obtained from community information to those obtained from a proxy-means test (PMT). The project involves 450 households in the periphery of Abidjan, Côte d’Ivoire.