Facilitating Impact Evaluations: Recommendations from the LAC Reads Evaluations
This brief is one in a series presenting lessons learned from four impact evaluations of promising reading interventions funded by USAID under the Latin America and the Caribbean Reads (LAC Reads) project. The evaluations were conducted by Mathematica.
Successfully implementing a randomized impact evaluation can be difficult, time consuming, and expensive, leading many researchers to avoid this rigorous approach or to implement it only partially. As a result, funders and implementers may lack strong evidence on which to base their program decisions. The evaluations conducted under the LAC Reads project reveal several factors that can facilitate a successful randomized impact evaluation:

- engaging stakeholders and partners on the ground,
- being nimble and creative,
- evaluating validated programs,
- using national data to improve cost-efficiency and relevance, and
- using multisite approaches and multiple comparisons to generate generalizable results.