The Mother of All Evaluations: Learning from 69 Education-Sector Evaluations
Evaluating programs can help us learn from them and design better ones. Recently, the Office of Education took this spirit further and evaluated its evaluations.
The meta-evaluation assessed 92 USAID-funded evaluations conducted between 2013—the year Mission programs were required to align with the Education Strategy—and 2016, using an evaluation quality assessment tool. Sixty-nine of those evaluations were included in a meta-analysis of findings. The resulting report, published in March 2018, reveals insights about the evaluations themselves as well as progress toward the Office of Education’s three main goals.
Like any good assessment, the study points out the good, the disappointing, and the inconclusive. Below are some of the findings.
Primary Grade Reading
The authors found that early-grade reading activities did boost student scores—but not usually by very much. When the baseline score was very low or already high, it was difficult to see improvements. Boys generally outperformed girls, but girls often made progress in closing the gap, especially in the early grades.
All early-grade-reading activities combined classroom instruction, teacher training, community engagement, and policy and systems work.
The learning impact of teacher in-service workshops remains unclear in most of the evaluations, while coaching and in-school mentoring showed promise—but logistical and staffing issues made implementing these models difficult. Community engagement, which focused on parent-teacher associations and school management committees, produced minimal quantitative effects on student learning. On the other hand, some of the efforts to encourage children to read outside of school did increase learning outcomes.
Only a few interventions focused on learners with disabilities, innovative financing, or technology in classrooms—and there was virtually no evidence of impact for these areas. The study also pointed out a lack of emphasis on capacity-building, policy, data analysis, and system strengthening.
While most interventions measured student learning outcomes, they failed to report them in a consistent way. In addition, reports typically omitted data on intervention delivery (duration, intensity, etc.) and fidelity of implementation, and lacked metrics on cost, government capacity, or sustainability.
The meta-analysis suggests a number of ways to improve reading programs, including:
- Prioritize prompt distribution of textbooks and other learning materials, before teacher training begins;
- Conduct further studies of teacher training in terms of optimal duration, frequency, and follow-up;
- Sharpen design of community reading initiatives, using evidence;
- Support ministries of education to build robust M&E systems;
- Pay more attention to capacity-building, sustainability, and scaling up in evaluations; and
- Collect learning outcome data, including baselines, from all activities, and present them in a consistent manner across reports.
Workforce Development and Higher Education
Workforce development programs do provide the mix of skills that can help youth succeed in rapidly changing labor markets—“soft” skills, basic literacy, and work-readiness as well as technical skills.
Unlike earlier workforce programs, all those evaluated work with the private sector to design programs that respond to local market conditions, and also offer employment services such as job placement, internships, networking, or support for entrepreneurship. An example is the AWDP—Afghanistan Workforce Development Program—which required partners to consult local employers and devise technical and vocational training that meets market demand. However, actual job placement rates were not consistently tracked.
About half of youth workforce programs—and over 75 percent of those with CVE objectives—support entrepreneurship. For example, the Economic Prosperity Initiative in Georgia connected farmers and small and medium enterprises with financial advisors to help them secure bank loans. The Iraq Foras Project conducted startup weekends and business competitions for young entrepreneurs. It also tried to help winners secure microloans, albeit with limited success.
The evaluations show impressive gains for women in specific workforce projects—despite little strategic focus on gender.
In higher education, the study found that extension services can effectively expand the reach of universities—and are a promising growth area for USAID higher education programming. For instance, in Georgia, a partnership between a university and the Ministry of Education created the country’s first master’s degree in education; the partnership trained 2,300 school principals and 280 Education Resource Center officers in financial management and administration.
Unfortunately, very few programs in workforce development and none in higher education conducted impact evaluations. Considering their scale—some programs involve tens of thousands of participants with project budgets over $20 million—this gap is surprising. Impact evaluations are needed to understand program effects, and figure out which interventions work best for whom and in which contexts.
The study emphasizes that the links between youth employment and violence prevention and countering violent extremism are not currently well understood. Few project designs made explicit linkages between workforce components and violence prevention outcomes. Virtually all workforce programs for violence prevention target marginalized youth: unemployed young people who have dropped out of school, or in-school youth at risk of engagement with gangs or violent networks.
Higher-education interventions focus less on conflict prevention and stabilization, although some took place in countries experiencing violent extremism (Pakistan, Lebanon, and Kenya). The projects’ theories of change and designs did not focus on stabilization or violence prevention.
Selected recommendations for future programming include:
- Incorporate issues of equity, scale and sustainability explicitly in program design;
- Invest in fit-for-purpose monitoring, evaluation, research, and learning;
- Standardize measurement of results;
- Develop sustainability metrics;
- Test cost-effectiveness of models;
- Strategically consider women and other minorities in design;
- Increase programs that focus on university extension services; and
- Conduct impact evaluations.
Education in Crisis and Conflict
The meta-analysis largely confirms existing evidence on education in crisis and conflict, such as that found in What Works to Promote Children's Education Access, Quality of Learning and Wellbeing in Crisis (Burde et al., 2015) and Education in emergencies and protracted crises: toward a strengthened response (ODI, 2015).
The study found the most effective and sustainable violence-prevention/countering violent extremism programs to be cross-sectoral, sensitive to the particular context, and targeted to a range of groups, including at-risk youth and other vulnerable populations (e.g., survivors of sexual and gender-based violence) as well as government partners. Least explored in the reviewed evaluations were public-private partnerships.
Several elements proved effective in maintaining a safer learning environment: community sensitization to the importance of education as a pathway to peacebuilding; conflict-sensitive curricula on topics such as gender-based violence; interethnic integration in schools; school rehabilitation; and community volunteers to compensate for teacher shortages. Social and emotional learning (SEL) has high anecdotal value among beneficiaries, the study finds, although little quantitative evidence exists so far.
Selected recommendations for education programming in crisis and conflict-affected contexts include:
- Strengthen institutional capacity at local and national levels to improve access to basic education;
- Tailor violence-prevention and CVE activities to the unique and diverse needs of target populations;
- Prioritize ongoing, lower-cost assessments to inform program design and management and to augment more traditional evaluation approaches;
- Conduct further research on SEL and its effective integration into education interventions;
- Address equity through cross-sectoral interventions;
- Emphasize the role of gender in the design, implementation, and evaluation of education interventions;
- Increase focus on disability, broadly defined;
- Leverage innovative financing mechanisms for sustainability; and
- Evaluate possible contributions of ICT to help improve access to education in crisis and conflict settings.
This is just a snapshot of the Synthesis of Findings and Lessons Learned from USAID-Funded Evaluations, Education Sector, 2013–2016. The full report provides a wealth of information on what works, what may not, and what we don’t know yet—as well as how to design evaluations that help us answer those questions.