Navigating Monitoring and Evaluation of Distance Learning Programs

Distance learning can be designed for all ages and levels of learners, using both short- and long-term strategies. It is commonly used to reach learners who need flexible learning opportunities, and with the onset of the COVID-19 pandemic it has seen increased interest and uptake globally. This growth underscores the importance of evidence-based strategies for measuring whether distance learning efforts are serving the intended communities and achieving their objectives. Outlined below are frequently asked questions, tips, and additional resources for monitoring and evaluating distance learning efforts.
Measurement Domains
The literature review on Delivering Distance Learning in Emergencies and the Roadmap for Measuring Distance Learning (“the Roadmap”) group distance learning into four modalities: radio/audio, TV/video, mobile phone, and online learning. Paper-based materials can serve as a fifth modality. The measurement domains used in the Roadmap are:

- Reach, which measures who accesses technology, programming, and content, relative to intended users;
- Engagement, which measures whether content was used as intended and was relevant and captivating; and
- Outcomes, which measure whether there was a change in knowledge, skills, attitudes, or behaviors.
Determining reach and engagement is critical before outcomes can be assessed. Additionally, contextual and demographic data are essential for interpreting the distance learning context and the reach, engagement, and outcome measures. More information can be found in the measuring distance learning video and in responses to the questions below.
Addressing Critical Questions on How To Measure Distance Learning
How do you measure the quality of distance learning programming?
ANSWER: Quality of programming should be measured through a combination of reach, engagement, and outcome measures. It can be assessed formatively, during the design of programming, and summatively, when change in teaching and learning through distance learning is measured.
Quality can be assessed by examining the extent to which:
- targeted users are reached and engaged
- knowledge, skills, attitudes, or behavioral outcomes change
Formative measures of quality are especially important before dissemination of programming, as they allow teams to modify the program in real time if adjustments are needed. Wherever possible, teams should have quality standards and criteria established during the design and production phases.
TIPS: Quality can be captured quantitatively, through methods like self-reported surveys, or qualitatively through methods like focus group discussions or observations. For programming directed at young children, capturing caregivers’ perspectives is important.
ADDRESSING REACH
How do you measure access to distance learning programs?
ANSWER: There is no one-size-fits-all approach to measuring access to programming, but the Roadmap includes illustrative metrics and accompanying case studies by different modalities and contexts.
TIPS: For radio or television broadcasts, partner with nationally representative surveys to measure whether respondents have accessed programming and to obtain data on radio and television ownership, coverage, and listenership/viewership patterns. Project-based surveys (remote or in-person) are critical for initiatives that cannot collect nationally representative data. If using online programming or mobile phone apps, use backend analytics to capture how many users are accessing the content, alongside engagement data like frequency of use, duration of use, and activity completion rates, as sketched below.
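As a concrete illustration of the backend-analytics tip, the sketch below computes reach as unique users who opened content, compared against a roster of intended users. The events.csv and roster.csv files, their column names, and the content_opened event type are all hypothetical stand-ins for whatever a real platform exports:

```python
import csv

# Hypothetical schema: events.csv rows have user_id, event_type, timestamp;
# roster.csv lists the user_id of every intended learner.
def compute_reach(events_path: str, roster_path: str) -> None:
    accessed = set()
    with open(events_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["event_type"] == "content_opened":  # assumed event name
                accessed.add(row["user_id"])

    with open(roster_path, newline="") as f:
        intended = {row["user_id"] for row in csv.DictReader(f)}

    reached = accessed & intended     # intended users actually reached
    unintended = accessed - intended  # users outside the target group
    print(f"Reach among intended users: {len(reached)}/{len(intended)} "
          f"({100 * len(reached) / len(intended):.1f}%)")
    print(f"Users reached outside the target group: {len(unintended)}")

compute_reach("events.csv", "roster.csv")
```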
How do you measure access for marginalized individuals and groups?
ANSWER: Measuring access among marginalized learners requires sufficient demographic data, as well as innovative, collaborative approaches for communicating with users who do not have access to devices like mobile phones or computers.
TIPS: Study known approaches for communicating with marginalized communities inside and outside of the education sector. Use low-stakes measures so that communities are not further marginalized in the process, and carefully time data collection for when participants are not under pressure or duress. Where possible, work with trusted individuals from marginalized communities and combine remote and in-person data collection across multiple modalities to expand reach.
See USAID resources for further discussions and tips:
- Monitoring, Evaluation and Learning during the COVID-19 Pandemic
- Guidance for USAID Education Sector Implementing Partners: Monitoring, Evaluation, and Learning During the COVID-19 Pandemic (USAID)
- Guide for Adopting Remote Monitoring Approaches During COVID-19
ADDRESSING ENGAGEMENT
What are effective ways to capture actual use of and interest in programming with the available resources?
ANSWER: Triangulating engagement data from multiple sources is recommended; this may include analyzing utilization behaviors and feedback on programming. Triangulation helps verify findings and more effectively uncovers actual use and interest. Formative engagement data help ensure higher-quality, more inclusive programming, while summative engagement data are critical for analyzing outcomes.
TIPS: For non-online modalities, build feedback loops into programming to systematically collect data on use and interest, such as text message or interactive voice response prompts during radio or television broadcasts. Use self-reported surveys (quantitative) as well as observations, interviews, or focus group discussions (qualitative) to gather the views of learners, educators, and/or caregivers. Ensure there are ways to verify which users have fully accessed the programs and to separate intended from unintended users.
For online learning, build analytics into the design that measure frequency and duration of use, as well as content choice and completion. Implementation teams should ensure they have access to analytics data, which may require signing agreements with collaborating tech companies. Analytics should be triangulated with other methods; a sketch of aggregating such analytics follows.
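A minimal sketch of how per-user engagement metrics might be aggregated, assuming a hypothetical sessions.csv export with session start/end times and activity counts (real platform exports will differ, and these figures should still be triangulated with surveys or interviews):

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical schema: sessions.csv rows have user_id, session_start,
# session_end (ISO 8601), activities_completed, activities_assigned.
def engagement_summary(sessions_path: str) -> dict:
    per_user = defaultdict(lambda: {"sessions": 0, "minutes": 0.0, "done": 0, "assigned": 0})
    with open(sessions_path, newline="") as f:
        for row in csv.DictReader(f):
            u = per_user[row["user_id"]]
            u["sessions"] += 1  # frequency of use
            start = datetime.fromisoformat(row["session_start"])
            end = datetime.fromisoformat(row["session_end"])
            u["minutes"] += (end - start).total_seconds() / 60  # duration of use
            u["done"] += int(row["activities_completed"])
            u["assigned"] += int(row["activities_assigned"])
    return per_user

for user, u in engagement_summary("sessions.csv").items():
    rate = u["done"] / u["assigned"] if u["assigned"] else 0.0
    print(f"{user}: {u['sessions']} sessions, {u['minutes']:.0f} min, "
          f"{rate:.0%} activity completion")
```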
How can you measure satisfaction with content and programming? How can you ensure users are not just telling you what you want to hear?
ANSWER: Satisfaction with programming and content can be assessed through self-reported surveys, observations, interviews, or focus group discussions with learners, educators, and caregivers. To ensure participants are not simply telling data collection teams what they want to hear, measure satisfaction in a number of different ways and build in checks that counter social desirability bias.
TIPS: For television and radio programming aimed at young children, consider conducting structured observations of children listening to or watching the programming and measuring children's reactions during formative assessments. For some online learning programs, software can be used to record when a learner appears frustrated or disengaged, but this approach requires substantial ethics and privacy permissions during the development phase. Focus group discussions or interviews with learners, educators, or caregivers generate critical data on which aspects of the programming are most engaging.
To address social desirability bias in formative evaluations, make it clear to respondents that their viewpoints will be used solely to improve the quality of programming. Ask about the relatability of characters, the appeal of storylines, the use of music, or other elements. Include questions that confirm participants have actually listened to, watched, or accessed the programming, such as asking about notable events that took place. When using surveys, ask the same question in different ways in different places in the survey to measure the consistency of responses, as sketched below.
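One lightweight way to implement that consistency check is to ask about each construct twice with different wording and flag respondents whose paired answers diverge. The item IDs, the 1-5 scale, and the gap threshold below are illustrative assumptions; reverse-worded items would need reverse-scoring before comparison:

```python
# Hypothetical paired items: each satisfaction construct is asked twice with
# different wording (1-5 scale). Large within-pair gaps flag inconsistent
# responses for follow-up rather than automatic exclusion.
PAIRED_ITEMS = [("q3", "q12"), ("q5", "q14")]  # illustrative item IDs
MAX_GAP = 2  # flag pairs differing by more than 2 scale points

def flag_inconsistent(responses: list[dict]) -> list[str]:
    flagged = []
    for r in responses:
        gaps = [abs(int(r[a]) - int(r[b])) for a, b in PAIRED_ITEMS]
        if any(g > MAX_GAP for g in gaps):
            flagged.append(r["respondent_id"])
    return flagged

responses = [
    {"respondent_id": "r01", "q3": "5", "q12": "4", "q5": "4", "q14": "4"},
    {"respondent_id": "r02", "q3": "5", "q12": "1", "q5": "5", "q14": "5"},
]
print(flag_inconsistent(responses))  # ['r02']
```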
ADDRESSING OUTCOMES
How can you capture change in knowledge, skills, attitudes, and behaviors from distance learning? How do you measure social and emotional learning (SEL)?
ANSWER: Outcome measures will vary depending on whether the content is based on an established curriculum or supplements teaching and learning beyond a curriculum. Outcome measures (including for SEL) must capture the intended learning, whether curricular objectives or other knowledge, skills, attitudes, and behaviors. See the Roadmap for innovative case studies on how to measure formative and summative outcomes.
TIPS: The Roadmap encourages monitoring, evaluation, and learning (MEL) teams to use integrated in-person and remote data collection, multi-modal technology interfaces, and mixed-methods designs to capture changes in outcomes. For SEL, consider using formative ways to check in with learners and caregivers to measure well-being.
For summative purposes, consider a number of tools developed by INEE and CASEL:
- Best Practices on Effective SEL/Soft Skills Interventions in Distance Learning (includes CASEL)
- Inter-agency Network for Education in Emergencies Measurement Library
Can we analyze outcomes collected in person alongside outcomes collected remotely?
ANSWER: Yes, if the evaluation design does not intend to compare change across time points, or if it uses qualitative methods and analyses. When using methods that do not require establishing statistical validity and reliability (e.g., qualitative assessments like interviews, qualitative demographic surveys, or formative data), these methods can be adapted to remote formats as long as protocols and procedures are put in place to protect participants and their data. For example, data on caregiver participation in their child’s learning gathered through focus group discussions can be collected both in person and remotely.
No, if the outcome or impact evaluation is trying to generalize to a larger population or attribute change to an intervention. Teams need to be cautious in comparing data collected in person and remotely for pre- and post-test and other summative, quantitative designs. In such cases, teams need to adapt or re-design an in-person outcome or impact evaluation when collecting data in person is no longer feasible.
TIPS: When using pre- and post-test designs, establish new validity and reliability measures for the remote test. For example, if an early grade reading assessment pretest was administered in person, its validity and reliability evidence does not carry over to the remote test. Follow new testing (test and retest) procedures to establish validity and reliability for the remote test, as sketched below. This is often time-consuming and costly for teams, so switching the purpose of the evaluation from a summative outcome evaluation to a more formative assessment focused on teaching and learning may be the most appropriate option during crises and emergencies. As research on how to remotely implement summative assessments grows, this guidance may change.
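As a rough illustration of the test-retest step, assuming the same learners take the remote form twice a short interval apart, reliability can be summarized with a Pearson correlation between the two administrations (validity evidence, e.g., comparison against an established benchmark, requires a separate exercise; the scores below are invented):

```python
import statistics

# Test-retest check: the same learners take the remote assessment twice.
# A high correlation between administrations supports score reliability.
def pearson_r(x: list[float], y: list[float]) -> float:
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

test1 = [12, 18, 25, 9, 30, 22]   # hypothetical first administration scores
test2 = [14, 17, 27, 10, 28, 23]  # hypothetical retest scores, same learners
print(f"Test-retest reliability (Pearson r): {pearson_r(test1, test2):.2f}")
```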
More Guidance on Key Strategies for Measuring Distance Learning
Related Blog Posts

- Not as Easy as 1 2 3 or A B C: Calling for Renewed Attention over Gender Bias in Education
- A Holistic Approach to Girls’ Education Leads to Better Learning Outcomes