Measuring Skills for Youth Workforce Development: FAQ

Answers to Frequently Asked Questions

The answers to frequently asked questions (FAQs) below provide practical information for using and reporting on USAID's soft skills, reading, math, digital literacy, and technical/vocational/professional skills indicators. FAQs are updated regularly, so please check back often and submit additional questions to edindicators@usaid.gov.

Soft Skills Indicator

Why does this indicator refer to “soft skills” and not “social and emotional skills,” “21st century skills,” “transferable skills,” “life skills,” or one of the other terms that is also commonly used? Should we report on this soft skills indicator even if our program uses one of these other terms?

“Soft skills” is the terminology adopted in much U.S. government-funded youth workforce development, higher education, and cross-sectoral youth programming, and it is recognized internationally.  The related term “social and emotional skills” is often used within USAID-funded education activities.  Both terms are referred to in USAID’s 2018 Education Policy.  During stakeholder consultations leading up to the development of these PIRS, most participants felt that retaining the term “soft skills” would be helpful for advancing collective discussion, building upon recent USAID-funded publications in this area.  A number of alternate terms do exist, however, and some are used more frequently than “soft skills” in certain contexts or languages; by and large, these terms have highly related meanings and refer to overlapping lists of specific skills.  As echoed in the Social and Emotional Learning and Soft Skills USAID Education Policy Brief, implementers should examine the definition of soft skills provided in this guidance, and the example list of research-supported soft skills, to determine whether these skills are relevant to their activities and should be reported on this indicator—even if their curricula or program documents officially use different terminology.

Is there a certain number of soft skills implementing partners have to measure and report on for this indicator, as part of the change in the “composite score”?

USAID does not wish to indicate a specific minimum, maximum, or ideal number of skills to be measured toward this indicator.  The intent is that implementers will assess the core skills for which they are training.  Implementers are encouraged to decide thoughtfully how many skills to assess, taking into consideration training goals, intended uses of data, feasibility, and the burden that lengthy assessments place on participants.
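For implementers who find a worked example helpful, the sketch below shows one way a composite score could be calculated as the unweighted mean of sub-scores and compared between baseline and endline.  The skill names, the 1-5 scale, and the equal weighting are assumptions for illustration only and are not USAID requirements; implementers should follow the scoring guidance of their chosen assessment tool.

```python
# Purely illustrative sketch of a composite soft skills score.
# The skill names, 1-5 scale, and equal weighting are assumptions,
# not USAID requirements.

def composite_score(sub_scores: dict) -> float:
    """Return the unweighted mean of the assessed skills' sub-scores."""
    return sum(sub_scores.values()) / len(sub_scores)

baseline = {"positive_self_concept": 2.8, "self_control": 3.1, "communication": 2.5}
endline = {"positive_self_concept": 3.6, "self_control": 3.4, "communication": 3.2}

change = composite_score(endline) - composite_score(baseline)
print(f"Composite change: {change:.2f}")  # prints 0.60 on the 1-5 scale used here
```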

Why will implementing partners, and not USAID, determine the amount of increase between baseline and endline that is “meaningful”?

This indicator allows for the possibility of assessing a diverse range of skills within the soft skills domain, using a variety of possible assessment tools, in very different socio-cultural contexts.  Given this range, it is impossible for USAID to set a general definition of what constitutes meaningful improvement from baseline to endline.  Additionally, the notion of “statistically significant” change only applies to group-level analysis and not to individual improvement pathways.  Therefore, it is most appropriate for implementers to determine their own desired magnitude of change, using any guidance associated with the specific chosen assessment tool, or through parallel quantitative and qualitative studies that link a certain magnitude of change in soft skills with other desired activity outcomes.  In most cases, “meaningful” change should be defined in a way that is more significant than just improvement on a single item or a small percentage point increase, as such small changes may fall within measurement error rather than showing a true increase in ability. 
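As a purely illustrative sketch of how an implementer-defined threshold might be applied, the example below counts a participant’s gain as “meaningful” only when it meets or exceeds a threshold the implementer has set in advance.  The threshold value and the sample scores are hypothetical; in practice the threshold should come from the chosen tool’s guidance or from linked quantitative and qualitative studies, and should exceed the tool’s measurement error.

```python
# Purely illustrative sketch: flagging baseline-to-endline gains as
# "meaningful" only when they meet an implementer-defined threshold.
# MEANINGFUL_CHANGE is a hypothetical value, not USAID guidance.

MEANINGFUL_CHANGE = 0.5  # hypothetical threshold on a 1-5 composite scale

def improved_meaningfully(baseline: float, endline: float,
                          threshold: float = MEANINGFUL_CHANGE) -> bool:
    """True if the composite score gain meets or exceeds the threshold."""
    return (endline - baseline) >= threshold

# Hypothetical baseline and endline composite scores for three participants.
participants = [(2.8, 3.4), (3.0, 3.1), (2.5, 3.3)]
improved = sum(improved_meaningfully(b, e) for b, e in participants)
print(f"{improved} of {len(participants)} participants showed a meaningful gain")
```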

The PIRS state that “activities may include retrospective items… to begin generating evidence on whether this method yields more informative analysis of change; however retrospective data will not be counted toward this indicator as currently defined.” What are retrospective items? And why will retrospective data not be counted?

Retrospective items ask participants to recall their skill level at an earlier time (such as before an intervention) and compare it to their current skill level.  For example, an item may ask for a direct comparison, such as, “Compared to your behavior before you started this course, how sociable are you now?” and responses may include:  less sociable, about the same, or more sociable.  Alternatively, there may be separate items such as, “Before you took this course, how sociable were you?” and, “How sociable are you now?”  Prompts may be more nuanced than these examples; these are included simply for the sake of illustration.

There is some interest in researching the use of retrospective items for measuring soft skills development to avoid what is referred to as “response shift bias”:  some implementers have found that relatively uninformed participants rate their soft skills very highly on a pre-test and then, once they have learned more about the meaning and depth of a soft skill, rate themselves lower on the post-test.  Retrospective items could help avoid this, since participants can draw on their post-program in-depth knowledge to rate their own skill level at two different points in time.

However, there is also significant psychometric research showing that retrospective items often carry an automatic positive bias (for example, influencing the test-taker to exaggerate positive change over time for social desirability reasons) and may additionally suffer from recall difficulties.  For this reason, USAID cannot at this time accept composite score indicator reporting based on retrospective items; however, implementers should feel free to experiment with retrospective items as a supplement to their principal assessment methods in order to contribute to the evidence base on the validity and reliability of this approach.

Reading

Why does this indicator refer to “reading skills” rather than “literacy”?

Literacy is a broader term typically used to refer to both reading and writing skills.  The term “reading skills” was chosen for this indicator to clarify that we are not measuring writing skills; additionally, this parallels the “reading skills” standard foreign assistance indicator used in basic education.

What is meant by the term “self-assessment”? Why should self-assessments not be used?

Self-assessments ask test-takers to report on their own reading behaviors or abilities (such as whether they can read their own names, whether they can read instructions on medicines, whether they read the newspaper, how often they read, etc.).  These kinds of items are particularly common in large-scale household surveys that collect information on literacy rates.  However, self-assessments give poor insight into the test-taker’s actual reading abilities, they are subject to social response bias, and they are unlikely to be sensitive enough to detect meaningful changes due to program interventions.  For these reasons, USAID does not allow the use of self-assessment items for reporting on the reading skills indicator, although implementers may choose to gather self-report information as a supplement to the direct assessment of skills and for internal learning about reading behaviors.

What language should be used for assessment?

As the PIRS states, the language(s) of assessment should be the same as the language of instruction for the reading program.  This does not necessarily need to be the official language of instruction for the country’s formal education system.  If an implementer has chosen an alternate language for its own reading program (based on labor market demand, for example), that language is acceptable for assessing and reporting on this indicator.

Can we translate or adapt existing reading assessments into other languages?

Adapting reading assessment tools to other languages is not a straightforward matter, given the many differences in orthography (writing systems) and language structure, and the fact that words that are short, easy, or common in one language may not be in another.  Where possible, it may be helpful to use an existing reading assessment that is already available in the target language.  Lower-level reading assessments have been created in many languages, in conjunction with the PAL Network’s citizen-led assessments and also as part of USAID-funded early grade reading assessment (EGRA) efforts.  International assessments that cover a wider range of reading abilities, such as PIAAC, have also been translated into many languages; example items for these assessments are usually available online.  While EGRA is concerned with early grade reading skills, the advice provided in the EGRA toolkit on language adaptation can also be helpful for adapting youth reading assessments.

Math

Why does this indicator refer to “math skills” rather than “numeracy,” “arithmetic,” or another term?

The term “numeracy” does represent the scope of skills that USAID is interested in for this indicator—in both the formal and informal domains.  The term “math skills” was chosen to parallel the “reading skills” standard foreign assistance indicator used in workforce development and basic education and to align with supplemental indicators on math at the primary level.

What is meant by self-assessment? Why should self-assessments not be used?

Self-assessments ask test-takers to report on their own math behaviors or abilities.  Self-report items are particularly common in large-scale household surveys that collect information on numeracy rates, and even in the numeracy component of the World Bank STEP assessment.  However, self-assessments give poor insight into the test-taker’s actual math abilities, they are subject to social response bias, and they are unlikely to be sensitive enough to detect meaningful changes due to program interventions.  For these reasons, USAID does not allow the use of self-assessment items for reporting on the math skills indicator, although activities may choose to gather some self-report information as a supplement to the direct assessment of skills and for internal learning about math use behaviors.

What methods could help implementing partners assess real-world math skills? Can partners use word problems?

There is a long tradition of research into the use of math in everyday life, although there appear to be few codified assessment methods for examining these practices.  An oral approach has been used in some research, in which mathematical questions are posed and the participant is prompted to both arrive at a solution using their own methods and explain those methods out loud.  Word problems, in which a situation requiring math skills is delivered in ordinary language rather than using formal mathematical symbols, are also often used in academic settings and can help to assess real-world math skills.  However, if the assessment requires the word problem to be read individually by the assessee (rather than explained orally, for example), caution must be used to ensure that the word problem is written at the appropriate reading level and does not require unreasonable cognitive load to understand and process the scenario.

Can implementing partners measure financial literacy through the math skills indicator?

One real-world application of math skills is financial management, often associated with the larger skill domain of “financial literacy.”  Financial literacy is one of the skills that USAID considered during the background review for the development of these new indicators, but ultimately decided not to recommend as a standard foreign assistance indicator due to the less-developed state of knowledge around its impact and appropriate assessment methods.  Implementers that teach financial literacy may choose to include some financial literacy-related questions on an assessment intended to report on the math skills indicator.  However, financial literacy may include many aspects that are not solely math-related; implementers that are specifically interested in this skill set are also encouraged to assess and report on financial literacy directly through a custom indicator specific to their activity.

Digital Literacy

Why is this called “digital literacy” instead of “ICT skills” or another alternative?

Digital literacy is the term that has gained the greatest currency internationally, although alternate terms are also in use.  The definition provided for digital literacy here, including the many skills that are related to it or subsumed within it, clarifies the meaning of this term for those who are more accustomed to an alternative designation.  Implementers should examine the definition of digital literacy skills provided here to determine whether these skills are relevant to their activities and should be reported on through this indicator—even if their curricula or program documents officially use different terminology.

Why does the provided definition not refer explicitly to data literacy, digital citizenship, digital creativity, digital entrepreneurship, and others? Are these not included?

This is USAID’s currently accepted definition for a rapidly evolving field; it is intended to be illustrative, rather than exhaustive, of the variety of related skills that contribute to aspects of digital literacy.  Activities should teach and measure the specific digital literacy skills that are relevant for the target population and activity goals, while staying within the primary intent conveyed by the definition—a focus on the ability to effectively use digital tools as relevant for participants and their context.

Why will implementers, and not USAID, determine the amount of increase between baseline and endline that is “meaningful”?

This indicator allows for the possibility of assessing a diverse range of skills within the digital literacy domain, at a variety of levels, using a variety of possible assessment tools.  Given this range, it is impossible for USAID to set a general definition of what constitutes meaningful improvement from baseline to endline.  Additionally, the notion of “statistically significant” change only applies to group-level analysis and not to individual improvement pathways.  Therefore, it is most appropriate for implementers to determine their own desired magnitude of change, using any guidance associated with the specific chosen assessment tool, or through parallel quantitative and qualitative studies that link a certain magnitude of change in digital literacy skills with other desired activity outcomes.  In most cases, “meaningful” change should be defined in a way that is more significant than just improvement on a single item or a small percentage point increase, as such small changes may fall within measurement error rather than showing a true increase in ability. 

Technical, Vocational, or Professional Skills

Why does this indicator refer to “technical, vocational, and professional” skills, and not just technical and vocational?

The chosen phrasing signals the inclusion of a wider variety of functional and occupational skill sets, including recognized post-secondary credentials.  It recognizes that USAID-supported workforce development activities span a wide variety of educational levels and focus areas that respond to varying contexts and market conditions, as well as customized demand-driven trainings.

Why should new assessments only be developed when they are part of the activity design and are part of a demand-driven training? If there is no relevant assessment, and our activity does not meet these criteria, are we not allowed to assess/count the acquisition of these skills?

The primary intent of assessing these skills is to provide context relevance and value for the participant beyond the confines of the specific activity’s implementation targets.  Creating an assessment that is useful and carries meaning for the broader context is not a simple endeavor; it may require significant background research with employers or customers, and even some marketing to ensure that its value is publicly recognized.  For this reason, this indicator is primarily intended for activities that feature technical, vocational, or professional training as part of a core market-driven theory of change on how to increase or improve employment.  Within that category, activities that offer new demand-driven trainings responding directly to contemporary market conditions are the most likely to need to develop new assessments, because there may be no existing assessment tools for the new skill sets they are teaching.  This supplemental indicator is not intended, however, to introduce additional measurement burdens on activities that do not already focus strongly on teaching and assessing market-driven technical, vocational, or professional skills.  Those activities can still count their achievements toward the standard foreign assistance indicator “Percent of individuals who complete USG-assisted workforce development activities,” and can employ another custom indicator of specific relevance to their activity goals.

Why should assessments of these skill sets not be sample-based?

The primary reason why assessments of these skill sets should not be sample-based, but should rather extend to all participants, is one of equity.  If the assessment is context-relevant, as defined and required by the indicator, it may have some market value for those who receive a passing score.  As a result, all participants who are eligible to complete the assessment (in that they have met the program’s own requirements, such as a minimum attendance threshold) should be allowed to do so.

Can individuals be counted toward this indicator if they have received a participation or completion certificate from a program that does not require assessment?

No, they cannot be counted.  While it is true that some training programs issue a certificate or certification based solely on participation or completion, this is not an adequate measurement of learning outcomes or skill attainment for the purposes of reporting on this indicator.  This indicator measures the passing of a context-relevant assessment, and not participation, completion, or mere possession of a certificate or certification document.  For example, an individual who completed a community health worker training and was issued a completion certificate without taking a final assessment cannot be counted toward this indicator. 

To be counted toward this indicator, does the assessment have to be in written form?

No, the assessment does not have to be in written form.  Most high-quality technical, vocational, and professional assessments have both written and practical or demonstration-based components.  For the purposes of this indicator, however, the type of assessment is left open-ended other than the requirements given under the definition for “context relevant”; the assessment may be solely written, solely practical or demonstration-based, or a combination of the two.