The EGRA and EGMA assessment models, developed by RTI for USAID after 2006, were designed to provide simple, low-cost measures of literacy and numeracy. Both assessments are oral and examine the basic skills underpinning the development of reading and numeracy, e.g. (1) letter recognition, phonemic awareness, reading simple words and listening comprehension; and (2) number recognition, comparisons and ordering sets of objects. EGRA has been administered in at least 11 countries and 19 languages under the EdData II programme, and has been used in more than 30 countries and 60 languages by other organisations. EGMA has also been widely applied.
EGRA and EGMA were intended as formative measurement tools, to be used within schools and by practitioners to measure progress within a class and thereby improve it. In practice, however, they have mainly been administered to random anonymised samples (at baseline, midline and endline). This use is often donor-mandated: donor agencies demand post-hoc evidence of each intervention's impact and use EGRA/EGMA results as proxies for it. When well conducted, these tests give a fairly accurate picture of the sample's performance. But they have drawbacks, including high cost; dependence on external training, moderation, calibration, data collection and cleansing; and unsustainable investment in equipment (pre-programmed tablets etc.) and travel. This expensive new industry has diverted money away from a more urgent need: providing teachers in resource-poor settings with a means of assessing all their own children's progress, for formative purposes.
However, a substantial pilot in Government schools in northern Ghana (2014-17) has shown that randomised tests calibrated to EGRA (and shortly to EGMA) can be run on a mobile phone, output to a mini-printer and used with every child, termly, for under $0.01 per assessment; assessing a class of 40 children three times a year thus costs roughly $1. The Ghana work indicates that such teacher-led assessment, if supported by good local and national follow-up, tends to sharply increase the proportion of early-grade children achieving each EGRA and EGMA threshold. It also provides a live database of results with very low data collection and input costs. This work is now expanding across Ghana and will shortly travel to four other African countries and to Asia.
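The cost arithmetic above can be checked with a quick back-of-the-envelope calculation. This is a sketch using figures stated in the text: the per-assessment price is given only as "under $0.01", so the class-level figure computed here is an upper bound, and the $250 comparison figure comes from the sustainability note later in this abstract.

```python
# Back-of-the-envelope cost sketch for teacher-led, phone-based assessment.
# All figures are taken from the abstract; $0.01 is the stated UPPER BOUND
# per assessment, so the annual class cost below is also an upper bound.
COST_PER_ASSESSMENT = 0.01   # USD, "under $0.01" in the text
CLASS_SIZE = 40              # children per class
ROUNDS_PER_YEAR = 3          # termly assessment

annual_class_cost = COST_PER_ASSESSMENT * CLASS_SIZE * ROUNDS_PER_YEAR
print(f"Annual cost per class (upper bound): ${annual_class_cost:.2f}")
# -> $1.20 at exactly $0.01; the abstract's "roughly $1" implies a true
#    per-assessment cost somewhat below the $0.01 ceiling.

# For contrast: the abstract cites over $250 per child assessed in the
# USAID-funded 2014 EGRA/EGMA survey in Ghana.
SURVEY_COST_PER_CHILD = 250
ratio = SURVEY_COST_PER_CHILD / COST_PER_ASSESSMENT
print(f"Cost ratio, survey vs teacher-led: {ratio:,.0f}x")
# -> 25,000x even against the teacher-led upper bound.
```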
This paper aims to open a dialogue with participants on several related topics:
- What should teacher-led tests of early literacy and numeracy skills ideally look like? How far should they differ from EGRA/EGMA in substance as well as in delivery?
- How far must such tests be available in mother tongues as well as in English and other world languages, even where the latter are the main language of instruction from (say) Year 4?
- What types of teacher training would need to accompany a shift towards teacher-led assessment rather than just "teaching from the front"?
- What techniques of data collection, data analysis, and data feedback are appropriate?
- Can unique learner and teacher numbers be used to facilitate longer-term tracking? If so, can this follow a national model, even before the assessment programme includes all learners?
- What does sustainability mean in different contexts? (Note: the unit cost per child assessed in the USAID-funded 2014 EGRA/EGMA in Ghana was over $250!)
- How easily can mobile phones and mini-printers, which require no mains electricity or internet within the classroom, be used by teachers in resource-poor schools?