
On the way forward for SDG indicator 4.1.1a: supporting countries’ development needs

By Manos Antoninis, Director of the GEM Report

A blog last week by the UNESCO Institute for Statistics outlined the technical factors that help explain why so few countries have been reporting on SDG indicator 4.1.1a – the percentage of students who achieve minimum proficiency in reading and mathematics by grades 2/3. It was a response to a barrage of interventions by the Global Coalition for Foundational Learning, the Center for Global Development, the People’s Action for Learning (PAL) Network, and the Global Education Evidence Advisory Panel, which questioned the decision of the Inter-agency and Expert Group on SDG Indicators to mark the indicator for potential deletion. These interventions also pointed to three alternative assessment families that, according to them, could be used instead for reporting: the Early Grade Reading Assessment (EGRA), the foundational learning module of the Multiple Indicator Cluster Surveys (MICS), and the citizen-led assessments of the PAL Network.

These interventions did not mention the Member State-led governance process, supported by the Conference on Education Data and Statistics, which sets standards on when an assessment is suitable for reporting. Last week’s blog explained the criteria these assessments would need to meet to be eligible as tools that countries could rely upon, should they so wish, to report on indicator 4.1.1a. This blog argues that, beyond those technical reasons, development reasons should not be ignored either. The end does not justify the means: the purpose of monitoring the SDGs is not just to produce data for the sake of global reporting but to do so in a way that serves countries’ education development needs.

The three alternative assessments being suggested do not meet this condition, so a discussion of their merits (or lack thereof) for reporting can be misleading. Their results may have been used by researchers (a small number, almost exclusively from the global North) and for advocacy, but they have not helped countries develop the capacity of their education (assessment) systems. Even if the ongoing review process were to confirm that they meet the eligibility criteria, none of them is, or arguably ever could be, part of what would be considered good practice for a national education assessment system.

Assessments have different strengths and weaknesses

Without discounting the potential role of weak country demand for assessment, the main reasons for low coverage in poorer countries are cost and capacity. The cost of a learning assessment is far from negligible: in 2021, for instance, the government of Liberia spent USD 21 million on primary education and USD 14 million on secondary education. In such a context, allocating USD 300,000 for a learning assessment – around 1.4% of the entire primary education budget – is exorbitant for a national budget. External support is therefore a precondition.

Capacity is also very limited. The skills needed to carry out a learning assessment and analyse its results are scarce in low- and lower-middle-income countries and in high demand for other, related uses. Such capacity therefore needs to be built and, if the international community values it, external support should also factor in the costs of building it. But such an objective requires a 30-year planning horizon, not the usual 3-year horizon of projects funded by development assistance.

In our Spotlight series on foundational learning in Africa, a case study on Sierra Leone showed that, between 2014 and 2022, as much as USD 15 million may have been disbursed on assessment by the FCDO, GPE, UNICEF, USAID and the World Bank. Yet, to this day, the country lacks an assessment unit and does not report on indicator 4.1.1 at any level of education.

A common problem in development assistance is that efforts focus on justifying to taxpayers whether dollars were well spent (i.e. whether children’s learning improved in the schools targeted by an intervention) rather than to governments (i.e. whether the aid dollars contributed to building a national assessment institution). This approach is not sustainable. The 2022 Spotlight continental report also noted that early grade reading projects, which often included EGRA-type components, tended to cost more per student than total national public expenditure per student. The old adage ‘Give people a fish, and you feed them for a day; teach them to fish, and you feed them for a lifetime’ could not be more apt.

Since 2006, EGRA has raised awareness of low learning levels in poor countries. It has also contributed to a better understanding of the components of the minimum proficiency level. But its objective was not to support globally comparable measures of learning. EGRA studies have largely been limited to project evaluation rather than institution building. They have been administered by international service providers who are accountable only to their funders. Documentation and transparency have been limited and uncoordinated: despite repeated requests over the years, the GEM Report has not been able to access a single EGRA dataset, and the same is true of the UIS, the custodian agency. With very few exceptions over almost 20 years, EGRA has not become part of national assessment systems.

Since 2017, the foundational learning module has been a helpful addition to the MICS, a multi-purpose household survey. In households with children aged 7–14, one child is assessed in basic literacy and numeracy. The module therefore captures children in and out of school, across different grades and household contexts. The GEM Report has used MICS data for a page on learning trajectories on its SCOPE website, in partnership with the RISE programme. But given the tight constraints of time and location under which it is administered, it cannot substitute for an assessment with an education focus: there are too few questions to capture the minimum proficiency level, and consistent test conditions cannot be guaranteed.

Since 2005, citizen-led assessments – the first of which, ASER, was implemented in India before the model spread to other low- and middle-income countries – have focused on easily communicable learning outcomes to make policy makers aware that learning levels are lower than they think. The civil society organizations united by this model formed the PAL Network and have worked diligently to ensure their assessments are comparable with each other and globally. But even if these assessments were rolled out at national scale, governments are unlikely to want to report results produced by non-government entities. The aim of the PAL Network is to generate consistent, good-quality data over time to prompt government action; it cannot realistically be to report on the global indicator.

What is the most promising way forward?

Ultimately, the aspiration of the international community should be to develop country capacity, through strengthening the quality of national assessments and/or participation in cross-national assessments that:

  • generate comparable data;
  • have a frequency that allows effective policy-making;
  • are suitable for the country context; and
  • crucially, help education ministries monitor curriculum implementation and improve teacher professional development.

Unfortunately, the cost of assessments that fulfil these objectives remains very high. Expertise is limited, nationally and even internationally. But perhaps the biggest problem is how international funding is allocated. As the example of Sierra Leone showed, the amount of donor resources allocated to assessment is several times higher than what is needed, but it is misdirected: instead of supporting assessment systems, it favours project evaluation; it is stop-gap rather than strategic. Both factors create assessment markets captured by a few service providers, while costs are higher than necessary because funding flows are uncertain.

Three solutions deserve far more attention.

First, cross-national assessment programmes need long-term, stable and predictable financing that takes into account the need for sustained national professional development.

Second, in 2021, the UNESCO Institute for Statistics developed the Assessments for Minimum Proficiency Levels (AMPL), a tool that can be used to measure indicators 4.1.1a and 4.1.1b – either as a standalone assessment or as a module within a national assessment. It does so at a fraction of the cost, while helping countries reflect on the weaknesses of their national assessments and make sustainable improvements.

Third, the assessment market is neither efficient nor equitable. Countries are not well informed about the respective strengths and weaknesses of different learning assessments. Not all countries pay the same price or receive equal support. Often, countries are not even involved in decisions negotiated between assessment providers and donors. Even donors do not know how much money they disburse on assessment and lack a clear policy on the matter. The solution is to reshape the market, shifting from a donor-driven to a country-driven approach: each country should be eligible for funds that cover the full cost in the poorest countries and part of the cost in wealthier ones. Setting the level right would increase competition among providers, help lower the cost of procuring services and help countries choose the services most appropriate to their needs.

If ever there was a collective action problem in development that could be solved by improved coordination, this is it.

 


1 comment

  1. Hello, this is a great post and I could not agree more. However, some countries have designed national learning assessment systems that include the early grades, combining EGRA for decoding with paper-and-pencil tests for reading comprehension – Cameroon, for example. It is a long-term effort, but the main goal of a learning assessment system is to provide information for the country to improve pupils’ skills, not to report on the SDGs. I think this relates to the way countries have been pushed, somewhat brutally, to report on benchmarks and the SDGs. The communication around this clearly has to be rethought: the data exist, and if they are not reported, the reasons why should be investigated. If someone acts as the “gendarme” of learning, it is not going to work.
