
Learning data: How do we measure progress towards SDG 4 – Part 3

By Silvia Montoya, Director, UIS

This is part of a series of blogs on some of the core challenges and solutions in collecting quality data, which will be discussed in depth next week at the first-ever Conference on Education Data and Statistics, convened by the UNESCO Institute for Statistics (UIS).

When the SDG 4 goal on education was set in 2015, it moved the agenda from a focus on getting children into school to ensuring that they are also learning. It called on the international community to assess whether students meet at least a minimum proficiency level in reading and mathematics in grades 2/3, at the end of primary school and at the end of secondary school. Yet, halfway to the 2030 deadline for SDG 4, still only 34 countries are reporting at grades 2/3, almost 100 (or around half) at the end of primary school, and 85 at the end of secondary school. Why?

As with all data used at the global level, and as discussed in the two previous blogs just posted on administrative data and household surveys, the outputs need to be comparable across countries and representative of each country. They need to comply with minimum standards of quality, so that it is clear whether students are being compared like-for-like. Ideally, the measurement should serve not just the purpose of reporting; it should also develop national capacity to carry out learning assessments and to use the results to improve curricula, teacher education and assessments. Last but not least, countries need to report the results.

The world has come a long way in a short amount of time. Prior to 2015, when SDG 4 was set, there was no global consensus on how to define minimum proficiency. Today, not only is there an indicator, but there are also standards for what should be measured at each of the three key stages of students’ learning progression and criteria for what good measurement looks like. It is now also possible to use a variety of tools instead of only one. Already in 2018, the major cross-national studies at the global level (e.g., PIRLS, TIMSS) or regional level (e.g., PILNA, SEA-PLM, PASEC, LLECE, SACMEQ) agreed on how the proficiency levels they were measuring mapped onto the global minimum proficiency level.

Challenges

A number of challenges stand in the way of realizing these objectives and increasing country coverage.

Comparability and quality: No two countries are the same. Not every country wants, or has a policy in place, to measure learning at each of the three stages. Not every country aims to teach the same content in its schools as other countries do. There are so many test formats and sampling decisions to make that a minimum set of procedural standards is needed to ensure that results are robust and comparable. The UIS has produced guidance on what background information should be provided to help assess the comparability of results.

Timeliness: The turnaround time between learning being assessed and results being reported stretches so long in some cases that the purpose of monitoring is not served, and the results arrive too late to influence policy.

Costs: The cost of assessment is low relative to the cost of not measuring learning. Assessment systems, after all, have positive impacts that go well beyond simply producing statistics. Still, assessments are relatively costly, especially for the poorest countries. This makes some countries reluctant to invest without external support. But such support tends to be short-term and fragmented, and it often fails to take countries’ best interests into account. Rarely does such support build institutions and develop capacity.

There is a high chance that low country coverage of indicator 4.1.1a, on minimum proficiency in early grades, will lead to it being dropped from the list of global SDG indicators in 2025. Dropping an indicator from the global list for pragmatic reasons (i.e., because countries are not reporting) does not mean that the indicator loses its relevance: the out-of-school rate is not a global SDG indicator, yet it continues to receive global attention. But it does shift attention to the lack of a coherent long-term approach to helping countries develop their capacity to assess and monitor learning in the early grades of primary school and beyond.

When the standards were set in 2018, it was agreed that some other assessments were not yet suitable but could one day be considered for reporting if they worked towards these standards. There is still potential for the assessments that currently do not meet the standards (because they were not designed to be comparable, because they measure proficiency at a level below the minimum, or because governments are not prepared to report their results) to meet them in the future, even though this might come too late to increase coverage in time, given how long it takes to administer assessments.

But this is also a critical moment to reflect on what approach to assessment will empower countries. Measurement should not be done for the sake of measurement; it is supposed to be a tool to help countries develop. The UIS has proposed that funding for learning assessment needs to move to countries: depending on their income and capacity, they should be eligible for a funding entitlement, and it should then be countries that decide which of the measurement options meeting the standards best serve their capacity objectives and are most cost-efficient. These are issues that will be discussed during the Conference this week in Paris.

Various tools have been used to promote comparability and ownership

Apart from promoting consensus by getting existing assessments to map their descriptions of proficiency to the global minimum proficiency level, the UIS has also promoted comparability in at least three other ways.

The first is statistical. Aptly named Rosetta Stone, after the famous archaeological discovery that enabled translation between different written languages, this approach harmonized data sets from two regional assessments (in Latin America and francophone Africa) and one international assessment to test how robust the consensus-based approach was.
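To give a concrete sense of what this kind of statistical harmonization involves, the sketch below applies one common linking technique, equipercentile linking, to simulated data: scores on a regional scale are mapped to the international scale by matching percentile ranks, after which the share of students at or above a minimum proficiency threshold can be estimated. This is purely illustrative; the actual Rosetta Stone work relies on its own psychometric design and linking samples, and every score, scale and threshold below is invented for the example.

```python
import numpy as np

def equipercentile_link(scores_regional, scores_international, grid=None):
    """Build a concordance from a regional scale to an international scale
    by matching percentile ranks, using a sample assessed on both."""
    if grid is None:
        grid = np.sort(np.unique(scores_regional))
    # Percentile rank of each grid point on the regional scale
    pct = np.array([np.mean(scores_regional <= x) * 100 for x in grid])
    # Score on the international scale with the same percentile rank
    linked = np.percentile(scores_international, pct)
    return grid, linked

def share_at_or_above_mpl(scores_regional, grid, linked, mpl_international):
    """Estimate the share of students at or above a minimum proficiency
    level (MPL) defined on the international scale."""
    mapped = np.interp(scores_regional, grid, linked)
    return float(np.mean(mapped >= mpl_international))

# Illustrative run on simulated data (all numbers are invented)
rng = np.random.default_rng(0)
regional = rng.normal(500, 80, 2000)                          # linking sample, regional scale
international = 0.9 * regional + rng.normal(30, 40, 2000)     # same students, international scale
grid, linked = equipercentile_link(regional, international)
print(f"Share at or above MPL: {share_at_or_above_mpl(regional, grid, linked, 400):.1%}")
```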

The second set of tools, policy linking and pairwise comparison of items, is not statistical. These tools rely on a panel of national experts to judge how the national assessment aligns with the global minimum proficiency level. Still in their piloting phase, they can empower countries to use their national assessments for comparable global reporting.

The third tool, Assessments for Minimum Proficiency Level, or AMPL, is a more comprehensive approach. It is a set of 20 questions that enables countries to report against the global indicator. The questions can be added to any assessment that governments are already carrying out. It is a low-cost approach that has been able to produce results within months. It has already been used in nine African countries (Burkina Faso, Burundi, Côte d’Ivoire, Gambia, Kenya, Lesotho, Senegal, Sierra Leone and Zambia), has been administered in Urdu in Pakistan, and has been piloted in India in English and in Hindi. The huge potential of this tool will be presented at the Conference this week.
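As a rough illustration of how a short module like this turns into the reported statistic, the sketch below scores a hypothetical 20-item module and computes the weighted share of students at or above a cut score. The cut score, sampling weights and responses are all invented for the example; the real AMPL derives its proficiency threshold through its own psychometric standard-setting rather than a simple count of correct answers.

```python
import numpy as np

def ampl_indicator(item_responses, weights=None, cut_score=12):
    """Hypothetical sketch: share of students reaching the minimum
    proficiency level from a short module of dichotomously scored
    items (1 = correct, 0 = incorrect).

    item_responses : array of shape (n_students, n_items)
    weights        : optional sampling weights, one per student
    cut_score      : illustrative threshold of correct items treated
                     as the minimum proficiency level
    """
    responses = np.asarray(item_responses)
    raw_scores = responses.sum(axis=1)                 # correct answers per student
    meets_mpl = (raw_scores >= cut_score).astype(float)
    if weights is None:
        weights = np.ones(len(meets_mpl))
    # Weighted share of students at or above the minimum proficiency level
    return float(np.average(meets_mpl, weights=weights))

# Illustrative run with simulated responses to a 20-item module
rng = np.random.default_rng(1)
responses = rng.binomial(1, 0.6, size=(1500, 20))
print(f"Share at or above MPL: {ampl_indicator(responses):.1%}")
```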

Solutions

Solutions, then, follow from the challenges above:

  • Assessment harmonization and reporting handbook: With the progress made in recent years, it is time to compile and regularly update a handbook with all the information on eligibility criteria for reporting. Among the elements to be included would be the following: alignment with standards and frameworks; representativeness; comparability of administration; transparency of processes; and feasibility of participation (costs, schedules, capacity building, and overall burden for a country).
  • Assessment accreditation system: Accordingly, it is also the right time to introduce a clear and transparent accreditation system. Assessment providers, including government organizations, will be able to apply to have their assessments vetted for fitness for purpose in reporting on SDG indicator 4.1.1. Based on the handbook, a checklist will contain the standards and eligibility criteria with which applicants need to comply.
