By Silvia Montoya, Director of the UNESCO Institute for Statistics (UIS) and Kevin Macdonald, consultant to the UIS
Without international learning assessment programmes and widespread participation in them, it would probably be impossible to measure how well students are learning at school, as captured by SDG indicator 4.1.1. Few countries undertake national assessments that (i) measure the set of skills captured in international learning assessments, (ii) employ a rigorous methodology to make their scores comparable both between countries and across time, and (iii) are not as high-stakes as national examinations. Even if more countries collected and published such national assessment data, there would be the additional task of making their results internationally comparable. International learning assessments exhibit these critical features and, while their definitions of minimum proficiency are not identical, the constructs measured are similar enough from a global monitoring perspective, especially now that consensus has been reached on a Global Proficiency Framework (for reading and mathematics) and a common definition of the minimum proficiency level.
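To make the quantity behind indicator 4.1.1 concrete, the share of students reaching minimum proficiency can be estimated from assessment microdata as a weighted proportion, where each sampled student's weight reflects their share of the student population. The sketch below is illustrative only: the function name, scores and threshold are hypothetical and do not come from any actual assessment programme.

```python
# Hypothetical sketch: estimating the share of students at or above a
# minimum proficiency threshold from assessment microdata.
# Scores, weights and the threshold are illustrative values.

def minimum_proficiency_share(records, threshold):
    """Weighted proportion of students scoring at or above `threshold`.

    `records` is an iterable of (score, sampling_weight) pairs; the
    sampling weight reflects how many students in the population each
    sampled student represents.
    """
    total_weight = 0.0
    proficient_weight = 0.0
    for score, weight in records:
        total_weight += weight
        if score >= threshold:
            proficient_weight += weight
    return proficient_weight / total_weight

# Toy example: four sampled students with unequal sampling weights.
students = [(512, 1.5), (387, 2.0), (440, 1.0), (295, 0.5)]
share = minimum_proficiency_share(students, threshold=400)
```

Real programmes add considerable machinery on top of this (plausible values for measurement error, replicate weights for variance estimation), but the headline figure is, at its core, a weighted share of this kind.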
International learning assessments also provide data on SDG 4 indicators other than learning outcome measures, as the Mapping of SDG indicators in learning assessments by the Global Alliance to Monitor Learning (GAML) shows. In the UIS dataset, learning assessments have added over 5,000 data points for indicators related to school infrastructure and resources, prevalence of bullying, whether students are learning in their home language, and whether teachers have received in-service training. In many cases, these assessments provide measures that administrative data sources – the typical source of UIS indicators – lack the capacity to collect, such as detailed information about school infrastructure and teacher training, data from non-state schools, and student reports of bullying at school or of the language spoken at home.
Another important contribution of international learning assessment data is in measuring inequality. While government data sources often (but not always) disaggregate data by sex, they rarely publish – or even have access to – data disaggregated by household wealth or by language spoken at home. International learning assessments provide insights into aspects of inequality that would generally not be possible by relying on administrative data alone.
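Because the background questionnaire links each student's score to characteristics such as home language, disaggregation amounts to computing the same weighted share within each subgroup and comparing the results. The sketch below assumes hypothetical group labels, scores and a threshold; none are drawn from an actual assessment.

```python
# Hypothetical sketch: disaggregating a weighted proficiency share by a
# background characteristic (here, whether the test language is spoken
# at home). Group labels, scores and the threshold are illustrative.
from collections import defaultdict

def proficiency_by_group(records, threshold):
    """Weighted share at/above `threshold`, computed per subgroup.

    `records` is an iterable of (group, score, weight) triples.
    """
    totals = defaultdict(float)
    proficient = defaultdict(float)
    for group, score, weight in records:
        totals[group] += weight
        if score >= threshold:
            proficient[group] += weight
    return {g: proficient[g] / totals[g] for g in totals}

data = [
    ("speaks test language", 470, 1.0),
    ("speaks test language", 420, 1.0),
    ("speaks test language", 360, 1.0),
    ("other home language", 410, 1.0),
    ("other home language", 320, 1.0),
    ("other home language", 300, 1.0),
]
shares = proficiency_by_group(data, threshold=400)
gap = shares["speaks test language"] - shares["other home language"]
```

The same pattern applies to any subgroup the questionnaire identifies, such as sex or a household wealth index.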
Finally, learning assessments are in a unique position to measure the effect of COVID-19 on learning outcomes and to identify the impact of different distance learning mechanisms put in place to remediate the learning disruptions generated by the pandemic. For instance, the Monitoring Impact on Learning Outcomes (MILO) project, developed by the UIS, is a direct response to that need, providing a way for countries to measure progress against SDG indicator 4.1.1b (reading and mathematics at the end of primary) prior to, during and, more importantly, after the pandemic, while at the same time providing high-quality, timely data aligned to global standards. The project aimed to investigate the changes in learning outcomes, and the effectiveness of emergency teaching/learning strategies, in six countries in sub-Saharan Africa: Burkina Faso, Burundi, Côte d’Ivoire, Kenya, Senegal and Zambia.
How do learning assessments collect data?
International learning assessments generally collect data in a similar fashion. Students are administered a cognitive test, which follows an assessment framework and consists of a set of questions, as well as a background questionnaire. The questionnaire aims to identify drivers of learning in different contexts (student, family, classroom and school) and asks a range of behavioural and socio-emotional questions about students' experience at school, their opinions about learning, and their homes and families.
This information allows a better understanding of learning outcomes by linking them to issues such as language of instruction or whether students have been exposed to bullying. The student questionnaire also provides indicators of sub-populations for measuring inequity, including sex and often socioeconomic status. School directors' questionnaires ask for information about the characteristics of the school, which can shed light on the types of infrastructure and resources available, and consequently help in measuring SDG indicator 4.a.1. In many cases, teachers are administered a questionnaire, which can inform SDG indicator 4.c.7 on continuous professional development. In most assessments, a system questionnaire and a curriculum mapping complete a deep and thorough data ecosystem for understanding learning outcomes.
How comparable are international learning assessments?
The comparability of indicators derived from different international assessment programmes is a limitation common to survey data. Generally, the learning outcome measures produced by the assessment programmes are not directly comparable, and differences in the background questionnaires for students, teachers and school directors can further limit comparability. Even so, the GAML has identified proficiency levels reported in each of the assessment programmes that correspond approximately with each other.
The background questionnaires use different definitions for some constructs, including their reference periods. Still, strong similarities can be found. Both PISA and TIMSS, for instance, define bullying in very similar terms. ERCE and PASEC ask school directors whether specific infrastructure exists in their schools. Whether or not the language of the test is spoken at home is also asked similarly across assessment programmes. The strategy adopted by the UIS to address comparability issues is to create and publish detailed metadata, so that information on how the indicators are derived and how they differ is clearly defined and accessible.
There are also two other critical but often overlooked roles that international learning assessments play in monitoring SDG 4. The first is transparency. International learning assessment datasets, their sampling methods, questionnaires and analysis methods are typically available to the public. This allows a level of scrutiny that is typically not available with government administrative sources.
The second is the participation of various education system actors, which allows analyses at different levels. International learning assessment surveys are administered to school leaders, teachers and students who report opinions and provide information. In this sense, the stakeholders directly involved in student learning are also participating directly in global efforts to monitor SDG 4.
The UIS published last month a strategy for producing SDG 4 data using learning assessments, which shows the critical role that international student assessments play in SDG 4 monitoring. Section 7 presents a methodology showing how SDG 4 indicators have been estimated from learning assessment data, while an appendix presents the code to replicate the results. These elements, along with the Metadata and Methodological Documents for each indicator, provide countries and any other user with all the information needed to understand and replicate the published indicators.