Improving, not over-hauling learning assessments post-2015

The United Nations Secretary-General’s Independent Expert Advisory Group recently released its report on “Mobilising the Data Revolution for Sustainable Development”. Revolutionizing education data may indeed capture our imagination, but there are less complex and arguably more effective ways to measure new education targets post-2015, especially as regards learning.

While calls to create a global platform to compile data on learning outcomes certainly have merit, we argue that much can be gained from simply enhancing national assessments and capacities instead. Global monitoring may be strengthened by building linkages among international and regional assessments, but enhancing existing assessment practices within countries may better serve efforts to improve student learning.

A wealth of data on learning outcomes already exists. Before considering a new global framework of expanded cross-national assessments, whose impact on improving classroom learning may be tenuous, we suggest first examining whether countries already have systems in place that can be used to monitor learning. A cursory look shows that considerable progress has already been made in this regard.

The importance of learning and the need to measure its progress have grown throughout the 1990s and 2000s. Most attention has focused on countries that have participated in international assessments (such as PISA, TIMSS and PIRLS) and regional assessments (such as LLECE, PASEC and SACMEQ) of student achievement. As will be reported in the 2015 EFA Global Monitoring Report coming out in April next year, however, along with international and regional assessments, there has also been a sharp growth in the number of countries conducting national assessments over the last 25 years. In the pre-Dakar period from 1990 to 1999, 70 countries conducted at least one national assessment, while just over double that number (142) did so between 2000 and 2013.

The increase in national assessments of learning among developing countries has significantly reduced global disparities in assessment activity since Dakar, giving a broader picture of the quality of education than was previously possible. Between 2000 and 2013, 82% of developed countries, 65% of developing countries and 78% of countries in transition conducted at least one national assessment. The respective figures before 2000 were 49%, 34% and 6%.

It is also notable that most of the recent increase in learning assessments has taken place in regions where few countries had conducted assessments before 2000. In fact, the prevalence of national assessments has dramatically increased in Central and Eastern Europe and Central Asia (from 13% to 83%), Asia and the Pacific (from 17% to 67%) and the Arab States (from 25% to 70%). The percentage of countries conducting at least one national assessment has also increased in sub-Saharan Africa, from 35% to 61%. This development is particularly welcome given the increased ownership that national assessments bestow on countries that monitor their students’ learning outcomes.


National learning assessments improve monitoring and accountability

The obvious critique is that most national learning assessments are not designed to compare learning outcomes across education systems. Their results have therefore been largely overlooked in international discussions of education quality, and by the various education stakeholders preparing to make the paradigm shift from access to learning. Before we dismiss these rich sources of data, however, it is worth pausing to consider how existing assessment exercises and capacities can be improved.

As it stands, current national learning assessments are already a valuable tool for monitoring learning outcomes, closely reflecting the instruction students have been exposed to and what they have actually learned. They already provide crucial, nationally relevant information about which student learning objectives are being achieved and about differences among relevant subgroups. If school-based and home-based data are collected, countries will be able to analyse which policies are driving improvements in learning, and which are not. This is an important task in itself.

We must also remember that comparable assessments are not necessarily needed in order to hold countries to account. If collected over time, context-specific national and subnational assessment data provide significant insight for evaluating educational policies and practices. Moreover, if global education targets are to be fine-tuned by national authorities, as suggested in the Muscat Agreement, then national assessments of learning outcomes would have an even more important role to play.

Clearly we must work harder to improve the rigor, quality and usefulness of national assessments. We also need a better understanding of the nature of the assessment process: How and by whom are assessments designed and administered? How do teachers and school leaders view national assessments? How do different stakeholders use the data, and how do they feed back into debates about education policies and actual changes in classrooms? These non-revolutionary changes would substantially improve the use of assessments – and our ability to monitor learning post-2015.

To conclude, let’s think carefully before devising whole new systems. Let’s take careful note of the assessment exercises already in place, which produce vast amounts of valuable data for the post-2015 agenda. Those debating ‘data revolutions’ must first consider how existing resources could help countries make better use of the assessments they already conduct. Improved methodologies and stronger scientific rigor would considerably increase the validity of these existing assessments and enable countries to better monitor new education targets post-2015.

Join in: Please join the online consultation currently under way on new indicators for measuring a global education agenda post-2015.



  1. This is a very wise and rarely heard view. Many countries have had many assessments. Much publicity accompanies these, but countries are rarely able to improve learning as a result of assessments. Until clear pathways have been established, perhaps there should be a moratorium on testing.

    But some cynics would say that the testing industry and consultancy companies have worked very hard to prioritize testing on the development agenda. Now payday has come. So should they, rather than governments, be held accountable? Several companies get contracts to conduct tests in various countries. Currently they helicopter in and out of a country with the data, analyzing them from the comfort of their offices. Contractors could be barred from further business unless they followed up, closely and over time, with the countries where they tested, to figure out how best to modify curricula and practices based on the findings. At their own expense! (Particularly those with non-profit tax status.)

    What do you think?
