By Silvia Montoya, Director of the UNESCO Institute for Statistics.
With a new set of post-2015 education goals and targets on the horizon, the international community is looking to the UNESCO Institute for Statistics (UIS) to help collect global data on countries as they seek to improve the learning outcomes of their children and youth. The UIS is the official source of cross-nationally comparable education data, uniquely placed to identify and produce a range of new indicators with the support of its technical and financial partners. The challenges ahead are tremendous. While addressing the myriad issues related to data production, we must confront one critical question: how will the data be used?
The view from the field: where scorecards eclipse analysis
Before joining the UIS as Director, I led a series of learning assessment initiatives in my native Argentina. With a small team, we focused on two key areas: producing the policy-relevant information needed for our constituency and ensuring accuracy in every step of the process, from psychometrics to sampling, test administration and data production. There was no room for error – we were producing high-stakes data that would shape education policies and resource allocations for generations.
The work was complex but inspiring. We were producing data that could positively influence the lives of children and their families. But to be honest, I was always nervous about the release of the results. I still cringe when learning assessments make front-page news, with headlines focusing on a ‘bad grade’ instead of offering recommendations to address the problems faced by the students, their teachers and their families.
Suddenly, all of the time and money spent producing and analyzing the results is eclipsed by a ranking table comparing the results of your constituency’s children with those of children living in different countries and circumstances. It feels as if you were reading the final scores of a sports championship rather than the results of a test designed to diagnose the educational strengths, opportunities and risks of a generation of learners.
The real reason we conduct educational assessments is to provide all stakeholders – from ministers to teachers, parents and students – with the information needed to improve learning outcomes. But let’s not delude ourselves into thinking that media reports reflect true access to information.
Leave or stay and get involved
Almost 50 years ago, Albert Hirschman developed the concepts of exit, voice and loyalty to describe how people can resolve disputes over the quality of goods in a market. Basically, they have two options: leave (exit) or stay and get involved (voice). Economists tend to prefer the exit solution because it can impose discipline and create incentives to improve a service: unless you fix the problem, the client will stop buying your product and go to the competition. The alternative involves voice, or dialogue, whereby both parties work to resolve their differences.
Now let’s consider how these options apply to learning assessments. Imagine the scenario: the latest round of testing reveals that students in a certain district are not acquiring key skills and are falling behind those in other areas of the country. Do parents exit the system or voice their complaints? Do they even have the choice?
Let’s begin by assuming that everyone has access to test results, which is often not the case. How many families can afford to leave the public education system and pay for private schooling? This is not an option for a sizeable portion, if not the majority, of families. I am willing to bet that most school boards know this, which means that the exit option is not a credible threat to bring about change. So the families of students must rely on their voice and exert pressure through different channels, such as municipal elections, school board meetings and associations. But do they have the information to bring about change?
The scorecard approach only tells parents that their children are at a disadvantage, without providing any insight into how the results might reflect differences in socio-economic status. Perhaps most importantly, there is little or no discussion of the skills that their children are missing. The ability, or right, to exercise your ‘voice’ must be based on a solid understanding of the challenges. Lack of knowledge generally means no voice.
This shows that we must rethink how learning assessment data are disseminated and for what purposes. Instead of just releasing the results, we need to prepare ready-to-use analyses for different types of users – from policymakers, to teachers looking to reinforce their pedagogical approaches, to parents who want to work with their children on what they learn at school. Currently, we tend to use the data to assess and manage education systems, when in fact we also have a rich source of information to directly address the needs of students. In short, we need to tap into the potential of learning assessment data by making it more accessible and usable for different actors.
Through this approach, we can also bring the third element of Hirschman’s framework – ‘loyalty’ – into the equation. By making assessment data accessible to all, schools will earn the loyalty of students and their families by informing and empowering them in efforts to improve learning outcomes.
For more information, follow Silvia Montoya on Twitter @montoya_sil.