Evaluating Liberia’s private school partners: why policy decisions must be based on more than numbers

By Stuart Cameron, Senior Education Consultant at Oxford Policy Management.

‘Partnership Schools for Liberia’ (PSL) is a hot topic in education policy circles. A pilot programme outsourcing control of government primary schools to eight private providers, it has attracted considerable controversy, as it represents an unusual approach to public school reform. A key aspect of this controversy is the inclusion of for-profit Bridge International Academies as one of the providers. To respond to the criticism, the Liberian government commissioned a randomised controlled trial (RCT) to assess the success of the pilot, which will be rolled out initially in 94 schools.

The RCT, to be led by Justin Sandefur of the Center for Global Development and implemented by Innovations for Poverty Action, will test whether private management improves school management, teacher accountability, and student performance, with complementary analysis of sustainability, scalability, cost-effectiveness, and equity. The RCT aims to make a fair assessment of the outcomes of the programme by randomly assigning public schools to be privately managed and comparing these to the schools that remain in public hands.

Education International, the world federation of teacher trade unions, and ActionAid, an international NGO which serves on the steering committee of the Right to Education Project, recently circulated a call for qualitative research proposals, listing concerns about the RCT. In turn, the RCT team responded with a comprehensive defence of the RCT and a criticism of the call for proposals in this blog post.

The EI/ActionAid call for proposals dismisses the results of the RCT without fully understanding how the evaluation design already deals with many of the complexities they highlight – such as the possibility that PSL schools will exclude less able children. They are, however, open about wanting to use the qualitative study “for advocacy work in the respective countries to challenge privatisation trends”. I’m not sure this is tantamount to advertising the conclusions in advance, as the RCT designers argue, but it is hardly conducive to objective research. The issue of how researchers can engage with development advocacy without compromising objectivity has been widely recognised and discussed, but an explicit commitment to unbiased research, and to publishing the qualitative study regardless of its conclusions, would have been helpful.

Arguably, however, the RCT team understate the potential problems with their own evaluation. Their focus is on a relatively narrow set of research questions, and despite efforts to deal with the complexities inherent in the programme, there are some issues that haven’t been fully accounted for. The evaluation is not designed to disentangle reliably the extent to which differences in funding and perks will influence results. For example, not only are PSL schools getting more funding than non-PSL schools, but Bridge is getting more money than other PSL operators. Bridge also had more time to set up its schools – it has been in Monrovia preparing since December 2015, whereas the other operators began the process in June 2016. As the Sandefur article mentions, Bridge had further advantages in choosing the infrastructure of its schools.

While there are truths and flaws on both sides, the exchange between the RCT team and EI/ActionAid matters chiefly because it distracts attention from a key problem: there is a high chance of the RCT results being misused in a high-stakes political debate.

Many policymakers commission RCTs because they expect them to provide conclusive evidence. If positive, the RCT results are likely to be used widely and persuasively to argue for the further roll-out of Bridge and charter school schemes generally. The idea that a difference in test scores – with or without an RCT – can settle a complex policy debate is difficult to dispel. Influential and highly reputed sources such as The Economist, for example, already cite suggestive evidence on test scores (from studies conducted by Bridge itself) uncritically, and with little context, in support of Bridge International Academies.

As Justin Sandefur acknowledges, however, the evaluation will not tell us why the programme worked or not. This means that even if we know that it works, we won’t know how to scale it. As is commonly the case for RCTs, it provides a limited basis for inference to other contexts, and the context of the sampled schools for each PSL provider clearly differs from other schools in the country. These are flaws of the evaluation as well as the programme. What steps need to be taken to ensure students are not excluded? What aspects of the context of each school allow private management to work or fail? How does it interact with existing systems of management? A rigorous process evaluation and some qualitative work could have elucidated some of the mechanisms at play. Instead, significant resources are being spent to come up with a single number to represent the impact of a bundled and varied set of interventions.


Of course it is no bad thing to collect evidence, and no study with a finite budget tells us everything we would like to know about how and whether an intervention works. But the defence of the RCT seems naïve about how its findings will be used in the real world. According to the Minister for Education, George K. Werner, the evaluation has been commissioned “to provide an independent measure of the effectiveness, equity and sustainability of PSL. The research team works hand-in-hand with the Ministry of Education so we get the data we need to make sensible policy decisions about the future of PSL.” The scale up of PSL “will be dependent on the results of this evaluation”. But, while it might be a good start, the evaluation emphatically will not provide a sufficient basis for deciding whether to scale up the partnership or not.

In light of this, a more useful role for organisations advocating for public education – and for anyone trying to get a full picture of how public-private partnerships work in Liberia and elsewhere – would perhaps be to raise concerns about this blanket acceptance, highlighting the dangers of basing policy decisions around incomplete knowledge while proposing sensible recommendations for addressing such gaps.

This debate points to some wider issues around trust and the unfortunate – but widespread – muddling of politics and research methods. People’s initial beliefs often dictate whether they place more trust in an RCT or a qualitative study. This isn’t a good situation for evidence-based policy. As Minister Werner writes, “don’t judge PSL on ideological grounds. Judge us on the data—data on whether PSL schools deliver better learning outcomes for children.” For this to become a reality, RCT evaluators need to be more explicit in admitting the limitations of their studies, and to engage more carefully with the criticisms of campaigners who, in turn, need to channel such criticisms into evaluation design rather than blanket dismissal of the programmes being evaluated. And it’s always worth remembering that, ultimately, it’s the futures of actual children at stake.


Comments

  1. For those who are interested in this continuing debate please see here the detailed response I have provided to Justin Sandefur’s original blog for CGD:
    http://www.actionaid.org/case-qualitative-research-complement-rct

    There is a strong case for complementarity between the RCT and the qualitative research we have proposed with Education International. There has been some unfortunate selective quoting used to criticise us (and admittedly some unfortunate phrasing in the call for proposals) – but our TORs are clear that the research we propose may either “verify or challenge some of the initial concerns and observations”.
    At the same time, we do not claim to be entirely neutral – we have a long track record of evidence-based work that informs our advocacy in this area. See, for example, the report Private Profit Public Loss, to which we made substantial contributions and which offers a very comprehensive review of the evidence around low-fee private schools:
    http://www.campaignforeducation.org/en/news/global/view/690-private-profit-public-loss-gce-s-report-shows-scant-evidence-that-low-fee-for-profit-private-schools-are-better-than-public-schools

    We recognise the Liberia case is different – and this is one of the reasons for us wanting to do new qualitative research to inform (and possibly challenge) our advocacy work. I hope we can have a more constructive dialogue between the quantitative and qualitative research going forward – and that we can all recognise some of the serious challenges for both that arise from the original design of the Partnership Schools for Liberia pilot programme.

  2. ‘Partnership Schools for Liberia’ (PSL) is a hot topic in education policy circles. A new committee has been formed in the name of a randomised control trial (RCT) – new ideas and new thinking that let scholars earn, all done to gain political mileage. Poor people have never been consulted to express their views, their troubles, or the viability of the scheme. It is just like Public Private Partnership, which was a total failure: poor people became poorer, despite being shown great hope of success. Ultimately the poor of the E9 countries lost everything to private partners against false hopes. I am afraid Liberia may face the same fate as the E9 countries.
