Making evaluations work for education equality and inclusion

By Karen Mundy, Ontario Institute for Studies in Education

International organizations typically have well-developed evaluation units, generating large volumes of evidence about their policies, programs and practices. Yet, while synthesis of evidence on international education development has evolved considerably in recent years, synthesis of evidence from the independent evaluations undertaken by international organizations has not.

A new ‘evidence synthesis’ released this week from UNESCO’s IOS Evaluation Office and a group of international partners partly fills this gap. The study reviews 147 independent evaluations commissioned by 13 international organizations, all with a focus on measuring and assessing some aspect of education equality or gender equity. Using a rigorous search process, systematic coding and narrative analysis, the study gives a bird’s eye view of the types of interventions being evaluated by international organizations and synthesizes evaluation findings. It also proposes important recommendations to help improve evaluations commissioned by international organizations and ensure that these evaluations support country progress in achieving Sustainable Development Goal (SDG) 4.5.

What are the main findings from the study?

The volume of evidence is impressive: in an open search of the evaluation databases of 16 international organizations, we found that 147 of the 156 education evaluations published between 2015 and 2019 included objectives or outcomes related to gender parity, equality and inclusion. Approximately 30 to 40 education evaluations were published each year.

There are strengths and gaps in these evaluations. Their predominant focus is on interventions to support access and participation; very few include learning as a measured area of impact. Furthermore, while girls’ education is well covered in these evaluations, the impact of programs on other aspects of equity, such as inclusion of learners with disabilities and disadvantage related to ethnicity and language, was less commonly studied. Geographically, the largest number of evaluations in the dataset are based in Africa, signaling an important new body of evidence on education in the continent.

Only 28 of the evaluations used rigorous quantitative methods with a counterfactual. The strongest evidence appears in evaluations of cash transfer and school feeding programs. Very few of the evaluations look at the equity impact of interventions that directly target improvements in service delivery, with a notable lack of strong evidence on what works to improve teaching practices for more equitable learning outcomes.

With a few exceptions, evaluations in this dataset are unable to show a convincing link between large-scale system-wide reform programs and improvements in learning equity and alleviation of other forms of educational inequality, in part because rigorous and consistent use of theory-based evaluation design is rare.

Furthermore, as noted in an earlier study (and discussed in a podcast), there is little attempt to compare and learn from system reform programs by looking across countries, or across the different organizational forms of support provided in a single country. Yet complex and multi-pronged ‘system-wide’ programs form an increasingly large share of donor-funded interventions in education. Tantalizing but incomplete findings from evaluations of system reform programs include indications that decentralization and school-based management may have negative impacts on equity and inclusion, and that results-based financing has mixed effects on implementation.


In conclusion, this new report calls for international organizations to strengthen their evaluation of SDG 4.5 in four specific ways.

First, address evidence gaps by improving evaluation of the equity impact of interventions focused on changing frontline service delivery (improving classrooms, teachers and schools), including by incorporating stronger measures of learning equity.

Second, use the evaluation enterprise to contribute to stronger, country-owned generation and use of data.

Third, strengthen evaluation methodologies.

Finally, based on validation workshops in five countries, the report calls on international organizations to make evaluation evidence more usable and useful to national stakeholders, by ensuring they are involved in the selection and timing of evaluative studies, and by preparing evidence syntheses to support ongoing learning.

In addition to the full report, a methodological note and list of the evaluations will be available on the UNESCO IOS website in due course.


2 comments

  1. Excellent blog, needed to be said. One thing to note is that in evaluating systemic reforms, “rigor” has to be understood very differently from the way it is understood in evaluating one-input interventions (e.g., a stipend approach or a particular pedagogy). Rigor is still possible but needs to be understood differently…

    1. Couldn’t agree more! For systemic change there is no counterfactual. However, a strong theory-based process tracing approach, grounded in the best available research, can allow us to “test” whether existing assumptions (hypotheses) about what works are proven or disproven by an individual case, and can contribute to knowledge building.
