Let’s decide how to monitor school-related violence

Today, a large symposium is opening in South Korea on School Violence and Bullying: From Evidence to Action, with more than 250 participants from 70 countries coming together to discuss how to combat the issue.

A new Global Status Report on School Violence and Bullying is being released this morning by UNESCO and the Institute of School Violence Prevention at Ewha Womans University. It compiles data from 19 low- and middle-income countries and finds that 34% of students aged 11–13 reported being bullied in the previous month, with 8% reporting daily bullying.

These are shocking findings. They sit alongside many other similar findings, which give us snapshots of school-related violence in different countries and regions, and confirm that bullying and school-related violence are issues we all need to pay more attention to. But, as our new paper, released in time for this Symposium, shows, these disparate findings, drawn from various cross-national and national surveys, can almost never be compared with one another. For a monitoring body aiming to assess the global prevalence of the issue and to inform policy makers with those findings, this measurement problem needs addressing.

Our paper runs through the many cross-national surveys covering school-related violence in some way, from TIMSS to TERCE, PISA, SACMEQ and the GSHS, among others. It also covers some of the most notable national surveys on school-related violence, including Chile’s Child Maltreatment Survey, India’s Study on Child Abuse, Germany’s National Survey on Violence Against Teachers, and the Young Lives Study, which covers Ethiopia, India, Peru and Viet Nam and includes questions on corporal punishment.

The issue it highlights again and again is the lack of consistency across the questions these surveys ask. This stems mainly from differing researcher and practitioner objectives, study histories, cultural contexts, scientific perspectives and resource availability. As a result, the findings of different studies are expressed through very different indicators. Even surveys that use similar indicators produce results with significant discrepancies that are hard to reconcile.

Let’s look at a few tangible problems our research brought to light that make comparison across surveys hard.

Type of violence: The GSHS questionnaire, for instance, captures physical violence and bullying, but does not specify what type of bullying is being discussed, and so misses cyberbullying. Nor does it cover sexual violence or violence from teachers towards pupils.

Gender breakdowns: PISA, meanwhile, fails to capture the gender dimension of school-related violence, which we know cuts across the issue.

Time issues: TERCE doesn’t specify the period its questions cover, while TIMSS, for example, asks specifically how often and when violence occurred.

Definitions: Bullying is described differently in the questions different surveys use. HBSC calls it being ‘teased repeatedly’, having ‘nasty and unpleasant things’ done to you, or being ‘deliberately left out’, while TIMSS relates it to the sharing of embarrassing information, the spreading of lies, stealing, or being hit, hurt or threatened.

Age: Different surveys cover different age groups. TIMSS covers grades 4 and 8, for instance, while TERCE and SACMEQ cover grade 6. Similarly, the GSHS and HBSC cover 11-, 13- and 15-year-olds, while PISA covers only 15-year-olds.

Why, apart from the obvious harm that school-related violence causes to children and adolescents, should we care about measuring and monitoring the problem?

The endorsed thematic indicator for Target 4a in the new global education goal calls for us to measure the ‘percentage of students experiencing bullying, corporal punishment, harassment, violence, sexual discrimination and abuse’. The Technical Cooperation Group on SDG 4 indicators is looking to develop this indicator further.

We believe it is unrealistic for this indicator to cover all students and all forms of violence.

We also believe that there are three main options for building consensus across the different measurement tools currently available and helping find the data needed to monitor the Target:

  1. Standardize the results of different surveys using a triangulation approach. This would involve estimating the ‘risk’ of a country having incidents of violence where data is not available (a rough sketch of this idea follows the list).
  2. Support convergence between tools. This would encourage those working on them to reflect critically on their methods. That said, many are reluctant to make changes, and there is a risk that convergence would also dampen innovation in research methods.
  3. Select one instrument and expand it to more countries.
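To make the first option more concrete, here is a minimal, hypothetical sketch of what triangulation could look like in practice: pooling whatever survey estimates exist for each country into one standardized figure, and falling back on a crude proxy where no data are available. The survey names, countries and prevalence figures below are invented for illustration only and are not drawn from our paper or from any real dataset; an actual model would be far more sophisticated, drawing on covariates such as region or income group.

```python
# A hypothetical illustration of the triangulation idea in option 1.
# All names and numbers are invented for this sketch.

# Reported "bullied in the past month" prevalence (%) by country and survey.
reported = {
    "Country A": {"Survey X": 30.0, "Survey Y": 36.0},
    "Country B": {"Survey X": 22.0},
    "Country C": {},  # no data available
}

def triangulate(estimates):
    """Combine whatever survey estimates exist for a country into one figure."""
    return sum(estimates.values()) / len(estimates) if estimates else None

# Step 1: standardize countries with data by pooling their survey estimates.
pooled = {country: triangulate(est) for country, est in reported.items()}

# Step 2: where no data exist, fall back on the average of observed countries
# as a crude proxy for the 'risk' of violence.
observed = [value for value in pooled.values() if value is not None]
fallback = sum(observed) / len(observed)

for country, value in pooled.items():
    estimate = value if value is not None else fallback
    flag = "" if value is not None else " (imputed)"
    print(f"{country}: {estimate:.1f}%{flag}")
```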

View and share our social media resources on the release of our policy paper.
