How can we know if contemporary antislavery initiatives work?

Posted on: 22 June 2018 by Helen Bryant

Last month a parliamentary committee found that the British government does not fully understand modern slavery and cannot establish whether its crackdown on it is a success.[1] Shortly after, in response to this report, the UK’s first Independent Anti-slavery Commissioner, Kevin Hyland, resigned from his post.

This leads us to an important question: how can we know if contemporary antislavery initiatives work? There is an ongoing discussion around this, and among the reasons cited are a lack of understanding of the scale of potential victims and insufficient data monitoring and recording systems. But this is not a problem which plagues government programmes alone. Poor monitoring and evaluation seems to be the Achilles’ heel of much development work. Because we do not fully understand the breadth of human trafficking and modern slavery, programmes and development agencies struggle to monitor progress and success in a way which moves the sector forward. Instead we are left with hyperbolic success stories and promises of ‘eradication’ which leave out true representations of the effects anti-slavery efforts may have on vulnerable individuals and communities.

The UN Evaluation Group outlines a good programme evaluation as one that is transparent, conducted by an impartial evaluator with expertise in the field of evaluation, purpose driven, planned so that the correct data are collected, and followed up (for example with a further evaluation after a set period of time, allowing programme activities to settle). As part of my work helping develop the AHRC-funded Antislavery Knowledge Network, I have been examining multiple project evaluation reports, and the evidence suggests that these are not the steps organisations generally take when completing them.

Deanna Davy, in Anti-Human Trafficking Interventions: How do we know if they are working?, argues that anti-trafficking programmes lack effective evaluations for many reasons, including: lofty objectives that cannot be measured, objectives that are process orientated and therefore too easily claimed as successes, confusion over the difference between monitoring and evaluation, and the use of process evaluation over outcome evaluation.[2] Likewise, these reports are often produced by the organisation that carried out the work, based on superficial data and without any follow-up.

Many of the issues associated with evaluation processes can be traced back to the project planning stages. Short time frames, overly broad objectives, the absence of a logical framework, and a lack of regular monitoring are all contributory factors that lead to trivial or ineffective evaluations. My review of the evaluations shows problems in linking project logic with actual activities and objectives, which indicates that objectives may have been too ambitious or that there was no theory of change from the beginning. Programmes need to be clear about their programme logic and build monitoring and evaluation into the programme from the outset, to ensure consistent data collection and monitoring which can then feed into the evaluation process.

Another factor behind poor evaluations is a lack of understanding of the difference between monitoring and evaluation. Davy suggests that her review of evaluations revealed an apparent confusion between ‘monitoring’ and ‘evaluation’, which has become a barrier to producing quality reports. I too have concluded that organisations often mix up tracking, monitoring and reporting with evaluation, so that subsequent ‘evaluation reports’ are in fact just summaries of the former. This is where research institutions and interventions may be able to move this part of development work forward. The Antislavery Knowledge Network will be supporting programmes in this field and encouraging them to cast a critical and analytical eye over development activity. One of our key objectives is to use processes within the arts and humanities to enable more effective evaluations, which we hope will lead to more effective development work.

Currently, when programmes do succeed in producing a quality evaluation, it often ends up being a process evaluation rather than the more effective impact evaluation. While evaluating the processes of development programmes is beneficial, avoiding a focus on impact means we often do not understand the full benefit of the activity. This oversight also means that we are often unable to fully appreciate the cost efficiency of development programmes. One way we can combat ineffective evaluation methods is to determine their sources, such as weak programme design and misunderstanding of effective models of evaluation. Process evaluations also lead to evaluations that offer ‘no clear evidence of effectiveness’,[3] as they often represent success as directly linked to associated programme events rather than to any positive outcomes for the beneficiaries involved.

When beneficiaries’ comments are included in evaluation reports, they often lack first-hand commentary on how the programmes positively affected their lives in any meaningful or lasting way. Case studies often simply recount the victim’s story but leave out any mention of how development efforts may have helped make a difference. As a result, any direct link between programme activity and lasting change is lost. The discussion in evaluations should focus on results-led activity and not just the story of the activism.

We need more stories about the programmes from the beneficiaries themselves: explanations of the education programmes, arts programmes or work training that took place, and first-hand accounts of how they were involved with the anti-slavery effort undertaken on their behalf. Using the term ‘beneficiaries’ can sound quite formal, but it avoids other, more problematic, terminology which tends to marginalise people who are at risk: though these people may be victims, we do not need to victimise them. Reports often make claims on their behalf, but to establish their efficacy we need to hear more of their voices. Changes to the gathering of evidence could include collecting video documentation of oral histories, recording interviews with targeted questions, and building follow-up work into our budgets, or using additional funding, to check in with individuals and ensure an improved quality of life. Of course, these techniques may raise ethical issues, but with the right processes in place, many of these can be overcome.

When evaluations are insufficient, understanding the efficacy of development programmes is impossible. How are we meant to measure a reduction in the incidence of human trafficking, for example, if we do not follow up with programme leaders and beneficiaries over the longer term? Likewise, how can we be sure that antislavery efforts had long-term positive effects on communities or individuals, and that our humanitarian intentions did not in fact negatively affect beneficiaries’ lives?

Properly planned and measured evaluations are essential if we are to take a critical eye and learn best practice not only from our own development projects but from other, similar, projects taking place globally. This is why high-quality and well-designed monitoring and evaluation should be a key component of all development projects.