James Turner considers whether the incentives are in place to be serious about evaluation

There is an increasing focus on evidence and evaluation, not just in education but in public policy and the third sector more widely. But the rhetoric here is considerably easier than the reality: that robust evaluations are difficult, that they can produce uncomfortable results which challenge vested interests and – the subject of this blog – that there are significant organisational obstacles to making them happen.

Setting aside the moral and intellectual arguments, are there actually the right incentives in place for someone to commission, participate in and faithfully use the results of a truly robust evaluation? I suspect often not.

For organisations delivering programmes, the stakes are high. Of course, a good result from a robust evaluation will stand them in very good stead for winning further support. But, perversely, an organisation which has had the integrity to evaluate its work properly and then gets a negative result could find itself in a worse position than an organisation which has had no evaluation – or, perhaps even more absurdly, one that promotes ropey research as proof of impact. It risks being abandoned by funders, losing revenue and shedding jobs.

Grant funders – detached from day-to-day delivery – should be able to take an objective view of where resources should be focused and therefore be genuinely interested in what works. And there are certainly confident foundations that are true to this ideal. But many don't use evidence extensively, or at all, when making funding decisions. And, at the other end of the process, who in reality wants to be the person telling a trustee board – or a funding partner – that the project you recommended, and in which they have invested a considerable amount of money, hasn't worked? Much better for your professional standing and the organisation's reputation to present the rosiest view possible.

It is a similar picture for commissioners and policy makers in local or national government. It is a brave official who would tell an elected representative that their pet project has no impact. And it would be a brave politician who admits a policy has failed – and a magnanimous opposition which gives them credit for doing so. Who's ever become a leader on the basis of stopping or rethinking a flagship policy that wasn't working?

The incentives are perhaps best aligned when it comes to the academic evaluation teams themselves. The way they are funded, and the way their work is recognised in universities, means they should be interested only in the facts – if anything, an academic is more likely to make their name proving a high-profile intervention doesn't work than showing it does. But this can breed suspicion among practitioners and funders that researchers are more interested in indulging their academic interests than in providing useful and practical results. A commitment to absolute purity can mean never being definitive about anything.

I don’t have easy answers to any of this.

And I certainly continue to believe that robust evaluation to gather evidence to inform practice and spending is absolutely critical. The alternative – to live in ignorant bliss or to convince ourselves with superficial evidence – is neither sustainable nor morally justifiable. But while the intellectual argument is being won, there are real-world obstacles at individual and organisational levels.

The Education Endowment Foundation is doing sterling work in not only building evidence, but also trying to change the way research is used by teachers and decision-makers, and to bring academics and schools closer together. But reorienting all parts of the system – from the grassroots deliverer upwards – is a massive challenge. One modest suggestion from me, after ten years at the Sutton Trust, is for funders and commissioners to take a long-term view that mitigates the risk of high-stakes evaluation. Pick the programmes which, based on existing research, are the most likely to work, and evaluate them robustly – but use those findings constructively, to improve and refine the programme so that it works better, not to abandon it and move on to the next bright new initiative.

The question, of course, is how to devise a research trial to prove my hunch is right.
