Improving the utilisation of evaluation evidence by policy makers and public managers has been a concern for many (Pollitt, 2003; Chelimsky, 2008). Yet, there is still widespread recognition that evidence from evaluation and elsewhere is under-utilised (Hird, 2005; Pawson and Tilley, 1997; Sanderson, 2000; Stone, 1997).
Whilst many approaches to the under-use of evidence have focused on the ignorance of policy makers and the inefficacy of communication, the perspective of evidence makers, such as program evaluators, has been largely unsought. Indeed, there are few studies of those who produce evidence, and their voices and perceptions, as to why evidence is failing and how this could be resolved, are rarely represented (Tourmen, 2009; exceptions include Fitzpatrick et al., 2009; Wond, 2012).
This paper explores findings from an ongoing research project that sought the perspectives of program evaluators on the effectiveness of their practice. The study involved conversational interviews with evaluators from a range of policy areas (e.g. health, education, foreign aid, social mobility, enterprise support) to understand the experiences and challenges they felt limited their practice and the effectiveness of evaluation. Interviews (conducted by telephone, face-to-face and via Skype) were undertaken with 19 practising evaluators (9 female, 10 male) who undertake evaluation in various capacities (academics, evaluation consultants, and internal evaluators undertaking program evaluation). Ten further interviews are scheduled, and more are planned for 2018. The researcher's own involvement with evaluation societies and networks in the UK and Europe supported access to evaluators (opportunity sampling) and led to a conversational interviewing approach, free from a strict interview guide (Brinkmann, 2013). Interviews were recorded in writing where possible (retrospectively in the case of some face-to-face exchanges; otherwise contemporaneously). Data collected so far have been transcribed and initially analysed using NVivo.
This paper draws on evaluators' perspectives of the evidence-use gap that emerged during these interviews. Several themes have emerged from the interview data to date, including:
- The prevalence of a perceived 'dusty shelf syndrome', with evaluation reports often thought to be under-utilised;
- Evaluations reaching less scientific, 'more pragmatic' conclusions as evaluation methodologies have needed to deviate 'off plan';
- A preoccupation with pleasing the funder, by providing 'sugar-coated reports', in order to enhance the likelihood of future evaluation opportunities;
- Newer ways of presenting evaluation data gaining greater recognition (visual formats, distribution via social media);
- Frustration at being asked, explicitly or implicitly, to present biased reports;
- Concern, among some, at the lack of learning that emerges from experimental forms of evaluation (RCTs, for instance);
- Concern that negative messages are perceived as overly negative, and so avoided;
- Varied involvement with 'What Works' networks (UK);
- A high consciousness of Patton's 'Utilization-Focused Evaluation', which appeared to be the holy grail for many;
- A frustration that evaluation results are not translating into improvements in policy and practice.
The paper supports ongoing efforts to explore and enhance evidence use in the public sector and to enhance evaluation value (in alignment with this panel track), but advocates that, to advance this debate, we must collaborate with our evidence makers to collectively understand and explore such challenges.
Panel track: Evidence use in government – its contribution to creating public value