Positioning capacity-building on the long walk to evidence-informed development

Last month our UJ-BCURE programme reached its halfway milestone, an event coinciding with our regional AEN meeting. The team has since had some time to reflect on the programme’s progress so far, as well as to step back from our daily activities and deliberate on the wider development policy context in which we operate. And yes, as you might have guessed, I am indeed talking about the kind of meaning-of-life conversations induced by too much wine. In this case, though, we ended up not just with heavy heads the next morning but, a month later, with a paper entitled ‘Walking the last mile on the long road to evidence-informed development: building capacity to use research evidence’, submitted to the Journal of Development Effectiveness! In a series of three blog posts, we will present some of the ideas we tried to communicate in the paper, as well as revive the deceased darlings that did not make it into the paper at all.

In a nutshell, we tried to tease out how our efforts to support decision-makers’ capacity to use research evidence could conceptually be linked to tangible improvements in the capabilities and livelihoods of people living in poverty. We realise this is a stretch, but would like to believe that this is ultimately the final objective of our programme. Evidence use is an important outcome in its own right, but it nevertheless remains, for the most part, a means to an end – the end being improved capabilities and livelihoods for citizens. To visualise our thinking we developed a theory of change for evidence-informed development, defining evidence-informed development broadly as ‘the systematic and transparent use of scientific knowledge in the formulation, design, and implementation of development policies and programmes’.

The diagram above is a visual representation of the theory of change for evidence-informed development that we came up with. We identified five main steps required for evidence-informed development to take place (boxes 1 to 5 in the middle), as well as a number of contexts and mechanisms (not exhaustive!) needed to move from one step to the next.

To begin with, we assumed that actors in development policy need to subscribe to the rationale for evidence to inform policy and programme decisions. This assumption shouldn’t be taken for granted. For a long time, rigorous research evidence in development – quantitative evaluation approaches in particular – experienced a backlash as a positivist form of knowledge that is inherently incompatible with bottom-up and community-driven approaches to international development (more on this here and here). This is still partly evident in the straw man case against randomised controlled trials, as Kirsty Newman has pointed out. All in all, though, and not least through the work of agenda-setting institutions such as 3ie, CGD, and DFID, a consensus seems to have developed that a high-quality, context-relevant, and methodologically diverse evidence base of what works and why is valuable for informing development policies.

The increasing acceptance of this rationale has in turn led to an increase in the production of fit-for-policy-purpose research evidence, which we assumed to be the second step in the theory of change. Cameron and colleagues’ recent overview paper fits neatly into this discussion, showcasing how the production of impact evaluations in development has increased since CGD’s cri de coeur ‘When Will We Ever Learn?’. Updating the authors’ numbers and adding systematic reviews, we identified 2,937 evaluations of development interventions published between 1981 and 2012, of which all but 132 have been published since 2000, with the strongest growth in evidence production taking place after 2008. This can be understood as the supply of evidence, often associated with the metaphor of ‘pushing’ evidence onto the agenda of decision-makers.

However, push activities alone will rarely be sufficient to support EIDM. Pull activities fostering demand among decision-makers play an equally important role. Here is where we would position UJ-BCURE and its capacity-building efforts. We regard decision-makers’ capacity to access, appraise, and synthesise evidence as one of multiple mechanisms that might improve individuals’ use of evidence. Though, as you can see from the theory of change, we don’t assume this to be the end point or holy grail of EIDM. Building an equilibrium between the supply of and demand for evidence is not, on its own, sufficient to create meaningful changes in the realities of people living in poverty. Even leaving aside the need to pay attention to the actual behaviour of decision-making, an isolated practice of evidence use falls short of the objective of evidence-informed development.

To illustrate this, take the example of healthcare in developing countries. While evidence of the effectiveness of individual development programmes, in particular deworming tablets, has achieved policy influence, other areas of healthcare such as HIV prevention and neonatal care have not experienced similar successes. Too often an equilibrium between evidence demand and supply remains an exceptional phenomenon; by and large, development interventions are neither systematically assessed for their effectiveness, nor are the findings of evaluations systematically fed back to decision-makers. It therefore seems to us that the incorporation of research evidence across the decision-making and implementation process in development requires a systemic change.

This systemic change is characterised by an institutional culture of using evidence to inform decision-making. The objective is to establish a state of practice in development in which there is an intrinsic motivation to use evidence – manifested in institutional structures and personal habits – because the use of evidence is thought of as ‘the right thing to do’. We acknowledge that such systemic change is unlikely to come from individual projects such as UJ-BCURE. Rather, structural interventions and mechanisms might be more relevant, for example changing promotion structures or creating a regulatory body as a development analogue to the UK’s National Institute for Health and Care Excellence.

In a last step, then, and assuming perfect-world conditions, the culture of evidence use would firmly establish itself at all levels of development. A bottom-up demand for and supply of evidence would be met by a top-down institutional valuing of evidence use. The interplay between the two would create a development ecosystem in which evidence, in the form of knowledge from research, practice, and experience, is generated at the roots through learning in practice, fed back into decision-making processes at policy level, and communicated to the wider community of practice. Such a system requires evidence literacies at all levels, ranging from the farmer receiving fertilisers, to the agricultural extension worker, up to the national policymaker. A system fuelled by adaptive capacities and feedback loops across development policy, practice, and research could be described as evidence-informed development, moving the domain closer towards the uptake of effective principles and providing room for the advent of irreversible changes in development policy – which will be the topic of next month’s blog!

In sum, and without diminishing our belief in the value of our work, capacity-building to use evidence is a small but important piece in the puzzle of evidence-informed development. Understanding how UJ-BCURE is embedded within the overall theory of change has helped us refine our own programme theory, and proved a valuable exercise in understanding the wider context in which we operate.