Science of using science: researching the use of research evidence in decision-making

2016-04-28

‘So what exactly are the best ways of getting research used by decision-makers? Evidence rarely speaks for itself and you may have witnessed some impressive ways for research to get noticed and used. Maybe a high-level policy seminar, mentoring programme, or a journal club used by nurses. But do they really work? Our pet approaches to knowledge exchange may fail to deliver, and we need to evaluate if they really cause impact.’ This is how Jonathan Breckon and Jane Dodson introduce our recent reports, the Science of Using Science and Using Evidence: What Works, and they couldn’t be more spot on: our systematic review of what works for research use exposes a number of interventions that, so far, fail to have an impact on decision-makers’ use of evidence.

The review also highlights how the evidence-informed decision-making (EIDM) community might itself employ interventions and methods that are not in line with the most up-to-date evidence base. For example, in EIDM capacity-building, are we consistently applying pedagogies and learning techniques that are evidence-based? And what about the evidence positioned to be used by decision-makers? Is this evidence itself fit-for-purpose to inform decision-making? What is it that is fed into our latest knowledge exchange platform, and are knowledge exchange platforms themselves an effective tool to facilitate access to evidence? These were some of the questions our review set out to answer, and, encouragingly, for some interventions we found reliable evidence that they can increase research use. In this second blog of the Science of Using Science series (for Blog 1, see here), we focus on the high-level results of our review, which are naturally reported in more detail in the published research reports here and here.

Evidence of effects – so, what works for research use?

Based on existing systematic reviews that synthesise the findings of primary studies evaluating the effects of research use interventions, we identify three key groups of interventions that have been found to increase evidence use. The first comprises interventions that build decision-makers’ skills to access and make sense of evidence (mechanism 5 in our conceptual framework). Interventions such as EIDM capacity-building, critical appraisal training, and formal university courses were consistently found to be effective in increasing the use of evidence by diverse groups of decision-makers, for example nurses, senior policymakers, and hospital administrators. However, this finding only held true if these interventions improved both the capability to use evidence (e.g. being able to appraise a research study for its reliability) and the motivation to use evidence (e.g. having a more positive attitude towards evidence use). It was this particular COM (capability, opportunity, motivation) configuration that led to behaviour change in decision-makers.

The second effective intervention group referred to the facilitation of communication of, and access to, evidence (mechanism 3 in our conceptual framework). Interventions such as evidence repositories and dissemination were effective in increasing evidence use only if communication and access provided the opportunity as well as the motivation to use evidence. For example, an evidence database for health policymakers piloted in Canada was not effective on its own in increasing evidence use. However, once the database was complemented with an SMS service that sent tailored and targeted messages to policymakers relevant to their content expertise, decision-makers’ use of evidence increased, as measured by the Global EIDM index.

The third group of interventions, which existing systematic reviews identify as supporting the use of evidence, referred to changes to decision-making structures and processes (mechanism 6 in our conceptual framework). These interventions aimed to embed the use of evidence, and the mechanisms of change required for it, in the routine working processes of decision-makers. For example, instead of just building decision-makers’ EIDM skills through training programmes, supervision structures were amended to monitor and reward the application of these skills in decision-makers’ daily practice. Likewise, instead of merely installing an evidence database, rapid response services and decision-making protocols requiring the use of the database were used to embed this evidence access in organisational processes. There is no evidence yet, however, that such structural changes on their own (i.e. without being combined with other mechanisms of change) are sufficient to increase evidence use.

Lastly, there was also evidence that a few individual interventions characterised by a highly intense and complex intervention design led to an increase in evidence use. An example of such an intervention is a year-long collaborative research utilisation programme that included co-production of evidence, access to evidence, EIDM capacity-building, and institutional incentives.

Evidence of no effects – so, where do we have to think more carefully about our intervention approach?

Existing systematic review evidence also provides insights into what has so far not worked well to increase evidence use, i.e. interventions for which there is, overall, no evidence that they have led to changes in decision-makers’ use of research. A number of findings of no effect echo the above results on the importance of COM-B configurations. For example, passive dissemination of, or access to, evidence was not effective in increasing research use (remember the evidence database without motivation-building programme features such as tailored and personalised messages). Likewise, interventions that aimed to build EIDM skills but did not follow an explicit educational approach, for example one-off seminars, provision of training manuals, and communities of practice, were not effective in increasing evidence use.

Overall, unstructured interaction and collaboration between decision-makers and researchers also tended to have a lower likelihood of success. The emphasis here is on unstructured. Simply bringing researchers and decision-makers together, without an underlying logic model and facilitation of how this interaction is meant to lead to evidence use, was reported as ineffective in a majority of reviews. However, clearly defined, light-touch approaches to facilitating interaction between researchers and decision-makers were effective in increasing intermediate COM outcomes. This referred to decision-maker engagement in particular, for example decision-makers inputting on research questions and providing feedback on reports and dissemination plans; these types of ‘interaction’ improved motivation and opportunity to use evidence, but the final outcome of evidence use was not assessed.

Absence of evidence – so, where are the evidence gaps?

We also found a number of interventions whose effects have so far rarely been evaluated and synthesised in existing systematic reviews. Interventions to raise awareness of, and build positive attitudes towards, EIDM, for example, are currently subject to an evidence gap. We therefore know little about the best ways to increase support for the concept of EIDM. This is an important lesson for EIDM proponents such as the AEN and A4UE. Do we really know whether and how our work makes a difference? A second major gap opened when assessing the evidence on interventions aiming to build agreement on policy-relevant questions and on what constitutes fit-for-purpose evidence. So, while we acknowledge that there is not one gold standard of evidence to be used in all decision-making, we seem to know much less about how to find out which evidence is fit-for-purpose in which context, and whether providing fitting evidence does in fact lead to its use. Survey research consistently identifies this lack of policy-relevant research as a key barrier to evidence use, and the research community continues to have a great appetite for knocking down the straw man of a perceived tyranny of RCT evidence; but we currently have no evidence to suggest empirical approaches for agreeing on fit-for-purpose evidence. Co-production, for example, has been found to improve practice outcomes (Boaz et al. 2015), but its impact on evidence use is less researched.

Lastly, there were also a number of thematic gaps in the identified evidence base. Out of 36 included reviews, only six were conducted outside the health sector. There was barely any mention of cost-effectiveness or the retention of effects in the long term. Surprisingly, what research evidence was to be used to inform decision-making was rarely made explicit, leaving an important question (or exclamation) mark behind the ‘evidence’ in evidence-informed decision-making.

As much as the above findings have started to answer some parts of the question of what works to increase evidence use, we seem only to be scratching the surface so far. Capacity-building by and large increases evidence use, but there is little use of explicit adult learning theories (with some notable exceptions here and here). Access to evidence requires motivation-building components to nurture evidence use, but there are many ways to increase motivation, and entire professions are built around communicating and presenting information in a manner that encourages its use. A golden thread throughout the review findings is the centrality of the decision-maker: understanding her context, her decision-making processes, her biases, her motivation, etc. (see here for a similar argument); there seems to be much we still don’t know about how and why evidence is or is not used if we only look at the traditional research use literature. The second part of our Science of Using Science project therefore ventured into the Wild Wild Social Science West, looking for suggestions and insights on what other interventions could be applied to support EIDM. The results of this scoping review will be presented in the next blog of this series and touch, for example, on:

  • The role of information design, risk communication, and narratives when communicating evidence.
  • The importance of considering behavioural norms in our theory of change for evidence use.
  • Collaborative overload and the insights to be gained from social network analysis.
  • The continued rationale for consulting the organisational change and learning literature.
  • The need to get serious about evidence use nudges, evidence literacies, and EIDM apps.