Investing in capacity: our 'barriers to evidence use in government' workshop for government officials in South Africa - Part 2

2015-09-11

The challenge for modern governments is to deliver high-quality, accessible services that transform people's lives and lead to prosperous and caring societies.

How to achieve that often occasions intense debate, but what is clear is that you need tools that are fit for purpose and that give government and external stakeholders the opportunity to assess progress towards policy goals. At a recent University of Johannesburg Building Capacity to Use Research Evidence (UJ-BCURE) workshop, these were precisely the issues that occupied a select group of government officials as they deliberated on the practical challenges that stand in the way of increasing the use of evidence in government decision-making.

We convened three thematic groups to address this issue and used our programme experts as sounding boards, providing useful feedback to the officials. The three groups discussed the challenges of developing credible Monitoring and Evaluation (M&E) systems, the role of Organisational Development (OD) in promoting evidence use, and how Information and Communications Technologies (ICTs) in education can assist in achieving education goals.

In reflecting on the challenges, it is perhaps best to invoke the spirit of South Africa's key public finance law, the Public Finance Management Act (PFMA). This law requires departments, through a routine and regular process, to establish how best to deliver services (cost, relevance, quality) and to devise alternative service delivery mechanisms that improve the quality of services and reduce their overall cost.

In the M&E thematic group, officials noted that achieving this ideal state is made difficult by a lack of resources (money and expertise) and that the resulting M&E systems are not yet performing this optimal function. However, the quality and usefulness of M&E systems vary across departments, as evidenced in the reports of the Office of the Auditor-General (AG). We heard that the problem sometimes lies in an inability to construct valid performance indicators, or in indicators being changed midstream to bring about a more favourable assessment; in many instances, managing information across spheres of government presents formidable problems for M&E systems. Given the outstanding work done first by the National Treasury and subsequently by the Department of Planning, Monitoring and Evaluation (DPME), there is now widespread acceptance that these M&E systems are necessary, but meaningful resourcing still eludes many departments.

We also heard that although M&E units are now commonplace, there are instances where esoteric scientific evidence (especially in more science-orientated departments) requires skilful translation into understandable social and political language in order to make its use acceptable. Finally, national departments that oversee other departments appear to face different M&E challenges from those that implement policy directly. The challenge relates to the ability of a national department to draw relevant and credible data from another sphere of government over which it has limited authority and control.

In the OD group, three pertinent issues stood out: how departments choose to use intergovernmental forums, matters related to the policy implementation cycle in departments, and how priority-setting in government affects the delivery of services. Members of the OD group recognised that intergovernmental forums, intended to deliver policy consensus, are often under-powered, so the right people are not present to take the big decisions. Instead of promoting consensus and cohesiveness in the way policy is implemented, sector-specific departments may actually contribute to fragmentation in the implementation of policy. Unplanned requests that emanate from higher levels of government present yet another problem. Participants argued that priorities that develop at a higher level, and which require immediate action, can scupper a more considered way of implementing programmes and gathering the necessary evidence.

Finally, the OD group made observations about the policy implementation cycle and asked whether policies that never get implemented can really be called good policies. Members of the group raised concerns about the systems in place to process service delivery information (M&E systems) and the ability of departments to use such information to improve service delivery (planning). A full policy implementation cycle is needed to shift implementation from mere concerns about access to deeper questions about the quality of those services.

For the ICT in Education group, two central issues were flagged: the ability of the education sector to leave its imprint on the implementation of ICT policies in schools, and the undeveloped nature of evidence in this important sector. Officials in the group pondered whether the large roll-out of ICT projects in schools is sensitive enough to education realities. They answered this question in the negative and, although openly welcoming of large-scale roll-outs, argued that the needs of the education sector should prevail in such interventions. They were also worried about the state of evidence in the sector, given the largely unco-ordinated roll-out and implementation of ICT projects in schools. According to the officials, clarity is needed on what works, when, and under what conditions, and they are determined to produce useful and credible evidence that would enable ICT to contribute to achieving education goals.

So what have we learned? One, government is not some undefined homogeneous entity: pockets of excellence are beginning to emerge in areas as diverse as effective data collection methods across government spheres, the development of integrated data systems across sectors, and the continued, successful capacity development of officials.

Two, as the role of the DPME deepens, more useful evaluations are conducted and these are directly fed into the service delivery improvement cycle.

Three, we have learnt the shared value of open sessions, which allow detailed contributions from participants and discussion of specific issues arising within their workplaces. We have a remit to 'build capacity in the use of research evidence', something which is often interpreted as the provision of 'training'. We prefer to provide more open 'workshops' where professionals within and outside of government can discuss issues related to research use, and we suspect we learn as much from our colleagues in government as we are able to share with them, particularly through open sessions such as this.

Finally, although the availability of resources continues to present challenges, their effective utilisation and more joined-up service delivery solutions may improve overall outcomes and help generate better evidence of what works.