Does evaluation really matter? Lessons from life experiences

2015-02-20

By Louis Gerald Maluwa 

Professional evaluation is a relatively young discipline, one that evolved from various academic fields into an independent scientific discipline in the early 20th century. By definition, evaluation refers to the systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and/or results, with the aim of determining the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and sustainability (Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD-DAC) 2002:21-22). Since its emergence, evaluation has attracted mixed views. While some regard it as an essential professional activity designed to analyse phenomena more systematically and objectively, others view it as a natural and trivial activity that requires no professional know-how. There are even those who regard evaluation as a completely unnecessary practice, nothing but a waste of resources. At the heart of this debate lies a central question: Does evaluation really matter? Some experiences from my own life may help to answer it.

Sometime in 2013, while doing my master’s degree in Public Management and Governance, I approached a faith-based organisation in Johannesburg, South Africa, which runs a number of programmes and projects aimed at serving the socio-economic needs of its church members. The purpose of my visit was to ask the organisation’s leadership to allow me to evaluate one of their social programmes for my master’s dissertation. The response I got was somewhat disheartening: I was told there was no need for me or anyone else to evaluate any of the organisation’s programmes or projects because, in the leaders’ own words, “everything was going according to plan, and all listed programmes were very successful”. I commended them for a job well done but, unsatisfied with this response, suggested that the proposed evaluation could in fact be a great opportunity for them to demonstrate to the church the commendable work they were doing. The subsequent response was a bigger blow; without mincing words, I was told that it was irrational and impermissible for any individual to evaluate any of their programmes, for God alone was their monitoring and evaluation (M&E) officer. In other words, evaluation had, at that time at least, no value in this organisation.

In a surprising turn of events, the same organisation called me a year later, in 2014, requesting that I conduct an evaluation of their bursary programme. Apparently, the programme’s funders had demanded that the organisation demonstrate the programme’s impact on church members. It was a very tough assignment for the organisation’s management: they had no M&E system in place, nor did they maintain any data on the bursary programme. It was therefore impossible to determine, let alone demonstrate, the programme’s effect or impact. This state of affairs prompted management to recognise M&E as an essential management practice and to incorporate it into the organisation’s strategic management plan.

In a separate incident in 2013, I had the opportunity to evaluate, as an alternative topic for my master’s dissertation, a local economic development (LED) project in the Greater Taung Local Municipality (GTLM) in the North West province of South Africa. Initially, the project owners were not keen to have the project evaluated. For them, the status of their business was clear: the project was underperforming because of inadequate funding, exorbitant land lease costs, a lack of essential farm equipment, the high cost of farm inputs, and poor access to reliable markets. The cooperative’s owners therefore felt it unnecessary to evaluate a project whose reasons for underperformance were obvious. With the persuasion of the GTLM’s LED manager, however, they finally agreed and allowed me to conduct the evaluation. Contrary to those initial perceptions, the evaluation showed that the major cause of the project’s failure was its original business plan, which was impractical and over-ambitious: it had been drafted without consulting the project owners and was not informed by prior feasibility studies. The problems the owners had identified were therefore spill-over effects, not the root cause, and it took a formal evaluation to uncover that root cause.

So what do these experiences suggest?

Evaluation is a vital management practice that should be embraced by all development organisations if they are to achieve and, more importantly, demonstrate programme results. It is an indispensable management tool, aimed not only at keeping an organisation’s programmes in check but also at accounting to project stakeholders for a project’s performance. To this end, evaluation should form an integral part of programme management, from the planning phase through the implementation phase (formative evaluation) to, in the case of temporary programmes, the close-out phase (summative evaluation).

Also crucial to effective evaluation is proper monitoring, which involves collecting, analysing and reporting data on inputs, activities, outputs, outcomes, impacts and external factors in a way that supports effective management (The Presidency 2007:1). Monitoring provides stakeholders with regular feedback on progress and results, and with early indicators of problems that need to be addressed. A programme or project works as a system in which one component feeds into the next, so a flaw in one component is likely to affect the others: a defect in programme inputs may affect programme activities, which may in turn affect programme outputs, outcomes and, eventually, programme impact. Early detection of such flaws through monitoring therefore removes the need to hunt for the root cause of programme failure at a very late stage. Monitoring ultimately supplies the data and information on which evaluation depends, and should thus be an integral part of evaluation.

At the end of it all, Osborne and Gaebler’s viewpoint on measuring results is my touchstone: “If you don’t measure results, you can’t tell success from failure…; if you can’t see success, you can’t reward it…; if you can’t reward success, you’re probably rewarding failure…; if you can’t see success, you can’t learn from it…; if you can’t recognize failure, you can’t correct it…; if you can demonstrate results, you can win public support” (Osborne & Gaebler 1992:147-154).


References:

Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD-DAC). 2002. Glossary of Key Terms in Evaluation and Results Based Management. Paris: OECD-DAC. http://www.oecd.org/development/peer-reviews/2754804.pdf. Accessed 10 February 2015.
Osborne, D. & Gaebler, T. 1992. Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector. Reading, MA: Addison-Wesley.
The Presidency. 2007. Policy Framework for the Government-wide Monitoring and Evaluation System. Pretoria: The Presidency. www.thepresidency-dpme.gov.za. Accessed 10 February 2015.

Disclaimer: The views expressed in published blog posts, as well as any errors or omissions, are the sole responsibility of the author/s and do not represent the views of the Africa Evidence Network, its secretariat, advisory or reference groups, or its funders; nor do they imply endorsement by any of the aforementioned parties.