Realist Evaluation for Impact Evaluation in South Africa

Nombeko Mbava

Nombeko Mbava is a PhD student at Stellenbosch University with a strong research interest in programme impact evaluation methodologies.

Monitoring and evaluation of government performance and programmes has become less of a haphazard, knee-jerk reaction, a nice-to-have or an afterthought, and more of a deliberate, systematic and planned process. This is primarily because taxpayers are increasingly vocal, demanding that government account for the use of public funds. The public is increasingly dissatisfied with the huge budgets spent on public programmes that fail to demonstrate conclusive results and clear impact.

It used to be that government would periodically report that millions, even billions, of South African Rands had been spent on certain ‘big ticket’ programmes, resulting in certain outputs. That was generally where the story ended. Now the public increasingly demands value for money and asks ‘so what?’. Accountability is demanded for the vast resources spent, and for demonstration of the ultimate outcomes and impacts in the lives of the intended beneficiaries, some of whom are the poorest of the poor and the most marginalised.

We hear a lot about the ‘results agenda’ and the ‘impact agenda’, driven first by the Millennium Development Goals and now by the Sustainable Development Goals. In the public sector, these agendas used to be driven by international development cooperation and donor funds. Now, public sectors globally are taking leadership in monitoring and evaluation as a means of being an accountable state. This has led to the development of ‘country-driven’ rather than ‘donor-driven’ monitoring and evaluation systems.

In this vein, the South African public sector has over the past few years been successfully leading and embedding its country-led M&E system in government. A key aspect of the system is the evaluation of government programmes and policies, as informed by the National Evaluation Policy Framework, which also identifies impact evaluation as one of the main evaluation foci. The framework prescribes a National Evaluation System (NES) that implements and provides oversight over public sector evaluations.

Evaluating impact

Evaluation of the impact of programmes is important. The ‘impact agenda’ calls for impact evaluations that are relevant to policy-making, and that communicate evidence that provides clear policy direction. It encourages progress from the monitoring of programme outputs, towards the evaluation of outcomes and impact. This has resulted in the focus on programme impact evaluation, driven by calls for evidence of ‘what works for whom, why, how, when and under what circumstances’.

Internationally, the appropriate methodological approaches for impact evaluation remain a hotly debated issue. This is because some of these approaches have been found wanting – by some – when used to assess attribution and causality, and ultimately to find out ‘what works’. As a result, policy-makers are left in the dark as to the key drivers of programme success, or the lack thereof. This has implications for applying programmes in other settings. In order to judge a programme’s impact, strong evidence of what works in terms of programme efficacy has become a necessity.

Impact evaluation in South Africa

Within this context, I recently conducted a study (forthcoming) to explore the methodological approaches applied in impact evaluations in South Africa. These, as well as the views of policy decision-makers, commissioners and implementers of evaluations, were investigated in order to establish the usefulness of the evaluation results in offering new insights. In addition, the study sought to establish the potential value of Realist Evaluation as a suitable approach in the methodological toolbox of impact evaluations in the South African public sector.

Emerging research findings indicate that impact evaluations are not yet widely represented within the National Evaluation System. The most prevalent types of evaluations currently carried out in the public sector are implementation and diagnostic evaluations. The impact evaluations of social programmes that have been commissioned by the South African public sector have usually adopted experimental designs which, by virtue of their design, pay little attention to coherent programme theories of change.

A key finding from this study was that the expertise for the design of impact evaluations specifically for complex interventions is mostly outside the public sector. Impact evaluations that have been completed have largely been led by multinational expert teams who had the skills and know-how to design highly complex evaluations.

In addition, commissioners and implementers of evaluations in the public sector have indicated that the evaluation methodologies, and the way evaluations are designed, pose critical limitations: they are not always appropriate to inform the needs of policy-makers. Among these factors was the absence of a theory of change that establishes how the programme works, in what context and under what conditions. Other factors highlighted the limited utilisation of evaluation evidence in policy-making, as the policy cycle often progresses without diffusing the available evidence into policy-making. More critically, public sector budgetary constraints affect whether, and what type of, evaluations are actually done.

The potential value of Realist Evaluation   

Realist Evaluation (also called ‘Realistic Evaluation’) is located in the ‘methods branch’ of evaluation schools of thought. It is a theory-driven method which makes a programme’s theory of change explicit. It does this through an explanatory focus which seeks to understand and interrogate ‘what works, for whom, in what context and in what respects’. It specifies how the combination of the programme’s context, in what is believed to be a complex social system, and the programme’s mechanism of change contribute to the observed outcomes.
Contextual conditions under which programmes are implemented are critical. Social programmes are influenced by their surrounding social environments. The same programme will thrive in one social environment and fail in another setting due to surrounding circumstances and other contextual factors. The programme’s mechanism of change is largely influenced by the reasoning of the intended stakeholders of the intervention. Stakeholders include the intended programme beneficiaries, the programme staff, policymakers and other actors who are involved in programme implementation.

When a programme is planned, its theory of change should explicitly indicate the pathways to change. If these propositions are accurately predicted, the observed outcomes should be more or less as envisaged and in sync with the programme’s overarching aims. However, if the programme stakeholders do not respond in accordance with this programme theory, the integrity of the supposed programme implementation chain is weakened.
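To make the realist logic concrete, a single context-mechanism-outcome (CMO) proposition from a theory of change can be sketched as a simple data structure. This is a minimal illustration only; the programme details and names below are invented, not drawn from the study or from any South African evaluation:

```python
from dataclasses import dataclass

@dataclass
class CMOConfiguration:
    """One context-mechanism-outcome proposition from a programme's theory of change."""
    context: str           # the setting in which the programme operates
    mechanism: str         # how stakeholders are expected to reason and respond
    expected_outcome: str  # the outcome the theory predicts in this context

def in_sync_with_theory(config: CMOConfiguration, observed_outcome: str) -> bool:
    """Return True if the observed outcome matches what the programme theory predicts."""
    return observed_outcome == config.expected_outcome

# Hypothetical example: the same intervention theorised for one context
urban = CMOConfiguration(
    context="urban settlement with strong local institutions",
    mechanism="beneficiaries trust programme staff and take up services",
    expected_outcome="increased service uptake",
)

print(in_sync_with_theory(urban, "increased service uptake"))  # prints True
print(in_sync_with_theory(urban, "no change in uptake"))       # prints False
```

The point of the sketch is that a realist evaluation tests such propositions one context at a time: the same mechanism in a different context would be recorded as a separate CMO configuration with its own expected outcome.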

Therefore, within the wider international evidence-based policy-making arena, Realist Evaluation has emerged as a key contributor to the systematic review of policy evidence. Whilst in some instances poor application and misinterpretation of the method persist, Realist Evaluation is increasingly applied in public sector interventions across all policy environments. It has been found most suitable for complex interventions where gaining insight into the programme mechanisms and programme efficacy is a key objective.

These were some of the conclusions drawn in the 2012 report commissioned by the UK’s DFID entitled ‘Broadening the Range of Designs and Methods for Impact Evaluations’.

Realist Evaluation, like all other theory-driven approaches, enhances methodological rigour. It requires an advanced understanding of programme theory and research skills, and is time- and resource-intensive. A Realist Evaluation design cannot be a regular part of all evaluations: some evaluations do not require such depth and rigour to answer the evaluation questions, and may be served by less probing strategies.

A Realist Evaluation design can be gainfully adopted especially in the following cases:

First, where evaluation questions are asked that seek to find knowledge and insight about the workings of a programme.

Second, where a programme is being implemented in a new context with no previous evidence of how it might work.

Third, where a programme is being adapted in a different context.

Fourth, in instances where outcome patterns contradict those of prior implementations.

The approach may serve to confirm and provide empirical evidence of how the programme works, why, under what circumstances, and who can most benefit from it.

Conclusions

The small number of impact evaluations conducted annually is a shortcoming of the National Evaluation System in South Africa, as impact evaluation is one of the evaluation types prescribed in the National Evaluation Policy Framework. This might point to capacity and capability challenges in conducting impact evaluations, which are arguably the most theoretically rigorous and resource-intensive type of evaluation.


There are also indications that there is currently a strong focus on programme design and implementation. This signals that the NES is attentive to improving the performance information emanating from the policies and programmes of government. Quality baseline data from programme monitoring systems should enhance programme impact evaluations, which in turn should provide policy-makers with the requisite evidence to accurately assess programme failures and successes.

This is specifically relevant when considering the South African Government’s Outcomes Approach, which seeks to ensure that the measurement of outcomes and impacts, rather than activities and outputs, remains the key focus of government. In this environment, demand for impact evaluations is expected to increase. It is therefore important to prioritise country-led and country-developed capacity and capability to conduct these evaluation designs.

In this context, where the capabilities of home-grown evaluators have to be strengthened and nurtured, high level international expertise in impact evaluation should be sought and utilised where appropriate. However, this should be balanced with evaluation skills transfer and development that fosters the country’s evaluation know-how.

Impact evaluation methodologies that enlighten policy makers as to how and why programmes work should be preferred. Theory-based methods, such as Realist Evaluation, serve to open the ‘black box’ and provide the ‘enlightenment’ aspect that is sorely missing in many programme evaluations.

Lastly, the observation from Pawson and Tilley (1997:147) is illustrative in this regard:

Evaluation reports simply indicating whether or not there has been a change associated with the introduction of a programme should not be commissioned or accepted by policy-makers. They are of no value, since nothing can be learned from them about what and what not to do in the future. Evaluation reports must identify not only the changes associated with the introduction of a programme but also what brought them about.


4 thoughts on “Realist Evaluation for Impact Evaluation in South Africa”

  1. Nombeko, thanks for this posting. The development of the evaluation system was not the result of a demand for accountability, but more that government realises it is not delivering to the satisfaction of citizens and wants to improve its delivery, responding rather to a political challenge. So there is accountability in a broad sense for government as a whole, or for a few services like water, but not in general.

    I agree with you that there are few impact evaluations in the NES. That is not a shortcoming per se but a realisation that most programmes are not impact evaluation-ready. They don’t have good programme definitions, don’t have theories of change, don’t have baseline data and don’t have a comparator/control. So to ascertain the attribution of changes is very difficult. For that reason we have focused on implementation evaluations to get on top of how programmes are working, get a solid theory of change and get them redesigned to deliver that theory of change, and later it would be appropriate to look at impacts. In addition undertaking impact evaluations can take several years, and our evaluations typically complete in around a year, so there is faster feedback to implementation.

    There are some impact evaluations that have been done, and the best way we can do it currently is to see if the programme theory is working. The other challenge is that politically it is often difficult to get rollout in a randomised way that would facilitate impact evaluation. For example on informal settlements, at an early stage we discussed whether the rollout of upgrading to some 130 informal settlements could be random, and then we could compare those upgraded earlier with those later. However the department did not see it as possible to rollout in a random way. So there is still much to do to get impact evaluation established, and for impact evaluations to be designed in from the outset, but meanwhile if the programme is not even working it is a waste of time to do an impact evaluation. So an appropriate key focus at the moment is to get large programmes working on a solid footing, and that gives us much greater chance of impacts.

    • Dear Ian,

      Thank you for the insightful and valuable perspectives on this discussion.
      The impracticality of some impact evaluations at present whilst programme M&E systems are being strengthened is appreciated. Equally, the focus on programme theories of change can also serve to further strengthen the NES and possibly result in evaluation findings that are meaningful and useful.

  2. Dear Nombeko, what an insightful and thought-provoking read. Due to multiple deadlines and delivery targets in our project plans, we often neglect to make provision for pre-, during- and post-evaluation intervals.

    I am highly impressed with your article and sincerely appreciative of the fact that you took time and effort to remind us as to the importance and impact of Evaluation.

    Regards,
    Francois Koeberg
