Rethinking Success in M&E

Mokete Mokone

Mokete Mokone is an Evaluation and Learning Specialist with the Jobs Fund at the South African National Treasury.

Measuring programme effectiveness should allow for an opportunity to reflect on results that were not primarily intended by an intervention. The expectation that evaluation reports will conclusively establish the outcomes or impacts of an initiative might not be realistic. Perhaps we need to rethink what ‘success’ should look like, and consider what alternative perspectives might guide us.

In fact, both those who commission and those who respond to terms of reference do so with the anticipation that an evaluation will confirm the strength of a theory of change. But what happens when an evaluation is not able to prove the effectiveness of a programme? Should we not have other indicators of success that can be measured to assess impact?

In this discussion I highlight some ideas to ponder when evaluation commissioners and practitioners negotiate the type of results to be investigated, while still making the process part of learning and improving the sustainability of programmes.

We need to value and use monitoring   

Monitoring should be used to track what is happening, provide a quick response mechanism, and make remediation possible during implementation. Instead of waiting until the end of programme implementation to make critical decisions, there is a feedback loop that should be fully utilised. Monitoring has become synonymous with data collection and reporting, but the analysis stage is frequently neglected. As a consequence of funding requirements, organisations merely collect data and complete various templates for funders, without necessarily understanding for themselves the likelihood of success or the early warning indicators.

So in the rush to be compliant, we often make the error of underestimating the value of monitoring data while missing material information about programmes. Monitoring data should actually be used as a first indicator of whether the programme is on track, and allow organisations to take remedial action to improve or add a supplementary component to allow it to be (more) effective.

This means that the same vigour and intelligence should be applied in both evaluation and monitoring concerning what type of programme data gets collected and analysed. Such measures also mean that programmes can negotiate early on with funders what results they will be able to realise and what success will look like as the programme unfolds over time, instead of waiting for a final evaluation to determine whether funding will be cut or whether the programme should continue.

Organisations should have regular monitoring reports that they use to make sense of their data. This would also address the challenge of commissioning evaluations at the end of the programme and then realising that key information is not available or is insufficient. Regular feedback on trends, forecasts and progress should be part of monitoring, which would make for better implementation. For example, if a programme collects information about the success of the candidates who completed their training, yet does not make an effort to understand the intake of candidates, the issues raised during course attendance, and the reasons for drop-outs, it will not have a full picture of the merit of its implementation processes or the likelihood of success.

In short, we should focus more on adaptive management.

We need to rethink ‘failure’

One of the most difficult traits of being human is the inability to embrace failure; this is even more true for organisations. When we design programmes, we make assumptions about how they will work and what results they are likely to achieve. When this is not confirmed, we discard the entire process as a failed attempt, without seeking to diagnose the most important design or implementation weaknesses.

In the process, we miss a valuable opportunity to learn and improve current and future programmes. The word ‘failure’ – especially for public and donor-funded initiatives – would mean wasteful expenditure, with blame assigned to unprincipled officials. Instead, we can take a holistic view and begin to take stock of the lessons that can be learned.

Rethinking failure would mean assessing the ‘learning experience’ in order to improve our own understanding, processes and systems. Even when a programme has not realised its intended results, there is value in assessing what can be learned. In fact, unintended positive results might actually mean success, even if they are different to what was intended. Because a programme is multi-dimensional, it is not merely about producing “this and that”. It can also be about the immediate outputs that have been realised, and how next steps can be taken to make the results (more) sustainable.

For example, consider an educational programme that is aimed at exposing young people to entrepreneurship, and supporting them to start their own businesses. The programme may fail to realise those new businesses, but impart skills about financial management, personal development, branding and risk-taking. An alternative approach to evaluating this would be to look at how these skills might have changed, or have the potential to change the lives of the young people – and consider what should be done next to help them get to a next level of development.

Good lessons may also emerge when assessing for example any partnerships and networks that were established, or how the programme was able to deliver at output level – that is, it is not always necessary to focus on impacts.

We should consider unintended positive consequences

It is quite normal that some programmes will not realise the results that they have been funded to achieve. However, as mentioned, this is not necessarily a bad thing. There are other key indicators of success that can still be measured, even if they have not been contracted with the funder. These may still offer valuable information about the impact of the programme beyond what has been funded. This idea is discussed at length in Jonny Morrell’s book which looks at uncertainty and unintended consequences, and advises evaluators to accommodate these surprises as part of their assessments.

I agree that it is important to link the results with what has been funded. However, this does not mean it has to be done in isolation of other factors. There are spin-off effects of programmes that often do not make it as part of evaluations, either because they are too broad or the link is not quite clear. There is still a case to be made for the unintended positive consequences of programmes, which are often not identified or given the credit they deserve.

This does not mean that if programmes do not meet their actual targets we should go on a wild chase for any positive ‘story’. On the contrary, it is about taking account of the contribution of the programme to outcomes or impacts that were not expected. This approach seeks to identify the role of the programme among other key activities, and to actually assess the level of impact even though some expectations may not have been fulfilled.

For example, a programme aimed at job creation among rural youth might enable the majority to start their own businesses. A study of unintended positive consequences will take account of the strengths of the training, the newfound confidence and resourcefulness of candidates, and access to finance opportunities – in addition to, say, expected improvements in the ability of candidates to improve their own and their families’ material standards, and access to better social services such as health, childcare or transport.

In summary

Evaluations should be used to assess the full impact of interventions. The challenge will always be to find the right balance between evaluating the direct results of the programme (which have been funded) and evaluating the broader systemic contribution of the programme (which has not been funded), and determining which tells a more convincing story of success. Or perhaps it can be a combination of the two rather than a single approach. There should be an overall intention to understand how a programme contributes to better outcomes and makes a positive impact, whether direct or broader.


4 thoughts on “Rethinking Success in M&E”

  1. Dear Mokete, I agree with most of what you say. Where programmes and projects have not delivered as expected, lessons can only be learned if the evaluations tell us why these initiatives have failed. If we understand why they have failed, then we can learn what to avoid or change. Further, there is a need to define Results (Impacts, Outcomes and Outputs) clearly from the outset, so that resources can be utilised to produce the intended results. This is where the challenge is. Most projects are activity-based and not results-oriented. While activities may be interesting, they do not produce the intended results, and hence accountability for Results and Resources becomes a challenge. Indeed, projects do produce many positive unintended results; evaluations should take stock of these and show how they contribute to the overall development process. In most cases this is not well argued, and hence its value is lost.

  2. Brilliant assessment! Monitoring project progress in terms of pre-determined outputs should not be linear, but should also take into consideration the “off-cuts” of the project, whether positive or negative. These all feed into, as you indicated, closing the loop towards understanding what works, what can work if measured differently, and what can work in the future, based on the “unintended” learning. This allows a different approach towards projects: one of learning, and not of an “inspector” with a checklist of outputs to be achieved.

  3. Hi Mokete,
    Thanks for the enjoyable article.
    I agree with you on the importance of monitoring and of analyzing monitoring data to derive lessons. I also think that the quality of evaluations is significantly influenced by the quality of the monitoring work conducted. The major reason this is not done well lies in programme design.
    Take your example of the educational programme aimed at exposing young people to entrepreneurship and supporting them to start their own businesses.
    If that is all the information regarding the programme then one might find that the evaluation conducted will only reveal information related to whether young people were exposed to entrepreneurship and whether they started their own business following the programme. However, if it was understood upfront that in the course of the programme there will be an imparting of skills, which may have the potential to change the lives of the youth, then the evaluation is also able to take this into account and report on it as part of the evaluation.
    It has been my experience that very often, when presenting evaluation results, clients ask about the unintended consequences of a programme. They present ideas that they did not include at the commencement of the programme, as if they were always intended, and thus conclude that the evaluation was not rigorous enough, when in fact the problems began at the programme design phase.
    It is also quite often true that the costs (time, money, people, and resources) of conducting a thorough evaluation dissuade clients from including certain questions in the scope of an evaluation. However, they realise much later that such information would have been beneficial. After all, for every extra dollar spent on an evaluation, the programme must generate considerably more value to justify the cost.
    I would recommend that evaluators (different to those who will actually conduct the evaluation) are included at programme design phase to provide a more holistic understanding of what may happen through the programme. This will enable the final evaluation to capture a much wider range of possible outcomes and impacts. Also, I would encourage robust discussion regarding the budgets for evaluations prior to the beginning of a project in order to determine what kind of evaluation results can be determined and reported on.
