Stretching boundaries of evaluation practice with PIALA in Ghana

Adinda Van Hemelrijck

Adinda van Hemelrijck works freelance in international development on design, evaluation and learning around impact.
The PIALA Adventure: Setting out in Ghana

September 2014. After waiting several months for the green light, I finally found myself queuing, almost impatiently, to board my first flight to Ghana. At last I would get this second PIALA pilot on the rails. PIALA stands for Participatory Impact Assessment & Learning Approach. It was piloted with funding from IFAD and BMGF in the impact evaluations of two IFAD-funded government programmes: first in Vietnam in the Developing Business with the Rural Poor Programme (DBRP) in 2013, and next in Ghana in the Roots & Tubers Improvements and Marketing Programme (RTIMP) in 2015.

My first task upon arrival in Accra would be to support the national procurement of the research firm. Right after that we would start the training and design work. While boarding, though, I realised it would be just me there in Ghana: everybody in my team was busy with other work. So I would need to produce the golden egg and get right what had gone wrong in Vietnam. For a moment I was seized with anxiety. Looking back more than a year later, I realise that this opportunity bestowed on me a golden goose on top of the golden egg.

The opportunity: A second pilot

The whole PIALA adventure started almost four years ago with an email from Edward Heinemann (IFAD policy expert) asking if I’d be interested in leading an IMI-funded project (Innovation Mainstreaming Initiative) to develop a scalable and cost-effective participatory impact assessment approach focused on learning. I was quite sceptical at first, asking nasty questions such as “learning by whom”, “participation what for”, and “why would people want to waste their time participating in IFAD’s impact evaluation?”. But that was exactly what Edward wanted.

Three months later we organised a workshop at IFAD to clarify these questions and got the BMGF on board as a partner and co-funder. We asked Irene Guijt, Andre Proctor and Jeremy Holland to join our team and help us think through the concept of PIALA. We wanted a team of people who’d be as passionate as we were about the idea of developing an approach that would leave a mark –or in the team’s own words “would help a large multilateral agency stretch the boundaries of its own practice and contribute to something that might influence prevalent thinking/practice in impact evaluation” (March 2013). Obviously, this could only happen with Edward championing the work and mobilising the necessary support and interest at IFAD.

So I was armed with good thinking and useful inputs from my core team, strong backing from IFAD headquarters, and lots of feedback on the first Vietnam pilot from a group of experts inside and outside IFAD. Important pieces of our PIALA construct were still missing, though, and I knew this second pilot would be very challenging because of the expectations, the scale and scope, and the amount of funding we had received. The Ministry of Food and Agriculture (MoFA) and the IFAD Country Office (ICO) in Ghana had decided this impact evaluation needed a ‘full scale & full scope’ design, covering the entire country and the entire programme. This implied a substantial scale-up of our first pilot in Vietnam, which was conducted in only one province.

Pondering design options

Prior to the procurement and design, I had given the commissioners three major design options with detailed budget estimates and explained the value for money they could expect from each. The first was a ‘full scope & limited scale’ design that emphasises learning about programme contribution to impact in selected cases under specific conditions; it would have saved time and money on scale, but would not have permitted generalisation of findings.

The second was a ‘limited scope & full scale’ evaluation that would assess the effectiveness of one or two particular aspects or mechanisms of the programme on the entire population, saving time and money on scope but running the risk of reaching the wrong conclusions.

This is the tendency in mainstream practice –slicing a programme into measurable parts and looking at intervention and effect for each in isolation– which often leads to perilous recommendations when it comes to inclusiveness and sustainability of impacts.

For example, a cost-effectiveness study of Farmer Field Forums in Ghana recommended scaling up because of the high adoption of new technologies positively impacting livelihoods, while the PIALA evaluation six months later showed that, in the economic downturn, this had contributed to market saturation, offsetting the initial positive effects and negatively affecting livelihoods across the entire country.

The third, ‘full scale & full scope’ design option is the best for reporting and learning from a systemic perspective, but is obviously more expensive and requires greater research capacity to handle the scale and volume of data. Fully aware of the consequences and related costs, the commissioners chose the third option and expressed their trust by signing off on a budget of nearly USD 233,000.

Those who know how much a nation-wide mixed-methods impact evaluation can cost in an African country with long distances, bad road infrastructure and high transportation costs must think this is a joke. For many, though, it’s an evaluation budget one can only dream of. For us, it was working on a shoestring –knowing that, in all, over 2,000 people took part in the various methods of this evaluation, of whom 750 also participated in the sensemaking workshops we organised as part of the analysis (23 at district level with a total of 650 participants, 70% of whom were beneficiaries, and one at national level with 100 participants, of whom 30% were beneficiaries).

The research team leaders proved masters at stretching every penny and meticulously keeping track of every one spent or saved.

Design by participation


The scope was another major challenge, particularly from an analytical point of view. How do you process large amounts of data on the performance of many different mechanisms, link them to observed changes in many different areas, and then link these to data on household impact, in order to come up with rigorous numbers and credible explanations of the programme’s share?

The classic approach would be to identify control groups and compare against the baseline. But what if the baseline is outdated and incomparable, control groups are too costly and difficult to find (e.g. due to programme expansion, self-targeting, innovation generating emergent and unpredictable results, and the high causal density of other programmes and influences), and, besides, the comparison won’t produce the desired explanations? This challenge was brought to the table in a design workshop with commissioners and national stakeholders.

The design workshop sought to build a shared understanding of RTIMP’s Theory of Change (ToC) and its causal claims, and to reach agreement on the mechanisms, assumptions and questions on which the evaluation needed to focus. There it was decided NOT to drain resources into identifying and inquiring into control groups, but to concentrate on the supply chain systems that the programme had developed and replicated across the country. The programme had tried to develop commodity chains supplied by these localised supply chain systems and linked to national and export markets and industries. The evaluation had to inquire into four of these (gari made from cassava, high quality cassava flour, cassava flour for the plywood industry, and fresh yam for export) and unpack to what extent and why their supply chain systems had (or had not) affected livelihoods under various circumstances.

So we needed to find an alternative way to do the causal analysis.

Causal analysis, the PIALA way

We developed a configurational method using systemic heterogeneity as the basis for the counterfactual analysis. Instead of the classic comparison of ‘treated’ and ‘control’ groups of households, we analysed and compared supply chain systems with different systemic configurations –configurations of differentiated treatment (by selected programme mechanisms), functional conditions, influences, outcomes and impacts. The textbox shows the type of findings this generated.

RTIMP Evaluation Report

The supply chain systems for the four commodities were quite different, but could loosely be defined by their supply chain leaders (such as processing centres and local factories) and their catchment area of suppliers (smallholder producers and processors). Geographically they covered on average three communities. Their influences on livelihoods and household poverty differed considerably depending on conditions (e.g. road infrastructure, market vicinity) and other influences in the area (e.g. other programmes, macro-economic influences, migration, Ebola).

So we randomly sampled 25 districts containing 30 of these supply chain systems, proportionally representing the four commodities and covering all zones of the country. Within these, we then randomly sampled 900 households (837 after selection) for the household survey and quasi-randomly sampled 1,180 beneficiaries for the PRA-based and distributive feedback methods. In addition, we conducted over 100 key informant interviews with local and national officials, service providers, industries and bankers.
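To illustrate what “proportionally representing the four commodities” can mean in practice, here is a minimal sketch of proportional sample allocation with largest-remainder rounding. The commodity counts are entirely hypothetical (not the actual RTIMP figures); only the sample size of 30 systems comes from the text.

```python
# Hypothetical counts of supply chain systems per commodity.
systems_per_commodity = {"gari": 40, "HQCF": 25, "plywood_flour": 20, "fresh_yam": 15}
total_sample = 30

def proportional_allocation(counts, sample_size):
    """Allocate sample_size across strata in proportion to their counts,
    using largest-remainder rounding so the allocations sum exactly."""
    total = sum(counts.values())
    raw = {k: sample_size * v / total for k, v in counts.items()}
    alloc = {k: int(r) for k, r in raw.items()}          # floor each share
    leftover = sample_size - sum(alloc.values())
    # Hand the remaining units to the strata with the largest remainders.
    for k in sorted(raw, key=lambda k: raw[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

alloc = proportional_allocation(systems_per_commodity, total_sample)
print(alloc)
```

Within each commodity’s allocation, the individual supply chain systems (and then households within them) would be drawn at random, as the evaluation did.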

PIALA’s systemic ToC approach formed the backbone for the entire evaluation. It involved a collaborative process of reconstructing and visualising the programme’s presumed theory based on a desk review and conversations with the designers and implementers of the programme.

The outcome was a diagram depicting the systemic links and feedback loops in and between the programme’s causal claims, and the mechanisms and external influences therein, collectively leading to the desired impact. In the design workshop, this was used to decide on the focus and frame of the evaluation. Data collection and collation were then organised around the different claims and links in the ToC that formed the focus of the evaluation. Different methods were selected to inquire into different links with different groups, complementing and building on each other analytically but also partially overlapping to permit triangulation.

This enabled us to build a strong evidence base about the systemic changes and impacts for each of the supply chains. The configurational method was then used to compare the evidence across all the supply chains and identify and analyse the patterns. In the sensemaking workshops, stakeholders were engaged in constructing a causal flow diagram with the evidence, mirroring (thus validating or refuting) the ToC.
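As a rough illustration of the configurational comparison described above (with entirely hypothetical data, mechanism names and outcome scores), one can think of it as grouping supply chain systems by their configuration of treatment mechanisms and conditions, then comparing outcomes across those groups rather than across treated and control households:

```python
from collections import defaultdict

# Toy records: each supply chain system has a configuration (which programme
# mechanisms it received, under which local conditions) and an observed
# livelihood outcome score. All names and numbers are invented for illustration.
systems = [
    {"mechanisms": ("FFF", "matching_grant"), "conditions": ("good_roads",), "outcome": 0.8},
    {"mechanisms": ("FFF",),                  "conditions": ("good_roads",), "outcome": 0.5},
    {"mechanisms": ("FFF", "matching_grant"), "conditions": ("remote",),     "outcome": 0.3},
    {"mechanisms": (),                        "conditions": ("good_roads",), "outcome": 0.2},
]

def compare_configurations(systems):
    """Group systems by configuration and average the outcome per group."""
    groups = defaultdict(list)
    for s in systems:
        key = (s["mechanisms"], s["conditions"])
        groups[key].append(s["outcome"])
    return {key: sum(v) / len(v) for key, v in groups.items()}

averages = compare_configurations(systems)
for config, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(config, round(avg, 2))
```

The real analysis was of course qualitative as well as quantitative, but the contrast between configurations (rather than between individuals) is the core of the counterfactual logic.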

Figure: Causal claims & links in the RTIMP Theory of Change.


PIALA as a joint learning journey, a partnership

Naturally, even the best package of methods, tools and guidance can’t do the job on its own. Capacity really trumps all! Capacity to thoughtfully, consistently and responsively employ the methods and tools is indispensable for conducting a ‘full scale & full scope’ impact evaluation using PIALA.


In Ghana, despite the shoestring budget, the evaluation was able to answer all the questions without sacrificing quality, largely due to the researchers’ motivation and experience with large-scale participatory and mixed-methods research. Their solid grasp of the ToC helped them facilitate the processes and collate and triangulate the data rigorously. Data collation and quality monitoring (including reflections on participatory processes) were part of their daily practice. The teams were able to identify data gaps and weaknesses in time, before organising the district sensemaking workshop on the last (4th or 5th) day of their inquiries in a district.

Crucial too were the intensive fieldwork engagement and drive for quality of the evaluation coordinator, Glowen Kyei-Mensah (who wrote the first PIALA blog and is the managing director of Participatory Development Associates, the firm that conducted the evaluation), as well as my presence during the first weeks to support her and the teams in the field whenever needed.

Moreover, the strong leadership and active engagement of IFAD’s Country Programme Manager Ulac Demirag and his team fostered great interest among the partners and stakeholders. Overall the impact evaluation was experienced more as a joint learning journey, a partnership rather than a technical consultancy.

Into the future

So, learning from Vietnam, the pilot in Ghana turned PIALA into a golden goose with feathers of pure gold, generating staggering interest. We’ve presented our golden goose at 3ie, IDEAS, IDS, BMGF and multiple Evaluation Society conferences around the world, and published our first article in the CDI series at IDS.

And the story of our golden goose continues, with new publications (including a guide) and new cases forthcoming, and perhaps even a PIALA training course later this year in Accra…
