What am I Doing Here Part 4: Monitoring and Evaluation

No matter what you do or where you work, if you are spending someone else's money, eventually they are going to ask you what the hell you are doing with it, how effective you have been, and what you have been able to accomplish. This need for accountability and documentation is, however, only one part of the much larger picture usually referred to by the conjunctive phrase: monitoring and evaluation (M&E). Having never worked in the social sciences or studied development in school, I have spent the past year having my eyes opened to this fascinating and convoluted world of M&E, sometimes also called Planning, Monitoring, Evaluation and Learning (PMEL).

Though the idea of planning your work, making sure it is on track, evaluating its relative success or failure, and trying to learn from the experience has been around forever, the M&E requirements of development projects have been turned into an art form. There are elaborate methodologies, old-school tricks of the trade, hot new frameworks, and cutting-edge research papers being published on the topic monthly. Careers are being made and lost trying to prove what works and what does not, NGOs are pouring more of their budgets into checking all the latest and greatest boxes, and every few years there is a renewed push by the major multilateral agencies and governments to "get M&E right this time".

Despite all this, there still seems to be an air of general dissatisfaction with our collective ability to find out what works, make sure it happens, and reproduce it somewhere else. There is also the frustrating realization that we may never be able to measure all the complex and nuanced subtleties inherent in change within human systems. Moreover, it may be that these unmeasurable changes are the most powerful and important, yet they remain beyond our ability to understand and reproduce. As an always-on-point Albert Einstein noted, "Not everything that can be counted counts, and not everything that counts can be counted."


There is also a growing feeling within the development world that we need to stop looking at issues like disease and poverty as linear effects stemming from directly linked causes. The so-called "systems approach", which is the antithesis of this, has been gaining steam over the past decade. In this approach, the whole is greater than the sum of its parts, the relationships are just as important as the players, and everything is at once linked and constantly evolving. While this is a far more accurate representation of the complex, emergent, human systems found in development, it throws a bit of a wrench into the whole M&E thing. How do you know where you are going when you can't see where you have been? How do you measure the change you intended to create when everything is changing all the time? How do you learn from one instance of a system when those exact conditions will likely never exist again? These issues are crucial across the board in development and social change projects. Within the impact investing world, where BDSA works and I find myself, M&E is where the rubber meets the road: it is at the crux of proving the central concept that businesses can be drivers of positive social change.

Monitoring: The systematic & continuous assessment of the progress of a piece of work over time, which checks that things are ‘going to plan’ and enables adjustments to be made in a logical way.

Evaluation: The periodic assessment of the relevance, performance, efficiency, and impact of a piece of work with regards to its stated objectives.

The history of M&E begins several decades back, when the rigour of scientific studies and evidence-based results began filtering over to the social sciences and government. Within a few more decades, these ideas had made their way into the development context and had evolved to the point where it became almost impossible to do a project without applying a logical framework (logframe) or Results Based Management (RBM) tool. In the former, the project planner (usually on high, in an air-conditioned office far, far away) lays out the overall intended outcome of the desired change. He or she then works backward to determine the measurable outputs or metrics that would indicate the outcome has been achieved, the activities needed to produce these outputs, and finally the inputs needed for each activity. At the end of the project, the impact of the work is determined by subtracting the counterfactual (i.e. what would have happened if no intervention had been made) from the outcome.
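
To make that backward-planning logic concrete, here is a minimal sketch. The project, field names, and numbers are invented for illustration; they are not drawn from any official logframe template:

```python
# A logframe's results chain, worked out backward from the intended outcome.
# (Hypothetical project and values, purely for illustration.)
logframe = {
    "outcome": "Smallholder incomes rise 20% in the target district",
    "outputs": ["500 farmers trained", "3 irrigation schemes operational"],
    "activities": ["run training sessions", "construct irrigation schemes"],
    "inputs": ["trainers", "construction budget", "local partner staff"],
}

def impact(outcome_with_project, counterfactual_outcome):
    """Impact = observed outcome minus what would have happened anyway."""
    return outcome_with_project - counterfactual_outcome

# If incomes rose 20% with the project, but 8% growth would have happened anyway:
print(impact(0.20, 0.08))  # 0.12 -> 12 percentage points attributable to the project
```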

[Image: the M&E impact chain]

This approach works great in the sciences, where experiments can be carefully controlled, sample sizes are large enough for statistically valid results, and causes can be closely linked with effects. In the real world of development, these conditions are rarely met, and the logframe approach leaves you with a decent planning tool but a horribly rigid and impractical measurement and change-management tool.

[Image: sample formula from the World Bank's guide on M&E]
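
The formula image itself has not survived here; as a hedged reconstruction, the basic impact identity used in standard impact-evaluation texts (including the World Bank's) takes roughly this form:

```latex
% Basic impact identity (a reconstruction; the original image is lost).
% \alpha = impact, Y = outcome of interest, P = 1 if enrolled in the programme.
\alpha = (Y \mid P = 1) - (Y \mid P = 0)
```

The second term is the counterfactual, which can never be observed directly for participants; the whole apparatus of baselines and control groups exists to estimate it.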


So, development project implementers find themselves in a bit of a bind: on one hand they need rigour and accountability, and on the other they need flexibility and constant adaptation to a rapidly evolving reality. They need to be able to prove their interventions were effective, demonstrate to their donors that the results are attributable to those interventions (i.e. determine the counterfactual), and be relatively confident that these results are reproducible and sustainable (i.e. that the project continues on as designed). This drives an excessive focus on donor accountability; an obsession with control, causation, and attribution; and an overall rigidness and inflexibility that is more of a hindrance than a help. To say nothing of the fact that the beneficiaries/victims of these projects (i.e. the poor) are often cut out completely from the planning, evaluating, and learning process and are left scratching their heads when the NGO declares the completion of yet another successful project.

In order to counter some of the obvious shortcomings of the traditional RBM approach, various tools, frameworks, and approaches have been introduced over the years. Most of these attempt to capture the qualitative aspects of projects through questionnaires, interviews, stories, and participatory, beneficiary-driven planning and evaluation. They also attempt to design for, or at least acknowledge, the complex, emergent nature of the systems of which they are a part, and to weigh the needs of the beneficiaries above those of the donors. And while there is no silver bullet or one-size-fits-all approach that is going to work everywhere, a mix of these different tools and approaches is helping to break down the orthodoxy of the purely quantitative result.

Here are a few key trends in M&E that have been getting attention lately and some sources on where to find more information. For a great overview of all these recent trends and others, see this paper.

Developmental Evaluation – This approach is basically an attempt to reduce the feedback cycle between learning, doing and correcting to almost zero. By collecting data in real time and making decisions based on a constant feedback cycle, the theory is that the project can adapt and evolve in conjunction with the system, thereby avoiding the need for major course corrections down the line.
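
To make that tight feedback cycle concrete, here is a minimal sketch; the indicator, threshold, and "plan" are invented for illustration and are not part of any formal Developmental Evaluation toolkit:

```python
import random

def collect_indicator():
    """Stand-in for a real-time data feed, e.g. attendance logs or sales figures."""
    return random.gauss(0.7, 0.1)

def developmental_evaluation(cycles=10, target=0.75):
    """Learn-do-correct on every cycle, instead of waiting for a mid-term review."""
    plan = {"outreach_effort": 1.0}
    for cycle in range(cycles):
        observed = collect_indicator()
        if observed < target:
            plan["outreach_effort"] *= 1.1   # falling short: adapt immediately
        else:
            plan["outreach_effort"] *= 0.95  # on track: free up effort elsewhere
        print(f"cycle {cycle}: observed {observed:.2f}, plan {plan}")

developmental_evaluation()
```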

Shared Measurement – In this case, common metrics are used across organizations on “scalable platforms” in order to facilitate the sharing and discussion of results and learnings on a much greater scale. It also helps organizations share responsibility for their data collection and learning.
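
A sketch of what shared measurement could look like in code; the metric identifier and field names here are hypothetical, not an actual platform standard:

```python
from dataclasses import dataclass

@dataclass
class SharedMetric:
    """One report against a common indicator definition shared across organizations."""
    metric_id: str     # e.g. a hypothetical shared code like "JOBS_CREATED"
    value: float
    unit: str
    organization: str
    period: str        # reporting period, e.g. "2015-Q2"

# Two hypothetical organizations reporting against the same definition, which
# makes their results directly comparable and aggregatable across the sector.
reports = [
    SharedMetric("JOBS_CREATED", 120, "jobs", "NGO A", "2015-Q2"),
    SharedMetric("JOBS_CREATED", 85, "jobs", "Social Enterprise B", "2015-Q2"),
]
total = sum(r.value for r in reports if r.metric_id == "JOBS_CREATED")
print(f"Sector-wide jobs created, 2015-Q2: {total:.0f}")
```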

Big Data – As the name suggests, this approach is based on the assumption that if some data is good, lots and lots of it must be better. Using short feedback cycles, real-time digital data from a variety of sources (such as website traffic, Twitter, blogs, phone records, etc.), and data visualizations and infographics, it is hoped that macro-trends and insights will emerge.
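
As a toy example of the idea (all sources and numbers fabricated), normalizing several noisy digital signals and averaging them can surface a composite trend that no single source shows on its own:

```python
from statistics import mean

# Hypothetical weekly counts from three digital sources (numbers invented).
signals = {
    "website_visits": [120, 135, 150, 180],
    "tweet_mentions": [40, 42, 55, 70],
    "call_records":   [300, 310, 340, 400],
}

def normalize(series):
    """Index each series to its starting value so sources are comparable."""
    return [x / series[0] for x in series]

# Average the normalized series week by week into one composite trend line.
composite = [mean(week) for week in zip(*(normalize(s) for s in signals.values()))]
print(["%.2f" % v for v in composite])  # a steadily rising composite -> macro-trend
```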

Problem Driven Iterative Adaptation – The PDIA approach is based on four key principles. First, it focuses on solving locally nominated and defined problems in performance (as opposed to transplanting preconceived and packaged "best practice" solutions). Second, it seeks to create an authorizing environment for decision-making that encourages positive deviance and experimentation (as opposed to designing projects and programs and then requiring agents to implement them exactly as designed). Third, it embeds this experimentation in tight feedback loops that facilitate rapid experiential learning. Fourth, it actively engages broad sets of agents to ensure that reforms are viable, legitimate, relevant, and supportable.

Qualitative Impact Protocol – Qualitative information is often hard to communicate between stakeholders, even though it provides rich and relevant learning. The QUIP approach is an attempt to get qualitative data taken seriously by collecting it in a systematic and structured way.
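
A sketch of the kind of systematic structure a QUIP-style protocol imposes on open-ended responses; the coding scheme and data below are invented for illustration:

```python
from collections import Counter

# Each interview response is coded against a fixed structure: the life domain
# discussed, the driver of change the respondent cited, and the direction.
coded_responses = [
    {"respondent": 1, "domain": "income", "driver": "new irrigation", "direction": "+"},
    {"respondent": 2, "domain": "income", "driver": "drought",        "direction": "-"},
    {"respondent": 3, "domain": "income", "driver": "new irrigation", "direction": "+"},
]

# Tallying coded drivers turns open-ended stories into comparable evidence that
# can sit alongside quantitative indicators rather than being dismissed as anecdote.
tally = Counter((r["driver"], r["direction"]) for r in coded_responses)
for (driver, direction), count in tally.most_common():
    print(f"{driver} ({direction}): cited by {count} respondent(s)")
```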

Most Significant Change – MSC is a story-based approach that helps identify the causes of a significant or critical change (positive or negative) relating to key objectives, rather than looking for trends related to a certain phenomenon. This makes it easier to track stories of change related to less easily quantifiable issues such as "capacity building" or "gender equality".

Here is a brief look at a few more:

[Image: summary table of additional M&E approaches]

Even with all of these tools and techniques being simultaneously developed, piloted, and perfected, there is still much to be done to change the development system itself. Here are a few recommendations for the future:

  • First and foremost, there needs to be much greater trust between donors and implementers, and a lot more freedom to experiment, adapt, and learn. By far the biggest hurdle standing in the way of creative solutions to poverty reduction is that donors don't trust implementers with their money, and implementers don't trust donors with their program designs.
  • Donors, implementers, beneficiaries and other stakeholders need to come together to create spaces for innovation, seed the soil for new ideas, and embrace the failure of some projects in the name of a better overall result.
  • Agreement on the big picture problem definition or mission is necessary between stakeholders. This shared understanding should then serve as the organising principle when adapting activities and plans to ensure that practitioners are beholden to the ultimate mission, rather than the activities themselves.
  • Direct attribution of an impact is neither possible nor desirable in a complex adaptive system. The need for implementers and NGOs to clearly attribute a specific change to their own work should never take precedence over achieving the highest quality and most impactful aggregate change in the overall system. If implementers put as much effort into achieving absolute results as they do into competing for and claiming credit, everyone would benefit.
  • Finally, we need to give up the obsession with finding a be-all and end-all, silver bullet solution to our M&E needs. There will never be one perfect technique, just lots of little imperfect ones, and the goal should be to continuously inch them forward.

So what is this all about, and why should you care? Well, whether you are a rural farmer in Ghana, a student in the UK, or a hospital patient in Canada, your life is probably significantly affected by the type of M&E performed by the organizations you interact with. It may be that your story is being left out, or that numbers are not accurately capturing the whole reality, or that the information and accounting needs of the donor or government are being put above the learning and adapting needs of the organization serving you. Or it may be that the people working in these organizations are not taking (or being given) enough time to adequately learn from and reflect on their work, due to outside pressures to reduce overhead and produce results. Whatever the case, if we can continue to push for more holistic, systems-based, human-centered monitoring and evaluation, we will at least have a chance to correct some of the major issues with the status quo and put ourselves on the path towards a better world.

References:

http://usaidlearninglab.org/lab-notes/taking-time-stop-and-think-shifting-aid-models-manage-systemic-change

http://www.hks.harvard.edu/centers/cid/programs/building_state_capability/what-is-pdia

Andrews, Matt, Lant Pritchett, and Michael Woolcock. "Escaping Capability Traps through Problem Driven Iterative Adaptation (PDIA)." Working Paper No. 240, Center for International Development, Harvard University, June 2012.

http://www.springfieldcentre.com/wp-content/uploads/2013/07/Evidence-Based-Policy-and-Systemic-Change1.pdf

http://www.seepnetwork.org/monitoring-and-measuring-change-in-market-systems—rethinking-the-current-paradigm-resources-937.php

http://www.undp.org/content/undp/en/home/librarypage/capacity-building/discussion-paper–innovations-in-monitoring—evaluating-results/

http://www.fsg.org/tabid/191/ArticleId/964/Default.aspx?srpush=true

http://tamarackcommunity.ca/downloads/vc/Developmental_Evaluation_Primer.pdf

http://blogs.worldbank.org/category/tags/big-data

http://www.intrac.org/data/files/resources/145/Using-Qualitative-Information-for-Impact-Assessment.pdf

http://www.mande.co.uk/docs/MSCGuide.pdf

