HERE’S HOW TO DEAL WITH EVALUATION COMPLEXITY

Dealing with complexity has become even more relevant in light of the ongoing pandemic. Over the past few years, we at DeftEdge have been helping the development sector manage increasingly complex assignments. These include assignments that involve widely divergent stakeholders with conflicting views and expectations about what the evaluand is and, consequently, what the evaluation should accomplish, as well as assignments that require assessing impact-level changes of projects spread across multiple continents. Even before COVID-19, we were helping evaluation departments identify strategies to more effectively conduct and manage the growing number of complex assessments in their portfolios. That work has only become more critical since then.

Despite the evaluation sector’s improvements in assessing multi-faceted and challenging interventions, systems and environments, the quest for even more complexity-responsive evaluation processes persists. Multiple factors are driving this need, including new and diverse partnerships; more ambitious and difficult-to-measure development objectives; increased demands for more rigorous evidence on results; expectations for evaluations to contribute to measuring achievement of Sustainable Development Goal (SDG) targets; and, in the current and future crises, the need to adopt practices that do not put the health and well-being of communities, stakeholders and evaluation teams at risk.

These issues have implications for evaluators, evaluation managers and those who oversee evaluation processes. Over the past few weeks, many UN organizations and other development partners have issued important guidance for evaluation teams and program managers on the need for continued accountability and learning in this time of uncertainty, and on how evaluations should proceed. But what does all this mean for behind-the-scenes management processes, evaluation quality and evaluation budgets? Here are five things to consider:

1. Assess complexity as part of evaluation planning

One strategy is to use complexity assessment frameworks as part of the design and management of evaluations, as suggested by Bamberger, Vaessen and Raimondo (2016) and the Tavistock Institute of Human Relations (2019). Developed during the evaluation planning stage, these frameworks classify the complexity of the intervention in terms of:

  • what the intervention is trying to achieve
  • the number of component parts and the number of expected results to be measured
  • the diversity of stakeholders and the nature of their involvement
  • the environment in which the evaluation takes place, including opportunities for accessing stakeholders and the level of risk involved.

Once such factors are identified, it becomes easier to determine what level of effort, budget and other resources are required to adequately support the evaluation process, as well as to ensure evaluation integrity and quality.
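To make this concrete, here is a minimal, purely illustrative sketch of how such a framework could be turned into a simple scoring rubric. The four dimensions mirror the list above, but the 1–5 rating scale, the equal weighting and the level-of-effort tiers are hypothetical assumptions made for illustration; they are not drawn from Bamberger, Vaessen and Raimondo (2016) or the Tavistock Institute framework.

# Hypothetical complexity rubric: rate each dimension from 1 (simple) to 5 (highly complex),
# then map the total score to an indicative level-of-effort tier. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ComplexityRating:
    objectives: int    # what the intervention is trying to achieve
    components: int    # number of component parts and expected results to measure
    stakeholders: int  # diversity of stakeholders and nature of their involvement
    environment: int   # access to stakeholders and level of risk in the evaluation setting

    def total(self) -> int:
        return self.objectives + self.components + self.stakeholders + self.environment

def level_of_effort(rating: ComplexityRating) -> str:
    """Map a total complexity score to an indicative level-of-effort tier."""
    score = rating.total()
    if score <= 8:
        return "standard scope, single-phase design"
    if score <= 14:
        return "extended timeline, mixed methods, contingency budget"
    return "phased design, larger team, dedicated quality assurance"

# Example: a multi-country programme with divergent stakeholders and moderate access risks
print(level_of_effort(ComplexityRating(objectives=4, components=5, stakeholders=4, environment=3)))

In practice, such ratings would be discussed and agreed with the commissioning office and evaluation team rather than computed mechanically; the value of the exercise lies in making the drivers of complexity explicit before the budget and scope are fixed.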

2. Pull out the program logic model

Theories of Change (ToCs) are particularly important for showing how results are supposed to come about within complex interventions, which often have multiple pathways to achieve their goals. These models, whether in narrative or diagram form, take on renewed importance in times of uncertainty. If a reasonably accurate ToC exists, it can be a useful tool for planning the rapid assessments and more narrowly focused evaluation processes now being used – for determining which causal relationships are most feasible to assess, and for revealing which pathways have been blocked and which new ones need to be forged. If you are not confident in the accuracy of the ToC, have the evaluators test and reconstruct it as part of their scope of work.

3. Build flexibility into evaluation processes and budgets

Consider a phased approach so that the scope and design of the assessment can respond to new learnings or situations that arise as the evaluation proceeds. Evaluability assessments may be a good place to start.

4. Support local evaluation expertise

Engaging national- and community-level evaluators and researchers has become even more critical for carrying out complex evaluations, especially as safety concerns restrict travel. Where there are gaps in evaluation expertise, be creative about pairing local and international talent.

5. Account for complexity in evaluation ratings

Complexity should also be incorporated into evaluation quality assessment processes. Ethical standards for evaluations cannot be compromised when the environment becomes more complicated, but there may be times when evaluators cannot conform to the complete set of best-practice standards, and evaluation quality assurance systems should recognize such constraints.

This blog post is based on Ms. Sutherland’s recent study for UNODC. For more details on the study or personalized advice and support, feel free to contact her at sutherland@deftedge.com.

Note: A follow-up post focuses on the use of big data and other unconventional data sources for dealing with crises and complexity.


