
Stop (Just) Measuring Impact, Start Evaluating

“We want to know what impact our work has had.”

“Does our model work?”

“What’s our ROI in terms of impact?”

In my decade and a half of working with measurement, evaluation, and learning, I have gotten comments and questions like the ones above on a regular basis. People want to be able to tangibly measure their impact, prove their model, and assure their boards and stakeholders that they are indeed being good stewards of resources. The call for impact measurement has only gotten louder over the last few years. The “moneyball” movement has brought data analytics and performance management to the public and social sectors, with an emphasis on better impact tracking; we now have an impact genome project, an impact analysts association, and several different impact measurement conferences. Impact investing has grown exponentially.

Which makes one wonder: In all this enthusiasm to measure, document, prove, and fund impact, are we missing the boat on evaluation?

Before we delve into that, let me clarify what I mean by “evaluation.” In a blog post last year, my colleague Hallie Preskill and I offered the following definition:

Evaluation is a systematic and intentional process of gathering and analyzing data (qualitative and quantitative) to inform learning, decision-making, and action.

While measuring impact could be a purpose of evaluation, it is by no means the only one. Evaluation also helps uncover insights related to context, implementation, strategy, and organizational and system capabilities. There are several reasons why investing resources and energy in evaluating, rather than just measuring impact, is beneficial:

  1. It helps us get to the “how” and the “why”: In a recent New York Times piece, a data scientist at Facebook and a former data scientist at Google eloquently make the case for “small data,” which in their estimation has gotten short shrift amid the obsession with big data. Their argument: small data (surveys, qualitative data, and contextual information) helps us get to the how and why. The same is true for evaluations: they help get underneath the “what” and reveal the root causes behind why something is transpiring the way it is, and how.
  2. It helps us understand what works in context, not in the abstract: At the Future of Evidence Symposium organized by the Center for the Study of Social Policy last fall, Tony Bryk, president of the Carnegie Foundation for the Advancement of Teaching, urged that the question isn’t “what works”; rather, it is under what conditions, in what contexts, for what groups of people, in what ways, and to what extent something works, so that we can achieve efficacy reliably at scale. This becomes even more relevant when interventions are complex and/or live in complex environments, as my colleagues and I wrote about recently in a brief on evaluating complexity.
  3. It helps us comprehend what factors are helping and hindering success: As any graduate student in the social sciences knows, and as a recent Education Week article about blended learning illustrates, the answer to most questions about the effectiveness of interventions is, “it depends.” Evaluations, however, can help us understand what exactly it depends on: the various factors that are accelerating or impeding progress.

In many ways, the focus on impact measurement is refreshing. Instead of “counting chickens,” as a recent foundation client who funds rural small businesses put it, we are now talking about increased income and improved wellbeing. However, staying with the animal theme, as a former colleague of mine used to say, “You can’t fatten a pig by weighing it.” In other words, you can’t get better at impact if all you do is measure it. It is true that what gets measured gets done, but only if the measurement provides actionable information to actually get it done. That’s where evaluation comes in.

Srikanth "Srik" Gopal

Former Managing Director, FSG