Measurable Does Not Mean Important

I’m reaching back to 2014 for this post, in which I discussed one of the best and shortest evaluation books I have read: E. Jane Davidson’s Actionable evaluation basics: Getting succinct answers to the most important questions. I still use it! It is what inspired my go-to phrase “big bucket” evaluation questions. If you are a client, you have heard me say that A LOT!

Flipping through Davidson’s book again, I was struck by one phrase: “Measurable does not mean important.”

Isn’t that the truth! At CES, we do quite a bit of evaluation for youth development and afterschool programs. Some of these programs require grantees to collect a lot of data, and much of it is not very important or useful, at least in the eyes of my clients and me. Grantees are required to report on a multitude of indicators that interest the sponsoring federal or state agency, but not necessarily the local site. For example, one of the many things a site may have to report is a change in students’ math and/or reading scores, and the change can be as little as one point. We all want children to improve, but just how meaningful is a one- or two-point change in scores in the grand scheme of things?

What a program really wants to know are things like: Are kids safe after school? Do they get the extra help they need to understand their homework? Does the program help students earn the credits they need to graduate from high school on time? Has the program improved literacy? By the time we answer all of the required items, there is little time or money left to answer the more interesting (and important) questions for these underfunded projects. Moreover, we have found that funders are uninterested in outcomes that are not on the required list.

Another consequence of measuring unimportant outcomes is failing to measure the things that are important. Over the years, we have worked with many programs housed in schools, some serving transient and/or immigrant populations. Funders with very specific requirements and processes sometimes fail to consider cultural influences and the complexity of the systems that surround these children.

Some questions suggested by Davidson include:

  • How well designed and implemented was the program?

  • What worked best for whom, under what conditions, and why?

  • How worthwhile was the program overall? Which parts generated the most valuable outcomes for the time, money and effort invested?

Funders aren’t the only guilty parties here. Programs and even evaluators may choose to measure the low-hanging fruit rather than the levers that trigger the outcomes we care about most.

Are you measuring what is really important? Are you measuring the things that lead to community change? We should ask these questions and design our evaluations accordingly.

What’s your measurement story? I would love to hear from you.

Take Care-

Ann

P.S. I am getting ready to record some new podcasts. In the meantime, hear what you have been missing on Community Possibilities.
