In Part 1 of this series on measures, I discussed Measuring Project-based Objectives. This time I'll talk about how to deal with the omnipresent problem of too many measures.
Ideally, Strategy Execution projects include a lot of time spent figuring out which critical few objectives are important to an organization and picking a few measures. Sometimes, however, there are industry-standard measurement frameworks designed to make it easier to compare performance across different organizations. These frameworks serve a great purpose and can really help define focus in an organization. But if you are not careful, you may find that the sum of the parts is not really a useful tool for getting the results you are looking for.
Take HEDIS (Health Plan Employer Data and Information Set), for example. This is a set of more than 60 measures (the number changes with new releases) covering everything from how fast a health plan answers the phone to how well it screens its members for cancer. It's a great tool for businesses to compare health plans, and many health plans work hard to improve their numbers. There are lots of other examples of such frameworks in hospitals, IT organizations, government, and many other categories, and the dangers we are talking about apply to them as well.
In an ideal world, the HEDIS measures might be sprinkled across many different scorecards in the organization -- owned by those accountable for them. So the director of the call center might own the two or three metrics related to call handling, the Chief of Cardiology Standards might own the few related to heart treatment, and so on. Invariably, though, top executives want a single number that tells them "how we are doing on our HEDIS measures."
Now it might be that improving HEDIS scores is a real objective for an organization. Actually improving these scores would require going through the list of measures, figuring out how you are performing on each now, defining targeted improvement projects based on the gaps, and setting performance targets for the future. That's great. But what many organizations actually do is put all 60+ measures on the top-level scorecard, which is incredibly messy and certainly doesn't help the organization focus on the critical few that need work.
So how do you get a view of overall HEDIS performance without doing that? We could just average all the measures, but that won't work: how do you average a 3-minute call wait time and a 3-in-1,000 patient infection rate? It doesn't make sense. You could instead compute a "percentage of target" for each individual measure (also called a "child measure") and then average all of the child measures (with or without weights). Then at least you are getting one number that fairly represents the performance of the group of measures (this type of roll-up is called "average attainment" in ActiveStrategy Enterprise software).
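To make that arithmetic concrete, here is a minimal sketch in Python of a percentage-of-target roll-up. The measure names echo the examples above, but the actuals, targets, weights, and the attainment() helper are all invented for illustration -- this is the idea, not how any product actually implements it. Note that "lower is better" measures like wait time have to be inverted before they can be averaged with the rest.

```python
# A minimal sketch of an "average attainment" roll-up.
# The actuals, targets, and weights below are all made up.

def attainment(actual, target, lower_is_better=False):
    """Percent of target achieved for one child measure."""
    if lower_is_better:
        # For wait times, infection rates, etc., beating the target
        # means coming in *under* it, so invert the ratio.
        return target / actual * 100
    return actual / target * 100

children = [
    # (name, actual, target, lower_is_better, weight)
    ("call wait time (minutes)",         3.0,  2.5, True,  1.0),
    ("infections per 1,000 patients",    3.0,  4.0, True,  1.0),
    ("members screened for cancer (%)", 78.0, 80.0, False, 2.0),
]

total = sum(attainment(a, t, low) * w for _, a, t, low, w in children)
average_attainment = total / sum(w for *_, w in children)
print(f"average attainment: {average_attainment:.1f}% of target")  # 102.9
```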
If you don't like the fact that exceeding targets on one child measure (e.g., achieving 300% of target) could obscure poor performance on others, you could index each measure on a 1-100 scale before rolling up and averaging (again, with or without weights) all of the measures. This approach is usually called "indexing." Some performance management methodologies (and products) are based entirely on indexing -- everything is viewed as a % of target (in ActiveStrategy Enterprise, this roll-up method is called an "attainment index").
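Continuing the sketch above, the indexed variant changes only one thing: each child's attainment is capped before the weighted average is taken. Capping at 100 is one common convention, not necessarily how any particular product defines its index.

```python
def indexed(actual, target, lower_is_better=False):
    """Attainment on a 0-100 scale: over-achievement is capped at 100,
    so a 300%-of-target child can no longer mask a struggling one."""
    pct = (target / actual if lower_is_better else actual / target) * 100
    return min(pct, 100.0)

total = sum(indexed(a, t, low) * w for _, a, t, low, w in children)
attainment_index = total / sum(w for *_, w in children)
print(f"attainment index: {attainment_index:.1f}")  # 94.6
```

With the cap in place, the same made-up data rolls up to about 94.6 instead of 102.9, because the measure running at 133% of target now counts as just 100 and can no longer subsidize the laggard.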
While there are times when average attainments and attainment indexes make sense, their overuse can be a real problem for several reasons:
- loss of visibility into details: suppose an indexed measure has 10 child measures. It is really difficult to pick a weighting methodology that tells you what is truly going on. The universal first instinct of managers and executives is to drill into any "red" problem area to see what is causing it, and a weighted index blurs exactly the picture they are drilling into.
- easy to miss early problems: it might be that one red child measure sits beneath the index described above while the rest of the measures are green. If the index itself is green, you might never get into the details of that red child during a business review and could miss trouble that is brewing (the sketch after this list shows exactly this case).
- what does "red" mean: beyond the difficulty of picking a weighting scheme that gives a good picture into the child measures for an indexed measure is the problem of setting a target for the measure. Is it OK that the weighted average index of the 10 child measures is 92% or not?
- the "bubble up" fallacy: some organizations deal with the problem above by setting the target for the index very high (sometimes 100%). That basically eliminates most of the benefit indexes are supposed to give you -- you almost always need to look at each of the child measures because the index is usually red. Some products try to solve this problem by "bubbling up" any red that exists in children below a parent index measure. If you try this scheme in the real world with any meaningful level of cascaded measures, the result will likely be a top-level scorecard with nothing but reds.
OK, so there are some pretty serious pitfalls to average attainment and indexed measurement schemes on top-level scorecards. So what is one to do? Well, it turns out there are some measurement types that are more helpful than a simple index at showing what is going on at lower levels of a measurement hierarchy. I'll cover those in my next post. Stay tuned! And tell me what you think in the meantime.